MIT HST512 Genomic Medicine, Spring 2004
Lecture 18: Case Hx -- Cancer Diagnostics
TODD GOLUB: So I come at this actually originally from a pediatric oncology perspective. So I'm going to start by giving examples of a couple of patients that I saw in the Jimmy Fund clinic at the Dana-Farber that were typical. So the first patient was a nine-year-old girl who presented to her pediatrician with-- turn these off? With a fever and bruises. She got a blood test and then a bone marrow test that revealed that her bone marrow had been replaced by acute leukemia cells-- acute lymphoblastic leukemia cells, or ALL. And so she was enrolled on what was a standard chemotherapy protocol, which was nine different drugs in rotation and combination. She entered remission in three weeks and is still alive and well. And then, a few months later, there was a second patient-- this kid happened to be a boy, about the same age-- same presentation, same diagnosis-- acute lymphoblastic leukemia-- was enrolled on the same treatment protocol. So got the same drugs. Same hospital settings. So it was as close to a controlled experiment as one could do in a human being because, of course, response to any kind of therapeutic intervention is not only a measure of the treatment itself but how it's delivered. So that was controlled. But this patient didn't respond, unfortunately, and died about six months later. So the overarching question, that I'm sure you've been addressing in this course, also, is generally how to understand this clinical variability and to discern whether there's a molecular underpinning to that variability. So we were particularly interested in this patient who responded particularly well and so did what is pretty straightforward, which is to do a standard chromosome analysis, a karyotyping experiment using some molecular techniques known as fluorescence in situ hybridization. Details don't really matter, except to say that, while it wasn't apparent at the routine morphological analysis of the chromosomes, if you look molecularly, it was clear that there was a translocation between chromosomes 12 and 21 that fused two genes-- actually two transcription factors-- one called TEL, one called AML1-- to make this TEL-AML1 translocation. And it turned out that, while it hadn't been previously discovered, actually, among the known genetic abnormalities in childhood leukemia, the lion's share actually have this translocation. And of those patients, in retrospective and now prospective analyses, about 90% survive. It turned out that patient number two had a different translocation-- a 9;22 translocation-- that fuses two genes, BCR and ABL, that you've probably talked about. And it's known that these patients, given the same therapeutic regimen, only have about 10% survival. And so these are molecular tests done at the time of diagnosis. So what I think this really sets up-- which is now perhaps generally accepted but was really emerging at this time-- was the notion that cancer is a genetic disease and that the clinical outcome is predictable based on molecular determinants at the time of diagnosis, as long as you know what to look for. Yeah. AUDIENCE: Was this before Gleevec? TODD GOLUB: This is before Gleevec. Turns out that, even with Gleevec, which targets BCR-ABL, in these patients, it's not particularly effective. AUDIENCE: What is the one [INAUDIBLE]? TODD GOLUB: Uncharacterized. [AUDIO OUT] balanced translocations that identify a single oncogene at a breakpoint. 
So we started thinking about alternative ways to try to think about molecular classification of cancers more generally. And this is just the obvious experiment-- two biological states. They could be clinical states or biological states-- that you collect some kind of genomic information on, like microarray data, and then have this pesky problem of actually trying to figure out how to interpret the patterns that emerge. And I'll say quite a bit about that part because it's tough. Have you talked about the non-analytical-- the laboratory aspects of microarrays at all? Let me just spend one slide talking about this. So all of the microarrays are based on the basic principle of somehow labeling mRNA or its derivatives from a cell and hybridizing to probes on some sort of solid support, whether it's a microscope slide, a silica wafer, or something. There are cDNA arrays, and there are oligonucleotide arrays. And you can either make them yourself or you can buy them commercially. cDNA arrays generally take the form of two-color hybridizations, where you simultaneously hybridize a test sample and a reference or a control, whereas the oligonucleotide arrays generally use a single color. This is completely and entirely a historical artifact of how these things were developed. There's nothing intrinsic about cDNA arrays that makes them require two-color hybridization. The only reason for using two-color hybridization is if the quality of the arrays is so bad that you need an internal control for every spot; otherwise you can't interpret the data. And such was the case for the earliest cDNA arrays, which were the first ones to be made. And I would venture to say that two-color arrays, in general, are going to become obsolete very quickly, in favor of oligonucleotide arrays-- probably commercial ones. The genome is finite. Once you have the genome represented and identified, there's probably not much advantage to making your own. And most of the arguments that say, oh, we can make them much cheaper making them ourselves are usually Enron-type accounting practices where you don't-- well, we didn't count the cost of all the people involved in making these things. We didn't include the fact that we spent three years trying to figure out how to do these and they didn't exactly work. When you really get right down to it, usually buying them is cheaper, and they're higher quality. We should also mention that there are non-microarray-based expression profiling methods, like serial analysis of gene expression or something called MPSS, that are transcript-counting-based methods, where you use, essentially, DNA sequencing to enumerate the precise number of copies of a given transcript in a given cell. And the proponents of this method say-- and they're correct-- this is the only way to really know how many copies of a particular transcript are in a particular cell. I would say that's absolutely true, but it's actually not that interesting because, for most biological questions, it actually doesn't matter how many absolute copies of a transcript there are. It's not that useful information. What's useful is to have some kind of comparative experiment that tells you there are more with statistical significance-- more in this sample than that sample. And rarely do you want to know that there are 682 copies, but it is true. But I would say that the low throughput and high cost of these counting-based methods greatly outweigh the benefit that you get from absolute counting. Yes. 
AUDIENCE: If you have more copies of [INAUDIBLE] this is what we're talking about with [INAUDIBLE], with the same gene. They are copies of the gene? TODD GOLUB: Well, copies of the transcript, not of the gene. So you could say-- we sequenced-- we identified a million total transcripts in a cell, of which 684 came-- were transcripts from-- encoded by gene X. So that can be useful quantitative information, but not so useful to-- these experiments are still many thousands of dollars per sample. AUDIENCE: When it comes down to a pharmaceutical company trying to figure out the mechanism of the disease and actually how to go in and develop a targeted drug [INAUDIBLE] that might interfere-- I'm assuming that this technique would be useful. For example, if you did one of the above two [INAUDIBLE] and you don't know the quantitative number of the transcripts you have present, you just know, qualitatively, more or less compared to the disease state or not or [INAUDIBLE] people [INAUDIBLE]. TODD GOLUB: Yeah. AUDIENCE: If you know a number, doesn't it give you the ability to make speculations about how those transcripts are being processed and whether that's an important process, and the technology becomes easier [INAUDIBLE]? TODD GOLUB: I would say it doesn't, actually, because if you tell me-- if God tells me that there are 684 copies of a given transcript, I don't know what that means. I do know what it means that-- if there is five times as much of a given transcript in a disease state compared to a normal state, I can at least say that having one fifth the number of transcripts isn't sufficient to give you the disease state, whereas, if you just give me an absolute number without a comparative thing, I don't know what that means. And I'm not sure that saying, there's 600 in this state and 3,000 in that state-- to me, that's not really more useful information than knowing relative abundance. AUDIENCE: So it's really a question of what the resolution is, in terms of common [INAUDIBLE]. When you get an exact number from the bottom [INAUDIBLE] up to the top, you get a sufficiently resolved difference between samples that you can say-- TODD GOLUB: That's right. That's right. AUDIENCE: 104 through 500. TODD GOLUB: That's right. And I would say, for most biological and clinical applications, knowing that the ratio is 1 to 5 gives you about 95% of the information that's useful, compared to saying, it's 500 versus 100. You might be able to think of some special experiments where you really want to know the number. But usually, it doesn't help you very much, in my opinion. There are all sorts of sources of variability that we won't discuss. Except, there's only one that matters. And that's the biological and clinical variability that goes into these experiments. There are a lot of people that spend a lot of time hand-wringing over these very technical things, and none of them make any difference, really, as far as I can tell, because they're overwhelmed by biological variability. So making microarrays that are somewhat more accurate or precise in their measurement won't actually make that much of a dent in the problem because that variability is exceeded so tremendously by biological variability. That's all I'll say about that. So let me give some examples of applying this. And I know you've discussed these general methods before, but that's OK. So here's an experiment where we are interested in the differential clinical outcome of children with a brain tumor called medulloblastoma. 
Have you discussed this particular example? AUDIENCE: [INAUDIBLE] but not this one. TODD GOLUB: OK. So we had 60 pre-treatment biopsies of patients with this brain tumor. Tumors were biopsied. And we knew the long-term clinical outcome of these patients-- whether they survived their disease or they died despite therapy. They had the tumors removed and were treated with chemotherapy and radiation. And we knew that some of the patients survived and some of them did not. And based on that, we said, well, perhaps there are two classes-- subclasses of the disease. So we clustered the data. This was across, I think, 6,800 genes on a microarray. So each dot represents a different patient sample. We said, well, if there are two classes, let's cluster into two groups, because we have a clinical suspicion there are two classes. And of course, if you ask the algorithm to cluster into two groups, it will-- in this case, it's something called a self-organizing map. But it doesn't matter. You could take the two major branches of a hierarchical clustering dendrogram. Whatever. You get the same thing. You get two classes. And the patients are shown there-- about equally distributed. And now, if we fill in the labels of the patients-- that is, whether they're survivors-- turned out to be survivors or not survivors-- you get this picture, where-- you don't need a statistician to tell you that there's no correlation between this class structure and survival. So what do you take? What do you think of this experiment? AUDIENCE: What do you guys think of this? TODD GOLUB: What can you conclude? AUDIENCE: What do you do with this? How are you going to rescue this? Are you saying [INAUDIBLE] was [INAUDIBLE] signature that [INAUDIBLE] survival? What do you say? AUDIENCE: There's a difference, but it may not be related to [INAUDIBLE]. AUDIENCE: But maybe there was [INAUDIBLE] would work somewhere else. [INAUDIBLE] TODD GOLUB: Right. I think you're both hitting on the relevant point here, which is that the unsupervised learning clustering algorithm found some structure-- dominant structure that made these two classes. They just don't happen to have anything to do with the question that we're interested in, which was one of survival. And so that gets back to this basic notion of two general approaches to data analysis, which you've probably discussed but I think often gets confused. So unsupervised learning-- which is not exactly synonymous with clustering, but that's a reasonable first approximation-- versus supervised learning, or classification. So here, you're interested in finding a dominant structure defined only by the intrinsic gene expression patterns in a given data set, irrespective of anything you happen to know about the samples, such as their clinical outcome. Here, you're saying, whatever. Maybe there's some other interesting biological structure. But I'm not interested in that right now. I want to know whether there's a gene expression pattern that's correlated with the thing I care about-- in this particular example, outcome. So we take the same data set-- the same matrix of data, samples by gene expression values-- and now apply supervised learning approaches. This happens to be a K nearest neighbor classifier. Again, it doesn't make any difference what you use here. And this happens to be an eight-gene model. You classify the samples using a leave-one-out cross-validation approach so that you don't-- you attempt not to overfit the data-- the model to the initial training set. 
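A minimal sketch, in Python, of the kind of supervised procedure being described, assuming an expression matrix X (samples by genes) and binary outcome labels y; the eight-gene count and the K-nearest-neighbor choice follow the lecture, but the scikit-learn implementation, the variable names, and the simple mean/standard-deviation gene ranking inside the loop are illustrative assumptions, not the original analysis code:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut

def loocv_knn_predictions(X, y, n_genes=8, k=3):
    """Leave-one-out cross-validated class predictions.

    For each held-out sample, genes are re-ranked on the remaining samples only
    (so the held-out sample never influences feature selection), the top
    `n_genes` are kept, and a K-nearest-neighbor vote classifies the sample.
    """
    X, y = np.asarray(X, float), np.asarray(y, int)
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        # Rank genes by a simple mean/standard-deviation separation score,
        # computed on the training fold only.
        mu1, mu0 = X[train][y[train] == 1].mean(0), X[train][y[train] == 0].mean(0)
        sd1, sd0 = X[train][y[train] == 1].std(0), X[train][y[train] == 0].std(0)
        score = np.abs(mu1 - mu0) / (sd1 + sd0 + 1e-12)
        top = np.argsort(score)[::-1][:n_genes]
        knn = KNeighborsClassifier(n_neighbors=k).fit(X[train][:, top], y[train])
        preds[test] = knn.predict(X[test][:, top])
    return preds
```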
And then you ask, well, of the two classes that are predicted, how do those patients actually fare? And here is a survival plot in terms of months-- months of survival for those patients who are predicted to be alive versus those that are predicted to be dead. So what do you think about this? AUDIENCE: This? So we'll pick those genes? TODD GOLUB: Is this significant? So if you look in a basic biostatistics textbook about how to calculate the statistical significance of a Kaplan-Meier survival curve such as this, they'll tell you to do the log rank test. And if you did that, you would get a p-value that, if you've looked at a lot of these things, would match your intuition for this degree of separation. So it would look something like that. Is that reasonable? Well, that looks quite significant. But you should ask, well, how come this model is eight genes, for example? How did you choose that number? Well, it's quite easy. We chose that because it worked the best. It worked better than six, worked better than 10, or 50. So we had to pay some penalty for overfitting a model, potentially, to this particular data set. So the way that you could then really test the statistical significance of this model would be to apply it to another data set. That's the gold standard. But short of that, a reasonable thing to do to better approximate the significance is to take into account the fact that there are a number of parameters of this model that were optimized to fit this data set. And you can do this by doing a permutation test, where you don't scramble the gene expression values themselves, but you randomize the class labels in terms of, are the patients alive or dead, and you go through the same procedure of attempting to build an optimal classifier, including choosing the optimal number of genes, to ask, if you really try hard with these machine learning methods, how often can you make a classifier that works as well as this one does? And when we did that 1,000 times, 9 times out of 1,000, we could do this well or better. So we estimated the significance of this model here, which is still decent. But you can see we took a hit of a couple of orders of magnitude on this p-value. So if you had a nominal p-value of 0.05 or something, that result would entirely vanish when you appropriately attempt to correct for such multiple hypothesis testing. And this is independent of what particular classifier you used. So I would say, much of the literature-- and everyone's figuring out how to do this as we go along-- but much of the literature and worries about failure to reproduce an initial model are due to the problem of overestimating the significance of an initial model because of these overfitting types of problems. So let me just make a couple of general comments about supervised learning. And some of them may seem obvious, but they turn out to be actually problematic. And this is one example. So the first step-- establish the class labels of what you're trying to classify. So in one of our first experiments, where we were trying to classify the two different basic types of acute leukemia-- acute lymphoblastic leukemia or acute myeloid leukemia-- the way you build a classifier is to choose examples of one class, examples of another class, and then find gene expression patterns that are correlated with that. Well, who's to say that we have these right, right? What should you use as the gold standard for these things? 
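One way to read that permutation procedure as code, as a sketch only: the class labels are shuffled, and the entire model-building procedure, including the choice of the best gene count, is repeated for every permutation, so the permuted runs pay the same optimization penalty the real run did. The `build_and_score` example below assumes the `loocv_knn_predictions` helper sketched earlier; both the candidate gene counts and the accuracy score are illustrative choices.

```python
import numpy as np

def permutation_p_value(X, y, build_and_score, n_perm=1000, seed=0):
    """Estimate how often a fully re-optimized classifier does as well on
    label-shuffled data as on the real labels.

    `build_and_score(X, y)` should run the whole pipeline (feature selection,
    choice of gene count, cross-validation) and return one score, larger = better.
    """
    rng = np.random.default_rng(seed)
    observed = build_and_score(X, y)
    hits = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)            # scramble the class labels, not the expression values
        if build_and_score(X, y_perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)           # small-sample-corrected p-value

def build_and_score(X, y):
    """Try several gene counts, keep the best cross-validated accuracy."""
    y = np.asarray(y, int)
    best = 0.0
    for n_genes in (6, 8, 10, 50):
        preds = loocv_knn_predictions(X, y, n_genes=n_genes)
        best = max(best, float((preds == y).mean()))
    return best
```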
It's not always obvious, particularly for the very things that we want to build better molecular classifiers for, because the current clinical diagnostics are so poor. It doesn't really feel like a good gold standard to go back and rely on the clinical labels as the gold standard. And so this is, in general, a real problem for survival studies. If you force the question into a simple two-class problem-- survivor or non-survivor-- well, everyone's a non-survivor at some point. So at what point do you declare that a patient is a survivor of their tumor, for example? That requires some judgment call in terms of which bin to put the samples into. And I would say that much of at least our effort and time has gone into trying to figure out how to get this right. And there are some approaches that one could take to not have to be so rigid on how you assign these labels, but it's a challenge. So the second general step in making the classifier is selecting the features that you're going to feed into a model-- features in this case being genes. The details don't matter-- there's a whole long list of ways that you can rank genes. This is a simple one that is based on the mean expression level in the two classes and their standard deviation. It's a relatively unsophisticated way to select genes because it assumes that there's a uniform behavior of these marker genes in the two classes, which, in many cases, is not at all the case. There are many other methods as well. And I'm sure you talked about these at length. So I think maybe I'll skip this-- that you then take these things and classify. So for unsupervised learning, there are all these methods. And I think they basically don't matter-- that, either for clustering or supervised machine learning methods, if you get a result that is obtainable with only one magical-- one person's really special algorithm, I would worry deeply that there is a problem with-- there's an information leak somehow or something's not right, because, at least in my experience, when there's really biologically or clinically meaningful structure to be found, you can find it with a number of different approaches. And in fact, that's a reasonable sanity check to make sure that you can recover structure, whether you're using machine learning or unsupervised clustering algorithms-- that you can recover it with multiple different methods. There are some exceptions to that that can be interesting. But on the whole, I'm confident saying it doesn't really matter, and it's really the input to these data sets that matters the most-- that is, are you really sampling the diversity of, for example, the disease process that you're studying with the samples that are in your initial data set? What's more challenging is, how do you evaluate the output of these clusters in terms of their biology, in terms of really knowing how robust the structure is, given any algorithm? And then how do you actually know when you-- having seen a given structure once, how do you actually apply it to another data set to know whether you see it there as well? It's not obvious. Any questions about that general stuff? I wasn't going to say any more because I know you've covered it. Let me give a couple other examples of applying these principles to some data sets. Did you talk about this one? AUDIENCE: No. TODD GOLUB: OK, so here-- again, focusing on childhood leukemia-- most kids with childhood ALL respond to chemotherapy. I told you about this subgroup of BCR-ABL patients that does not. 
Another group that does not respond well are infants less than a year of age, who generally don't respond. It turns out that most of those patients have translocations into a gene called MLL. But it's clinically of interest because these patients don't respond to conventional chemotherapy, and this just shows you that, using standard clinical criteria, they're hard to distinguish. So what if we take conventional ALL samples, these infant MLL-rearranged leukemias, and some AML-- the myeloid leukemias-- and we apply an unsupervised learning approach? This happens to be principal component analysis, but it could be your favorite clustering algorithm. What do you see here? AUDIENCE: What do you see? AUDIENCE: Three different classes. TODD GOLUB: Three different classes. Why do you say that? AUDIENCE: Jose, speak up. AUDIENCE: I mean, you're maximizing the separation so you see some separation-- some MLL [INAUDIBLE]. TODD GOLUB: OK, so right here. So, yes, you see three classes, but only if you have the colors filled in. So if you imagine this is just a group of leukemias, you might get the sense that there was something going on over here. But if you imagine these are all black, it's not so obvious, maybe. AUDIENCE: [INAUDIBLE] analyzed or have been optimized [INAUDIBLE]? TODD GOLUB: No, this is completely unsupervised. So that's the first point, is that these things often look clearer when you actually impose knowledge on them. Even though the structure here is found in an unsupervised way, you get the impression that it's a really clean result if you superimpose knowledge afterwards. That's the first thing. But let's say, yes, there are three classes. And I think you can appreciate that. One question was, well, maybe these infants with the MLL-rearranged leukemias in green-- maybe they don't respond to therapy because they're babies and it's a host metabolism problem, and their leukemias are the same as the conventional ALLs shown in dark blue. This would argue that that's actually not the case-- that they're a fundamentally different leukemia. Is this helpful? So it's helpful, maybe, from a taxonomy perspective, but does it tell you what to do for these patients? So what if you wanted to gain some biological insight into what was different about these MLL green infant leukemias? What might you do? You've got these data. You see that those patients define a different class. What could you do? AUDIENCE: Any of you have any idea? What would you do with this? AUDIENCE: Inspect the component? Look for the things that have higher weight? Try to define biological function related to the [INAUDIBLE]? AUDIENCE: So the genes whose weights explain most of the separation. TODD GOLUB: That's right. You could do that. As it turns out, in this case, there are a lot of genes that actually have relatively equal weight. So you still have a large list. And the three principal components don't perfectly separate the classes. So you could go back and say, well, now I believe that these MLL leukemias are a distinct entity. That would be reasonable. That's a reasonable thing to do. But the other thing that you could do is to say, well, now this tells me that I believe that these MLL leukemias are a distinct entity. Now let's use supervised learning types of methods to identify the genes that are most correlated with the class of interest-- for example, high in the MLL class versus the others. That would be a straightforward thing to do. So you could rank the genes according to that distinction. 
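A minimal sketch of both views just described, under stated assumptions: an unsupervised principal component projection (computed without labels, which are only overlaid afterwards for coloring), and then a supervised one-vs-rest ranking of the genes most correlated with a class of interest, for example the MLL group. The matrix, label, and gene-name variables are hypothetical, and the separation score is the same simple mean/standard-deviation statistic mentioned earlier, not necessarily the exact metric used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

def unsupervised_view(X, n_components=3):
    """Project samples onto the top principal components; no class labels used."""
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(np.asarray(X, float))
    return coords, pca.components_          # sample coordinates and per-gene loadings

def rank_one_vs_rest(X, labels, target_class, gene_names):
    """Rank genes by how high they are in `target_class` versus everything else."""
    X = np.asarray(X, float)
    mask = np.asarray(labels) == target_class
    mu_in, mu_out = X[mask].mean(0), X[~mask].mean(0)
    sd_in, sd_out = X[mask].std(0), X[~mask].std(0)
    score = (mu_in - mu_out) / (sd_in + sd_out + 1e-12)
    order = np.argsort(-score)              # genes most "high in the target class" first
    return [(gene_names[i], float(score[i])) for i in order]

# Usage sketch: color unsupervised_view(X)[0] by the known labels after the fact,
# then call rank_one_vs_rest(X, labels, "MLL", gene_names) to get the candidate
# marker list, where a kinase such as FLT3 might surface near the top.
```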
So we did that and did what I think-- yeah. AUDIENCE: [INAUDIBLE] In the first case, [INAUDIBLE] compare between the different classifications-- the difference-- the different genes-- [INAUDIBLE] expression or whatever? And in the second case, you're just comparing the one classification? Is that correct? AUDIENCE: No. TODD GOLUB: No, I think it's more that, if you didn't have these colors to look at and you said, ah, there's some structure here. I don't know what it is. What's the biological basis of this structure? Looking at the weights of the genes that are driving this distinction would be a reasonable thing to do. In this case, we had a specific question. Are these leukemias unique, or are they admixed with the others? Having determined that they are unique, it's a little bit cleaner to say, all right, let's use supervised methods to find the genes that distinguish one class from the other. Of course, if you had perfect separation, it would reduce to the same experiment. But because it's imperfect, there are some advantages to using class labels here. I should mention also that-- you see this blue guy here sitting in a sea of green? So this is a patient that, based on gene expression, one would predict to be MLL-rearranged. But the clinical record for this patient said it was not. But when we went back and actually looked at this, it turns out that there was a missed translocation into the MLL gene that you could recover by FISH. So this is not a public health menace-- diagnosing these leukemias properly. But there are examples of missed diagnoses, and I think looking at these multi-parameter gene expression readouts can serve as a unifier, an integrator of lots of upstream genetic activity. And so I think the power to detect those upstream events is going to be higher when you look at some downstream pattern, such as an RNA pattern, as opposed to developing specific tests for each of the individual genetic abnormalities that could cause the same phenotype, because, in the end, all you care about is knowing whether the molecular program has been activated. So you rank the genes according to this distinction and just start at the top of the list-- here is a gene that was at the top of the list of the 12,600 or whatever genes that were on the list. And any time a tyrosine kinase rears its head in a cancer classification-- cancer biology experiment, you pay attention to it, particularly given the Gleevec story. So what do you think about this? I tell you, oh, look at this. The RNA level of a kinase-- a receptor tyrosine kinase called FLT3-- is characteristically high in the MLLs, compared to the others. What do you think of that, in terms of therapeutic-- potential therapeutic significance? AUDIENCE: What would you do with that? AUDIENCE: [INAUDIBLE]. Or maybe this already popped out, but can't you check the levels in the other two classifications? AUDIENCE: Because these [INAUDIBLE] slower. TODD GOLUB: Right. So we defined this list by virtue of the fact that it's high in the MLLs compared to the other two combined. AUDIENCE: So how do we get the therapeutic K out of this? AUDIENCE: For the patients that are responsive [INAUDIBLE]-- are they classified as far as the response to the FLT3 inhibitor? TODD GOLUB: To a FLT3 inhibitor. AUDIENCE: Yeah. TODD GOLUB: So you want to treat patients with a FLT3 inhibitor? Well, that's not an FDA-approved drug, so you can't do that. AUDIENCE: OK, so what else do we have? AUDIENCE: [INAUDIBLE] TODD GOLUB: So there is. 
The hypothesis would be that MLL leukemia cells are dependent on FLT3 kinase activity for survival. If that's not the case, then you don't care. Unless that's the case, the overexpression of this thing is totally irrelevant from a therapeutic perspective. So you could do that genetically-- for example, using RNA interference to knock down the expression-- or you could do it pharmacologically, if there was a compound in development-- not yet a drug, but a compound in development-- that inhibits the kinase activity. And so that's what this experiment is. AUDIENCE: So [INAUDIBLE] Doing RNA interference-- is that something that you can just do in a person? TODD GOLUB: You can't do it in vivo in a person, but you can do it in human-derived cell lines. AUDIENCE: So clinically, that would not be [INAUDIBLE]. AUDIENCE: You could test the hypothesis that you want to go down that path. AUDIENCE: I see. TODD GOLUB: That's right. Now, you can make the argument that, well, doing these things in cell lines and in mice-- that's not real disease. And so I don't care what any of this stuff shows. But still, if your hypothesis is that a given gene-- the overexpression of a given gene-- is important, and you do the experiments to ablate the expression of that gene and nothing happens, that should deflate your enthusiasm a little bit. So here's the experiment, though. Here, now, taking patient-derived human cells that have been engineered to express the firefly luciferase gene so that they glow-- and you can monitor in vivo tumor burden. So here are mice that, on week one, you inject in the tail vein with infant leukemia-derived tumor cells. And you see, over four weeks' time, the amount of luciferase activity increases as the cells grow, and the mice start to die around week four. And here is a cohort of mice also injected but treated with a drug once a day by mouth that functions as a FLT3 kinase inhibitor. And you can see that the development of the leukemia is significantly abrogated, which, at least to a first approximation, validates the hypothesis that FLT3 overexpression is not just a diagnostic marker of this class, but it's actually a potential therapeutic target. And so, based on this and some other preclinical data, the clinical trial that you wanted to do with a FLT3 inhibitor is being planned to treat patients. Yeah. AUDIENCE: [INAUDIBLE] TODD GOLUB: So the cells are infected with a retrovirus that contains the cDNA for the firefly luciferase gene so that, if you inject these mice with the compound luciferin, the cells-- which express the same enzyme that fireflies do-- will glow. So usually, this is done in vitro, in test tubes. But here, you introduce it into the cells and the animal so that you can monitor. What you used to have to do would be to inject a whole bunch of mice, kill some of them here, kill some of them here, and then examine the bone marrows to evaluate the progression of the disease. What's nice here is that you can follow a cohort of mice non-invasively. AUDIENCE: Let me ask a dumb question, because I've never actually done this. When you actually look at these mice, can you tell that they're fluorescent? TODD GOLUB: No. No. AUDIENCE: They don't actually look-- no. TODD GOLUB: No, you need to use a special device that can measure, I think, in the near-infrared range. There are green fluorescent protein mice that actually do glow. And you can tell that they're green. AUDIENCE: The mice that we use-- are they immune [INAUDIBLE], which means you don't have a massive immune response? 
TODD GOLUB: So you have to do this in immunodeficient mice so that they don't reject the human tumors. AUDIENCE: Does that factor at all in your determination of the degree of proposed-- the spread of the tumor cells and whatnot, because they're just there? The immune system can direct attacks against-- so when you're considering these experiments and saying, OK, I see this spread across the entire mouse and this level of [INAUDIBLE] how do you factor that in? TODD GOLUB: You don't. You factor that in by saying that there are many things that are occurring in these models that don't recapitulate what happens in the patient. Most people don't get cancer by having intravenous injection of a tumor into them. Most patients have an immune system. So I think it's just one of the limitations that-- it would not be worth the time and expense to have a drug development project around every little inkling that comes out of a microarray experiment. So you need to do something, even though it's deficient in many ways-- and you've hit on some of them-- to say, is this interesting or is it not? I think we're not yet at the point where one can do this entirely computationally and have any kind of confidence. That being said, these so-called xenograft models, where you put a human tumor into a mouse, are not particularly predictive of the efficacy of a drug in the human clinical trial. But in the absence of anything better, it's still what most people do first. AUDIENCE: [INAUDIBLE] more robust, in other words-- if there's no effect on the xenografts, then you're really a loser if you go to the human [INAUDIBLE]? TODD GOLUB: No. If anything, it's the opposite, that, if you show some activity in the xenografts, you often see activity. But failure to see activity-- failure to see activity in xenografts is not particularly-- particularly for molecularly targeted therapies, where it may be that you can show in the mouse that you've really shut down the pathway. Let's say you've inhibited FLT3 completely. Drug companies are starting to use xenografts in that way to say, all the mouse is is a test tube so that I can ask-- have I inhibited FLT3 enzymatic activity? Yes or no. If I have and I believe that FLT3 is a good target, I don't care whether the tumor's actually shrunk or not. I'm going to bring it forward to clinical trial. But you need something to convince you that the target is reasonable, yeah. AUDIENCE: An interesting [INAUDIBLE]. So the small spots in the [INAUDIBLE] three mice are-- the purple ones are the xenografts, right? The purple spots are the xenografts. AUDIENCE: Are the tumors. TODD GOLUB: Yeah. AUDIENCE: But that means the spots are all very fixed in specific areas of the mouse. TODD GOLUB: Yeah, I think you see them there because the cells-- they're injected intravenously in the tail vein, but they home to the bone marrow, and you're seeing large bone marrow cavities, which is why you see them over the flank there. I think I'm going to skip this. OK, did you talk about this? AUDIENCE: No. TODD GOLUB: Good. So when you do these experiments, the data usually present you with two-- you either have one or two problems after you do all this appropriate correction for multiple hypothesis testing that I told you about. Despite having done that, you still have a list of genes that's too impossibly long to bring biological understanding to, or you've corrected away everything and you have the impression that there's actually nothing that is differentially expressed in your two cases. 
And so let me show some of the more recent approaches to dealing with this, because it's really substantially changing our thinking about how to do these kinds of experiments. So this is not a cancer example but a diabetes experiment, where there were patients who were either-- adult patients who either had type 2 diabetes or they were normal, as defined by having a normal glucose tolerance test. And they underwent voluntary skeletal muscle biopsies under a euglycemic clamp. 18 of these patients, 17 of these patients-- it's a simple two-class problem. Get the expression data to identify those genes that are differentially expressed in these two classes. Do the appropriate permutation testing to make sure that you correct for multiple hypothesis testing. And here's the result. Nothing meets significance. So out of the 20,000 genes on the array, even the top-ranked gene doesn't meet statistical significance. It's possible that this is the case. But the question is, are there other ways that you might go about recovering a biological story here? And so the approach that Vamsi Mootha and graduate student Aravind Subramanian took was to define groups of genes, or gene sets, whose activity as a collection of genes could be interrogated in these data sets. And we could have a rich discussion about, how does one define such gene sets? You could do it based on the literature. So ask Zach what genes are important in some pathway that he knows something about. That could be a list. Or you could say, we don't trust that-- that'll bring Zach's bias. We're not interested in that or anyone else's intuition. Let's just experimentally derive lists of genes one way or the other-- perturb cells, get the gene expression change, and that makes a gene set. And you can collect as many of these as you could stand. For these experiments, we made 150 of these gene sets. AUDIENCE: Before we go on, [INAUDIBLE] twice on the slide. TODD GOLUB: Oh, yeah. AUDIENCE: So he's enriched one. TODD GOLUB: He's enriched. Yeah, he should be. So then how do you do this? So the first thing you do is rank all the genes on the array-- 1 through 20,000-- according to how well they're correlated with the distinction. I already told you this top one-- even that one doesn't meet significance as a single gene. And then you interrogate each of these gene sets and ask, are they enriched? And so here would be an example of a hypothetical gene set-- each gene in the gene set of a dozen genes or whatever-- that is not enriched towards the top of this rank-ordered gene list, whereas here is a hypothetical gene set-- it's not perfect, but it's non-randomly distributed on this rank-ordered list. It's enriched towards the top. AUDIENCE: Do you see this as a similar operation to the following-- there's a bunch of proteins that they would look at [INAUDIBLE] given a microarray result and gene ontology, and they'd say, what classes of gene ontology are overrepresented in this set of genes? TODD GOLUB: Yes. So you can define these gene sets based on a gene ontology annotation. That's an example. The important part is to make sure that you appropriately correct for testing all the gene sets. So now, instead of 22,000 genes, we have 150 gene sets. But you should think of that as 150 hypotheses. So you should do the same permutation type of testing and say, if I randomize-- in this case-- the diabetes versus normal distinction, is my favorite gene ontology class still enriched? 
And that's what some of the current approaches to that kind of annotation don't do. And so you can codify this in something called a Kolmogorov-Smirnov statistic. It doesn't matter. You can come up with an enrichment score for these things. And if you do that in this example, you essentially get one gene set which meets-- AUDIENCE: [INAUDIBLE] TODD GOLUB: Which gets quite high statistical significance for a set of genes. So how do you reconcile this? How do you reconcile this thing and this thing? How could that be? AUDIENCE: That's my point. I want to understand what you just were trying to explain. So the first thing-- you're saying that you didn't pick up any difference in the expression-- AUDIENCE: On a gene-by-gene basis. TODD GOLUB: That's right. AUDIENCE: But then, when you group a couple of them together, all of a sudden, there is a difference. TODD GOLUB: Right. So how could that be? AUDIENCE: How could that be? [INTERPOSING VOICES] AUDIENCE: --getting more information out of that. Maybe a weak signal [INAUDIBLE] sample [INAUDIBLE] the coherence. AUDIENCE: [INAUDIBLE] a couple of things being combined [INAUDIBLE]. Make it one up and one down. [INAUDIBLE] TODD GOLUB: So the microarrays themselves-- the precision of these arrays is not so fantastic. And so you can imagine, if there's a subtle signal, on a gene-by-gene basis, it's difficult to detect it. But if you consider the coordinate regulation of a group of genes all in the same direction, as a group, this might be quite striking. And this is shown right here, which is really quite amazing when you think about it. So here, look at the mean expression level of all the diabetic patients versus all the normal patients. All the genes on the array are shown in gray. And so you would expect there are no outliers. There's nothing really way off the diagonal. If there were, those would show up as single genes that were differentially expressed. Of course, you could have one massive outlier that could screw you up with looking at the means. But still, you get my point. And here are these oxidative phosphorylation genes-- the gene set that was defined by those genes that are involved in oxidative phosphorylation. And you can see that, with only a few exceptions, they're all lined up just below the diagonal. Their change in gene expression is only about 20% compared to normal, but it's all in the same direction. So a 20% change in this number of genes is quite significant. AUDIENCE: Are you getting that? Let me try to-- because it's an incredibly important point. The chance-- if you look at any given dot, what does it mean to be near the diagonal, on one side or the other? I'm not going to make [INAUDIBLE] story. [INAUDIBLE] about. It's going to be a quick one-- the fact that it's on one side or the other. And then, just by dumb luck, they're all on one side of the diagonal? That's going to be incredibly unlikely. And so each individual gene is on one side of the diagonal. The fact that all the genes that we have pre-assigned beforehand as the other type-- in this case, [INAUDIBLE] phosphorylation-- all end up on one side of the diagonal-- that's hugely unlikely, that you could just, by some luck, put them all on one side of the diagonal. AUDIENCE: So it's less tied to the enrichment [INAUDIBLE] the probability that you define [INAUDIBLE] that, given that you've said these are genes that should be related to some [INAUDIBLE]. AUDIENCE: Yes. AUDIENCE: That makes sense. TODD GOLUB: That's right. 
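A sketch of the enrichment calculation being described, under stated assumptions: the genes are already ranked by their correlation with the phenotype, and a gene set is walked down that list with a Kolmogorov-Smirnov-style running sum. This is the simplest, unweighted version of the idea (the published GSEA statistic adds correlation weighting and normalization), and the label-permutation step mirrors the correction discussed above; `rank_fn` stands for any function that returns gene names ordered by their correlation with a given labeling, such as the one-vs-rest ranking sketched earlier.

```python
import numpy as np

def enrichment_score(ranked_genes, gene_set):
    """Unweighted KS-style enrichment score: step up at genes in the set,
    down otherwise; return the signed maximum deviation of the running sum."""
    hits = np.array([g in gene_set for g in ranked_genes], dtype=float)
    n_hit = hits.sum()
    n_miss = len(ranked_genes) - n_hit
    if n_hit == 0 or n_miss == 0:
        return 0.0
    steps = np.where(hits > 0, 1.0 / n_hit, -1.0 / n_miss)
    running = np.cumsum(steps)
    return float(running[np.argmax(np.abs(running))])

def gene_set_p_value(X, y, gene_names, gene_set, rank_fn, n_perm=1000, seed=0):
    """Permute the class labels (e.g. diabetic vs. normal), re-rank the genes
    each time, and ask how often the set looks as enriched as it really does."""
    rng = np.random.default_rng(seed)
    observed = abs(enrichment_score(rank_fn(X, y, gene_names), gene_set))
    hits = 0
    for _ in range(n_perm):
        permuted_ranking = rank_fn(X, rng.permutation(y), gene_names)
        if abs(enrichment_score(permuted_ranking, gene_set)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```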
AUDIENCE: [INAUDIBLE] together, the classes can be formed. TODD GOLUB: That's right. Because if you look-- if you take this point in isolation, there's no way that that's going to be significant, because it's right in the middle-- in this thing. So this is an eye-opening experiment, and it's causing us to go back and reanalyze, using this methodology called gene set enrichment analysis, some old data sets. Let me give you a couple of other unpublished examples that use it in a slightly different way. So I told you about our medulloblastoma outcome prediction experiments before. And around the same time, there was a paper published that looked at the same question, essentially. Non-metastatic versus metastatic medulloblastoma-- different patients, different arrays, different groups, whatever. They made a classifier that was centered around the PDGF receptor alpha gene-- it was a predictor, and also a number of the downstream players of PDGF receptor alpha. And so we asked, are any of-- when we look at our classifier of outcome, which I showed you is pretty decent, where's the PDGF receptor alpha pathway on there? And neither PDGF receptor alpha nor the genes in that pathway were among the top predictors-- the top 50 genes-- in our data set, which I think would lead one to believe that one or both of those data sets are wrong, or the models derived from them are wrong. But if you take these PDGF receptor alpha-related genes as a gene set and ask, is it enriched in our data set, using this methodology shown schematically here, it's enriched. This list is 12,000 genes long or so. So you can see they're not all stacked up, like 1 through 50, but they're non-randomly distributed, which we take to mean that, actually, the two data sets are consistent. If you had data sets of infinite size, then you'd start to see convergence of the markers being overlapping at the very top of the list. But with these smaller data sets and the clinical variability-- AUDIENCE: This puts some formalism around what [INAUDIBLE] was waving his hands and yelling about. For five years, people said, well, this microarray [INAUDIBLE]. You can't tell the difference between them. TODD GOLUB: Yeah. AUDIENCE: This is the pattern. It's the overall pattern. This is a much more formal way-- a pattern that's released to pick [INAUDIBLE]. TODD GOLUB: So here's another example of that thing. It's also unpublished. So we looked at lung cancer-- human adenocarcinoma of the lung-- and identified some-- we just drew the line at 50 because it's a nice number-- predictors of outcome in the Boston lung cancer patients. The University of Michigan did the same experiment, published around the same time. Overlap in genes between these two lists of 50 genes-- zero-- concerning. But if you look in gene set space and you ask, what gene sets are enriched in one data set? What gene sets-- which you can think of loosely as pathways. They're not really pathways, but it's reasonable to think of them that way for this purpose. There's really quite significant overlap in gene set space. So I think what this is saying is that Botstein is right, that there is more biologic coherence in these data sets. It's just we haven't been smart enough to really know how to see it. AUDIENCE: How did you choose your gene sets? [INAUDIBLE] What do you use-- [INAUDIBLE] function or is it a pathway? Or how do you actually [INAUDIBLE]? TODD GOLUB: We now have about 450 or so such gene sets-- some of which are good, some of which aren't particularly useful. 
They include some GO annotation. I don't think those are particularly useful because the granularity isn't fine enough. I think, in the end, the most useful types of gene sets are going to be those that are experimentally derived. But this is a mixture of those, and we're not yet at the point where we've even started to try to understand-- of these 35 enriched sets of genes, what are they and what's the biological story? AUDIENCE: You threw at it on the order of 60 gene sets? TODD GOLUB: No. AUDIENCE: No? TODD GOLUB: No. We threw at it 400-and-something gene sets and asked, how many of those are enriched in the Boston data set? And the answer is 35 plus 18. And 35 plus 12 were enriched here. And so the majority of the sets enriched in one were also enriched in the other. AUDIENCE: That's helpful. AUDIENCE: Are those cancer-specific or just biological? TODD GOLUB: No, they're not cancer-specific. AUDIENCE: And those gene sets are manually annotated by their group, or is the [INAUDIBLE]? TODD GOLUB: They're a combination, as I said. Some are these [INAUDIBLE] pathways that are so-so annotation. Some are entirely computationally derived. That is, they're the nearest neighbor genes of a given index gene in a data set. They are various things. And what the definitive collection of gene sets would actually look like isn't obvious to me. AUDIENCE: [INAUDIBLE] learn more and more about mechanism. TODD GOLUB: That's right. On the one hand, I think it will be useful to just not fret about it too much and worry about exactly how to define these things. Just get them in there. The nice thing about this GSEA methodology that I didn't really go through in detail is that it's forgiving-- how you calculate these enrichment scores is forgiving of the definition of the gene sets, because you're looking for enriched-- non-random enrichment of the gene set. So the fact that a third or a half of the gene set may actually be inappropriately there doesn't make any difference, because there's still enough that's significantly enriched to detect it. AUDIENCE: You also [INAUDIBLE]? TODD GOLUB: You can. We didn't happen to do it here, but you can. Again, like any of these other things, there are going to be a number of different metrics that you could apply to measure significant enrichment. The most important thing is just to make sure that you correct for the possibility, whatever metric you use, that you're detecting something beyond what you'd expect by chance. AUDIENCE: So I have two questions about [INAUDIBLE]. One would be-- so I'm assuming that you can also detect-- you have a [INAUDIBLE] of a particular gene, which you haven't mentioned so far. So is that [INAUDIBLE] an example of an actual [INAUDIBLE]. Is that something you would find by going back to your healthy samples and comparing-- looking for enrichment relative to your disease? AUDIENCE: A positive score? TODD GOLUB: So actually, the way you calculate this, the metric doesn't specifically look for enrichment towards the top. It looks for a non-random distribution. You would find depletion. What you could also find-- which I think this score will capture and is not desirable-- would be something that's concentrated in the middle, which is very uninteresting. So there are some false positives in there. AUDIENCE: That may not actually be so interesting-- for example, in T-cell activation, you get normal [INAUDIBLE]. You get upregulation of certain proteins and you get downregulation of others. 
And so what I haven't heard yet is how you account for, perhaps, enrichment of part of your genes [INAUDIBLE] increasing another with that-- TODD GOLUB: So there is another version of this that tries to dissect the gene sets into those components that move coherently in one direction versus the other, because you're absolutely right. AUDIENCE: [INAUDIBLE] TODD GOLUB: Yeah. If, for example, you take a-- use GO annotation or something like that, or some pathway, if half the genes in the pathway go up and half the genes go down, that could look like no enrichment at all, whereas, if you separate those somehow, you could see it. How are we doing for time? AUDIENCE: Well, you've got 20 minutes-- 18 minutes. TODD GOLUB: OK. So let me push this to-- not classification, but some newer directions that we're thinking about-- how can you use these signatures for useful things, particularly to think about something that's closer to drug discovery. So this is what the usual discovery pipeline would look like. You have some disease process or biological process you care about. You do some microarray experiments. And then a miracle is supposed to occur whereby you develop sufficient molecular understanding of what the data are telling you that you can identify the smoking-gun target. And then you partner with a drug company and say, screen for a small molecule that inhibits this critical therapeutic target. The problem is that this part is really tough. And so what we've been thinking about is, well, could you bypass the understanding part, at least initially, whereby you screen for small molecules based on their ability simply to perturb a signature of interest. And then, once you have those in hand, you could use them to further dissect the biology or, if you're lucky, think about them like drugs. And so the proof-of-concept experiment is shown here, where-- here are two biological states, for example-- a leukemia cell, which is undifferentiated, and a normal blood cell, which is fully mature and is differentiated along the myeloid pathway-- a peripheral blood neutrophil. It's not known what the critical targets are of this pathway. So it's hard to do a small molecule screen to induce this process, which would be nice, if you could simply induce your leukemia cells to turn into normal cells. So the question is, could we define a signature of this state, a signature of this state, and then screen for compounds that trigger the signature? So the details don't matter here, but the concept is, define signatures of the two states of interest. So we call this thing GE-HTS, for gene expression-based high-throughput screening. Define the signatures-- now standard, what we've been talking about-- on microarrays, where the experiment would be: treat cells with various different chemical compounds and ask whether any of those compounds trigger the signature of interest. And to make this feasible, we simplify these complex signatures into a handful of genes that you can measure by multiplexed PCR. AUDIENCE: I read the paper. Was it just purely a cost issue as opposed to going directly to the microarrays? TODD GOLUB: It's a cost and throughput issue. Yeah, so if you wanted to-- AUDIENCE: We're good. [INAUDIBLE] TODD GOLUB: If you wanted to screen tens of thousands of compounds, it's not really feasible if it costs you $500 a pop, and there's the throughput issue. So yeah, it's a practical matter here. This part doesn't matter. Suffice it to say there's a method for how to measure a simplified signature in high throughput. 
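The actual GE-HTS scoring scheme and readout chemistry aren't reproduced here; the following is only a sketch of the idea of collapsing a signature to a handful of marker genes and ranking compounds by whether they push those markers toward the target state. All of the names and data structures are hypothetical.

```python
import numpy as np

def score_compounds(readouts, control, up_markers, down_markers):
    """Rank compound treatments by a reduced marker-gene signature.

    `readouts` maps compound name -> {marker gene: measured level} (for example,
    from a multiplexed PCR assay); `control` holds the same measurements for
    untreated cells. A compound scores high when the "up" markers rise and the
    "down" markers fall relative to control, i.e. it looks like the target state.
    """
    scores = {}
    for compound, levels in readouts.items():
        up = np.mean([levels[g] - control[g] for g in up_markers])
        down = np.mean([levels[g] - control[g] for g in down_markers])
        scores[compound] = up - down
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In this framing, the hits from the simplified assay are only candidates; as the next passage describes, you then go back to genome-wide arrays to check whether the full molecular program was actually recapitulated.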
So we screened a couple thousand compounds and asked, do any of them trigger this little mini gene signature? And some of them did. Details don't matter. But then the question should be, well, maybe these things just trigger these five genes that are-- I'm sorry, these compounds trigger the five genes but actually don't do anything. So one way that you could sort out whether they actually do anything biologically is to now step back and look across the whole genome again, take cells, treat them with these candidate compounds, and ask, did you actually recapitulate the overall molecular program, not just of these five genes but of the whole thing-- of the whole molecular program? So if you turn back to genome-wide arrays, you can see that a number of these compounds recapitulated the molecular program of differentiation. Does that make sense? So you use the simplified high throughput assay just as a readout of whether you've triggered the signature or not. And then, with those candidates in hand, you go back and interrogate them. AUDIENCE: Does everybody follow that? TODD GOLUB: So the genes in this little signature itself-- AUDIENCE: We don't know if they're actually doing anything to the rest of the cell. TODD GOLUB: I'm pretty confident they don't, actually. But it's irrelevant. So you define these signatures not based on their being important, or somebody thinks, oh, that's a good target, and that's important for leukemia or differentiation or whatever. It simply-- AUDIENCE: Represents the class. TODD GOLUB: It represents the class. That's all you care about-- and that you can measure it well. So sometimes you find a good candidate marker that, for some reason, doesn't behave nicely in this assay, so you chuck it out and replace it with something else. So here's just an example to say that then, as you might expect, when you treat leukemia cells with these candidate compounds, discovered solely based on their gene expression changes, they do the thing-- the cells do the things that maturing leukemia cells should do, like-- they become phagocytic. They start engulfing-- AUDIENCE: [INAUDIBLE] much more differentiated like in their behavior. TODD GOLUB: Yeah, so it's a promising idea. This is one unpublished example that says, well, it is known that blood cell differentiation is largely governed at the transcriptional level. So maybe that's why you can define these transcriptional signatures of the differentiation process and screen for things. But here's an example of defining a gene expression signature where, again, the signature itself is devoid of any real biological meaning, other than it reads out, in this case, activation of a signal transduction pathway. So the usual way of thinking about this is, well, if you want to look at signal transduction-- proteins talk to each other, RNA has no place in that. RNA profiling has no place in that. But here the idea is, if we stimulate a signaling pathway-- in this case, by stimulating cells with platelet-derived growth factor-- and then capture the transcriptional response at the RNA level, could we use an RNA signature as a readout for PDGF receptor activation and screen for inhibitors of the signal transduction pathway using RNA as a readout? And here, this is the work of an MIT graduate student-- a chemistry graduate student-- who pulled out a compound called aurintricarboxylic acid as an example, which turns out to be a previously unknown inhibitor of the PDGF receptor itself. But it was discovered by looking down here at a signature. 
So I think this is going to be useful for various-- being able to screen for things that you can't otherwise screen for. So this is a chemical structure that is not currently being explored by people who know much more than I do about kinase inhibitors, because no one thought to look at it. And it was discovered simply by using a signature as a readout. So for the last five minutes, let me push the signature idea perhaps further than I should. AUDIENCE: But it's the end of the class. TODD GOLUB: But it's the end of the class, and I'm excited about the idea. But it has less data surrounding it. And that's the idea of using these signatures-- so you can see, at least in-- my thinking has shifted over the past year or so, much away from using these microarray types of experiments to find a needle in a haystack-- what's the gene that's responsible for something I care about?-- to thinking about the power of these signatures as readouts for various things. So the idea here is to use the signatures-- RNA signatures-- as a vehicle for establishing connectivity between components of the genome and each other, that you manipulate through perturbation. It's a little bit different than relevance networks, I think, but it's conceptually similar. AUDIENCE: [INAUDIBLE] TODD GOLUB: Establishing connections between drugs and drugs, and drugs and genes. So the idea would be that, if you can define a gene expression signature of, let's say-- comprehensively of all drugs-- there are only about 2,000 FDA-approved drugs. It's actually amazing that that's just not publicly available information, what happens to cells when you treat them with the drugs we give to patients. If you had that and, for example, you had a signature that was a result of ablating each gene in the genome sequentially-- only 26,000 genes in the genome, and there are now reagents coming online with RNA interference where you can actually do that experiment-- then you'd have a systematic matrix of perturbations whereby you use these gene expression signatures as the universal bioassay to connect genes with genes, genes with drugs, and drugs with drugs. Does that make sense? Do you have a question? AUDIENCE: [INAUDIBLE] How do we prevent from [INAUDIBLE] to the [INAUDIBLE] cancer cells [INAUDIBLE] cancer cells. But because of [INAUDIBLE]-- because of [INAUDIBLE]. TODD GOLUB: Well, we're arguing that-- don't bother doing this in lower organisms, because you can do the experiment in human cells. So we're going to do it in human cells. And I would say-- the idea, again, for establishing these connections is not to be able to-- AUDIENCE: Be perfect. TODD GOLUB: I think people who think that you can create a wiring diagram and reverse engineer a cell are out of their minds, based on these data. It's just not feasible. So the question is, can you find-- can you find, for example, enrichment in the same GSEA kind of thinking? Can you find enrichment of one signature in another, thereby establishing connectivity? In which case, yes, you'll get some of it wrong because the context isn't right or the species mapping isn't right. But will you be able to see enough connections to establish connectivity? AUDIENCE: [INAUDIBLE] TODD GOLUB: Yep. AUDIENCE: Sure. TODD GOLUB: Yep, certainly some. We're finding, actually-- I'll give you a couple of examples that-- while that is certain to be-- that context-dependence is certain to be the case-- AUDIENCE: There's lots that's shared. TODD GOLUB: There's lots that's shared. 
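A sketch of what a signature-based connectivity query of that kind might look like, reusing the enrichment_score helper sketched above: every perturbation (a drug treatment or an RNAi knockdown) contributes a list of genes ranked by how strongly they responded, and a query signature (an "up" gene set) is scored for enrichment against each list, with a positive score roughly meaning the signature is concentrated toward the top. The data structures are hypothetical; this is not the actual connectivity map implementation.

```python
def connectivity_query(ranked_genes_by_perturbation, query_signature):
    """Rank perturbations by how strongly the query signature is enriched
    in their response-ranked gene lists."""
    query = set(query_signature)
    scores = {
        name: enrichment_score(ranked, query)   # KS-style score from the earlier sketch
        for name, ranked in ranked_genes_by_perturbation.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```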
And so, when we first started talking about this project, there were a lot of objections to the idea-- oh, still are, that, oh, what if you don't choose the right cell line to do this in? Ideally, you would do it in like 100 different cell types. But then it gets to be a serious experiment. Even in one cell line, it's a huge experiment. AUDIENCE: I think I told this group that I've seen the same thing in [INAUDIBLE] relationships. Many, many cell types that people wouldn't think they would be there. They're just there. TODD GOLUB: Yeah, exactly. So let me give a couple examples and then we'll end. And it also-- again, comes back to this idea of enrichment, looking for enrichment using a Kolmogorov-Smirnov enrichment test. So here is an experiment that was published not by us, but by a group at Abbott Laboratories, where they are interested in this class of drugs called histone deacetylase inhibitors. They took-- what is it? Five of these things, treated cells, took a common set of genes that were regulated, defined a signature of 22 genes. It's a gene set. It's the HDAC inhibitor gene set. We took breast cancer cells-- not the cell type they used-- and treated them with a bunch of different drugs, including one, valproic acid, which is actually used to treat seizures, as it turns out. It was later discovered valproic acid actually has histone deacetylase inhibitory activity. It's weak, but it's there. And we asked, can we see enrichment of this signature in any of those compounds? And the answer is, yes. So Trichostatin A was actually-- where is it? Here's the top one-- was one of the drugs that they used to define the signature. So we recovered it-- not surprisingly. But it's a little bit cheating because it wasn't a new example. But here, you see sodium valproate, third and fourth ranked on the list, which was not used to define the signature but simply based on the signature connectivity. Had we not known it, we could have rediscovered that-- we could have discovered that sodium valproate was an HDAC inhibitor because of this connectivity. You can also see here-- Trichostatin A-- so the signature is defined-- I can't even remember-- in one cell type. And we see it triggering the signature-- the HDAC inhibitor signature in breast cancer cells and in leukemia cells. So it's robust [INAUDIBLE] context. Interestingly, we also ran our little connectivity-- mini connectivity map across the oxidative phosphorylation signature that we defined. AUDIENCE: Does the [INAUDIBLE] protease cause hypoglycemia? TODD GOLUB: It does. We didn't know that. We thought we discovered something new. AUDIENCE: It's the wrong specialty. TODD GOLUB: In the wrong specialty. Yeah, it was reported about 20 years ago that valproate causes hyperglycemia and modulates oxidative phosphorylation-- separable activities, separable from the anti-seizure activity, separable from the HDAC inhibitor activity, and they trigger separate gene sets. So one last example-- a signature defined for a drug called rapamycin, which sits in this pathway simplified here of PI3 kinase, AKT, and a protein called mTOR. And it turns out that, if you define the signature of rapamycin treatment here-- it was defined in T-cells, published by David Sabatini-- and apply that signature to our connectivity matrix, you see that rapamycin itself is recovered in two different cell types. But also, this thing-- LY294002-- we didn't know what that was initially. But then you look it up.
It's actually an inhibitor of PI3 kinase, which is upstream of mTOR. So it puts, in the same pathway, two drugs that act together and trigger the same signature. | again, I don't think it's going to be possible in the near term to actually reconstruct signal transduction pathways in their entirety. But to be able to put either genes or drugs together in a pathway not previously known to be in the same pathway, I think, is going to be possible with this approach. So we're committed to figuring out how to launch what would be a large scale public domain connectivity map project, where we would do these perturbations and put the data in the public domain so that people could use the data to find their own connections. AUDIENCE: [INAUDIBLE] There is a database that's recording some molecules [INAUDIBLE]. What will be the major difference between this project and that one? TODD GOLUB: I'm not sure I know what-- AUDIENCE: I think she's talking about the [INAUDIBLE].. An expression, right? AUDIENCE: Yeah, very expression when conditions on the molecules-- TODD GOLUB: On response to the molecules. This is actual response to the molecules. So that data set is the resting, untreated expression of the cell lines, which you can then correlate with how they respond. This is the acute changes of response, because the nice thing about that experiment is that you only have to measure each cell line once and then you can correlate it with all those things. Here, you actually have to do a microarray for each different-- AUDIENCE: Condition time point. TODD GOLUB: Yep. OK, so the last slide. Whoops, let's skip that. There's some future challenges. Just to remember that these prognostic signatures that everyone's developing are really a function of therapy. They're likely only to be useful if you continue to use the same therapy. So in this medulloblastoma example, we can predict things pretty well as long as we're trying to predict response to therapy that was given 10 years ago. But the therapy has evolved, and it's not certain that our classifier will still hold. This is going to be challenging because the clinical trials are generally small and underpowered to do this kind of thing. Still not clear how, once these signatures are actually vetted and validated, what form will they take when they actually go to the clinic? Will they be a microarray? Maybe. I've given you some ideas about how one might do signature-based chemical discovery. But turning a chemical into a drug is a big deal and not easy. And so for that reason, pharmaceutical companies aren't dropping their current approach to drug discovery in favor of this. And in general, how to integrate these kind of signatures into the drug development process is still something to think about. But I still think this notion of using these tools-- whatever personalized medicine means-- but to gain more insight into the particulars of a given individual's disease to better match them with an existing or new therapeutic is likely to be here to stay even though there aren't a lot of examples of it happening yet. That's all I have.
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_17_Direct_Prediction_of_Outcome_Mortality.txt
PETER PARK: For today, I'll just talk a little bit more generally at the beginning about a few observations that I've had. Perhaps a little bit about the reliability of microarray studies. I'll talk about the classification problem in general. And then, I'll talk more about phenotypes. And then review some literature that is well cited. So I bet you've had some exposure to that, right? Did anyone-- maybe Marco, or Zack, talk about lymphoma studies? Do you remember any other papers that you've covered? Yeah? AUDIENCE: [INAUDIBLE] PETER PARK: Anything else you remember? All right, so-- AUDIENCE: [INAUDIBLE] PETER PARK: Yeah, right, so maybe this stuff that I have is new. OK, good. So you probably know all this. It's actually-- so the cDNA arrays are cheaper. So they are used in a lot of biology labs. The ones that I work with, for example, in the new building, they have their own libraries. They print their own. Affy arrays are-- I think the consensus is that the Affy arrays are probably the most accurate at this point. But it's still something like $800 to $1,000 per array. A lot of groups are experimenting with these printed oligonucleotide arrays. So Agilent has-- so there's just a whole variety now-- Agilent has one type of platform where they actually give you the chips, and you have to basically buy their whole setup. And it's cheaper. People seem to like it, but who knows what's going to happen in the future. I mean, it's all unclear at this point exactly what platform is the best information per dollar spent. But there are other companies that will actually make these oligonucleotides. And then you just buy those and then you print it yourself. So you just have those spray printers and then print those. I think those will run at this point, maybe a third of the Affy cost. So that's a good option for the future. But it's not clear how good those are. So it's cheaper at this point, but not as good. We are actually trying to print that at the [? Harvard ?] Partner Center ourselves. And so if we can get that to work-- we're doing it for human and mouse at this point. If we can get that to work, that would be very good for our investigators. SAGE is a very accurate way of doing expression data analysis. It basically involves sequencing every little tag. So the advantage is that you get a very-- even for a very small copy number of transcripts, you can get them accurately. The downside is that it's expensive. But there are a bunch of groups around here that have large libraries. So I'm personally interested in the second point, which is, nowadays, called systems biology-- so that is integrating a lot of high throughput technologies in genomics and proteomics, and putting that together. So microarray is where everything started. Now that all these other technologies are getting more mature, we're all interested in combining all of those. And there are a lot of public and commercial tools and databases that you should be aware of. So I think the problem is that even the best biologists don't know what's out there. So the challenge for me and a lot of other people in a similar situation is that you have to, instead of assuming that they know what's out there, you have to tell them what to do. Tell the biologists that, you should try such and such, because they're just unaware of what's going on in mathematics, generally. So in terms of data integration, initially people did a lot of expression data studies. There have been some nice programs that linked literature.
For example, we, at the center, just got a commercial license to something called Ingenuity. You basically put your expression data into the software and it'll generate nice networks. And you can-- well, there are many ways of being this data. But you can, for example, click on links between genes, and it will bring up all literature that site those genes. I mean, there are lots of programs too that do that, but this particular program does it really well. So something like that is enormously helpful, if you know that such things exist and if you have access to it. There are some free ones out there, but they're just not as good. I mean, this software, they have tons of people-- not tons, a lot of people, just sitting there, curating literature by hand. And so they've accumulated this huge database. But I think that's-- eventually, this is, I think, how a lot of us are going to do research. So these tools will get smarter and smarter, so you don't have to go through PubMed, yourself. There are other types of data. I don't if you've actually covered any of these in class, but there's a lot of protein DNA binding data. So the most famous data set is from the Whitehead, from Rick Young's lab, where for yeast, they basically took all the transcription factors, about a little more than 100 for yeast, and then they found where in the genome these transcription factors actually bind. So in vivo. So they can't locate-- you don't have enough resolutions to pinpoint exactly what base pair they are binding, but you know which genes are actually using these transcription factors for their regulation. So that data is enormous useful. I think they're trying to do that for human genome, but I don't know how that's going. Protein microarrays-- so that has not been as successful as people initially, thought it would be. There are just a lot of difficulties in just manufacturing. But for example, at the center, Harvard Partner Center, there are some contracts with some companies that make these protein arrays with about 50 or 60 molecules sitting there, then I guess these are 50 or 60 really common antibody detection systems. So I think that could be very helpful. But I think the technology is still not quite there. At least not for the price that's reasonable for most investigators. Mass spec-- that's an enormous area that has taken off in the past, I'd say, at least in terms of informatics, maybe just past a year or two. There are enormous amounts of data and a lot of informatics challenges. And so another set of data that could be integrated into your studies. So when people talk about systems biology, people mean different things. But generally, they refer to the fact that you're combining all these different types to generate some coherent picture. And it's actually really hard. And I think some of the reasons will be clear in the next few slides. OK, so let me just skip that. So just to give you a funny story on the necessity for a good analysis with all this data. There was a well-cited paper in Nature Genetics in 2001 called Transcriptional Regulation and Function During the Human Cell Cycle. So I think it actually came from the group that did the yeast cell cycle a couple of years back, prior to that. And in that paper, they claimed that there are large number of mammalian genes, about 700, to have a cell cycle specific patterns. So I think this was done on Affy data, actually. 
Anyway, the next year, there's a paper in PNAS called "Analysis of cell cycle-specific gene expression in human cells as determined by microarrays and double thymidine block synchronization." So from the title, you can't really tell what the subject is. But basically what they did in the 2002 paper is go back and reanalyze the data from the 2001 paper. And so, if you look at the text here, the original microarray data presented to support the existence of cyclic gene expression in human cells is now re-examined with a statistical approach, and we find that there is internal evidence implying that the original microarray data do not support the proposed patterns of gene expression. So one of these-- I think it was the first author that's a statistician. So they claimed that to study the cell cycle, you need to synchronize at the beginning, using one of many methods. And they look at this data and say, well, after randomizing the data, we still see a bunch of genes that have cyclic behavior. In fact, if you study it carefully, the cells are not even synchronized. So they claim that the first experiment was just junk. And the 700 genes that they observed was something that just could have come by chance. Because, I mean, because you have so many genes, you're going to see some genes that go in cycles. So it came out in PNAS. And so since then, at least, there were just exchanges back and forth. So the first group has a website that details this is what we did, this is what the PNAS paper did, and this is why they're wrong. It's just a whole website. And I think the second group has also had a rebuttal to that website. So I mean, this is an important thing to settle, right? Because it came out in Nature Genetics, a lot of people are using this data, but the other group says, this is all nonsense. And I think-- I mean, so I read a little bit of that. I think what happened was-- I mean, as usual the truth is somewhere in between. I mean, the first group was so interested in showing that there are lots of genes that are cycling, that they just used methods that will be advantageous to them. And the second paper, they were so interested in showing that the data was wrong, that they defined things in such a way that there would be no cycling genes. AUDIENCE: [INAUDIBLE] if you found the same genes in another [INAUDIBLE]. PETER PARK: Maybe, I don't know. Yeah. I mean, I bet, depending on how you do it. I would think that they would find some overlap with the first one, but not 700. So I guess the lesson is that you should make sure you do things well before you publish. At least in such a way that no one can attack you like this. And this is just one example. There are a lot of problems with early studies. And I'll actually tell you about another problem that has been very common. So one thing that I've been interested in with a student-- who actually published a paper on this before anyone else-- is that there are substantial differences in different technologies for microarray platforms. So cDNAs-- they're very different from Affy arrays. So one of the common questions is, what platform should I use? If something's cheaper, does it work as well? Do they agree? So which platform is the best? And so there are a bunch of papers on this, some of which are cited in the article, but I mean, that's just a very small subset. And most of them are conflicting, except that they agree that they do not agree very well. So they're conflicting in terms of which one is better, how accurate are they.
But I think overall, the overlap is surprisingly small. So if you do your experiment with cDNA arrays, find genes that are different with [INAUDIBLE].. So you repeat the experiment with Affy arrays, you get a very different result. So that's a big problem, right? I mean, for one thing, you're not sure if the results that you get are correct, or not. And another thing is that eventually, we'd like to combine data from different people. So if someone else, in another school, has done a very similar experiment, and you want to see how your experiment compares with that, then you don't want to have to repeat their experiment. You'd like to just do your experiment on whatever platform you have and compare it against what someone else has done, which may not be on the same platform. And so this is a big issue. And it's far away from being settled-- just there are so many issues and problems, so that I think it'll be a while before these things get sorted out. So as I mentioned, Affy arrays appear to give the most reproducible experiment. I think most people that have tried multiple hybridizations will say that they agree with a correlation of 0.98, 0.99. But one thing I've done with Steve Greenberg, whom you've met, I think, a couple weeks ago, is that we've studied how reproducible things are between two generations of Affy arrays. So there is a very popular set of arrays-- the first one was U95-- the 95 refers to the version of the gene database that they used to collect the information about the probes. So that, I think, people used for a couple of years-- lots of chips, I mean, many thousands of chips. And then U133 is the newer one. So U95 actually came in five chips, and U133 came in two chips. So they changed a bunch of things so they were able to fit into two chips. Now, there is just a single chip with all the human genes. So from five arrays, three or four years ago, now you have a single array. And so a lot of experiments have been done. These are human arrays. Just to give you a sense of how many arrays are out there, when I was talking to someone from Aventis Pharmaceuticals, they have a proprietary database with-- I think they said-- I can't remember, it was either 20,000 or 30,000 chips. AUDIENCE: Of their own? PETER PARK: Of their own, right. So their internal database. So 20,000, let's say, times $1,000, that's $20 million, right? That's a lot of money. So anyway, there's a lot of data using these. Actually, I don't know if these arrays were the ones-- they didn't have 20,000 of these arrays. But they have a lot of Affy arrays. These are 20,000 Affy arrays. And that's just one company. Anyway, so what Steve and I wanted to do is look at how the results would vary depending on which platform you use. And he actually had done the experiment on U95, and then he had a new grant, he had more samples, so he wanted to do it again, and so he did it on U133. And it's not because he had so much money that he just wanted to try this. So what he did was he took 14 muscle biopsy samples from patients with inflammatory myopathies, which is his specialty, and they were hybridized to both chips. So I looked at this data-- we submitted the paper just recently. But what we looked at was, what happens when we grouped the arrays using hierarchical clustering, which you know by now, and when we find differentially expressed genes in each case.
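As an aside, here is roughly how that kind of clustering could be set up; this is a sketch in Python with synthetic data (not the real myositis data), built so that platform-specific probe effects are larger than the sample-to-sample differences, which is what makes arrays group by platform.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_samples, n_genes = 7, 500

# Every array shares a large gene-to-gene baseline, each sample adds a modest
# biological difference, and "platform B" adds probe-specific effects that are
# bigger than the sample-to-sample differences.
baseline = rng.normal(0, 2.0, n_genes)                # shared across all arrays
sample_fx = rng.normal(0, 0.5, (n_samples, n_genes))  # biology per sample
probe_fx_b = rng.normal(0, 1.0, n_genes)              # platform-B probe behavior

platform_a = baseline + sample_fx + rng.normal(0, 0.3, (n_samples, n_genes))
platform_b = baseline + sample_fx + probe_fx_b + rng.normal(0, 0.3, (n_samples, n_genes))
arrays = np.vstack([platform_a, platform_b])
labels = [f"s{i}_A" for i in range(n_samples)] + [f"s{i}_B" for i in range(n_samples)]

# Average-linkage hierarchical clustering with 1 - Pearson correlation as distance.
tree = linkage(pdist(arrays, metric="correlation"), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
for lab, grp in zip(labels, groups):
    print(lab, grp)   # here the two clusters split by platform, not by sample
```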
So you can't read the labels from there, I bet, if you look at the top left one-- so we basically pulled all the data together and said, OK, just can I cluster them? So what you should see is, same sample, hybridized to 95 and 133 should be next to each other, and that's going to be the case for all your samples. In this clustering, everything on the left is U133, and everything on the right is U95. Yeah. So if you cluster, it's not the sample characteristics, but the array type that is the distinguishing feature. AUDIENCE: What did he [INAUDIBLE]?? PETER PARK: Well, in this case, we used Pearson correlations. So any linear normalization would not affect the result. But no matter what you try, it's going to turn out this way. And so this is disturbing, right? So there are some ways to fix this. So in this particular case, we normalized the genes in a particular way, and then we were able to actually get this to line up correctly. But what we did in this case, in general, doesn't work. So we could only do it here. But in general, it doesn't work. So remember, this is one platform, just different generations. So if you were to compare different platforms, I mean, things are just much harder. And so, that's just an idea of-- that should give you some idea as to how different they are. Another way that we looked at was, from each platform, we looked at-- so the samples were naturally divided into two groups. So we looked at, for each platform, what genes are differentially expressed. So from the U95 samples, we got a long list, from 133, we got a long list. How much overlap is there? That was the question. Does that make sense? So if you look at the dotted line, if you look at-- AUDIENCE: [INAUDIBLE] PETER PARK: Right, so we had 14. And there were basically five of one type, and the rest could be considered as another type. So using those five and nine chips, we looked at the different questions. So we did that separately for each one. So I mean, they should overlap, right? Exactly same samples. We did a lot of analysis to make sure that there wasn't a lot of degradation of RNA, and such. But if you look at this-- so forget the solid line for now-- if you look at the dotted dashed line, out of top 100, there are about 20%, less than 20% that overlap. So about 20 genes will show up in both, but 80 genes are not in common. So that's somewhat disturbing, right? I mean, if you look at top 50-- and usually, when investigators do these experiments, they can't really look at more than top 20, 30, 50, it's about 15% that are in common. So 15%, that's like seven or eight genes out of 50. So that's somewhat disturbing. So for the paper, we actually found some way of fixing this problem in the general case. So we could raise the percentages a lot. But most people who will not go through this trouble of what we did should be aware that their list is not all that robust. And in some sense, this is not surprising. Although the degree to which that they disagree is surprising. Generally, you have several hundred, or maybe even thousands of genes, that are differentially expressed. And there are different genes expressed at a very similar level. So any perturbation that you make to the data, things are going to just shuffle, and they're generally not very stable. So it's not surprising that they don't agree very much. 
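To see how an overlap number like that might be computed, here is a toy sketch in Python; the group sizes mirror the ones mentioned above, but the data and the noise levels are made up, so it only illustrates the instability, not the actual percentages.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_genes, n_grp1, n_grp2 = 5000, 5, 9

# 200 genes have a real group difference; each "platform" measures the same
# underlying biology with its own measurement noise.
effect = np.zeros(n_genes)
effect[:200] = rng.normal(0, 1.0, 200)

def simulate_platform(noise_sd):
    g1 = effect + rng.normal(0, noise_sd, (n_grp1, n_genes))
    g2 = rng.normal(0, noise_sd, (n_grp2, n_genes))
    return g1, g2

def top_genes(g1, g2, n_top=100):
    _, p = ttest_ind(g1, g2, axis=0)
    return set(np.argsort(p)[:n_top])

top_a = top_genes(*simulate_platform(noise_sd=1.0))
top_b = top_genes(*simulate_platform(noise_sd=1.0))
print(f"overlap of top-100 lists: {len(top_a & top_b)}%")
# Even with identical underlying biology, small groups and noisy measurements
# keep the overlap well below 100%, which is the instability being described.
```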
But in this case, because it was so tightly controlled, I mean, because they get the same RNA labeled at the same time, they were hybridized in the same facility, we expected this to be a little higher. But it's not. So this is a challenge, I think. And it should be a caution for people doing experiments. One thing that-- so one thing that people should pay more attention to, therefore, is not so much what genes are expressed, but what kind of genes are expressed. So people are now thinking of more robust way of looking at the list. I mean, for example, what kind of goal category genes are represented at the top, that kind of thing. AUDIENCE: [INAUDIBLE] PETER PARK: Yeah, so that's a good question. So we looked at that in detail. So it turns out that if the probes-- so you all know how Affy arrays work. So if the probes are exactly the same, the sequences are exactly the same. They are very reproducible. So they give very good results. But if they are different, I mean, I don't have pictures here to show you, but no relationship. Even though they are supposed to detect the same transcript. So basically, I mean, the lesson is that slight changes in the sequence information results in huge changes in your expression. AUDIENCE: [INAUDIBLE] PETER PARK: No, the length is the same. The length that-- those are 25 mers. And the reason that they're-- so there is a debate as to what the optimal length is. And so people have-- so the commercial suppliers that I listed earlier, they have different lengths. So there are some 30 mers, 50 mers, a lot of people think that the longer nucleotides, oligonucleotides usually have 70 mers, or that size, are the best. But that's still a bit of an open question. But Affy would probably like to have longer probes, but they're limited by technology. So they are built by-- it's the same technology that you use to build semiconductor chips, and so you can't tile so many nucleotides. So what they did-- so the reason that Affy got better, is that this-- it's so hard to figure out exactly which sequences would give you the best result. So basically, what Affy has done over the years is that, throughout the generations, they figured out, just through trial and error, which ones give bad results. And so next time the product comes out, they get rid of all the bad ones and just try new ones. So by that trial and error process, in my view, they've gotten pretty good. So if you look at two or three years ago, if you look at the old chip, just how much variation there is among the probes that are supposed to detect the same gene, you'd see just things all over the place. It's just maybe half the probes may give you completely different numbers. But now, it is much better. Things have certainly improved. And I think Affy, the platform itself, has some advantages, because they are using some multiple probes. Other platforms, they don't have the luxury of doing that. So they have a single, longer probe, but they have no idea whether it's correct or not. And I've looked at, actually, commercial arrays, not Affy, and compared that against Affy, and you just get disturbing results. So you probably have heard a lot of good things about microarrays, and that's probably true overall. But there are some issues that you need to be careful about. So yeah, I was a little surprised about this. And I guess the recommendation is that if you have different platforms, at least, unless you're going to go through, unless you are able to go through what we went through, not to combine the arrays. 
But then, that's not a very satisfactory solution. I mean, if you have a database with all these chips that everyone else has done, what use is it if you can't use them, other than just each data set on its own? So OK, I don't think we need to talk about how many chips we need. I don't know if anyone is planning to do array experiments. Your array experiments-- oh, OK. So generally, I think for most platforms that have been around for a while, most people would agree that biological variability dominates technical measurement errors. So for Affy, you don't want to do many replicates of the same tissue or sample. In terms of exactly how many you need, how many chips you need, that's a hard question. Only because you don't know how much variability you have in your samples. So if someone can tell me, I have this much variability in my samples, I want this false positive rate, this false negative rate, I can give you numbers based on some model. But usually people can't give me those numbers. So there's no point in talking about this. So, generally speaking, if you have cancer data, where you have a lot of variability, I think people would say-- some studies have said, having samples like 10 to 15 per group for a tumor comparison might give you something like a 75% false positive rate. And that's for detecting genes that are three-fold variable. So yeah, there can be some estimates like that for given parameters. But in general, it's hard. So yeah, let's leave it at that. I think I'm going to skip most of these. I'm sure you've seen these before. Let's talk about p-values, multiple testing. OK, so maybe not. So just to give you one example. So suppose you flip a coin 10 times and get all heads. This is like a statistics 101 problem question. What's the probability that you get all heads? And it turns out it's about 0.001-- it's like 1 over 2 to the 10th. So yeah, it's about 0.1%. So if the question is, is it biased? You'd say, probably, right? Pretty unlikely. But what if there are 10,000 people flipping coins, and one person gets 10 heads. So should you be surprised that someone got 10 heads? Probably not, right? So you need to adjust for this. This is the same problem that I mentioned earlier with the Nature Genetics paper. You need to adjust for the fact that there are so many genes. In some sense, people find it somewhat difficult to understand, because I'm interested in one gene, but my probability depends on what kind of chip I use-- for example, if I use a chip with 10,000 genes, versus a chip with 30,000 genes, my probability will be different for the gene that I'm interested in. So people find it sometimes hard to understand. But some correction has to be made, I think. So it's often referred to as multiple testing adjustments. I think there is some consensus now on how to do this. So maybe we won't talk about this here. But people generally-- and I agree with this, as well-- there's something called the false discovery rate, that you may have heard of. I think that's a very reasonable approach. Exactly how to calculate that, it's not so simple. But it can be done. All right, so let's talk a little more about the subject that I'm interested in, in terms of application. So eventually, for people that are interested in applications, the clinical application of microarray, or any other type of high throughput data, is that you'd like to use this in your clinical setting. That is, treatment-related things.
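Returning for a moment to the coin-flip example and the false discovery rate mentioned above, here is a small Python sketch: the first two lines are just the coin-flip arithmetic, and the function is a minimal textbook Benjamini-Hochberg step-up procedure run on toy p-values (this is one common way to control the FDR, not necessarily the exact method any particular paper used).

```python
import numpy as np

# One person flipping 10 heads is rare, but among 10,000 people someone almost
# certainly will.
p_one = 0.5 ** 10
p_at_least_one = 1 - (1 - p_one) ** 10000
print(f"P(10 heads, one person)  = {p_one:.4f}")        # ~0.001
print(f"P(someone out of 10,000) = {p_at_least_one:.4f}")  # ~1.0

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean array marking p-values called significant at FDR level alpha."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    cutoff = np.max(np.nonzero(passed)[0]) if passed.any() else -1
    keep = np.zeros(m, dtype=bool)
    if cutoff >= 0:
        keep[order[: cutoff + 1]] = True
    return keep

# Toy p-values: a few real signals mixed with uniform noise from null genes.
rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 995)])
print("called significant at FDR 0.05:", benjamini_hochberg(pvals).sum())
```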
So the challenge is to stop doing these simple methodological problems at some point and move on to real problems. And that means incorporating large amounts of phenotypic data. So in a clinical setting, typically, you have lots of other types of data listed here. And we'd like to find relationships between genomic data and phenotypic data. So questions could be, what genes are variable or correlated with a particular phenotype? What should we use as predictors? So I'm just introducing the framework at a basic level. And then we'll look at some papers. So the simplest case that people have spent a lot of effort on is the binary case. So the phenotypic data, in this case, is just a label-- zero, one, yes, or no, disease, versus normal. And basically every method out there that could be applied to this has been applied, and there's tons of papers on this-- how to do this better. How do you pick out genes that are related to the label? And how do you do the prediction? Have you guys talked about prediction? Do you know what leave-one-out cross-validation is? Right, so good. And then, people did multiple subclasses, not just two, but multiple cases. And then, in this case, it's non-ordered-- so you have different subtypes. You could have ordered subclasses-- that is, if you have a rating for severity of disease, maybe you have one through five as your phenotype. Continuous-- for some reason, this hasn't really-- the last two haven't been done as well as they should have been, or they should be. Yeah, as well as they should be. But it's certainly an area where a lot of progress needs to be made. So for example, I did one collaboration where the phenotype was some invasive ability of cells. So what genes predict the ability of these cancer cells? And so how to go through all this-- finding genes, then finding prediction algorithms, and then making predictions-- how to do that optimally is not completely settled. And finally, the censored data type is something that I'm very interested in. And I'll talk a little more about that later. So it turns out that a lot of these phenotypes can be reduced to the binary type. So if you have multiple subclasses, you can combine them into two. If you have continuous data you can say, well, low invasive versus high; if you have severity of disease, you can divide into two. So you can always turn it into a simpler problem. And that's what people have done in many cases. But you lose a lot of information. So you'd like to preserve that and do that-- carry the same procedure out. So I mean, you probably have seen this before, there are lots of questions that could be asked, and have been asked. So just to give you a general framework, because there are so many genes in the data, usually there is some type of dimensionality reduction. Even when you have a very good method, you still need to reduce the data set size. Sometimes, unless you have a nice computer, things may not even fit unless you do some sort of reduction. So this problem is called feature selection in computer science. So there are lots of different ways of filtering genes-- you know, thresholding on expression value, variation filtering, and so on. There are lots of tools for doing the dimensionality reduction. So typically, either you reduce dimension by just cutting down the number of genes, or you could actually find some linear combinations of genes. And then, use those as the reduced dimension.
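A small sketch of those two routes, in Python with hypothetical data: keep a subset of the genes themselves (a variation filter), or keep a few combinations of genes (principal components).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
expr = rng.normal(0, 1, size=(30, 10000))   # 30 samples x 10,000 genes (toy data)

# Route 1: keep genes -- a simple variation filter, keeping the 500 most variable.
variances = expr.var(axis=0)
top_idx = np.argsort(variances)[::-1][:500]
filtered = expr[:, top_idx]                 # still genes, just fewer of them

# Route 2: keep combinations -- project the samples onto the first 10 principal
# components; each new dimension is a weighted combination of many genes.
pcs = PCA(n_components=10).fit_transform(expr)

print(filtered.shape)   # (30, 500)
print(pcs.shape)        # (30, 10)
```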
So if you do like principal components, or a singular value decomposition, you reduce the dimension, but the dimensions that you have in the end are not genes. Those are combinations of genes. So there are different ways of doing this, with advantages and disadvantages for each one. So I won't go through this, but the t-test you probably have heard of. So let me just spend maybe two or three slides on just one thing that a lot of people have noticed. This problem has been fixed in most cases, say, starting a year or two ago. But if you go back to earlier papers, they've done a lot of suboptimal things, or in this case, just some wrong thing in their study. So there are a lot of papers on classifying disease categories. And this has been the primary, or one of the primary applications, of Affy technology, or cDNA technology. Of course, the performance depends on the methodology used. A lot of times you see these machine learning algorithms, like singular-- not singular, support vector machines. Neural networks-- they appear to give pretty good results, often. And as you probably know, if there are enough samples, you want to use separate training and testing sets to assess your accuracy. If not, you use leave-one-out cross-validation. It turns out that a lot of papers in good journals have made mistakes in estimating prediction accuracy. And what they state are overestimates of the real accuracy. So this wasn't discussed by other people, right? AUDIENCE: Talked about a paper that [INAUDIBLE].. PETER PARK: Oh, he did? In breast cancer? Right, so let me very briefly then-- so let me just talk about this one simulation study. Did he talk about this? OK, all right, so this is Richard Simon at NIH-- he's a statistician who's done a lot of good work. He published this paper-- actually I think the date is-- it may have been 2002. Anyway, Journal of the National Cancer Institute. So he did a simulation study just to show you what kind of mistakes could be made. So he generated simulated data-- 20 expression profiles, 10 randomly assigned to one class, the rest to the other. So in this case, because the data are all random, there are no true underlying differences. And that means whatever prediction you make, the error should be about half, right? And he averaged this procedure over 2,000 data sets. So resubstitution here means-- OK, so you have 20 samples. So you built your model, prediction model, based on all 20, so you have your model. And then, you test the model on each one. So if you do that, 98.2% of the time, you make no error. You always get the labels correct. Cancer or not, or whatever labels you might have in a particular study, that's the label. And that's clearly a wrong thing to do, right? You can't build your model based on your data, and then predict the data. But what a lot of people have done in good journals is to remove the test sample after gene selection. So I have my 20 samples, and then I filter the genes, and then I pick the genes that are informative of the classes. And then, to predict the label, I leave one out, build my model, but based on the genes that I've already selected. And then predict the one that's left out. Then you leave another one out, build a new model, do the prediction, and so on. So you think, well, that should be OK, right? But it's OK only if the gene set that you get is the same every time you delete the sample and recalculate. But I mean, as we saw earlier, the gene list is not very stable.
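The bias shows up in a small simulation in the spirit of the setup just described; this is a hedged sketch in Python with made-up random data (fewer genes than the published study, and a simple nearest-centroid classifier chosen only for brevity), comparing gene selection done once on all samples against gene selection redone inside each leave-one-out fold.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
n_samples, n_genes, n_keep = 20, 1000, 10
X = rng.normal(0, 1, (n_samples, n_genes))      # pure noise, no real signal
y = np.array([0] * 10 + [1] * 10)               # arbitrary class labels

def select_genes(X, y, k):
    """Pick the k genes most associated with the labels (smallest t-test p)."""
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.argsort(p)[:k]

def nearest_centroid_predict(X_train, y_train, x_new):
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return 0 if np.sum((x_new - c0) ** 2) < np.sum((x_new - c1) ** 2) else 1

def loocv_error(X, y, select_inside):
    errors = 0
    genes_all = select_genes(X, y, n_keep)       # selection using ALL samples
    for i in range(n_samples):
        train = np.delete(np.arange(n_samples), i)
        genes = select_genes(X[train], y[train], n_keep) if select_inside else genes_all
        pred = nearest_centroid_predict(X[train][:, genes], y[train], X[i, genes])
        errors += pred != y[i]
    return errors / n_samples

print("genes selected on all samples first :", loocv_error(X, y, select_inside=False))
print("genes re-selected inside each fold  :", loocv_error(X, y, select_inside=True))
# On random data the honest procedure hovers around 50% error, while selecting
# genes before cross-validation can look far more "accurate" than it should.
```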
So if you do that, there's a lot of bias. So if you do that in this simulated data set-- a random data set-- about 90% of the time you make no errors at all. So generally, the difference is not this big. But it could be substantial, depending on, say, the size of your data set. Now, if you do it correctly-- that is, you remove the test sample before selecting the genes, you reselect the genes and you do the prediction, and so on-- then you get a median of 11 misclassifications. You should get about 10, so you get something close to that. Of course, no one really does it exactly the way it should be done. So the way it should be done is, delete one sample, and then start everything from scratch. Like you renormalize, you do everything. But no one really does that. So they just normalize using all the data, and then move on to the gene selection, and so on. So I think people probably suspected that there was a problem with this, but not as big as it is in certain cases. So nowadays, like if you see a paper in good journals, it will have the supplementary information, and they'll talk about what the bias is. But they often report the better rate in the paper. So not good. OK, so let's talk about survival times now. So how many of you are familiar with censored data? OK, so I won't really talk about the basics too much then. I mean, the censoring rate often is on the order of-- it could be like 50%. That's not unusual. So you want to make sure that you take care of your censoring correctly. And generally, we talk about right censoring-- you have the patient, a study's terminated, [INAUDIBLE] patients die, your patient drops out of a study. OK, so as in the other phenotypes, the simplest thing you can do is to use a standard univariate approach. That is, I look at my genes one at a time, and see how each gene relates to survival. So there are many ways of doing that because, I mean, this problem has been around for decades. And so there are good methods for doing this. For example, log-rank tests will give you some p-value as to how that gene is correlated with your survival data. And so a lot of people will do that, some sort of univariate approach, considering [INAUDIBLE] time, and then they put it together at the end with some type of voting method. So to do a prediction for a patient, you say, well, what does my gene say about this patient? What about gene two? What about gene 100? They add up, sometimes weighted differently, depending on how good the gene is in predicting-- or in its correlation to the phenotype. So as I'll talk a little more later, this is often not optimal. So I mean, it's how people do it because it's easy, but it's not optimal. So I'll skip that. So let's just do some-- take a look at some examples. So eventually, you'd like to do your study and then correlate that with your phenotype. And then actually go on to show that whatever-- you don't want to just show that it's related to the phenotype. You want to show that it's better than what people use currently. So this wasn't done until, let's say, 2001, 2002. So people just said, oh, my expression profiles are related to my phenotype. But not that it's actually better. So it's actually-- I guess this is an interesting question for debate. I mean, people have published all these papers for two or three years. But I actually don't know of any hospital that will do the testing and then do prediction on the patient.
I mean, there was some report of a plan to do this in Netherlands, but this hasn't really come about. So I mean, I actually had a good friend in the pathology department who's actually done a lot of arrays himself, and so now that he finished his PhD, he's in a pathology fellowship, he said, OK, maybe I'll try to get this thing to the clinic. And it turns out, there's just a lot of problems. For one thing, a lot of these studies have not shown that, compared to the cost involved, that it's actually a good deal. So apparently, there are very simple tests for a lot of, say, cancer-related diagnoses. There are very simple tests that are very cheap, and very easy to do. Take very little time. So he said, it's not clear in the clinic whether those tests are actually less accurate than this. It's much simpler, people already do this. And so there isn't a great incentive to switch to this. And probably the bigger reason, though, is money. So insurance companies these days are paid per patient-- so I forget the terminology for this. But if the hospital wants to do an extra procedure like this test, basically they don't get reimbursed by the insurance company unless it can be shown to the insurance company that this is a necessary procedure to be done on all these patients. And that's a difficult thing. So my friend looked into this. And it's not easy-- a hospital, or insurance companies are not willing to shell out money unless it's proven so in a reasonably good way. And of course, if I were to, for example, show them the earlier results on how [? useful ?] these are, no one's going to, say, pay $1,000 per patient. So I don't know at what point this is going to really be done in the clinic. But I think as more people publish better papers, and do a more comprehensive cost benefit analysis, maybe it will happen. I think initially, people are very excited that this is what you're going to do, right? You walk in the clinic, they do this, and give you all these diagnoses. But who knows when this is going to happen. So yeah, those are some practical issues that are not so easy to resolve. In any case, this is a paper that came out in Nature. This is, I think, a group from the Netherlands, where I think they are more-- at least from a couple articles that I read, they are more closer to actually doing this in the clinic. And one article actually cited the results of this paper as a basis for doing this in clinic. So one problem is that I don't think what was done in this paper is that great. So it's a bit of an issue. But at least in this paper, they claim that this gene expression profile will outperform all currently used clinical parameters in predicting disease outcome in breast cancer. So is anyone familiar with this paper? AUDIENCE: [INAUDIBLE] PETER PARK: All right, so did he talk about the methodology, as well? AUDIENCE: [INAUDIBLE] PETER PARK: Oh, OK. I must have forgotten. Did he talk about this briefly, as well? It's also a breast cancer-- AUDIENCE: [INAUDIBLE] PETER PARK: OK, but so you talked about this paper, but not-- OK, I apologize. I can't remember from last year what papers were covered. And I knew that the lymphoma papers were covered by Zack, or somebody. But then I forgot that Steve might have covered this. Anyway, so actually, there isn't that much time so it's good that he covered this. So basically, the approach is you build some sort of classifier, you pick out the genes through some classifier. 
And then, so the way that these papers prove that things are good, or that the expression profiles are better, is that you do some sort of Kaplan-Meier plot for each one, and then you do a log-rank test, or some test like that, to show that there's a big difference. So they do this for a variety of stratified groups. So you might take patients that were assigned one category of disease by your typical prognosis, and then you show that, within that group, there's enough variability. So somehow, there is some information in the expression data that's not captured by this. And the next step that is done increasingly now is, after you get this classifier, you put that together with all the other data. So in this case, basically, out of all the expression data that they have, they just come up with one signature. So yes, it looks good, versus no, it doesn't. And so that becomes just one variable in your multivariate model that you have. So typically, most studies, without expression data, will just have a multivariate model-- a Cox model, sometimes. So the Cox model is just like multiple linear regression, sort of, done differently for censored data. So for the Cox model, you fit everything. And then, if you see that the p-value is small for that, then you say, well, it's a new parameter, or new variable, that you should take into account in your study. So I think this is a fairly reasonable approach. There is another study-- you may have seen this, as well-- where they do something better, I think. Which is, after you get all your expression data, they actually manually go through the top genes, and then try to come up with new variables. So I think this good-signature-versus-bad-signature call is too crude. It doesn't really give you as much information from the data. I think you could get, I think, a lot more information from your expression data. So in this example, they actually have not a yes-or-no score, but they actually have a prediction score. So that's a continuous variable. And then, they actually classify their genes, depending on their expression profiles and their known annotations, into these groups. So in this case they have five groups-- so a proliferation signature value, which is a set of genes, and BMP6, that's just one gene that just happened to be different from all the other profiles. And then they have these signature values. So I think this is a fairly reasonable approach. And I think it gives you a lot more information. Plus these coefficients, I think, give you a more robust view of how that expression, that group of genes, is impacting your survival in this case. So you get a prediction score. And then, I guess from here, you could do-- you could put other things, other variables in this setting, in the regression setting. Actually, there's a paper that came out this past week-- this week. And-- you're smiling? AUDIENCE: I know this one. PETER PARK: Oh, you know this one. OK. So I mean, I feel like at some point, someone could write software where you put in the data set, and it'll just do all the things that are done. Because it's, I think, fairly similar in all the papers. Except the algorithms-- and basically, the choice of algorithms used is like who the investigator knows, who knows how to do these things. So in this case, Rob Tibshirani is a statistician at Stanford, and this paper uses the methods that he developed, which I think is a fairly good method.
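Before getting into that paper, here is a minimal sketch of the kind of survival analysis just described: a Kaplan-Meier-style split on an expression-based score with a log-rank test, and then a multivariate Cox model with the score alongside a clinical covariate. This is Python using the third-party lifelines package (an assumption on my part; these papers used other software), and the cohort is entirely synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)
n = 120

# Synthetic cohort: a continuous expression-based prediction score plus one
# clinical variable; survival times depend on both, with random censoring.
score = rng.normal(0, 1, n)                  # signature score from the arrays
stage = rng.integers(0, 2, n)                # e.g., low vs. high clinical stage
hazard = np.exp(0.8 * score + 0.5 * stage)
time_to_event = rng.exponential(1 / hazard)
censor_time = rng.exponential(2.0, n)
df = pd.DataFrame({
    "time": np.minimum(time_to_event, censor_time),
    "event": (time_to_event <= censor_time).astype(int),
    "score": score,
    "stage": stage,
})

# Log-rank test comparing high-score vs. low-score groups (median split),
# the comparison behind the Kaplan-Meier plots in these papers.
high = df["score"] > df["score"].median()
res = logrank_test(df["time"][high], df["time"][~high],
                   event_observed_A=df["event"][high],
                   event_observed_B=df["event"][~high])
print("log-rank p-value:", res.p_value)

# Multivariate Cox model: is the expression score still informative once the
# clinical variable is in the model?
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "p"]])
```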
So just to give you a rough idea of what they do here-- so they make predictions in the end-- so they select genes by the method that Tibshirani developed, which is actually-- it's a little complicated to explain. But it basically does a search through all your genes, but you can also actually look for combinations of genes. So if you remember, basically all the typical methods will look for a gene that's related to the phenotype. So if there is any interaction between genes, you basically lose them all. So there is some fancy method that he developed where you basically search through all your genes-- it's like regression, stepwise regression type, where you basically search through your genes, you keep the one that gives you the best information. And then, you try the next set, and then you try combinations, and that kind of thing. So they identify a bunch of genes. And then they do a prediction. So they had 116 adults-- you have one group, a training set with 59, and you do a prediction on the other 57. The way they do the prediction is an algorithm that was published in PNAS a year or two ago. Basically, you take a new sample, and then see which of the clusters the profile is closest to. But they do this in a slightly clever way. So instead of just looking at correlations, they do something a little bit fancier. But I mean, the underlying idea is the same. And then, in the end, they do the multivariate analysis-- the gene expression predictor is a strong independent prognostic factor. So this is a figure in the paper. I guess at this point, we'll skip this. So for the next actually about five, 10 minutes, let me talk about something that I did. So most studies so far have used survival curves as a way of verifying the results. So you basically do the clustering, unsupervised clustering, and then you define groups and say, are they really different? So if they're really different as verified by these curves, then you say, oh, that's great. But in some sense, this is an indirect way of doing things. I mean, maybe there is a different way of clustering that gives you a better result-- who knows. This is just checking to see if your clustering was done correctly. So in another way, another problem with this is that-- yeah, well, I mean, so that's, I think, one way of doing things. Another way, actually, to handle the censoring, as was done in a lot of the other papers, is to turn survival times into a binary indicator, as I mentioned. So something that we were interested in was whether there is some combination. So maybe it's not really gene A that's predictive, but it's gene A plus half of gene B, plus 2 times gene C-- maybe that's really the most predictive of survival time. So otherwise, you don't know exactly how to combine your genes. If you do some sort of voting method, for example, you are combining information from all your genes, but you're not doing it in an optimal way. So we actually thought about this problem, and then came up with what I think is a good solution. Of course, the difficulty is that it's too complicated, and so it's hard to write software that someone can use to just press a button and get a good result. And we can't just do analysis for other people. So as in many bioinformatics algorithms, it's kind of there, but we haven't really done as much as we could have done with it. So just to give you a brief overview of this-- so the basic problem is that you have too many variables. So we use some method, it's called partial least squares. It's actually becoming very popular now.
And it turns out that it's a compromise between just doing least squares, like your regression, versus PCA. So principal component analysis is good in that you try to maximize the information contained in your few components, but it doesn't really have anything to do with the phenotype. So if you pick variables based on PCA, you do the dimension reduction in some optimal way. But it may not be related to your phenotype. If you do regression, you are picking something that's really tightly correlated with your phenotype. But you're not really doing any dimension reduction. Or you can't fit it in an optimal way. So there's something called partial least squares that's a compromise between the two, and it appears to work really well in most cases. And it turns out that someone had worked this out. So they figured out how to do partial least squares for different types of phenotypes-- continuous and binary phenotypes. So it gets a little complicated as soon as you have a phenotype that's not binary, or as soon as it's not continuous, but it has been worked out. But when you have censoring, well, again, if you have a small number of genes, you could use something like the Cox model to correlate gene expression with survival. But when you have too many genes, you can't do that. Your standard methods fail. So the question was, can we do this? Can we apply a method that's well known to work for regular, simpler phenotypes-- can we get this to work for censored data? So it turns out that we came up with a fairly good solution. It turns out that this was worked out many years ago, we discovered, but there was no need-- they didn't have high throughput data. So there was no need to use this. It was done by a statistician, and it's very hard to understand. I mean, I can barely understand it myself. So to some extent, it was reinventing the wheel. But we do it a little differently. And then we were able to apply this. So it turns out that we were actually able to get some really good results. For example, if I use the results of that algorithm to divide the patients into different groups, then it's more significant than using some other method of doing things. So there are methods like this that could be-- I think that will be available. And unfortunately, it does require some mathematical training, but for people here that are interested in this and can invest the time and energy, I mean, I think knowing the better algorithms will give you a lot of insights into what's going on underneath your data. AUDIENCE: [INAUDIBLE] PETER PARK: Well, that's the problem, right? AUDIENCE: You know, because just so many things will always be a problem. Even if they randomly [INAUDIBLE] had nothing to do with the survival, they can always find [INAUDIBLE].. PETER PARK: But not as well, right? Yeah, but it's actually not that likely, if you actually do the calculation. But-- AUDIENCE: [INAUDIBLE] PETER PARK: Yeah, you could find some combinations that will-- and it's true for anything, right? But even for the binary case, if you have 10 in each group, what's the probability that you're going to find a random predictor that will be lined-- if it's continuous data-- that will be lined up exactly as in the order of the patients? It's not that likely. So if it's not binary, right? It's much less likely. AUDIENCE: So you'd think the binary [INAUDIBLE].. PETER PARK: Yeah, yeah, exactly. Yeah, so that's the problem, right?
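Coming back to partial least squares for a moment, here is a minimal sketch in Python of the compromise being described, using ordinary PLS on a continuous outcome; this is not the censored-data extension discussed above, the data are synthetic, and the in-sample comparison is only illustrative (PLS with many genes can overfit, so in practice you would cross-validate).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n, p = 60, 2000
X = rng.normal(0, 1, (n, p))
y = X[:, :20].sum(axis=1) + rng.normal(0, 1, n)   # outcome driven by 20 genes

# PCA components maximize variance in X and ignore y; PLS components are chosen
# to covary with y, which is the compromise between PCA and regression.
pcs = PCA(n_components=3).fit_transform(X)
pls = PLSRegression(n_components=3)
pls.fit(X, y)
pls_scores = pls.transform(X)

def r2(components, y):
    # In-sample R^2 of a linear fit on the given components (illustration only).
    return LinearRegression().fit(components, y).score(components, y)

print("R^2 using 3 PCA components:", round(r2(pcs, y), 2))
print("R^2 using 3 PLS components:", round(r2(pls_scores, y), 2))
```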
Even for these Kaplan-Meier curves that people show, they need to adjust for multiple testing in some way, right? So you can certainly find genes that will give you a significant p-value, even though the data are random. So some people do permutation tests to adjust the p-value. So conclusions-- I don't need to go through that here. So let's see-- just as a final comment. To summarize briefly, I would say that some of the recent papers have done the analysis pretty well, I think. I think that involves not just doing the typical clustering and then finding the p-value for the Kaplan-Meier curves, but actually getting a good score of some sort, and then incorporating that into the multivariate model. And then, I mean, that part is actually not trivial. But if you're familiar with a statistical package, something like this, it is actually not hard. So you should be able to, I think-- if you understand these papers really well, there are a lot of little steps that you have to worry about, but you should probably be able to do everything on your own, I think. So without too much difficulty. So finally, the software that I use all the time is R-- are you guys familiar with the software? What statistics software, if any, do you guys know? MATLAB? MATLAB, a lot of people know. So R is-- so if you go to a statistics department, they'll use one of two softwares. One is SAS-- people use that. And the other one is S+. So SAS has been around forever. It's like Fortran-- it's like it's been around forever. There's a lot of people that have written good software for it. You know that it works. But it's very clumsy. A lot of things that you want to do, you can't do. They have a better Windows-driven package now for SAS. But even just a few years ago, it would draw plots using asterisks on a text page-- it would draw little characters on your screen as a plot. But I think the younger generation that are more computer savvy are more likely to use S+ unless they have to use SAS in their statistics courses. So S+ is more like MATLAB, so it's fairly powerful, and it gives you a lot of freedom. And R-- so the reason it's called S+ is that there is a statistical language called S-- I don't know why it's called S. Why C? The programming language called C. And then some-- and this was developed at Bell Labs. And some company took the code, and then made that into a product, and they called it S+. And now there's a bunch of people, a lot of whom worked on-- a lot of whom are familiar with S, that developed R. So actually, one of the two co-authors of the original R is at Dana-Farber, and he's been doing a lot of work in microarray analysis, as well. But so the software is free, and it's like Linux. People contribute-- it's well tested. And very powerful. And that's what I use all the time. There are some issues like memory management, and certain data types. But I think things are improving. And so I'm quite happy with it. And everyone that I've recommended it to is pretty happy with it. It has a feel of MATLAB, like you don't declare variables. You just kind of use them. Matrix manipulations are very similar. Everything is done in vector form. I showed this to one guy in one of the labs at the Children's Hospital-- he's a fellow. And he was trying to do some microarray data analysis in Excel, like doing some sort of permutation. And I showed him how to do this, and he said, it took me three hours to do this in Excel, and it's one line in R. He's very happy.
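The same kind of thing is only a few lines in any vectorized language; for example, a permutation test for one gene might look like the following in Python/NumPy (toy numbers, not the fellow's actual data).

```python
import numpy as np

rng = np.random.default_rng(7)
group_a = rng.normal(0.5, 1, 10)   # toy expression values for one gene, group A
group_b = rng.normal(0.0, 1, 12)   # same gene, group B

# Permutation p-value for the difference in means: pool the values, reshuffle
# many times, and count how often the shuffled difference is at least as large
# as the observed one.
observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])
perms = np.array([rng.permutation(pooled) for _ in range(10000)])
diffs = perms[:, :10].mean(axis=1) - perms[:, 10:].mean(axis=1)
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(round(p_value, 4))
```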
AUDIENCE: [INAUDIBLE] PETER PARK: MATLAB has statistics functions, but I think R has more sophisticated statistics. So MATLAB is more engineering oriented. MATLAB has better graphics. It's probably better for numerical algorithms-- like if you want to solve a large matrix problem, it's probably better in MATLAB. Just because a lot more people are working with MATLAB, and it's a big company, they've spent a lot of money on it. But for statistics, I think R is more sophisticated. You have a lot more options. Yeah, so I mean, I used to use MATLAB all the time. But now I'm happy with R. But if you know one, it's easy to pick up the other. I mean, you always get confused as to how do I comment something out, or that sort of thing. But they're basically the same flavor-- the same reason that MATLAB is popular. Let's see-- steep learning curve, but worthwhile in the long run. And there is something called Bioconductor-- I don't know if anyone has heard of this. So this was an effort that Robert Gentleman, who's at Dana-Farber and was one of the founders of R, started. So I was more a part of this project before, and he suggested at the very beginning, for the name of the project, MAD Men-- like microarray data management. But in the end, it was decided that NIH would not want to fund something called MAD Men. So he came up with Bioconductor, which is a much better name. So if you go to bioconductor.org, there are packages that you can download. So what's happening now is-- I think this project has caught on, so that if someone comes up with a good algorithm, they'll write a routine and deposit it somewhere on this website so that you can download it and run it. And that's been done for a lot of good algorithms out there. So if you want to normalize your cDNA data using some fancy method, it's there. Download it. It's pretty easy. And there's a Windows version of this that's pretty easy. And R comes in Linux and Windows-- it gets ported to Windows frequently, and I've had no problems with it. I think someone was trying to port it to Apple, but I don't know how successful it is. I wouldn't recommend it. But I'm pretty happy with it. This is what I use all the time. This is what a lot of people use who do microarray analysis. OK, so I see that I have ended three minutes early. AUDIENCE: [INAUDIBLE] PETER PARK: Yeah, so the problem-- yeah, there is. The problem is that I think R is just a bad name, because you can't really search for R. You can go to the Bioconductor site, and there's a link back to R, or you can search for the R statistical package. R used to be-- I mean, it used to be hard to install these things. But Robert and his friends have really made this easy, so that you basically click a button and it'll install on your Windows machine. So, all right.
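As a rough sense of what that looks like in practice, here is a hedged sketch of normalizing two-color cDNA arrays with the Bioconductor package limma. The installation route shown (BiocManager) is the current one and postdates this lecture, and the RG object is assumed to have been read in already from scanner output files; treat this as a template under those assumptions, not a recipe for any particular data set.

```r
# Illustrative sketch: two-color (cDNA) array normalization with limma.
# BiocManager is the modern install route; 'files' and 'RG' are assumptions.
install.packages("BiocManager")
BiocManager::install("limma")

library(limma)
# RG <- read.maimages(files, source = "genepix")        # e.g., GenePix .gpr output
MA <- normalizeWithinArrays(RG, method = "printtiploess") # intensity/print-tip bias
MA <- normalizeBetweenArrays(MA, method = "Aquantile")    # make arrays comparable
plotMA(MA, array = 1)                                     # MA plot after normalization

fit <- eBayes(lmFit(MA))                                  # simplest one-group fit
topTable(fit, number = 10)                                # top-ranked genes
```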
MIT HST.512 Genomic Medicine, Spring 2004 -- Lecture 15: Microarray Disease Classification
STEVEN A. GREENBERG: And did you go through the methods used to classify disease in that or-- here and there, OK. Well, that's what I'm focused on in the next two blocks. And I guess, this is a block of four lectures that are going to focus on this area. And this is the Use of Microarrays for Disease Classification. Oh, thank you. And I think it's important to keep it straight in your mind when looking at papers that use microarray technology, or in your own use of it, which of these two very distinct applications you're using it for because there is some overlap between the two. But they are fundamentally distinct uses of the technology. And the two uses are for the understanding of disease pathophysiology. And that's fairly straightforward in a certain way. You look at expression of genes, and a tissue of interest, and a disease of interest, and try and understand what the mechanism of disease is in that tissue. The other approach is simply disease classification, which can be done completely independently of understanding anything about particular genes but is essentially a strictly computational approach to looking at a large sequence of numbers that are generated by an experiment for a particular tissue. And doing this in multiple tissues and trying to classify disease, again, without reference to what the underlying biology is at all. So that's where we're focused. And I think the title of this block is The New Histopathology. And there's a number of papers that have come out over the last couple of years that are representative. Is that going to put anyone to sleep, you think? OK. So there's been a number of approaches to using microarray in this way and particularly in cancer. And I guess, you did hear about the approach in leukemia here. But there's a few different applications within classification. They include things like prediction over here, trying to predict outcome, another predictive one. This one is more of just straight classification. But we're going to talk about these different methods and approaches here. Fundamentally, I think this is similar to what happened when medical scientists started using microscopes to analyze disease. And this is a quote that microarrays have the potential to serve as microscopes to see a comprehensive dynamic molecular picture of a living cell. And so I'm going to carry that analogy through a little bit more. And we'll take a group of diseases that are muscle diseases, which is an area I work in, and look at it's classification. And so before microscopes were invented and used in medicine, physicians did certainly know about muscle diseases and had a certain classification of them. But having the microscope and looking at muscle tissue from patients under the microscope allowed for refinements in that classification. And the way that works is you look at tissue under a microscope, and this is a cross section of muscle. These are muscle fibers, the red one. There's some inflammatory cells here. But we can essentially make a list of the abnormal findings that we see in the muscle and use that to then classify disease. So other cases-- yeah. AUDIENCE: I've got a quick question So is the defining character of the inflammatory cells the fact that there are these little bodies, these little dark-staining bodies inside of them? Or are those the actual-- STEVEN A. GREENBERG: So some of these dark-staining bodies are nuclei. This is a nucleus, here. Muscle is a syncytium. 
So muscle fibers are tubes that have fused myoblasts that join their cytoplasm and have multiple nuclei. And so in this particular picture here, what you're looking at, that's a nucleus. That's a nucleus within a muscle fiber. These are inflammatory cells. They're a little bit larger. They stain a little bit darker. They have other characteristics. Yeah? AUDIENCE: They're also much smaller too. [INAUDIBLE] So what's that surrounding the pink region? STEVEN A. GREENBERG: This here? What this is, this is a degenerating-- a necrotic part of this muscle fiber that these inflammatory cells are invading. I can't tell you which cells they are from this view here. But they're likely to be macrophages and CD8-positive T cells that are attacking muscle. But the reason I use this example is, we can look at a variety of features when we look at muscle under the microscope. We have-- I better turn off my auto-- we have a lot of different features that we see that are characteristic of different diseases. Inflammatory cells are characteristic of inflammatory myopathies. Other myopathies, like muscular dystrophies, have a different pattern. There's no inflammation. There are large muscle fibers. Other diseases, like toxic myopathies have yet another characteristic feature when we look at it pathologically. And so the point I'm trying to make here is that looking under the microscope has allowed us to see different patterns within muscle and to realize that some of the things that were just called muscle diseases in the past are actually two different types of muscle disease. So what you're seeing on the left is a sample from a patient with polymyositis, which is a type of inflammatory muscle disease. And on the right is something called inclusion body myositis, and they're mainly distinguished by the presence of vacuoles that you can see under the microscope there. And they're very different diseases. Polymyositis responds to treatment with immunosuppressive agents, and inclusion body myositis does not. And so the microscope allows for a very meaningful classification of diseases that we were previously unaware of represented subtypes of disease. And we take that for granted now that part of accurate diagnosis for many diseases involves examination of tissue specimens under the microscope. But it's really-- the same analogy goes through with microarrays, and I just wanted to carry that through because we're not yet taking that for granted. So in a way, the microscope lets us look at tissue and enumerate a list of features in the tissue, and they're limited. The sort of things that we can see in diseases of muscle compared to normal muscle is a small list of different things that we might see. Microarrays, similarly, allow us to take a tissue of interest and to measure the expression of genes at the level of messenger RNA for thousands of genes simultaneously and similarly provides us with a quantitative list of expression levels for all these different genes. And so just how we-- similarly, how looking at the pattern of the microscope has helped to classify disease, looking at the pattern of numbers from a microarray experiment is also being used to classify disease. Is that clear? Yes? AUDIENCE: You can also-- so the example you gave with the microscope, if you combine that with what we know about immunological inflammation process, it could also be used in that other branch that you mentioned at the start, the pathophysiology. 
So that's where the analogy is coming from, is that you can-- the microscopic classification of disease, it's more that the microscopes are the direct analogies [INAUDIBLE]? STEVEN A. GREENBERG: Well, so what we see under the microscope also helps us to understand disease as well as to classify it. I mean, going back to this here, this process is completely independent of understanding disease. We don't know what vacuoles are due to in IBM. Nobody really has much idea about why vacuoles form. Similarly with polymyositis, we know there are inflammatory cells. It's believed to be an autoimmune disorder, but we don't really have any antigens. We don't really know how it starts. We don't know if it's antigen-specific, even. But even putting that completely aside, if you just describe features, which are internal nuclei, inflammatory cells, et cetera-- just describing features allows one to distinguish meaningful categories of disease that are clinically important. There are many patients with inclusion body myositis who are being treated with steroids and get labeled as so-called refractory polymyositis before somebody pays sufficient attention to their muscle biopsy slides and realizes they don't have polymyositis. So it's a very clinically relevant area, but it's completely independent of any biological knowledge here. And so that's why I stress this distinction. Certainly, both microscopes and microarrays have applications in both areas. But this one area of disease classification can take place completely independently of understanding mechanism-- that's the analogy. So disease classification-- the area of disease classification actually has a few different areas within it. And I want to formalize that a bit. And they include discovery and diagnostics, but they all have in common the concept of an expression profile representing a signature. So the expression profile is a term we use for this vector of numbers that we get from a microarray experiment on a single tissue. So you take one muscle sample. You extract its RNA. You hybridize it to a microarray overnight, and you get back a list of 10,000 or 20,000 genes and their expression levels. And we can view that list as an ordered vector, and we can compare that to other muscle samples that had a microarray experiment and also gave back a list of 20,000 ordered numbers. And we can ask questions about how similar these two ordered lists of numbers are to each other. Is that clear? OK, we'll go through that more. But the basic idea is that an expression profile is this vector of numbers. And it's the expression profile for a single tissue under a single experiment. And they're often viewed as signatures, as unique to this tissue or this disease. And so we just want to explore that concept of signatures a little bit. As an aside, this is the disease pathophysiology, the other branch. And I'm contrasting a bit back and forth, just to be clear. I don't know if you guys have done this or had lectures about how to identify differentially expressed genes in tissues versus controls. There are fairly simple methods of just comparing ratios or doing statistical analyses. And I'm going to skip that part and get back to classification. So staying with classification now, the first question is to formalize what an expression profile signature is and to ask whether it truly exists in a given situation. There are many papers out there that will say, we looked at an expression profile signature.
But they fail to demonstrate that what they got out of a microarray experiment was a signature of that tissue other than, say, random noise. And so there are some important aspects to a signature. And that analogy goes to that signature we use when we write. And signatures that we use are useful because they're distinct. My signature is different than yours because it's reproducible. Whenever I write my name down and my signature, it's pretty similar in that it's readable in a certain way. That's not always the case with signatures. This is similarly true of what we would call classes. And classes are groups of conditions that fit into one category of some sort. In this context, in the context of using microarrays to classify disease, they are, say, the expression pattern-- the expression profile from 10 patients with a single disease. Does that form a class, those 10 expression profiles together? And classes need to have similar properties as signatures. OK, I don't know why I put that up there for that one. OK, so this signature idea, again, is, that's my signature, and that expression profile would represent the profile of a tissue specimen under microarray experiments. And so the first question is, is it reproducible? And so when you do look at papers in this area and you're interested in data, you need to see that somebody can do the same thing twice on a given tissue and get the same pattern out. I could take bucket of paint and throw it on the wall and call that a signature, but it only is meaningful if I can reproduce that and get the same pattern when I throw the bucket of paint on. So you do need to show that you can, and, certainly, it is possible to do it. This is data from our experiments. But you should see this type of data when you look at papers that people have reproduced the profile they get out, done two experiments on their tissue sample and gotten the same pattern of numbers out each time. It needs to be clear that this is distinct, that the signature from, say, one disease is different from another. And there are ways to do that. And then the question of readability comes out. And that's, how do we read a signature? So if I do an experiment, I have these vectors of numbers. And I'm going to do this for 10 different tissues and compare and ask, how are these vectors similar to each other? We have to define some way of comparing vectors to each other and judging their similarity. These are 20,000-dimensional vectors here. And there's a number of different measures that are used to look at similarity of these profiles. And they include use of correlation coefficient-- so just taking the Pearson correlation coefficient of these two vectors. And that's what I did here. This is just the Pearson correlation coefficient of one set of microarray numbers against another. Euclidean distance-- mutual information has just about dropped off the scene at this point as a use in this field. But there are a number of similarity measures, and this will change how one reads these profiles and compares them. And so the same principles that apply to signatures also apply to classes. If we define classes, the classes also have to have the same features. And the way we look at distinctness of classes are mostly through clustering methods. So clustering methods, which you may have had some exposure to here, what clustering methods are a way of organizing tissue samples by similarity into classes. 
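Here is a minimal R sketch of those similarity measures applied to expression profiles. The two profiles are simulated stand-ins for a sample and its replicate, so the numbers only illustrate the mechanics, not any real reproducibility result.

```r
# Illustrative sketch: two ways of "reading" the similarity of expression
# profiles. profile_a and profile_b are made-up vectors standing in for two
# microarray experiments over the same set of genes.
set.seed(3)
profile_a <- rnorm(20000, mean = 8, sd = 2)          # log-scale expression values
profile_b <- profile_a + rnorm(20000, sd = 0.5)      # a noisy replicate

cor(profile_a, profile_b)                            # Pearson correlation (near 1 if reproducible)
cor(profile_a, profile_b, method = "spearman")       # rank-based alternative
sqrt(sum((profile_a - profile_b)^2))                 # Euclidean distance

# For many profiles at once: samples in the columns of a matrix
X <- cbind(profile_a, profile_b, rnorm(20000, 8, 2)) # third, unrelated profile
cor(X)                                               # all pairwise correlations
as.matrix(dist(t(X)))                                # all pairwise Euclidean distances
```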
And they differ substantially in terms of the way that they create a organizational class structure or classification. So hierarchical classification is used quite a lot, and these are trees, basically, of relationships. They're not actually true classes in the sense that they're distinct. Everything is related to everything else under hierarchical clustering. But the degree of relationship is just greater for profiles that get classified together on the same tree, the same node. There's different types of hierarchical classes. There's non-hierarchical classification. And so the point here is to emphasize there's a variety of mathematical methods that are still used in this field to try and define distinctness of classes. Reproducibility-- so again, the signature issue was distinctness, reproducibility, and readability. Reproducibility of classes is a big issue here, that when you see papers which will look to say we have a certain classification. And like the leukemia one, we have ALL and AML, separable into two classes. It's important to ask whether this is a reproducible classification system, that if you do this under some slightly perturbed way, whether you end up with the same classification. And this is a somewhat neglected area still within this field of demonstrating reproducibility of classes. Readability-- we mentioned the different similarity measures that are used to read out expression profiles. This is just the correlation coefficient. OK, so we're going to go into a little bit more detail about a hierarchical clustering because it's such a commonly used technique now and how tissues are organized by similarity. And the basic method for this is to-- so again, our basic data set consists of, let's say, 20 different tissue samples and the expression profile for each one-- so 10,000 genes measured for each of 20 different profiles. And we're going to try and organize these 20 different tissue samples into groups or some structure based on their similarity of their expression profiles. And so the way hierarchical clustering works is to first look at all pairwise correlation coefficients of these 20 samples. So sample number 1 and sample number 4 have a certain correlation with each other. And 1 and 5 have a certain correlation with them. And we're just going to calculate all correlation coefficients. That's the first step. And then for the single highest correlation coefficient-- so the two samples that are most correlated, it might be sample 8 and sample 15-- we're going to define those as being together, as close together and being two leaves, basically, on the end of a branch of a tree. And then we're going to repeat this for the remaining samples and build a tree up, basically, from the leaves. This is one type of hierarchical clustering. So here, there's scatterplot diagrams for three of the pairs within a data set that are shown. But you do this for all pairs here. And then you start-- let's say, sample 1 and 2 were the most correlated from the whole sample. You join them together as leaves. And then, let's say, the next ones with the highest correlation coefficient are sample 7 and 5. They get joined together. And then the next highest correlation coefficient might be the average coefficient of the S1/S2 group and the 75, so they get joined together. So this tree gets built up. And I guess, I'm assuming you've seen some of these pictures of hierarchical clustering diagrams. Have you seen any pictures? OK, well, maybe I'll show you one. 
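Before the pictures, here is a small R sketch of that procedure on a made-up genes-by-samples matrix: compute all pairwise sample correlations, turn them into a distance, and let average-linkage clustering join the most similar samples first.

```r
# Illustrative sketch: hierarchical clustering of samples by correlation,
# roughly the procedure described above. 'expr' (genes x samples) is simulated.
set.seed(4)
expr <- matrix(rnorm(1000 * 20), nrow = 1000,
               dimnames = list(NULL, paste0("S", 1:20)))

sample_cor  <- cor(expr)                 # all pairwise correlations between samples
sample_dist <- as.dist(1 - sample_cor)   # high correlation -> small distance

tree <- hclust(sample_dist, method = "average")  # join most-similar pairs first
plot(tree, xlab = "", sub = "")                  # dendrogram of the 20 samples

groups <- cutree(tree, k = 3)            # cut the tree into 3 candidate classes
table(groups)
```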
Another way to look at this process is to switch to colors. And this is a pretty common visual representation in this field of how to represent these data sets. So what I have here is, my columns are each a different tissue specimen. So they're different, let's say, muscle biopsy specimens from patients. And for each one, we did a microarray experiment. So there were seven microarray experiments done. And we measured 12,000 genes and their expression here. And we put this into a table and then color code it according to some color scale so that the greens represent the lowest expression levels and the reds, the highest. And so we end up in this color diagram here. And so the process of them classifying our tissues is a process whereby we try and find specimens that have similar colors at each gene. And so classification involves shuffling these columns. One can also classify rows. But for the purpose of classifying disease, we'll shuffle the columns accordingly. And when you do that using hierarchical clustering or other algorithms to do that, this contrived example, there were three different classes represented in here. And that's what disease classification is using microarrays. One can also shuffle the rows and classify the gene or cluster the genes here. What that tends to do is to bring genes together of similar function within the rows so that genes that are behaving similarly across this data set of tissues will tend to get grouped together. And that's a different-- there's things you can do with that. You can infer gene function and other things. But it's not what we're focused on. Yes? AUDIENCE: I got a quick question. So this is just a representation of a microarray experiment? STEVEN A. GREENBERG: Of a group, a data set made up of seven microarray experiments. AUDIENCE: Right. And when you're choosing your tissue specimens, the seven that you're looking at, are those the actual, I guess, specimens from tissues that you suspect for now has a particular clinical phenotype of a disease? Or do you throw in your control tissue specimens within this, basically, to demonstrate that there isn't? STEVEN A. GREENBERG: It depends on what you want to study, what you want to demonstrate. And so there certainly are-- in the diseases I study, there are patients whose muscle biopsy looks normal under the microscope, even though they have a muscle disease. And so that's of interest to us then to do a classification experiment here, where we take that data from, say, five patients with a disease, but they have normal muscle biopsies, and five patients with normal muscle biopsies who don't have a disease, and to shuffle them, and see if they fall into two classes properly. And they do. I'll show you an example of that. So in that way, we can demonstrate that, sometimes, microarrays can see things that we don't see under the microscope. But it strictly depends on a particular application. The principles are what we're focused on here. AUDIENCE: And when you're doing these samples, are you doing them more in the sense that you're organizing these according to either row or column before you know what's going on? STEVEN A. GREENBERG: Yeah, they're being done strictly through, say, an algorithm like this, hierarchical clustering, where you're just looking at the columns of numbers, and you're saying, this vector is similar to this vector. Let's put them together. This other vector is the next similar most. Let's put that together. So it's completely independent of the labels. 
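Returning to the color-matrix view described above, this sketch draws the green/black/red heatmap with the columns reordered by the sample dendrogram. It reuses the simulated expr matrix and tree from the clustering sketch earlier, so it is illustrative only.

```r
# Illustrative sketch: the color-matrix view with samples (columns) reordered
# by hierarchical clustering. 'expr' and 'tree' come from the previous sketch;
# a real analysis would use measured expression data.
green_black_red <- colorRampPalette(c("green", "black", "red"))(64)

heatmap(expr[1:200, ],                  # a subset of genes, for a readable plot
        col    = green_black_red,
        scale  = "row",                 # color each gene relative to its own mean
        Colv   = as.dendrogram(tree),   # reuse the sample dendrogram for columns
        labRow = NA)                    # rows (genes) are clustered by default
```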
So a lot of people think of this field as supervised or-- so this particular application is unsupervised classification. Have those terms come up, supervised versus un? It's the way a lot of the artificial intelligence and machine learning community thinks about this area. I don't like to think about this area in that way because that's secondary whether a method is unsupervised or supervised. That hasn't helped. That's confused me so many times that I like to think about it in the way that I'm presenting it to you. So this is disease classification. So there is a group of distinct methods that we call class methods that are applicable for a number of different types of applications within disease classification. And the three methods are class comparison, class prediction, and class discovery. And so class comparison is an area in which we are simply describing the different classes that might exist within one of these data sets that we put together from-- let's take, for an example, 50 samples and a microarray experiment for each of those, and consider that one data set. And so class comparison is a exploratory method to just compare and contrast whether there are classes within this data set, what the differences are. There's a variety of computational tools that are used, and these include cluster analysis, the so-called supervised learning, fold analysis. But I'm focused more on not the computational tool that's being used, whether we're doing unsupervised or supervised classification, but the broader concept about what we're trying to do. And so one group of applications, we're just trying to explore the data, and its class structure. And that's called comparison. Prediction is a whole different area. And prediction is a method by which we want to predict the class of a new sample that's given to us based on our prior data. So that involves using data that we have and constructing some function, essentially, or some method based on that data set so that if we encounter a new sample, that we can then predict something about that new sample based on the prior data that we had. That's prediction-- very different than class comparison, although some of them use similar methods. So the tools that might be used to do class prediction can also be used to do class comparison, some of them. Class discovery is yet a third application. And class discovery is focused on discovering new disease subtypes. So it's most analogous to what I started to talk with in terms of looking at things we learned when we looked under the microscope, initially, about muscle disease. We learned that there are different types of diseases that truly are subtypes of muscle disease that we didn't know about beforehand. And so class discovery is focused on looking at microarray data from tissue samples and trying to understand if there might be some subtype that's defined by the microarray data that we weren't aware of beforehand. Such a subtype has to have some meaning to it. It can't just be that we found, looking at these very undetermined data sets, that we found different classes of these vectors. It has to translate to something meaningful. So one has to define classes and then go back and say whether this class actually means something, whether these patients who fell into this class are really different than the other patients, and people just never noticed that before, all right? So going through those, class comparison, again, is used to establish distinctness of classes and just compare and contrast. 
There's no direct medical applications of this tool. There are indirect applications. It can be useful for generating ideas about classification and pathophysiology. And there's a variety of computational tools that can be used to compare classes here. This is an example of class comparison where we looked at 45 muscle biopsy tissue specimens from four different groups. We had normals. We had patients with Duchenne muscular dystrophy, nemaline myopathies, and then a group of inflammatory myopathies, of which there's a variety of subtypes. And one of the tools for class comparison is hierarchical clustering, which we did. And when we used hierarchical clustering applied to this data set using one of these color schemes, this is the raw data here in this column. And again, this is 45 columns here. And this is just a portion. Maybe this is 3,000 or 4,000 genes that are represented in rows here. And you just look at it, and you can see streaks of green or red going down, suggesting that there are certain classes here, that certain tissues seem to have, say, low expression levels for these genes and high expression levels for these genes. This, here, is a blow-up diagram of this particular section here. And it has the labels. And so the hierarchical tree here has four branches, essentially, here. This branch is all of the normal specimens. So all the normal specimens got classified correctly here. These are all the inflammatory myopathies. These are mostly patients from a third group, nemaline. And these are mostly patients with Duchenne muscular dystrophy. But this is class comparison. We just applied hierarchical clustering to this data set and found classes here that we're interested in just looking at. We knew about these classes already. Yeah? AUDIENCE: The reason that this is not a direct point of application is because you're not going in and trying to-- I'm trying to make connections between these different patterns that have allowed you to classify these or make these different-- STEVEN A. GREENBERG: Well, it has indirect applications. But by direct, I mean, very direct. So that if I have a new patient, and I do a microarray experiment on it, can I make any predictions of that based on this approach that I've used so far? Or have I discovered any new classes of disease here amongst these groups? Those would be fairly direct applications. There are certainly indirect applications here. So there are things that come up. Like, the only misclassification for the inflammatory myopathies was this one, one patient with an inflammatory myopathy. This one, who had this disease called IBM, got classified in this group instead of this group. And it turns out that this one was really different than all of the others and that it was a patient who had a familial form of IBM. We don't know what genes are involved, if any. And we don't know what this is. It's very rare for familial instances of IBM. But this particular patient had familial IBM. So this might tell me that there really is something different about that disease because under the microscope, we don't see any difference in that patient compared to the other patients with inflammatory myopathies. So it helps you generate hypotheses and ideas. AUDIENCE: I guess, I'm just confused because, in this particular example that you've given us, it started as something that you knew [INAUDIBLE] your data set. So there was no new identification of the disease-- STEVEN A. GREENBERG: Correct, right. 
AUDIENCE: But given a set of tissue samples that were from patients that had diseases that affected something else, that you didn't know about, this [INAUDIBLE], it seems like you could have this classification of a new disease. STEVEN A. GREENBERG: You can use this tool to do that, but you have to add in a few other things. Yeah, so I mean, this tool of hierarchical clustering does cut across the three different major class methods-- exploration, prediction, and discovery. But you have to add more to it. So I didn't mean to confuse you by saying it that way. But you do want to keep the tools separate from the method, the overall approach that you're doing. If you're just exploring data, there's a bunch of tools. If you're trying to make predictions, diagnostic or prognostic predictions for new patients, it's an overlapping set of tools but a totally different approach or a different procedure. So yeah, let's skip this one. And that one too. OK, so class discovery, then, is the next area of those three methods that I outlined. And this one uses expression data to discover previously unrecognized but clinically relevant disease subtypes. It also doesn't have direct medical applications. And by direct, I mean things that would immediately be applicable to patients-- and I mean, patients who you might see in the office. And maybe I should revise that one here. I haven't over the last year, but maybe. But certainly, indirectly, it does advance the field because it improves our ability to recognize diagnostic and potentially prognostic subtypes and potentially treat patients differently based on this subtype of disease that they may have that we didn't previously know about. And so the basic method of class discovery often starts with cluster analysis to try and define classes. And it can be hierarchical or K-means-- any variety of approaches to defining classes here. But after that, one explores the phenotypic variations within the defined classes for a number of different types of phenotypic variables. So you take 50 patients with, let's say, a type of cancer, who you think all have the same type of cancer. Our best diagnostic classification system now, of looking at history, and physical exam, and doing tests, says that all these patients are indistinguishable. And you do microarray experiments, classify the data, and then you'll certainly get out classes-- whether they're real or meaningful doesn't matter. The algorithms are guaranteed to spit out classes for you. And then you want to go back and ask, well, is there something different about this class compared to that class that's clinically meaningful? And one thing might be time to some endpoint, such as survival. Do patients in one class, based on their expression profiles, live longer than patients in a different class? Or any other endpoint-- do they not respond to treatment? So I go back and I see there were 20 patients in this class and 10 in another class and say, how did the patients in this class respond to treatment compared to the patients in this class? And this is actually a fairly powerful method to discover new disease subtypes that we don't know about. Yeah, I think it's an excellent method to use. It certainly has been successful in a variety of areas, and cancer for sure. Its application to other diseases is lagging behind that of cancer. There's a few examples where it's been done. I think I was planning on going through a couple. I don't know if we need to. Do you want me to go through an example?
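Before the example, here is a bare-bones sketch of that class-discovery step in R: define candidate classes by clustering, then ask whether the classes differ on an endpoint such as survival. The expression matrix and survival times are simulated, and the survival times are deliberately generated to depend on the cluster label so that the log-rank comparison has something to find.

```r
# Illustrative sketch of class discovery: cluster the samples, then test whether
# the resulting classes differ on a clinical endpoint. All data are simulated.
library(survival)

set.seed(5)
n    <- 50
expr <- matrix(rnorm(5000 * n), nrow = 5000)          # genes x patients
grp  <- cutree(hclust(as.dist(1 - cor(expr)), "average"), k = 2)

time  <- rexp(n, rate = ifelse(grp == 1, 0.05, 0.15)) # simulated survival times
event <- rbinom(n, 1, 0.8)                            # 1 = died, 0 = censored

fit <- survfit(Surv(time, event) ~ grp)               # Kaplan-Meier curve per class
plot(fit, lty = 1:2, xlab = "Years", ylab = "Probability of survival")
survdiff(Surv(time, event) ~ grp)                     # log-rank test between classes
```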
This is a lymphoma paper as an example. OK. In this paper, the investigators had 96 patients, and they knew about nine classes ahead of time. And these are all patients who have different types of lymphoma-- 1, 2, 3, 4, 5, 6, 7, 8, 9. Actually, there are eight lymphoma classes and one normal class here. And they used lymph node tissue from patients who have lymphoma and from some normals. They did microarray experiments. So they did 96 microarray experiments, measured something like 4,000 or so genes for each of 96 patients, and did clustering of the data, hierarchical clustering, and then, for purposes of presentation, colored things in for us here. So this is the dendrogram structure that they obtained. And they've nicely colored in the picture so that all patients who had one type of lymphoma, CLL, are yellow. That's this group of patients. And the patients with this type of lymphoma, diffuse large B-cell lymphoma, are coded in purples. I think one or two-- one of them ended up out here. One is here. One is here. Most of them are in this section of the tree with a couple of other diseases mixed in. So what would you call this, at this point, of the class methods? Exploration, discovery, or prediction? Comparison, right. So this is class comparison they're just looking at here. But they go on, and they focus on this structure here for this disease. So looking at the patients with diffuse large B-cell lymphoma, they noticed two different branches at this level. So there's this group of patients starting from this one here out to here. And then there's another group starting from here out to here, OK? And so then they did class discovery. So they asked, that's curious, why do I have two different branches here? And it may just be an artifact of the algorithm. In fact, again, hierarchical clustering doesn't really give you classes. Everything is continually divided into a binary tree structure. And so it's always going to split things down, and you have to decide at what level you want to look at similarity. But they asked the question. So they then went and said, well, what about survival of these two groups? Do the patients in this group here have a different survival than patients in this group? And so let's make sure that I have that. So what they show here is that the diffuse-- so I've lost the indicator. Hang on one second. Let me make sure that I know what I'm talking about. Yeah, that's it. Yeah, right. So they found, if you looked at survival based-- so let's ignore this one on the left and look at these two here. So survival and decisions about chemotherapy are based largely on something called the International Prognostic Index in this disease. And that's a measure of clinical risk. And they found that-- actually, let's do this one first. So they did look at survival for the two groups and found that they did have very different survival here, that one group had a much better survival expectancy than the other group. They then further looked at the International Prognostic Index and how it grouped patients. And according to that scheme, there is a group of low and a group of high clinical risk patients. And the high ones will generally get more aggressive treatment than the low ones. And so they looked at this low-risk group according to their microarray patterns and asked, how did they fall into these two different groups that we found? And they found that a number of those patients-- 14 of them-- were in the better prognostic group of the two they had defined. And another 10 were in the worse one.
And so they were able to refine and say that this group that's previously been called low clinical risk based on the international prognostic index actually has two subgroups within it. And one subgroup has a pretty good prognosis, and the other has quite a bit more aggressive disease. And they averaged out to this, right here. But the microarray data suggests that these patients are at greater risk and should probably be treated more aggressively based on their microarray data generate expression profile. AUDIENCE: [INAUDIBLE] STEVEN A. GREENBERG: That takes age, and gender, and stage of disease at presentation, whether it's just in the lymph node, whether it's metastatic. There's a few other variables in there. And I'm not an oncologist, so I'm not sure. But there are clinical variables. There are patterns under the microscope as well. But it's the best that medicine has. And the microarray work suggests that we could do better. And that's not an unreasonable thing to suspect in the first place, particularly in cancer, where, if we can generate 20,000 numbers that represent physiology-- and that's the other thing is, the microscope really doesn't show physiology. It shows anatomy there. The microarray is-- and that's what the quote said at the beginning-- a physiological microscope looking at living processes. Yes? AUDIENCE: I've got a question. I get the impression that there are a couple of marks there. When I look at the y-axis and I see probability, could you comment on what that measure actually is? STEVEN A. GREENBERG: That's the probability of being alive at a given year, at a given period. So at four years on the x-axis, you have a 20% probability. Oh, no-- and this curve being alive, yeah. These are actually censored data here. And that's a complex thing about data representation. I didn't want to get into that, except to the point to take out of this, I think, is that this has enormous potential for disease discovery of subtypes, disease subtypes. And again, particularly in cancer, where the physiology of these tumors, what genes are turned on is probably very much related to the disease course. Do these have metastatic potential? Are they growing rapidly or slowly based on the expression of genes in the tumor? Yeah, that was the point about adding to the IPI. OK, I won't do another example of class discovery. There's plenty out there right now in a variety of different areas. OK. And there's class prediction. So class prediction is a method of using expression data to build a model that will then predict class assignment of a new sample presented to that model. This one has potential very direct medical applications. If one can build such a model for breast cancer, then when you see a new patient with breast cancer and you do a microarray experiment on the cancer tissue, and apply it to the model, and the model spits out some prediction-- high risk of death within the next year. That will help you to decide how to treat the patient, if it's accurate. This can be used to establish a diagnosis, to make predictions of outcome is one of them. But it could be things like predicting response to a medication. This patient is more likely to respond to this medication as opposed to that one, based on their expression profile. Indirectly, it does tell us about disease pathophysiology, to some extent. So this method is a bit more involved, but it's pretty standard in the way people do this now. 
And so the approach is to take a data set that's sufficiently large-- let's say, 100 patients with breast cancer-- with microarray experiments on each of the 100 cancer specimens for 20,000 genes, and that's the data set. So the first step that's usually taken is, from within that data, to choose a gene set that will discriminate amongst classes. So in this approach, one decides ahead of time what one hopes to predict. So let's say one wants to predict good outcome. A good outcome might be survival five years from now without metastases-- that is, disease-free survival at five years. And that's a good outcome. If patients have that, we call that good. If they don't, that's bad. And so we're going to try and build a model based on our data set to predict good versus bad outcome. So this will be a binary predictor. We're trying to predict one outcome or the other. One can build predictors that are not binary, that put people into one of three or four classes, and so forth. But this is the simplest type, binary prediction. So instead of using 20,000 genes, we're going to whittle them down to a more representative set of genes that are more meaningful for building this model, and that's called the gene set. Next, we have to construct or choose a prediction function that, when applied to a new expression profile, will produce a real number. So this prediction function is going to be a mathematical function. Take this times that, add this, square it, subtract something else. And it's going to spit out a single number-- 8, 8.5, 6.2. And then we need to choose a prediction rule that will classify a sample based on the output of the prediction function after application. So we take our new sample, our new vector, we feed it to the function. We get out 8.5, and we decide, if the number is greater than 5, it's good. If it's less than 5, it's bad. And then the last step is to validate this model and its application. And that's class prediction. So going through it in a little bit of detail, we start with our data set on the left. We have columns. So we have 20,000 rows, whatever number of rows here-- I guess 7,000-- some number of rows here. And we have our columns, which represent individual expression profiles from different tissue samples in patients with breast cancer. We figure out some discriminating set-- and there's a variety of methods to do that. We'll go through a couple. And we decide, well, we're not going to use all 20,000 genes but maybe just 500 of them. And we pick those genes. And then we construct our prediction function. And the prediction function looks like that. So if I feed it a vector, we're going to take the expression level for gene 8 and multiply by 2, and the expression level for gene 33, and square it, and do all these other things. And the prediction function is going to give us a number, like 8.5. And then we need a rule. And one type of rule is a simple threshold rule. If it's less than 10, it's in one class. Greater than 10, it's in another class. All right? Make sense? Now, there are plenty of options for building these models along each step. And there are papers using all of these different options to build these models. It's still a rather ad hoc approach to model building. One approach is to cluster the genes-- so to do class exploration-- cluster the genes and the columns. And look at it and say, there's a lot of red for these genes in one class and a lot of green over here. I think this set of genes is important.
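One way those three ingredients might look in code is sketched below, on simulated data. This is a generic mean-difference gene filter with a weighted-sum score and a midpoint threshold, not the exact scheme from any published predictor; the object names and the choice of 100 genes are arbitrary.

```r
# Illustrative sketch of the three ingredients: a discriminating gene set,
# a prediction function, and a threshold rule. All data are simulated.
set.seed(6)
n_train <- 100; p <- 20000
X <- matrix(rnorm(n_train * p), nrow = n_train)     # patients x genes
y <- rep(c("good", "bad"), each = n_train / 2)      # known 5-year outcome

# 1. Discriminating gene set: largest mean difference between the two classes
score    <- abs(colMeans(X[y == "good", ]) - colMeans(X[y == "bad", ]))
gene_set <- order(score, decreasing = TRUE)[1:100]

# 2. Prediction function: a weighted sum over the selected genes
w <- colMeans(X[y == "good", gene_set]) - colMeans(X[y == "bad", gene_set])
predict_score <- function(profile) sum(w * profile[gene_set])

# 3. Prediction rule: threshold halfway between the two class means
cutoff <- mean(c(mean(apply(X[y == "good", ], 1, predict_score)),
                 mean(apply(X[y == "bad",  ], 1, predict_score))))
predict_class <- function(profile) {
  if (predict_score(profile) > cutoff) "good" else "bad"
}

new_patient <- rnorm(p)                             # a new expression profile
predict_class(new_patient)
```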
I'm going to use that in my discriminating gene set. That's a pretty successful way of doing it, actually. Another method is something called correlation to ideal outcome. There are other principled methods for doing this, for finding the discriminating gene set. This is one where, from my data set, if I was trying to build some of predictive model, this is the clustering of both the muscle samples and the genes going that way. And I'm looking at IBM and polymyositis, and I see a lot of red and a whole cluster of genes that are overexpressed in those diseases compared to others. The genes have a lot of similar function. They're immunoglobulin-related genes, mostly. And so if I was trying to build a model that would predict a new sample as either being within this group versus something else, I'd probably use these genes in the discriminating gene set. That would be one way to do it. Same thing here, and one can do that. Another option is what's called correlation to ideal outcome. And this one is used fairly often and seems like a pretty good method. And the way this works is, we have our samples, and we have our genes. And we will first organize our samples into the classes that we want them to be in, the good versus the bad. So take all the breast cancer patients who had a good outcome at five years, and we'll put them in one group, and the ones that had a bad outcome in the other. Then we'll create this ideal vector in which ones represent a good class, and zero, the bad class. And we'll look at pairwise correlation coefficients for all of our genes compared to this ideal vector. So we essentially-- correlation coefficients test linear direction of correlation. And so we'll be asking, which genes behave like this vector? Which genes are up in this class and then change and down in these other class? And that's how we'll find our gene set. AUDIENCE: [INAUDIBLE] question of 1 or negative 1 [INAUDIBLE]?? STEVEN A. GREENBERG: It doesn't matter what you use here. You just use two different numbers. And you can do the math. I mean, linear correlation coefficients are invariant to stretch transformations. So I don't think 1 or 0-- well, 1 or-- AUDIENCE: [INAUDIBLE] in the other case. So if you separately tiering down, you could also [INAUDIBLE] exactly how it [INAUDIBLE].. STEVEN A. GREENBERG: Right. You can use any ideal vector as long as it's one numbers represent one class, and a different number represent the others. Yeah? AUDIENCE: In the beginning of the lecture, you mentioned the binary classification method. Is there any way to expand it to fuzzy logic or multiples? In your own sample, it tests several of the test [INAUDIBLE] STEVEN A. GREENBERG: Yes. You certainly-- it just matters how you build the predictor function. So at this step for discriminating gene set, this could certainly be extended. This particular method of finding an optimal gene set, correlation to ideal outcome, could be established for any number of groupings of classes. You could use 2, 1, 0, different numbers in this ideal vector. And you will end up with a discriminating gene set, a set of genes that do have some differences in expression across the three classes. But I don't know. I'm sure that people have applied fuzzy logic. People have applied just about every mathematical method available to doing classification and model building. But I don't know. If that's something you're interested in, you might want to look at that and see what people have done. 
So but this basic method is to correlate all of our genes to this ideal vector to compute for each one the correlation coefficient with our ideal vector. And I listed fictitious examples there. And we see that, say, for gene 2, it has a very high correlation coefficient with the vector. So we might want that to be part of our gene-discriminating set to build this model. So we might take the highest 100 genes out of 10,000 here, the ones with the highest correlation coefficients, and use that as our discriminating gene set to build this model. Let's say, these here, OK? OK, so then we want to make a prediction function once we have a discriminating gene set. And again, lots of ways to do this. How to compute a function that will give us back a number that differs significantly between the two classes, like that function. And then lastly, a prediction rule. The rule is often done as a threshold, saying, like that example there. If it's greater than or less than one number, it's in a different class. There are certainly prediction rules that can be ambiguous as well as deterministic ones. So you can have a prediction rule that says, if it's greater than 10, it's in one class. If it's less than 5, it's in another class. And between 5 and 10, I can't decide. And the choice of the rule is the classic trade-off of sensitivity versus specificity, that once you build your model and your prediction function, you can then arbitrarily vary your rule to optimize sensitivity and specificity. Depending on, let's say, a threshold, depending on where you put this threshold, you might get all of them correct in one class, but there are some from that class that were in the other class incorrectly. Or you might move it down some other way, and then you capture everybody who belongs in class 1, but there's a few people from class 2 who are also being classified in class 1. Everyone OK with that? And that's an example here, without going into too many details, of classification, that the black is one type of sample, the white is another. They built a model. They used a discriminator gene set, the function. And then depending on whether they put their threshold here or here, if you put it down here, you get more of the blacks into this classification, but you pick up a couple of extra whites here, which don't belong. The blacks belong up here, and the whites belong here. And so depending on where you move this line, up here, you get different values for sensitivity and specificity of your predictive model. OK. Now, the last part of this for class prediction is validation. And this is a very important part. So the question is, why do validation? And that's my only fun slide there. And that's because of overfitting. So the problem is, let's say, you have a data set here. And I want to build a prediction function which will tell me what to do with new data. So I could draw a straight line through it. And now it's-- I'm going to use a simple model like that. Now if you give me an x value here, I'm going to predict the y value using the function, and that's right there. Now, let's say, instead, I start with this data set, and I use a very complicated model instead of a straight line-- one that works perfectly, in fact. Every data point ends up on the 100% correctly predicted by this function. But this is some complex function. This is more than a cubic here, but that's what I put up as an example. 
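Below is a short sketch of that correlation-to-ideal-vector selection, plus the threshold sweep, again on simulated data with 50 artificially informative genes planted so the sensitivity/specificity trade-off is visible. The cutoffs tried are arbitrary quantiles of the score, purely for illustration.

```r
# Illustrative sketch: "correlation to ideal outcome" gene selection and the
# sensitivity/specificity trade-off of a threshold rule. Data are simulated.
set.seed(7)
n <- 60; p <- 10000
X <- matrix(rnorm(n * p), nrow = n)              # patients x genes
y <- rep(c(1, 0), each = n / 2)                  # ideal vector: 1 = good, 0 = bad
X[, 1:50] <- X[, 1:50] + outer(y, rep(1.5, 50))  # plant 50 truly informative genes

# Correlate every gene with the ideal vector; keep the strongest 100
r        <- as.vector(cor(X, y))
gene_set <- order(abs(r), decreasing = TRUE)[1:100]

# Simple prediction function: correlation-weighted sum of the selected genes
scores <- X[, gene_set] %*% r[gene_set]

# Sweep the threshold and watch sensitivity and specificity trade off
for (cut in quantile(scores, c(0.3, 0.5, 0.7))) {
  pred <- as.numeric(scores > cut)
  sens <- sum(pred == 1 & y == 1) / sum(y == 1)
  spec <- sum(pred == 0 & y == 0) / sum(y == 0)
  cat(sprintf("cutoff %.2f: sensitivity %.2f, specificity %.2f\n", cut, sens, spec))
}
```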
So if you get a new point now and the x-coordinate and you ask for a prediction of y, you might get something down here from this function, which probably is not the best match. There's no one really to say what the best match is here, but it's probably not the best match. And this is the issue that continues to plague model building in this field. And that's the overfitting because what these models are doing are taking 100 samples, let's say, 10,000 genes for each, and picking out of these 10,000 genes a set of genes and a predictive function that correctly classifies this set of 100 samples. So you essentially have 10,000 or 20,000 variables to use to define 100 samples into two classes accurately. And it's a highly underdetermined data set. If you allow yourself the freedom to choose any one over number of ways to pick the discriminator gene set to build the function, one can take random noise and build a perfect predictive function that will classify it. And so that's where validation comes in. Validation is an approach in which one has built the model and then needs to test it before deciding, I'm going to use this in patients to decide how to correctly predict new samples. And the correct way to test it is to, once you have your model, is to then look at your next 100 patients who come in, and do their gene expressions, make a prediction, wait five years, and see who is really in the good versus the bad outcome. And that's the way this will need to be done. But in the meantime, nobody really has time to wait five years or to do this on another 100 samples of patients and to look that far ahead. Although I suspect people are starting to take that perspective in this. So what people have usually done is they go back to the original data set they used to build the model and validate the model on that data set, sort of. Sometimes, this is done through the combination of a training set and a validation set. So you had 100 samples. So why don't we just build the model on 66 of the samples and save the other 33 to test it? And that's fine to do that. So people build the model based on two thirds of the data and call that the training set. And then the other third is used to validate the model and test its accuracy. Another approach when people don't have a separate validation set and use all specimens to create their model is called cross-validation. And that's a way of validating the model on the original data set that was used to construct it. And there is a potential way to do this OK. It's inherently got problems with it. The most common method is leave-one-out cross-validation. So in that method, you have 100 samples. First, you remove one sample. So now you have 99 samples. You build the model based on those 99 samples, and then you test it on this one sample that's left. And you see whether it makes the correct prediction. Then you do it again. You take another sample and remove it, build another model based on those 99 samples, test its prediction for that sample. And you do that 100 times, and you ask, what is the accuracy of this approach? And did it accurately predict all 100 times correctly the class or not for the sample? So that's validation options. So that's the leave-one-out cross-validation construct predictor. Apply predictor to one left-out sample, repeat for each sample, calculate an error rate as present misclassifications. So the way that cross-validation works is, again, the underline there is construct predictor using all data except one. 
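Continuing the simulated data from the previous sketch, here is what a leave-one-out loop might look like. Note that step one, choosing the gene set, happens inside the loop using only the training samples each time, which anticipates the pitfall discussed next.

```r
# Illustrative sketch of leave-one-out cross-validation for the predictor above.
# Key point: the gene set is re-selected inside the loop, from training data only.
loo_predictions <- character(n)
for (i in seq_len(n)) {
  X_train <- X[-i, ]; y_train <- y[-i]

  # Step 1, repeated every time: choose the gene set from the training data only
  r_train  <- as.vector(cor(X_train, y_train))
  gene_set <- order(abs(r_train), decreasing = TRUE)[1:100]

  # Steps 2 and 3: prediction function and threshold from the training data only
  train_scores <- X_train[, gene_set] %*% r_train[gene_set]
  cutoff <- mean(c(mean(train_scores[y_train == 1]),
                   mean(train_scores[y_train == 0])))

  test_score <- sum(X[i, gene_set] * r_train[gene_set])
  loo_predictions[i] <- if (test_score > cutoff) "good" else "bad"
}
mean(loo_predictions == ifelse(y == 1, "good", "bad"))   # cross-validated accuracy
```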
If we go back to the method we use to construct a predictor: from the data, we choose a gene set, we choose a prediction function, and we choose a prediction rule, right? So a common problem in this area is that people don't do this. They skip step one. So they don't go back and rechoose a new gene set each time they apply cross-validation. If you think about it, you have to choose a new gene set, and you might get a different set of genes each time. And that's a problem. If you're trying to build a model and say that these are the genes that are important in my model, I'm going to use them for new patients, you can't really test that hypothesis using this approach. And so people have looked at simulations of doing incorrect cross-validation-- of not repeating that first step. And essentially, the simulations show that you can build predictors that appear 100% accurate, even in a data set that's completely random. All right, now, I want to wrap this up. But if you repeat the process correctly, by reselecting the informative gene set each time you leave one out, then the predictor's accuracy is no better than chance, as expected in a random data set. So it makes a huge difference here. So many papers have incorrect cross-validation in them. That paper on breast cancer in Nature, and its follow-up in The New England Journal of Medicine as well, reported an accuracy of 73% for its predictor. They built a predictor to predict good versus bad outcome and reported a 73% accuracy. This is based on leave-one-out cross-validation, but they did not recalculate the informative gene set. If one does that, the corrected accuracy is actually only 59%. It's not much better than flipping a coin. Other papers-- even the ALL/AML paper-- well, that's not correct. They didn't do incorrect cross-validation, but they had a classifier which was ambiguous. That's another issue in reporting accuracy: they did not use a classifier that said yes or no. There was a gray zone. And so they reported the accuracy as 36 out of 38, but the other two were uncertain. Similarly with another paper, on medulloblastoma, in terms of using ambiguous classifiers, reporting an accuracy of 72%-- but it's not really the right way to do it with ambiguous classification. So that's a separate issue. So I think I'll stop here, so I have a little something to say next time I see you. OK, thanks. Any questions right now? So is this-- I'm trying to keep it fairly simple. Does this seem appropriate, not too complex but not too slow either? Or does it seem a little too slow? Whatever, yeah. AUDIENCE: Is this going to be up on the myCourses website? STEVEN A. GREENBERG: Yeah, yeah, these are the same ones I used last year. So is the stuff that we put up last year still on there? Does it need to be put up? Yeah, so that's still on there, yep. Have you guys been getting problem sets to do? How's that been working? AUDIENCE: [INAUDIBLE] presentation of basic colors. [INAUDIBLE] STEVEN A. GREENBERG: There's the Eisen Cluster and TreeView software. AUDIENCE: OK. Is it TreeView making the graphs? STEVEN A. GREENBERG: TreeView makes the graph. AUDIENCE: [INAUDIBLE] if I do something with another tool, then I can [INAUDIBLE]? STEVEN A. GREENBERG: As long as you put it in the right format for TreeView to look at, yeah. AUDIENCE: OK, thank you. STEVEN A. GREENBERG: Yeah. AUDIENCE: There is another system that used pink and blue, right? STEVEN A. GREENBERG: There's the pink and blue system too.
So some people, like Peter Park, who's going to talk to you, I think, next week-- he just uses programs that he wrote himself to do that. There's commercial software called GeneSpring that uses pink and blue. There's a program called TMEV, T-M-E-V, out of TIGR, The Institute for Genomic Research, and they have free clustering programs.
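For readers doing the problem sets without those packages, a rough equivalent of what the clustering programs do can be sketched in Python with scipy's hierarchical clustering. This is only an illustrative stand-in for Cluster/TreeView or the TIGR tools; the correlation-based distance and average linkage are assumptions in the spirit of the Eisen approach, not those programs' exact defaults.

import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

# Toy expression matrix: rows are genes, columns are samples.
rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 20))

# Cluster genes with average linkage on a correlation-based distance.
dist = pdist(expr, metric="correlation")
tree = linkage(dist, method="average")

# The leaf order is the row order you would see down the side of a
# red/green (or pink/blue) heat map drawn by a viewer like TreeView.
row_order = leaves_list(tree)
print(row_order[:10])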
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_13_Case_Hx_Complex_Traits.txt
SCOTT WEISS: So this is an outline of what I'm going to talk about, and we're going to begin by getting at this question of why complex trait human genetics is so difficult. And then go through each of the steps that you would do if you were actually doing this work. The first question you would get asked on an NIH grant application is: is your phenotype heritable? And so some evidence of heritability, or doing a study to determine heritability, is often the first step in a genetic study. And so we're going to talk a little bit about heritability and how you define that, and then identifying disease phenotypes-- so that's a critical issue-- the difference between a subphenotype and an intermediate phenotype, how you want to look at those things, then developing your study design. And the paper that I was just telling you about, this paper that's coming out in Science, these people looked at two relative genetic isolates-- the population of Finland and the Saguenay-Lac-Saint-Jean population in northeast Quebec, French Canadians. So these are populations where they had a limited number of founders, and so the thought is that they're more genetically homogeneous, and it might be easier to find genes in these populations. But what's the big concern, if you found a gene in the Finns? What would be your biggest concern about that? AUDIENCE: That it would be replicable? SCOTT WEISS: Exactly. Maybe that gene might not replicate or be a significant gene in an outbred population in a country such as the United States, where there's a lot of ethnic variation and diversity. So it may be easier to find genes in genetic isolates, but it may be more difficult in terms of their generalizability. So give me some other examples of relative genetic isolates around the world-- populations that would be considered relative genetic isolates? AUDIENCE: [INAUDIBLE] AUDIENCE: Iceland. SCOTT WEISS: Iceland. Yes, absolutely, Iceland. So that's where Decode is working-- Decode is our number one competitor in COPD research. They're the only company that's actually doing COPD stuff. What else? Where else could you go? AUDIENCE: South America. [INAUDIBLE] generally from Iceland [INAUDIBLE] in South America. I guess tribal populations in South America and [INAUDIBLE]. SCOTT WEISS: Maybe-- what might be the disadvantage of a tribal population, or the disadvantage of a small island population like Tristan da Cunha, which is where they first went to do genetic isolate work in asthma? AUDIENCE: The small number of people. SCOTT WEISS: Yeah, small number of people. So you've got a limited number of meioses in a population like that. You're not going to get too many combinations. So you just run out of gas because you don't have a big enough sample size. AUDIENCE: Swiss. Swiss families up in the Alps somewhere. SCOTT WEISS: Possibly. The Finns, the Swiss, the northern Netherlands, the Ashkenazi Jews, Costa Rica. AUDIENCE: Why? Why Costa Rica? SCOTT WEISS: Why Costa Rica? Because Costa Rica is surrounded by volcanoes. It was the one place in Central America where there wasn't a huge Spanish influx, because there was no gold there. So there's a limited number of Spanish founders. They pushed the Indians to the periphery, and they settled the Central Valley of Costa Rica. So you've got these couple hundred founders, very little intermarriage, and perfect church records-- very large pedigrees.
It's probably the closest thing, next to the Saguenay-Lac-Saint-Jean population in Quebec, to a genetic isolate in the Western hemisphere. And we're actually doing a big study there. We actually have six big pedigrees with over 120 people in each pedigree. We've just finished the collection of these pedigrees. We're about to do a genome scan that Tom is actually doing for us. AUDIENCE: [INAUDIBLE] most of the people are isolated by geography, as you've mentioned, with the exception of the Ashkenazi Jews. So to broaden, I guess, the definition of genetic isolate to include things like economic-- like, things that usually [? happen to ?] people, like royal families back in the old days. They only intermarried because of social status [INAUDIBLE]. But now there's things where people with certain economic status [INAUDIBLE] in the coming years [INAUDIBLE] see a certain set of people where they marry and-- SCOTT WEISS: Yeah, it turns out, I think, that geography is actually a much better-- [INTERPOSING VOICES] AUDIENCE: --happening to royal families where people have a chance to procreate [INAUDIBLE] proximity as well? SCOTT WEISS: I think the geography is a lot better. Historically, it's a much more reliable guide to a genetic isolate than any social convention. I think, exactly for the reasons that Zach-- AUDIENCE: Aren't groups like the Ashkenazi Jews-- AUDIENCE: There's such strong social pressure. AUDIENCE: Yeah, like they [INAUDIBLE]. SCOTT WEISS: They may be the exception to the rule, but I think populations like Iceland, Costa Rica, Tristan da Cunha, Finland-- AUDIENCE: How about the Amish? SCOTT WEISS: The Amish would fall into that group. The Mormons-- maybe a little bit less so. The big advantage of the Mormons is not so much that they're a relative genetic isolate, but that they have very large families and they have very good church records. So those are other characteristics that are helpful. But the idea behind the genetic isolate is that you've got a relatively homogeneous set of alleles that's circulating in the population. Mormons do marry outside the group, and so there may be-- I don't know that you'd consider them really a genetic isolate, whereas I think that the Amish and the Hutterites and the Ashkenazi-- these are people where there is some set of codified social conventions such that those people are much more likely to marry within the group. AUDIENCE: The reason I ask is because it seems like the trend might be, especially nowadays with transportation the way it is, that finding geographically isolated populations is going to decline and [INAUDIBLE]. SCOTT WEISS: This isn't the only way to do this. And I think it is important to make the distinction between linkage and fine mapping, and those two things may be somewhat different. The advantage of an outbred population is just that the extent of linkage disequilibrium will be relatively narrow. In some of these genetically isolated populations, the extent of linkage disequilibrium can be very large. That means you can do the linkage part pretty effectively, but the association part is more difficult because you've got these big LD blocks that you've got to work with, and you may not be able to get to the gene. So anyway, we'll go through this stuff. These are all of the steps that you would do in a typical study, and we'll just go through each of these. So I'm going to use asthma as my example because this is the disease I know the best, and the point here is that asthma prevalence in Western developed countries has gone up a lot.
So do you think that this is a genetic thing or do you think it's something else? So over the 20-year period, '80 to 2000, we've had more than a doubling in the number of cases. So over a 20-year period, a doubling in the number of cases? AUDIENCE: Genetic? SCOTT WEISS: Genetic or environment? AUDIENCE: Environment. SCOTT WEISS: Why? AUDIENCE: Environment [INAUDIBLE]. SCOTT WEISS: So there are three potential-- what are the three population genetic mechanisms by which something like this could occur-- a genetic explanation? So I'll give you at least half credit, but you're definitely not 100% right. The first genetic mechanism would be spontaneous mutation. You had some spontaneous mutation, and it caused this epidemic of disease. Is that possible? Well, you already said no, it's not possible. And you're right, it's not, but you have to know what the spontaneous mutation rate is, which is about 1 times 10 to the minus 8 per base pair per generation. So it's pretty low. We're spontaneously mutating all of the time, but we're not spontaneously mutating fast enough to double the number of cases in a 20-year period of time. So what's the second possible genetic mechanism? Natural selection, right? So is natural selection going to do-- AUDIENCE: [INAUDIBLE]. SCOTT WEISS: Well, particularly with a disease like asthma, where there's no selection pressure and no reproductive advantage or disadvantage-- you all know plenty of people with asthma, and they're able to reproduce just like everybody else. So it couldn't be natural selection. And what's the third population genetic mechanism that could-- AUDIENCE: [INAUDIBLE] the third one. SCOTT WEISS: Genetic drift. So you had some mutant asthmatic that came into the American population, and over 20 years of time, they intermarried with all these other people and doubled the asthma rate-- plausible or implausible? AUDIENCE: Implausible. SCOTT WEISS: No, it can't happen. So you're right, it can't be genetic. So I give you half credit. I mean, most geneticists don't think like this, but they should think like this. The reality is that all of these genes operate in a developmental and an environmental context. All of your genes do. So the true underlying model for disease causation is gene by environment interaction. So it could very well be that there was some dramatic change in the environment, and now that's interacting with some genes that it wasn't interacting with before, and now you've got this marked explosion in the number of cases. And that almost certainly is the most comprehensive explanation, but it would have to come from some environmental change, rather than some primary change in the genes. But it could easily be that there is interaction between whatever the environmental exposure is and some of the underlying polymorphisms that may be disease related, which are different now than they were back here when the disease rate was a lot lower. So it's a big health problem. I don't want to dwell on this, because it's not the purpose of this course, but all this means is that people will give you money to study this, and they weren't so keen on doing that 20 or 30 years ago. The other important point is this disease is a disease of children. So you're going to think about-- I usually tell people, this is data looking at the age of onset in a closed population in Olmsted County. What's the famous medical center in Olmsted County, Minnesota? AUDIENCE: Mayo? SCOTT WEISS: You weren't supposed to answer that.
They're supposed to answer that. AUDIENCE: [INAUDIBLE]. SCOTT WEISS: Mayo Clinic. So Mayo Clinic is-- everybody in Olmsted County goes to the Mayo Clinic. Now, Saudi princes and sheikhs and famous people from all over the world go there-- who's the King of Jordan? King Hussein? He went there for his-- so Zach's mother, another famous person, went to the Mayo Clinic. So they get a lot of people from outside, but this data is based on the people who live in Olmsted County. Now, if you live in Olmsted County, you don't go anywhere. You just sit right there and you stay there. So this is a fairly stable population. And they were able to capture all of the incident asthma cases and document them because they were all going to the Mayo Medical Center, and they had their chart records. So 90% of all of the people who were diagnosed as asthmatic in Olmsted County were diagnosed before the age of six. So this is a very, very important point. So this is the opposite of Alzheimer's disease, because if you think about genetics, this is great for geneticists, because I only have to wait six years from the time the kid is born, and I'm going to know whether they've got the disease phenotype or not. If I was waiting for Alzheimer's cases, it would be Waiting for Godot. I'd be waiting a long time before I'd get my cases. Now, there's ways around that for the diseases of older people, right? And what did geneticists do? How did they find the BRCA1 gene? What did they do to enhance the probability that you would find-- if you're looking at older people, how do you enrich for a genetic cause of a disease? What do you do? AUDIENCE: [INAUDIBLE] SCOTT WEISS: Education doesn't have anything to do with it, I'm afraid. What characteristic of the cases would make you think it's more likely to be genetic? AUDIENCE: Family history. SCOTT WEISS: What? AUDIENCE: Family history. SCOTT WEISS: Family history, but what specifically about the family history-- age of onset. If you're looking for genetic causes of heart attacks, you're going to take the people that have heart attacks when they're age 50. So Ed Silverman, who's the world leader in COPD genetics, in my laboratory, is looking for early-onset COPD cases. So he gets cases where the age of onset is younger than 52. So that's young. So if you were looking at Alzheimer's cases, you'd say, well, we want all of the cases of Alzheimer's in people before the age of 60. And this is how Mary-Claire King found BRCA1. She looked at all of the early-onset breast cancer cases, people who got breast cancer in their 20s, their 30s, their 40s, instead of looking at older postmenopausal women, which is almost certainly another disease. So if you're looking at old people, one of the clever ways that geneticists enrich for genetic susceptibility is by looking at early age of onset. AUDIENCE: So they're selecting, potentially, a special case of a particular disease? SCOTT WEISS: Absolutely right. It's a little bit like-- in some perverse kind of way, it's a little bit like the genetic isolate. You might find a gene that is specific for that particular type of early-onset disease. So you find a gene for early-onset Alzheimer's, but not for garden-variety old-age Alzheimer's that occurs in virtually everybody by the time they're 90. So you're right. But we're still in, I think, what geneticists would say are the early stages of this. And because we're still in the early stages, most of us would be happy if we found any gene.
So you're going to be in Science if you find that early-onset gene, and nobody's going to be criticizing you because it's not the gene for all breast cancer or all Alzheimer's. AUDIENCE: [? So yet ?] another example of low-hanging fruit. SCOTT WEISS: Exactly. So a little bit more about the disease-- most of the kids are allergic. Allergy is probably the big reason why the asthma epidemic occurred, and that means that they have this particular type of an inflammatory process where antigen presented to dendritic cells in the airways activates these CD4-positive T lymphocytes, which then elaborate this series of inflammatory cytokines, which go to these inflammatory cells, which then infiltrate the airways and set up an inflammatory reaction with coughing, wheezing, airways responsiveness, et cetera. This is all well known, but it does suggest a whole host of other potential phenotypes that you could potentially look at. And it also gets at this concept of the ontogeny of the immune system, where Th0 cells, at some point, differentiate into these Th1 and Th2 cells, whose phenotype is determined by which cytokines they actually elaborate. And I've got a question mark here, but, actually, this particular step-- the crosstalk and interaction between these two types of cells-- is controlled by two specific genes that elaborate cytokines, IL-10 and TGF-beta, and we genotyped both of those genes in asthma and COPD, and they're important in both diseases. Now, it's important for you to understand that I skewed things a little bit, because I told you that asthma is a Th2 disease, and Th2 diseases have increased. There's this increase in allergic rhinitis, food allergy, asthma, et cetera. This novel gene that I was telling you about just a few minutes ago-- it's going to come out in Science on Friday-- that gene is expressed in the skin and in gut epithelium and in airway epithelium, suggesting that it may be important in all these different types of allergic diseases, which, again, has heightened people's interest in the gene and its potential importance. But it's also important to recognize that Th1 diseases have also increased. So give me some examples of some Th1 diseases. So the epidemiology-- and the reason I'm bringing this up is that most of the immunology community is focused on-- this is why it's important, if you're going to be a good geneticist, that you've got to really know your disease. You can't just wave your hand at it and say-- and I think the age of the generalist geneticist-- geneticists that sort of say, oh, you know, I'm going to study this disease and I'm going to study that disease-- not with complex traits. That's not going to work. You're going to have to really know your disease, because you have to know the environment, you're going to have to know the natural history, you're going to have to know the intermediate phenotypes, and you have to really understand the biology as well. The point here is these Th1 diseases-- give me an example of a Th1 disease. AUDIENCE: Well, I'm thinking about that-- could you tell me the autoimmune diseases, like inflammatory bowel disease, what are they [INAUDIBLE]? SCOTT WEISS: They're Th1. So Crohn's disease-- Th1 disease. Juvenile rheumatoid arthritis-- Th1 disease. Psoriasis-- Th1 disease. Juvenile diabetes-- Th1 disease.
And the reason-- so if the prevalence of these has gone up, and the prevalence of these has gone up, people are thinking that there's something going on further up here that has to do with Treg cells, cells that regulate T cells in terms of their differentiation, because it can't be just at this level that the immunologic defect is. So it raises the possibility that there are genes-- Foxp3, T-bet, a whole bunch of other genes that are proximal to the Th1, Th2, CD4 lymphocytes-- that may be important in all of these autoimmune diseases. And people are just now starting to look at that. And, obviously, the environmental and genetic factors that influence the differentiation of the immune system, or how people actually become tolerized to foreign antigen-- that's the kind of simple-sounding but complicated question that, if you could figure out an answer to it, you'd win a Nobel Prize. So that's what my laboratory is starting to work on. So this is just to show you, again, what I've already told you, that there are a bunch of factors, mostly bacteria and viruses and parasites, that influence this Th1/Th2 differentiation. And environmental factors that influence those things are presumed to be important. And one would want to know both the genes and the environmental factors that are involved in this particular disease. There happen to be a whole host of environmental factors that are correlates of those sorts of changes. And I've listed a bunch of them here. We got very interested in this one. We went to China in 1996 to do an asthma genetics study, and I noticed how different the environment was there. And this left-hand category would summarize what you would see if you were standing in rural China, in terms of the environmental exposures. Very, very low asthma rates in rural China-- it's about 1%. And there is a progressive gradient of increasing disease prevalence as you march towards Beijing or Shanghai-- much, much higher rates. AUDIENCE: Let me ask you some questions. Is it not factually wrong that Chinese populations-- I thought the families were [INAUDIBLE]? SCOTT WEISS: No. See, that's again-- a little bit of knowledge is a bad thing. If you get into rural China, actually, where farming is what everybody does-- although the central government would say there's a two-child policy or a one-child policy, in rural China, they just have as many children as they want-- they may not register them with Social Security, but if they need three kids or four kids to run the farm, they have as many kids as they want. So we found a lot of families with four or five, six, eight kids. AUDIENCE: All right, here we go, [INAUDIBLE]. SCOTT WEISS: So why has it been presumed to be difficult to do this kind of work? What's the reason that it's difficult? And I think these are some of the reasons. And some of them relate to the issues of study design-- the things that we were talking about. One is this whole idea of genetic heterogeneity. Particularly if the underlying model here is gene by environment interaction, presumably you could get the same phenotype-- and these phenotypes are determined by multiple genes. You could get the same phenotype, either high IgE or airways responsiveness, in population A with a very different constellation of genes and environmental exposures, and you can get the same thing in population B with different genes. So this genetic heterogeneity thing is a reason for focusing on a genetic isolate.
But then you have to worry about the generalizability question. So, in fact, in asthma there are four positionally cloned genes, counting the paper that's going to come out on Friday in Science, and of those four, the first is the only one that people have really attempted to replicate, and it's gotten mixed results. There are some people that have replicated it and some people that haven't. So it's one of those genes that probably falls into this category of, well, it's not a major gene. It's a minor gene. It's one of the 200 genes that determine asthma, but it's not one of the top 10 in every population. AUDIENCE: What's your guess about how this new gene is going to be [INAUDIBLE]. SCOTT WEISS: My guess about this new gene is that it's a major player. But having said that, the point that I made to the Science writer who was doing this is that that's what science is all about-- replicating this, seeing how important it really is, and seeing what actually happens. I mean, I think you can get a clue as to whether you've got hold of an area where there's a potential major locus or not by looking at the replicability of the linkage peaks in a particular region for a complex trait-- in other words, if you've got a region where there's a linkage peak, and there are 10 different studies in 10 different populations, and there's always a peak in the same region, then the chances are that there's a major gene in there that's probably going to apply to a bunch of different populations. Well, this is-- going back to the Science article-- this is a region where a number of people have found a peak. The other problem here is that, unlike single gene disorders, where there's a known mode of inheritance, you get everything but the kitchen sink here. So you get some of these genes that are autosomal recessive, and some are autosomal dominant, and some are-- so you're getting a whole bunch of things jumbled up in one phenotype, which makes it very difficult. And then there's this problem of phenocopies. So what's a phenocopy? AUDIENCE: [INAUDIBLE] SCOTT WEISS: Give me an example from your own clinical experience of somebody who's-- a phenocopy who's not due to a-- well, just like you can get these diseases from genes, you can get them from exposures in the environment. So what if you've got some guy who smokes four packs a day and he's 50 years old and he has a whopping big heart attack? Well, maybe when you quiz him, he has no family history, but he smokes four packs a day. Well, you can get a heart attack from smoking four packs a day, and you don't need to have any genes at all for heart attack. You can just-- so that's a phenocopy. He's going to look like somebody who's genetically susceptible because he had a heart attack at 50 years old, but it's all due to an environmental exposure. And then incomplete penetrance. So this is a problem even in single gene disorders, because there are clearly examples-- hemochromatosis, cystic fibrosis-- with a very different spectrum of disease in these-- we know that the CFTR gene causes cystic fibrosis. You've got some people who have completely normal lung function, no lung disease at all, and all they've got is mild pancreatic insufficiency, and you've got other people who are totally debilitated from it. So part of that can be penetrance. Part of it can be environmental exposure. But incomplete penetrance is important. Then you've got this problem of multiple genes.
People have very-- the lay public has a very deluded view-- they think that genes are immutable. If you've got those genes, that's it. It can't be changed. And they also think they're monolithic. They're really big. Whereas the reality is that you've got 33,000 genes in the genome, and you take a disease like asthma, which isn't even that complicated-- maybe there are 200 to 250-- I don't know, a lot-- that are probably important. Maybe 10 play a role in most every population. And there are a lot of environmental things going on, and it makes it very complicated, and that's why guys like this guy are going to make big bucks, because they're going to be able to model all of the different pathways and all the different genes together in some more realistic model of systems biology or some actual way of looking at this. But the point here basically is, look, it's complicated to do this stuff. But again, going back to what I said earlier, it's getting a lot easier. So 2002-- one positionally cloned gene for asthma. 2003-- two positionally cloned genes for asthma. 2004-- the first paper is already out, and there's probably going to be four or five more. So it's going to be four or five this year; the year after that, there are going to be probably 10. And all of a sudden now, you've got 20 genes identified for the disease by positional cloning. And that is the history of complex trait genetics, and it's going to be-- it's happening right now. Right this very moment, all across the world, labs like mine are right in the middle of the fray doing this stuff. This is, simply put, the single most exciting time to be doing human genetics. And it's going to go on for a while, but who knows for how long? So then there's this other problem of pleiotropy, which is you can have one gene and it can do a lot of different things. You've got the CFTR gene that gives you lung disease, pancreatic insufficiency, and infertility. It all has to do with mucosa and epithelia and the different organ systems where this particular gene is expressed. So one of the genes we're looking at-- did Jeff talk to you about CRHR1? Did he show you the data about CRHR1 last week? So that gene is expressed-- is that gene expressed in the lung? Yes or no? AUDIENCE: [INAUDIBLE] SCOTT WEISS: No. Not expressed in the lung. It's the receptor for CRF, or CRH, and it's expressed in the brain. So what other disease might that gene potentially be important in? He's an endocrinologist. He's forbidden from answering. AUDIENCE: [INAUDIBLE] AUDIENCE: What? AUDIENCE: [INAUDIBLE] SCOTT WEISS: It comes from the hypothalamus, actually. AUDIENCE: What clinical disease? Well, I just-- [INAUDIBLE] hypertension. SCOTT WEISS: Hypertension? AUDIENCE: It's not-- SCOTT WEISS: There are endocrine causes of hypertension. AUDIENCE: [INAUDIBLE] course yet? SCOTT WEISS: So anybody had physiology? AUDIENCE: [INAUDIBLE] SCOTT WEISS: HPA? AUDIENCE: OK, so [INAUDIBLE]? AUDIENCE: No, think common diseases, man. SCOTT WEISS: Common disease. [INAUDIBLE] tell you. It's depression. It's been studied a huge amount in the sectioned brains of people who committed suicide, and this-- AUDIENCE: [INAUDIBLE] SCOTT WEISS: I mean, all kinds of things show that CRF and CRHR1, which are the ligand and the receptor, are important in affective disorders. AUDIENCE: Is there any link to [INAUDIBLE]? SCOTT WEISS: Well, there's an association between our [INAUDIBLE] type and depression, and Julia [INAUDIBLE] Mexican-American-- AUDIENCE: Really? SCOTT WEISS: Yeah. AUDIENCE: Very cool. SCOTT WEISS: Yeah, so that's pleiotropy.
And then, obviously, you've got this problem of penetrance, which is the proportion of individuals with a genotype who actually express the trait. And IgE genes can be important in hay fever. They can be important in asthma. And there are some people who don't have high IgE at all, even though they've got the genes. And these are some other examples of things like-- The basic point I'm trying to make here is that these are reasons that have been given for why doing this stuff is hard. But I'll tell you something. Really, the hard part has been developing the bioinformatics infrastructure, the tools, the bioinformatics tools, and cheap, reliable genotyping. Those have really been the things that have been important. And just in the little bit of time that I've been doing this, my genotyping costs have gone from $1.20 a SNP genotype down to-- next year, I'll be down to about $0.15 to $0.20 a SNP genotype. And there are three million SNPs minimum-- three to five million-- in the human genome. Now, I'm not going to type all three million, but I've got to type-- in any one experiment, I've got to be able to type 1,000 SNPs over a 10-to-20 megabase region-- one of these linkage peaks. So I've got to do a lot of genotyping in a lot of people, and it's expensive. The very first positionally cloned gene for asthma took six years and $15.6 million. We could do that experiment today for $2 million and a regular NIH grant, and that has totally changed the field. That's the kind of thing that's really making this possible. So I already said that if you're going to think like a geneticist, everybody has to know a little bit of population genetics. So you have to understand the concepts of linkage disequilibrium, drift, natural selection, et cetera. I'll remind you of the fact that somebody asked the President of the United States whether he believed in evolution, and his answer was, the jury's still out. AUDIENCE: Really? SCOTT WEISS: Yeah, that's what he said. AUDIENCE: This is your former classmate, right? SCOTT WEISS: Right. I went to high school with the president. And so this is the first question that you're going to get asked if you're writing a grant. The first thing that you have to address is the disease or the phenotype that you're interested in-- is it heritable? So there's lots of different ways to measure this. You can calculate heritability estimates. You can do twin studies. You can develop this concept of risk to relatives, which is you look at the risk in the relatives of probands divided by the risk in the population at large, or you can look at familial aggregation. But the point is, you've got to gather the evidence, and if you don't know that your phenotype is heritable, you're going to have to demonstrate that it's heritable before anybody is going to give you a grant to study it, because that's what geneticists say. They say they want to know that. They want to know the answer to that question. So I think it's-- asthma doesn't necessarily have a high heritability, but it clearly is a heritable disease. This is data from one twin study from the Danish Twin Registry that looked at the concordance of asthma in identical and fraternal twins. Identical twins share 100% of their genotype. Fraternal twins share 50% of their alleles. Everybody knows that twins also share their environment. So that's another factor that's at issue here. But the reality is that there clearly is evidence of heritability of the disease.
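Two of the quick calculations behind those arguments can be written out, with made-up numbers rather than the actual registry or asthma figures: a Falconer-style approximation that treats heritability as roughly twice the difference between monozygotic and dizygotic twin correlations (or concordances), and the sibling risk ratio lambda-s mentioned above.

def falconer_h2(r_mz, r_dz):
    # Rough heritability estimate: twice the MZ-DZ difference in twin
    # correlation or concordance (a crude shortcut, ignoring liability scaling).
    return 2.0 * (r_mz - r_dz)

def lambda_s(risk_in_siblings, population_prevalence):
    # Risk to siblings of affected probands divided by the population risk.
    return risk_in_siblings / population_prevalence

# Illustrative numbers only, not the data shown in the lecture:
print(falconer_h2(0.65, 0.30))   # about 0.7
print(lambda_s(0.20, 0.08))      # about 2.5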
You get very different-- The problem here is that heritability estimates are always dependent on environmental exposures as well-- and on disease prevalence-- because the true underlying model for all of these diseases is clearly still going to be a gene by environment interaction. So after you've decided that the phenotypes that you're interested in are heritable, then you've got to go out and you've got to say, OK, I've got these phenotypes and I'm going to measure them in a population. You can either look at disease phenotypes-- the advantage of this is that people want to look at asthma. They want asthma genes. They want to find, quote, the gene for asthma, unquote, which we already know is probably a false concept. But the problem with a lot of disease phenotypes is that, even though they may be binary clinically, there may be real problems in terms of making that diagnosis in a way that would be useful for a research study. The problem with asthma is it's a syndrome. I mean, there is no one way of diagnosing asthma such that you can say, take this test, and I can guarantee you that everybody with a positive test has the disease and everybody with a negative test doesn't have the disease. AUDIENCE: [INAUDIBLE] SCOTT WEISS: Yeah, but the FEV1 doesn't tell you whether somebody's got asthma or not. I can show you people who have reduced FEV1 and have cystic fibrosis or have interstitial lung disease or have COPD. I mean, they can have a lot of different things. So the FEV1 lacks sensitivity and specificity. And that's true for every single test. I mean, elevated IgE-- well, you could have elevated IgE from parasitic disease or from eosinophilic pneumonia or from 20 other different things. So there is no single test, and the same may be true for most complex traits. There may be some phenotypes that are a little easier to measure. Like, say, well, I want to study obesity. Well, how fat is fat? Are people who are fat like this different from people who are fat like this? I mean, there's all sorts of different ways of looking fat or being fat. So any one of these phenotypes has complications. And I can tell you this from-- when I first got into this, all I knew was phenotype. I was a world-class phenotyper. I knew all of the nuances of phenotype and everything there is to know about phenotype. And that tends to be what happens when you talk to clinicians, because they understand that more. So this stuff is really, really important, but it's not going to get you very far if you don't know all the other stuff. You've got to know all the other things. I think the point is you do have to know this, and, again, it's a-- AUDIENCE: The current problem is a lot of genomicists that don't understand. SCOTT WEISS: Exactly. Well, it goes back to the point that I was making earlier, which I think is that genetics is moving from a field where geneticists were generalists to a field where geneticists are specialists. You get people who specialize in respiratory genetics, cardiovascular genetics, obesity genetics, diabetes genetics. The days of the person that can roam around and do all of these things-- no, I don't think that's going to happen. In five years, six years, you're going to have to be able to go in there and focus on a specific disease, because it's going to be too complicated for you to be able to do otherwise.
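The sensitivity and specificity point above can be made concrete with a small calculation. The counts below are invented, but they show how a single test with reasonable-looking sensitivity and specificity still misclassifies many people if it is used as the sole case definition.

def test_performance(tp, fn, fp, tn):
    # Standard 2x2 diagnostic-test quantities.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    return sensitivity, specificity, ppv

# Hypothetical single test (say, a reduced-FEV1 cutoff) applied to 1,000
# people, 100 of whom truly have the disease; numbers are illustrative.
print(test_performance(tp=70, fn=30, fp=90, tn=810))
# sensitivity 0.70, specificity 0.90, PPV about 0.44:
# more than half of the test-positive "cases" are not cases.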
Then you've got this other type of phenotype, where you can say, well, OK, we want to look at asthma, but what about looking at intermediate phenotypes? So give me some examples of an intermediate phenotype related to my disease of interest. What would be an intermediate phenotype? AUDIENCE: [INAUDIBLE] SCOTT WEISS: Absolutely, FEV1. What else? I had it up on a number of slides. So IgE level, right? It's a measure of allergy. Skin test reactivity, airways responsiveness, symptom score, sputum production, exhaled nitric oxide. I mean, the list goes on and on and on. You can create hundreds. So for obesity, it could be that body mass index is the primary phenotype to define obesity, but then you could look at absolute fat mass, or percent body fat, or waist-to-hip ratio, or insulin resistance, or do CT scans of somebody's abdominal fat deposition. I mean, there's a million different ways of potentially going at this. The advantage here is that sometimes these are more objective than a subjective, oh, it's asthma, it's not asthma. And it may be closer to the gene, in the sense that if you've got somebody's IgE level, you have some idea of the genes that determine that. And it can be quantitative. You can use a different statistical approach for quantitative traits than you can use if you're looking at binary traits. AUDIENCE: [INAUDIBLE], like if someone comes in and you look at them and say, OK, this person has a symptom of asthma. They have difficulty breathing. [INAUDIBLE] then you start delving into looking at-- SCOTT WEISS: All these intermediate phenotypes. AUDIENCE: And that correlates then with narrowing the diagnosis from, OK, you don't really have [INAUDIBLE], you have this, or you have this type of asthma, or is that-- SCOTT WEISS: Well, it's the way I prefer to think about it. And I think it's probably a better way for you to think about it, because you've got to get away-- this is where thinking like a doctor and a clinician is bad. In the world of clinical medicine, it's just like religion. You either have the disease or you don't. There's no such thing as being a little bit pregnant. You're pregnant or you're not pregnant. You have to have bypass surgery or you don't. Clinicians live in a binary world. Real scientists live in the world of continuous distributions. So you can have-- when are you fat? Are you fat with a body mass index of 23, 24, 25, 26? When do you have high blood pressure-- when it's 130 over 80? Or 140 over 90? Or when is that? And the other thing is that the way to think about these is kind of like overlapping Venn diagrams. The clinical phenotype is actually a composite of these overlapping Venn diagrams that all have separate genetic determinants and things that contribute to them-- it's like peeling an onion, where you've got all these different layers. But I think, in many ways, being a clinician can help you as a research scientist. But in some ways, it can also hurt, because you start to think in these absolute terms. So I think the better way to think about it is that these intermediate phenotypes overlap to create clinical phenotypes. And yes, what you're trying to do is stratify in some way or classify people in some way so that you're creating homogeneity, so that you can actually identify the genetic determinants of a disease or an intermediate phenotype. So you want to go in that direction. But most of these things lack sufficient sensitivity and specificity to really be terribly helpful.
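A small simulation of that quantitative-versus-binary point, with invented effect sizes: the same genetic effect is generally easier to detect on the continuous intermediate phenotype than after dichotomizing it into affected versus unaffected.

import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

rng = np.random.default_rng(3)
n = 400
carrier = rng.random(n) < 0.3                          # carries a hypothetical risk allele
phenotype = rng.normal(loc=np.where(carrier, 0.4, 0.0), scale=1.0)

# Test the genotype effect on the continuous intermediate phenotype.
_, p_quantitative = ttest_ind(phenotype[carrier], phenotype[~carrier])

# Dichotomize into "affected" above an arbitrary cutoff and test again.
affected = phenotype > 1.0
table = [[np.sum(carrier & affected), np.sum(carrier & ~affected)],
         [np.sum(~carrier & affected), np.sum(~carrier & ~affected)]]
_, p_binary, _, _ = chi2_contingency(table)

print(p_quantitative, p_binary)   # the continuous version is usually more significant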
So this is just a list of some of the phenotypes that people have looked at in asthma. And I've highlighted some of the ones that people have focused on in terms of linkage peaks that have actually been identified. But this is interesting, because there's clearly a bias in the literature-- there's a whole bunch of these other phenotypes that people haven't looked at, and I could create a list of 30 more of them. So this just gets to the point that there's plenty of work here for anybody who wants to do this stuff. I've got a junior person in my lab who's got a bunch of phenotypes that he's really interested in, and he's going to go out and determine their heritability, and then he's going to write another grant, and he's going to map the genes for them, and so on and so forth, because he wants to have his own little area to work on. So then the next thing-- so now we're kind of at the point where I've got to move a little faster or we're not going to make our way through this. But you've got to have a study design, and there's a bunch of different ways of doing this. You can do linkage. You can do association. And amongst the linkage studies, you can do allele-sharing methods, which are distribution free, or you can do continuous distributions and focus on that. There are two types of genetic association studies-- the family-based and the case control. The important point here is that they're very different. Here, you have to genotype three people. Here, you have to genotype only two people. Different hypotheses-- here, you're looking at the alleles or the genotypes in the cases relative to the controls. It's the genotype frequency in the cases versus the controls. Here, you're looking at transmitted alleles from a heterozygous parent to an affected offspring. So very different hypotheses, different study designs, and the important thing to recognize is that in any association study, the association between a variant and a phenotype can be due to a causal relationship. It can be due to linkage disequilibrium. Or it can be due to population admixture, which means that, usually in the context of the case control study, not a family-based study, you've got different allele frequencies segregating in the cases and the controls because you've got different population histories, evolutionary histories, that have determined those allele frequencies. So the most extreme example would be if I had 1,000 Italian cases of asthma, and I'm comparing them to 1,000 Swiss controls who don't have asthma. And even though these two groups are predominantly Caucasian, their evolutionary history may be different, and the allele frequencies may be different as a result of that. So even within an ethnic group, you can get these different allele frequencies. And this is because ethnicity, or self-designated ethnicity, is only a weak predictor of evolutionary history. AUDIENCE: What was your example, or [INAUDIBLE] example-- so you compare [INAUDIBLE] associations [INAUDIBLE] between Germans and Italians, those two populations. Sure enough, we found a linkage association between pasta eating and some piece of genome because, in fact, what you'd be looking for is linkage to the fact that you're an Italian, just by the fact that Italians have distinct [INAUDIBLE] polymorphisms than the Germans [INAUDIBLE] creates this [? false ?] association when, in fact, [INAUDIBLE] looking at different populations.
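That pasta-eating scenario is easy to simulate, which makes the admixture problem concrete. All of the frequencies below are invented: a marker that has nothing to do with disease, but that differs in frequency between two subpopulations that also differ in disease risk, looks associated once the groups are pooled into one case-control comparison, while within either subpopulation there is typically nothing. The family-based design sketched above is one way around this, because the untransmitted parental alleles come from the same population as the affected child.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 5000
pop = rng.integers(0, 2, size=n)                            # two ancestral populations
allele = rng.random(n) < np.where(pop == 0, 0.10, 0.40)     # marker tracks ancestry only
disease = rng.random(n) < np.where(pop == 0, 0.02, 0.10)    # disease risk also differs

def case_control_p(allele, disease):
    table = [[np.sum(allele & disease), np.sum(allele & ~disease)],
             [np.sum(~allele & disease), np.sum(~allele & ~disease)]]
    return chi2_contingency(table)[1]

print(case_control_p(allele, disease))                      # pooled: "significant" but spurious
print(case_control_p(allele[pop == 0], disease[pop == 0]))  # within one population: typically not
print(case_control_p(allele[pop == 1], disease[pop == 1]))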
SCOTT WEISS: Some of the guys in my lab wrote an article demonstrating all of the potential problems in the case control type of genetic association study. And one of the things that's really impressive about this paper in Science is that we all use genetic association as part of the fine mapping process to map a linkage peak. But-- this is very important-- even if you can get rid of the population admixture problem, linkage disequilibrium is always an issue. And so you're never going to know for sure if you're at the gene or you're just close by to it. And so you're going to have to have something else to show that you've actually found the gene. You're not getting into Science just with genetic association. And so the people in the paper that's coming out this week-- they've looked at expression of the gene in bronchial tissue. They've done immunohistochemistry to show that the gene is expressed in epithelium. They replicated their results in a different population, et cetera. So the thing about the case control studies, and about even family-based association, is that these studies are really easy to do. And so there's lots of them in the literature. So it's really important for you to know-- going back to this slide-- it's really important for you to know these potential problems, because you want to be able to read this literature and say, yeah, these guys really found something, or maybe they didn't. So the advantage of this candidate gene thing is that it's cheap and easy. Compare that-- remember, I said that there are now four positionally cloned genes that have used this type of genome screen approach, four that have been identified since the human genome was mapped in 1996. Well, that's seven years. That's not even one gene a year. That's pretty meek, or weak. And that's because this is very expensive and technologically intensive. But the thing that's great about it is you come up with a novel gene at the end of the day. So it's not dependent on what anybody knows about pathobiology. And so you can go this way, and you can say, look, I know that IgE is important in asthma. So I know that we ought to be screening IL-13, IL-4, the IL-4 receptor alpha, CTLA-4-- all of those genes in the pathway that determines IgE make sense to screen, because we've already said that people with asthma have high IgE. Well, you check those genes, and yeah, in fact, most of those genes are asthma or allergy genes. It's not real exciting, though. I mean, it's not like everybody's going to jump up and say, oh my god, IL-13 is an asthma gene. Well, the molecular biologist says, yeah, well, we knew that 10 years ago. What's new? What's great about that? I mean, there are interesting things about it, because you actually can get to the level-- it's going to change molecular biology too, because you're actually going to get to the level where you say, well, it's these three variants in the promoter, it's this variant in exon 1, and it's this particular haplotype that's determining the effect on IgE level. So molecular biology is going to change, because people aren't going to just be-- aren't going to get away with knocking out a gene or looking at a whole gene effect. They're going to actually have to go in there and determine the particular variants that are important in terms of the molecular mechanisms.
So I don't want to denigrate this, because all of us do a lot of this stuff to keep ourselves busy while we're trying to do these really big experiments that are very expensive and take a long time. Skip that. So let's talk a little bit about linkage. Linkage is this idea of taking these microsatellite markers all the way across the genome. It's a property of families. It's not a property of individuals. And you're looking to see if a particular region of the genome contains a gene that's related to the phenotype of interest that's segregating in these families, using identity by descent. So what you do is, when you have some extended pedigree like this, what you could do is segregation analysis to develop a model to see how the disease is actually segregating in this population. But that's pretty difficult for complex traits. It's not easy to do. You could also use this approach, the allele-sharing approach, which assumes no mode of inheritance. It just says we collected a whole bunch of sib pairs who are affected, and we're going to test whether these affected relatives have inherited a region of the genome identical by descent more often than expected under random Mendelian segregation. And the nice thing about this is that it's easy, but it's not very powerful. I mean, the problem is you need a lot of sib pairs, and even then, even with over 300 sib pairs, you don't get such great power using this approach. So power goes up if the disease is more heritable, and you can do it with fewer sib pairs, but the reality is that even with a huge number of sib pairs, you may not have a lot of power if the lambda is down here, which it probably is for asthma. So I think that this is why people have focused on extended pedigrees in these relative genetic isolates, and that's why we're so excited about Costa Rica. The Finns are clearly excited about Finland. And Decode is doing what it's doing in Iceland. Whether we're going to be successful or not, I don't know. But the basic approach, whether you're using an outbred population or a genetic isolate, and whether you're using sib pairs or pedigrees, is that you've got these usually di- and trinucleotide repeat STR microsatellite markers-- most of the genome scan services use about 400 of these markers, roughly evenly spaced across the genome-- and what you do is basically a form of regression where you do a LOD score, a log of the odds calculation, relating the phenotype in the family to these markers, and what you get is a linkage peak, which is the LOD score for that relationship between the markers and the phenotype. And what that says is there's a gene, or multiple genes, in this particular region on a chromosome that's associated with a particular phenotype. And then you have to go in and put down more markers-- first more STR markers and then SNPs-- and gradually map that region until you've actually got it down to a very small region, a particular thousand base pairs or whatever, where we can say it's a gene, or one or two genes, in what started as a relatively large region. So that takes a lot of genotyping and a lot of work. So our experiments now, over the next year-- we have all these linkage peaks in asthma and COPD. Each experiment is going to be about $200,000. There are going to be 1,500 to 1,600 SNPs in each of these regions, and we're going to fine map three or four regions over the course of the next year. And hopefully, we will be in Science.
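A toy version of that allele-sharing idea, with invented counts: under random Mendelian segregation an affected sib pair shares 0, 1, or 2 parental alleles identical by descent at a marker with probabilities 1/4, 1/2, 1/4, so the test simply asks whether affected pairs share more than that. Real analyses express this as LOD scores and handle ambiguous IBD, which this sketch ignores.

import numpy as np
from scipy.stats import chisquare

# Observed numbers of affected sib pairs sharing 0, 1, or 2 alleles IBD
# at one marker; illustrative counts, not data from any real genome scan.
observed = np.array([45, 150, 105])
expected = np.array([0.25, 0.50, 0.25]) * observed.sum()

chi2, p = chisquare(observed, f_exp=expected)
mean_sharing = observed @ np.array([0, 1, 2]) / (2 * observed.sum())
print(mean_sharing, p)   # mean sharing above 0.5 with a small p is the linkage signal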
AUDIENCE: [INAUDIBLE] SCOTT WEISS: SNPs every 400 base pairs, that kind of thing-- on the order of one per 1,000, about one per 1,000 bases. That's about what we're shooting for. So this is just a summary of all of the genome screens that have been done in asthma, just to show you that most of them have been sib pair studies. Most of them have been relatively small. But we do get a substantial amount of replication. These are regions across the genome-- this one right here, that's the gene that was just mapped. Several populations, including the Finns, showed a peak in this region. And they got this gene, and then they went to the Canadians and they said, can we replicate it in your population? The interesting thing is it was asthma in the Finns, but it's high IgE in the Canadians. So it shows you that this problem of phenotypic heterogeneity and genetic heterogeneity is a big issue here. So it isn't a perfect replication at the phenotype level between these two populations, but they've got all this other stuff-- the expression and everything else-- that proves that they've really got the gene. But the one we're working on is actually not on here. I didn't leave it off intentionally, but it's 12q, and it's one of the ones that's the most replicable. Now, here it is here. It's in this slide right here. So this is a very good region, and there's an [INAUDIBLE] here, but it's also got a very low p-value. So that's one of the better ones. Now, you can already see from this, each one of these-- this region has five or six different genes in it. The cytokine cluster is here, the beta-2 adrenergic receptor is here, IL-13 is here, CD-14 is here. So there's a whole bunch of small genes in here. Nobody knows whether there's a big gene or not. And it may be that that linkage peak is just being given by the fact that there's a whole bunch of small genes in that region. This one, the one we're working in, is 30 megabases. That's huge-- a huge region. But you can see from just looking at this that-- one, two, three, four, five, six, seven, eight, nine-- and there's another-- I mean, these are 20 regions, each of them about 20 to 40 megabases. There could be five or six genes in each one of these regions, and for at least two of the positionally cloned genes, there were two genes in the region, and you couldn't tell from the articles. In fact, in this Finnish article that's about to come out, there's a second gene identified, and they don't have the molecular biology on that in the paper. And they're not sure what that gene is doing. AUDIENCE: So you're actually going to do a [INAUDIBLE] genome. SCOTT WEISS: Probably. AUDIENCE: [INAUDIBLE] SCOTT WEISS: So these are some of the issues in doing the type of linkage studies that I talked about-- multiple markers, multiple phenotypes, multiple comparisons. The phenotypes are correlated, the markers are not independent-- you know what you've got to deal with. So there are a lot of statistical issues. This work is really exciting, I think, because it combines genetics, clinical medicine, molecular genetics, statistics, evolutionary biology-- all of this stuff is all mixed together. So there are a lot of important statistical issues in doing these genome screens. So then you've got to genotype the people. We've already said that SNPs are the primary genetic variation in the human genome. But we found INDELs, we found repeats, we found SNPs and INDELs together. I mean, there's all kinds of stuff. In general, SNPs occur at about one per 1,000 to one per 2,000 base pairs.
There are approximately three-- maybe three to five million in the human genome. And it's using these as the primary source of genetic variation that we're actually going about trying to map these genes. There's a whole host of questions about how you pick SNPs. We wrote a paper together, Zach and I, with some of our colleagues, about haplotype tagging SNPs. There are other approaches to using linkage disequilibrium to define the SNPs that you want to genotype. So lots of issues there, where bioinformatics is interfacing with human genetics. And no one really knows-- this is probably not 30 million. This is probably three. But no one really knows how many of these SNPs are actually coding, and I think everybody does know that it's not just coding SNPs that are important. Promoter SNPs are important, coding SNPs are important, SNPs in the 3 prime UTR are important, because they're going to change transcription factor binding and potentially change message level-- a whole host of different things-- and any one SNP, in and of itself, probably isn't going to change the function of a gene all that dramatically. So people are going towards this idea of analyzing data at the molecular level by looking at relevant functional haplotypes. If you've got a couple of SNPs in the promoter, and another that's a nonsynonymous cSNP in an exon, another that's at a splice site, another that's in the 3 prime UTR that's determining message level or stability-- you combine all of those SNPs to try to get an effect across that whole gene, in terms of looking at that gene and its impact on phenotype. So this is just a little bit about data analysis. You can either look at continuous quantitative traits or qualitative traits. There are parametric and nonparametric approaches to this. Then you use all this stuff to actually find the gene. The initial work was done with the [INAUDIBLE] clones, but now we're past the idea of doing that, because there are enough markers with the HapMap project across the genome that we can go into almost any region of the genome now, and we can come up with validated SNPs across that region, so that we can actually pick SNPs and genotype them and go directly-- and this is what's accelerating the pace of positional cloning at the moment. So these are some of the things that-- I haven't really talked about this. This is an introductory lecture. But you really get into this-- how do you do haplotype analysis, ancestral haplotype analysis or linkage disequilibrium mapping, molecular methods or tissue expression-- all of these things can potentially be helpful in the fine mapping process. We've been very interested in, and have a project with Zach where we wanted to use, mouse expression and mouse QTL analysis to help us with human positional cloning. We're not sure if our project is going to be funded, so we don't know if we're actually going to get a chance to do that. At the end of the day, you want to be able to look at the impact of polymorphic variation in the gene that you found and see whether that polymorphic-- how much of the phenotypic variance is explained by that polymorphism. And that gets back to this question of, well, you found a gene by positional cloning. How do you know it's really a significant gene? Well, does it replicate across different populations in different conditions? Is it important in different kinds of asthma? Does it seem to be explaining a significant amount of the variation? So this is one example.
It's a poor example in that it's not a really strong one. This is a gene, CD-14, that we genotyped in the program in genomic applications. This gene is the gene that binds LPS, or lipopolysaccharide, to the membrane of the monocyte, and then transduces that signal to the T cell to produce Th1 cytokines. So we found a polymorphism in this gene as part of the program in genomic applications. It's a C-to-T polymorphism. So here's the T variant. Here's the heterozygote. And here's the C. And you can see that if you look at a dominant model, where the C carriers are grouped together, anybody that has a C allele actually is likely to have more positive skin tests than those who are TT. And that genetic variation is associated with variation in soluble CD-14 levels in peripheral blood. So there's a relationship between genotype and intermediate phenotype, and a relationship to allergy, ultimately. AUDIENCE: I'm sorry, is that supposed to show a difference between the two? SCOTT WEISS: It's small, but it was significant. Well, I think the point here is that this is one SNP. This gets back to the point that it's not even a haplotype in this gene, and still there is-- and these are modest numbers. They're not huge. But there was clearly a difference, probably the level of difference you'd expect if it was just a single SNP. I mean, none of these effects are going to be very large at the level of an individual variant. At the level of a gene, with a haplotype, with a really significant gene, maybe so, but certainly not one SNP. So these are some of the skills that, if you guys want to do this work, if you were going to come to my laboratory, I would want you to know something about. You'd want to know something about this: how to genotype and apply this to the disease, study design, statistical methodology, phenotyping, environmental exposures-- and I probably ought to add bioinformatics to this list, because without good bioinformatics skills, you're going to be lost. And it's hard to know exactly where on the spectrum people want to be-- you could do this and never have anything to do with the phenotyping and just focus on the functional variation in the genes that these guys are actually finding, or you might situate yourself somewhere in the middle. I've got people in my lab that are doing just this, and very few people that are doing just that, but I have some that are sitting in the middle. So where is this going in the future? I mean, I think that what's driving the field is high-throughput sequencing and high-throughput genotyping, combined with bioinformatics, in the presence of having lots of populations to do this kind of work. That's what's really necessary-- you've got to have well-phenotyped populations. In my lab, these are all the different populations that we have for asthma. We've got these extended pedigrees. We've got affected sib pairs. We've got trios, and we've got individual cases and controls, so that we can test the genes in multiple different populations and under different conditions. So why don't I stop there, and I'd be glad to answer any questions that people have about any of the things that I said. AUDIENCE: [INAUDIBLE] SCOTT WEISS: Structure of proteins? AUDIENCE: [INAUDIBLE] SCOTT WEISS: Well, I mean, I think what that means is that you're actually getting into-- once you've got a relationship with a gene, what you have to do is really get down and figure out what the variants in that gene are and what they're doing, and that can proceed.
Human genetics can contribute to that at the level of genetic association. So, for example, Laurie Glimcher, who's an immunologist at the School of Public Health, identified a gene that controls T-cell differentiation. It's TBX21, or T-bet, is the name of the gene. And she created a knockout mouse, and when you knock this gene out in the mouse, you get tremendous airways responsiveness and allergic inflammation. It looks like an asthma gene in the mouse. And so we sequenced that gene, and then we started to look at-- we found a variant in the gene that's in the coding region. It's a nonsynonymous coding SNP. It's very rare. It only occurs in about 3% of people. But it turns out that coding region variant determines which patients who get inhaled steroids get better. The people that have that variant and get inhaled steroids have their airways responsiveness completely return to normal. AUDIENCE: Has this been published? SCOTT WEISS: We're about to submit it to The Lancet. We're actually working with Laurie to-- it's pretty exciting [INAUDIBLE]. And it's exciting because it's an example of how you can actually-- you don't even have to go to the animal model. And so she, then, has created her mouse model. She started to do some experiments with steroids, and steroids are probably important in controlling T-bet expression. And she didn't know that. So that's an example of structure-function relationships, where you're trying to figure out what a gene actually does. And it is important to recognize, there are some genes that have been around for a while-- there's a relationship to a disease phenotype-- but people still don't know how they work. So figuring out that structure-function stuff can take a long time, potentially. And doing the genetic association and the fine mapping may actually now proceed at a faster pace and not take as much time. But I think you can actually do a lot of structure-function stuff. Usually, what we do is when we get an association, we will type every damn-- we'll sequence that gene, we'll type every damn variant we can find in that gene in the population and look at everything that could be related to an interesting phenotype, because we're searching for clues to help our molecular biology colleagues figure out what the gene is actually doing. AUDIENCE: [INAUDIBLE] of that group [INAUDIBLE] see how [INAUDIBLE] changes [INAUDIBLE]. SCOTT WEISS: Well, you could do that. I mean, we're trying to work with this guy. He's got people in his lab who have ideas about how to get clues. So like the stuff that [INAUDIBLE] showed you last week, that gene CRHR1-- we know a relationship to steroid treatment response, but we don't know what the variant is in the gene. So we sequenced the gene completely, and now we've got two INDELs in that gene that sit right at intron/exon junctions. So the presumption-- what we're thinking is that those insertion-deletion polymorphisms may be changing alternative splice sites. So we're going to have to try to prove that. That's one of the hypotheses that we're going to investigate in the renewal of the grant. So you have to let the gene tell you where its variation is and how it might be contributing to phenotype. And so the first thing, usually, is to sequence the gene completely.
Second thing would be to then do a very careful analysis of the new variants and the resequenced variants that you found in relationship to the phenotype of interest or phenotypes of interest, and see if you can find either haplotypes or individual SNPs or insertion-deletion polymorphisms or transcription factor binding sites or things that could potentially explain the genetic association. So then you can do that, and then you have to go into an animal model and test those in a more rigorous way, usually. AUDIENCE: How far off do you think is the day when a clinician [INAUDIBLE] will be able to come to you and say, I have a disease. I have 500 [INAUDIBLE] cases and 500 controls. I think [INAUDIBLE] on the long arm of a chromosome. And I want to [INAUDIBLE]. SCOTT WEISS: So you're really asking, the question is-- I think the question you're asking is how far away is whole genome association? AUDIENCE: How far off is whole genome association where it's within the reach of significant but not impossible clinical studies? SCOTT WEISS: Max three years. Max. AUDIENCE: All right. SCOTT WEISS: I mean, George Church-- it's all about the genotyping costs, Zach. AUDIENCE: [INAUDIBLE] SCOTT WEISS: I mean, he-- listen, he thinks he's close to the $1,000 genome. So if he's really close to the $1,000 genome and SNP genotyping costs really drop, continue to drop as dramatically as they've dropped over the last three years, I would see whole genome association being within the range of a reasonable budget in a two or three-year period of time. AUDIENCE: All right, on that note, thank you very much, Scott. And--
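To make the dominant-model comparison described above for CD-14 concrete, here is a minimal sketch, in Python with numpy and scipy, of how one might compare a quantitative phenotype between carriers and non-carriers of an allele. The genotypes, the skin-test counts, and the use of a plain t-test are all illustrative assumptions, not the actual PGA analysis.

# Hypothetical dominant-model test: carriers of at least one C allele versus TT
# homozygotes, compared on a made-up quantitative phenotype (positive skin tests).
import numpy as np
from scipy import stats

genotypes = np.array(["CC", "CT", "TT", "CT", "CC", "TT", "CT", "TT", "CC", "CT"])
skin_tests = np.array([4, 3, 1, 5, 6, 2, 4, 1, 5, 3])  # hypothetical counts

carrier = np.array([g.count("C") > 0 for g in genotypes])  # dominant coding of the C allele

t, p = stats.ttest_ind(skin_tests[carrier], skin_tests[~carrier], equal_var=False)
print(f"carriers mean {skin_tests[carrier].mean():.2f}, "
      f"TT mean {skin_tests[~carrier].mean():.2f}, t={t:.2f}, p={p:.3f}")

With a real cohort the same comparison would be run with proper covariates and, as discussed above, replicated in multiple populations before the association is taken seriously.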
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_10_Association_with_Markers.txt
MARCO RAMONI: What I'm going to talk about after this little introduction about microarrays is how to analyze these data. And the principle that I try to present to you is that there is no such thing as putting your data into a freaking machine and expecting to get an answer. The type of analysis you make is always related to the question you're asking. This may sound like a completely obvious point. But the tragedy of a lot of this field is that it's not. And a lot of people usually try to answer the same question using different methods and different questions using the same methods, which is even more disturbing. What I'm trying to tell you today is what kind of problems you can tackle with this kind of data and what kind of analysis you need to answer each different question. It's going to be very basic. I will introduce some kind of advanced notions at the end. But most of the rest is very basic and is what is routinely done by people in papers, in genome centers, and things like this. And this is important for you. Because at the very end, I will tell you the bad news. You have an assignment. And you have to use a couple of programs that do this kind of thing. So what I'm going to do is start from the microarrays, tell you what you do with supervised classification and differential analysis, how you do prediction and validate your results, and how you do unsupervised analysis using basically clustering, for different types of methods and different types of experiments. And then, at the end, I'm going to talk to you about Bayesian networks, which are those things that a few of you know are my passion. This is exactly the same slide I had last week, two weeks ago. This is the central dogma of molecular biology. I will not recapitulate it here, except to say that DNA gets copied into RNA. RNA gets translated into proteins. Proteins do all the work. So last week, we talked about DNA. Today, we are talking about RNA, OK? So all the cells in your body come from the same cell. And they all have the same DNA code. What makes a neuron different from a fingernail is that the proteins that they are made of are different. So the same code will express different types of RNA, which in turn will be turned into different proteins. And these will give different cells their different nature. The idea of studying this is called functional genomics. What we talked about last week was structural genomics. We were looking at the structure of the DNA. Now we're looking at the action of this DNA. And we're looking at the function that each different cell performs while it exists. The aim of the game is to elucidate functions and interactions among genes. Now functional genomics is a very old thing. I mean, you don't need computer scientists to do functional genomics. Functional genomics means trying to understand what is expressed in a particular cell. And you can do it by hand. You can do it one gene at a time. And people have been doing one gene at a time forever. What has changed in modern functional genomics is the introduction of microarrays, which are these platforms that allow us to look at the whole transcriptome, all the RNA, every gene in a cell, and see what is expressed and what is not. This is what has changed. And this is why you need, at this point, computer scientists. But there is another change in this. There is an intellectual change in this, a very dramatic intellectual change in this.
If I have to pick one gene to see if this gene is expressed in a cell, I have to go in and read papers and, at some point, decide how to allocate my next two weeks or two years to see the expression of this particular gene. Sometimes I need money. So I need to put in writing-- to somebody else that hopefully will give me money-- why this darn gene is important, right? Now with microarrays, I don't need to do that. With microarrays, I use one microarray. And for a tissue, what I observe are 40,000 genes. I don't need to justify which gene I'm interested in. I look at all of them. And this has a very interesting consequence. One is cool: I can look at everything in action. I can try to be surprised by my results. But the other thing is that I get a very different type of information. That is, suppose I have been spending two years seeing if a particular gene is expressed in a particular cell. And what you get at the end are two pictures-- one of this gene in a normal cell and the other in the cell you're interested in-- and you see if there is some kind of change. And on one side you see a ball this size. And on the other side you see a ball this size. You take pictures and you send them in with your paper, right? This is what people do. And probably-- after you spend two years-- maybe even if the ball is not really that big, you are going to make a very long argument that this ball is really, really big. And there is some particular reason why you shouldn't take this ball as small as it is, right? Now when you have microarrays, you're measuring all the genes. So even if the ball is not that small, maybe there are a lot of other balls that are much bigger. So at that point, it's kind of difficult to say, you know, my gene is kind of interesting. Because it ends up that your gene is expressed like another 20,000 genes, right? So you get another piece of information by looking at the entire genome. You also get what are the most dramatic changes. What are the most dramatic things that happen in that particular cell? And this is kind of interesting because you produce a new intellectual style. The new intellectual style is not hypothesis-driven, and that has been a disaster for the biomedical research culture-- for us, basically-- and still is a big problem when you submit grants. When you submit grants, the way you write the grant is to say, this is my hypothesis. This is why I think this is true. This is why I think this is important and interesting. This is what other people have seen-- and myself, I have seen-- to support this hypothesis. And this is what I plan to do. Now here, the hypothesis-- what the hell is the hypothesis? I don't know. I have a very vague scheme. And I can say, well, you know, my hypothesis is that the genes expressed in prostate cancer are different from the genes expressed in the normal tissue. Gee, what a hypothesis. I mean, you don't need to go to grad school to come up with a hypothesis like this, right? But then the intellectual style is completely different. So we are heading into something that has some simple statistics and some simple technology but has a much broader impact on the way people think about biology. One of my dearest quotations is from a physicist of the 19th century who used to say that "there are two types of science, physics and stamp collecting." What this means is that physics provides mathematical, quantitative models of phenomena.
Stamp collecting is going around and measuring animals, putting them in your collection, your album, showing them to friends, and maybe arranging them in some way. What this thing is doing to the intellectual landscape of modern biology is, hopefully, to turn stamp collecting into highly quantitative science. The characters that are behind all these things are microarray technology-- microarrays. They are able to measure the expression of thousands of genes at the same time. And now we have microarrays that are able-- on a little piece of plastic this size-- they are able to measure the expression of 55,000 transcripts, which includes all the estimated 35,000, 40,000 genes in the human genome. Technically, a microarray-- although it is called an array-- is actually a vector. So for each cell in my array, I have a label that tells me the name of the gene there. And then I have the value of expression for that particular cell. So at the end, when you put them together, they become a vector that associates to each gene its particular value of expression in a particular cell or tissue. They are laid out as arrays because, of course, putting down a vector of 25,000 genes, of 25,000 little cells, is less convenient from a geometrical point of view than putting down a square thing-- it takes less space. So they are called arrays, but that can be kind of misleading. And there are two types of arrays that are currently used most frequently. One is called cDNA and the other is called oligonucleotide microarrays. I'm going to tell you in a second what they are. How do these things work? They work by reversing the natural phenomenon of transcription, right? So the idea is that I have a special glue in each cell that is shaped exactly like the transcript that I want to measure. Then, I will have my tissues that go on this microarray. And they will hybridize. They will attach to the cells that are specific to them. And then, I wash the rest away. And what I have to do at the end is simply measure how much of this RNA is left on a particular cell. How do I do it practically? It works like this. I get a bunch of tissues. Let's say I have only one tissue, for the time being. I'm producing one, single microarray, right? I get my tissues. I extract the RNA. And I tag these pieces of transcribed RNA with some fluorescent dye. Then, I put them in the dishwasher, [INAUDIBLE] station, which is a microarray washer, actually. And I hybridize it. I put it there. And then I scan it. So what happens is that-- you remember, this RNA is left attached to these particular cells. And because I have tagged it-- it's tagged with a fluorescent dye-- I will have more intensity in those places in which a lot of things are attached. So once I wash it, I can actually use a scanner, like the one for your pictures at home, exactly a scanner. And that scanner will come up with a picture that looks like this one, in which each ball represents how much RNA is left in that particular spot. Because I have created-- or somebody else I pay has created-- that particular spot, I know exactly what transcript is there. And they can tell you that the third spot from the left is gene [INAUDIBLE] [? alfalfa. ?] I can go measure the intensity of this gene and then turn everything into a database. I know for each probe what it represents. And for each microarray, I can measure how much of that is expressed in my sample. Question? This is how it looks. These are cDNA microarrays.
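As a concrete picture of that array-as-labeled-vector idea, here is a toy sketch in Python. The probe names and intensities are made up; the point is just that each hybridization produces one labeled vector, and stacking several of them gives the gene-by-sample table that all the later analyses work on.

# Toy illustration: each array is a labeled vector of intensities; stacking arrays
# gives a genes-by-samples table. Probe names and values are hypothetical.
probes = ["probe_001", "probe_002", "probe_003"]

array_1 = {"probe_001": 812.0, "probe_002": 95.5, "probe_003": 2310.0}   # sample 1
array_2 = {"probe_001": 790.0, "probe_002": 410.2, "probe_003": 150.8}   # sample 2

# rows are genes (probes), columns are samples
expression = [[array_1[p], array_2[p]] for p in probes]
for p, row in zip(probes, expression):
    print(p, row)

Both platforms end up in this form once they are scanned and quantified; the cDNA slides on the screen are the first of the two.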
cDNA microarray-- I lied a little. You can't use scanners for this. You use some kind of laser stuff to read these points. But the idea is that in these microarrays, you copy the entire transcript of a gene. So I know how a particular gene-- what is the sequence that is transcribed for a particular gene. I make 1,000 clones or a million clones of this. And I put them in one spot. Now this microarray has two channels that I can read using laser scanning, OK? So I will have two samples. And I dye one in red and one in green. And I put them on this microarray. And they will competitively hybridize to this one. So if both things are highly expressed, what I will see is something that is yellow. If none of them is expressed, it will be some kind of black-grayish thing. And if the green is more expressed than the red, it will be greenish. And if the red is more expressed than the green, it will be reddish. And you can see here-- you see a lot of yellow balls, a few green balls, a few red, and a lot of black. What's that? AUDIENCE: Just a basic question. What kind of different information will you get from the [? array ?] as opposed to [INAUDIBLE] intensity. Do you get any different-- MARCO RAMONI: No, this was just because it was the original-- the original ones were made this way. To use a scanner-- which I guess is what you're thinking of-- you need to use silicon technology. And these are glass slides. So building this kind of thing is much easier. You can build these things at home. You can buy a robot that will do the spotting for you. The oligonucleotide microarrays-- I'll show you in a second-- use another type of technology that really requires a production line. So it's not something you can do. So these things actually offer you flexibility. If you're interested in 1,000 genes rather than 40,000, you can do it. The other thing is that it costs much less. The problem is that because you copy the entire transcript, when you take your RNA out of a cell, you crush it. Then you dye it. Now what happens is that there may be a lot of crap floating around that is really not related to your gene. There may be very, very small fragments that are going to hybridize to some random sequence in your clones, just because they are small enough to hybridize there. But still, they will bring fluorescent dye to that particular spot. So the precision of this measurement is not really as great as it could be. You get a lot of random hybridization. There are tricks you can play for this. So if you want to make a comparative experiment, you can put one condition in one channel and one condition in the other channel. Or you can use some kind of RNA soup that is not supposed to hybridize to anything and put it on one channel. So if you get a lot of random hybridization on one side, this will pick it up and make your point, your spot, yellow. In this case, you will treat yellow and black exactly the same way. It's not like, no, I'm undecided, both are up. You will say, well, this is up, but it's up because of some possible random hybridization. The resolution is to use computational methods. And these microarrays produced by Affymetrix are like Microsoft Office. It's something that everybody uses. They are far more expensive than they should be. And everybody hates Affymetrix. But still, you can't live without Microsoft. You can't live without Affymetrix. AUDIENCE: Was the last slide a cDNA array? MARCO RAMONI: Yes, it was. So cDNA means that you put the entire transcript.
And usually you have this two-channel dye. [INAUDIBLE] is a great expert in cDNA microarrays, who actually comes from one of the very first departments that made cDNA microarrays. And is [? Formica, right? ?] So oligonucleotide microarrays follow this idea. OK, I have my entire transcript. And my problem is this random hybridization. What can I do? Well, because I have the human genome-- the entire draft of the human genome-- I can take this gene and find out if there is a sequence, a small sequence in this gene, that is unique to this particular gene, right? So in this case, even if the broken piece of RNA in my sample is small, it will not hybridize there, because this sequence is too small and too specific. So the idea here for oligonucleotide microarrays is to say, I'm going to take my transcript-- the entire transcript, a subsequence of the transcript-- and I'm going to sample it 20 times, between 16 and 20 times, and find these small 25-base sequences that are specific to that thing. And, for good measure, I will create another sequence, a sequence that is exactly identical to this very specific sequence, except that the base in the middle is flipped. And then we check that the sequence with that flipped base in the middle is not specific to any other gene. So I have a positive control and a negative control. And this way, what I'm going to do is to have a very specific measurement of expression and a very specific measurement of random hybridization. Then, once I have these 20 measurements, I will find some statistics to put them together. It's not easy because these measurements are not independent measurements. But it doesn't matter. I will put these things together somehow. And the measure I get at the end is going to be pretty accurate, right? This is why they cost a lot of money. This is why a lot of computational work goes into it, because you have to search for all these sequences. And this is why sometimes these microarrays get completely screwed up. There was a famous case a couple of years back in which they created a new mouse microarray. And somebody-- I don't remember where-- reanalyzed the sequences of their microarray-- that new edition of the mouse microarray-- and found out about 25% of those were screwed up. They were not specific. They were not following the standard design. Hello? OK, so this is how they look. This is the scanned microarray. This is an Affymetrix microarray. This is how it scans. The little spots here are not the whole transcripts but are these probes that are sampling one particular, specific sequence. And they are scattered across the microarray so that if something bad happens to a corner, it will not affect everything else. Otherwise, you could have biases in the entire microarray. If they were one next to each other, your whole microarray would be screwed up. And that's how, in theory, one probe should look when it's hybridized. Up there, all these probes are more or less hybridized. And down there, the random-hybridization probes are not really hybridized. But the resulting measurement will be the difference between the real hybridization and the random hybridization for each probe, for each probe pair, the negative and the positive, and then a global measure to put them together, which I will not bother you with. So what's the problem? The problem is that this stuff costs $1,000 a pop. $1,000 a pop is a lot of money. And you remember what I was talking about-- the hypothesis-driven thing.
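A rough sketch of that probe-pair summarization-- and it is only a sketch; the real Affymetrix algorithms use more robust statistics than a plain average-- would look something like this in Python, with made-up intensities.

# Each gene is interrogated by ~16-20 perfect-match (PM) probes, each paired with a
# mismatch (MM) probe whose middle base is flipped. One naive per-gene summary is the
# average PM-MM difference. Intensities below are hypothetical.
import numpy as np

pm = np.array([1200, 980, 1500, 1100, 1320, 900, 1450, 1010], dtype=float)  # perfect match
mm = np.array([ 300, 250,  400,  280,  350, 220,  380,  260], dtype=float)  # mismatch

expression_estimate = np.mean(pm - mm)
print(f"average PM-MM difference: {expression_estimate:.1f}")

However the probe pairs get summarized, each of those chips still costs on the order of $1,000, and that is exactly where the replication problem comes in.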
When people are taking those pictures of balls for a single gene, usually they do it twice, at most three times. But if you're measuring 40,000 genes at the same time-- well, measuring twice is going to be a little problem, also because you don't have any hypothesis to prove, right? So this is where the major cultural clash comes in. When people analyze data, even in medical domains, the database looks like this. There are some variables and a lot of cases. Microarray data sets usually look like this. You have a lot of variables, thousands of variables, and very few measurements. And it's kind of funny to see these people that work on the genetics side and then on the genomics side. So when they design a genetic experiment for SNPs, they collect 5,000 patients, 2,000 patients, 3,000 patients, because that sample size is required to analyze a couple of SNPs. But then when they do microarrays, they expect to find the [INAUDIBLE] with 5 microarrays out of 45,000 probes. It's exactly the same people. So what can you do with this? Well, let me introduce you to a notion that will remain very precious to you. When people are confused-- as some people are-- you will have a very easy and fast answer. What is the difference between supervised and unsupervised? It is exactly the difference between a normal movie and a PG movie. Supervised means that there is something or somebody supervising. They're telling you things. This is what a supervisor does, tells you things. So a supervised thing means that I have either a human or some kind of signal that will tell me what a particular sample means, right? A typical supervised problem is-- let's try to decide what characterizes the people in a particular room. I get measurements from this room and that room. And in this room, this is a graduate course in functional genomics. In the other room is a class from the dental school. And let's find the differences. We have properties of these people. Let's see what is different. But I will tell you that people in this room are different from the people in that room. And this is what is called your training signal, the difference between the two classes. In an unsupervised thing, I have no supervision. I'm old enough to go to an R-rated movie. So the question in this case is-- I get a bunch of people-- are there groups among them? There are people that look more like others. There are people that go with other people. It looks more like, I can say, gossip, finding stories in these things. But they answer two very different questions. One is, what is different between these two groups of people? And the other is, what is similar, or what is related, or what are the stratifications, or what are the things that we have in common among these different people? It becomes clear. So what can we do with the microarray? With the microarray-- well, the first thing we can say is, OK, I have two experimental conditions. And my aim is to see which genes are expressed more and which genes are expressed less in this condition, right? So a typical example is cancer. I get a bunch of people with cancer. I get a bunch of people without cancer. I run microarrays and then see what is different. What does it mean? Well, it means that I have tissues from healthy cells and from tumor cells. And for each sample, I will create a microarray. And this is how my database, at the end, will look. So the first column represents the name of the gene, the name of the transcript we are actually looking at.
And the second column represents the value for that transcript for sample one, sample two, sample three, sample four, and sample five. And then, I'm going to tell you, well, sample one, sample two, sample three belong to one category, are in this room. And sample four and five are in that room. Go and find what's different between these two things. Now, what does it really mean-- what is different? Remember, if we do it by hand, we can take pictures of balls and say this ball is bigger than this one. But if I have 50,000, 40,000 balls, what am I going to do about this? Well, what I want to do, in this case, is to find what is more expressed in one condition rather than another. And the currency of these measurements-- I will tell you about the precision in a second-- is called folds. A fold is how many times one condition is more expressed than the other condition. Now the problem is that this is good when you have one single ball. If I have 50 patients, what the hell do I do? Should I take the mean? Sure. I can take the mean, but then I don't have any measure of the variance in my data. Maybe I have two things whose means are very far apart. But because the variance is very big, they will overlap. So there is not much evidence that I can collect. So other measures are things like standardized differences-- take the difference and standardize it by the variance-- which will somehow take into account the variance, under the assumption that these things are normally distributed, so that that kind of variance has some statistical meaning. Then what do you do? Well, then I decide the threshold. I get the landscape of this thing. And I'm going to say the top 50 genes are what I actually like, and the bottom 20 genes. The top 50 are the ones that are more changed in one condition, and the bottom are the ones that are more changed in the other condition. I'm going to pick up this stuff and see if there is anything interesting. What people do typically is to make up stories or to pull out a protein-- like people are doing with their project-- pull out the protein from a gene and actually find out if this protein does something to my particular phenotype. This project, which is a project about preeclampsia-- the investigators there got only two microarrays, one from a preeclamptic placenta-- preeclampsia is a disease that women get during pregnancy; it's a very bad disease-- and one from a normal placenta, compared them, and pulled out a protein. They put the protein into mice and found out the mice were getting preeclampsia. It's this kind of exploratory thing. In this case, what I'm interested in is finding out new hypotheses that I can then test in some kind of laboratory setting. As I said-- because, in this case, I have only two samples. But suppose I have several patients, what can I do? One other problem we have here is that we are not really sure what kind of distributions are behind these microarray measurements. So what people say is, well, because we don't know the distribution, let's use some distribution-free method, which is a good idea. But it's an idea that rests on the hope that there is some free lunch in life. And there is no free lunch in life. A nonparametric method, a distribution-free method, requires a lot of data, because you have to do two things. First, you have to decide what kind of distribution you have, implicitly. And then, you have to run your test. People use parametric methods typically because they have an idea of the distribution.
And so they need less data to fit this test. If you have few data and no idea of the distribution, you are screwed. And running this kind of test tends to be kind of dangerous. One, because usually your sample size is too small to run a proper nonparametric test. Two, because frequentist people have these things called p-values. P-values are very interesting animals. What is the p-value? Who gives me a definition of p-value? AUDIENCE: [INAUDIBLE] MARCO RAMONI: Speak up. AUDIENCE: The probability that-- MARCO RAMONI: Two things are different? AUDIENCE: --that the means [INAUDIBLE] very different. MARCO RAMONI: OK, so this is what Bayesians believe. But to do that-- you work with Bayesians too much. And that's a very reasonable measure. I'm interested in finding what is the probability that these two things are different, right? This is not the p-value. The p-value is the probability that you will make a mistake if you repeat the experiment N times, computed as the number of times you would be mistaken by repeating this study, which is an extremely [? masturbational ?] measure. There is no relationship with the probability of your hypothesis. And it's very difficult to put into practice. First of all, people should explain to me why I should repeat my experiment 100 times when I already repeated it 20. And this is what I know, right? The rest is educated or uneducated guess. But the p-value, in this case, has this other little problem. Because I repeat the test a lot of times, sometimes things may come up just at random. So if I say, OK, I'm going to accept something if my probability of error is 5%-- so the p-value threshold is 0.05-- then if I test two hypotheses, to maintain the same level of error, because I have the probability that something will come out at random-- assuming that these two tests are independent-- I have to shrink the threshold for each p-value, right? So my real threshold, to keep the same 5% strength of evidence overall, would be roughly that 5% split between the two tests. Now imagine if you have to divide 0.05 by 40,000 tests. What kind of threshold do you get? Nothing. Nothing will pass that particular test. This is called the Bonferroni correction. Nothing will pass the test. I have a very dear friend of mine who was very frustrated by this and decided to be a biologist after trying to use these tests on this kind of experiment. Because the threshold, the accepted evidence of 0.05, will turn against you when you are testing your hypotheses 40,000 times. Besides, this is under the most lenient condition. Because you assume that all your hypotheses are independent. But we know that this is not true. We know that these genes regulate each other. So the probability that something is up is not independent of the probability that something else is up, OK? So even under the most simple condition, we have a little problem with this. So what can we do? Well, I will tell you in a second what we can do. But what we can do further-- not for the experiment but in general-- so once I have the differences-- OK, I can go back to my lab and put the protein into a couple of mice and see what happens. But isn't there anything better that I can do using these differences? Well, maybe I can make predictive models. Predictive models, rather than using proteins-- one protein at a time, what are called markers-- are able to put together a batch of proteins and provide a profile, a prediction for a particular outcome. In this case, maybe I can predict if a particular tissue is a tumor or it's not.
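A minimal sketch of these two ideas-- score each gene by a standardized difference between the two conditions, then see how punishing a Bonferroni-corrected threshold becomes-- might look like this in Python with numpy and scipy. The data are random numbers standing in for log-scale expression values, so the exact counts are illustrative only.

# Per-gene two-sample t-tests across two conditions, then a Bonferroni-adjusted
# threshold. The expression matrix is simulated; only the shape of the problem
# (tens of thousands of genes, a handful of samples) is meant to be realistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_tumor, n_normal = 40_000, 10, 10
tumor = rng.normal(8.0, 1.0, size=(n_genes, n_tumor))    # log2-style intensities
normal = rng.normal(8.0, 1.0, size=(n_genes, n_normal))

t, p = stats.ttest_ind(tumor, normal, axis=1)            # one test per gene

alpha = 0.05
bonferroni = alpha / n_genes
print(f"nominal threshold {alpha}, Bonferroni threshold {bonferroni:.1e}")
print(f"genes passing 0.05:       {(p < alpha).sum()}")
print(f"genes passing Bonferroni: {(p < bonferroni).sum()}")

# a fold-change view of the top-ranked gene (difference of means on a log scale)
top = int(np.argmax(np.abs(t)))
print(f"top gene {top}: log2 fold change {tumor[top].mean() - normal[top].mean():.2f}")

A list of genes that survives a threshold like this is still only a list of differences; turning it into a predictor of the outcome is the separate, supervised-classification step.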
I can predict if a particular tissue is a type of tumor or is not a type of tumor. Maybe this type of tumor requires different therapies. Maybe I can predict how long it will take for a particular tissue to come back as a cancer, because I find a particular signature. Now how do I find the signature? I have to play a game called feature selection. Feature selection is-- I have a class. I have all these predictors down there. And I'm going to select some of them as good predictors for [? C. ?] I cannot use all of them, right? Why can't I use all of them? Because good prediction comes from specificity, right? I'm glad you agree with this, because it's not really such a normal statement to accept. People believe that if you use 40,000 variables, you're going to get better predictions than if you use five variables, independently of the quality of these variables. I mean, as long as these five variables are a subset of the 40,000 variables. But we all agree that this doesn't happen, right? Right? If somebody has a doubt, I have a joke. OK, no joke. So what do I do? Well, I want to identify those genes that predict my class, the set of genes that predict the class. So if I do feature selection, I typically increase the predictive accuracy. I get a more compact representation. I can get some insight into the process that may be happening. Although, remember, this is not just differential analysis. It's something that I want to use as a prognostic or a diagnostic set of markers when combined. And why are differences important? Well, because we start from the assumption that if a gene is exactly the same across the two conditions, it's very unlikely that it will be able to discriminate between them. So classification, which is this task plus feature selection, sometimes looks very much like differential analysis, but it's not. It has a twist at the end. And the aim of my game is not really to find out what is different. It's finding what is predictive. And the example is-- suppose I give you two groups of people. And you don't know it, but one group are men and the other group are women. And then I give you a list of properties of these people. And there will be a lot of differences. Women tend to be slightly shorter than men. Women tend to have more hair at a certain age. Women tend to make less money. But there are a couple of anatomical differences that are really good predictors of these differences. It doesn't mean that there is no other difference. But it means that that particular anatomical feature is a perfect predictor between male and female. So if you're doing differential analysis, you may also be interested in the fact that these people have differences in income. But if you include these factors in your predictive model, maybe because I'm short and don't make a lot of money, you end up classifying me as a woman, OK? It may confuse your ideas. So we were saying, non-parametric methods have a little problem with this, because we don't have enough samples. But we have classifiers that are parametric classifiers. In this case, we make an assumption about the distribution of our data. And then we try to fit our data into this distribution, thus saving us a lot of effort in collecting more data. Because data are very expensive and hypotheses are cheap. We can actually go and validate our hypotheses afterward. So this, as many of you know, is called a Naive Bayes Classifier, in which I assume that each gene down there is conditionally independent given the class.
It doesn't mean that it is independent, right? Like we were doing before-- we were doing independent tests, you remember? The independent tests assume that they are marginally independent. In this case, they are conditionally independent. Conditional independence means that once I know the class, I don't give a dime about the dependency between these two genes. Maybe there is a very complicated relationship between gene one and gene two. But because my interest, in this case, is to find a classification, I don't care. Because as far as the classification of the class is concerned, these things are not related, right? So it's like a weak independence assumption, where weak, in this case, means good. Because we are not forcing an assumption that is too strong onto your data analysis. Once I have that, I run-- let me go back a second to this-- the other one. I want the other picture. Come on. See this? In this case, my genes are marginally independent. The arrow is going the other direction, right? So all these genes [? cause ?] my class, but they are independent. And this is the structure of a standard classifier. In the other case, they are conditionally independent given the class. So once I have this particular model, I have selected which are the genes that I like. I have estimated the parametric model. Then, I can make predictions. If I had used some kind of differential analysis with a non-parametric test, by definition, I don't have parameters. It's non-parametric. So I cannot really make a prediction with the parameter set [? lower. ?] What people use are things called mixtures of experts, in which they assign some kind of arbitrary weight to different genes. And each gene will be like an expert, judging if this particular tissue is a cancer or is not a cancer. But these weights are actually embedded in any parametric model you derive, which is the probability of observing that particular gene expressed, given the fact that you have a change in your class, that the class is tumor or not tumor. So you can apply Bayes' theorem, and reverse those arrows, and obtain the posterior probability that your particular sample is a tumor or is not a tumor. This is how it works. Well, this is what I just said. [INAUDIBLE] had this one. So I have a class. And I'm interested in the probability of the class, given the sample molecular profile, which is my new patient coming in. And by applying Bayes' rule, I can actually compute it. Because the probabilities I have are the probability of each feature given the class, which is the direction of the arrow. Bayes' rule will allow me to flip these arrows backward, apply this as a product, and put all these things together into a single posterior probability. It's just combining these probabilities. I have another interesting thing with this-- another goodie with this thing is that I can actually validate my stuff. Well, to validate this could mean to go back to my lab and look at a couple of things. But here, validating means to see how good my model, built from these 40,000 genes, is at prediction. And the best way to validate something is to have an independent test set. I collect patients here at Harvard. I build my model. And then, I call up my friend in San Antonio and say, listen, I have this model. Do you have 50 patients for me that I can classify, for whom you know the diagnosis already? And if he has them, then cool. I can really say, this is the accuracy of my model from here to there. But sometimes, we don't have these things. Well, quite often we don't have these things.
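A minimal sketch of that Gaussian Naive Bayes calculation-- one mean and variance per gene per class, per-gene likelihoods multiplied because they are assumed conditionally independent given the class, then Bayes' rule to get the posterior-- could look like this in Python. The training values and the new sample are made up.

# Gaussian Naive Bayes by hand: rows are samples, columns are genes. All numbers
# are hypothetical log-scale expression values.
import numpy as np
from scipy.stats import norm

tumor_train  = np.array([[9.1, 4.0, 7.2], [8.7, 3.5, 7.9], [9.4, 4.2, 6.8]])
normal_train = np.array([[6.0, 6.1, 7.1], [5.8, 6.8, 7.4], [6.3, 5.9, 7.0]])
prior = {"tumor": 0.5, "normal": 0.5}

params = {
    "tumor":  (tumor_train.mean(axis=0),  tumor_train.std(axis=0, ddof=1)),
    "normal": (normal_train.mean(axis=0), normal_train.std(axis=0, ddof=1)),
}

def posterior(x):
    # log P(class) + sum over genes of log P(x_g | class), then normalize over classes
    logp = {c: np.log(prior[c]) + norm.logpdf(x, mu, sd).sum()
            for c, (mu, sd) in params.items()}
    m = max(logp.values())
    unnorm = {c: np.exp(v - m) for c, v in logp.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

new_sample = np.array([8.9, 3.8, 7.5])
print(posterior(new_sample))   # posterior probability of tumor versus normal

Reporting how well a rule like this classifies honestly requires samples the model has never seen-- which is exactly the independent test set that, as just noted, is so often missing.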
So how can we do this cheaply? Cheaply-- we can use cross-validation. Cross-validation means that I take my data set. I split it in five parts. And I use four parts to learn my model. And then, I predict the fifth part. And then, I take another four of these five parts. I build another model. And I predict the remaining fifth part. This decreases the sample size, which was already small to begin with. So what happens is that people use a thing that is called leave-one-out cross-validation, where the number of sets is equal to the number of samples. What that means is that I pull out one sample. I build a model on the other ones. I try to predict the sample that was taken out, of which I know the classification. This is an example. One of the first predictive models that came out, in 1999. We have two types of leukemia-- ALL and AML, acute lymphoblastic leukemia, acute myeloid leukemia. And as you can see under the microscope, they are very difficult to diagnose. So what these people at Whitehead did was to say, well, let's collect, I think, 27 and 11 patients, right? And what they did was to create a dummy vector of zeros and ones and then correlate the gene expression-- sorry. The columns are patients. The rows are genes, right? And now I don't remember if the blue is underexpressed or overexpressed. But what it means is that they take some kind of average to represent this picture. And the positive distance of the point from this average is the intensity of the red. And the negative distance is the intensity of the blue. So the more intense the color, the farther your point is from the mean of these values. And at the same time, the direction of this distance is given by the color. So if it's dark blue, it's very negative. If it's dark red, it's very positive. So what they did was to correlate these genes and pull out the top 50. So the 50 that correlated most with this dummy vector, with the positive correlation and with the negative correlation, 50 and 50. And what they did then was to make a mixture-of-experts prediction and see what accuracy they could get on their own patients. And since then, there have been gazillions of papers written like this. I want to stress the fact that, in this case, we are not interested, again, in what is really different. We are interested in finding a molecular classification for these things. The hope here is that one day you can build a little chip-- and they are really doing this for leukemia-- on which you can put some specific genes and have a classification that will tell you this patient has this particular type of leukemia, this patient has this other type of leukemia. OK, so now I am going to talk about something that is [INAUDIBLE], a controversial thing. I thought last night about including this thing. But then I said to myself, yeah, as long as I tell you that what I'm going to tell you may be kind of controversial, it's OK if I tell you that, right? And this is why you want to go to school to be a professor. Because then you can say controversial things. They cannot fire you, hopefully. One of the things people do to identify differences more easily-- even ones that don't pass the threshold-- is to deflate the variance, OK? If I have two samples that are very far-- but if I find a way to squeeze the variance of these cases, then I will have a much smaller variance. And I have more chances that my changes will be significant, right? Because the variance would be smaller.
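A minimal sketch of that leave-one-out recipe might look like the following-- not the weighted-voting scheme of the 1999 paper, just the general pattern: hold one sample out, select the most discriminating genes and build a simple nearest-centroid rule on the rest, then predict the held-out sample. The data are simulated, and the gene selection is deliberately redone inside each fold so the held-out sample is never used to pick the genes.

# Leave-one-out cross-validation around a nearest-centroid classifier built on the
# top genes by a signal-to-noise-style score. All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_per_class, n_top = 500, 15, 50
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_genes)),
               rng.normal(0.4, 1.0, (n_per_class, n_genes))])   # samples x genes
y = np.array([0] * n_per_class + [1] * n_per_class)

correct = 0
for i in range(len(y)):
    train = np.ones(len(y), dtype=bool)
    train[i] = False                      # hold out sample i
    Xtr, ytr = X[train], y[train]

    # gene selection on the training fold only
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    s0, s1 = Xtr[ytr == 0].std(0), Xtr[ytr == 1].std(0)
    score = np.abs(m1 - m0) / (s0 + s1 + 1e-9)
    top = np.argsort(score)[-n_top:]

    # nearest-centroid prediction for the held-out sample
    d0 = np.linalg.norm(X[i, top] - m0[top])
    d1 = np.linalg.norm(X[i, top] - m1[top])
    predicted = 1 if d1 < d0 else 0
    correct += int(predicted == y[i])

print(f"leave-one-out accuracy: {correct / len(y):.2f}")

With that said about validation, back to the variance-squeezing business, which is the controversial part.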
Now this is something that, for any other type of data analysis, will land you at least in disrepute, sometimes in jail. If you do this on a company budget or if you do this on a clinical trial, you go to jail. In microarrays, people don't go to jail. Because it is something that originally made sense. Remember cDNA microarrays? cDNA microarrays have two channels. Now we know that, by design, there is an imbalance between these two channels. One channel is more intense than the other. So if I'm comparing two samples, what I may come up with is something that looks like this. So I have the two microarrays that are lying on two parallel lines, right? And you see that there is a bias. All the red are on one side and all the blue are on the other side. So what people used to actually do for this kind of platform, because you have two channels, is to try to reconcile these two channels by studying the distribution of these two things and trying to put them one over the other. So it is a form of correction that you do because, by design, you know that your platform will introduce some bias. And this is fair. This is good. The problem is that when oligonucleotide microarrays were introduced, people just blindly took these things and tried to apply them to these microarrays. And you start coming up with a couple of problems. First of all, oligonucleotide microarrays are not two channels. They're one channel. So suppose I have 50 patients. What do I do there? Am I going to reconcile all the patients with the first patient at the beginning? And what happens if I change this patient? Are my genes going to change? Yeah, you bet so. So now if you really want to have a great success talking with biologists, go and tell them that they shouldn't normalize. Because there are about 100 different normalization methods of this type. And people are confused. But people are confused because there is really no need. People are not confused about normalization on cDNAs. People are confused about normalization to squeeze your variance and get better results. Because in reality, even when you have a design with two channels-- say I have paired case and control microarrays-- you actually get results that look like this one. Now these are microarrays that come from an institution on this [? street ?] to which I am not affiliated and nobody here is affiliated. So I can actually speak about them. And this is a good example of why not to do normalization. So these are people before and after treatment, OK? These are paired experiments because it's the same person that is sampled before treatment and after treatment. So you remember those lines that were going one after the other? It means that we were plotting the intensity of one channel against the intensity of the other channel. So look. We plot this microarray against this microarray, which is the microarray before and after, right? So in this case, yeah, more or less, it looks like the other one. You remember? Now look at this one. Can you imagine any transformation that will put those things along the same line? Yeah. Look at this one. So in this case, what happened is that there is something that is highly screwed up. And, again, these are following exactly the same experimental design that cDNA followed, although the platform is absolutely different. So my advice as far as normalization is concerned is, don't change your data-- the variation may be useful. But do look at your data, because they may contain some important information.
This microarray is completely screwed up and should be either removed, redone, or something should be done about it. OK, so what have we learned? We have learned that we can actually find differences among samples in different conditions. We can make predictors. Have we learned anything interesting about how the genome works? Not really. We have learned nothing about the relationships among genes. Although we are measuring all of them at the same time, we have completely disregarded-- actually, we have fought against the idea that these things could be related. We are simply interested in finding something that was different in two conditions, or simply interested in finding something that, put together, could predict this condition. That's it. This is where supervised classification brings us. If we want to take advantage of the fact that we measured all these genes and we observe the genome in action to try to decode something about the genome, then we need some different method. And we don't need supervision all the time. It's like when you're a kid. If there is supervision, there is very little fun. So the easiest thing we can do is to say, well, OK, forget about supervision. I've got this bunch of genes in different conditions. Forget about these conditions. I don't care about these conditions. What I want to see is which are the genes that behave more similarly across all these different conditions. It's like having a car, right? You try to understand how it works by kicking it, kicking it in different points of the car, and then seeing how the things go together under different stresses. So if I kick the wheel, if I kick the trunk, if I kick the door, what happens? How do these things move together? What is the relationship among these things? There was a nice analogy some time ago about how you study these things-- this was for sequencing the genome. Well, it's like, in the future, somebody comes up with a Volkswagen. They discover a Volkswagen [INAUDIBLE] somewhere. And they have no idea what it is. So to understand how this works, they take the Volkswagen and they throw it off the cliff. And then, when it's down, they try to put the pieces together again, right? This is what somehow we're trying to do. We are breaking this thing down with some kind of stresses and trying to see which parts behave together. So, in this case, it's like we had 1,000 Volkswagens-- well, 100 Volkswagens. And we keep throwing them down. And at the end, when they are down-- because we don't know how to open the engine-- when they are down, we will see there are some pieces that are closer together. And they remain closer together. And this is independent of the fact that these two things fall to the left or to the right of the main body of the Volkswagen. So a simple thing is to say, well, let's measure correlation among these things. Genes [INAUDIBLE] supervision. They are under a lot of stresses. These could be different compounds that are treating a particular disease. These are maybe different types of cancer. I don't care. I don't want to find a classification. I just want to find out which are the genes that go together across these conditions. If I use correlation, the only thing I can do, though, is to look at pairwise comparisons, right? I can only say that one gene goes with another gene. A correlation is a distance between two points. I cannot have groups of three, or five, or 15. How can I put these things together?
Well, I can use another type of clustering called hierarchical clustering. Hierarchical clustering starts putting things together. But when it puts two things together, it creates some kind of dummy gene, which may be something like the average of these two genes. And then it tries to correlate this average profile, this average gene, with other genes. So, at the end, the result would be something like this. Again, it's like the blue and the red. In this case, it's green and red. These are the Stanford colors. Whitehead uses blue and pink. Duke, I think, uses yellow. Johns Hopkins uses green and blue-- well, a few combinations of this. But you can actually recognize at least the platform they're using by the color of their pictures. So in this case, this is a Stanford picture. Again, the green is down and the red is up, or vice versa. And you can actually, by visual inspection, see that there are some points that are very highly expressed, away from the mean, and some that are very lowly expressed, away from the mean. So this is the zoom of that picture. And you can see that these things are creating a tree, or dendrogram. And this tree will put together groups of genes, not only two genes. And the problem here is that you don't really have a good measure to decide when you've made a group. Because, again, you have one single tree that will combine all of them in different orders. So, technically, this is not-- although it's called clustering-- clustering means to put things together and divide them. Technically, this is a sorting algorithm by which I put a particular order-- in this case, a partial order-- over these things. And then some knowledgeable biologist will come and say, oh, among these people here in this group, I see that there are, among this group-- I see that these genes are all related to a particular process. So maybe also this gene that is right embedded between them is related to the same process. And maybe it's apoptosis. And these are five apoptotic genes. And then, they find something else. And we create another group. But these groups, these different colorings-- the pink, the purple, and the red there-- are handmade by somebody with a lot of patience that put them together. AUDIENCE: [INAUDIBLE]. MARCO RAMONI: Say again. What? AUDIENCE: The trees are made by hand? MARCO RAMONI: No, no. The trees-- sorry. The tree itself is built through some kind of metric. I don't know why it's not coming-- OK. So I compute the correlation between these two points, these two vectors of values. Then, I create, let's say, an average value here. And then, I draw these two points. And I consider this new value that I have created as a new member of my data set. Then I see what this correlates to. In this case, this correlates to this one. So the highest correlated thing is a gene. And this creates a new hypothetical thing, which is the average of these two and this one. So what happens is that, at the end, I create a structure like this. But the problem is that because they are all measures, at the end, they will have one single tree. So how do I create blocks? The way in which blocks are created-- when I say, color this in purple and this in pink-- these were handmade. I will tell you in a second how you can avoid doing this by hand. I can do something more interesting also. That was a temporal experiment, [INAUDIBLE] second temporal experiment. So I knew the order of these microarrays.
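A minimal sketch of that kind of average-linkage, correlation-based clustering, using scipy on a small made-up expression matrix, is below. Note that even here the tree has to be cut somewhere to get actual groups-- which is exactly the hand-coloring problem just described.

# Average-linkage hierarchical clustering of gene profiles, with one-minus-correlation
# as the distance. The expression matrix is random, for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
genes = rng.normal(size=(30, 12))            # 30 genes measured across 12 conditions

dist = pdist(genes, metric="correlation")    # 1 - Pearson correlation between profiles
tree = linkage(dist, method="average")       # the dendrogram, as a linkage matrix

groups = fcluster(tree, t=4, criterion="maxclust")   # an arbitrary cut into 4 groups
print(groups)

The same machinery can just as well be run on the columns-- the samples-- as on the rows, which is where clustering the patients themselves comes in.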
But sometimes, I'm not really interested only in the way in which genes go together. I'm also interested in finding some new class among patients, right? This is a very interesting paper from 2000 in which what these people did was to try to cluster genes and patients simultaneously. And what they came up with were groups. You see those groups up there. The groups up there are not a dendrogram. Those are groups of patients based on some selection of genes that are more expressed across the two conditions. And then what they did was to find out that-- if you look at the survival time-- how many of you know what a Kaplan-Meier curve is? Everybody. OK, so if you look at the Kaplan-Meier curves of those groups, you see that there are very significant differences in survival, OK? So in this way, I can discover not something that is really about genes but something that is about the overall classification of the disease. I find new classifications for diseases, with interesting clinical consequences. Again, the problem is I have to do this darn coloring by hand. Is there a way by which we can actually avoid coloring this stuff? Yeah. There is a way. And this is the idea. If you want to cluster, it means that you have to make differences among things. So you can decide arbitrarily the number of clusters. And say, OK, I have 50 clusters. And you divide everything into 50 parts. But why not 49, or 38, or 15, or two? So the central notion of clustering is similarity. If we have a definition of similarity that is specific enough, then this similarity will allow us to say when we can actually cluster without creating a threshold, just a conceptual definition of similarity. So I have to postulate this description of similarity. And I need a piece of theology before this. But let me postulate this. In statistics, you don't believe that what you observed was directly created by God. What you believe is that there are some processes that you don't observe that generate the data that you observe with some randomness, some measure of uncertainty. Now let's make an example. Let's suppose we take the electrocardiograms of each of us. And, hopefully, especially for me, all these electrocardiograms would be different. But, hopefully, they would be coming from the same process, which is the process of a healthy heart. Mine will be slightly different because it's small, but probably it will not be different enough from yours to say that this is completely different stuff. Now suppose we go to the Brigham, to cardiology at the Brigham, and we take electrocardiograms of people there. There, I expect people to have differences between themselves that are great enough to be generated by different processes, different pathologies of the heart. Now I will posit to you that two things are similar if they are generated by the same process. And two things are different if they are generated by two different processes. And if you buy this story, then I can give you a method to compute when something is generated by the same process and when something is not. How? Well, we know that these processes that we do not observe, but that underpin the data that we actually observe, generate our data with some kind of uncertainty-- that is, there is a random process that is generating data from them. An example is aging, right? Aging has a particular effect on people. It usually makes you wealthier; up to a certain point, it makes you stronger; after a certain point, it makes you weaker; it has an effect on your marital status.
You tend to get married, and then divorced, or widowed, or whatever. Coupled with other variables like gender, it can have other physical effects: you can lose your hair if you're male, and so forth, right? So if I find somebody who, at 13, is on the verge of his third divorce, that's not impossible, but I would find it kind of unlikely. Why? Because there is a process called aging that dictates, more or less, that people on their third divorce usually have to be at least 35. So if this guy is 13, it's difficult. Not impossible, but difficult. So we have these general expectations, which stem from the fact that there are these processes generating the observations we have, constrained by other things -- as we said, losing your hair is constrained by gender; the probabilities change with gender. At the same time, once I observe the data, I can tell you that something is probable to have been generated by a particular process, and something else is less probable to have been generated by that process, right? And this is what we want to do. We want to compute the posterior probability that a set of processes is responsible for my data -- the probability of M given D, where D is the data -- for each model, for each way of combining my clusters. Then I can compare these scores and find out which is the most probable way of combining the clusters. At the very end, what I will have is a bunch of clusters, not simply a tree, not something that I have to cut with a threshold, and I will be able to tell you that, if two things are put together, they are N times more probable to have been generated by the same process than by two different processes. Interesting paper -- you're going to read it. This is how it works. By Bayes' theorem, the probability of the model given the data is equal to the probability of the data given the model, times the prior probability of the model, divided by the probability of the data. I will not delve into the details, but at the end of the day, under some assumptions -- the assumption that, before looking at any data, all models are equally probable, and the assumption that we are comparing our models on the same data, which is usually what we do: we have one set of expression data and we want to find the best model -- what we can compute is the probability of the data given the model, which is proportional to the probability of the model given the data, and we can therefore use that as a score. This quantity is called the marginal likelihood. So we can search over all these combinations and find out which is the most likely combination, the most probable combination, in this case of genes, given the data that we observe.
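To make the "same process versus different processes" idea concrete, here is a toy sketch in Python. It uses a simple Gaussian model with known noise variance and a conjugate Normal prior on the mean, so the marginal likelihood can be written in closed form; this is not the model in the paper the class will read, just an illustration of scoring the two hypotheses, and all the numbers are invented.

```python
# Score "one process generated both profiles" vs. "two separate processes"
# by comparing marginal likelihoods under a toy conjugate Gaussian model.
import numpy as np

def log_marginal_likelihood(x, sigma2=1.0, tau2=10.0):
    """log p(x) for x_i ~ N(mu, sigma2) with prior mu ~ N(0, tau2),
    integrating mu out analytically."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s, ss = x.sum(), (x ** 2).sum()
    quad = (ss - tau2 * s ** 2 / (sigma2 + n * tau2)) / sigma2
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.log(1 + n * tau2 / sigma2)
            - 0.5 * quad)

# Two hypothetical gene expression profiles
gene1 = [2.0, 2.3, 1.8, 2.1, 2.2]
gene2 = [1.9, 2.2, 2.0, 2.4, 1.8]

# Hypothesis "same process": one hidden mean generated both profiles.
log_same = log_marginal_likelihood(gene1 + gene2)
# Hypothesis "different processes": each profile has its own hidden mean.
log_diff = log_marginal_likelihood(gene1) + log_marginal_likelihood(gene2)

ratio = np.exp(log_same - log_diff)
if ratio > 1:
    print(f"Merging is {ratio:.1f}x more probable than keeping the genes apart")
else:
    print(f"Keeping the genes apart is {1 / ratio:.1f}x more probable than merging")
```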
Now let me move on; these are a couple of advanced topics, and from here on this is not subject to examination, for the test. Suppose I'm interested in something like control. Have we learned anything about control so far? Well, we have learned that things go together, that things are similar, but we haven't really learned anything about how things control other things. To see how genes control other genes, we need a very important experimental design, which is a temporal experiment: we need to see what happens from one time point to another. And you say, well, that's easy: I take this clustering method, I put the time points together, and I will find some kind of similarities. Can I do that? No. Why? Because measures like correlation, or distance measures, assume that all the observations you have are marginally independent: what happened to patient one on gene one is completely unrelated to what happened to patient two on gene one, right? But with time it's really, really different. Time means that where I am now depends on where I was five minutes ago, 10 minutes ago, 30 minutes ago, 100 minutes ago. So if I keep measuring the same system over time, my observations will not be independent. Let's put it this way: if I measure things over time, I don't have grounds to safely assume that my observations are independent, because assuming that observations are independent is a simplification, right? If I have a model that accounts for dependency, I can always reduce it to a model of independence, but I cannot go the other way around. Let me give you a practical example. These are two pairs of genes, two genes in the top panel and two genes in the bottom panel, and you are measuring the distance between the genes in each pair. The correlation of the two genes up there is something like 0.6, and the correlation of the two genes down there is about 0.8, right? But now consider the memory of time, and look at the first picture. Except for the first point, when the first gene moves in one direction, the second gene moves in exactly the same direction, right? They never cross each other. At the second point, one goes down and the second goes down. At the third point, the first goes up and the second goes up. Then one goes down, and the other goes down; down again, and the other goes down again, a little less -- look at it. Now look at the other pair, the one with the higher correlation: those two genes always move against each other. Every time one gene goes up, the other gene goes down. So if I am actually interested in the dynamics of my system -- while correlation would call the second pair more similar than the first -- a good measure, keeping in mind that I care about the dynamics of change, would require a different perspective, a measure that takes into account what happened before, and that measure would put the first pair closer together than the second. How can I model this? I can use something called autoregressive models. Autoregressive models are very simple. There are many ways of doing this; this is just one example of how to take your past into account. I have a time series of dependent observations, and what I say is: I assume that the point I observe at this moment is independent of the remote past, given its recent past, right? To know that I'm here now, you don't really need to know where I was the day before yesterday; you need to know where I was 10 minutes ago, an hour ago, maybe two hours ago. The predictive value of knowing that, two days ago, I was five miles from here is going to be very, very weak. So you can summarize your data, summarize your expectation of somebody being here, by forgetting the remote past and considering only the recent past. The most recent past could be a single point, and in that case you can create a model in which you regress your present, my state now, on your immediate past. In that case, you're assuming that my observation is independent of all of my past, given my most recent observation. This is the simplest autoregressive model.
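Here is a minimal sketch of that simplest case, a first-order autoregressive model fitted by ordinary least squares: the present value is regressed on the immediately preceding value. The time series is invented for illustration.

```python
# Fit an AR(1) model to a gene expression time series:
#   x_t = a + b * x_(t-1) + noise
import numpy as np

x = np.array([1.0, 1.4, 1.9, 2.2, 2.1, 1.7, 1.2, 0.9, 1.1, 1.6, 2.0, 2.3])

# Regress the present on the immediate past (ordinary least squares).
past, present = x[:-1], x[1:]
A = np.column_stack([np.ones_like(past), past])
coef, _, _, _ = np.linalg.lstsq(A, present, rcond=None)
a, b = coef

print(f"x_t ~ {a:.2f} + {b:.2f} * x_(t-1)")

# One-step-ahead predictions and residual variance
pred = a + b * past
print("residual variance:", np.var(present - pred))
```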
Now, this kind of experiment, again, tells us something about the similarity of things. Actually, it's a kind of analysis -- the data are always the same. Once we have these temporal data, if we do some clustering, we may see that things behave the same way over time, but that is hardly going to tell us that something controls something else. In this case, as I was saying at the beginning, it's not really the data or the design of the experiment; it's the type of analysis you do. If your interest is to find functional clusters of genes that work together, clustering is your solution. But if you're interested in dissecting the regulation, the mechanism of regulation among genes, that will not tell you. I may have things that behave similarly, but they don't necessarily behave together identically. To be extreme, I would argue that something that controls something else will not have exactly the same temporal behavior, right? If I want you to be here today, I have to call you yesterday, or I have to be here yesterday; I have to do something before you're here if I'm controlling you, right? So a way to use these data to try to dissect this kind of information, if this is the question you have, is to use something called Bayesian networks. Bayesian networks relate genes -- variables in general -- by looking at how probable it is that one particular set of variables controls another set of variables. Originally, these things were built to capture knowledge from humans, experts you want to extract knowledge from. You buy lunch or dinner for your physician friend, get them a little drunk, and extract a promise that they will come to your lab the day after. And they will draw a network of their knowledge, this gene versus that gene versus that other gene, and then add the probabilities that describe the function by which one particular gene controls another gene. This particular example -- I'm sorry, there are a couple of people here who have seen this example at least 100 times -- is not about genes; it's just to give the intuition of what is there. This network tells you that your age and your education affect your income. That part is easy to draw. The question is how age and education affect your income, and this is specified by that particular set of distributions. Those distributions tell you that, if you are young and you have a low education, your probability of having a low income is 0.9, and as you grow older and get more educated, your probability of having a higher income increases. It's not one, because you can always choose to be an academic. The problem is that we're not interested in building these things by hand; we are interested in learning them from data, right? And we can play exactly the same game we played with clustering. We can find out which set of nodes is most probable to control a particular gene, and we can do this for each gene.
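The age, education, and income example can be written down directly as a small network with conditional probability tables. The sketch below is a toy version in plain Python; the categories and probabilities are illustrative, not the values on the lecture slide.

```python
# A toy Bayesian network specified by hand: age and education are parents
# of income, and the CPT below encodes how they affect it.

# Prior probabilities of the parent variables
p_age = {"young": 0.5, "old": 0.5}
p_edu = {"low": 0.6, "high": 0.4}

# P(income | age, education): the conditional probability table
p_income = {
    ("young", "low"):  {"low": 0.9, "high": 0.1},
    ("young", "high"): {"low": 0.6, "high": 0.4},
    ("old",   "low"):  {"low": 0.7, "high": 0.3},
    ("old",   "high"): {"low": 0.3, "high": 0.7},
}

def joint(age, edu, income):
    """P(age, education, income) from the factorization the network encodes."""
    return p_age[age] * p_edu[edu] * p_income[(age, edu)][income]

# Marginal probability of a high income, summing over both parents
p_high = sum(joint(a, e, "high") for a in p_age for e in p_edu)
print("P(income = high) =", round(p_high, 3))
```

Learning such a network from data, as described above, amounts to searching over which parent sets make each variable most probable, rather than writing the tables down by hand.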
So the final picture -- I lost it. Oh, the final picture is this one. Come on, give me a picture. Here it is. Each ball represents a gene, except these three blue balls. These are about 40 patients, 41 pediatric patients with leukemia, and for these patients we have measured some phenotypes. The most important one we're interested in is the molecular classification, what is called the oncogene status, the molecular classification of the tumor. And this is their survival, and this one is how many days they've been in the hospital, OK? What we're interested in is finding out whether there is a relationship between them. You remember, when we were analyzing the other data across different conditions, we were doing one analysis for each phenotype; we couldn't put the phenotypes together in a single picture. In this case, we can put the two phenotypes in a single picture and see, for instance, whether there is any link that goes from oncogene status to survival, and how this relationship is mediated by other genes. What we can also find are dependencies among genes, and you see there are directions on those arrows. Those directions mean that one gene controls the other gene. The example I usually give is this: suppose we want to discover which of these switches controls which of these lights, right? I can do it this way: I flip the switch, I change these things, and that turns the lights on and off. But if I unscrew the bulbs, that will not change the state of the switch, right? The metric we use is very similar in spirit to the one before, but it takes into account the fact that you are measuring a directed influence from one gene to another. It's not just a simple distance; it's not just a pairwise measure. And it's actually not pairwise because, as you can see, a node can have more than one parent. This node here, just to make an example, has three parents: this one, this one, and this one. No, sorry -- this is a child, and this is a parent, and this is another child, and it also has a grandchild, here. So you can use this kind of information to create a molecular landscape of the control mechanisms of your system. And you remember what we were saying about how probable something is? I can actually measure how much more probable it is that something is affected by one set of variables than by another set of variables, through something I will not bother you with in detail, called the Bayes factor. The Bayes factor is the ratio between the probabilities of two models, which tells you how much more probable one model is than another. And these are the numbers we get. So we say that oncogene status had these three parents; we chose these three parents, and these are all the other possible combinations of parents we explored. This picture tells you that the runner-up, the second combination down there, is seven times less probable than the top one to be responsible for oncogene status. The third one is 56 times less probable, and [INAUDIBLE] times, and down, down, down, down. So the runner-up basically tells you how much more probable your model is than the best alternative; it gives you some kind of measure of confidence. OK, and this is an example of how you can validate these things: you can do cross-validation. You remember, we said you pull out one case and make a prediction? The cross-validation result is here; it was 100%, or something like this.
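As a rough sketch of what ranking candidate parent sets by Bayes factor can look like, the code below scores each candidate set with BIC (a crude stand-in for the marginal likelihood used in the lecture) and reports how many times less probable each runner-up is than the winner. The data, variable names, and the use of a linear-Gaussian model are all invented for illustration.

```python
# Rank candidate parent sets for one target variable by an approximate
# marginal likelihood (BIC), and express runner-ups as "x times less probable".
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 40
data = {
    "geneA": rng.normal(size=n),
    "geneB": rng.normal(size=n),
    "geneC": rng.normal(size=n),
}
# Hypothetical target that actually depends on geneA and geneB
data["oncogene_status"] = (1.5 * data["geneA"] - 1.0 * data["geneB"]
                           + rng.normal(scale=0.5, size=n))

def bic_score(target, parents):
    """BIC of a linear-Gaussian model target ~ parents (higher is better)."""
    y = data[target]
    X = np.column_stack([np.ones(n)] + [data[p] for p in parents])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid.var(), 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * X.shape[1] * np.log(n)

candidates = [ps for k in (1, 2, 3)
              for ps in combinations(["geneA", "geneB", "geneC"], k)]
scores = sorted(((bic_score("oncogene_status", ps), ps) for ps in candidates),
                reverse=True)

best_score, best_ps = scores[0]
print("best parent set:", best_ps)
for score, ps in scores[1:]:
    # exp(score difference) approximates the Bayes factor favoring the winner
    print(f"{ps}: ~{np.exp(best_score - score):.1f}x less probable")
```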
But the interesting thing, the take-home message for today and the thing that is important, is that, because there are no hypotheses here, the way you collect the data matters, but the way in which you analyze the data is what is going to give you the answer. So if you are interested in mechanisms of control, comparative analysis will tell you squat. If you're interested in molecular classification, clustering will tell you nothing. If you're interested in discovering new types of disease, those comparative metrics will tell you nothing. Each type of analysis is designed to answer a particular type of question, and this is the most important thing to keep in mind. There is a review up there; if you want to be bored to tears, you can download it from that website. It was the state of the art until six months ago, and nothing has changed much. The second reference is a [INAUDIBLE] book, which is part of your school equipment, right? Didn't you have to buy this book for the course? AUDIENCE: No. MARCO RAMONI: No? OK, go and -- AUDIENCE: Not that I know of. MARCO RAMONI: -- get it, because he's the director of the course. You may want to kiss some ass. Gene Cluster and SAM implement -- you remember the two non-parametric statistics I was describing before? CAGED is the tool that implements the Bayesian metric and the temporal analysis. What I'm going to do is send around an assignment, which will probably be a data set, and you will do two different analyses of it. I don't remember if you have to do both of them or if you can choose which one you want to do. One is going to be a supervised analysis using either Gene Cluster or SAM, two different statistics, and the other is going to be an unsupervised analysis, a clustering. Gene Cluster has two components, one that does clustering and one that does a supervised differential analysis. OK, thank you.
Lecture 4: Microarray, Massively Parallel Measurement
ISAAC SAMUEL KOHANE: Also, I forgot to mention at this point, the output of microarray studies is foreign to basic biology researchers. They're used to looking at three or four or five or 20 numbers and performing some easy analysis in an Excel spreadsheet. But the point-- or doing a BLAST of one gene at a time. But the data from these microarrays does not easily load into standard tools. The analysis is non-standard. Excel does not cut it. Excel can barely load some of these data sets. And so all this is actually representing an interesting turn. So this is the other point of view, which is now that the standard biologist tools are no longer fitting nicely onto the desktop productivity suite that they previously had. And this is, in some way, a full circle. It used to be, of course, that computers were run by the high priests in the computer center. And not only that were they-- did they know about how to boot-up the computer and to make the computer do whatever it did, but they were also pretty steeped in the math and in the computational sciences, so that they could actually interpret the results of a study. With the advent of the personal computer and incredibly useful tools, like Excel, there was a lot that was now democratized and brought back to the desk of the individual biology researcher. But then when we scaled back up to large data sets that are not trivially analyzable with current desktop tools, then the biologist's education is-- then the biologist's computational education is lacking. The biologists desktop tools are lacking. And again, they are not in the loop. And let me tell you this-- and I'll get to more interesting details in a second-- but something that you have to understand. This is creating a lot of resentment among standard biologists around this whole genomic revolution. Because not only-- I mean, what you'll hear mostly is voice of skepticism about the methodologies. But what's really going on is that they're justifiably feeling quite threatened for a big part of what they've been doing previously. Because they're just going to be disintermediated by the commoditization and the quantification and computational expertise that is required. So what are the characteristics of a microarray? How do you recognize a microarray when you are being marketed one? The first is that it should have a small form factor. So one part of it's -- capable of measuring a significant fraction of some "-ome," the genome, the transcriptome, or the proteome. So I think small form factor is an obvious one. But it also needs to measure a large part of the total output of the system. Otherwise, it's not a microarray in the sense that we commonly understand it. There has to be minimal labor in data acquisition, so that it's an industrial process and not a labor-intensive process. You need to have an automated data path to a digital electronic format. And it has to be sustainable, high throughput processing. And that begs the question then, is this a microarray? Well, this is a very nice piece of technology from Sciomics, which has, on these pillars, these antibodies that allows you to measure, in a very consistent and systematic fashion, 200 protein measurements or typically 30 proteins measured in sextuplets, in sextuple for redundancy. So it's kind of a microarray. It's small. It's an array. But as typically used, it's measuring between 30 and 200 proteins, and so, therefore, I would argue it's not a microarray. 
And basically, they're trying to sell their venture capitalists that this going to be as revolutionary as the standard Affymetrix RNA expression microarrays that I told you about. But it doesn't scale. It doesn't give you the comprehensiveness. And, therefore, you know in advance that this is only a somewhat more efficient version of the ELISA test, the antibody-based test. It's not leveraged. Similarly, when you're looking at two-dimensional gels, there you are capturing a large percent of the proteome. But it's neither small nor is there an automatic data path to analysis. Because you basically have to pick out these different blots and apply some intelligence about which ones you pick, and then feed them into the mass spec, and then do the deconvolution. It's actually not a high throughput data path. And when you speak, as I have, to those who are really in the know about this area, and they don't have an immediate sales pitch to the venture capitalists on the horizon, they'll acknowledge to you that this is not ready for prime time. Which explains why I had so much cognitive dissonance, in the late 1990s, when everybody is saying, forget this genomics, proteomics is it. And I'm saying, that's great. I'd love to go to proteomics. But I don't see these cheap, high throughput, comprehensive, electronically data acquired sources available today. That's the goal. I believe we'll reach it. But I think we're actually a full five years, at least, away from a high throughput version of this. And what do we actually use these microarrays to look for? We rarely use it to find a single gene responsible for a process. But we frequently use it to find a pathway. We frequently use it to find a set of genes working in a coordinated fashion. The assumption being that there's structure at the biological scale. And what do I mean by that? That genes behave in coordinated fashion. And so the questions that immediately are asked is why should this assumption hold and what evidence do we have that it does? So why does clustering work in functional genomics? Well, what do we see in the night sky when we look up? We see a fairly, apparently uniform star field. But when we look with the right instruments, we see clusters of stars, from which we have been able to apply our classical and then Einsteinian mechanics to figure out what are the forces that hold these clusters together and make them behave the way we see them behaving. Similarly, when the Genome Project essentially came to a conclusion in this year, what we had was the gene universe. We had the list of genes that form part of this universe. And here, for instance, is a bunch of such genes. However, this really doesn't tell us anything about how these genes interrelate. And although there's a bunch of hoopla made about the Human Genome Project, it doesn't start to become really interesting until we can do something like this-- a figure that I borrowed from the Kyoto Encyclopedia of Genes and Genomes, KEGG, which is a pathway representation. Understanding that this is how the stars are interrelated in a galaxy, how these genes interrelate in pathways. So first of all, why should-- why do I see this structure in the transcriptome? It's because, in fact, in order to get things done, like, in this case, make sure this cell kills itself, there has to be coordinated expression of genes. As we explained to you previously, there are many reasons why gene expression and protein expression may not be correlated. 
But if there was no coordination at all of gene expression, two things would be true. One, the genes would be behaving randomly together-- in other words, would not be working together to make a process, which seems unlikely. And then clustering would never work. Because there would be no coordinated action of genes. And the reason this coordinated action is-- in order to get a job done, these, out of principles of parsimony-- that is, you don't want to have a gene product unless you need it-- these things are only generated when, in fact, you need them for this process-- in this case, cell death. And that allows people to do these clustering analysis. And there'll be another lecture on clustering. And I just want to give you a taste of what I mean by clustering. So here's the world's smallest microarray. You have three experiments. And on this little, small microarray you're only measuring a gene three times. And across these three experiments, you therefore have three measurements of each gene. And the most standard type of clustering is the so-called phylogenetic type tree or the dendrogram, where you calculate all the correlation coefficients between every column, in this case, every set of genes, and the correlation coefficients are shown, as here. You then figure out which genes are the closest together, based on their correlation metric, which, in this case, is 0.88. And you bring them together. And then you can summarize, in a variety of ways, that pair, let's say, by their median. And from there, you add on the next closest gene. So very specifically, gene one and gene two get put together, because they're the closest. And then the next closest to the joint branch there is gene three. And recursively, this gets built up until you have one of these classical, large dendrograms, where, again, each row is a gene. The columns are different experiments or samples, in this case, a time course. But they were different, distinct samples. And what you see here is that every row that's close to one another is close to one another because they have a higher correlation, between these two ones that are close to each other, than between two rows that are far apart. Now, there's something. I'm eliding some complexity. For instance, you can toggle around a whole branch, so that you might actually have some blocks, some rows, which might not be together in some rearrangements of this tree. Nonetheless, by and large, within a subbranch, you'll find, typically, that genes, with a similar expression pattern, are put together because they have the highest correlation coefficient. And the diagrammatic version of it is as follows. Higher expression level is red. Lower expression level is green. And you see that the red and the green bunches together, because, in fact, each of those genes, in each row, are clustered together by their expression pattern. That's all there is to it. And then when you look what these bunches actually represent and what people do is, essentially, they draw lines. And it's unfortunate, it's not much more fancy than that, at least in the publications that occurred from 2000-- 1999 to 2003. We draw lines next to these groups. You can actually find, for some of them, some functions, such as cell cycle or coagulation or apoptosis. You'll find different groups of genes clustered in that fashion that correspond to those different processes. Note, however, that for whatever reason, lines were not drawn against these other blocks of genes. 
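To make that merging procedure concrete, here is the "world's smallest microarray" example carried out step by step in Python. The three expression vectors are invented; the point is just the mechanics described above: compute pairwise correlations, merge the most correlated pair, summarize the pair (here by its median profile), and then correlate that summary with the remaining gene.

```python
# Step-by-step agglomerative clustering of three genes measured in three experiments.
import numpy as np

expr = {
    "gene1": np.array([2.0, 4.1, 6.2]),
    "gene2": np.array([2.2, 4.0, 5.8]),
    "gene3": np.array([5.9, 4.2, 2.1]),
}

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# 1. All pairwise correlation coefficients
pairs = [("gene1", "gene2"), ("gene1", "gene3"), ("gene2", "gene3")]
for g1, g2 in pairs:
    print(g1, g2, round(corr(expr[g1], expr[g2]), 2))

# 2. Merge the most correlated pair and summarize it by its median profile
best = max(pairs, key=lambda p: corr(expr[p[0]], expr[p[1]]))
merged = np.median([expr[best[0]], expr[best[1]]], axis=0)
print("merged first:", best)

# 3. Correlate the merged profile with the remaining gene; recursing like this
#    builds up the full dendrogram.
remaining = (set(expr) - set(best)).pop()
print("correlation of merged branch with", remaining, ":",
      round(corr(merged, expr[remaining]), 2))
```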
I'll argue, in another lecture, why not drawing lines against those other blocks is a mistake. And furthermore, I'll argue in another lecture that this act of drawing lines around blocks, although it's grounded in the basic truth that these genes more or less work together, is an act of creative imagination when it comes to where you draw the boundary, and it does not stand up well to statistical analysis. In other words, based on the knowledge of a particular biologist, you might decide to change the boundaries of that box. And there are well-grounded information theoretic methods which allow you to draw those functionally related groups more soundly. And this is just a close-up of that. So the point here is this: if you're not looking for a global pattern, as I just showed you; if you're not trying to see how things are working across a process, like coagulation or apoptosis; and if you're not looking for guilt by association -- by which I mean, let's say you know the function of all these genes and all these genes, but not these genes here in the middle; guilt by association says, essentially, that because these genes are coregulated in the same fashion, knowing the function of the ones around them lets you impute something about that process, and that heuristic, coarse as it may be, has actually been exploited very successfully by many investigators -- so if you're not using guilt by association and you're not looking for global patterns, then microarray technology may be inappropriate and misleading. And it may be inappropriate and misleading because microarrays themselves are much noisier than people understand. Therefore, if you're just trying to answer the question of whether these three particular genes are differentially expressed, then you're probably using the wrong technology, at the wrong cost, and with a much greater danger of being misled. And why is that a problem? Well, let me give you a decision theoretic view of microarray analysis, as we should all, I think, think about it. The most basic microarray experiment asks the following question: across a set of experiments, is a gene up or downregulated? Or more specifically, of the thousands of genes that we're measuring, which are up or downregulated? For some genes, by whatever statistic you use, you'll find that they are indeed upregulated; for others, they are not. If a gene is said to be upregulated, is the gene relevant to the process being studied, let's say cancer? If yes, it's a true positive, and perhaps a real target for cancer treatment. So there's a big reward, because either you're a drug company or you're a researcher publishing a finding. If the gene is not relevant to the process being studied, it's a false positive, and you've wasted your money. If the gene is not significantly up or downregulated -- in other words, it doesn't look interesting by this experiment -- but the gene is relevant to the process being studied, then you've missed it. It's a false negative, and, again, you've lost an opportunity. However, if the gene is not significantly up or downregulated and it's not relevant to the process being studied, then it's a true negative. That's great: you're not wasting your time. And here's the problem. You have 10,000 genes, let's say, or 20,000 genes that you're measuring in a microarray experiment. And the biologist and the big pharma can't bear the thought that they're going to miss the blockbuster false negative.
And consequently, what they do is they change the threshold for declaring something to be a up or downregulated, so that more and more genes are considered to be up or downregulated. But the problem is that, not only does this increase, potentially, the number of true positivees you find, but it also hugely increases your number of false positives. And the problem again-- this gets back to the biologists don't understand the difference between a northern blot on a gene they understand and doing a microarray on 10,000 genes that they know nothing about. Because they can be given literally thousands of false positives. And the typical interaction you'll have with the basic biologists, when you do this kind of analysis for them, they'll say, Oh, Zak, I see that you said, statistically significantly, we can only place bets about 10 genes. But I see down the list, that you said not to be significantly differentially expressed, a gene that I know from my own research is, in fact differentially expressed. But it just doesn't have a P value that reaches a level of significance. So therefore, couldn't we actually just push the whole threshold down to at least the same threshold as the gene that I already know something about? And the answer is, unfortunately, if you do that, then it'll still happen, you also-- there's nothing to say that these other genes, that are now included in the list, are anything at all like this gene that you happen to know a lot about. So you're using prior knowledge. You're not using the microarray. You're using prior knowledge to say something about it. And therefore, you get a lot of false positives. And the problem is each one of the false positive is one post-doc going off to do a validation study. And it turns out, with tens of thousands of genes, you can launch a lot of post-docs. So the pharma industry, with it's big, deep pockets, actually got exhausted by this and, actually, quite turned off by a lot of these technologies. Because a lot of the genes that they thought they were going to look after, these top genes, which look kind of interesting-- you can always make a story and create a story about why this gene may be involved in your favorite disease process-- turned out not to be the case. So by not understanding this decision theoretic view, a lot of money was wasted and a lot of disappointment with this technology, which is one of the reasons I wrote it, in the first sentence of the first chapter of my book, Microarrays for an Integrative Genomics, the functional genomic meltdown is imminent. And I first wrote that in 2000. And everything that I have-- everything that's happened since shows it's true, that, although there's more interesting science being developed, the fact is a lot of the companies that were built around this, and some of the bioinformatics companies that are built around this, are just going down the tubes. So let's just remind ourselves about-- by the way, I'm very optimistic about the whole field. But I'm optimistic about the whole field when we're careful in our use of the technology, both in understanding the actual engineering part of the technology and understanding the quantitative analysis. So I know that we've gone through this, with you, several times, but I want to revisit. Actually, is everybody comfortable in the way the basic microarrays work? Do I have to revisit it? If you want me? I do not have to revisit it? Thank you. All right. So now we can just remind ourselves about the difference between the two platforms. 
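Before the platform comparison, here is the threshold arithmetic behind that decision-theoretic picture, made concrete with invented numbers: with thousands of genes tested at once, relaxing the significance threshold buys a few more true positives at the price of a flood of false positives, each one a potential wasted validation study.

```python
# Expected false positives when screening many genes at a p-value threshold.
# All numbers are hypothetical: 20,000 genes measured, 100 truly changed,
# and an assumed power of the test at each threshold shown.
n_genes = 20_000
n_truly_changed = 100
n_null = n_genes - n_truly_changed

for alpha, power in [(0.001, 0.4), (0.01, 0.6), (0.05, 0.8)]:
    expected_fp = alpha * n_null           # null genes crossing the threshold
    expected_tp = power * n_truly_changed  # real genes detected (assumed power)
    print(f"alpha={alpha:<6} expected true positives ~{expected_tp:>4.0f}, "
          f"expected false positives ~{expected_fp:>5.0f}")
```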
Affymetrix, as you know, does not represent, on each spot of the microarray, the entire gene. It just represents a few oligonucleotides that they have chosen to be representative of part of the gene, more toward the 3' end of the gene. And what they will calculate, as we will get to, is some measure of the hybridization of these perfectly matched probes to this gene, versus a probe which is exactly the same as this one but has a central base that is a mismatch to the designated region of that gene. So, there, the measure of the gene is essentially the difference between the perfect matches and the mismatches, in aggregate, across the entire probe set. Whereas, in the Pat Brown inspired, do-it-yourself spotted industry, what you're doing is spotting these probes right onto the glass slide, and then you're going to have a competitive hybridization between a reference set of cDNAs and the test set of cDNAs. And that has some implications which we'll get to shortly. So let me make a point, because I'm going to ask you a question later: the spotted arrays have to end up on the slide through some physical process, such as a printing head. Whereas in the photolithographic technique built by Affymetrix, what you do is, after you create the first layer, which is bonded to the glass or silicon substrate, you chemically add, let's say, a nucleotide G, masking off the rest of the array where you don't want to add that G nucleotide. And then, say you want to add an A in the next step, you mask off everywhere that you don't want the A added, and so on. Clear so far? Now the problem, of course, is that to do this for a 25-mer -- in other words, an oligonucleotide that's 25 bases long -- requires on the order of 100 masking procedures. Anybody have an idea how many masking procedures you do for a Pentium 4? On the order of 20 to 30. So they're having to do four times as much photolithography as for a Pentium 4, and that's going to limit their costs, at some level. And furthermore, let me just say it right here: it turns out that 25-mers are not the ideal length for hybridization. If you go to a longer length, let's say a 60-mer or 70-mer, you get much more sensitivity and much more specificity. But they can't economically do it because of the limits of photolithography. The advantage of photolithography, though, is that you can get extremely compact and high resolution, so you can put a lot of genes on one chip, like the entire transcriptome. So this is just a summary of what I just said here. Also, we like to think of these probes as a computational abstraction, but it's important to realize that these are real molecules that, during the hybridization reaction, are tethered to the substrate. Therefore, their secondary and tertiary structure may in fact influence how they're able to hybridize with the molecules from the sample. And it turns out that certain types of DNA probes just don't work as well as you'd expect, computationally, because of these effects. If you look at Affymetrix probe sets, there are some probes that never work, despite the fact that they're well-designed from the point of view of finding a matching hybridization sequence; it's because of these kinds of effects. Note, also, that the probes are not all of the same length. The efficiency of the masking reaction is not 100% by any means.
What you will find in each one of those probe features is, in fact, millions of DNA molecules, whose length distribution hopefully is close to the desired 25-mer but, in fact, is often considerably shorter. And unfortunately, there's no way for the end user to ascertain just how good a job was done. So, given that fact, we can already see that, even though we can achieve a higher density with these Affymetrix microarrays, there is still a lot of variation that we can expect from different probes within a probe set. Which is why, for example, you might not see, within a single probe set, that the perfect matches all have the same intensity. You can see, for instance, if there are 11 perfect match probes, that five of them will be really lit up and six will be quite dark. And there are many other reasons why that might be the case. I just want to make the following point. As I said, when you're looking, for instance, at a sporulation experiment in yeast, what we're looking at is, let's say, a test sample versus a reference sample, and we are looking at their competitive hybridization to the target. Now, a couple of problems immediately arise. Unfortunately, it turns out, for reasons that have yet to be made clear to me -- and in fact, if anybody in this room has any insight, I'd be very curious -- which sample gets the Cy3 versus the Cy5 dye actually makes a difference. So if you switch around the dyes, you'll get different ratios. Something steric about the attachment of the different dyes causes a difference in the competition reaction. And so people who are practitioners of this, and who are fastidious about it, will in fact perform both labelings, both Cy3/Cy5 and Cy5/Cy3, to better understand the limits of the technology. It's not clear to me why that's happening. The other point to note is that, if you want to compare across different microarray experiments, you probably want to have the same reference sample. And that sounds like a trivial thing to say. You'd like to see, for instance, a test condition that is a bunch of different cancers, and you want to compare it against a reference RNA, which might be at Stanford, where what they do is pool together all their different lymph node tissue types. The problem, of course, is that if you deplete your supply, you're going to get a different set of ratios with the new supply. Because you're going to have different abundances of RNAs in the new reference sample, you're going to get different ratios, and so you see drift in the ratios across experiments, over time, with this kind of setup. And it's highly problematic. At Mass General, they've developed a synthetic soup of spiked-in RNA controls -- I think 256 different RNA controls -- and that will give you a standard reference pool. But unfortunately, it's not quite the same as having the full eukaryotic reference pool, so the ratios you might get from that artificial pool might be quite a bit different from what you get in other experiments. I just want to make it clear to you that this is not a done deal.
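Since dye bias comes up whenever two-color ratios are compared, here is a minimal sketch, with invented intensities, of the usual dye-swap correction: average the log-ratios of the two reciprocal labelings so that a gene-independent dye effect cancels out. This is only an illustration of the idea, not any particular lab's normalization pipeline.

```python
# Dye-swap correction sketch: two hybridizations of the same samples,
# with the Cy3/Cy5 assignment reversed on the second array.
import numpy as np

# Array 1: test labeled Cy5 (red), reference labeled Cy3 (green)
red1   = np.array([1200.0,  300.0,  800.0, 2500.0])
green1 = np.array([ 600.0,  350.0,  900.0,  500.0])

# Array 2 (dye swap): test labeled Cy3, reference labeled Cy5
green2 = np.array([1100.0,  280.0,  750.0, 2300.0])
red2   = np.array([ 650.0,  400.0,  950.0,  520.0])

m1 = np.log2(red1 / green1)    # log-ratio test/reference, array 1
m2 = np.log2(green2 / red2)    # test/reference again, dyes swapped

# Averaging the two reciprocal log-ratios cancels a multiplicative,
# gene-independent dye bias.
m_corrected = (m1 + m2) / 2
print(np.round(m_corrected, 2))
```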
So how do you actually use Affymetrix to determine the intensity of gene expression? I just said that, for the spotted arrays, all you're doing is reporting a ratio, a ratio that typically has a normal distribution. With Affymetrix, it's a multi-step process. First, you take the average of the lowest 2% of the cell intensities in each sector; this is the background of that sector. Then you subtract the background from the average intensities of all cells in that sector. And you calculate a background noise, which is essentially a measure of the variance of the pixels, along with scaling and normalization factors, which are settings you supply to the Affymetrix software. This is actually somewhat out of date, but I'll give you updates shortly. What is most often reported as the intensity of gene expression is the average difference, which is essentially the difference between the perfect matches and the mismatches across the entire probe set. A probe pair is said to be positive if the difference between the perfect match and the mismatch is greater than the amount of noise computed from the background, and a probe pair is negative if the mismatch exceeds the perfect match by the same amount. So not all probe pairs will be scored as positive or negative. You then calculate a trimmed mean across the positive and negative probe pairs. That turned out, by the way, to be a highly fluky, non-robust measure of intensity, and it was ensconced in an algorithm called MAS, the MicroArray Suite, version 4.0. So when you read papers and you see MAS 4.0, that's the method they're using to interpret the data. The problem is that it was oversensitive to outliers, even with the trimmed mean, and it was also sensitive to the central moment. So instead, in MAS 5.0, they use the Tukey biweight statistic, which is much more robust to outliers. I will upload a paper today to the MyCourses website, which will let you read the analysis of how they approached this. This was actually done by Affymetrix, and it's now part of their standard interpretation software, MAS 5.0. Another competing high throughput gene expression technology is serial analysis of gene expression. Did Atul tell you about this at all? He did? All right. Suffice it to say that it looks digital, in that you are literally counting the number of tags with a particular sequence. But it turns out this has noise as well, for a number of reasons. One, our sequencing technology is not 100%; it has error. And sometimes the tags are not perfectly chosen, either. So, in the right hands, with the right set of genes, it does a nice job, but in actual practice I've seen it be fairly noisy, and certainly not cost competitive with standard microarray technology. Quite interesting is this very nice technology out of Illumina. They're using it for genotyping, but they're thinking about using it for expression microarrays. It works as follows: you have a fiber optic bundle, and the tip of the fiber optic bundle is sculpted so that it has grooves which nicely fit a bump on these optically coated beads, so that we can do the following. We can attach to each of the different optically coated beads a different piece of DNA, so we could literally have millions of different optically coated beads, each with its own DNA, and they will settle into the grooves on the end of the sculpted fiber optic cable. Then you can perform a hybridization reaction with a sample, and, by sending a laser down that fiber optic cable, you can determine which of these beads has a hybridized piece of DNA and which does not.
And this allows you to do, in a very high throughput fashion, millions of different hybridization reactions. It's currently being used for doing millions of different genotypes. So it's recognizing a whole bunch of different genotypes with different pieces of DNA corresponding to different genotypes. But, also, it's been it can be used to do expression microarrays. And there are some groups, around the country, that are now beginning to use it. And for those of you who are in the enterprise program, looking at how Illumina was funded and how they got together with scientists to avoid being sued by Affymetrix, it's actually a great case history, which I don't have time for. But the point is that Affymetrix has a very broad patent around the measurement of expression in two dimensions, in a two dimensional form factor, essentially, with any way that you place these DNA spots down. And this gets away from that patent in a very creative way and also in an economical way. AUDIENCE: Two questions. ISAAC SAMUEL KOHANE: Sure. AUDIENCE: Is the main benefit here that you don't have the tags or the data input down the line? ISAAC SAMUEL KOHANE: There's a couple of factors. First of all, you're not limited in having this form factor, this big, where you're going to do this photolithograph. Even so, with the photolithographic process, because of the registration of scanning, the registration of the full lithography, there's limits to how many different probes you can have on one chip. Whereas here, because each bead can have its own unique ID, so you know which one it is, and can have its own piece of DNA, it's going to self-assemble. And this is a very microscopic view of this. You can have many more different, because it's essentially self-assembly. You swirl around the fiber-optic cable in the soup of beads. And they basically all self-assemble onto this tip of that cable. You get many, many more different types of beads and, therefore, of probes in one sample. And the process of labeling the beads and attaching the probe is actually much cheaper than the Affymetrix technique. So it's higher density, more economical, and it gets away from the patent. Another technology is inkjet technology. The same people who brought you high resolution family photos said, if we're able to put down picoliters in precisely the right spot so that your kid looks the way your kid looks, perhaps we can use that same technology to spot a piece of DNA onto a flat surface. And in fact, they use-- So a spin-off of Hewlett-Packard, before it split off into Agilent, so a spin-off of Hewlett-Packard printers, actually developed the system where they actually have four colors, CATG, and they spit the four colors onto the glass slide. And they do successive in-situ synthesis of oligomers, 50 to 60 long. And because your pictures of your baby have to look good and have much more high tolerances than anybody cares about in Pat Brown's group, let's say, these are very good looking spots, with very nice, consistent shapes. And it's also very cheap. So this is the kind of shapes that you get with mechanical pens, which is the normal way you spot. And these are much more consistent and precise spots that you get with the inkjet. But there's many other uses for microarrays. If you recall at the beginning of the lecture, I didn't describe a microarray as being about expression. I said it had to be able to have a small form factor, it had to be able to interrogate for a large amount of the -ome that you're measuring. 
And it had to have a direct data path to an electronic format, and so on. So can we use these same kind of microarrays to study, for instance, gene expression control? So zinc finger proteins are proteins that actually bind to DNA. As you see here, so this finger is around the DNA molecule. And there's different parts of this that do the recognition and binding to a consensus site on the DNA. And so for understanding that, understanding that it was a consensus sequence with a limited repertoire of possible values, you could actually create a oligonucleotide that had all the possible permutations for this recognition site. And then you then ligate that oligonucleotide to a surface, like the microarray. And this is the actual results that you obtain when you take different transcription factor mutants. And you see that they bind at different spots, on the microarray, based on their affinity with a different consensus sequence. And since you know which spot corresponds to which consensus sequence, you can quickly develop a notion of what are the consensus sequences, what is shared and what is not shared among these mutants, and, therefore, what is important for the binding activity. As shown by this cartoon that represents the conservation of picker binding consensus sequence. So a lot of this work actually was pioneered by one of our faculty members in HST, Martha Bulyk. And it's now been scaled up so you can actually buy a lot of microarrays that allow you to do much larger-scale assessments of thousands of different binding sites. These are the protein microarrays that I discussed with you previously. And you can have a variety of different baits. A small molecule present on a microarray, an antibody present on the microarray, or a different kind of protein baits. Or you could even use a phage as a bait. The problem with all of these is that, as I said, we don't have yet, particularly for the proteins, high throughput methods of doing this. So for instance, selecting and laying down the antibodies for the entire proteome is just not within our vision. We just don't know how to get there. Currently, in a reproducible fashion, we can do it for maybe hundreds of different proteins. And these are the kinds of results that you get. You do, in fact, get these good looking graphs, where you can show that you can reliably identify the differentiable expression of different proteins. But again, these are from very small numbers of genes. Whoops. Others have tried the notion of a universal microarray, where what you're doing is, rather than having to re-engineer the microarray all the time, you're going to create a generic microarray. And then you're going to create some custom technology to link that generic microarray to the system that you're interested in. So this universal microarray has these zip codes, which basically are standardized sequences that recognize a bit of DNA with a complementary sequence, which you then add chemically to the sequence that you want to interrogate for. And such universal arrays don't require re-engineering of the zip code part of the probe, but only the part of the probe that you want to interrogate for. And it works actually reasonably well, as is shown in these experiments. I want to point out that a lot of these technologies became popular when Affymetrix microarrays were more poorly performing and much more expensive. Four years ago, an Affymetrix microarray at Harvard cost, with their academic discount, about $2,000. 
Now it costs about $200 for eight times as many genes. So the price factor has gone way down. These other technologies are just going to go away. It's sort of a Microsoft effect, the technology is good enough and cheap enough that there's very little incentive to invest a lot of your own time to make these other technologies better. Plus, as I said, there's patent control by Affymetrix, which makes it harder to create a business out there. AUDIENCE: Isn't it for things that were being published three, four years ago, maybe, were [INAUDIBLE]. ISAAC SAMUEL KOHANE: The short answer is, yes. And then I'm going to show you, quite literally, just how bad that intergenerational difference is. You have heard a lot about tissue microarrays. What are tissue microarrays? They're essentially these collections of hopefully homogeneous samples. So for instance, you could take a tumor and could slice it and dice it into these small salami slices that you lay down on the microarray. And then you can stain it for a variety of things. And then you can look under the microscope. And if you're clever, get an image recognition program, to detect a particular color for a particular stain, to see how much of a process is present, based on the staining characteristics of that tissue or the morphological change in that tissue. It's not yet a high throughput technique. And if sort of fails the microarray definition that I discussed previously. But I think that as we get better in the image analysis part of it, it does have a shot of becoming much more of a commodity microarray. AUDIENCE: What have-- ISAAC SAMUEL KOHANE: Go ahead. AUDIENCE: How could they ever do something like this, though, given the fact that in the chemical environments, themselves, [INAUDIBLE] incredible. Not only are you dealing with molecules, you're dealing with a very complex, poorly understood system. ISAAC SAMUEL KOHANE: Right. AUDIENCE: Well-- ISAAC SAMUEL KOHANE: Well, the answer is the following. The question that you just raised is almost always true in genomics. It's a more general question. With complex systems, how can we do this? And the answer is, low hanging fruit. In other words, there are some strong effects. In other words, if there is a lymphocytic infiltrate into a tissue, you'll have a lot of immunoglobulins present as opposed to not. So if you stain for immunoglobulins, you'll see it light up. If you are looking for genes that-- if you're going to do an in situ hybridization for an apoptotic gene, those samples that have more apoptosis in them will light up more. Now, the way you handle them is, without a doubt, going to influence it. Which part of a tumor of this piece of salami was originally sliced out will also have an effect. But if-- AUDIENCE: That was less what I was getting at. Let's take like a fibroblast. ISAAC SAMUEL KOHANE: Yeah. AUDIENCE: The degree to which you-- I mean, obviously, that isn't standardized in how you make these microarrays. ISAAC SAMUEL KOHANE: Right, go ahead. AUDIENCE: The degree to which you have confluent layer will dictate the morphology of the cell, just like under visual inspection. ISAAC SAMUEL KOHANE: I see. I see where you're coming from. These are not cells. This is not cell culture. This is a hunk of tissue that was taken out of a patient, fixed, either through freezing or through some other process, and laid down onto that slide. It's not a tissue culture experiment. AUDIENCE: OK. ISAAC SAMUEL KOHANE: That's a very good question. 
And that speaks to the next generation of microarrays, which I'm not willing to speak about, which is when you have a confluent layer of living cells, as opposed to this fixed tissue that you're going to stain, that's another type of microarray. You can then, actually, for instance, target drugs, different drugs at different points in this microarray and see how these cells react. But god knows what that means in a set of fibroblasts spread as a monolayer. I think it's a big open subject. It's a very-- AUDIENCE: I think I'd laugh if somebody they tried to make any conclusion about something like that by entering individual cells into a layer. ISAAC SAMUEL KOHANE: Well, I'm glad you're laughing, because-- AUDIENCE: The structure-- ISAAC SAMUEL KOHANE: I'm glad you're laughing, because I've been crying about the way these much more boring microarrays have been used in the past. And I'm going to start to give you the bad news, shortly. But I just want to tell you that people are publishing, in first class journals, the kind of thing I'm just describing now. And some of the luminaries in our field. And so what you have to understand, guys, there's a lot of legacity in the field of genomics. And part of what this course aims to do is for you to learn the limitations. When Joel, for instance, talked to you about SNPs, you'll learn that half of all SNP studies are just wrong. And what I'm about to tell you is at least half of all microarray studies are wrong. But big companies are being formed around this model they're targeting with drugs-- kind of stuff, Todd. So this is an opportunity for you to make a big name for yourself in the future. In fact, so here comes the debunking. So Pliny the Elder said the following. "Indeed, what is there that does not appear fabulous when it comes to our knowledge for the first time? How many things, too, are looked upon as quite impossible until they have been actually affected." So this is a very optimistic statement about the future of science, technology, engineering. Build it, they will come. And this is Pliny the Elder, who wrote this big tome called Natural History, which was very well regarded. He's a Roman patriarch. And note when he died-- 79 AD. And that there is significance to that date of death. Because what he was was, in fact, not a true experimentalist scientist, using the scientific method as we currently understand it. He was more of a description-based person, describing what he sees in incredible detail, annotating meticulously, taxonomizing it. But not really putting forward any hypotheses about how things work. And because he did not understand the mechanism of things, when he went to visit his last scientific investigation, which was look at a local volcano, which happened to be called Vesuvius. When he went to the boat to visit Vesuvius, he misunderstood its basic nature. So, in fact, he died, as his son was trying to drag him from the island, probably from asphyxia from the vapors from this volcano. And consequently, I ask the question, are we in danger of an imminent genomic Plinian eruption. Now Plinian eruption is actually a term of the trade used by geologists to describe a massive explosion that results in a lot of hot air and ash going up into the stratosphere. 
And I think we can argue that we're in similar danger because of over-promising, the slowness to acknowledge the limitations of our measurement techniques, the challenge of linking genomic data to biological and clinical significance, a lack of formal hypothesis testing, and a lack of sufficient multidisciplinary expertise. For all these reasons, even if Todd rolls his eyes at this tissue culture experiment, plenty of VCs will still think it's great. Because they won't have that multidisciplinary expertise. And so are functional genomicists over-promising? Absolutely, yes. So let's go through it. Let's go back to our world's smallest microarray, three genes measured before and after an intervention. And in fact, the content of Nature Genetics and other science journal articles about microarrays in the first four years of functional genomics-- the first years of lots of publications in functional genomics, which I think of as 1999 to 2003-- included plenty of papers which did the following sophisticated calculation. You take each gene before and after intervention. Take the ratio. And you report the ratios. And you sort them from high to low. So that, in fact, gene 2, which is 2.1/0.3, that gives you a ratio of 7, therefore, it's the most upregulated gene. It's lists like that that you'd get. So for instance-- and I feel bad for this poor guy, because I always cite his article, because I just don't want to change my slides. But this is a good point. So this group, out of Yale, did the following interesting experiment, which you should think about as you have your next meal. Rats that are starved live twice as long as unstarved rats. And so what they did is they starved these rats and compared them to non-starved rats and looked at the expression profile of the muscle tissue. And what they saw, using Affymetrix technology, was pretty interesting. There were a bunch of genes that are involved in mopping up free radicals, these bad things that screw up the structure of your proteins and DNA. A bunch of genes that are involved in mopping up free radicals and combating the oxidative stress were downregulated in these starved rats. I want to point out that they are, for instance, down by 1.5 fold, down by 1.6 fold. Remember those numbers. Well, I won't be cute about it. The fact is those fold changes, especially when they had a limited number of replicates, like three replicates, are just not sustainable in any kind of analysis, even today, unless you do a lot more replicates and use much better platforms. So here, for example, is a study that we did using an Incyte chip. Incyte used to make a spotted microarray. They're one of the technologies that went away with the genomic implosion or meltdown. And what you see here: I've taken the microarray and I've strung out, into a line, into a vector, the 8,000 genes. And I'm showing you the ratio of these genes across two conditions, in this case cardiac muscle of a mouse before and after cocaine. And what you see is that most of the genes have a ratio of 1. And some are a little bit above and some a little bit below. What to make of it? Where do we draw the line? What is upregulated? Is 1.4-fold upregulated and above the significance level? Well, I did the following thing. I applied a Fourier transform to this data, where the dimension across which I was doing the Fourier transform was that of the linear position on the microarray. And as you know, the Fourier transform identifies elements which have periodicity.
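To make the two calculations just described concrete-- the naive fold-change ranking and the Fourier transform along the print order-- here is a minimal sketch with synthetic numbers. It is not the original analysis; the number of genes, the noise level, and the four-pin offsets are invented values chosen only to show how a periodicity of 4 would surface in the spectrum.

```python
# Minimal sketch (not the original analysis): rank genes by fold change,
# then run an FFT along the print order to look for geometry-driven
# periodicity, e.g. a systematic offset that repeats every 4 spots
# because four pins printed the array.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 8000

# Biological "signal": most log-ratios scatter narrowly around 0 (ratio of 1).
log_ratio = rng.normal(loc=0.0, scale=0.15, size=n_genes)

# Hypothetical print artifact: each of 4 pins deposits a slightly different
# amount of probe, so spots cycle through 4 systematic offsets.
pin_offset = np.array([0.00, 0.20, -0.10, 0.30])
log_ratio += pin_offset[np.arange(n_genes) % 4]

# The "sophisticated calculation" criticized above: sort genes by ratio.
fold_change = np.exp(log_ratio)
most_upregulated = np.argsort(fold_change)[::-1][:10]
print("top 'upregulated' genes:", most_upregulated[:5])

# Sanity check: FFT along the print-order axis; a spike at 0.25 cycles/spot
# means something repeats every 4 spots, independent of which gene sits there.
spectrum = np.abs(np.fft.rfft(log_ratio - log_ratio.mean()))
freqs = np.fft.rfftfreq(n_genes)
for k in np.argsort(spectrum)[::-1][:3]:
    if freqs[k] > 0:
        print(f"period ~ {1 / freqs[k]:.1f} spots, power = {spectrum[k]:.1f}")
```

Run on real spotted-array ratios, the same transform flags any regularity tied to print geometry rather than to biology.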
And what I found was the following periodicity-- a periodicity of 4, a periodicity of 9, and some other periodicities, but a huge periodicity at 4. Anybody have any idea why there was this periodicity of 4 in the data? Remember, regardless of what gene it was, across this chip, there's a periodicity of four. AUDIENCE: Chip. ISAAC SAMUEL KOHANE: Chip what? AUDIENCE: Chip [INAUDIBLE]. The printing? ISAAC SAMUEL KOHANE: Print chip, what about it? AUDIENCE: Maybe the form of the-- or in the tip, especially with the-- ISAAC SAMUEL KOHANE: You're getting close. AUDIENCE: [INAUDIBLE] maybe, like, could there be any element-- ISAAC SAMUEL KOHANE: You got it. This is what the Incyte chip looks like. And the reason it looks like that is because they used four pins. Each pin took care of a different quadrant. And so if there are even slightly different physical chemical properties of each pin, in the engineering of the pin, you would get a slight systematic change in the amount of probe laid down. Consequently, it turned out that you got on this chip much more than 1.5 fold changes in the readout just based on which quadrant you were in. And that was just based on the physicality. Now Affymetrix, in the past, has not been guilt-free in this matter. For instance, depending on which way the hybridization solution was washed on, you can see different-- you can see intensity effects going this way or that way. And if there was a thermal gradient in the hybridization chamber, you could also see different intensities that were chip geometry dependent and independent of the gene. Yes? AUDIENCE: What made you think to look for [INAUDIBLE]? ISAAC SAMUEL KOHANE: Because, of course, I knew about those four pins. I said, I wonder if there's anything different about the pins. By the way, the second-- and now, by the way, I almost do that routinely. Any series I do, as a sanity check, I just run a Fourier transform to see if there's regularity. The other periodicity was due to how often they replaced the printhead. But of course, the manufacturer would never tell you this. Let's make things a little bit more interesting. So this is an experiment that Atul did with Morris White at the Joslin, using a now obsolete set of chips. They had four patients with glucose intolerance. So it's not quite diabetes, but it's the inability to bring down your blood sugar rapidly enough after you're given a challenge of glucose. And using a collection of three chips, we measured 35,000 genes. So we did it on patient one, patient two, patient three and patient four. And we repeated the same experiment with the same extracts of RNA from the patients' muscles. These were from the patients' muscles. So let me ask you the following question. For gene five on this microarray, what should the ratio be between gene five, here, and gene five, here? AUDIENCE: 1. ISAAC SAMUEL KOHANE: Did someone say 1? Thank you. The answer is 1. What should the ratio be between gene five, here, and gene five, here? Who knows? It doesn't have to be 1. The answer is, who knows, because it depends on the individual, right? They are different individuals. Good. So now that we understand that, let's repeat. The intrapatient variation should be 1. The interpatient variation, we don't know. But let's say this ratio, for the sake of argument, between gene five in patient one and patient two is a ratio of 5, OK? What should this ratio be of gene five from patient one? AUDIENCE: Close to 5. ISAAC SAMUEL KOHANE: Close to 5 as well. So what should the ratio of the ratios be? AUDIENCE: 1.
ISAAC SAMUEL KOHANE: 1, thank you. So to repeat, the intrapatient variation ratio should be 1. The ratio of the ratios of the interpatient variation should be 1. Yes? AUDIENCE: So the measurements actually were pretty close in time and all that? ISAAC SAMUEL KOHANE: Well, it's actually the same RNA sample that we took. And then we hybridized it all at the same time. But all good questions to ask. In fact, never typically reported in the journal article. In fact, so now I have to give you an anecdote. You'll learn about clustering in the lecture later by Steve Greenberg. But Todd Golub did one of the first studies of the differences that you can find in two physiological conditions or pathological conditions based on expression. And he looked at Acute Myelogenous Leukemia versus Acute Lymphoblastic Leukemia, AML versus ALL. And sure enough, using a supervised learning algorithm that you'll learn about, he could do it without a problem. He also had a test set and a training set. When I put all that data together, I found out I could actually distinguish another group, not AML versus ALL, but the test set versus the training set. And I said to myself, what the heck is this? And I went back to talk to Todd. Because, after all, Todd's one of my former interns, so I have no problem calling him up. I said, Todd, what gives? He says, they were hybridized on different days. And that wasn't published in the literature. But just doing it on a different day-- same technician but different days-- had a slightly different hybridization reaction. And that could be picked up just by looking at the data. Anyway, to remember where we were here. The intrapatient ratio-- 1. The ratio of the interpatient ratios-- 1. What in fact happens? Well, this is actually kind of crummy by 2004 standards, but it was not bad when we did it. Here is one chip, the other chip, one chip, the other chip, one chip, the other chip, for the four patients. And you see, here, what looks like a 1-to-1 line, but it's kind of muddy. And the correlation coefficients were 0.76 to 0.84. Currently, anybody who does more poorly than a correlation coefficient of 0.97, I question how good their hybridization laboratory is. Nonetheless, that's what we have. But you still can believe it's a good 1-to-1 ratio. However, what was the ratio of the ratios? Now, here's the ratio of the ratios in one case versus the other case. And you don't have to be a statistician to realize that there's no signal here. These are blobs. Now what does a blob mean? It means that, with one set of chips, this gene was 10 times higher, because of the log scale, in patient one than patient three. With the other set of chips, this gene was 10 times lower in patient one than patient three. So this is particularly heinous. We have not only the wrong magnitude, we have the wrong direction of regulation. And by inspection, you know this is happening a lot. So right away, you should be asking yourself a question, how can this be? I mean, after all, Zak's told me about how wonderful these microarrays are. How can this be that this is so bad? And after all, in the end, there is some signal here. And why is it so bad? Any idea why it is so bad? Anybody have any ideas why these results are so bad? We were not particularly worse than anybody else at that time. AUDIENCE: [INAUDIBLE] different days. ISAAC SAMUEL KOHANE: But we weren't doing that on different days, so it's some other kind of variation. So let me ask a lead-- go ahead.
AUDIENCE: All you have, [INAUDIBLE] if you're doing ratios to ratios. ISAAC SAMUEL KOHANE: Right. AUDIENCE: It compounds it. ISAAC SAMUEL KOHANE: It does-- definitely compounds it. But which kind of measurements are the most sensitive to that kind of compounding? What kind of measurements? What, in the denominator, makes a ratio change the most? Small numbers, right? So, when you have a gene expression which goes from 0.6 to 0.3, that's a two-fold difference. When you go from 600 to 300, that's a two-fold difference. You add a little bit of noise, this one flips on the left, and that one's pretty stable on the right. Well, guess what? Two-thirds of the transcriptome is expressed at very, very low levels. You have one or two copies of RNA. For one-third or less of the transcriptome, we have hundreds of copies of RNA, millions of copies of RNA of that gene. And those are easily picked up by these microarrays. But they're very noisy when you get down to these very low levels because of exactly that problem. And so if you don't have a method that looks at the variance and takes into account expression intensity, you're hosed. And that's why just putting a fold cutoff that says, I'm going to look at everything that's two-fold or greater-- you know that you're going to find a bunch of things at two-fold or greater at lower expression levels, but they're just wrong. And thank goodness, in 2004, you can't get a journal article published now unless you do a variance analysis. But that was not true in 1999 to 2002, at the very least. Even more worrying-- so, as bioinformaticians, we were spoiled by the following fact. Is it really 1:45? And I have until when? AUDIENCE: 2:00. ISAAC SAMUEL KOHANE: All right. OK, time flies when you're having fun. We were spoiled in bioinformatics by having this international resource, called GenBank, where we put all our gene sequences into an international resource, and allowed researchers to compare different things across different systems. And we thought we could do the same thing for microarrays. So for the 60 cancer cell lines that the National Cancer Institute collects to test the thousands of drugs that they obtain, from pharmaceutical industries, from the rainforest, from China, and so on, to test for chemotherapeutic efficacy, they have these 60 cancer cell lines that they've been keeping around for years. And so Todd's group, at the Whitehead, did the state of the art hybridization with Affymetrix. And Pat Brown's group, the best practitioners of spotted arrays, did the same analysis with spotted arrays. And shown here is something that, again, you don't have to be a statistician to understand: the correlation of the genes that were in common across both platforms. And the correlation was terrible. There was almost no correlation. That meant that you could not compare the results from one platform to the other. And when I first published this result a couple of years ago-- it was actually a result of a course like this. One of the students did it as a final project. I told him, why don't you compare these two platforms? And I was convinced that he was wrong. When I looked at it, he wasn't. But I thought it might be some fluke. Since then, there have been about four or five papers doing this again. And it's getting better, but it's still pretty bad. Correlation now is up to about 0.6 across the different platforms. That's a real problem, obviously. Here's one of the many reasons, subsequently, that we've discovered for this problem.
Shown in orange are the RefSeqs, the reference sequences that the National Center for Biotechnology Information maintains, a curated resource of what is the definitive mRNA sequence of that gene. Shown in black are the Affymetrix probes, which part of the genome they interrogate. Well, watch it. They seem to be falling off the edge. So if you have, for instance, a cDNA that interrogates this part and an Affymetrix probe set that is off the RefSeq, it may have, in fact, very poor reproducibility. And we've subsequently done a study of that. We've actually looked at the position. By the way, Affymetrix previously had considered the exact sequences of these probes as proprietary. So it's only last year that they've revealed what those oligonucleotide sequences are, so that we can actually position them on the gene to know where, in fact, they're interrogating the gene. And consequently, when we eliminated those Affymetrix probes which fell off the gene, we had much better correlation across platforms than previously. There are other reasons, but that's, I think, a major reason. AUDIENCE: Maybe try to find some correlations [INAUDIBLE]? ISAAC SAMUEL KOHANE: No. So here is a study, that was done by one of our fellows, of Affymetrix going from HuGeneFL, which is one generation, to U95A. So this is looking at human RNA for the same genes. We took the same RNA, from the same muscle, and hybridized it on the same day. Looking at correlation coefficients, we got 0.7 and 0.59. Why do you think that is? AUDIENCE: [INAUDIBLE] Again, you might be looking at different-- ISAAC SAMUEL KOHANE: The answer is they picked different subsets. As they learned more about the human genome, you could be more and more accurate about which oligonucleotides uniquely represent that gene, as opposed to matching another gene. And so what we found is that the larger the number of probe pairs shared between the generations of Affymetrix microarrays, the better the correlation between those microarrays. But to answer your question, that means that if it was done on a previous generation of microarrays, even within the Affymetrix family, reproducibility is not good, at least for those genes that don't meet those criteria, which is a huge waste. Because, remember what I told you, we can do DNA sequencing on blood. But the expression analysis has to be done on the tissue that you care about. So if someone used up some precious brain specimens to do that, they're gone. And it'll take them several years to recreate them. When I actually first presented this at NIH, I actually heard a groan ripple through the whole audience. Because they had actually just blithely gone on from one generation to the other and assumed that they would just be able to analyze everything together. And that was millions of dollars down the tube. Please? AUDIENCE: So you assume that all the old stuff is just junk? ISAAC SAMUEL KOHANE: No. But what you do is try to figure out which stuff is reproducible, and you're careful about it. And it's actually quite doable. And we do it. But we don't do it naively-- and sure, we can't do it wholesale, the way they thought they were going to do it. Yes? AUDIENCE: The good thing is you can do [INAUDIBLE] process. You can just repeat the whole experiment. AUDIENCE: If you have the tissue. ISAAC SAMUEL KOHANE: If you have the tissue. Plus it's not-- even at $200 a shot to put on a chip, plus labor costs, maybe $500 a shot, so it's 500 times 100 patients, so it's looking like real money.
But I think the real problem there is limited tissue resources. It turns out to be a real issue. For those of you who have ever tried to do one of these experiments, getting the right tissue, with the right annotation, out of the medical system is very hard. In fact, that's why I really like the MEMP program, because it makes the engineers go and deal with doctors. So they understand what kind of social pressures are efficacious in getting the right kind of biomaterials out of the medical system. It sounds like a trivial issue, but it's not. So let me show you the following graph. Shown here, on the x-axis, is the amount of spiked-in probe. So this is a probe of known quantity. And here's the readout from Affymetrix on intensity. Shown in red are the perfect match probes. And you see, as you add more and more probe, it increases. Just as you'd hope, the signal increases. But what you see here is that it saturates at the high levels, right, as you'd expect it to. Now, pretty interesting are the mismatch probes. These are probes that were designed not to hybridize with the target sequence. But they, too, in fact rise. They lag, but they rise, as well, with the amount of spiked-in control. And in fact, they don't quite saturate. What does that mean? It means that the average difference actually starts dropping down. Because even though the perfect match is saturating, the thing that you're subtracting out, the mismatch, keeps on rising. So at the higher levels, you actually get a dropping signal. That's lousy. And that's with a clear solution background. Here, with a eukaryotic background for the spiking, it gets even noisier. You get less sensitivity and an even more marked effect. All of these things are true. And yet, you could actually do very, very good science with microarrays and actually discover lots of biology, but you just have to be aware of their limitations. Let's get back to the dangers of dimensionality. Remember how I said, in the very first lecture, I gave my somewhat lame metaphor about, if every base was a bead on a necklace and the necklace was worn by everybody in Shea Stadium, it would take on the order of 1,000 Shea Stadiums to have as many beads as we have bases in a single human genome. I was trying to impress you with the fact that, even though we'd like to think that gigabytes are tractable, it's still a pretty big amount. So let me revisit that question. Given 1,000 stadia full of people, with necklaces, with beads of 10 colors, and let's say that 1 in 1,000 necklace beads are different every baseball season. And you notice that the third seat, on the fifth row of all games, has a yellow bead in the middle of the necklace, every year, in the season opener, in the largest New York stadium, in the last 26 wins, in the last 102 years that the New York Yankees have won the World Series. In fact, let me restate that. Every time in the last 102 years that you, the very old sports fan, have seen that this equally old person is wearing a yellow bead in the same spot in the necklace, each one of those World Series, the Yankees have won. I'm just asserting that as a fact. How good a bet is it that the bead will also be yellow in that position next time the Yankees are in the World Series? Do any of you want to put money, your own money, on that bet? Anybody want to give me $1 for that bet, with $1 million in exchange, with $1 billion in exchange? The answer is no, of course.
Because it's all too easy, after the fact, to look through all the games and through all the millions of combinations of necklaces, to find one necklace that was highly, highly correlated with a particular outcome, whether it be the Yankees winning the World Series or the color of the Coke bottle tops that day. The point is, if you have enough opportunities to test something, you're always going to find it. Which brings us to the following point, if you have that much in the way of genes, and only hundreds of patients, it's going to be all too easy to find a correlation between the values of those set of genes and the outcome that you care about, let's say mortality. AUDIENCE: Can I ask a question? ISAAC SAMUEL KOHANE: Please do. AUDIENCE: I mean-- ISAAC SAMUEL KOHANE: Go for it. AUDIENCE: I actually need to think about it. ISAAC SAMUEL KOHANE: Go for it. Go for it. In fact, that would be a good project, just to calculate that probability. AUDIENCE: Well, I get what you're saying. And I would accept it wholeheartedly, without any reservations, if you were talking about just-- maybe I was misinterpreting-- ISAAC SAMUEL KOHANE: Yeah. AUDIENCE: --just about any necklace in the audience or whatever. But as you begin to-- I mean, the example that you gave was starting to get more and more and more and more specific. And if I remember, correctly, the more specifications that you put on a probability, the harder it is to-- ISAAC SAMUEL KOHANE: Right. AUDIENCE: --feel that. And so then you begin to get into correlations that actually are just-- are probabilistically significant right? ISAAC SAMUEL KOHANE: Except that-- you're absolutely right. Everything you said is true. But you missed the following fact, which is I had the opportunity now to look at all the people in all these thousand Stadia. And all I had to do was find one bead on one necklace that would predict the game. I had millions of opportunities to find that. But you see, that's very good, Todd. Because you just went through the same error that all the functional genomicists make. Because after the fact, I can always find, with the thousands of genes, some gene that was up or down by dumb luck across the phenotype that I care about, bad cancer versus good cancer. And that's an important intuition to have. And I'm glad you asked the question, because that's exactly it. If indeed, I had been able to-- if I had a hypothesis I kept updating. And every time, it got stronger and stronger, then, boy, was I a genius by finding that yellow bead. But if after the fact, after having done all the experiments with 102 World Series, I look and search all the attendants to find which bead that correlates, you know I'm going to find it. It's certain that I'm going to find such a bead. But what's the likelihood that that bead is going to be useful for the next World Series? 0 or something. AUDIENCE: Well, you can hypothesize. The correct approach for scientists-- and just keeping to your analogy-- would be to hypothesize that. You wouldn't say, it is. But you'd say, if this is truly a phenomenon that actually exists, then we should see it. And then if you saw it, we would be able to make a stronger statement about it. ISAAC SAMUEL KOHANE: Right. That is correct. But that's not what happens. AUDIENCE: That's not what people do. ISAAC SAMUEL KOHANE: So remember, this is my poster child for you, that I gave you, of large b-cell lymphoma. And I like this paper a lot. It's groundbreaking. 
And it says the list of genes that distinguish between low and high clinical risk. Do you remember this paper? All right. This appeared, I think, in Nature. 2002, same disease, different microarray platform-- it's out of Todd's group. And they, too, find high and low risk cohorts that are predicted by a set of genes. I have a process point, which is, how the hell do you use, essentially, the same technique on the same disease and get into another first class journal? This was, I think, Nature Medicine or was it New England Journal? I can't remember which. AUDIENCE: New England Journal. ISAAC SAMUEL KOHANE: New England Journal, I think. New England Journal. Yeah. No, no, no, that was Nature Medicine. Then yet a third paper-- and no, I don't fully understand how this can happen-- same disease, same question, also using microarrays, and they, too, find a high and low risk group. What was the overlap in the set of genes that predicted outcome in these? On the order of 20% to 30%, depending on how you sliced it. AUDIENCE: Chance. ISAAC SAMUEL KOHANE: Which is getting pretty close to chance. Now, I think there's some signal there. There clearly is some signal there, but I don't know how much of this yellow bead phenomenon we have, which is basically the multiple hypothesis testing problem revisited in a very, very vicious way. And the problem is-- this is more than actually a problem. I'm just irked, of course, that the same thing got published three times in first class journals. But the real problem is that people are being stratified today in oncology protocols based on expression profiles. So you know that, if they're not taking this multiple hypothesis testing really to heart, patients are being stratified the wrong way based on a subset of genes. And remember, I told you that you could get together as groups of two for a final project? I would strongly urge at least one group to think about looking at something like this. Perhaps this very study, because no one to my knowledge has actually published this yet. The emperor is not, at least, wearing enough clothes. What was the overlap? And what was the characteristic of the genes that overlapped across those? And could you, in fact, come up with a robust-- more robust-- set of predictors based on those three studies? But that's highly problematic. And I think most of the fault is in this overfitting problem. So why are these things inconsistent? Well, also, they were different populations. Not all humans are the same. And this brings me to another point, which is, why, all of a sudden, is it OK to do 100 patients and do a clinical study, whereas for everything else that we've ever done, with far fewer variables, we had to get thousands of patients? And the answer, of course, is because we have limited resources and so on. But I think it's somewhat delusional to think that we could get as good predictors, with only hundreds of patients, as we could get with thousands. Now it's true that we can characterize two groups. Because we have all these thousands of genes to measure. And we can measure broad patterns. In fact, Botstein, who was one of the leaders in this area, never pretends that any single gene is a reliable measurement. He says, just give me an overall impression of the biology, of the pattern that's going on.
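As a minimal illustration of the yellow-bead problem described above-- and of why gene lists selected this way can overlap so poorly between studies-- here is a sketch on pure noise: with thousands of genes and a few dozen patients, the single best "predictor" found after the fact always looks convincing, and then evaporates in an independent cohort. All the sizes and names are arbitrary, not taken from any of the papers discussed.

```python
# Minimal sketch of the "yellow bead" problem: in pure noise, searching
# thousands of genes for the one that best separates two small outcome
# groups will always find an impressive-looking predictor -- which then
# fails on an independent sample.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_patients = 10_000, 60

def best_gene_gap(expr, outcome):
    """Return (gene index, |mean difference|) of the most separating gene."""
    good = expr[:, outcome == 0].mean(axis=1)
    poor = expr[:, outcome == 1].mean(axis=1)
    gap = np.abs(good - poor)
    g = int(np.argmax(gap))
    return g, float(gap[g])

# Training cohort: expression is pure noise, outcome assigned at random.
expr_train = rng.normal(size=(n_genes, n_patients))
outcome_train = rng.integers(0, 2, size=n_patients)
g, gap_train = best_gene_gap(expr_train, outcome_train)

# Independent cohort: the same gene, new noise.
expr_test = rng.normal(size=(n_genes, n_patients))
outcome_test = rng.integers(0, 2, size=n_patients)
good = expr_test[g, outcome_test == 0].mean()
poor = expr_test[g, outcome_test == 1].mean()

print(f"best training gene #{g}: group gap = {gap_train:.2f}")
print(f"same gene in new cohort: group gap = {abs(good - poor):.2f}")
```

The training gap is large by construction of the search; the gap in the fresh cohort hovers near zero, which is the statistical core of the overfitting complaint.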
But when you start then taking a set of specific genes and hanging someone's prognosis on that, then you're perhaps getting close to a line where you have to be a lot more methodologically strict and understand what the power and significance of your studies are. And again, some of them are using different measurement platforms. There's overfitting and some use of indirect measures. So that's exactly where I wanted to end up today. I want to give you a heads up that, soon, you're going to be getting your first problem set. It's going to be a very simple one: it's going to be a treasure hunt through all the national databases, to make sure you touch all the databases, so that you can be a modern biological researcher and find out what you need to find out about the different biological databases. The other problem set is going to be one around clustering and classification. Just to make sure you can do it correctly. But I'd like you to start thinking about your final projects. And it's now mid-February. Again, I really would like you to have settled on your final project no later than mid-March. So, therefore, if you're having any doubts about whether you're honing in on a project, please talk to me sooner rather than later. And there's nothing wrong with that. In fact, it's just the opposite. It's great if you can think of a problem that you would actually like to do further research on; there's nothing more motivating than that.
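As a coda to the point earlier in this lecture about low-abundance transcripts-- that the same measurement noise which leaves a 600-to-300 ratio stable will happily push a 0.6-to-0.3 measurement past any fold-change cutoff-- here is a minimal simulation. The noise level and expression levels are invented; only the qualitative behavior matters.

```python
# Minimal sketch of why fixed fold-change cutoffs misbehave at low
# intensity: the same additive measurement noise barely moves the ratio
# of an abundant transcript but routinely flips a rare one past "2-fold".
import numpy as np

rng = np.random.default_rng(2)
noise_sd = 0.3          # same additive noise everywhere (arbitrary units)
n_rep = 100_000

for true_level in (0.5, 5.0, 500.0):     # rare ... abundant transcript
    a = true_level + rng.normal(0, noise_sd, n_rep)
    b = true_level + rng.normal(0, noise_sd, n_rep)
    a, b = np.clip(a, 0.01, None), np.clip(b, 0.01, None)
    ratio = a / b
    frac_2fold = np.mean((ratio > 2) | (ratio < 0.5))
    print(f"true level {true_level:6.1f}: "
          f"{100 * frac_2fold:5.1f}% of replicate pairs look >2-fold changed")
```

Nothing has actually changed between the two replicates at any level; the "2-fold" calls at low intensity are entirely the small-denominator effect described above.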
MIT HST512 Genomic Medicine, Spring 2004
Lecture 9: Machine-learning Approach
MARCO RAMONI: Today I'm going to talk to you about the-- about basic genetics, what geneticists do, and how genetics is moving into the genomic area by increasing the size, the scope, and the quality of genetic studies. We'll do-- [DOOR CREAKING] You can close the door. So the origin of all of this is-- I will put myself at the particular time point, and the origin of all of this is the publication of the Human-- of the results of the Human Genome Project, a first draft of the human genome. And when they announced the publication of the human genome, the first item on the future agenda was a SNP map, and they said, this SNP map promises to revolutionize both mapping diseases and tracking human history. And I will try to show you how you do both using SNPs, and if you don't know what SNPs are, hold onto your breath. You will know it in a second. [DOOR CREAKING] Please. The SNPs, I will tell you in a second in more-- in detail. SNPs are things called single nucleotide polymorphisms, and there are subtle variations in the human genome across individuals. What this mean? It means that one of the most startling statistics out of the Genome Project is that we're all similar, very, very similar at genetic level. But there are some very, very subtle variations, one base pair swapped here and there, that make us different and, most of all, makes our genomes different. And what you can say is that this is the real meat of the human genome. People paid billions of dollars not for the curiosity of a handful of scientists, but because the promise was that, once we know the human genome, it would be easier to track down the genetic causes of diseases and the genetic causes of-- and the-- possibly find some cure for them and get a better understanding. And as you will see, SNPs are the real meat of the human genome. So I will start from the '80s, when people started thinking about-- seriously thinking about developing a human genome project. And the Human Genome Project-- I will tell you what genetic polymorphisms are, what type of polymorphisms exist, and then we give you some basic terminology of what are the terms that geneticists use to describe the properties of a genome. And then I will talk to you about the real interesting stuff, which are complex traits, or those traits, those diseases, those observable features of an individual that don't come from one single gene, but they are the result of an interaction of more genes or the interaction of genes with the environment, and I will tell you why these traits are complex. And then I will tell you how people designed these kind of-- the experiments to identify these complex traits and also the simpler ones. And once they have designed it, I will tell you how do you analyze this data once you have them. And then I will tell you what's the word on the street on the latest fashion in analyzing the genome. So as I was saying, the intuition behind genetics is that we can find causes for diseases without knowing the actual mechanism of the disease, just associating changing in the genome to observable traits, observable characters. Now, as you can imagine, this is not really an easy idea to sell, and when people started selling it in the early 20th century, people didn't really like it a lot, especially biologists, who say, what kind of science is this? There is no mechanism in this. There is no understanding of how the things work. But then people started to deliver results, and how deliver results about this? 
Well, once we know that a gene exists-- before, it was kind of difficult, but-- I will show you how you did it before, before knowing how to genotype somebody. But one of the-- the main intuition that changed everything is due to a guy called David Botstein who, in 1980, showed that there are natural markers on the genome called polymorphisms that make us different and make this genome different, right? So if I have a little change in my population, what I can try to do is collect people that have the disease and collect people that don't have the disease and see which marker shows up more than I would expect in one group or another. And I will tell you how you play these games. You can play them in several ways, but the way in which we rely-- we do-- we conduct the studies is not by tagging something, but by leveraging natural tags, which are exactly these polymorphisms. This is just a summary of the central dogma of molecular biology. You all know this. You got a class from Dr. Butte, I think, about this. Some of you have masters and PhDs in biology, so you know this better than I do. The assumption is that there is DNA that is all identical in all our cells, and then this is transcribed into RNA. And RNA is then translated into proteins, which determine whatever we can observe of somebody: the traits, the diseases he is susceptible to, his physiology, the metabolism, even drug resistance. So the idea that polymorphisms could be used as natural markers created exactly the background of the Human Genome Project because, at that point, we had an intellectual tool, a scientific tool to track down diseases using the code. So the easiest thing we can do is to find out diseases that are caused by one single screw-up in one gene, and there are various ways in which this can actually be a screw-up or not. They can be dominant or recessive. Recessive means that you need to-- you know, you receive two chromosomes, one from your father and one from your mother. To be recessive, you need to have the gene screwed up on both chromosomes. To be dominant, one is going to be enough. And we classify these diseases also by another dimension, which is, if it's on the X chromosome-- which is-- has-- there's an asymmetry for men-- and/or if it's on any other chromosome. So if it's on any other chromosome, it's going to be autosomal and can be dominant or recessive, and examples are Huntington's disease and cystic fibrosis. And if it's X-linked, it can be dominant or recessive, and I will give you right away a little example of this. Today, there are about 400 single-gene diseases that have been identified. So for 400 diseases, we know that there is a little screw-up. Sometimes there is a very, very simple reason, because a base is turned from being a normal base, coding for an amino acid, into a stop signal. So when that particular sequence is read, the protein is not coded after that particular point and the protein is screwed up, and this creates disease. OK, so I need-- this is the boring part. I need to give you-- I need to give you a little terminology. Now, an allele is a sequence of DNA bases. So for each SNP, for each gene, for each whatever, for each piece of your DNA, you have two alleles, one coming from your mother and one from your father. A locus is a physical location on the chromosome. It's like [INAUDIBLE]. We know exactly where it is. We know it's there. We know which chromosome it is on, and we know the position on the chromosome.
Linkage is the proximity of two alleles on the chromosome, and it is kind of ambiguous because, as you will see, proximity may have several meanings in genetics. A marker is an allele with a particular position that we can track down somehow. The distance, the physical distance, is the number of bases between one point and another, but this is not the only distance we have, and actually it is not really the most interesting distance we have. The most interesting distance is the probability that two points on the chromosome will be recombined when you make your children. And this distance, this probabilistic distance, doesn't necessarily map on the chromosome with a constant number of bases, as you will see in a second. So there are-- this means that there are points on the chromosome that are easier to recombine and points on the chromosome that are harder to recombine, less likely to recombine. The measure by which we identify this probabilistic distance is called the centimorgan. A phenotype is an observable character. It can be susceptibility to a disease. It can be diabetes, a disease itself. The genotype is the internally coded inherited information, so it's a piece of DNA. And the penetrance, if you take the frequentist interpretation seriously, is the probability that, given that you have the allele, you will develop the phenotype. OK? So the physical distance between two alleles is measured in base pairs, but the recombination between them is not constant. Well, you all know Mendel's first law. We know that allele pairs separate when the gamete is formed, and they are randomly reshuffled and create new pairs. Now, the probabilistic measure we want to use is the probability that two points will be recombined at the next passage to the next generation. So we can say that, on average, 1 centimorgan, which is a probability of 1% of being recombined, corresponds to about 1 megabase. Every megabase, you have about a 1% probability of being recombined. Now, these are kind of little gossips. The human autosomal physical map is 3 billion bases, as you know, but the linkage map in centimorgans-- the probabilistic distance between points-- is different between male and female humans. There are 2,800 centimorgans for males and 4,200 centimorgans for females. Why's that? Females have a bigger genetic code. Males have this wimpy little Y chromosome, and females have this second big, beautiful X chromosome, which is much bigger than ours, less stable, but bigger. But if you compare these two things, you see that the correspondence between physical distance and the probability of recombination is also different for males and females. On average, the correspondence is about 1 centimorgan per 1 million bases, but it's a bit more than a million bases per centimorgan for males and a bit less than a million for females. And one notion that will be absolutely important for our ability to use markers is that of cosegregation, the fact that two alleles are transmitted together to the next generation. AUDIENCE: I just have a quick question on that. MARCO RAMONI: Sure. AUDIENCE: So why-- MARCO RAMONI: You're upset because they said that women had better chromosomes than we have? No. AUDIENCE: I agree.
The question I had is-- was back in the definitions of a locus, and I just wondered, so the-- those things, can-- are those affected by, like-- so you said it's the physical location on a chromosome. Can the locus of a particular allele change if the packing into the chromosome structure is altered in any way? Like, is that something-- MARCO RAMONI: That's a very good question. So the-- AUDIENCE: And then the follow-up to that is, if that is the case, does that affect the cosegregation as well? MARCO RAMONI: Yeah. So this is a very good question. You can imagine a locus as a variable and an allele as the state of this variable. So you have-- you are given this locus, and at position 3,022 on chromosome 5, you have an A. And if this is a polymorphism-- if this is not a polymorphism, everybody has A. But if this is a polymorphism, we expect that something like above 1% of the population will have a different letter there, like a T or a G. Now, this is due to mutations, and there is nothing we can-- it's not affected by the locus. But the packaging of the genes, of the chromosomes, where they are next to each other, may affect the probability that they will be transmitted together. So there are several reasons why this linkage disequilibrium exists. One is the physical recombination of this, which is very difficult to identify, and the second are historical reasons. As I will tell you in a second, Caucasians come from a handful of people who left Africa, and this handful of people were basically assimilated in this process. So we all come from a very, very restricted number of people that were alive between 25,000 and 50,000 years ago, which is a blink of an eye from an evolutionary point of view. So if you look at the map of an African American population, or an African population and a European population, you will see that there is a staggering difference between their variability and our variability. Now, if you look at our things, you may-- if you do a genotype study of Caucasians, you will see alleles that go together, but you will not be able to say if these alleles go together because of actual physical linkage or simply because we live-- we draw our genetic codes from a depleted pool in which only very few combinations are available. Does that answer your question? AUDIENCE: Yeah. MARCO RAMONI: I have a picture in a second that may be helpful in this. So let me go back to how a single-gene disease works and how dramatic the effect of a single gene can be. Hemophilia, as you know, is an X-linked recessive disease that is probably fatal for women. So if a man-- it's on the X chromosome. We men get only one X chromosome. So we only get the one from our mother. But-- and we-- so if a woman has it, she doesn't have any manifestation. She's just a carrier, and if a man has it, he has manifestations of the disease. We don't observe women with-- it's very rare to find women with hemophilia on both X chromosomes. Now, this is a major screw-up in the history of Europe, caused by a gene like that. Now, this is Queen Victoria's family tree, and Queen Victoria is the one-- is the second-- is the one right in the middle, half orange and half white. So if they're round, they are women, and squares are men. And if it's half colored, she's a carrier. If it's all colored, it's an affected child. So when Queen Victoria had her fifth child, but her first male-- we're talking about monarchy. Males have some kind of importance here.
The first male, the male turned-- Leopold, the male, turned out to be hemophiliac. And she was very, very upset, and her declaration was that their blood was strong and there was no change in it, no weakness in it. So if we take her seriously, what would you think had happened? I mean, this thing has incredible consequences. If you look at down here and you see this row of seven-- of five children with no descendants, that's the Russian family. They were exterminated by the-- during the Russian Revolution, and people say that the wife of Nicholas II started seeing Rasputin-- was one of the major causes of the upsetting of the population-- because her first male born was hemophiliac, so a major screw-up. I mean, this led to the withdrawal of the Russians from the First World War and big, big, big changes. I mean we still pay taxes because they are communist, right? So what do you think happened? Who is responsible for it? They start from the assumption that they're strong, their blood is strong. You say, OK, so poor Victoria had a mutation. Would you believe that? Poor Leopold had the mutation? How would you compute that? What I would do is look at the randomness of the distribution of those alleles in the second generation. We cannot genotype these people. We can infer the genotype only by looking at what they have, right? So you would expect that-- all of these guys on the third row, they are Victoria's offspring. You would expect that 50/50 distribution, right? Now, for some of them, we cannot really evaluate. We have two-- we have three we have to remove out because they don't have the same, and we don't know anything about that. If you take the other, we have 1, 2-- we have 1, 2-- they are actually healthy, and we can almost say that. And then we have three that are either affected or carriers. This is a good 50%, right? I mean, I would say that you cannot really get any better than that with this sample size. It's almost a perfect 50%. If you go down and look down streams, you will see that the ratio will be exactly the same. For the women-- for the daughters of Nicholas II, we cannot know, but if you look at R in there, you will see, you have the same distribution. So I would say that from Victoria onward, everything is pretty consistent. So she didn't-- so, OK, she's might have a mutation, or she may have been cheating on her husband? No, because otherwise she would've been cheating with everybody, right, for all the children. We have a very pretty nice distribution of this. So what happened? How about her mother? Well, her mother, we don't have really enough information. How about her father? Her father married a woman 20 years younger than he was. He was rumored to be homosexual, and he died six months later Victoria was born. The postman maybe? If you have an hemophiliac postman, there would be a good explanation. Now, somebody made some research and found out that a grandmother of Queen Victoria's had two affected siblings who died of hemophilia. So in this case, the blood is not really that strong because it's coming in from Saxe-Coburg, from that line, and becomes part of the monarchy of the European aristocracy. You don't like this kind of gossip? I mean, this is-- [LAUGHTER] I mean, this was really the story of the day, of the century, in-- at that time. But you see how we did it. 
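Here is a minimal sketch of the back-of-the-envelope check just described: under the hypothesis that Victoria is a carrier, each child independently has a 1/2 chance of inheriting the mutant X, so seeing roughly half carrier or affected offspring is exactly what that hypothesis predicts. The counts below are approximate, taken from the informal tally in the lecture, not from a formal pedigree analysis.

```python
# Minimal sketch: binomial probability of the observed pedigree pattern
# under the hypothesis that Victoria carries one mutant X chromosome.
from math import comb

n_informative = 6     # offspring whose status can be judged (approximate)
n_mutant = 3          # affected sons plus carrier daughters among them

p = 0.5               # transmission probability if she is a carrier
prob = comb(n_informative, n_mutant) * p**n_mutant * (1 - p)**(n_informative - n_mutant)
print(f"P(exactly {n_mutant} of {n_informative} inherit the mutation | carrier) = {prob:.3f}")
# ~0.31: the observed pattern is entirely consistent with Victoria carrying
# a new (or inherited) hemophilia mutation, which is the inference made here.
```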
We had to wait 200-- a hundred and something years to find out what really happened because we had to observe-- we cannot genotype these people at that time, so we have to observe their, of course, characters of their things. But if we were able to genotype them, what could we do? Well, we could look at this thing of these small variations in their chromosomes, and the oldest variation we have been able to use are called simple sequence repeats and microsatellites. So these are-- let me see if I have a slide explaining this. No. So there are parts of the chromosomes of the genetic code, in which you have sequences like G-A-T-A that are repeated several times in your chromosome, in your genetic code. They are repeated 13 time in me and 15 times in you, two times in somebody else. And if we count these things, we can actually identify a region of the chromosome that by linkage-- remember, by being attached to it from evolutionary purposes-- we'll identify a part, a stretch of the genome that we can actually tag. Now, the problem is that simple sequence repeats and microsatellites add this little drawback. They cannot occur in really interesting regions. You cannot change to match a genetic code by repeating the same thing 13 times or two times and hope to get coherent proteins out of it, right? So we are confined in interest in regions that, by design, are supposed to be kind of uninteresting from an-- from a functional point of view. There may be interest from an evolutionary point of view, but they're kind of leftover of something else. Now, SNPs have this great property. They are one single base. And so it's something like this. You have your sequence, and you have that little base that in 90% of the population is T and in 10% of the population is G. To be sure that-- to qualify a single change like this as a SNP and not just a simple mutation, it has to have a minimal appearance in the population. And we stipulated this 1%, but nobody takes the number seriously because that's below the genotype error-- the expected error. So when you genotype people, usually you have a higher cutoff to say, well, this is not really a SNP. This is something that is just due to genotype error. They are also the most common type of variations. Not only they go in interesting places, but they can actually be in any place in the genome. And they have different functions and different roles in terms of protein coding. So a cSNP is a SNP occurring in a coding region. An rSNP is a SNP occurring in a regulatory region. The sSNP is the gene that occurs in a coding region, but by changing the SNP, you don't change the amino acid. Right, we have all this redundant vocabulary of amino acids, and in this case, it's going to be functionally silent, but it's going still to be a marker. Now, why actually we cannot do this-- why SNPs are important, and why we cannot have microsatellites in other regions? Well, this is a very old study, one of the very first. And what this Kreitman guy did was to sequence 11 alleles from a locus from a gene called alcohol dehydrogenase into drosophila. Now, if you-- you have 11 coding regions, and you have 14 sites that have alternative bases. Now, if you simply imagine that these are random changes, you would expect that about 70%, 75% of them would change the amino acid. Now, when you actually look at them, you see that basically none of them does. Why is that? Well, because this is a very important gene for this animal. 
They are born and they are nurtured in alcohol, like some of my friends in college. And they-- so the ability to detoxify alcohol is going to be a very important evolutionary point for them. So nature can make mistakes, but it cannot allow them so many changes in a critical region. But I want to challenge a little your Darwinian souls. Mind, it's not that we observe them, we have the change, the random change, we kill off 75% of them, and then we see that the remaining 25% have survived. These things just don't happen. There is no natural selection, in terms of environment, that is killing the 75 people-- 75% of the drosophila, OK? You with me? So in a Darwinian scheme, what we would expect is that you have a random mutation. You go out. You don't run fast enough. You get killed. In this case, you get a random mutation. Your random mutation is selected by the environment in some way, typically by killing you, and then-- before you can reproduce-- and that's it. But in this case, these things just don't happen, so there is no random change and then selection. There is something there in the control mechanism that will prevent the animal from having the mutation to begin with. OK? This is the SNP map I was talking about before. This is how you read a SNP map. So you remember that you have two alleles for each locus, and in this case, what we have done is to genotype all these people-- they are the rows-- for these loci, which are the columns. So if an individual has a blue spot, the blue spot means that he is homozygous at the major allele, so he has both chromosomes with the most frequent allele, the most frequent in the population. If he's green, he's heterozygous, and if he's red, he's homozygous at the minor allele. And white is a missing value. Now, how would you do an association study? Suppose I draw that red line and I tell you, you know, down here, I have the phenotype. Up there, I don't have the phenotype. Could you make some association? Well, you know, if you look here, there are some genes that, on one side, are all blue and, on the other side, are more colorful. You can do some statistics and say, well, this thing would allow me to distinguish between these two groups and predict which one is going to get the SNP-- the phenotype. Now, I'll ask you a slightly different question. This is your SNP map. What is the phenotype? What makes these two groups different, something that is observable? Can you say that? Come on. I'll buy you a pizza. Guess. Why? Guess. No taker? AUDIENCE: This is a human map or-- MARCO RAMONI: Yeah, it's a human map. These are two human samples. AUDIENCE: I mean, [INAUDIBLE] African Americans. MARCO RAMONI: You got the pizza. [LAUGHTER] Down there are African Americans. Up there are Caucasians. And so to go back to your question, if you look-- so the evolutionary pressure of the link between two elements in the European population-- take the fifth and the-- no, take the sixth and the seventh, two spots, right? There you-- no, let me do this this way. Take these two points, OK? These two points, if you study them, they go together; if you look at the statistics, they will go together. Why-- if you do it in a European population. Why? Because you don't have alternatives. You don't know if these things really go together because of some physical reason, or they're really just recombining like crazy but we cannot observe it because our population is a population that doesn't have that particular SNP and this SNP exists only in this other population. In this case, it's the African American people.
Does this answer your question? AUDIENCE: I'll have to think about it. MARCO RAMONI: OK. So let me go back here. So the first quantitative law we-- the most important quantitative law we have in genetics dictates how many major, minor, and heterozygous people we'll have in a particular population. Let me-- come on. OK. So in this case, you take a single allele, and you can actually see that you have a-- you expect to have more people homozygous at the major allele, fewer people heterozygous, and fewer people homozygous at the minor allele. Now, what rule exists there to make-- to distribute these proportions? So the law is the Hardy-Weinberg law. It states that the probabilities of being homozygous at the major allele, heterozygous, and homozygous at the minor allele follow this formula: p squared plus 2pq plus q squared equals 1. We call this a situation of equilibrium. When everything is all right in the population, there hasn't been some major screw-up, this is the law that dictates the distribution. This is what we expect in a population in equilibrium. Now, in a hermaphroditic population we reach equilibrium in one generation by redistributing these things. We are not hermaphroditic, so in our case, we need two generations to get to equilibrium. Now, you can use this to play a lot of little games, like, for instance, how many Caucasians are carriers of cystic fibrosis? Well, we know that cystic fibrosis affects 1 in 2,500 Caucasians, so this is our q squared, right? So 0.02 is going to be our q, and the frequency of nonaffected alleles is going to be 98-- 0.98, 98%, right? So we can plug p and q up there and compute the 2pq as 1 out of 25, which is a remarkably high number for a disease like this. Now, we make all these calculations under some assumptions. The first assumption we make is random mating, which is not a justification to have sex with anybody but is the mechanism of reproduction that we imagine exists in our population. That is, the person you will mate with will not select you on the basis of a particular genotype. If we are interested in that particular genotype, she would not ask you for your genetic map and say, no, you have a T in this locus, I don't want you. OK? If this is the case, all our calculations go down the drain. Well, we can make some adjustments then. The consequence of this is that we have a lot of problems when the selection is based on things that are proxies, like being my relative. If somebody is my relative, I'm going to introduce a bias because I'm going to select her on the basis of her genetic code. I know I don't do this for the genetic code, but I'm going to have-- I'm going to do it anyway. And the other assumption we have is that we have an infinite population, which is kind of sensible for us, for 6 billion people. So the other thing we-- so why do we need these assumptions? Well, because we know that the mechanism of spread of a character in the population-- it's called drift-- is the way in which a particular allele will get into the population and either conquer it all or just simply disappear, right? So at some point we will reach some kind of evolutionary stability in which these things will be around. And the problem with this comes the moment in which people mate locally, because, again, this creates some particular bias from the fact that your fishing pond is too small. So at the end, you are going to erase everything in that particular population.
If everybody mates with each other in a very small population, whatever variation we can observe is going to disappear. Now, the opposite of drift is mutation. Random mutations come up and show up in our genome, and the funny thing is that these two mechanisms-- mutation introduces changes, and drift either makes the changes stable or gets rid of them-- are in sharp contrast with what we were taught in high school, namely that there is some kind of selection. Where is selection here? Where are the people who get eaten, who don't run fast enough? Is there a way to account for selection in this framework? Yes, there is. Let's give a quantitative representation of selection: a fitness function, which tells you how good somebody with a particular allele is at surviving in a particular environment. Suppose you have an allele with frequencies 0.6 and 0.4. You apply the Hardy-Weinberg law and you get the distribution of homozygotes and heterozygotes in your population. Then you have a fitness function that says: if you carry the major homozygote or the heterozygote, you are all right, your probability of survival is 1; if you are homozygous for the minor allele, you have a diminished capacity to survive. From this fitness function we get a selection coefficient of 0.2, which tells us how many people we lose in that category at each generation. If you look at the effect on the first generation, you see an increase in the homozygotes for the major allele, a decrease in the heterozygotes, and a sharper decrease in the homozygotes for the minor allele. But there is something funny here: because the loss is proportional, the smaller the number of individuals homozygous for the minor allele, the smaller their depletion. So the mutation never really goes away. It simply stays there, slowly taking out individuals, lurking in the genetic code for years to come, OK? We have no explanation, in this framework, of why dinosaurs disappeared. It simply tells us that a particular mutation will keep floating around in the population, and we have no good mathematical model for how it will disappear. Now, everything here is based on the Hardy-Weinberg law, right? Does it really work? This group, [INAUDIBLE], in England in 1975, did this experiment. They took blood samples and looked at blood groups, which are again governed by a single locus, in a random population. And they found that if you compute the expected counts using Hardy-Weinberg and compare them with the observed counts for the MM, MN, and NN groups, the numbers are remarkably similar. In fact, they are so similar that somebody wrote a paper arguing the data had been falsified, because the precision is beyond any reasonable statistical expectation for this kind of law, OK?
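Here is a minimal sketch of that check, with made-up blood-group counts rather than the numbers on the slide: compute the genotype counts expected under Hardy-Weinberg from the observed allele frequencies and compare them with what was observed. The same computation, applied to the sickle cell data discussed next, is what reveals the excess of heterozygotes.

```python
# A minimal sketch (hypothetical counts) of a Hardy-Weinberg check: expected
# genotype counts from allele frequencies, compared with the observed counts.
# A large chi-square means the population deviates from equilibrium.
observed = {"MM": 360, "MN": 640, "NN": 300}          # made-up blood-group counts
n = sum(observed.values())

p = (2 * observed["MM"] + observed["MN"]) / (2 * n)   # frequency of the M allele
q = 1 - p                                             # frequency of the N allele

expected = {"MM": p * p * n, "MN": 2 * p * q * n, "NN": q * q * n}
chi2 = sum((observed[g] - expected[g]) ** 2 / expected[g] for g in observed)

print({g: round(v, 1) for g, v in expected.items()})
print(f"chi-square = {chi2:.2f} on 1 degree of freedom (small value: consistent with equilibrium)")
```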
But let's say they fudged the numbers because they didn't know enough statistics to realize they could get away with less perfect results. Let's buy the law anyway, because we have seen in other populations that, while the fit is not always that close, it really works. But we have also seen populations in which it doesn't work. The classic example is sickle cell anemia in West Africa. If you compute the distribution expected under Hardy-Weinberg and compare it with the distribution observed in the population, you see a remarkable difference. Look at the difference between those two numbers: we expect to have 254, and as a matter of fact we have 645. Why? AUDIENCE: [INAUDIBLE]. MARCO RAMONI: Say again. AUDIENCE: [INAUDIBLE], so you can't observe. MARCO RAMONI: Careful-- the number to explain is not smaller than expected, it is bigger. [LAUGHTER] You see, you have more heterozygotes in the middle than you should. The inconsistency is that you have too many heterozygotes, given the homozygotes for the minor allele. You shouldn't have them. You see? AUDIENCE: But isn't that evidence of selection, evidence of gene selection? MARCO RAMONI: You get two pizzas. The heterozygotes get protection against malaria, so you have a bias towards having a lot of heterozygotes in the middle, and that keeps the allele around in the population. The homozygotes below don't really change, but the selective advantage is for these 5,400 people who are heterozygous. Now, this is a test that probably wouldn't work here, because here malaria is not really a big issue, but in West Africa it is a big problem. AUDIENCE: If we did that test at birth or before birth, would it be consistent with the prediction? MARCO RAMONI: Before birth? AUDIENCE: If you did the same distribution of the alleles before they had this pressure of malaria? MARCO RAMONI: Well-- AUDIENCE: Is it the dying-off factor that's causing the difference, or something-- MARCO RAMONI: No, there is no dying off. Let's put it this way. We effectively do measure it before they are born, because the selection is actually on the parents. The pressure, as usual, is on the parents. If you keep this variant around, the chances that those people will reach an older age and reproduce are higher. So the population as a whole has a global advantage in keeping this particular variation around, but only in a particular form, which is the heterozygote. Being homozygous is too bad for you, but the heterozygote strikes a nice balance between getting the disease and being protected against malaria. If you change the environmental conditions, you're screwed. Here, I'm pretty sure, if you ran the same test, this would not show up, because the population as a whole doesn't have that pressure. Now, how do we map these things? How do we use a marker? Well, we want to find the genetic basis, a genotype, for a particular disease, which is our phenotype. And what is the mindset of somebody looking for this? We are not really looking for the SNP or the mutation that is actually causing the change.
We are trying to find a marker that is in the proximity of the actual causal variant-- we would be happy to find a marker in the proximity of the actual cause. So the mindset is more or less this. I have a marker, and I observe in my data set a dependency between my phenotype and my marker. But in reality, the extent of my claim cannot be "this marker causes this phenotype." The extent of my claim can only be: this marker is in linkage disequilibrium with some real causal genotype that I have not observed. And you see where the complexities start emerging. The first complexity is that the genotype and the phenotype may have a complex form of causality: they may interact with an environmental condition, or there may be incomplete penetrance. The other is that the linkage disequilibrium between the marker and the causal genotype is not perfect: I don't expect everybody to carry exactly the same pair of alleles. This is where stochasticity comes in, and this is what makes our life slightly more complex but definitely more interesting. Now, most traits don't really follow single-gene models. If you look at the list of diseases I gave, they are, in a sense, minor diseases-- they have a small incidence. They are horrible diseases, but curing them is not going to save the 50% of people who die of stroke, or the people who have diabetes-- the more complicated, multifactorial conditions that are the more interesting diseases. On the other hand, even some Mendelian traits are actually complex. Sickle cell anemia is a classical Mendelian disease-- we know why you get it-- but the phenotypic variability of the disease is immense. There are children who die at 13 of stroke, there are people who live forever, and there are perfectly healthy carriers. We don't know why this happens. We know why these people get the disease, but we don't know why the course varies so much. So people are, for instance, studying the early mortality of kids with sickle cell anemia, trying to find [INAUDIBLE] SNPs, and they say: these people get this particular disease, which has a genetic foundation, but maybe the difference in their chance of long-term survival is really due to another SNP. And actually, they are close to finding it. Another problem is incomplete penetrance, when you have an interaction with another variable. A recent, very famous case is BRCA1. BRCA1 is a gene, a particular locus, that predicts a greater risk of developing breast cancer in a woman. Now, we know that it is relatively rare for a woman to develop breast cancer before menopause, so in their 40s you expect a low incidence. If you just average across people with and without the BRCA1 mutation, the evidence is not really that big-- carriers show only a slight increase in the chance of developing breast cancer. But if you split them according to their age, using age as an environmental factor, you see how the risk increases over time: at 40 it is about 37%, but by 80 it is about 85%. Put them all together, and you have something that is not such a clear result. On the other hand, the challenge here is to find out which environmental factor is actually doing the deed.
The challenge is to find out that it is age that is splitting these women into different groups. So this is a necessary-versus-sufficient problem, but we also have a redundancy problem. For instance, retinitis pigmentosa may be caused by something like 14 independent mutations, and any one of these mutations will show up positive in a genetic test for association with the disease. And there are diseases known to have polygenic causes, where if you have only one of the mutations you will not develop the disease-- like Hirschsprung disease, in which you need two different mutations on two different chromosomes. Which, by the way, opens an interesting question: what is the evolutionary relationship of loci that live on two different chromosomes? Are they completely independent? We have very complex systems whose genes are spread across different chromosomes, and yet we are able to rebuild these systems from one generation to the next. How do we reconcile this with the idea that distances are measured in centimorgans, so that it makes no sense to even compare the distance of two loci on two different chromosomes? This is a rather radical notion, but it was proposed in the '30s by Sir Ronald Fisher, father of statistics and of modern genetics, under the name of affinity. He didn't have genotypes at the time, but there was a paper in Nature discussing this very strange phenomenon: how likely it was that things very far apart in the genetic code he was imagining had no physical relationship, and yet were functionally tightly related. So how do we try to dissect all this complex stuff, and also the simple stuff? The traditional way is to play the same game we played with Victoria's secrets: find a large pedigree, wait, you know, those 150, 200 years, and see what happens across three generations. We can do this with drosophila much faster, but if we want to approach human diseases, drosophila doesn't always cut it. So if we want to study the same thing in humans, what do we do? One option is to find a large pedigree. Another is to say: I don't really need a large pedigree. What I can get are trios-- mother, father, and child, a very modern nuclear family-- and because I know the parents, I can get statistical evidence of transmission by repeating the same measurement in many different families. It is as if they were one huge pedigree, except that I can do everything in two generations. But sometimes, for some diseases, finding parents is very difficult. If you are looking at complex traits that show up late in life-- there are people studying the genetic basis of longevity, and they recruit subjects in their late 90s and early 100s. It's kind of difficult to track down their parents, and most of all, it's difficult to genotype them. So what do you do in this case? You get siblings. If you get the brothers, you try to figure out what the parental genotypes were: you assume there is some kind of random distribution of the alleles across the two children, and you work back to the original genotype of the parents.
Or you can do a standard case-control experiment, in which you collect a bunch of people-- some of them have the disease, the others don't-- and test for association with a SNP, a mutation, and so on. There is another way to categorize these experiments. On one side are the double-sided designs, like case-control experiments: I have both sides of the story, the affected and the unaffected. Or I can have single-sided studies-- and I will show you how you analyze them-- in which you recruit healthy parents with an affected child, and you look at the transmission from parents to child that makes the child affected. In this case, everybody in your recruited pool is either affected or related to an affected person, which usually makes life much easier for the recruiter: if I am affected, or my child is affected, I am going to be much more willing to volunteer for the study than if I am just a random bozo who doesn't have the disease and who, by design, has to be unrelated to any subject with the disease. I will go into details for each of these studies. The first thing we can do is linkage analysis, which is the traditional analysis of pedigrees. The second is allele sharing. Remember what I was telling you about brothers, and siblings in general? If you have siblings, you can imagine there is a random distribution for a particular SNP, a particular mutation, so two siblings have a certain probability of sharing one allele or the other. If you start deviating from that distribution, you start becoming a bit suspicious. Association studies are standard case-control studies in which you compute the likelihood that a particular SNP is affecting your outcome. The TDT, the Transmission Disequilibrium Test, is the test we use for the trios, and I will tell you about it in a second. And then there are complex, quantitative traits-- things that are not binary diseases but come in degrees, like your inclination to start drinking or your ability to quit smoking. The best way of studying these quantitative traits is to use animal models, crosses, which is not something we are allowed to do in humans. Typically, these collections are hypothesis driven. So what people do is wake up in the morning, having read a lot of papers the night before, take a shower, and say: ah, here is a gene maybe related to my disease. They go out, they ask for money, they recruit a bunch of people, use the money to genotype them, run the experiment, and write a paper in which they either find something or find nothing. Usually you can only write the paper if you find something, and this has induced misbehavior in scientists. The challenge is that this is the old-fashioned way. What is really precious today is the sample of people. Imagine you have a single drop of blood from an individual and you can type a gazillion SNPs on it; in that case, you don't have to read a lot of papers. Or you can have your doubts.
You can say: well, maybe it's not only this gene. Maybe it's a second one. Maybe a third. Maybe the fifth. What if I collect 500 SNPs and test all of them? What if I do an analysis in which I don't pick any particular SNP-- I collect all of them and fish for dependencies in the data set? In that case, things become really, really difficult. This is how things used to be done. This is a three- or four-generation linkage study. You have a particular disease; the people who are affected are the red ones, the people who are not affected are the bluish ones. And you study the mechanism of transmission of a particular SNP, a particular mutation. What you use is a quantity called the likelihood ratio, or LOD score. What is this? You build a model of transmission that accuses that particular SNP of being responsible for the disease, so the transmission of the SNP follows the pattern of the observable character in the individuals you see. Then you build another model in which the mutation is not responsible for it. Using very simple statistical methods, you can compute the probability that the data you observe were generated by one model and the probability that they were generated by the other. And you can compare the two and say: the model in which this mutation is responsible for the disease-- in which it follows the same pattern of inheritance-- is n times more probable, or less probable, than the hypothesis that the two patterns are not associated with each other. Now, the problem is that if you have a large pedigree and many SNPs, you end up with the problem of multiple comparisons. I have to tell you this because it is my job, even though I don't believe in it. If you one day want to publish a paper using standard classical statistics, you will end up in a muddy area called multiple comparisons. Multiple comparisons come from a convention of classical statistics: you declare that two things are different if, were you to repeat the experiment a hundred times, or a million times, you would be wrong fewer than 5 times out of 100. You have heard of the p-value. This is the p-value: the probability of declaring an association by chance when there is none, which we want below 0.05. Now, if I am testing one hypothesis, that is my error level. Suppose I am testing two hypotheses. The chance that at least one of them comes up positive by chance grows-- you have to combine the two probabilities. Suppose I am testing 500 SNPs. To keep the overall chance of a spurious finding at 0.05, the threshold for each individual test has to become so small that it is almost impossible to prove anything: the chance of picking up some hypothesis purely by chance is so high that I need an enormous amount of evidence to accept any particular one. Frequentist people have this problem. Bayesian people don't. But it is a problem: if you want to publish in a journal, this is something you will run into. OK.
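Here is a minimal sketch of the arithmetic behind that problem. The Bonferroni correction shown at the end is one standard remedy, used here purely as an illustration; the lecture does not prescribe it.

```python
# A minimal sketch of the multiple-comparisons problem: if each of n independent
# SNP tests is run at a per-test threshold alpha, the chance of at least one
# false positive grows quickly; the Bonferroni correction keeps the family-wise
# error rate near 0.05 by shrinking the per-test threshold to 0.05 / n.
alpha = 0.05

for n_tests in (1, 2, 10, 500):
    # probability of at least one false positive if every test uses alpha = 0.05
    family_wise_error = 1 - (1 - alpha) ** n_tests
    # per-test threshold needed to keep the overall error near 0.05
    bonferroni_threshold = alpha / n_tests
    print(f"{n_tests:4d} tests: P(>=1 false positive) = {family_wise_error:.3f}, "
          f"Bonferroni per-test threshold = {bonferroni_threshold:.5f}")
```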
So allele sharing, the thing I was telling you about before, is a nonparametric method to assess linkage. You use siblings, and you look at alleles that are identical by descent: I know what distribution of sharing I expect if the two siblings are drawing their alleles at random from the same pair of parents, and for a particular SNP, a particular mutation, I check whether there is a deviation from that expectation. These are siblings; I look at the probability that they share this SNP by chance. If affected siblings share a particular allele more often than chance would predict, that makes my SNP suspicious. Making it suspicious doesn't mean proving it. If you want to prove something like this from a statistical point of view with a nonparametric method-- nonparametric because you make no assumption about the distributional nature of your data-- you are going to need a huge amount of data, which is not always available; the test itself, though, is really, really simple to run. It is a weak test, because the only thing you prove is that that particular allele is transmitted differently from what you would expect at random in the population. Association studies, instead, are typically done using parametric methods, and they test for association between a particular phenotype and a genotype in two different samples. What is the problem? Sometimes you recruit your sample from two different populations. I know it sounds absolutely silly, but this is one of the major concerns. Suppose the people in red all have asthma and the people in blue don't, and I find an association between a SNP and the phenotype. But then I discover that, for some reason, I have recruited everybody with asthma from Finland and everybody without asthma from Sicily. Is this SNP going to account for asthma, or is it going to account for whatever other differences exist between these two populations? This is called population stratification, or admixture, and I know it sounds silly to recruit across different countries or segregated populations. On the other hand, you have no guarantee that you don't have stratification in your sample. Even if you recruit in beautifully multicultural Massachusetts, the probability that you will find something about your sample that you don't really like, like a stratification, is not negligible. So what is really, really trendy today is the transmission disequilibrium test. Transmission disequilibrium tests are based on the idea of using the parents of an individual as his controls, so you are sure you are not going to get any stratification. I recruit a bunch of triads-- father, mother, and child-- where the child is affected and the parents are not. And then, for each particular SNP, each particular mutation, I compute whether there is a dependency, a pattern, between the distribution of transmitted alleles and the fact that all these kids are affected.
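Here is a minimal sketch of the counting behind such a test. The counts are invented, and the McNemar-style chi-square used here is the standard textbook form of the TDT, shown as an illustration rather than as the lecturer's own implementation.

```python
# A minimal sketch of a transmission disequilibrium test on made-up trio data.
# For each heterozygous parent we record which allele was transmitted to the
# affected child; under the null hypothesis the two alleles are equally likely,
# and the TDT statistic is a McNemar-style chi-square on the two counts.
transmissions = (
    ["A"] * 68 +   # heterozygous parents who transmitted allele "A" (hypothetical)
    ["a"] * 32     # heterozygous parents who transmitted allele "a" (hypothetical)
)

b = transmissions.count("A")
c = transmissions.count("a")
tdt_chi2 = (b - c) ** 2 / (b + c)    # 1 degree of freedom

print(f"transmitted A: {b}, transmitted a: {c}")
print(f"TDT chi-square = {tdt_chi2:.2f}; values above ~3.84 are significant at 0.05")
```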
And in this way I have a very powerful test, with controls that cannot be stratified. The only problem is that, as I was saying, it is not always easy to find the parents. And the other problem is that you may run into phenotypic stratification. Suppose you are looking at a particular disease susceptibility-- the example I have is cholera. People believe there is a SNP, and are looking for it, that makes people more susceptible to a bad outcome from cholera. What they do is recruit households: when somebody shows up at the hospital, they recruit the entire household and use the parents as controls. The problem is that the phenotype is not that easy to define: we don't know whether the parents, for instance, acquired immunity 20 years earlier because they had cholera. These are the parents of a child who is 15 years old; in places with serious cholera these are still young people, though in the local societal structure they count as old-- they have been around for some 35 years. So by design you end up with a population whose phenotype is very difficult to characterize. Then there are QTLs-- quantitative traits, traits that vary in intensity. I am not looking for something black and white; I am looking for something with different degrees of severity. Until last year there was really no proven way of doing these things, but people are working very hard, because QTLs are actually among the most interesting things, and they are looking hard for ways to characterize at least some types of QTL, including complex QTLs with censored data. One of the studies we have been doing is about breast development in girls. Why? Because breast development has been associated with breast cancer: we know that late breast development is protective against at least one type of breast cancer. Now, this is a QTL, because you are looking at age, which is not a binary variable. And it is a complicated QTL because you may have censored data: girls who at some point decide, for one reason or another, to drop out of the study. The way you can handle this particular QTL-- not every QTL-- is to represent the structure as a survival study, in which a particular SNP plays the role of the treatment and the other allele is the control. You draw your Kaplan-Meier curves, in this case not for survival but for breast development, which is the endpoint, and at the end you can compute the difference between the two samples and get evidence that there is a factor that separates them. You can also use other multivariate structures that allow you to study the interaction of SNPs in a QTL. But typically, if you want a general model for QTLs, you have to resort to animal models. Now, there is another interesting thing that has been happening, mostly on this street, for the past four or five years. We have two or three large phenotypic studies run by Harvard and its affiliated hospitals. The oldest one is the Framingham Heart Study, which collected, I don't know, some 50,000 people.
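Coming back for a moment to the survival-style representation just described, here is a minimal sketch on invented ages: split the girls by the allele they carry at a candidate SNP, treat age at breast development as the event time, and build the Kaplan-Meier curve for each group. A real analysis would also compare the two curves with a log-rank test.

```python
# A minimal sketch (hypothetical data) of a Kaplan-Meier curve for a censored
# quantitative trait: age at breast development, split by SNP carrier status.
# Girls who dropped out are censored at their last observed age.
def kaplan_meier(times, events):
    """Return (time, survival) points; events[i] is False for censored subjects."""
    at_risk, surv, curve = len(times), 1.0, []
    for t, e in sorted(zip(times, events)):
        if e:                                  # an event (development observed)
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                           # censored subjects just leave the risk set
    return curve

# hypothetical ages (years) at breast development; False = dropped out (censored)
carriers    = kaplan_meier([9.5, 10, 10.5, 11, 12], [True, True, True, False, True])
noncarriers = kaplan_meier([11, 11.5, 12, 12.5, 13], [True, True, False, True, True])

print("carriers:    ", carriers)
print("non-carriers:", noncarriers)
```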
The Framingham people have been followed, and now we are on the third generation. They have been followed, their family members have been followed; we know basically everything about these people. They were selected for heart disease, but there are also cohort studies that were not selected for anything, like the Nurses' Study. Nurses are wonderful individuals, incredibly compliant with doctors' orders, so even if you harass them for 30 years, asking them every other year for a complete report of what they have done, what diseases they have had, what their diet is, how much they weigh, they will comply. The Nurses' Study has 150,000 women who, every other year, go and fill in a questionnaire just for the sake of it. So about these women-- and this is now in the second generation-- we know absolutely everything. We know how many carrots they eat every year. And this is why you see those articles in The New York Times: they come out of Channing, or out of the School of Public Health, but usually they mean Channing-- the people who own these data. The articles are like: red hair will make you live longer, or eating five carrots a day has been associated with brighter eyes. Where do those findings come from? From the Nurses' Study. There are people who mine that data set. Again, they don't read much; they take a shower in the morning and say, what can I go and look for in the Nurses' Study today? How about the association between mascara and blindness? We have 150,000 women; a good proportion of them use mascara every day and a good proportion don't, and we know it. So they go and look. The joke on the street is that they don't publish in the New England Journal, they publish directly in The New York Times, because these are very high-impact questions that people get right away. But imagine if you could take all those phenotypes, genotype those women-- we have blood for about 90,000 of them-- and run a full genetic scan. What could you do? You could find an association between mascara and SNPs, or blindness and SNPs, or anything and SNPs. Can we do this? Why don't we do it? Well, a genotype costs about $0.45, just to give you an idea of where we are, so a full SNP map for one individual is going to cost about $90,000, and 1,000 individuals are going to cost you 90 million. And that kind of money is very difficult to find in this structure: if you go to NIH and ask for it, you are not going to get it. Labor is not the problem-- postdocs at Harvard are cheap, and they work very hard-- so we could actually produce 1,000 complete SNP maps in about 7,000 postdoc-days, which is about two years for 10 postdocs. Nothing. How can we solve the money problem? Well, remember when we said that these things go together? Maybe we don't need to genotype all of them. This is an example from a gene-- I don't remember which gene-- a map that shows the pairwise relationship between markers in terms of r-squared, the measure I was telling you about before. As you can see, you can identify blocks, the blue areas, in which the correlation is so high that you don't really need to genotype all of the SNPs. You can just decide which ones to type.
And you say, OK, pick any of them. The problem is that to know this, you first have to genotype all of them, and at that point there is not much purpose. The other problem is: once you have the blocks-- here it's easy, right? I have a region of high recombination and four or five blocks of lower recombination, so I can say, OK, I pick one, two, three, four. But which ones do I pick? And suppose I don't have five SNPs; suppose, as in this case, I have 59, which is not even that many-- there are regions 229 long. So, your guess: how many SNPs do I need to genotype to account for all the variation? Not which ones, of course-- that would take a little longer-- just how many? These are the SNPs, and these are all the combinations we see in the population. The SNPs travel together, so I don't expect to see all possible combinations. How many SNPs do I need to account for the 14 alternative variations observed over these 59 SNPs? I want a number. Jose, you're up for your third pizza. 15? AUDIENCE: 10. MARCO RAMONI: 10? OK, we have a 10 here. Anybody getting closer? That's a good guess. Five. These five SNPs allow you to reconstruct all the variation without any information loss. I have a good friend at Channing who, when we developed a method to identify these, bought me dinner, because he had spent two years staring at these pictures trying to figure out which SNPs to type-- and, when he got it wrong, getting a lot of grief from people: oh, you had me genotype the wrong thing. It is a mind-numbing task to stare at this picture and try to find the optimal ones to genotype. So what people have been doing is to work with a structure called haplotypes. Haplotypes are the stretches you get from your mother and from your father-- stretches of the genome that travel intact on one chromosome. Now, the problem with haplotypes is that they are very difficult for us to identify. When you genotype somebody, you get maps like the one I showed before: I can tell you whether you are heterozygous, homozygous for the major allele, or homozygous for the minor allele, but I cannot tell you which base sits on which chromosome. If a particular SNP has A and T as its two alternative bases, I can tell whether you are A-A, T-T, or A-T, but I cannot tell whether the A came from your mother or from your father-- I cannot decide whether the alleles at two neighboring SNPs are coming both from your mother or both from your father, which is exactly the information we need to build these maps and reduce the genotyping. These phased structures are the haplotypes. There are molecular methods to obtain them-- horribly expensive-- in which you physically separate the two chromosomes, so you know that one is coming from your mother and one from your father. There are stochastic methods, in which you make some assumptions about how these things are transmitted in a population and estimate the distribution of entire haplotypes. But the best way is to use trios and infer which allele is coming from the mother and which from the father. And so what people are doing now is building a thing called the HapMap. You can go to hapmap.org and find all the data that is currently available. It is systematic.
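To make the earlier point about 5 tag SNPs for 14 haplotypes concrete, here is a toy sketch. The haplotypes below are invented and much smaller than the 59-SNP block from the lecture, and the brute-force search is only meant to show the criterion: find the smallest subset of columns that still distinguishes every observed haplotype from every other.

```python
# A toy sketch of tag-SNP selection: given the distinct haplotypes observed in
# a population, find a small subset of SNP columns that still gives a different
# pattern for every haplotype (so the remaining SNPs add no information).
from itertools import combinations

haplotypes = [          # rows = observed haplotypes, columns = SNP alleles (0/1), made up
    (0, 0, 1, 0, 1, 0),
    (0, 1, 1, 0, 0, 0),
    (1, 0, 0, 1, 1, 0),
    (1, 1, 0, 1, 0, 1),
]

def distinguishes_all(columns):
    """True if the chosen SNP columns give a different pattern for every haplotype."""
    patterns = {tuple(h[c] for c in columns) for h in haplotypes}
    return len(patterns) == len(haplotypes)

n_snps = len(haplotypes[0])
for size in range(1, n_snps + 1):                     # smallest subsets first
    found = next((cols for cols in combinations(range(n_snps), size)
                  if distinguishes_all(cols)), None)
    if found:
        print(f"{size} tag SNP(s) are enough, e.g. columns {found}")
        break
```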
The resolution of the HapMap right now is 30,000 bases, 30 kilobases-- a systematic genotyping of SNPs in a relatively small population. There are 30 trios with which we can actually work out the phases-- the fact that one allele is coming from the father and one from the mother is called the phase of a genotype-- because we can identify the phases from the parents. And why do this? Because the next time we want to design a genetic study, we don't have to run the whole study and then say, oh, I could have genotyped just these. We can go there, say we are interested in this particular gene, and find out which SNPs we really need to genotype. But there is even better news: apparently all these things go together in stretches. This is a paper from a couple of years ago in which they built a high-resolution map of a region of chromosome 5 associated with Crohn's disease. And what they found, aside from the clinical results, is that this region, from an evolutionary point of view, is broken into 11 sub-regions. These regions are stable-- they are transmitted together, so to speak-- and they are interrupted once in a while by hotspots of recombination. And the interesting thing is that if you look at all the common alternative haplotypes in the population-- you see the different colors?-- they actually come from four ancestral haplotypes that have been recombined over the generations. So again, this goes back to your question. You see the big block there in the middle, 92 kilobases? I'm not entirely sure that, from a physical point of view, there is no recombination hotspot in there. But because everything comes from four haplotypes, with no other variation, I cannot see any recombination, because recombination needs a pool to fish from, and in this case the pool is very, very poor. Needless to say, these are all Caucasians, and they all come from a handful of individuals-- these are 129 trios-- descended, apparently, from four ancestral haplotypes. The way you identify these blocks is with Markov models; I will talk about Markov models next week with the machine learning methods, so I will not bother you with this now. And at that point, once we have the blocks, what we can do is identify the SNPs that tag each block and genotype only those. And what we have shown is that the saving grows with the length of the block: on average you need about 10% of the SNPs, but if the block is very big the fraction you need is much smaller than 10%, and if it is a small block the fraction is bigger. So the fundamental take-home message for a medical class is that all this technology has these beautiful properties for finding out stuff, but the only way to find it out is to have good phenotypes. One day running a SNP will cost one cent, maybe a tenth of a cent-- near nothing. What makes these studies precious, what makes them possible, is having something like the [INAUDIBLE] study, with critically annotated, carefully recorded, nicely characterized phenotypes.
And this is where the other side of medical knowledge comes in: not to find out what the cause really is, but to identify a particular phenotype as precisely, as crisply, as possible. At that point you can actually map the phenotype. If your phenotype is slippery, it is very unlikely you will find any association, and if you do find one, it probably will not be an association with the thing you want, because it will be a different phenotype from the one you are thinking of. So the critical thing here-- to go back to the very first slide-- is not really to do bean-bag genetics. The critical thing is to have good medicine that is able to characterize phenotypes, which in turn allows these studies to happen and produces good explanations for those phenotypes. And the other big take-home message is that hypothesis-driven is out of fashion. We don't need hypotheses anymore. With enough money we have the skills, we have the intellect, we have the phenotypes-- we have enough juice to go and get our answers without thinking them up in the shower, by looking systematically at the genome and taking advantage of the fact that we can say one candidate is more likely than another. If I tell you that something is associated with something else and this is my only study, we will need 10 years before people can consolidate the studies in a meta-analysis and say, oh, the evidence is not that big. But if I tell you: I analyzed 500, or 5,000, or 50,000 SNPs, and here it comes-- this SNP is a million times more probable than any other SNP to be associated with this phenotype-- that is a measure you cannot get from a hypothesis-driven test. So when you are reviewers of papers and grants and you are tempted to bash people because their studies are not hypothesis-driven, remember this. See you next--
ALBERTO RIVA: Alberto Riva, I'm an instructor at CHB. I'm going to talk to you today about the most important resources for finding and using biomedical information, especially information connected with the study of the human genome. So this is going to be something slightly different from what you've heard so far: I will concentrate more on actual locations-- most of them websites-- where you can find information, and talk about how this information is stored and represented, how it is accessed, and what it can be used for. You're going to see a long list of references to websites, with URLs. Don't worry if you can't remember all of them, because I'm going to distribute the slides and it will be easier to just look them up. I'm going to start with something you have probably heard about many times before: the so-called central dogma of molecular biology. As you know, almost all our cells contain DNA in their nucleus. DNA is the molecule that encodes information-- at least for the purposes of this presentation, that is what we're interested in. This information is transcribed into RNA molecules that exit the nucleus in the form of mRNA, and mRNA is then translated into proteins. And proteins are ultimately what is responsible for essentially all the external manifestations, all the observable properties, of our biology-- things like metabolism, general physiology, diversity between individuals, diseases, drug response. These are all, in some way or another, due to the different proteins that act within our cells and outside our cells. So we have names for the extremes of the spectrum. We call genotype the information encoded in the DNA, and we call phenotype whatever is at the other extreme-- anything we can observe and measure from the outside. What I'm going to try to show you is that, as you move from one end of the spectrum to the other, you encounter very different forms of information, of data. Each one has its own specific nature and function, needs to be treated with different tools, and needs to be represented in different ways. So essentially these are the questions I'm going to answer: how is all this information represented? What are the different ways we can store and describe it? Where does it come from, where is it stored, and how do we find, retrieve, and use it? We've talked about the two ends of the spectrum, and there are some very deep differences between the kind of information you find when you're talking about the genotype and when you're talking about phenotypes. For example, the genotype is digital, because each base pair in our DNA can be exactly represented using one of four symbols: A, T, G, C. You can also think about insertions, deletions, and so on, but essentially, using a small number of symbols, you can provide an exact representation of our genome, of all the 3 billion base pairs that compose our DNA. On the other hand, the phenotype is, say, analog, because most phenotypes are qualitative in nature. They cannot be measured exactly or precisely-- in most cases they cannot even be defined precisely-- and you always have to take into account the effect of environmental factors that, again, are very hard to describe in a quantitative way.
At the root of all this is one of the biggest problems in the study of proteins: the fact that proteins are not uniquely determined by their sequence. For DNA, you just look at the sequence and you know essentially all there is to know. For proteins, you cannot look at the sequence and understand, just by looking at it, what the protein is going to do-- not even what its three-dimensional structure is going to be, which is difficult enough. Understanding what the protein does just by looking at the sequence is still very far from being feasible. On the other hand, it's interesting to see that our knowledge of these things has progressed in the opposite direction, because obviously it's much easier to observe a phenotype than to observe DNA. So the first studies of inherited traits date back to Mendel in 1866, and DNA was discovered in more or less the same years. But at the time, nobody had any idea that there was any connection between these two things, between DNA and inherited traits. It took over 80 years for this connection to be proven: the definitive proof that genes are made of DNA dates back to 1952. After that, progress was faster, because the elucidation of the structure of DNA, and of the DNA replication mechanism, came one year later. Then the genetic code was deciphered between 1961 and 1966. Something we now take for granted, like the discovery of introns, only happened in 1977. And finally, the Human Genome Project, which was officially declared a success last year, brought us to the point where we now know, with a sufficient degree of certainty-- we're going to talk about this more later-- the exact base-pair composition of the human genome, and also of several other genomes, though of course the human one is the one we're most interested in. Going back again from genotype to phenotype, there is another thing to note. I've just said that we now have the complete sequence of our genome, but of course this is an approximation-- an abstraction, actually-- because even if we're all human beings, no two human beings are exactly the same. This is a consequence of the fact that there are differences between the DNA of any two human beings. These differences are due to polymorphisms: single nucleotide polymorphisms, locations at which, instead of the base that everybody else has, you have a different base; microsatellites; repeats; insertions; deletions; translocations. These are all things that can happen to your DNA sequence-- not enough, of course, to turn you into another animal, you're still a human being-- but your DNA sequence is slightly different from the sequence of any other human being. On average there is one of these polymorphisms every 1,000 bases, so with 3 billion base pairs it adds up to a very large number of differences-- on the order of a few million between any two individuals. Which means that when you study the human genome-- we now have the sequence of the human genome, but if you go and look at one individual, you're not going to find that his DNA matches exactly the sequence in the human genome databases. You're going to find approximately one difference every 1,000 bases.
And understanding what these differences do and mean, and what their consequences are, is one of the most interesting problems in current bioinformatics and molecular biology, because we finally have the tools to look at our genome at this level of detail. We can look at individual base pairs and say: well, there should be an A here, and instead we have a C-- does that cause a problem? Again, we're going to come back to this soon. And the same thing happens for phenotypes, although in a slightly different way. Phenotypes are generalizations too. When we talk about things like species-- again, we're all human beings, but we're all different from a genetic point of view, so putting all of us together into one big group, one species, is of course a generalization. And even going down to ethnicity, or to a concept like disease, these are generalizations, because they are concepts that cannot be defined in a precise, formal way. So we will see, as we go forward, that we're going to encounter very different forms of data, and we're going to need different methods to manipulate these data according to the purpose of our work. Just to make this clear: if we are working at the level of DNA, the typical operations we might be interested in are, for example, sequence matching-- understanding whether a certain stretch of sequence matches anything else that has been seen before. This is useful, for example, when you discover a new gene: you want to know, first of all, whether it really is a new gene, or whether it has already been seen somewhere else. And if it is a new gene, you would like to have an idea of what it does; if you find a similarity between your new gene and something already known, that can give you a lot of information. Then there is gene finding-- finding genes in a DNA sequence is not trivial. There are programs that do this: they look at the sequence and find locations that might contain genes. I'm not going to go into details, but there are various reasons why this is a pretty complex thing to do from a computational point of view. Homology searches, again, refer to looking for similarities between DNA sequences in different organisms. If you discover the function of a certain gene in the mouse, for example, you would like to know whether it does the same thing in humans; if you find a high degree of similarity between the two genes, you can hypothesize that they also have the same function. We've talked about polymorphisms, so another pretty common operation performed on DNA sequences is SNP detection. If you sequence the same stretch of DNA from a number of different individuals, you can compare them, and you will find that most of the positions are the same for all individuals, but some of them are different. This is how single nucleotide polymorphisms are discovered: they are locations where different individuals don't have the exact same nucleotide. And we'll talk later about why genotyping is important, what kind of information you can get from it, and how it relates to disease-- basically, trying to figure out the relationship between the genotype, in this case polymorphisms in your genotype, and a phenotype like disease.
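Here is a minimal sketch of that last operation, on made-up, already-aligned sequences: walk down the columns and report the positions where not everybody has the same base.

```python
# A minimal sketch of SNP detection: the same stretch of DNA from several
# individuals (hypothetical sequences, assumed already aligned and equal length);
# any column with more than one base is a candidate polymorphism.
sequences = [
    "ACGTTAGCCA",   # individual 1 (made up)
    "ACGTTAGCCA",   # individual 2
    "ACATTAGCCA",   # individual 3 -- differs at position 2
    "ACGTTAGCTA",   # individual 4 -- differs at position 8
]

for position in range(len(sequences[0])):
    bases = {seq[position] for seq in sequences}
    if len(bases) > 1:                       # more than one base seen: a candidate SNP
        print(f"position {position}: alleles {sorted(bases)}")
```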
At the level of RNA, it might be interesting to look at alternative splicing and transcript rearrangements-- all the things that happen to the original DNA sequence when it is transcribed into RNA. It undergoes a series of transformations that can affect the final product in a very deep way. And this process, the transcription of DNA sequences into RNA, is of course at the basis of expression analysis. You've heard a lot in the other lectures about microarrays, so I'm not going to go into too much detail, but differential expression analysis, clustering, and so on-- these are the usual things that can be done with gene expression microarrays. When we talk about proteins: if you're studying a protein, the interesting things to do are to predict its active domains, in order to get an idea of how the protein might behave, what function it might have, how it might interact with other proteins, with other genes, and so on. Predicting the three-dimensional structure of a protein is another very important and very complex task. Studying the homology and conservation of proteins across different organisms can give you a very good idea of the importance of some proteins: if something has been around for millions of years, it probably means it is involved in a very basic mechanism, while there are proteins that are new, specific to human beings, and that, again, tells you something. And finally, something very challenging, which has been receiving a lot of attention lately, is the automatic construction and analysis of metabolic and regulatory pathways. If you are able to understand how proteins interact with each other and with the rest of the cell, how they regulate other genes and in turn other proteins, then you can use this information to try to build, in a computational way, the kind of pathway maps that biologists have been drawing by hand for decades. Of course, we're still very far from being able to do this in the general case-- it works in some limited cases, and we're going to see some examples later-- but these are all very challenging problems that are still very much open. And finally we get to the phenotype. We could put a very long list of things here, but we can talk about population genetics and about association studies. Association studies try to correlate the presence of a certain genotype with an observed phenotype. In the most common case, association studies based on SNPs look at the two alleles of a SNP and try to figure out whether there is a statistical correlation between one of the two alleles and the disease, which might mean that the SNP is indeed responsible for the disease in some way. And then clinical trials, of course, to validate all this. OK, so two more slides about philosophy and then we'll start with the more practical stuff. I've already mentioned the word gene a lot of times, and I'm going to mention it again very often, so it might be interesting to ask ourselves: what is a gene? The answer may seem obvious, but it turns out there are actually many possible answers, according to the context you're in, according to the different view of the world that you're using.
So for example, if you ask a classical geneticist what a gene is, you will get the answer that a gene is the smallest unit of inheritance-- the definition that goes back to Mendel, essentially. If you ask someone doing medical research, you will get the answer that a gene is a disease-causing trait: you hear about the [INAUDIBLE] gene, or the gene for cystic fibrosis, and so on, so in this case the word gene has a very clear clinical connotation. If you ask a molecular biologist, you get the answer that a gene is a recipe-- essentially a program to build one or more proteins. We can go on: ask a biochemist, and you will get the answer that a gene is an element in a metabolic network, an active element in one of those big networks of interacting genes that regulate each other and that, overall, realize the metabolic process. If you ask a molecular geneticist, you will get the answer that a gene is a locus on a chromosome-- a certain region of a chromosome that has a functional characterization, a locus that was studied and found to have a specific function in our biology. And finally, if you ask a bioinformatician, you will get the answer that a gene is just a stretch of DNA where we know there is a gene, because the database tells us there is a gene there: it has a transcription start site, a coding sequence start site, exons and introns at certain positions. In what follows, we are going to see examples of all of these different ways to look at a gene. And we're going to start from the beginning, from DNA sequence data, looking at where all these different pieces of information can be found and how they're stored and represented. If we're talking about DNA sequence data, the first place to go is, of course, GenBank. GenBank is the largest repository of sequence data. It accepts direct submissions from researchers, so anybody in the world who sequences a new piece of DNA can send it to GenBank, and it's put in the big cauldron. The most recent figures I could find, from about a year ago, were more than 22 million sequences from over 100,000 distinct organisms, with a total of almost 30 billion nucleotides. And this is the URL for GenBank. GenBank is at the basis of the NCBI cluster: the National Center for Biotechnology Information is a branch of the NIH that has the task of assembling the largest possible number of databases of biomedical information. They manage GenBank, and GenBank, in turn, is at the basis of a lot of other resources that we're going to see now, all part of this cluster of NCBI resources. They're all interconnected, so you can easily jump from one to the other, and that is a very powerful way of exploring this kind of data. This graph shows the growth of GenBank in recent years. You can see the very steep growth in the number of base pairs, and the almost equally steep growth in the number of sequences; you can also tell that we're sequencing longer and longer sequences, because the curve for base pairs grows more rapidly than the one for sequences. But anyway, what do you do when you have all these accumulated sequences? In GenBank you just have sequences by themselves-- they can be very short or very long, but they're just independent sequences that were deposited by investigators.
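GenBank and the other NCBI databases can also be queried programmatically through the Entrez E-utilities. Here is a minimal sketch using Biopython, assuming it is installed and that you have network access to NCBI; the accession NM_000518 (the human beta-globin mRNA RefSeq) is just an illustrative choice, and the email address is a placeholder you would replace with your own.

```python
# A minimal sketch of fetching one GenBank/RefSeq record through NCBI Entrez.
from Bio import Entrez

Entrez.email = "your.name@example.org"    # NCBI asks for a contact address

handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                       rettype="fasta", retmode="text")
fasta_record = handle.read()
handle.close()

print(fasta_record[:200])                 # the FASTA header plus the start of the sequence
```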
So the thing you can do, if you have enough sequences from the same organism, is you can try assembling them, putting them together, and trying to reconstruct the entire genome. And this is what was done to assemble the human genome, for example, and all the other genomes that are being sequenced. You start with-- you look at the sequences you have, and if you can find overlaps, then you know that these two sequences are related in some way, and you proceed from there. So let's say you've sequenced this sequence, then another one; they are distinct but they have an overlap, so at first you don't know where they are on the genome. But then, if you sequence a third one that overlaps the first one and includes the second one, then you know that you can basically arrange them in this way. And then let's say you sequence another one, and again you find an overlap with one that you already have. So like this, and in the end you're going to build a map that tells you where all these fragments should be positioned on the chromosome. And you have different levels of coverage, so you have regions that you've seen only once, regions you've seen twice, regions you've seen three times. If you have a sufficient degree of coverage, then you can say, well, OK, I believe this is the right sequence. And you proceed from there, you take that for granted, you proceed from there by connecting more and more overlapping pieces. And again, this is how the human genome was sequenced, essentially-- there is a small sketch of this overlap-and-merge idea below. Up to a level of coverage-- I think it's five or six times covered, so each stretch of DNA in the human genome has been sequenced at least five or six times for validation. And the details of how this process has been implemented at NCBI are here. And in addition to the human genome, of course, we have a lot of other genomes that are completed, or near completion. These numbers are probably higher by now-- we have over 1,000 viruses. And then many other organisms from different domains of life. Of course, eukaryotes are the hardest organisms to sequence. But the human genome is considered finished by now. It's hard to go above this level of accuracy. It's probably not even necessary, because the differences between two human beings are probably of the same order of magnitude. But now we have several other organisms, including the chimp, which was recently released. And it's going to be very interesting because it's essentially identical to the human genome. So the differences are about 1% between the genomes of the human and the chimp. And it will be very interesting to see exactly where these differences are, and why these differences are responsible for the fact that we are humans and they are monkeys. Again, this is a link to the entry point for the description of all the genomes that are stored at NCBI. And new genomes-- the small ones, the viruses and bacteria-- appear essentially every week. So these numbers constantly change. Now we have the complete sequence of the human genome. Where do you find it? So the best resource, in my opinion, for looking at the human genome literally is Golden Path. Golden Path is a genome browser for several different organisms. Initially it was only for human-- now it has mouse, rat, chimp, Drosophila, yeast, and a few others. The nice thing about Golden Path is that it gives a graphical view, as you can see in the next slide; it's very clear, it's very easy to find all the information you need about a certain region of the chromosome.
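To make the overlap-and-merge idea above concrete, here is a minimal Python sketch of greedy assembly of short fragments. The fragments and the minimum-overlap cutoff are invented for illustration; real assemblers also have to cope with sequencing errors, repeats, and coverage, which this toy version ignores.

def overlap(a, b, min_len=3):
    # length of the longest suffix of a that matches a prefix of b (at least min_len), else 0
    start = 0
    while True:
        start = a.find(b[:min_len], start)      # candidate position for the overlap
        if start == -1:
            return 0
        if b.startswith(a[start:]):             # suffix of a equals prefix of b
            return len(a) - start
        start += 1

def greedy_assemble(fragments, min_len=3):
    # repeatedly merge the pair of fragments with the largest suffix/prefix overlap
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, None, None)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:                            # nothing overlaps any more; stop
            break
        merged = frags[i] + frags[j][olen:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags

# three toy reads taken from a made-up stretch of sequence, ACGGTCTAGAATTC
print(greedy_assemble(["ACGGTCTA", "TCTAGAAT", "GAATTC"]))   # -> ['ACGGTCTAGAATTC']

Run on the three toy reads, the two four-base overlaps are found and the original fourteen-base stretch is reconstructed; with real data, the level of coverage mentioned above is what tells you how much to trust each reconstructed position.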
On the other hand, all the information it provides is available in easy-to-download, easy-to-parse formats. So if you want to build your own database that contains the same information, that's something that is pretty easy. It provides arbitrary DNA sequences-- so you can ask for any region of any human chromosome, and you'll get back the exact DNA sequence for that region. This is something that might seem easy by now, but two or three years ago, it was still something that was almost impossible to obtain. And Golden Path was the first site to provide something like this. It gives you the absolute position of all the known elements of our genome. So genes, markers, mutations, other features-- it tells you exactly where they are, in terms of the absolute base pair position. This is the URL for Golden Path. And this is what it looks like-- this is an example, we're looking at the region that contains the TLR1 gene. And you can see all these different tracks that provide different information on different objects. So for example, up here we have the genes, we have different sets of known genes. We have predicted genes, according to different prediction algorithms. We have the mRNAs that were aligned to this gene. We have a track like this one that shows the conservation between human and mouse in this region. So it's very interesting to see that obviously the coding part of the gene is the one that is most highly conserved between human and mouse-- that is the functional part. So it undergoes selective pressure. There are tracks that tell you the location of SNPs, and so on. There are many others that of course don't fit in here, but you can customize the display. You can select the tracks you want to see, and you get your own view of a certain genomic region. You have the coordinates up here, chromosome 4, the band, and so on. And this is just to show that you can query it for any region-- this is the same DNA region we were looking at before. But in this case, we asked for the DNA sequence, and we get it. There's another way of looking at the genome, using the NCBI Map Viewer. It's essentially the same thing-- it's a graphical browser to look at genomes and annotations of genomes. It's organized around several maps-- they have sequence maps, cytogenetic maps, linkage maps, radiation hybrid maps, human-mouse homology maps. So each one of these is a view that gives you a different set of objects. In the sequence map, you can find information about the genes, transcripts, gene clusters, and so on. Inside the cytogenetic map, for example, you find information about disease genes, bands, breakpoints. It's extremely detailed, because of course it can rely on the whole set of NCBI databases, so basically everything you could possibly want to know is in there. I, personally, find it a bit complex to use, a bit harder to use than Golden Path. But it's a matter of taste. It's organized in a different way. It shows you much more detailed information here in these graphical bars, on the features of the genomic region. But then the coded information is a bit harder to read. They use all these abbreviations here. So it's a matter of taste. They essentially serve the same purpose, with different levels of detail in different areas. Now, we've talked about SNPs. They're the only form of mutation we're going to talk about, but they're also the most important one. Because first of all, SNPs are the most common form of variation in our genome.
They're much more frequent than microsatellites or insertions, deletions, and other things. And they're important because, for example, they can be used as genomic markers. So SNPs are at a fixed location in the genome. And if you know where the SNP is, you can find the same location in different individuals. So you can use them as markers. You can use them as causal candidates for diseases, because a certain percentage of the SNPs introduce changes that then have some consequence on the phenotype. What I mean is that, for example, if you have a SNP in the coding sequence of a protein, you're going to get a protein that has an abnormal sequence. And that can be a change that doesn't cause any consequence, or it might be a very dramatic change. The most extreme example is that there are some SNPs that introduce a stop into the protein sequence. So the protein sequence is truncated-- instead of just being modified, it's truncated. It's shorter than it should be. As you can imagine, this is a change that can be very dangerous. There are many diseases that are due to the fact that you have SNPs that truncate proteins. They can be used as evolutionary markers, because SNPs arise randomly during replication, and then they are transmitted from one generation to the next. And it's very interesting to study how the frequency of a SNP changes in a population. Most SNPs are deleterious, but in some cases a SNP can also provide an advantage, if it generates something that was not present before, and that works better than the original. So if you have a SNP that introduces a change that is beneficial, then-- given enough time-- you will see that the frequency of the SNP increases in the population; more and more individuals are going to have the variant form of the SNP. On the other hand, if a SNP is neutral, then there is no selective pressure, and it will either go away by chance, or will stay at a certain basic level of frequency. So you can study the frequency of the SNP to understand if it's undergoing selective pressure, so to know if it's deleterious or not, or you can use it to reconstruct basically the history of our genome. There are ways of calculating the age of a SNP, so when that mutation arose in the history of our genome. Now, the largest database of SNPs that we have, again, is at NCBI. It's called dbSNP-- it currently contains over 4 million human SNPs-- actually, I think that by now this number is closer to 5 million SNPs. And almost 50% of the SNPs are validated, which is something very important. It means that the SNP has been observed independently multiple times, so you know it's a true SNP. Many times, since sequencing is not an exact process, if you just look at a set of sequencing traces, you could think that there is a SNP when it's actually just a sequencing error. If the SNP was validated, it means it was observed several times by independent investigators, and that gives you almost total certainty that it's a true SNP. There are other databases of SNPs. Another very important one is the SNP Consortium database at Cold Spring Harbor. The important thing about TSC is that, first of all, all the TSC SNPs are validated. So they basically take SNPs from dbSNP, then they check them again, to make sure that they're really SNPs. And while doing that, they also look at the frequency of the SNPs.
So what do I mean? A SNP is a polymorphism that substitutes the nucleotide you should have at one location with a different one. So if you look at a population of individuals, you're going to see that the major allele of the SNP, the common one, has a certain frequency-- it appears, for example, in 80% of individuals. And the alternative allele appears in 20% of the population. Now, knowing this frequency is very important, because it allows you, then, to do association studies. For example, to look for a correlation between a disease and this polymorphism. Because if you then observe a second population that is affected by a disease, and you find that in that second population the alternative allele occurs with a frequency of 40% instead of 20%, then that might be an indication that the SNP has something to do with the disease. But in order to be able to do this, you have to know what the baseline frequency is. What is the original frequency in normal, so to speak, human beings. Yes? AUDIENCE: I guess the question then becomes, what is the base population-- across which base population-- ALBERTO RIVA: That's what I was going to say next. Of course, the biggest problem here is that different populations may have different frequencies of SNPs. And this is one of the reasons why SNPs are used for population genetics, because especially in the past, populations were much more closed than they are now. If a SNP arises in a population, then it tends to be limited to that population. You're not going to find it in a different population, unless there is some genetic interchange between the two. So when you look at the frequency of a SNP, it's very important to specify what population you're looking at. We're going to have an example in two slides, so let's just get back to this in a second. Because I wanted to tell you about other SNP resources quickly. The Haplotype Map project-- this is kind of a new project that is aimed at developing a haplotype map of the human genome. I don't know if you've had a lecture about haplotypes yet-- OK, so you're going to have it later than this one. But when you hear about haplotypes, just remember that HapMap is a project that is aiming at building a complete haplotype map of the human genome. I don't have time to go into that now, but it's a very important resource that is really the next step after what TSC is doing: after determining the frequency of SNPs in different populations, the HapMap project allows you to understand exactly what this can tell you about the evolution of our genome. But this will become clearer in the lecture about haplotypes. HGBASE is another database of SNPs. It's manually curated, so it's very limited, but you find information that has very high quality. It's all manually verified, and it focuses on the potential consequences of SNPs. So you're going to find a lot of information about known associations between SNPs and diseases. ALFRED, at Yale, is another very small database, but it has very high quality, and it focuses on frequency data. And what they do is very, very interesting. They go and look at many, many different populations, and especially at small, isolated populations far from where we are-- places like small islands in the Pacific, or remote villages in Siberia, and so on. So they actually try to look for isolated populations to maximize the differences in SNP frequency that they're going to find, in order to have a picture of human diversity as complete as possible.
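To make the frequency argument concrete, here is a minimal sketch, with invented genotype counts, of estimating allele frequencies in a baseline group and a disease group and then running the simple allele-count comparison described above. SciPy is assumed to be available for the significance test, the numbers are made up, and a real study would also have to deal with the population-stratification issue just raised.

from scipy.stats import fisher_exact    # SciPy assumed available for the 2x2 test

def allele_counts(genotypes):
    # count alleles from genotypes written as two-letter strings such as 'AG'
    counts = {}
    for g in genotypes:
        for allele in g:
            counts[allele] = counts.get(allele, 0) + 1
    return counts

# invented genotype samples for one SNP with alleles A and G
controls = ["AA"] * 30 + ["AG"] * 9 + ["GG"] * 2     # baseline population
cases    = ["AA"] * 15 + ["AG"] * 18 + ["GG"] * 8    # population with the disease

ctrl, case = allele_counts(controls), allele_counts(cases)
for label, c in [("controls", ctrl), ("cases", case)]:
    total = sum(c.values())
    print(label, {a: round(n / total, 2) for a, n in c.items()})

# 2x2 table of allele counts (rows = group, columns = allele) and a Fisher exact test
table = [[ctrl.get("A", 0), ctrl.get("G", 0)],
         [case.get("A", 0), case.get("G", 0)]]
odds_ratio, p_value = fisher_exact(table)
print("odds ratio %.2f, p = %.4f" % (odds_ratio, p_value))

If the two groups had been drawn from populations with different baseline frequencies, the same test would happily report a "significant" difference that has nothing to do with the disease, which is exactly the point of the population example coming up next.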
And finally, SNPper, which I'm citing because we developed it. This is a resource that tries to integrate information from all the places that I cited so far. So it takes information mainly from dbSNP, from Golden Path, from TSC, from ALFRED, from HGBASE, and it tries to put everything together in a unified view that allows you to look at a gene, find all the SNPs around that gene, see all the features of those SNPs-- whether they are in the coding sequence, or the promoter sequence, or wherever-- and then look at everything that is known about individual SNPs. And it provides a way of exporting this data in different formats to make it easier to process later. And I just want to show you one slide from SNPper. This is a window that tells you information about a particular SNP-- this is its [? SNP ?] identifier. And so you can see there is a top part where you have general information: where the SNP is-- the position on chromosome 6-- what the alleles are, the gene it belongs to, NOTCH4. And here, it tells you that this SNP is in the coding sequence of the gene, and it actually causes an amino acid change at position 319. It affects protein domains-- this is the list of protein domains that are affected by the SNP. This is the list of the investigators who have observed this SNP-- and it's a long list, so it means that this is definitely a true SNP. And finally, down here, I wanted to show you, this data comes from TSC, and it's frequency information. So they sampled 41 individuals from a population of African-Americans, and they found that these are the frequencies for the two alleles, 72% A, 20% G. And then they looked at a different population-- these are Caucasians, I think-- and they found very different allele frequencies. So different that what was the minor allele in the first case is now the major allele. So this is a very clear demonstration of why it's important to know what population we're talking about when we study the frequency of a SNP. Because if you believe these numbers, and then you try to run association studies on this SNP in a different population, you're going to find totally different numbers. And this doesn't have anything to do with disease-- you're just going to get results that are misleading, because you are not looking at the same population, and the baseline frequency of this SNP in the two populations is very different. So this just shows the advantage of having an integrated view that brings together information from different sources, and allows you to get a clear picture of what the SNP does and everything that is known about it. AUDIENCE: [INAUDIBLE] ALBERTO RIVA: Excuse me? This one? Oh, well, it's just telling you that proteins have-- the sequence of a protein contains portions that are active domains; they are the portions of the protein that then physically do something. For example, this domain here is the extracellular domain-- the domain that goes outside the cell. This is a calcium binding domain. So these are structures in the protein sequence that are known to have some function; they are important because they do something. And if you have a SNP that affects one of them, that SNP, in turn, might change the way the protein works-- it can change the function of the protein. AUDIENCE: [INAUDIBLE] ALBERTO RIVA: Well, this is not meant to be an accurate prediction of what the SNP does. And we get so many because all these domains are overlapping.
And this information comes from SwissProt, a database of protein information. And so you see, for example, this first domain covers almost all of the protein. AUDIENCE: [INAUDIBLE] ALBERTO RIVA: Excuse me? AUDIENCE: So six would be the maximum number? ALBERTO RIVA: No, no, it's just that these domains can be overlapping, because the SwissProt curators annotate the protein sequence saying, OK, from here to here, we know that this happens. There are some domains that cover the entire protein, or half of the protein, just because, for example, in this case, the extracellular domain means that this portion of the protein is extracellular. And then inside that domain, you can have other subdomains, like all of these, that have other characteristics. So I'm just reporting here a list of all the domains that contain that location, but they can be overlapping. So it doesn't necessarily mean that the SNP affects all of them in some meaningful way. This one is probably the only one that could be affected by the presence of a SNP. Because it's a binding domain, it might be that it doesn't work anymore as a binding domain. So don't get confused by this display. It's just a list of SwissProt domains that include that location. Now, for the next step, I'm going to talk about genes again. And the starting point, when we talk about genes, is LocusLink. LocusLink is a curated directory of genes from 13 organisms. The word curated here is very important. So genes are discovered either experimentally, or by programs like GenScan that look at the DNA sequence and tell you where a gene might be. Then the gene has to be studied in order to know what it does, what all its relationships to other genes and biological processes are. So LocusLink is basically a repository of information about genes, and it collects everything that is known about the genes. So they say their central function is to establish an accurate connection between the defining sequence for a locus and other descriptors. It basically means: you have a stretch of DNA, you know that there is a gene, let's collect everything that is known about that gene. So it gives you information about the sequence itself, about the functions of the gene, links to other databases about the gene, different names for the gene, phenotypes that are known to be associated with that gene, homologies to other genes in the same organism or in different organisms, the location of this gene in several different maps. This is all information that you can find in LocusLink. And the most important thing-- at least from our point of view-- is that LocusLink provides a nomenclature for genes. Now, LocusLink assigns a name to each gene, and if you stick to that name, then you're sure that everybody knows what you're talking about. Because this, again, might seem a trivial problem, but for historical reasons, in many, many cases, genes have lots of different names. Even if it's the same gene, people have been calling it by different names, and it's a mess when you try to figure out which gene is which. If you stick to the LocusLink nomenclature, then at least you have one way of naming genes, and that's it. So it gives a name, it gives a number, and you can use these as identifiers to look up your gene in other databases, if they use the same nomenclature. And of course, again, it's part of the NCBI cluster, and all NCBI resources use this way of naming genes.
Then, unfortunately, there are other resources that we'll mention later that use a different way of naming genes, and this makes things very difficult when you're trying to build programs to integrate information from different places, because it's very, very hard to know exactly how to reconcile different ways of naming genes. Again, it might seem a trivial problem, but it's not. And it's also complicated by the fact that, as I was saying before, genes may appear in several different forms-- there are variants of the same gene, there are genes that are very similar to each other. So sometimes they are considered to be the same gene, sometimes they're not. And all these things make naming genes a complex and not a deterministic task. So UniGene is another resource at NCBI that takes a slightly different approach. It's an attempt at collecting all the GenBank sequences that refer to a region of the genome where a gene is known to be. So essentially, if we know that a certain region of our chromosome contains a gene, then we can go into GenBank and look at all the sequences that fall into that region. So all the sequences ultimately come from that gene or part of that gene. And UniGene puts them all together in one cluster. And then it tries to provide a description of the features of all these sequences. So they're all similar, they all come from the same location, from the same region of the genome. But they might represent multiple forms of the same gene, so they're probably not identical to each other. They might come from different tissues, so they might have different properties, and so on. And again, this is the URL for UniGene. It includes information for 38 organisms. And I think that one year ago, this number was something like 14. So it's growing very fast. And the interesting thing is, this is an automated process. So LocusLink is a curated directory, which means that there are people who spend their days going through gene records and adding information, checking it, correcting it. UniGene is an automated system, so it's actually an automated procedure that looks at all the GenBank sequences and tries to build these clusters based on the location of the sequences. I've mentioned the fact that it's interesting to study homologies between genes in different organisms. So HomoloGene is a database of orthologs. What it does is, they take all the sequences in GenBank, and they compare each sequence with all the other sequences in GenBank, at least in a set of organisms. And if they find a good match between two sequences, then this pair is added to the HomoloGene database. So right now it encompasses 25 organisms. And in these 25 organisms, they have 470,000 ortholog pairs-- so pairs of genes from different organisms that are highly similar to each other. All these are put into the database. And then if you find that there are three organisms that share a similarity relationship, then this, in turn, is marked, because it means that you're finding a match that has an even higher quality. So if you find that organism A shares a gene with organism B, and B shares it with C, and then you find that C shares it with A, then you've built what they call a triplet-- there is a small sketch of this triplet idea below. And that's a confirmation that this gene might really be the same gene, conserved across all these organisms. This one is partly curated, partly calculated.
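Before going on with how HomoloGene computes and scores these matches, here is a toy sketch of the triplet idea: given pairwise ortholog hits, a triplet is three genes, in three organisms, that all point at each other. The pairs below are invented; the real build starts from sequence-similarity searches.

from itertools import combinations

# invented pairwise ortholog hits; each hit links two (organism, gene) records
pairs = {
    frozenset([("human", "TP53"), ("mouse", "Trp53")]),
    frozenset([("mouse", "Trp53"), ("rat", "Tp53")]),
    frozenset([("rat", "Tp53"), ("human", "TP53")]),
    frozenset([("human", "MYC"), ("mouse", "Myc")]),
}

genes = set()
for p in pairs:
    genes.update(p)

# a triplet is a triangle: three genes with all three pairwise links present
triplets = [trio for trio in combinations(sorted(genes), 3)
            if all(frozenset(edge) in pairs for edge in combinations(trio, 2))]

print(triplets)   # -> [(('human', 'TP53'), ('mouse', 'Trp53'), ('rat', 'Tp53'))]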
So they have an automated procedure that looks at sequence similarity, using one of the many algorithms available to do that, and they give you the similarity score. And then they have a subset-- this is not mentioned here-- but most of these entries in HomoloGene are also manually curated, to make sure that they're really similar genes. AUDIENCE: [INAUDIBLE] ALBERTO RIVA: I think it's part of the [? BLAST ?] score. Yeah, they have a threshold of something-- I don't remember. But they give you the score in addition to all the other information. OK, Ensembl-- this is not part of NCBI. This is something that comes from Europe, from the EMBL's European Bioinformatics Institute and the Sanger Institute. It's something that is pretty similar to LocusLink in scope. Again, it's a software system for the automated annotation of genomes, which basically means it's a system that discovers genes and tries to find as much information as possible about these genes. And then all the information is available through a search interface. It's limited to 10 organisms, but it provides a lot of information about the genes in these organisms. So it provides information about genes, about proteins, diseases, SNPs, cross-species analysis, microarray data. So it's essentially a combination of LocusLink, dbSNP, HomoloGene, and a few other things. It has a very powerful data access interface. It's actually very, very nice, very easy to use. So you can do queries on this huge database in a relatively simple way. One of the biggest problems with this system, at least from our point of view, is that it uses its own way of naming genes. This is essentially what I was referring to before when I was saying that not everybody uses the LocusLink way of naming genes. They have their own alternative scheme for naming genes. And going from one to the other is sometimes tricky. There are links between the two databases, but they don't necessarily match very well. What else? OK, and finally, a few words about gene regulation. So gene regulation-- of course, it's almost needless to say, it's an extremely complex mechanism. Our understanding of how gene regulation works is still very limited. When you hear about microarrays, about the concept of gene expression as measured by microarrays, gene expression is the most visible consequence of this whole complex mechanism. So what you see is that under certain conditions, a certain set of genes is highly expressed, or underexpressed, and so on. But this is a consequence of the fact that there is a very complex machinery behind it that determines which genes are active or not, and how much, in different conditions. And this is actually a system that integrates a lot of different factors, which might include the following, in no particular order. The tissue: we know very well that the set of genes that are expressed in one tissue is very different from the set of genes that are expressed in another tissue. The developmental stage: genes that are expressed during the development of the embryo, for example, are not the same as those expressed in an adult organism. The time: time can mean either the time of day, as in the case of the circadian rhythm-- there are genes that are expressed in the morning and not in the evening-- or time on a larger scale; there are processes that take years to complete, like puberty, for example.
So this regulation mechanism is able to work at very different temporal resolutions. External signals, of course-- the response to external stimuli. And it also depends on the expression state of any number of other genes, because genes regulate each other through feedback loops and so on. So again, it's a very complex system. We're slowly working to try to understand how it works. What we have for now is some understanding of what transcription factors do. Transcription factors are proteins that bind to the upstream regions of genes, and are able to control their expression, their activity. In the most common case, they bind to the promoter region of a gene, and they combine with each other, forming complexes. And these complexes then activate the transcription machinery, which then starts everything else. And in the end, the gene is expressed-- it gets transcribed and translated, and the protein is produced. And the transcription factors, as I was saying, don't act alone. They have to interact with the target gene, but they also interact with each other in a combinatorial fashion. What this means is that looking at the individual transcription factor is usually not sufficient to understand what it's going to do. Because the same transcription factor, in different combinations with other transcription factors, might have different roles. So what we need to look at is the pattern of transcription factors that binds to a certain gene. And that, in turn, will determine the spatially and temporally dependent expression of the target gene. And again, we are still taking the very first steps in the process of trying to understand how these patterns are actually structured, how they work. So the first thing you need to do is to be able to reliably identify which transcription factors bind to a given gene, and where, exactly, in the promoter region of the gene they bind. Transcription factors bind to locations that are called transcription factor binding sites. They're small stretches of DNA that are recognized by the factor. And so if you know where the binding sites are, you have a first idea of what factors bind to this gene, and how they may be arranged spatially. So if you know that two factors have to interact with each other, probably their binding sites will have to be close to each other. Or, at least, let's say that if you find two binding sites that are close to each other, there is a very high chance that the two factors will interact. And it might be that when they interact, they act in a certain way. Whereas when they don't interact, because they're far apart, they act in a different way. And so knowing the map of binding sites in the promoter of a gene is something that can give you some initial information that you can build on. It's still something that's very hard to do computationally. The tools that people have been using to do this are usually based on pattern matching. So the binding site, as I said, is a small stretch of DNA; it usually goes from five to about 20 or 25 base pairs. So they're really short. And they're characterized by consensus sequences-- in general, they are not very conserved, not very precise. So it's essentially impossible to look at a piece of DNA and say, well, OK, I'm sure that this location here is a binding site.
So you can try using deterministic methods, just looking for instances of the motifs. Sorry, I thought I had something on this. But you can look for instances of the motifs using either deterministic methods or probabilistic methods, pattern matching-- there are lots of things that you can try. And in almost all cases, people rely on TRANSFAC-- TRANSFAC is the largest available database about transcription factors. It's a database that provides information on the factors themselves. It provides examples of their binding sites. And it provides descriptions of their interactions with genes. And the important thing is that most of the information in TRANSFAC is experimentally validated. So for example, the binding sites are binding sites that have been observed experimentally. So you can actually trust the fact that the particular piece of sequence they give you is the binding site for the transcription factor in question. And so in the end, without going into too much detail, what you can do is take these binding sites, use them to train your favorite pattern matching method, and then try scanning new sequences looking for binding sites-- a toy sketch of this kind of scan appears below. And this is one of the things that we're currently working on at CHB-- there are various ways of doing this. And again, it's a rather difficult problem from a computational point of view, because these patterns that you have to look for are not very specific. They are not very clear. On the other hand, doing it experimentally is very slow, very expensive. So you can only do that for a small number of genes, and a small number of factors. If you have a computational method to detect binding sites that works well, then you can think about doing this on a large scale-- looking, for example, for all the binding sites for a certain factor in all the human genes. And that will give you a very interesting picture of everything that might be regulated by that factor. So we're not there yet. This is one of the things we are working on in our lab. And it's going to take a lot of work, but the rewards are potentially very interesting, because this is something that will then allow you, if it works, to automatically build the networks that describe how genes regulate each other. And that is something that, of course, has a lot of potential interest. OK, we've talked about gene expression, we've talked about microarrays. You might have already heard about these things, but I was just going to list the main sources of available microarray data, public microarray data. So for example, again at NCBI, GEO, the Gene Expression Omnibus, is a database of gene expression and hybridization array data. It offers 12,000 experiments-- essentially, 12,000 hybridization experiments on over 500 platforms. So if you're interested in doing some form of data analysis on microarray data, and you don't have the time or the money to do your own microarrays, you can go to GEO, and you have 12,000 of them to choose from. And they also offer a very powerful interface to search it-- since microarray data sets are very large, they include thousands of measurements. They provide a very useful search interface that allows you to select the data sets you're interested in, and to extract data from these data sets and look at, for example, the behavior of the same gene in different experiments, or different genes in the same experiment.
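Here is the toy scan referred to above: a minimal position weight matrix built from a handful of invented "validated" binding sites standing in for what one would actually pull out of TRANSFAC. The uniform background, the pseudocount, and the score cutoff are all arbitrary choices made just to keep the sketch short.

import math

# invented examples of experimentally observed sites for one factor (all the same length)
known_sites = ["TGACTCA", "TGAGTCA", "TGACTCT", "TGAGTCC"]
BASES = "ACGT"
length = len(known_sites[0])

# position weight matrix: log-odds of each base at each position versus a uniform
# background of 0.25, with a small pseudocount so unseen bases are not minus infinity
pwm = []
for i in range(length):
    column = [s[i] for s in known_sites]
    pwm.append({b: math.log(((column.count(b) + 0.25) / (len(known_sites) + 1)) / 0.25)
                for b in BASES})

def score(window):
    return sum(pwm[i][b] for i, b in enumerate(window))

def scan(sequence, cutoff=4.0):
    # slide the matrix along the sequence, report windows scoring above the cutoff
    hits = []
    for pos in range(len(sequence) - length + 1):
        s = score(sequence[pos:pos + length])
        if s >= cutoff:
            hits.append((pos, sequence[pos:pos + length], round(s, 2)))
    return hits

promoter = "CCGTTTGACTCATTGGAATGAGTCAGG"     # made-up promoter fragment
print(scan(promoter))                        # reports the two windows that match the motif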
And there are lots of different queries that are common when you work with microarray data. The Stanford Microarray Database, again, is a repository of a large number of microarray experiments performed at Stanford, and a portion of these are public. NCI60, again from Stanford, is a famous data set that includes gene expression profiles for 60 human cancer cell lines, and information on drug activity correlated with gene expression patterns. So they measure how the gene expression patterns change when these cell lines are subjected to different drugs. Other resources for gene expression are found in the different PGA projects. PGAs are Programs for Genomic Applications; they are large projects managed by the NIH. So the [? tracks ?] PGA, for example, offers 565 microarrays from mouse and rat models of sleep, infection, hypertension, pulmonary disease. The Hopkins PGA, again, more than 500 microarrays from several human diseases. CardioGenomics provides microarray data on mouse models of cardiac development and signal transduction. And finally, the Human Gene Expression Index-- these are just some of the most important and most useful public resources of microarray data. OK, I'm going to go through this final part very quickly because I'm almost out of time. And if you'd rather stop me with questions, or if there's anything you would like to discuss about what I said so far, we could stop here, or I could just run through this last portion quickly. So this last part is about the last step in the process, from proteins to phenotypes. I was going to talk about protein databases. The situation with protein databases is a bit different from what we've seen so far. The protein world is much more complex than the DNA and RNA world, for the reasons that I explained at the beginning. Some of the reasons are that proteins interact with each other in very complex ways. They combine in three dimensions, they catalyze chemical reactions. They have a behavior that is much harder to describe in [INAUDIBLE] terms than everything else we've seen so far. So what protein databases give you is usually information about the sequence of a protein, and that's the easy part. The known or computed three dimensional structure, the known or inferred functional domains. And ideally, also, the function of the protein-- what the protein does in different conditions. But again, we're getting to the area where things start becoming hard to formalize and to represent in a computational system. So as a consequence, protein databases, first of all, tend to be older, because they were started earlier than genomic databases. They are less integrated, they are less complete. Nomenclature is much less standardized. So it's harder to work with protein databases than with all the other resources we've seen so far. The biggest database is SwissProt: 120,000 sequence entries, 9,000 of them human proteins, which is a pretty small number if you think that we already have complete information in Golden Path about 20,000 genes, and each gene is known, on average, to code for probably two or more proteins. So these are the proteins for which we know something, and they're a very, very small number compared to the total number of proteins that are thought to be in our cells. It's composed of a core set of data elements: the sequence, the references, taxonomic data for the protein.
And then annotations about the functions of this protein, domains and sites, the structure, similarities, associations with diseases, variant forms of the protein. And again, it's hard to link this database with LocusLink or UniGene, because it has its own identifiers for proteins. But we don't need to go into these problems now. This is a graph that shows you the growth of SwissProt in recent years. And as you can see, it's growing, but at a much smaller rate than GenBank or other resources like that. We have databases about the three dimensional structure of proteins, like PDB, with different visualization options. MMDB is essentially the same thing, but implemented at NCBI. Pfam, at the Sanger Institute, is a database of protein domains and protein families. They look for domains in the proteins, and then they look for similarities between proteins on the basis of the domains that were identified. They use similarity measures, they use hidden Markov models. Again, they have a curated portion, with a small number of protein families where the annotation has very high quality. And then there's a second portion of Pfam that has smaller families of lower quality. This is an example of a Pfam display of a protein, with all the different domains that were found in the protein, with the details here. So it gives you this nice graphical display. I'm going to skip protein interaction databases, because I want to get to the end. We're getting to the phenotype, to the end of the spectrum, finally. And there are just a couple of resources that have to be cited, because they're extremely important. One of them is OMIM-- OMIM is a catalog of human genes and genetic disorders. Again, hosted by NCBI. It's basically a collection of text articles that talk either about a gene or about a disorder, and they're linked with each other. So if you're looking at the entry for a gene, you can find a description, clinical features, the function, mapping, and then you can find all known correlations between that gene and diseases, allelic variants-- so all known polymorphisms of that gene, with the corresponding clinical outcome if there is any. And then you can also go the other way around. It has 14,000 entries. Again, these numbers are probably larger by now, because this graph that shows you its growth is not very up to date, but you can imagine that it's been growing at least at this speed or faster since '98. And finally, PubMed. You probably all know what PubMed is: a database of citations from the biomedical literature. It contains 12 million entries starting from the mid-'60s, and it provides references, abstracts, links to online resources, full text articles in some cases, supplementary materials, and it's one of the most used resources in this field. They claim they receive 30 million searches per month. OK, one last thing-- gene ontology. Gene ontology is something that sits at a slightly higher level, above everything that we've seen so far. The idea of gene ontology is to build a dynamic controlled vocabulary that can be used to describe biological concepts. If you look at something like OMIM or PubMed, you're going to find a textual description, for example, of a disease, that references concepts that need to be precisely defined, so that we all know we're talking about the same thing when we use the same word. And the purpose of gene ontology is to try to do this in at least three domains-- molecular function, biological process, cellular component.
So it's organized in three taxonomies, and each taxonomy contains concepts and sub-concepts and so on, that try to describe everything that is known about molecular functions, biological processes, and cellular components, using a standardized nomenclature. So that when you want to refer, for example, to a certain component of a cell, instead of just saying its name, you can cite the gene ontology term that describes that component, and everybody else will be able to go to gene ontology and see the exact definition of the word you're using. It's a work in progress-- still very far from being complete. It has all the usual problems that occur when you're trying to build taxonomies: it's very hard to formalize things that come from natural language. So finding exact definitions of all the terms that people use, especially in this field, is very hard. But this is where they are now, and it's a work in progress, so it will keep growing in the future. And this is a view of the taxonomy, for example, for biological process. If you are talking about cell communication, then response to external stimulus is a subclass of cell communication. Response to biotic stimulus and the defense response are, again, subclasses of all this. And if you want to talk about the immune response, you can cite this gene ontology term, and everybody will be able to go to gene ontology and see exactly where this term is in the taxonomy of concepts about biological processes. OK, I think we're out of time. Well, there's just a conclusion slide that I'll let you read, because I think it's just repeating what we've been saying so far: that we are drowning in data, and converting this data into knowledge is not easy. We need automated tools to access this data, to make sense of it, to convert it into formats that we can use. And of course, this is a challenging task, because as we saw, biomedical data covers the whole spectrum of knowledge representation and management techniques that we know about.
MIT HST512 Genomic Medicine, Spring 2004
Lecture 2: Introduction to Biology and Genomic Measurement
ATUL J. BUTTE: So I can and have in the past talked for about 6 hours on this subject. Today, we're just going to talk about the first of these, molecular biology for the [? bioinformaticist. ?] And if we have time, then we can talk about gene measurement techniques, not just microarrays, but all sorts of different technologies, depending on how fast or slow we go. So let's just get started then. So I have about 10 slides of basic biology, OK? If I see you start to yawn, then we'll just go faster. There's a lot that we can cover. So how many of you remember this kind of thing? Do you remember this? Do you remember this? Do you remember this? OK, so we can go pretty fast. So obviously, the key problem in biology, or in all of life science, is that organisms need to produce proteins for a variety of different things that they need to do over their lifetime. So the goal is proteins, but the organisms have to make the same proteins again and again in a very set, patterned way. Some of these proteins are important-- they're enzymes that catalyze reactions. These reactions would otherwise take 10,000 years to complete. Because you have the enzyme [INAUDIBLE], it can take 10,000 microseconds instead. Structural support-- otherwise, a bag of water would just collapse. These are things that are going to hold the cell together, to give it some shape. And certainly, we have hormones to signal from one part of an organism to another part of the organism, or from one organism to another organism. So the key problem in life science over the past 50 years is how to encode the instructions for making these specific proteins, because they have so many disparate shapes, disparate lengths, characteristics. How does the organism know, or how do the cells know, how to make these proteins? And the first step is obviously nucleotides. That's the most basic element of these blueprints that go into making the proteins. So we have adenine, cytosine, guanine, and thymine-- A, C, G, T. And as we all know, 50 years ago we learned that these are actually arranged in a chain-- actually two anti-parallel chains, where the As base pair with the Ts and the Cs go along with the Gs. And these are not just arbitrary chains; these are chains with polarity. So you have a start and an end to one chain, and the start of one chain is lined up with the end of the other chain. And this is the double helix. This naturally forms a double helix at the right temperature. If you heat the DNA, it starts to denature. But if you cool it again, it'll start to come back together again. It'll re-nature. So it's a natural formation, depending on the temperature-- the temperature needed to break these bonds. And so you get an idea of what this double helix looks like here. So now, let's take a step back. We have the DNA. We know that's going to be the end point, but we haven't even had to sequence the As, Cs, Gs, and Ts to make genomic or genetic diagnoses. So for more than 50 years, we've been able to make genetic diagnoses, even before sequencing was invented. How? Well, obviously, you can actually look at the chromosome structure. So you can get a blood sample from any human, isolate the white blood cells, which still have DNA, stain them in a particular way, take a photograph-- a photomicrograph-- of that, cut out the chromosomes, and line them up. And so people have been doing this for decades, more than half a century.
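Since the two chains are anti-parallel, with A pairing to T and C to G, one strand completely determines the other. As a small aside before going back to chromosomes, here is what that looks like in Python (a toy example; the input sequence is made up):

# Watson-Crick pairing: A<->T, C<->G; the partner strand is read in the opposite direction
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    # sequence of the anti-parallel partner strand, read 5' to 3'
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))   # -> ACGCAT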
And basically, these chromosomes have been numbered based on the original estimates of the size of these chromosomes. So chromosome 1 is thought to be the biggest. Chromosome 22 is thought to be the smallest, but it turns out that 21 is actually smaller than 22. And there's a couple other missed pairings like this now that we have the exact length. But this is how we've been able to make genetic diagnoses. Each chromosome, so each chromosome is a single double strand of DNA from end to end. And obviously, it's wound and coiled and rewound and recoiled in such a degree that this looks nothing like a double helix only because we're at a big picture here. We're at least five orders of magnitude up in terms of magnification from actually visualizing a double helix. Now, like I said, we've been able to make diagnoses for more than 50 years using-- come on in-- in genomics. Anyone want to guess what this diagnosis is? Let's keep this interactive here. AUDIENCE: [INAUDIBLE] ATUL J. BUTTE: This is Down syndrome or trisomy 21. So you can see, obviously, it's highlight here with the little arrow. But there's three copies of chromosome 21 here. And that happens to be a viable phenotype. You can born with this. And you can actually live for quite some time, though you have marked phenotypic problems or phenotypic issues. You can live with that. This is trisomy 21. There's two other chromosomes that you can have an entire trisomy of and still survive one way or another. Now, we have 22 pairs of chromosomes here. AUDIENCE: Interruption. ATUL J. BUTTE: Yes. AUDIENCE: [INAUDIBLE] question. Explain the following fact. So this [INAUDIBLE]. How come with exception of testicular cancer Down syndrome people don't get cancer? ATUL J. BUTTE: OK. AUDIENCE: Look at the genes, figure it out. [INAUDIBLE] who don't get cancer. ATUL J. BUTTE: So like I was saying, we have 22 pairs of chromosomes that are ideally paired, although we can get trisomies rarely. We also have a pair of sex chromosomes. Either you have two Xs or you have an X and a Y. And those are the normal conditions. There are abnormal conditions as well, but these are the sex chromosomes here. Most of the Y pairs with a piece of the X, but the Y has some unique material. And the X certainly has a lot of unique material that doesn't pair at all. There's some very important things on the Y, for example, a piece of the androgen receptor and things like that. The coding for that is on the Y chromosome. So this is what an actual photograph looks like. And this is the idealized version of this ideogram. The bands essentially are actually quite specific with a particular stain. When a particular stain is applied, when you apply different stains, you get different banding patterns. And it turns out these bands actually have a lot to do with the base pair composition of those regions. For example, GC-rich, or regions that have a lot of Gs and Cs, will turn out to be one color versus the As and the Ts. So now, we've seen the big picture from the chromosome. We started with the DNA and the nucleotides. And we've already covered the double helices. So just as an introduction, I got only 10 slides of basic biology refresher, and then we'll get to some interesting stuff. So how do we get from the DNA to the genes, right? Because even though we have 3.5 billion base pairs, only 1.5% of that codes for the proteins. We said the proteins were the final goal. And out of the 3.5 billion base pairs, only 1 and 1/2% actually are in a coding region per se. 
So let's talk about what exactly we call a coding region. This line here, this red line here, represents the double helix. That represents the As, Cs, Ts, and Gs. And the structure of a gene is such that there's a piece here, called an open reading frame, which is going to serve as our blueprint for the protein. It's got a start, an initiation. It's got an end, or a termination. And pieces of that are going to be spliced in and spliced out as part of this blueprint. And we'll talk about the splicing in a second. The piece of the double helix that's before this, towards the 5 prime end, is called upstream. And the piece of the double helix that's after this termination codon, towards the 3 prime end, is called downstream. Now, here's the most crucial point. We all love diagramming these things like this, but there is no punctuation in the genome. There's no highlighter. There's no caps, lowercase. There's no italics, bold. All you have are the letters, right? So one of the hardest problems is trying to figure out this structure from the letters, because there's no flashing arrow that says, here is where the gene begins. All you have are the As, Ts, Gs, and Cs. And it's only informatics that actually deciphers this, OK? So we have algorithms that people use commonly now, that have been trained on manual experiments-- the results of where this gene began and that gene began. They learn the patterns, they code it into algorithms, and the algorithms run on the entire genome to try to make predictions. But in the end, they're still only predictions of where these genes are, and they're still commonly wrong. So each gene encodes the instructions to make a single protein. The DNA that comes before it is called upstream. And a lot of the regulatory elements for this gene are actually in the upstream component-- let's say, more than in the downstream component. Regarding the splicing, imagine if an architect made a series of blueprints, but the architect said, well, if you don't want the Jacuzzi in your house, you can take this page out. If you only want a one car garage instead of a two car garage, you can take this page out. That's what the splicing is. You start with a whole set of what could be in the plans, and pieces can be taken out or pieces can be left in. That's what the exons and introns serve as. The introns get spliced out. They can be thrown out. And the exons remain to actually be part of the blueprint, which we'll talk about in the next few slides. Different tissues may splice in and out different components here. It's not always the same. Even though I'm drawing it here as exon, intron, exon, in a different part of the same organism that intron might be left in. That's still a mystery, exactly why that happens in some tissues and not in other tissues. And even though the mechanism-- the proteins that are involved in cutting and splicing these things-- is reasonably well defined, it's not clear why it is sometimes specific to one tissue versus another. These are called alternatively spliced products. If I have the same open reading frame from start to end-- that's how we define an open reading frame-- but I have exon, intron, exon in one blueprint and just the two exons in the other, those are alternative splice products. And because they're two different blueprints, you're going to get two different houses, or two different proteins, at the end. So we talked about a code at the start, a code at the end.
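To illustrate the "no punctuation" point, here is a minimal sketch of the naive first step a gene-finding program might take: scan the raw letters for an ATG followed, in the same reading frame, by a stop codon. The sequence and the minimum length are invented, and real gene finders, like the trained algorithms mentioned above, also model splice sites, promoters, and the statistics of coding regions, so this is only a toy.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=3):
    # naively report (start, end, frame) for ATG...stop stretches on one strand
    orfs = []
    for frame in range(3):
        pos = frame
        while pos + 3 <= len(dna):
            if dna[pos:pos + 3] == "ATG":                   # candidate start codon
                for end in range(pos + 3, len(dna) - 2, 3):
                    if dna[end:end + 3] in STOP_CODONS:     # first in-frame stop codon
                        if (end - pos) // 3 >= min_codons:
                            orfs.append((pos, end + 3, frame))
                        break
            pos += 3
    return orfs

seq = "ATGGCTGAAACTTAACCGTACGT"      # made-up sequence with one short open reading frame
print(find_orfs(seq))                # -> [(0, 15, 0)]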
And again, the biological system can figure out the promoter regions, where is the boundary between the intron and exon, where the start, where is the end using just the sequence syntax. There's no highlighter. There's no bold or uppercase, lowercase. So we have really 3.5 billion base pairs, but only 35,000 genes. And the reason why-- this was an early estimate of 3%. It's actually only 1.5% of the genome, so only around 60 or 70 or 80 megabases or million bases out of the 3.5 billion base pairs. So what makes up the rest? If I just showed you the structure of the gene-- here is the gene that I just showed you, let's say. What makes up the space between genes? Well, we're not going to call it junk here, but the 50% of the regions between genes are repeated elements. So in other words, there are at least four different types of repeated elements. And those four are repeated many times in the genome. That makes up 50% of the genome itself. Now, these repeated elements-- so the repeated sequences are interesting. Because early on, when we had to actually-- when people were using DNA in a forensic type of way, the number of repeats can be different between individuals. So what do I mean by that? So here is one type of repeated element. This is a LINE, or Long Interspersed Nuclear Element. It doesn't matter what it is, but here you see 1, 2, 3, 4, 5, maybe 6, 7, 8 copies of this line between this gene and this gene. So you might have 998 repeats between these two parts, but I might have 997 of them. And if you do that between a whole bunch of different areas of the genome, you can actually measure it. You can actually find differences. You can do simple things, like figure out is a child a descendent of one or both parents. You can certainly tell, to some certain degree of confidence, whether a sample left at a crime scene is from this individual versus the random population. So just counting these repeats is actually pretty useful. Even though today we have much more accurate ways of figuring out whether a sample came from one person versus another, this is actually what was first used almost 20 years ago in the forensic view of DNA. So these are the repeated elements here. And the number of repeats can be different between individuals. Now, the repeat-- AUDIENCE: Another [INAUDIBLE] idea. Look at the number and placement of these repeats. And as you learn about [INAUDIBLE],, you can measure comprehensively especially these genomes. We have thousands of [INAUDIBLE] in the public. And we actually determine what is it about the spacing or number of these repeats that does or does not influence the expression. Do they have a purpose given that they're [? matured ?] quite widely throughout the species? ATUL J. BUTTE: So the repeats themselves are also interesting for one other thing. They repeat, so it makes you think that these are copies of an original. And in fact, when we look at the sequence of these repeated elements, we can tell what the original looked like and how deviant is this sequence from the original. So how can we tell that? So what a repeat exactly is a repeat codes for machinery to make a copy of itself. That's all a repeat is. So let me be really clear. The repeated elements making up 50% of the genome codes for the machinery that goes back and makes a copy of itself. That's why it's going to persist in the genome, right? 
If all you need is one or a couple of these things, eventually they're going to come back and make copies of themselves as the genome goes on through evolution. Now, it turns out all of these repeated elements are broken today in humans. None of them work today in humans. That's not true in other organisms. Repeats are still alive and well in the mouse, but not in humans. In fact, they're all dead in humans. Why? AUDIENCE: You mean they don't work in-- ATUL J. BUTTE: They can't work. None of them are working. They cannot go back and make copies anymore. None of them code for that anymore. AUDIENCE: OK. Generally, evolutionarily, what we see is the first of these classes are repeats that populate a certain branch in a tree. ATUL J. BUTTE: Absolutely, I have a whole thing on this if we get to it, OK? AUDIENCE: Oh, all right. ATUL J. BUTTE: If we get to it. So just think about that, OK? So these different repeated sequences are all different ways to code for that machinery. That's all it is. We're the smartest organisms on this planet, or so we like to think, so we must have the largest genome, right? Absolutely wrong. So we have about 3.5 billion bases. And it turns out, with 3.5 billion bases and only four letters, A, G, T, C, you can fit your entire genome on a CD-ROM. You don't even need a DVD for this. With no additional compression, you can fit your entire genome in 750 megabytes, which is what a typical CD-ROM can hold. So just think about that for a second. Now look at these other organisms and what their genome sizes are. E. coli is a common bacterium that lives in your gut. It can be friendly. It can also be pathogenic. It has 4 million bases. Yeast, which is used to make bread-- 12 million bases. But even the pea, the garden pea-- you got peas in your salad maybe for lunch-- has more of a genome than we do, 4.8 billion. Maize and wheat-- wheat has 17 billion base pairs. And we have around 3 billion base pairs. So the size of the genome has nothing to do with the intelligence of the organism at all, it turns out. So where are these genomes different? A lot of the difference is in the space between the genes. So here are four organisms. Here's human, the same 50,000 base pairs I just showed you on the previous slide. And here are four genes within 50,000 base pairs and a whole bunch of repeats in the middle. Here's yeast. And you can see how many genes there are and very few spaces. So it's much more compact. Here's maize. And wheat would be the same thing. In 50,000 base pairs, there's only one gene there. And here's E. coli. In 50,000 bases, not only is it so compact, but there are actually genes on both sides, both strands. And they can even overlap, it turns out. We'll talk about that more specifically here. This is a genome for Plasmodium falciparum, which is the organism that causes-- AUDIENCE: Malaria? ATUL J. BUTTE: --malaria, still the number one infectious disease killer in the world. This is chromosome 2. And you see the arrows here. The arrows represent which strand of the DNA the gene is on. They don't all have to be on the same strand, making the problem even harder. And they can overlap. There are many instances where one gene exactly overlaps another. It makes it hard to measure one versus the other, it turns out. So this is the central dogma. We'll talk about central dogmas in a second, but this is basically the meat of how this happens. It serves as a blueprint.
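The CD-ROM claim above is just arithmetic: with only four letters, each base needs two bits, so roughly three billion bases come to about 750 megabytes. A quick back-of-the-envelope check (the genome size used here is a rounded assumption):

```python
# Back-of-the-envelope: 2 bits per base (4 letters), ~3 billion bases assumed.
genome_bases = 3.0e9
bits_per_base = 2                       # A, C, G, T -> 2 bits each
bytes_needed = genome_bases * bits_per_base / 8
print(f"{bytes_needed / 1e6:.0f} MB")   # 750 MB, about one CD-ROM
```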
As we all know, when the cell decides to make this protein, somehow the DNA unravels in such a way as to expose just the small piece of the 1.5% of the genome that's going to code for the protein it wants to make. And so the DNA is kept locked up, nice and safe, in the nucleus, the center of the cell. And the proteins have to be made outside the nucleus, out in the cytoplasm. So a temporary copy is made of the DNA that goes out, and it's going to be worked on to make the protein. And the temporary copy is called RNA or, specifically, messenger RNA. And so when the DNA unravels, the messenger RNA is actually built as the complement of the DNA. So if there's an A there, there's a T here, et cetera. That gets actively exported out of the nucleus and goes into the cytoplasm, where it's made into a protein. And to me, this is the most interesting part of the whole thing. Somehow nature realized that you can't just make proteins with the four letters. You have As, Ts, Cs, and Gs. And proteins are made up of 21 amino acids-- not 20, 21 amino acids. The 21st is actually one that was just discovered about 10 years or so ago, it turns out. And the coding for that one breaks all the rules. And we could talk about that if people are interested. But there's 20 amino acids, let's say. And I have four letters. Obviously, if I made one amino acid for one letter, I couldn't do it. If I had two letters, I've got four choices in this position and four choices in that one. That would code for 16 amino acids, but I have 20 to choose from. So I need three letters to code for at least 20 amino acids. So that's what's known as the genetic code-- each triplet of letters is called a codon. So each position can be one of the four nucleotides. And nature evolved into using three nucleotides to code for a single amino acid. So that messenger RNA comes in here. And one of the most amazing pieces of nanotechnology, this ribosome, looks at the strand as it's coming in. And it's able to link that strand with other amino acids that are held in place by these transfer RNAs that recognize these triplet sequences. So if there's a particular A, A, T, the tRNA that recognizes A, A, T-- or rather the opposite of that-- which is holding the right amino acid, comes into place. And the ribosome attaches it to build a growing protein chain. So in comes the RNA, out comes the protein. The micro machinery for this is astounding when you think about it. Remember, this is happening many, many times per second in all of your cells. You don't have to think about it. It hardly ever fails. If it failed, we just couldn't do anything. So here is this genetic code. So there's 64 different positions here. And so you see something like UUU. So Us are used in the RNA instead of Ts. It's just a slight difference there. UUU codes for phenylalanine. UCC codes for serine. And that's the code, essentially. Now, these four different codons all code for serine. It doesn't actually matter what the third base is; the first two are what are specific. So there are degeneracies in this code, because we don't have to use all 64-- we only have 20 or 21 amino acids to choose from. So this is actually the central dogma here. And these are all the different concepts we've talked about so far. We talked about nucleotides, which are held in a double helix. One single double helix makes up a chromosome. A chromosome holds the genes or DNA. And the entire set of all these genes is called the genome.
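To see the triplet code in action, here is a minimal sketch that translates a short made-up mRNA into amino acids using a few entries of the standard codon table. Only a handful of the 64 codons are listed; the full table, including the three stop codons, is what the ribosome and its tRNAs effectively implement.

```python
# Minimal sketch: translate an mRNA fragment three letters (one codon) at a time.
# Only a few codons of the standard genetic code are included for illustration.

CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "UUC": "Phe",
    "UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser",  # third base is degenerate
    "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):        # step through codon by codon
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("AUGUUUUCCGGCUAA"))   # Met-Phe-Ser-Gly
```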
The genes code for messenger RNA. The transfer RNAs bring the amino acids, and the ribosome operates on that to make the proteins. We'll talk about signal sequence in a second. The amino acids are joined together to make proteins. And this sequence of arrows here is called a central dogma. Even though the arrows work bidirectionally now, this is the original central dogma. Now, another fascinating part is this. Sometimes the cell needs to make something that would be quite detrimental to the cell. So for example, there are cells in your pancreas that need to make things to digest the food we eat. I just went to the truck. I had a little dumpling. Now, I need to break down that protein. And it turns out I can break down protein with something called trypsin. Trypsin will help me break down proteins as well as many other things, but those things are proteins themselves. So here's the quandary here. My pancreatic cells, how are they going to make this trypsin without digesting itself? How do I make proteins that are going to kill me if I need to make them? Well, so what happens is the first few base pairs of the sequence, in fact, the first few amino acids that get spit out by the ribosome, might code for what's called a signal peptide. And the ribosome might see this and say, hold on. The thing I'm about to make could kill me. I need to stop here. And what happens is somehow the ribosome makes its way to a safer place to make the proteins called the Endoplasmic Reticulum, or the ER. So this is the ribosome. Here's the RNA coming in. Here's a growing protein chain. And all of a sudden this red piece here gets recognized by the accessory part of the ribosome, the signal recognition peptide. It says, hold on. Don't keep going here. Because if I even have just one molecule of this, I'm going to start degrading the proteins within the cell. So the whole thing goes over. It starts to spit the protein chain into this hole into the endoplasmic reticulum where that can actually then be built safely. This is in a separate compartment that can be exported. It's a pretty sophisticated piece of nanotechnology, this thing. It knows, because it's given the right codes, to not digest itself because it might need to make something quite toxic to the cell. There are signal recognition peptides like this likely for a lot of different compartments in the cell. Don't build this into the ER. Build it somewhere else. The complete catalog of those is still unknown, but would be quite valuable to know. So how did the cell decide to make this gene? This area of research is called transcriptional regulation. So why does this cell decide to make that messenger RNA? No one knows for sure in any one case how we actually get the DNA to unfold in just the right way for these things to actually be built, but we love to theorize. And we love to draw pictures like this. And you'll even see pictures like this in The New England Journal of Medicine now. This is a commonly accepted way to draw a gene. So here's the start of a gene. The arrow here means that this is where the gene, a copy, is going to be taken off. And this is the upstream region. And these boxes represent things that can bind to that upstream region to turn on or turn off this whole process. So the GR here represents the Glucocorticoid Receptor. So when this cell sees steroids, the steroids are going to bind to glucocorticoid receptor, and the glucocorticoid receptor is going to bind here. 
And this picture here shows that with the hormone, with steroids, this gets turned off. They put a little X here. And here, this is saying without, so with no steroids present, these two are actually bound in the natural state. But when you have that plus a glucocorticoid receptor, the thing gets turned on. So you can imagine things, in fact other proteins-- the glucocorticoid receptor is another protein-- come back to the DNA and bind to the regions upstream of the genes to turn on and turn off other genes. So the whole thing is one amazing repetitive cascade. Genes code for proteins, some of which do their job, some of which come back to the DNA and turn on other genes or turn off other genes. And this cycle goes on endlessly. And we just know the smallest fraction of these cycles. Remember, we're drawing nice pretty boxes like this here. There is no boldface, no highlighter, nothing upstream that tells us this is where it's going to bind. All we have are the letters. And a lot of slow motion biology goes into figuring out where these binding sites are. And we love to think that we have algorithms to do this. Some work and some don't. Huge area of research here if we could figure out this immense network of transcription. What can start the process even though these are going on endlessly? Whole bunches, whole different actions, on cells can actually trigger specific responses. So for example, hormone action on receptors-- I just had lunch. My pancreas is now making insulin. That insulin is a small protein. It's getting exported out of my pancreatic cells into my bloodstream. That insulin is going to now go to the muscle and fat because my stomach is taking in all this sugar from what I just ate. I need to store that sugar somewhere. So the insulin is going from the pancreas into the blood and triggering a response as I speak in my liver fat and muscle cells. So there might be a receptor up here, another protein. The insulin comes and binds and triggers a whole bunch of things to turn on and turn off different genes being expressed. Shock or stress to the cell-- I just walked over here from Children's. It is frickin' cold out there. A whole bunch of things just started to be transcribed in my genome, new source of or lack of nutrients. If you have yeast and you put in one type of sugar, all of a sudden it starts to make things to deal with that type of sugar. If you change it to a different source of sugar, those genes go down. Another set of genes go up. So that could start a whole bunch of genes being transcribed. Internal derangements of a cell, this is actually two different ways to make a transcriptional program. If I fly from here to San Francisco, I'm going to be hit with cosmic rays. And more than likely, one of them is going to hit one of my cells. And all of a sudden, if it hits something in the wrong place, I might start to make a gene that I shouldn't be making. But even better, if my cell detects that it's making something it shouldn't be making, it might just decide to kill itself, save me the problem of having cancer. So those are two different ways internal derangements can actually trigger these things. Sometimes cells can decide to kill themselves for the benefit of the organism. And that's another common way to start these programs. And of course, the list of things that can trigger a transcriptional program is endless. It's infinite, right? I can go like this, and I've just triggered some genes in my hand, for example. Anything could start these things going. 
Any questions so far? I have about three or four slides left on biology. Is this stuff you all already knew? Are you learning anything at all so far? Perhaps, OK. So this is a little bit more esoteric now, this idea of temporal programs. So one of the more important places where genes are getting turned on and turned off is during the development of an organism. So in our nine months of gestation, when we go from fertilized egg to actually leaving the womb, whole bunches of genes are getting turned on and turned off in a pretty well-defined program. And the picture I'm showing here is the difference between development and-- I guess-- so the terms they use here are segmentation versus homeosis. Take a look at this picture, for example. These are two houses built in San Francisco in 1857. And they both started with the same blueprint. And they both started out-- they're essentially the same house right next to each other. But after more than 100 years, the two houses look very different from each other. So this is given as an example in this Cell paper as basically saying, the genes that are involved in the development of a particular tissue might not be the same ones that refine the tissue after it's been built. So the same tissue can have two different programs on two different time scales in terms of gene expression. Now, let's talk about this process of making the messenger RNA, because this is important. Our most common, most impressive ways of measuring genes now have a lot to do with messenger RNA. mRNA is transcribed at something like a few hundred to a couple of thousand nucleotides per minute. That's not that fast, actually. So if you have a gene that's 100,000 base pairs long-- and many human genes are that big-- it could take an hour or more to transcribe one copy. So in thinking about this process, this actually puts it into a real time scale that we know and love here, in terms of minutes and hours. So the gene dystrophin-- dystrophin is involved in muscle. It turns out dystrophin has defects that can lead to muscular dystrophy. Dystrophin is involved with how muscles are formed and contract. And the dystrophin gene is enormous-- on the order of 2.4 million base pairs, the largest known human gene. One dystrophin transcript can take 20 hours to transcribe. So how are we able to make enough of this if it takes 20 hours to make? Any ideas? If it takes 20 hours for me to make one transcript of dystrophin and I need to make a bunch, how am I going to do that? AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: You have to have a whole bunch of ribosomes, but what? AUDIENCE: Multiple copies of the gene. ATUL J. BUTTE: OK, you can have multiple copies of the gene in the genome, but there's one other way that you haven't thought of. Look, I'm telling you it takes 20 hours from the start of the pipe to the end of the pipe, right? Now, I'm not telling you how fast I'm shoving things into the pipe. I can have one of the things making the [INAUDIBLE] right behind the next, behind the next, behind the next. So I could just stream off thousands of copies at a time, right? I don't have to wait for one to be done to start the next. I can have them all go one base pair off of each other, a couple of base pairs off from each other. That's how I can make more or fewer. But I still need all the ribosomes to make them into proteins. You're absolutely right. So that's an important concept. Even though they are long, it doesn't tell you how many we can make in any unit of time. If the cell really wants to make a bunch of them, it could probably figure out a way to do it by just lining up the polymerases to make copies of the gene.
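That pipelining argument can be put into rough numbers. Here is a minimal sketch of the difference between the time for the first transcript to come off a very long gene and the steady-state rate once polymerases are loaded one behind the other. The gene length is dystrophin-sized, but the elongation rate and polymerase spacing are illustrative assumptions, not measured values.

```python
# Rough model: many RNA polymerases loaded one behind the other on one gene.
# All numbers below are illustrative assumptions, not measured values.

gene_length_nt = 2_400_000        # a dystrophin-sized gene (~2.4 million bases)
elongation_nt_per_min = 2_000     # assumed elongation rate
polymerase_spacing_nt = 200       # assumed minimum spacing between polymerases

# Time for the *first* transcript to come off the end of the gene:
first_transcript_hours = gene_length_nt / elongation_nt_per_min / 60
print(f"first transcript after ~{first_transcript_hours:.0f} hours")   # ~20 hours

# After that, transcripts finish as fast as new polymerases can be loaded:
minutes_between_transcripts = polymerase_spacing_nt / elongation_nt_per_min
print(f"then one transcript every ~{minutes_between_transcripts * 60:.0f} seconds")
```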
Now, it turns out a lot of messenger RNA happens to end with poly(A). So what do I mean by this? So as that little polymerase is working on the DNA streaming off a copy of the mRNA, when it's done it seems to stutter. So it adds a whole bunch of A, A, A, A, As at the end. Lucky for us-- I'm sure there's some biological reason for this, but it happens to be very fortunate for us. Because if we have something that sticks to this wall that's just made of T, T, T, T, T, it's going to bind to the A, A, A, A, A. And all of a sudden, we can now have a big filter for all the messenger RNAs. All we had to do is stream the messenger RNAs past my little thing stuck to the wall with the poly(T)s, and it's going to bind to the poly(A)s. That's how we can fish out all the messenger RNAs from the general pool of RNAs that might be out there because not all RNAs are actually coding for proteins. And in general, this is how we detect RNAs. You can take the sequence of RNAs and build the reverse complement of it. So everywhere there's a G, put a C in my detection. And if there's an A, put a T. If there's a C, put a G. So we can use the reverse complement to help us tell whether this sequence is there or not depending on how we designed the probe. And we're going to talk about that. So this is, I think, the last slide. So why are we trying to do this whole Genome Project? Why was it important to actually complete the Genome Project? [? Eric ?] [? Weiner ?] basically uses this analogy that knowing all the genes is equivalent to knowing the periodic table of the elements for biology. So around 1850s, 1860s, 1870s, Mendeleev came up with the way of lining up all the elements in such a way to be able to predict characteristics of those elements. Now, back then, most of this was empty spaces because they hadn't seen some of these rare elements before. But they knew, by golly, there has to be an element right here that's inert, for example. Or there's got to be an element here that's magnetic. They can make these predictions even though they didn't see what was actually there yet. They hadn't discovered it. The same way, the periodic table for biology for genomics isn't going to be a table. It might be a tree because that's how these genes are actually formed, through duplication events. And so the idea is maybe we can predict that there must be something in the genome that's binding to this. So there must be something in the genome that's responsible for that. Now that we have the entire catalog, we can start to fish those things out. And what this whole course is is how this is relevant to medicine. So most of the figures I've shown you here are from this book Genomes. In fact, there's a second edition now. It might be one of the required reading or something for the course. AUDIENCE: [INAUDIBLE] ATUL J. BUTTE: But it's freely available at NCBI. Definitely know this. So I would-- and in fact, all the figures are there, too, if you're interested in adding these things to reports and stuff. I highly recommend this more than anything else only because it's very readable, lots of great figures to help explain these processes, and it's very cutting edge, starts with proteomics, mass spec, microarrays, the whole works, much better than other books, relevant books in this field. And the Department of Medicine at this website has a primer on this. It's starting to get old now. AUDIENCE: [INAUDIBLE]. You guys received another email OK? ATUL J. BUTTE: This is starting to get a little old, but it's still quite relevant. 
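Going back to the reverse-complement idea for designing probes mentioned a moment ago, it is simple enough to show in a few lines; a minimal sketch with a made-up target sequence:

```python
# Minimal sketch: build the reverse complement of a target sequence,
# the way a detection probe is designed to base-pair with it.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

target = "AAGGCTTACGT"                  # made-up target sequence (DNA alphabet)
print(reverse_complement(target))       # ACGTAAGCCTT
```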
Some of that stuff will never go out of date. OK, so I got another, let's say, 45 minutes or so? AUDIENCE: Yup. ATUL J. BUTTE: OK. Let's talk about gene measurement techniques. I will come up with something different for Tuesday. So what I'm going to talk about now-- so any questions so far? Yes. AUDIENCE: [INAUDIBLE] time scale for the transcription? ATUL J. BUTTE: Sure. AUDIENCE: Does anybody know like said you can line up multiple transcription [INAUDIBLE] to combat these slower [INAUDIBLE],, right? ATUL J. BUTTE: Yup. AUDIENCE: And I guess assuming that transcription complex binds to the start-- ATUL J. BUTTE: Exactly. AUDIENCE: --and then faster than it moves along the strain, what's the lower limit? I guess the [INAUDIBLE] what's the fastest these things move? ATUL J. BUTTE: So what's the fastest that gene can be transcribed? AUDIENCE: Yeah. I mean because you've got the first transcription complex just move out of the way. ATUL J. BUTTE: Absolutely. Yeah, exactly. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: You could probably do the math and figure this out, OK? AUDIENCE: I think that's a great project, actually. ATUL J. BUTTE: Right. So-- AUDIENCE: Helps with a-- no, seriously. ATUL J. BUTTE: Absolutely. You can model this, right? AUDIENCE: [INAUDIBLE] a sound basis understand everything we know about the initiation of the [? transcriptual ?] site and moving on and the letting go, all the things that have to happen. And there are quite detailed documentations of the [? characteristics ?] and processes. What is the upper bound for transcription and translation? ATUL J. BUTTE: Well, basically, the model, the whole thing, is a pipe or a channel. And I already told you the length of this thing, how long it takes to make dystrophin. You just have to model how fast I can put the next one in. You might be able to model it based on just looking at the size of the polymerase and maybe how many base pairs it's covering up. Maybe it's got to move out of the way before the next one can come in. You can model it just like that, for example. It would be quite interesting to see how that matches up with the actual published numbers for it. My bet is that's probably still not the rate limiting step in this process. That's another interesting question. What's the slow part here? I'm not actually sure. It probably depends on the genes, actually. There are some genes that are only-- there are some important proteins that are only 10 amino acids long. Insulin, for example, is one with 50 amino acids. So some of those can be spit out pretty quick. And then you got some that are huge like dystrophin and titin, some of these others. Any other questions so far? AUDIENCE: [INAUDIBLE] in terms of [INAUDIBLE],, what in terms of bind [INAUDIBLE]? ATUL J. BUTTE: So again, remember, there's no nice line there. It's got to be in the sequence. So there are specific sequences that are known to be intron-exon boundaries, but clearly they're not 100%. Because in other cells, that same sequence might result in another transcript. So there are intron-exon boundaries about the start of the intron and the end of the intron. And it turns out one end of it complements the other side. So these things are actually taken out in a loop type of form. So the thing that cuts this thing out lines up the mRNA in such a way that's able to put the start and end together and just basically snip the whole thing out like that. I can refer you to pictures in the genome exactly on this. But again, it's not 100%. 
AUDIENCE: [INAUDIBLE] like GA, for instance, is one of the dinucleotides that sometimes initiates the splicing complex. But often it doesn't, meaning it's a very common dinucleotide. So [INAUDIBLE] the syntax that determines the splicing is not [INAUDIBLE]. So we have some heuristics which work some of the time. We have not reverse engineered [INAUDIBLE] the alternative splicing code. ATUL J. BUTTE: You think about this. This is a massive decryption effort in the end. All we have are the As, Ts, Cs, and Gs. One level of coding is the triplet codon for the amino acids, but another more interesting one is the grammar here. Where is the start of the gene? Where is the end of the gene? When do I splice it? And when do I splice that? What are all the transcriptional regulatory elements? That's the Holy Grail here. That is the Holy Grail. AUDIENCE: [INAUDIBLE]? ATUL J. BUTTE: We know those. AUDIENCE: But it doesn't always work either. ATUL J. BUTTE: Absolutely. OK. So I'll give you-- I'll tell you exactly why. The code for the start, there's only one code for start, AUG, but there's three codes to stop. It turns out one of the codes to stop, if there's another code 100 base pairs later, that makes the 21st amino acid. So in other words, you can't even just look at the triplets. It's in the context of other things, too. It makes things messy again. It's so nice and neat with just triplet codons. But that stop plus something else 100 base pairs later gives, actually, the 21st amino acid, selenocysteine. That's life, messy. AUDIENCE: So people discovered the selenocysteine. ATUL J. BUTTE: People for the life of them couldn't figure out why. How are you making selenocysteine proteins? There's only four in humans, one of which is involved in the thyroid, which is why a whole bunch of endocrinologists look at this. It metabolizes thyroid hormone into one form to another, thyroxine to triiodothyronine. That enzyme has a selenocysteine. And people couldn't figure out why it was coding for selenocysteine because it looked like a stop codon their. Well, you could start to figure these things out once you have the whole genome now. You can look at these exceptions. AUDIENCE: [INAUDIBLE] the function of [INAUDIBLE] mice and humans. What [INAUDIBLE]? ATUL J. BUTTE: OK, let's be very specific. The repeats code for a protein that comes back, looks for the sequence, and actually integrates the sequence somewhere else in the genome. It repeats code for protein like any other gene that finds its sequence and puts it in somewhere else in the genome. But for that to happen the code for the protein has to work. If it's damaged beyond repair, the thing can't work. And sometimes it makes mistakes. So the thing's coming back. I made a protein. It's coming back. It's trying to make a copy of itself. Sometimes it grabs the wrong thing to make a copy of. So that's why we have copies of lots of things in the genome. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: Good question. It's able to recognize its own sequence, but it's not at all perfect. But what we can say about the difference between human and mouse is that the mouse has those sequences for that protein, a reverse transcriptase. It has codes for it that seem to be fully functional. And in a human, there are none that are still functional. That's what we're saying. It's very curious why that is, this whole area of genomic archaeology that you can do with the genome now. 
In fact, what Zach was saying earlier is that we have so many millions of repeats that we can look to see how deviant they are from a working copy. And so we can age each of the repeats. This one's 40 million years old. This one's 50 million years old. Because if it's got four places where it's different and this one only has two, the one with four is probably older than the one with two, right? So like that, we can age all of these repeats. And it turns out a whole bunch of them were made at a certain point in time and then they dropped off. Again, no one knows why. I'll show you the graph in the second here. AUDIENCE: So we can assume that the genome is growing all the time. So the [INAUDIBLE] much bigger than that. ATUL J. BUTTE: Or smaller, too. There's ways to get rid of these things, too. If you do too much damage, the thing can't reproduce. And that's it. It's the end of the line. The most mind blowing thing that the Genome Project has done for me personally is that, now, I realize there's only one life form on this planet. It's DNA. Everything else is a side effect. That DNA is doing everything it can to actually keep its code going. That's all there's is. There's a great book called-- AUDIENCE: The Self Machine. ATUL J. BUTTE: The Self Machine, of course, but there's also a more recent one called-- it's on genomes. It's a paperback. The Autobiography of a Genome I think it's called. It's like 23 chapters. And basically, they just pick one gene from each chapter and make a story. AUDIENCE: Oh, yes. ATUL J. BUTTE: It's a great book. I recommend reading that. It's a lay kind of reading. AUDIENCE: Yeah, [INAUDIBLE]. ATUL J. BUTTE: It was on a bestseller list for a while. AUDIENCE: That's right. ATUL J. BUTTE: Autobiography of a Genome I think it's called, something like that. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: Some chapters were harder to write than others, I think, for this guy. [INTERPOSING VOICES] AUDIENCE: It's just a question about [INAUDIBLE] age, you know, and aging [INAUDIBLE]. Given that you can pretty much get rid of or add to or-- it seems like dating genes according to [INAUDIBLE] number of mutations against mutation frequency. And it seems kind of invalid now that we know that there are [INAUDIBLE]. AUDIENCE: Are you saying that because of these kinds of influences just looking at, let's say, the refresh rate of synonymous substitutions versus non-synonymous substitutions, it's not a good clock anymore? AUDIENCE: I guess. I guess I'm doing-- what I'm driving at is it seems like one of the main-- one of the underlying points that you've been making is that there's a lot of change and a lot of mistakes that can be introduced from a variety of sources, none of which have to be all that magnificent or catastrophic. It's every day types of things. And yet, on the other hand, when we're talking about, well, this gene's probably over because it has four mutations, not two-- ATUL J. BUTTE: Right. It's all based on-- that's a great point. We can age these things because we think we know models. So given a fixed rate of nucleotide substitution where people have tried to calculate numbers for that, we can make a premise. But if that's not true, then it's not going to hold. AUDIENCE: So first, if you have a transpose that on the [? box ?] right in the middle of the gene, then that would be invalid when you look at the rate of the mutation processing [INAUDIBLE].. It got a new puzzle. [INAUDIBLE] like a new to maybe kind of compare. The model [? is wrong. ?] 
But if you have a stretch, I mean-- ATUL J. BUTTE: Right. AUDIENCE: --a stretch, it's more or less the same except for some [INAUDIBLE],, then a lot of these-- not catastrophic, but more violent changes has to happen. ATUL J. BUTTE: It's all a matter of time scales. These things are making copies of themselves over millions of years. It's not a day to day kind of thing at all. The cell in my lung can make a copy of itself and move it somewhere in the genome. On the grand scheme of things, who cares? It's not in my sperm cells. It's not going to go on. So it's got to happen in the right cell at the right time. Across millions of years these things are happening when you think about it. It's got to be in the right cell, too. AUDIENCE: When you look at the [INAUDIBLE] between genes, is there anything to say that the rate of the mutations would be about the same all regions of [INAUDIBLE]?? ATUL J. BUTTE: Yeah. AUDIENCE: Actually, one of [INAUDIBLE] study. [INAUDIBLE] ATUL J. BUTTE: Yeah, I think the rates are definitely different between different regions only because some of these processes have different rates. Remember the banding pattern I showed you in the AT-rich? That corresponds to what regions are rich in AT and rich in GC. Some processes occur at different rates between them because there are tighter bonds there. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: It might, but we don't have anywhere to model it. AUDIENCE: That we don't know. ATUL J. BUTTE: We don't have any way to model it. AUDIENCE: [INAUDIBLE] when the genome was first presented in 2000 with [INAUDIBLE],, people [INAUDIBLE] stopped seeing the genes in junk DNA. But if you look at the plate of sequences from [INAUDIBLE] of DNA from [INAUDIBLE] phylogenetic tree, the percentages are quite similar for the intergenic region. That's what the [INAUDIBLE] region tells us. [INAUDIBLE] speaking, this stuff is important whether it is to keep selfishly some [INAUDIBLE] to survive or because it's actually necessary to the organism's function. ATUL J. BUTTE: Let me just show this one slide because you're asking so many questions about repeats. And this sort of summarizes a lot. So 3 million repeat copies started as working elements. That's how they can model this. There's 3 million repeats in the genome. So measure how far are they away from a working copy of them. Because through time, these mutations just happen randomly presumably at a fixed rate, but not necessarily so. Most of the repeats predate the mammalian radiations, so before mammals were even formed. Most of these repeats are actually older than that in the genomes. But there's no evidence for transposon activity in the last 50 million years since we've diverged from monkeys. None of these things have been alive. But they're still alive and well in the mouse and the rat. So not-- go ahead. AUDIENCE: Is that it? I mean, what's sort of the breakdown of how many species are still-- I mean are monkeys still doing this or they're [INAUDIBLE]?? ATUL J. BUTTE: I don't think we know yet because the monkey genomes are just being finished now. So we can better answer this question every month. By the end of this course, you'll be able to answer it better than today because these genomes are coming online. Literally, this is just only two years old information. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: What's that? AUDIENCE: Severely [INAUDIBLE]. ATUL J. 
BUTTE: You're never going to be done sequencing every organism on the planet, but I think we will have a better estimate now than when this paper came out if you choose to do this. Here's the graph that I like to think of. So the different bars here, the different colors represent the different repeat elements. The lines, the signs, they're just different codes. And here are less than 1% substitution from a working copy. And here's 34% substitution from working copy. And you can read that as millions of years basically, going back hundreds of millions of years. And you can see at 7% substitution there's a peak. And then it drops and then a slight peak, and then it drops. So these aren't fixed at all. It makes you wonder what happened on the planet at this point and in evolution. Why were all of these things alive, for example, then all of a sudden they died off down to zero? So it's amazing when you think about that only because they model these as working copies. And how do they diverge from those working copies? It's fascinating. Genomic archaeology, this was not a field three years ago. The relevance in medicine still has to be determined, though, which is why we're going to move on. OK, so I've got about 20 minutes left. Let's see how much of gene measurement techniques I can cover here. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: I'm going till 2:00, right? OK. Let's say 30 minutes. So the gene measurement techniques I'm going to talk about today are-- well, one's how do we measure DNA? How do we measure RNA? And how do we measure proteins? If you can get this, you're going to know 95% of what you need to in this area. Let's start with DNA. So DNA sequencing was invented back in the mid-'70s. And when we measure DNA, we're sequencing it. So we want to know the sequence of As, Gs, Ts, and Cs. And so how did we get all 3 and 1/2 billion base pairs-- basically, just slightly more automated ways than what I'm going to show you here. So what you can do is, if you have a DNA sample-- remember, it's As, Gs, Ts, and Cs-- we have tools to make copies of them. In fact, we can borrow the enzymes that the cell uses. And in a test tube, I can take my As, Gs, Ts, and Cs and make copies of them. The process of making copies is a technique called Polymerase Chain Reaction, or PCR. All PCR does is make copies of things and amplify them. If you have one, you make it into two. Then those two become four. Four become eight, et cetera. And it grows to be pretty big, something that you can actually visualize. So now, basically the way this technique works is this. As I'm making a copy of one strand of DNA, let's stop the reaction. So as I'm making a copy-- let's say I have 100 base pairs. Let's stop the reaction. And I stopped it after I have a whole bunch of copies going. So some of the copies are going to be 50 when I stopped it. Some are going to be 99. Some are going to be 98. Some are going to be 97. So I'm going to have a whole variety of lengths of copies of that gene of that sequence. It might not even be a gene. So then let's start it up again, but the last base pair I'm going to add to that copy I'm going to make a fluorescent. And we'll make it fluorescent in such a way that I'm going to make four colors, one color for A, one for T, one for C, one for G. So I have four different fluorescent colors that I could incorporate into the nucleotide, the last nucleotide I'm going to put on this particular sequence. So now, I have a whole variety of lengths, and the last base pair might be a fluorescent color. 
Then what I can do is I can put that whole mix of length of DNAs and put them in a gel. I want to actually activate an electric current. The DNA starts to fall because DNA gets pulled based on its charge. But as you would guess, the smaller sequences get pulled faster because it's easier to move than the larger ones. Even by one base pair you can separate them this way. It's called gel electrophoresis. That's why a one base pair difference we can see on a gel depending on the charge and all of this kind of thing. So then what you do is you put a laser down at the bottom of the gel. So it used to be we'd run the gel out. And if it went off past the end, you'd swear because you just ruined your sequence. You lost it. It's in the puddle at the bottom of the gel. But now, we want it to happen this way because we set up a laser at the end that excites whatever fluorescent is coming down there. And depending on the color, we can just read it off. Oh, here's a red peak. Here's a blue peak. Here's a green peak. So we have a whole bunch of these peaks, and then software looks at this. In fact, the most commonly used software for this is something called phrap and phred, which is an open source, freely available program. People don't even use the software that came with the thing. They wrote their own, and it's open source. It looks at the peaks, and then makes the call. I see this peak here. I think it's a G. I see this peak here. I think it's a C. There's going to be immediate problems with this. First, despite how fancy this looks, we still can't do more than 300 base pairs with this. AUDIENCE: [INAUDIBLE] some flaws related to the size of the [INAUDIBLE]. ATUL J. BUTTE: They all, in theory, should be just one base pair apart, but there's all sorts of reasons. So why are they some jumbled here and some they're spaced out here? As you let the thing go-- it's not a linear thing, the spacing. So as you let it go, it gets wider and wider and wider in general. But sometimes the copying process itself can have errors. Let me be clear. If the sequence I'm starting with has a whole bunch of Gs in a row, sometimes it gets confused. And it just happens to make one the wrong length. So if I have a lot of repeats, in the sequence I'm trying to sequence, I'm going to have problems. And I just told you 50% of the genome is repeats. It's not easy. AUDIENCE: Different sequences might actually have different proficiencies in these reactions. ATUL J. BUTTE: Yeah. AUDIENCE: And so that might cause some slight shifts. ATUL J. BUTTE: But basically, we can't do more than 300 to 400 base pairs than this, beyond this. And this is state of the art. This is the state of the art. So to go from this to 3 and 1/2 billion base pairs, you do it 300 at a time. That's it. There's no more magic than this. There's also another problem. What if I have two peaks in the same spot? Remember? I have two chromosomes. On one chromosome, it might be one letter. On the other, it might be another letter. Happens all the time, 1 in 1,000 it turns out, which we'll talk about. That's a polymorphism. It could be this or that. So that's how you can read it by looking at the peaks as well. That's basically it. You terminate the chain with the fluorescent nucleotide. And you just line them up, and you just read them off at the bottom. And phred basically makes the calls here. The software looks at the peaks and makes it the best guess here. So all of these peaks, not just the letters-- I told you the letters would fit on a CD-ROM. 
If you wanted to keep the tracings, you're going to need a hell of a lot of storage, on the order of hundreds of terabytes for just one human. But they're all stored, and they're all saved for the Human Genome Project. Because people go back and say, well, this peak wasn't so high. I don't think this is the right letter. But for the Human Genome Project, not only do we have all 3 and 1/2 billion base pairs, we also have an assessment of how good of a call it was for each letter, based on these peaks. So how do we get to the entire Human Genome Project? You automate the process. So this is a picture of the Genome Center at the Whitehead Institute. And basically, it's a whole bunch of robots just setting these things going. Most of the genome was sequenced in just the 12 months prior to the finish of the genome. Even though the process had been going on for 10 years, it takes time to develop these machines, develop the technology, and more importantly to develop the strategy, which we'll talk about in a second. And then the machines were built. And basically, they could do the whole genome in 12 months at the end. You could see the amount of the genome that was in draft form grew exponentially until the end, even though it started back in the mid-'90s. The Whitehead can run 100,000 sequencing reactions every 12 hours. Multiply that by 300 base pairs for each, and you get on the order of 30 million base pairs every 12 hours now. Robots pick all the colonies, because these things are grown up in bacteria. OK, here we go, 60 million nucleotides per day is the current estimate of what they can sequence. And we couldn't do it even five years ago. So then you have all of these little, little, little pieces of 300 base pairs. And somehow, you need to put this entire jigsaw puzzle back together. So there's essentially two ways you can do this. You can start with some strategy and say, well, I'm just going to take this piece of this chromosome and sequence that. Then let's move to the next piece and sequence that, and so on-- have some structure in mind as to the order you're going to do it, keep track of the pieces, and know how they're going to go back together. The opposite approach is called shotgun sequencing. And that's basically saying, take the entire genome, split it up into these 300 base pair fragments, and split it up again. Take the whole genome and split it up again into 300 base pair fragments using some other way. So the places where I cut it for this method are going to be different than the places I cut it for that method. Sequence the whole mess and just see what overlaps. Basically, this piece of this puzzle seems to go on that piece of that puzzle. And you basically put the whole thing together like that. Now, that assumes a lot of computational power. You've got to keep all these things in memory. You've got to line them up and see which ones work best. So the best approach in the end was a hybrid. You don't just put all the pieces together, but you also keep some idea of the scaffolding, where you got the pieces from, instead of just doing it randomly like this. So like most things in life, the hybrid worked the best. So here are all the pieces cut one way. Here are all the pieces cut another way. And this is basically saying, well, A1 to A2 has no overlap, so we're stuck there. We don't know how they join. But A2 and A3 might have some overlap.
And there's B here, and then you basically put them together based on the pieces. And hopefully, you have some overlaps between them. This A goes with this A, but we also see B and B here. Here's B1 fits right here. And B3 fits like there, et cetera, et cetera. That's how these things are joined. AUDIENCE: [INAUDIBLE] you look for overlap [INAUDIBLE]?? ATUL J. BUTTE: Obviously, you're going to need a whole bunch. You're going to need a whole bunch. And so there's a lot of holes. AUDIENCE: So there's only 300 base pairs [? information. ?] ATUL J. BUTTE: And again-- AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: Again, I told you 50% of the genome is repeats. So statistically speaking, you're going to get a whole bunch that look like they fit together. That's why this doesn't work that easily right out of the box because you have three small pieces that look so much like each other. So that's why hybrid approach works best if you remember that this piece came from chromosome 1 on this band on this thing, actually you then can put it somewhere physically in the genome. So for a human they had 10 4-processor machines, so 4 gigabytes of RAM each, and a 16-processor with 64 gigabytes. That took 10,000 CPU hours. That's a huge amount of computational power just to put all the pieces together for the human. The overlaps take memory, 600 gigabytes of RAM. And in the end, hybrid of putting it on a physical map is the way to go. Let's see if I have that there. So now, we're at a state where you can actually go to websites, like this one at the University of California Santa Cruz, which is probably one of the more popular ones, and you can just start to browse the genome. So if you just pick-- you can even hit a button to take you to a random spot on the genome. Here's an arbitrary region of chromosome 19. It shows you where the bands are, where the GC-rich regions are, what you would see on a karyotype, where the predicted genes are, where actually people have seen different pieces of gene in messenger RNA. So you can start to tell where the introns and exons are, where the polymorphisms are, all in one screen. And there's actually two different commonly used browsers for this. And I think the next screen is just a blow up of the UCSC browser. This is like a running thing of the GC content here. So that gets us to this. So now, I already started to hint at this. Let's say, off the automated sequencer, you see peaks like this. Or here, I see a little bit of a hump here and a little bit of a hump here or here, here exactly, two humps that are exactly the same size. That makes me think-- I mean, the program puts a little n here because I don't know which it is, an A or a T. But what that really could mean is that that could be a polymorphism, that that human actually has both letters, one on one chromosome, one on the other chromosome. And so if you have a chunk of DNA that you're interested in finding polymorphisms, you can get that gene or get that chunk of DNA, sequence in 100 humans, and just see what the different letters are in that sequence. That's one way to do it. But they found a whole bunch of these polymorphisms just by doing the Genome Project. The Genome Project was sequencing 30 or so humans. And there are going to be differences in 30 people. So a whole bunch of them were found that way as well. So you can either do this while sequencing the genome. You find arbitrary polymorphisms. 
Or if I'm interested in this gene because I think it's involved in asthma, and I want to see if there are polymorphisms, I can go after that gene and sequence it the same way. Both ways are going to look like this on an automated sequencer, two peaks there. You can also use microarrays, which we'll skip here. But you can use an array to help you find these SNPs as well. So I think you're going to have a lecture on SNPs. If someone-- AUDIENCE: Joel Hirschhorn, who is an expert from the Whitehead, will tell you about SNPs and problems of interpreting SNP [INAUDIBLE]. ATUL J. BUTTE: So I think this is my last slide on DNA measurements. And I'm just going to end on clinical uses, since this is genomic medicine. So again, SNPs are Single Nucleotide Polymorphisms. At a particular arbitrary spot in that genome, you have an A. I have a T, let's say. And it's usually just one letter or another. Rarely, there are places where there are three possibilities. Usually, it's one or the other. That happens, on average, rough estimate-- about 1 in 1,000 base pairs can be a polymorphism. So if you do the math, that's maybe 3 million to 10 million different spots in the genome where you could have one letter or another. And figuring out the association of those letters with differences in susceptibility to disease is another Holy Grail now. You have diabetes, I don't have diabetes. You have these letters. I have these letters. Maybe those letters have something to do with it-- that's the basic idea. Now, here's one particular figure from an article in The New England Journal of Medicine, just to give you an idea. There's one particular gene called HLA. And this is specifically HLA-B, a particular HLA gene. And you can have one particular polymorphism on both your chromosomes. You can have a mix of one or the other, or you can have the most common form of this. So there's a common form of the letter, there's a rare form, or you could have a mix of the two. And for this one particular gene, if you're infected with HIV, that one spelling influences your conversion to AIDS. It can actually have a huge impact. So if you have the rare form of the polymorphism on both your chromosomes, homozygous, you can go from having HIV to AIDS very quickly, within 10 years. If you don't have it, if both your chromosomes have the more common form, you can go on and on. And some people never get AIDS as far out as 18 years. And if you have the mix, you're somewhere in between here. So associations of SNPs with diseases are exceedingly common to find now. In fact, every other week in The New England Journal of Medicine, this is what you're reading. It's going to be some association of this spelling versus that disease. AUDIENCE: What's the most frequent form in this case? ATUL J. BUTTE: The most frequent is going to be no HLA, in the black one. Yup. So let me ask you. How much do you think it costs to measure a SNP? So if I have one arbitrary spot in all of our genomes that I want to measure-- whether there's an A, T, C, or G at that spot in any arbitrary genome-- how much do you think it costs? Not how much the lab charges, how much does it cost to measure one base pair? AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: How much does it cost? AUDIENCE: I already [INAUDIBLE]. ATUL J. BUTTE: Guesses? AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: You already guessed? Yeah. So we come up with the answer of $0.05 to $0.10. It's probably going to $0.01 in the next few years, pretty cheap. AUDIENCE: Stole your thunder, sorry. ATUL J. BUTTE: Stole my thunder, all right. Oh, yeah.
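As a toy version of that kind of SNP-disease association, here is a minimal sketch that compares how often a risk genotype shows up in cases versus controls and reports an odds ratio. All of the counts are invented; a real study, like the HLA-B one above, would also need a proper significance test and correction for testing many SNPs.

```python
# Minimal sketch of a case/control SNP association: a 2x2 table and an odds ratio.
# All counts below are invented for illustration.

cases_with_risk_allele, cases_without = 60, 40        # e.g. patients who progressed
controls_with_risk_allele, controls_without = 30, 70  # e.g. patients who did not

odds_ratio = (cases_with_risk_allele * controls_without) / \
             (cases_without * controls_with_risk_allele)
print(f"odds ratio = {odds_ratio:.2f}")   # 3.50: risk allele enriched in cases
```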
Last thing on SNPs, you didn't take this, did you? AUDIENCE: I did, too. ATUL J. BUTTE: Damn, I'll stop giving you my slides. How much time we have? OK. Let's talk about gene expression. Let's talk about microarrays, fourth slide, and then we'll talk about protein. So we recovered DNA. Now, we're working our way out of the nucleus. Now, let's talk about the messenger RNA. So like we said, if the cell decides to make a particular protein, it's going to start making the blueprints for that protein, the messenger RNA. It turns out it's very easy to measure these now. The most common way to measure them is using a microarray. All a microarray is is a 2 centimeter by 2 centimeter, about the size of your thumbnail, it's a man-made grid of DNA where each spot in the grid is looking for one particular RNA. And so the density is about 500 by 500. So that's on order of a quarter million different spots on these arrays. And suffice it to say, you can measure the entire genome of transcripts with one array today. And the way this works is have to get the tissue that you're interested in. You can't just do this from blood unless you're interested in blood. If you're interested in the brain, you've got to get the brain tissue to figure out what genes are turned on there. It might expose it to certain influence. Get the RNA, so isolate it. Remember, most of them are sticky with the little A. We just have to get the Ts and pull them out. Make them fluorescent by making copies of them. Remember, I showed you that technique. You can make a copy of something using the PCR and just have it put fluorescent ones in there, and it'll become fluorescent. Let it sit on this man-made grid overnight. Come in the morning. Wash it off. Light it up with the laser. The laser lights up the fluorescence, and you basically take a picture. And you get a TIFF image. If you scan in pictures at home, you know what a TIFF image is. But these are huge. These are on order of 10 megabytes, these files. And then the biology takes a pause. The informatics comes in and starts to quantitate each of these. It's quantitative to 4 to 5 orders of magnitude, and it's not perfect. There's a lot of noise. Most people like this technique. Again, these are also commodity items today. You can get one of these for the entire genome for about $300, $350 bucks today on [? Longwood. ?] The quantitative, there's at least two different ways to do this. There's an absolute measurement technique and a relative technique. So what does that mean? How much of this gene is present is one technique. How much is this gene is present relative to this other tissue is another technique. And the relative one uses two fluorescent colors. The absolute one uses one fluorescent color. It depends on what's the nature of the spots are, but we're not going to go into those details unless you really want to. The important point is that the genes are chosen arbitrarily. Statisticians hate this aspect of it. These are not drawn from some master normal distribution here. This year's chips are different than last year's chips because we know more of the genes. Next year's chips are going to be different than this year's chips. So every year, these things are changing. There's not ever going to be one fixed genome here. And you need the functional tissue. There are some diseases that are easy to study with microarrays, like cancer. If you have a solid tumor, the patient goes to the operating room. The tumor gets taken out. You have a hunk of tissue. You can extract the RNA. 
You can find a lot of genes this way. And so that's why most of the microarray papers involve cancer. A slightly harder disease to study might be something like diabetes. Diabetes affects the liver, muscle, fat, pancreas, maybe brain simultaneously. You might want to get all of those tissues. And let me tell you, we don't normally do biopsies of these things when we diagnose diabetes. Well, we could use animal models, et cetera. Slightly harder disease to study would be something like schizophrenia. I know the functional tissue, but just which part of the brain am I going to actually get and put on this array? I can't take an entire human brain to do this. Is it this part or this part? So I even know the tissue, but I don't know where I'm going to put the array, what I'm going to put on the array. And then one harder disease to study would be something like hypertension. What's the functional tissue in high blood pressure? Is it the heart? Is it the blood vessel? Is it the smooth muscle of the blood vessel? What exactly am I going to put on the array to learn what genes are up and down? So this is not the panacea for all diseases. If you want to know what genes are functionally involved, you have to be able to think about the functional tissue here. Yes. AUDIENCE: How long after death can you do this [INAUDIBLE]?? ATUL J. BUTTE: The easy way to answer that is that people have been able to get RNA from paraffin-embedded tissues. And they think that they're successful, but most people flash freeze these things. If you're in the OR with the tumor getting taken out, most people just put it into [? monosodium ?] right then and there. AUDIENCE: The slightly longer answer is RNA does degrade as opposed to DNA, which can stick around. And that's why they can do Jurassic Park-like stunts. ATUL J. BUTTE: But there's a reason why RNA degrades, because we degrade it. A lot of viruses are just strands of RNA. So that's on our interest to degrade it. So we have RNA, so we exude RNAses. It's much harder to work-- I mean, high school students work with DNA. If you've ever done this in your high school where they go get blood from an abattoir-- and they just basically get the blood. And you can look at the strands of DNA. You can't do that with RNA because we all have RNAses. We're going to degrade it immediately. So you have to wear gloves. You have to use special techniques, et cetera. AUDIENCE: But sometimes studies [INAUDIBLE] you have hours after death. And definitely, a muscle tissue four hours after death has a different [INAUDIBLE] factor [INAUDIBLE],, for instance, [INAUDIBLE] in many other [INAUDIBLE].. And so if you really want to do it reasonably, then basically as close to get the moment of death is where you want to be. ATUL J. BUTTE: This is what these arrays actually look like. This is the whole human genome array from a company called Affymetrix that makes these things. And they actually make a 96-well format of this. So here are 96 microarrays in just one plate. So you can imagine how much data you get out of this with 10 megabytes of each for each of these images, huge amount of data being collected this way. And we're not going to talk about the differences here. And we'll not talk about that. So you can use these microarrays to tell which genes are up. So for example, if I have a whole bunch of patients and half of them had one type of leukemia and half had another type of leukemia, leukemia means a whole bunch of white blood cells go up in the blood. 
So you get easy access to that tissue. And you can find genes that are up in one and down and the other or down in one and up in the other, et cetera, et cetera. In the end, you can get lists of genes very easily. Now, we all commonly do this. And this core facility is basically in every hospital. If you're a clinician at the Brigham and Women's Hospital, you can take your sample down to the core facility, and they'll give you back the text file-- this gene, this amount, this gene, this amount. It's just a matter of asking the right questions now. But the list of genes isn't enough. Sometimes you want to go back and validate that a gene is actually where it's supposed to be or what you thought was actually happening. And there's two ways to do this, in situ hybridization and real-time PCR. So these are usually inserted after you've done a microarray analysis. In situ hybridization is very easy. I take the same RNA that I'm looking for, make the exact reverse complement of it, and make it fluorescent again. And I'll make enough of it that I can then stain a sample, a tissue sample, with that and then light it up to make a picture. So wherever that transcript was, this is going to stick to it after I've washed it off. And it's going to light up. And I can take glorious pictures like this. So I can use different colors to light up one RNA versus another. And that might help me tell that it's in the right tissue that I thought it was. Real-time PCR, it's even-- it's actually a little bit harder to conceptualize, but basically real-time PCR uses that polymerase chain reaction technique to actually detect one sequence as compared to a control sequence. So it gives me a more quantitative measurement. This gives me a nice picture that I can put on the cover of Nature, but RT PCR actually gives me a number that actually can then be used to validate the microarray findings. It was three times higher. It was four times higher. It's basically using a PCR technique to see how fast do I get a detectable amount using the repetitive the exponential growth aspects of PCR. So now, let's get into proteins because we're coming close to the end of the hour here. So again, the Holy Grail is get to proteins. We all essentially use microarrays as a proxy for proteins, but we don't have an array. The same kind of array that we have for microarrays, we just don't have it for proteins. Why? Because proteins are much harder to measure. The RNA, basically, is As, Ts, Cs, Gs. And it's essentially a strand. And sometimes it buckles and has twists, but it's pretty easy to measure using these microarrays. Proteins can be positively charged, negatively charged. They can love water. They can hate water. They can be big. They can be small. If you look at it this way, it looks like one thing. If you look at it this way, it looks like another thing. So there's no easy consistent way to measure proteins like there is for RNA. Just to give you an idea, here is a commonly used technique called 2D Polyacrylamide Gel Electrophoresis, or 2D-PAGE. Take one sample, a gemish of proteins. All the proteins are radioactive, and I'm going to spread them out this way based on the size. Remember, if I put a charge, the smaller things move down faster. So here's size. And here's pH. Here's the pH that they like to hang out in, two different things, two different aspects of proteins. And in the one gemish, I have all of these spots. And if I do this again, they're going to be slightly different. 
If I do a third time, it'll be slightly different. This is a picture. This is the most irreproducible thing you can imagine. The gel is set just right. It might just come out a different way. So we wish that we could look at this and say, oh, it's got this size and this pH. It's got to be this protein-- can't do that at all. So then what we'll do is we'll cut out one of these spots and use the next technique to figure out what protein might be in there. But literally, people look at these pictures and they say, well, here's a picture from cancer. Here's not cancer. And oh, I see this spot here, and I don't see it here. Maybe it's a protein of interest. And there's a whole informatics trying to figure out and automate and image process these things, but they're terrible. This is not reproducible. Look how these spots are streaky, right? I mean, it's just a mess. Each spot has dozens, hundreds of proteins in there. Here's one paper I picked. These guys looked at all of these spots and sequenced all of them just to identify all these different proteins. So they've numbered each one. So back in '99, this is the-- [INAUDIBLE] actually in this building. I'm laughing at this, but, I mean, it's a huge amount of work to do this. So from that one spot, we might want to figure out what proteins are there. And this is a technique that most people use. It's this technique called mass spectrometry. And so the way this works is you get this little spot. Still, there's a whole bunch of peptides in there. Even though we separated them a lot in these two axes, there's a bunch of proteins here. And if you actually shoot it into this detector-- so basically, what mass spectrometry does is it takes a spot and hits it with a laser. And the things come flying off of there. And the way that they fly off of there depends a lot on the size of the protein and the charge of the protein. So basically, in the end through all of this, you basically have one axis here, mass divided by charge. And I have peaks, basically, that are coming off of the detector. Oh, there's a whole bunch of stuff at this value. Now, there's nothing. There's a whole bunch. There's nothing, like that, mass versus charge. So the way this works is what you can do is, these proteins here, you can digest them. Remember how these are long proteins. That same enzyme that I have in my pancreas that's still working on my lunch, I can use it to my advantage. I could take a spot and digest these peptides successively-- to have sequentially shorter proteins. So here's one protein that's 40 amino acids long. Here's one with 39. Here's 38. Here's 37. Here's 36. And basically, I do this in such a way so that all of these peptides get broken up into a series of smaller peptides. Why am I doing all this? Why am I doing all this? Because this is the trick. If I have a protein that's 30 amino acids long and one that's the same protein, but it's only 29 amino acids long, the difference in size is the last amino acid. Then to go from 29 to 28, the difference in size there is that amino acid. So if I've basically successively cut this one amino acid at a time and have a whole range of these, then I've got the difference in sizes to help me tell what that last amino acid was. And that's how we use it. We just look at the difference in peaks, the difference between the peaks is what we measure here. AUDIENCE: [INAUDIBLE]? ATUL J. BUTTE: No, not at all. It's a mess, right? AUDIENCE: [INAUDIBLE] ATUL J. BUTTE: It's a ratio.
So there's a bunch of peaks here, but that same peptide might be somewhere else, too, if it happened to get an extra charge. Absolutely. It's not perfect at all. It's not perfect at all, a whole new area here that's dying for new algorithms. You do it your best shot. You look at the peaks here. There's naive ways to do it, but they're not perfect at all. And of course, there's some proteins that love to have variable amounts of charge more than others. So some things are easier to detect than others. AUDIENCE: But I mean, the technique itself, there are known methods of how amino acids [INAUDIBLE] species themselves that they take [INAUDIBLE].. ATUL J. BUTTE: We know the size of each amino acid, each of the 20, 21 amino acids. So we can tell if there's a difference of peak here. But remember, there are multiple proteins. AUDIENCE: [INAUDIBLE] is not infallible, but there's characteristic [INAUDIBLE].. ATUL J. BUTTE: Absolutely, that's why we're able to use it at all. AUDIENCE: And there's libraries of that. There's a lot of-- ATUL J. BUTTE: Exactly. In fact, we can predict. We can look at every protein in the database and predict what it would look like when applied with trypsin to cut it. What would the peaks look like? And you can actually make predictions. And you basically take your pattern and compare it to the computer pattern. This might be predicted. This is the actual. And the computer, the algorithm here basically says, this is the protein. Now, there's another important point here I want to bring up. This is not quantitative. This helps you identify a protein in a sample, but it's not quantitative. It doesn't tell you how much there is. There's all sorts of newer, fancier techniques to compare this to some other sample to try to get some kind of quantitative measurement. But the way this is commonly used, it's just identifying. It's not quantitative, [INAUDIBLE].. AUDIENCE: Just a point, in The Science Times, in The New York Times this week, there was an article I saw about a test, a diagnostic for cervical cancer [INAUDIBLE]. ATUL J. BUTTE: Absolutely. AUDIENCE: They don't know how. It's just a tumor. ATUL J. BUTTE: I think I have that picture. AUDIENCE: OK. ATUL J. BUTTE: Yeah. Here, here's the one. So here's ovarian cancer-- same thing, right? So these guys use a particular chip to do this, but the chip is the smallest piece of it. It's basically still a mass spec that's on the end. Here is unaffected, unaffected, cancer one, cancer two. Here are all the different bands. And they say, to hell with identifying them. Here's the pattern. Oh, wow. I see a band here, a band here, but no band here. This must be diagnostic. I'll easily answer back. I mean, we're in danger of overfitting the data here. You have hundreds of thousands of peaks here and only four samples. How hard is it to find them that will answer your question? I tend to believe these things when they have something to do with the biology. If we go ahead and identify and there's some causal mechanism, that's great. But others are going to be very happy with this, especially since we have no other test for ovarian cancer today. That's fine-- or cervical cancer. For cervical cancer, we do Pap smears. For ovarian cancer, we have nothing. If there is something in the blood, it's better than nothing is what they would answer back. So here's basically the peak differences here. So they say, here's this peak. Here's this peak. We think that's an arginine or-- that's basically how this works. 
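To make the peak-difference idea concrete, here is a minimal Python sketch. It assumes singly charged ions, so that differences between successive peaks approximate single residue masses; the peak list is made up, and the residue-mass table is abbreviated and rounded for illustration.

```python
# Approximate monoisotopic residue masses (daltons) for a few amino acids;
# an abbreviated, illustrative table -- a real tool would use all 20.
residue_masses = {
    "G": 57.02, "A": 71.04, "S": 87.03, "P": 97.05, "V": 99.07,
    "T": 101.05, "L/I": 113.08, "D": 115.03, "K": 128.09, "E": 129.04,
    "F": 147.07, "R": 156.10, "W": 186.08,
}

# Hypothetical peak list (m/z values), assuming singly charged ions so that
# differences between successive peaks approximate single residue masses.
peaks = sorted([804.41, 861.43, 974.51, 1130.61, 1229.68])

tolerance = 0.05  # daltons

for lighter, heavier in zip(peaks, peaks[1:]):
    delta = heavier - lighter
    # Find the residue whose mass is closest to the observed difference.
    best = min(residue_masses, key=lambda aa: abs(residue_masses[aa] - delta))
    if abs(residue_masses[best] - delta) <= tolerance:
        print(f"{lighter:.2f} -> {heavier:.2f}: delta {delta:.2f}, looks like {best}")
    else:
        print(f"{lighter:.2f} -> {heavier:.2f}: delta {delta:.2f}, no match")
```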
You use the differences in peaks. These things are called peak lists. Quantitation, we're not going to go there, but, basically, you can take one sample and another sample and grow the one sample in heavy water. Instead of H2O, we'll use H3O. And that extra H, it's not extra-- well, actually-- AUDIENCE: [INAUDIBLE] ATUL J. BUTTE: It's-- AUDIENCE: Deuterium. ATUL J. BUTTE: It's deuterium, right. So it's not just deuterium, but you can also use other different isotopes. But basically, you have the same atom. Like, carbon can have a mass of 12, or you can have 12 plus a neutron. You can have all sorts of other things in the nucleus of the atom that could change how it looks on mass spec because you're changing the mass just slightly. And then you can tell which sample it came from and then try to quantitate that way. But again, the charge could change. So one protein's not in one spot. It's in a whole bunch of spots. You need to sum them all up, and they're overlapping. AUDIENCE: [INAUDIBLE] trypsin [INAUDIBLE] peptide one [INAUDIBLE] then-- ATUL J. BUTTE: OK, let me be clear. The trypsin cuts into manageable fragments. And there's another fragmenter in front of the machine that's actually cutting it up one amino acid at a time. Go ahead. AUDIENCE: You have [INAUDIBLE] different ways [INAUDIBLE]. ATUL J. BUTTE: Yes. AUDIENCE: [INAUDIBLE] and you have maybe [INAUDIBLE] number of [? charges-- ?] ATUL J. BUTTE: Absolutely. That's right. AUDIENCE: [INAUDIBLE] pretty much [INAUDIBLE] on what exactly [INAUDIBLE]. ATUL J. BUTTE: Well, the thing is the only hard part is that you don't know where the protein is. You see a whole bunch of spots there. And you see that it's here, but it could be on either side of the spectrum you're looking at. And if you let the machine run long enough, you get a huge amount of data off of that machine, an unmanageable amount of peaks. And we can look at and isolate one peak. Here, let me-- I think from this peak, we go to this peak. From that peak, we go to this peak. We've zoomed in so far to get the differences, but it could be elsewhere depending on the charge. It's doable. People do get great results off of this, but by no means is the informatics done in this field yet. And as more people use this technique, it's screaming for new algorithms. AUDIENCE: A quick statement-- is it obvious to everybody that the concentration of protein is not the same as concentration of RNA? Everybody should know that. In fact-- ATUL J. BUTTE: Right. Like I was saying, we're using it as a proxy. But let's be clear. If the cell has a whole bunch of this protein there and it's happy with that amount and it's not going anywhere, it doesn't need to make more. That's the simplest example of something where there might be zero RNA, but a whole lot of protein. If that protein's degrading really fast, then the cell might need to make more. But there's a massive disconnect here. Some things are going to correlate; a lot of things are not. AUDIENCE: [INAUDIBLE]. ATUL J. BUTTE: All these things about proteins which we don't know-- depends on the protein. AUDIENCE: I just have a question. I'm just not sure. When you do small molecules [INAUDIBLE] compare the [INAUDIBLE] you eliminate the charges, is there any equivalent way that we could prepare-- ATUL J. BUTTE: To eliminate the charges? AUDIENCE: Yeah. ATUL J. BUTTE: You're asking about a detail I don't know. AUDIENCE: No, because that's the reason why [INAUDIBLE] small molecule [INAUDIBLE] proxy [INAUDIBLE]. ATUL J. BUTTE: Absolutely.
But the thing is is that it's picking up charge, right? AUDIENCE: Yeah, of course. It is a charge, ultimately. But the reason why [INAUDIBLE] use that as a proxy [INAUDIBLE]. ATUL J. BUTTE: Absolutely. AUDIENCE: [INAUDIBLE] is not [INAUDIBLE].. ATUL J. BUTTE: Yeah. The biggest problem, though, is that these are longer than small molecules. I mean, these are much larger masses. That's the problem. And the proteins themselves, if you just chopped off the thing that's holding the charge or that loves-- if you just chopped off the polar part of it, it's going to have different properties. That next peak could be wildly different. It totally depends on that. AUDIENCE: But for my microarray lecture is, why is it, if they, in fact, as [INAUDIBLE] explain it, protein function, protein synthesis is not in a 1 to 1 relationship with RNA synthesis [INAUDIBLE] so successful in defining disease classes and the [INAUDIBLE] just based on RNA expression? Why is it-- why should we [INAUDIBLE]?? Or could it? ATUL J. BUTTE: I think this is my-- this or the next one is the last slide here. Now, despite it being hard to measure these things, there are definitely companies that are trying to make protein chips here. One way to detect proteins is to have an antibody against a protein. That's how people have been doing things like Western blots for decades. And you can radiolabel them and stuff. So you can make a plate here where each well has an antibody against one particular protein. Or you could make-- for example, people make a cytokine array. So this array just basically-- each spot looks for a different cytokine. You can buy these things, but you can't do this comprehensively like can with microarrays. That's the point. If you look across all possible proteins plus alternative splice products in proteins, you just can't do it today. You can do functional assays as well. I mean, it's antibody array. That's it. So we talked about sequencing, polymorphisms. We skipped SAGE. Most people are using microarrays now. We didn't even cover wafers. Let's cover wafers in a second-- 2D-PAGE, mass spec protein arrays. Let me at least cover the wafer thing because it's worth knowing that. It's all slides. So the wafers blow my mind. And that's why I like to end with this. So those companies that make these microarrays, they don't just make them one at a time. They make them the same way computer chips are made. They're made 40 at a time in a wafer, and then they're just cut out. So there's a company called Perlegen, which is owned by that other company Affymetrix, that basically said, why are we making 40 of the same array? Let's just make a wafer one big array. And with that, you have so many spots. You have 60 million spots in one wafer. It's about 5 inches. I'm drawing it like this, but it's really this small. You can see the size here. This is a wafer. A 5 inch square has 60 million probes. So even two years ago, they showed how you can use these wafers to re-sequence an entire chromosome. And that was one paper. You think about how many years it took to sequence of first chromosome, the Human Genome Project. They can just do that now over a weekend because they basically have each spot is a moving window of 25 base pairs. So what do I mean by that? They took the smallest chromosome. I think it was, like, 22, 23. They took the smallest chromosome, and they took the first 25 nucleotides. And they put it in one spot. And then the middle one they said, well, it could be an A, T, C, or G. So that's four spots. 
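A minimal Python sketch of that probe design, using a made-up stretch of sequence; a real wafer design would, of course, use the actual chromosome sequence and tile every position.

```python
# A made-up stretch of reference sequence (a real design would use the
# actual chromosome sequence).
reference = "ACGTTGCAAGTCCATGGATCCGTTACGATCAGT"

def probes_for_window(seq, start, length=25):
    """Return the four probes for one window: the 25-mer starting at
    `start`, with the middle base substituted by A, C, G, and T."""
    window = seq[start:start + length]
    mid = length // 2
    return [window[:mid] + base + window[mid + 1:] for base in "ACGT"]

# The four spots for the very first window of this sequence.
for probe in probes_for_window(reference, 0):
    print(probe)
```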
Now, let's move to the next one, to the next one. So they have a moving window of 25 nucleotides going from one end to the other end of the chromosome across a series of wafers. And once they have that done, they can just take anyone's blood and just basically get the entire pattern now. And they don't have to say, well, I think there's a SNP here or a SNP there. They just know all the SNPs now because it's just a matter of how many humans you put on the array. And it's just blood. The big issue-- each scan, the TIFF image, takes 10 terabytes. We were just learning about gigabytes. Terabyte is 1,000 of those, and this is 10 of them for just one of these wafers. So it's well-known that life science data is growing much faster than this famous Moore's law. Moore's law is from Gordon Moore from Intel that said that microprocessor power doubles every 18 months. Life science data is growing way faster than that. This is just one example of this. They've publicly stated that they can sequence all SNPs in a human in 10 days, not just the ones we know about. They just know all of them. And this is just one of many companies. There's many in the 95 Beltway here that are doing the same thing as competition. So remember how I was joking around that you could fit your entire genome on a CD-ROM? It is absolutely conceivable that we can have this within the next one or two years if we want it. The technology is there to do this. They certainly have the hard drive space there at Perlegen to do it. All right, I think that's it. And we should end since we're a little bit over.
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Inferring_a_Continuous_Random_Variable_from_a_Discrete_Measurement.txt
Hey guys. Welcome back. Today, we're going to be working on a problem that asks you to find the PMF of a function of a random variable. So let's just jump right in. The problem statement gives you the PMF for a random variable called x. So we're told that there's this random variable x that takes on values minus 3, minus 2, minus 1, 1, 2, and 3. And for each of those values, the probability mass lying over that value is given by this formula, x squared over a. Now I didn't write it here to save room, but we're also told that a is a real number that is greater than 0. And we're told that the probability of x taking on any value outside of the set is 0. Now we're asked to do two things in the problem. First is to find the value of the parameter a. And that's sort of a natural question to ask, because if you think about it, the PMF isn't fully specified. And in fact, if you plug in the wrong number for a, you actually won't get a valid PMF. So we'll explore that idea in the first part. And then the second part, you're given a new random variable called z. And z happens to be a function of x. In fact, it's equal to x squared. And then you're asked to compute that PMF. So this problem is a good practice problem. I think, at this point, you guys are sort of newly acquainted with the idea of a PMF, or probability mass function. So this problem will hopefully help you get more familiar with that concept and how to manipulate PMFs. And by the way, just to make sure we're all on the same page, what does a PMF really tell you? So p sub X, where this is a capital X, because the convention in this class is to use capital letters for random variables. So p X of k, this is defined to be the probability that your random variable X takes on a value of k. So essentially, this says-- and this is just some number. So in our particular case, this would be equal to k squared over a. And how you can interpret this is this px guy is sort of like a machine. He takes in some value that your random variable could take on, and then he spits out the amount of probability mass lying over that value. OK. So now that we've done that quick recap, let's get back to the first part of the problem. So we have this formula for px of x, and we need to solve for a. So in order to do that, we're going to use one of our axioms of probability to set up an equation. And then we can solve precisely for a. So namely, we know that every PMF must sum to 1. And so essentially, if you sum this guy over all possible values of x, you should get a 1, and that equation will let us solve for a. So let's do that. Summation over x of px of x. So here, essentially you're only summing over these six values. So this is equal to px of minus 3, plus px of minus 2, plus px of minus 1, et cetera. Oops. px of 2 plus px of 3. OK. And again, like the interpretation as we said, this number here should be interpreted as the amount of probability mass lying over minus 3. And to help you visualize this, actually, before we go further with the computation, let's actually plot this PMF. So the amount of probability mass lying over minus 3, the way we figure that out is we take minus 3 and we plug it into this formula up here. So you get 9/a. Now you can do this for minus 2. You've got 4/a, looking at the formula. For 1, you get 1/a. And of course, this graph, it's the mirror image over 0, because of the symmetry. So hopefully this little visualization helps you understand what I'm talking about. 
And now we can just read these values off of the plot we just made. So we know px minus 3 is equal to px of 3. So we can go ahead and just take 2 times 9/a. Similarly, we get 2 times 4/a, and then plus 2 times 1/a. So now it's just a question of algebra. So simplifying this, you're going to get 18 plus 8 plus 2, divided by a. And this gives you 28/a. And as I argued before, you know that if you sum a PMF over all possible values, you must get 1. So this is equal to 1, which of course implies that a is equal to 28. So what we've shown here is that you actually don't have a choice for what value a can take on. It must take on 28. And in fact, if you plug in any other value than 28 in here, you actually are not going to have a valid PMF, because it's not going to sum to 1. OK. So I'm going to write my answer here, and then erase to give myself more room for part
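A quick numerical check of part (a), plus a sketch of how part (b), the PMF of z = x squared, follows from the same PMF, using Python fractions to keep the arithmetic exact.

```python
from fractions import Fraction
from collections import defaultdict

values = [-3, -2, -1, 1, 2, 3]

# Normalization: summing x**2 / a over the six values must give 1,
# so a has to equal the sum of the squares, namely 28.
a = sum(x**2 for x in values)
print("a =", a)  # 28

p_X = {x: Fraction(x**2, a) for x in values}
print("sum of PMF:", sum(p_X.values()))  # 1

# For the second part, Z = X**2; its PMF is found by adding up the
# probability of every x value that maps to the same z.
p_Z = defaultdict(Fraction)
for x, p in p_X.items():
    p_Z[x**2] += p
print(dict(p_Z))  # Z takes values 1, 4, 9 with probabilities 1/14, 2/7, 9/14
```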
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
21_Bayesian_Statistical_Inference_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: It involves real phenomena out there. So we have real stuff that happens. So it might be an arrival process to a bank that we're trying to model. This is a reality, but this is what we have been doing so far. We have been playing with models of probabilistic phenomena. And somehow we need to tie the two together. The way these are tied is that we observe the real world and this gives us data. And then based on these data, we try to come up with a model of what exactly is going on. For example, for an arrival process, you might ask questions such as, is my arrival process Poisson or is it something different? If it is Poisson, what is the rate of the arrival process? Once you come up with your model and you come up with the parameters of the model, then you can use it to make predictions about reality or to figure out certain hidden things, certain hidden aspects of reality, that you do not observe directly, but you try to infer what they are. So that's where the usefulness of the model comes in. Now this field is of course tremendously useful. And it shows up pretty much everywhere. So we talked about the polling examples in the last couple of lectures. This is, of course, a real application. You sample and on the basis of the sample that you have, you try to make some inferences about, let's say, the preferences in a given population. Let's say in the medical field, you want to test whether a certain drug makes a difference or not. So people would do medical trials, get some results, and then from the data somehow you need to make sense of them and make a decision. Is the new drug useful or is it not? How do we go systematically about questions of this type? A sexier, more recent topic, there's this famous Netflix competition where Netflix gives you a huge table of movies and people. And people have rated the movies, but not everyone has watched all of the movies in there. You have some of the ratings. For example, this person gave a 4 to that particular movie. So you get the table that's partially filled. And Netflix asks you to make recommendations to people. So this means trying to guess. This person here, how much would they like this particular movie? And you can start thinking, well, maybe this person has given somewhat similar ratings as another person. And if that other person has also seen that movie, maybe the rating of that other person is relevant. But of course it's a lot more complicated than that. And this has been a serious competition where people have been using every heavyweight piece of machinery that there is in statistics, trying to come up with good recommendation systems. Then other people, of course, are trying to analyze financial data. Somebody gives you the sequence of values, let's say of the S&P index. You look at something like this and you can ask questions. How do I model these data using any of the models that we have in our bag of tools? How can I make predictions about what's going to happen afterwards, and so on? On the engineering side, anywhere where you have noise, inference comes in. Signal processing, in some sense, is just an inference problem.
You observe signals that are noisy and you try to figure out exactly what's happening out there or what kind of signal has been sent. Maybe the beginning of the field could be traced a few hundred years ago where people would observe, make astronomical observations of the position of the planets in the sky. They would have some beliefs that perhaps the orbits of planets is an ellipse. Or if it's a comet, maybe it's a parabola, hyperbola, don't know what it is. But they would have a model of that. But, of course, astronomical measurements would not be perfectly exact. And they would try to find the curve that fits these data. How do you go about choosing this particular curve on the base of noisy data and try to do it in a somewhat principled way? OK, so questions of this type-- clearly the applications are all over the place. But how is this related conceptually with what we have been doing so far? What's the relation between the field of inference and the field of probability as we have been practicing until now? Well, mathematically speaking, what's going to happen in the next few lectures could be just exercises or homework problems in the class in based on what we have done so far. That means you're not going to get any new facts about probability theory. Everything we're going to do will be simple applications of things that you already do know. So in some sense, statistics and inference is just an applied exercise in probability. But actually, things are not that simple in the following sense. If you get a probability problem, there's a correct answer. There's a correct solution. And that correct solution is unique. There's no ambiguity. The theory of probability has clearly defined rules. These are the axioms. You're given some information about probability distributions. You're asked to calculate certain other things. There's no ambiguity. Answers are always unique. In statistical questions, it's no longer the case that the question has a unique answer. If I give you data and I ask you what's the best way of estimating the motion of that planet, reasonable people can come up with different methods. And reasonable people will try to argue that's my method has these desirable properties but somebody else may say, here's another method that has certain desirable properties. And it's not clear what the best method is. So it's good to have some understanding of what the issues are and to know at least what is the general class of methods that one tries to consider, how does one go about such problems. So we're going to see lots and lots of different inference methods. We're not going to tell you that one is better than the other. But it's important to understand what are the concepts between those different methods. And finally, statistics can be misused really badly. That is, one can come up with methods that you think are sound, but in fact they're not quite that. I will bring some examples next time and talk a little more about this. So, they want to say, you have some data, you want to make some inference from them, what many people will do is to go to Wikipedia, find a statistical test that they think it applies to that situation, plug in numbers, and present results. Are the conclusions that they get really justified or are they misusing statistical methods? Well, too many people actually do misuse statistics and conclusions that people get are often false. 
So it's important to, besides just being able to copy statistical tests and use them, to understand what are the assumptions between the different methods and what kind of guarantees they have, if any. All right, so we'll try to do a quick tour through the field of inference in this lecture and the next few lectures that we have left this semester and try to highlight at the very high level the main concept skills, and techniques that come in. Let's start with some generalities and some general statements. One first statement is that statistics or inference problems come up in very different guises. And they may look as if they are of very different forms. Although, at some fundamental level, the basic issues turn out to be always pretty much the same. So let's look at this example. There's an unknown signal that's being sent. It's sent through some medium, and that medium just takes the signal and amplifies it by a certain number. So you can think of somebody shouting. There's the air out there. What you shouted will be attenuated through the air until it gets to a receiver. And that receiver then observes this, but together with some random noise. Here I meant S. S is the signal that's being sent. And what you observe is an X. You observe X, so what kind of inference problems could we have here? In some cases, you want to build a model of the physical phenomenon that you're dealing with. So for example, you don't know the attenuation of your signal and you try to find out what this number is based on the observations that you have. So the way this is done in engineering systems is that you design a certain signal, you know what it is, you shout a particular word, and then the receiver listens. And based on the intensity of the signal that they get, they try to make a guess about A. So you don't know A, but you know S. And by observing X, you get some information about what A is. So in this case, you're trying to build a model of the medium through which your signal is propagating. So sometimes one would call problems of this kind, let's say, system identification. In a different version of an inference problem that comes with this picture, you've done your modeling. You know your A. You know the medium through which the signal is going, but it's a communication system. This person is trying to communicate something to that person. So you send the signal S, but that person receives a noisy version of S. So that person tries to reconstruct S based on X. So in both cases, we have a linear relation between X and the unknown quantity. In one version, A is the unknown and we know S. In the other version, A is known, and so we try to infer S. Mathematically, you can see that this is essentially the same kind of problem in both cases. Although, the kind of practical problem that you're trying to solve is a little different. So we will not be making any distinctions between problems of the model building type as opposed to models where you try to estimate some unknown signal and so on. Because conceptually, the tools that one uses for both types of problems are essentially the same. OK, next a very useful classification of inference problems-- the unknown quantity that you're trying to estimate could be either a discrete one that takes a small number of values. So this could be discrete problems, such as the airplane radar problem we encountered back a long time ago in this class. So there's two possibilities-- an airplane is out there or an airplane is not out there. 
And you're trying to make a decision between these two options. Or you can have other problems where you have, let's say, four possible options. You don't know which one is true, but you get data and you try to figure out which one is true. In problems of this kind, usually you want to make a decision based on your data. And you're interested in the probability of making a correct decision. You would like that probability to be as high as possible. Estimation problems are a little different. Here you have some continuous quantity that's not known. And you try to make a good guess of that quantity. And you would like your guess to be as close as possible to the true quantity. So the polling problem was of this type. There was an unknown fraction f of the population that had some property. And you try to estimate f as accurately as you can. So the distinction here is that usually here the unknown quantity takes on a discrete set of values. Here the unknown quantity takes a continuous set of values. Here we're interested in the probability of error. Here we're interested in the size of the error. Broadly speaking, most inference problems fall either in this category or in that category. Although, if you want to complicate life, you can also think of or construct problems where both of these aspects are simultaneously present. OK, finally since we're in classification mode, there is a very big, important dichotomy in how one goes about inference problems. And here there are two fundamentally different philosophical points of view on how we model the quantity that is unknown. In one approach, you say there's a certain quantity that has a definite value. It just happens that we don't know it. But it's a number. There's nothing random about it. So think of trying to estimate some physical quantity. You're making measurements, you try to estimate the mass of an electron, which is a sort of universal physical constant. There's nothing random about it. It's a fixed number. You get data, because you have some measuring apparatus. And with that measuring apparatus, the results that you get are affected by the true mass of the electron, but there's also some noise. You take the data out of your measuring apparatus and you try to come up with some estimate of that quantity theta. So this is definitely a legitimate picture, but the important thing in this picture is that this theta is written as lowercase. And that's to make the point that it's a real number, not a random variable. There's a different philosophical approach which says, well, anything that I don't know I should model it as a random variable. Yes, I know. The mass of the electron is not really random. It's a constant. But I don't know what it is. I have some vague sense, perhaps, what it is, perhaps because of the experiments that some other people carried out. So perhaps I have a prior distribution on the possible values of Theta. And that prior distribution doesn't mean that nature is random, but it's more of a description of my subjective beliefs of where I think this constant number happens to be. So even though it's not truly random, I model my initial beliefs, before the experiment starts, in terms of a prior distribution. I view it as a random variable. Then I observe another related random variable through some measuring apparatus. And then I use this again to create an estimate. So these two pictures philosophically are very different from each other.
Here we treat the unknown quantities as unknown numbers. Here we treat them as random variables. When we treat them as a random variables, then we know pretty much already what we should be doing. We should just use the Bayes rule. Based on X, find the conditional distribution of Theta. And that's what we will be doing mostly over this lecture and the next lecture. Now in both cases, what you end up getting at the end is an estimate. But actually, that estimate is what kind of object is it? It's a random variable in both cases. Why? Even in this case where theta was a constant, my data are random. I do my data processing. So I calculate a function of the data, the data are random variables. So out here we output something which is a function of a random variable. So this quantity here will be also random. It's affected by the noise and the experiment that I have been doing. That's why these estimators will be denoted by uppercase Thetas. And we will be using hats. Hat, usually in estimation, means an estimate of something. All right, so this is the big picture. We're going to start with the Bayesian version. And then the last few lectures we're going to talk about the non-Bayesian version or the classical one. By the way, I should say that statisticians have been debating fiercely for 100 years whether the right way to approach statistics is to go the classical way or the Bayesian way. And there have been tides going back and forth between the two sides. These days, Bayesian methods tend to become a little more popular for various reasons. We're going to come back to this later. All right, so in Bayesian estimation, what we got in our hands is Bayes rule. And if you have Bayes rule, there's not a lot that's left to do. We have different forms of the Bayes rule, depending on whether we're dealing with discrete data, And discrete quantities to estimate, or continuous data, and so on. In the hypothesis testing problem, the unknown quantity Theta is discrete. So in both cases here, we have a P of Theta. We obtain data, the X's. And on the basis of the X that we observe, we can calculate the posterior distribution of Theta, given the data. So to use Bayesian inference, what do we start with? We start with some priors. These are our initial beliefs about what Theta that might be. That's before we do the experiment. We have a model of the experimental aparatus. And the model of the experimental apparatus tells us if this Theta is true, I'm going to see X's of that kind. If that other Theta is true, I'm going to see X's that they are somewhere else. That models my apparatus. And based on that knowledge, once I observe I have these two functions in my hands, we have already seen that if you know those two functions, you can also calculate the denominator here. So all of these functions are available, so you can compute, you can find a formula for this function as well. And as soon as you observe the data, that X's, you plug in here the numerical value of those X's. And you get a function of Theta. And this is the posterior distribution of Theta, given the data that you have seen. So you've already done a fair number of exercises of these kind. So we not say more about this. And there's a similar formula as you know for the case where we have continuous data. If the X's are continuous random variable, then the formula is the same, except that X's are described by densities instead of being described by a probability mass functions. 
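A minimal numerical sketch of the discrete form of the Bayes rule described above. The three hypotheses, the prior, and the likelihood values are made up purely to show the mechanics.

```python
# Hypothetical discrete example: Theta is one of three hypotheses, and we
# observe X, say a count of arrivals in one hour. Prior and likelihood
# values below are made up purely to show the mechanics of the Bayes rule.
prior = {"low": 0.5, "medium": 0.3, "high": 0.2}   # p_Theta(theta)
likelihood = {                                      # p_X|Theta(observed x | theta)
    "low": 0.02,
    "medium": 0.10,
    "high": 0.15,
}

# Numerators of the Bayes rule: prior times likelihood for each theta.
joint = {th: prior[th] * likelihood[th] for th in prior}

# The denominator is the total probability of the observed x.
evidence = sum(joint.values())

# Posterior: p_Theta|X(theta | x) = prior * likelihood / evidence.
posterior = {th: joint[th] / evidence for th in joint}
print(posterior)  # sums to 1; "medium" and "high" gain mass relative to the prior
```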
OK, now if Theta is continuous, then we're dealing with estimation problems. But the story is once more the same. You're going to use the Bayes rule to come up with the posterior density of Theta, given the data that you have observed. Now just for the sake of the example, let's come back to this picture here. Suppose that something is flying in the air, and maybe this is just an object in the air close to the Earth. So because of gravity, the trajectory that it's going to follow is going to be a parabola. So this is the general equation of a parabola. Zt is the position of my object at time t. But I don't know exactly which parabola it is. So the parameters of the parabola are unknown quantities. What I can do is to go and measure the position of my object at different times. But unfortunately, my measurements are noisy. What I want to do is to model the motion of my object. So I guess in the picture, the axis would be t going this way and Z going this way. And on the basis of the data that I get-- these are my X's-- I want to figure out the Thetas. That is, I want to figure out the exact equation of this parabola. Now if somebody gives you probability distributions for Theta, these would be your priors. So this is given. We need the conditional distribution of the X's given the Thetas. Well, we have the conditional distribution of Z, given the Thetas from this equation. And then by playing with this equation, you can also find how X is distributed if Theta takes a particular value. So you do have all of the densities that you might need. And you can apply the Bayes rule. And at the end, your end result would be a formula for the distribution of Theta, given the X that you have observed-- except for one sort of complication, to make things more interesting. Instead of these X's and Theta's being single random variables that we have here, typically those X's and Theta's will be multi-dimensional random variables or will correspond to multiple ones. So this little Theta here actually stands for a triplet of Theta0, Theta1, and Theta2. And that X here stands here for the entire sequence of X's that we have observed. So in reality, the object that you're going to get at the end, after inference is done, is a function: you plug in the values of the data and you get a function of the Thetas that tells you the relative likelihoods of different Theta triplets. So what I'm saying is that this is no harder than the problems that you have dealt with so far, except perhaps for the complication that, usually in interesting inference problems, your Thetas and X's are often vectors of random variables instead of individual random variables. Now if you are to do estimation in a case where you have discrete data, again the situation is no different. We still have a Bayes rule of the same kind, except that densities get replaced by PMFs. If X is discrete, you put a P here instead of putting an f. So an example of an estimation problem with discrete data is similar to the polling problem. You have a coin. It has an unknown parameter Theta. This is the probability of obtaining heads. You flip the coin many times. What can you tell me about the true value of Theta? A classical statistician, at this point, would say, OK, I'm going to use an estimator, the most reasonable one, which is this. How many heads did I obtain in n trials? Divide by the total number of trials. This is my estimate of the bias of my coin.
And then the classical statistician would continue from here and try to prove some properties and argue that this estimate is a good one. For example, we have the weak law of large numbers that tells us that this particular estimate converges in probability to the true parameter. This is a kind of guarantee that's useful to have. And the classical statistician would pretty much close the subject in this way. What would the Bayesian person do differently? The Bayesian person would start by assuming a prior distribution of Theta. Instead of treating Theta as an unknown constant, they would say that Theta was picked randomly, or pretend that it was picked randomly, and assume a distribution on Theta. So for example, if you don't know anything more, you might assume that any value for the bias of the coin is as likely as any other value of the bias of the coin. And this would be a probability distribution that's uniform. Or if you have a little more faith in the manufacturing process that's created that coin, you might choose your prior to be a distribution that's centered around 1/2 and sits fairly narrowly around 1/2. That would be a prior distribution in which you say, well, I believe that the manufacturer tried to make my coin fair. But they often make some mistakes, so I believe it's approximately 1/2 but not quite. So depending on your beliefs, you would choose an appropriate prior for the distribution of Theta. And then you would use the Bayes rule to find the probabilities of different values of Theta, based on the data that you have observed. So no matter which version of the Bayes rule that you use, the end product of the Bayes rule is going to be either a plot of this kind or a plot of that kind. So what am I plotting here? This axis is the Theta axis. These are the possible values of the unknown quantity that we're trying to estimate. In the continuous case, theta is a continuous random variable. I obtain my data. And I plot the posterior probability distribution after observing my data. And I'm plotting here the probability density for Theta. So this is a plot of that density. In the discrete case, theta can take finitely many values or a discrete set of values. And for each one of those values, I'm telling you how likely it is that that value is the correct one, given the data that I have observed. And in general, what you would go back to your boss and report after you've done all your inference work would be either a plot of this kind or of that kind. So you go to your boss who asks you, what is the value of Theta? And you say, well, I only have limited data, so I don't know what it is. It could be this, with so much probability. OK, let's throw in some numbers here. There's probability 0.3 that Theta is this value. There's probability 0.2 that Theta is this value, 0.1 that it's this one, 0.1 that it's this one, 0.2 that it's that one, and so on. OK, now bosses often want simple answers. They say, OK, you're talking too much. What do you think Theta is? And now you're forced to make a decision. If that was the situation and you have to make a decision, how would you make it? Well, I'm going to make a decision that's most likely to be correct. If I make this decision, what's going to happen? Theta is this value with probability 0.2, which means there's probability 0.8 that I make an error if I make that guess. If I make that decision, this decision has probability 0.3 of being the correct one.
So I have probability of error 0.7. So if you want to just maximize the probability of giving the correct decision, or if you want to minimize the probability of making an incorrect decision, what you're going to choose to report is that value of Theta for which the probability is highest. So in this case, I would choose to report this particular value, the most likely value of Theta, given what I have observed. And that value is called the maximum a posteriori probability estimate. It's going to be this one in our case. So picking the point in the posterior PMF that has the highest probability. That's the reasonable thing to do. This is the optimal thing to do if you want to minimize the probability of an incorrect inference. And that's what people do usually if they need to report a single answer, if they need to report a single decision. How about in the estimation context? If that's what you know about Theta, Theta could be around here, but there's also some sharp probability that it is around here. What's the single answer that you would give to your boss? One option is to use the same philosophy and say, OK, I'm going to find the Theta at which this posterior density is highest. So I would pick this point here and report this particular Theta. So this would be my Theta, again, Theta MAP, the Theta that has the highest a posteriori probability, just because it corresponds to the peak of the density. But in this context, the maximum a posteriori probability theta was the one that was most likely to be true. In the continuous case, you cannot really say that this is the most likely value of Theta. In a continuous setting, any value of Theta has zero probability, since we're talking about densities. So it's not the most likely. It's the one for which the density-- so the probability of a small neighborhood around it-- is highest. So the rationale for picking this particular estimate in the continuous case is much less compelling than the rationale that we had in here. So in this case, reasonable people might choose different quantities to report. And the very popular one would be to report instead the conditional expectation. So I don't know quite what Theta is. Given the data that I have, Theta has this distribution. Let me just report the average over that distribution. Let me report the center of gravity of this figure. And in this figure, the center of gravity would probably be somewhere around here. And that would be a different estimate that you might choose to report. So center of gravity is something around here. And this is a conditional expectation of Theta, given the data that you have. So these are two, in some sense, fairly reasonable ways of choosing what to report to your boss. Some people might choose to report this. Some people might choose to report that. And a priori, there's no compelling reason why one would be preferable to the other one, unless you set some rules for the game and you describe a little more precisely what your objectives are. But no matter which one you report, a single answer, a point estimate, doesn't really tell you the whole story. There's a lot more information conveyed by this posterior distribution plot than any single number that you might report. So in general, you may wish to convince your boss that it's worth their time to look at the entire plot, because that plot sort of covers all the possibilities. It tells your boss most likely we're in that range, but there's also a distinct chance that our Theta happens to lie in that range.
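A small Python sketch contrasting the two point estimates just discussed, using a made-up discrete posterior whose numbers echo the ones thrown out above.

```python
# A made-up posterior PMF over a few candidate values of Theta, of the kind
# you might read off a posterior plot after observing the data.
posterior = {1.0: 0.2, 2.0: 0.3, 3.0: 0.1, 4.0: 0.1, 5.0: 0.2, 6.0: 0.1}

# MAP estimate: the value of Theta with the largest posterior probability.
theta_map = max(posterior, key=posterior.get)

# Conditional expectation: the center of gravity of the posterior.
theta_mean = sum(theta * p for theta, p in posterior.items())

print("MAP estimate:", theta_map)                                   # 2.0
print("probability of error with MAP:", 1 - posterior[theta_map])   # 0.7
print("conditional expectation:", theta_mean)                       # about 3.1
```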
All right, now let us try to perhaps differentiate between these two and see under what circumstances this one might be the better estimate to report. Better with respect to what? We need some rules. So we're going to throw in some rules. As a warm up, we're going to deal with the problem of making an estimate if you had no information at all, except for a prior distribution. So this is a warm up for what's coming next, which would be estimation that takes into account some information. So we have a Theta. And because of your subjective beliefs or models by others, you believe that Theta is uniformly distributed between, let's say, 4 and 10. You want to come up with a point estimate. Let's try to look for an estimate. Call it c, in this case. I want to pick a number with which to estimate the value of Theta. I will be interested in the size of the error that I make. And I really dislike large errors, so I'm going to focus on the square of the error that I make. So I pick c. Theta has a random value that I don't know. But whatever it is, once it becomes known, it results in a squared error between what it is and what I guessed that it was. And I'm interested in making a small error on the average, where the average is taken with respect to all the possible and unknown values of Theta. So this is a least squares formulation of the problem, where we try to minimize the squared error. How do you find the optimal c? Well, we take that expression and expand it. And it is, using linearity of expectations, the expected value of Theta squared, minus 2c times the expected value of Theta, plus c squared-- that's the quantity that we want to minimize, with respect to c. To do the minimization, take the derivative with respect to c and set it to 0. So that differentiation gives us, from here, minus 2 times the expected value of Theta plus 2c is equal to 0. And the answer that you get by solving this equation is that c is the expected value of Theta. So when you do this optimization, you find that the optimal estimate, the thing you should be reporting, is the expected value of Theta. So in this particular example, you would choose your estimate c to be just the middle of these values, which would be 7. OK, and in case your boss asks you, how good is your estimate? How big is your error going to be? What you could report is the average size of the estimation error that you are making. We picked our estimate to be the expected value of Theta. So for this particular way that I'm choosing to do my estimation, this is the mean squared error that I get. And this is a familiar quantity. It's just the variance of the distribution. So the expectation is the best way to estimate a quantity, if you're interested in the mean squared error. And the resulting mean squared error is the variance itself. How will this story change if we now have data as well? Now having data means that we can compute posterior distributions or conditional distributions. So we get transported into a new universe where, instead of working with the original distribution of Theta, the prior distribution, now we work with the conditional distribution of Theta, given the data that we have observed. Now remember our old slogan that conditional models and conditional probabilities are no different than ordinary probabilities, except that we live now in a new universe where the new information has been taken into account.
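A quick numerical sanity check of the warm-up result above, for Theta uniform on [4, 10]: the mean squared error E[(Theta - c)^2] should be smallest at c = E[Theta] = 7, where it comes out to the variance, (10 - 4)^2 / 12 = 3.

```python
import random

random.seed(0)
samples = [random.uniform(4, 10) for _ in range(200_000)]

def mse(c):
    # Monte Carlo estimate of E[(Theta - c)^2] under the uniform prior.
    return sum((theta - c) ** 2 for theta in samples) / len(samples)

# Try a few candidate constants; the mean squared error should be smallest
# near c = 7, where it approaches the variance (10 - 4)**2 / 12 = 3.
for c in [5.0, 6.0, 7.0, 8.0, 9.0]:
    print(f"c = {c}: estimated E[(Theta - c)^2] = {mse(c):.3f}")
```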
So if you use that philosophy and you're asked to minimize the squared error, but now you live in a new universe where X has been fixed to something, what would the optimal solution be? It would again be the expectation of theta, but which expectation? It's the expectation which applies in the new conditional universe in which we live right now. So because of what we did before, by the same calculation, we would find that the optimal estimate is the expected value, given X, of Theta-- the optimal estimate that takes into account the information that we have. So the conclusion, once you get your data, if you want to minimize the mean squared error, you should just report the conditional expectation of this unknown quantity based on the data that you have. So the picture here is that Theta is unknown. You have your apparatus that creates measurements. So this creates an X. You take an X, and here you have a box that does calculations. It does calculations and it spits out the conditional expectation of Theta, given the particular data that you have observed. And what we have done in this class so far is, to some extent, developing the computational tools and skills to do this particular calculation-- how to calculate the posterior density for Theta and how to calculate expectations, conditional expectations. So in principle, we know how to do this. In principle, we can program a computer to take the data and to spit out conditional expectations. Somebody who doesn't think like us might instead design a calculating machine that does something differently and produces some other estimate. So we went through this argument and we decided to program our computer to calculate conditional expectations. Somebody else came up with some other crazy idea for how to estimate the random variable. They came up with some function g and they programmed it, and they designed a machine that estimates Theta's by outputting a certain g of X. That could be an alternative estimator. Which one is better? Well, we convinced ourselves that this is the optimal one in a universe where we have fixed the particular value of the data. So what we have proved so far is a relation of this kind. In this conditional universe, the mean squared error that I get-- I'm the one who's using this estimator-- is less than or equal to the mean squared error that this person will get, the person who uses that estimator. For any particular value of the data, I'm going to do better than the other person. Now the data themselves are random. If I average over all possible values of the data, I should still be better off. If I'm better off for any possible value X, then I should be better off on the average over all possible values of X. So let us average both sides of this quantity with respect to the probability distribution of X. If you want to do it formally, you can write this inequality between numbers as an inequality between random variables. And it tells us that no matter what that random variable turns out to be, this quantity is better than that quantity. Take expectations of both sides, and you get this inequality between expectations overall. And this last inequality tells me that the person who's using this estimator, who produces estimates according to this machine, will have a mean squared estimation error that's less than or equal to the estimation error that's produced by the other person. In a few words, the conditional expectation estimator is the optimal estimator. It's the ultimate estimating machine.
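A small simulation sketch of that comparison, under an assumed toy model (not one from the lecture): Theta is standard normal and X is Theta plus independent normal noise, in which case the conditional expectation works out to X / (1 + sigma^2). The competing machine just reports X.

```python
import random

random.seed(1)
sigma = 2.0          # noise standard deviation (assumed for this toy model)
n = 200_000

err_cond, err_naive = 0.0, 0.0
for _ in range(n):
    theta = random.gauss(0, 1)            # Theta ~ N(0, 1) prior
    x = theta + random.gauss(0, sigma)    # X = Theta + noise

    # Conditional expectation for this particular normal model:
    # E[Theta | X] = X / (1 + sigma**2).
    est_cond = x / (1 + sigma**2)
    # A competing estimator someone else might program: just report X.
    est_naive = x

    err_cond += (theta - est_cond) ** 2
    err_naive += (theta - est_naive) ** 2

print("MSE of E[Theta|X]:", err_cond / n)    # about sigma^2/(1+sigma^2) = 0.8
print("MSE of g(X) = X:  ", err_naive / n)   # about sigma^2 = 4.0
```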
That's how you should solve estimation problems if you're forced to report a single value and if you're interested in squared estimation errors. OK, while we could have told you that story, of course, a month or two ago, this is really about interpretation-- about realizing that conditional expectations have a very nice property. But other than that, any probabilistic skills that come into this business are just the probabilistic skills of being able to calculate conditional expectations, which you already know how to do. So conclusion: all of optimal Bayesian estimation just means calculating and reporting conditional expectations. Well, if the world were that simple, then statisticians wouldn't be able to find jobs. But real life is not that simple. There are complications. And that perhaps makes their life a little more interesting. OK, one complication is that we usually deal with vectors instead of just single random variables. I used the notation here as if X were a single random variable. In real life, you get several data points. Does our story change? Not really, same argument-- given all the data that you have observed, you should still report the conditional expectation of Theta. But what kind of work does it take in order to report this conditional expectation? One issue is that you need to cook up a plausible prior distribution for Theta. How do you do that? In a given application, it is a bit of a judgment call what prior you should be working with. And there's a certain skill there of not making silly choices. A more pragmatic, practical issue is that this is a formula that's extremely nice and compact and simple, that you can write with minimal ink. But behind it there could be hidden a huge amount of calculation. Doing any sort of calculation that involves multiple random variables really involves calculating multi-dimensional integrals. And multi-dimensional integrals are hard to compute. So actually implementing this calculating machine here may not be easy; it might be complicated computationally. It's also complicated in terms of not being able to derive intuition about it. So perhaps you might want to have a simpler version, a simpler alternative to this formula, that's easier to work with and easier to calculate. We will be talking about one such simpler alternative next time. So again, to conclude: at the high level, Bayesian estimation is very, very simple, given that you have mastered everything that has happened in this course so far. There are certain practical issues, and it's also good to be familiar with the concepts and the issues; in general, you would prefer to report the complete posterior distribution. But if you're forced to report a point estimate, then there are a number of reasonable ways to do it. And perhaps the most reasonable one is to just report the conditional expectation itself.
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Joint_Probability_Mass_Function_PMF_Drill_2.txt
Hey, guys. Welcome back. Today, we're going to do another fun problem, which is a drill problem on joint PMFs. And the goal is that you will feel more comfortable by the end of this problem, manipulating joint PMFs. And we'll also review some ideas about independents in the process. So just to go over what I've drawn here, we are given an xy plane. And we're told what the PMF is. And it's plotted for you here. What these stars indicate is simply that there is a value there. But we don't know what it is. It could be anything between 0 and 1. And so we're given this list of questions. And we're just going to work through them linearly together. So we start off pretty simply. We want to compute, in part a, the probability that x takes on a value of 1. So for those of you who like formulas, I'm going to use the formula, which is usually referred to as marginalization. So the marginal over x is given by summing over the joint. So here we are interested in the probability that x is 1. So I'm just going to freeze the value of 1 here. And we sum over y. And in particular, 1, 2, and 3. So carrying this out, this is the Pxy of 1, 1, plus Pxy of 1, 2, plus Pxy 1, 3. And this, of course, reading from the graph, is 1/12 plus 2/12 plus 1/12, which is equal to 4/12, or 1/3. So now you guys know the formula. Hopefully you'll remember the term marginalization. But I want to point out that intuitively you can come up with the answer much faster. So the probability that x is equal to 1 is the probability that this dot happens or this dot happens or this dot happens. Now, these dots, or outcomes, they're disjoint. So you can just sum the probability to get the probability of one of these things happening. So it's the same computation. And you'll probably get there a little bit faster. So we're done with a already, which is great. So for part b, conditioning on x is equal to 1, we want to sketch the PMF of y. So if x is equal to 1 we are suddenly living in this universe. y can take values of 1, 2, or 3 with these relative frequencies. So let's draw this here. So this is y. I said, already, y can take on a value of 1. y can take on a value of 2. Or it can take on a value of 3. And we're plotting here, Py given x, y, conditioned on x is equal to 1. OK, so what I mean by preserving the relative frequencies is that in unconditional world this is dot is twice as likely to happen as either this dot or this dot. And that relative likelihood remains the same after conditioning. And the reason why we have to change these values is because they have to sum to 1. So in other words, we have to scale them up. So you can use a formula. But again, I'm here to show you faster ways of thinking about it. So my little algorithm for figuring out conditional PMFs is to take the numerators-- so 1, 2, and 1-- and sum them. So here that gives us 4. And then to preserve the relative frequency, you actually keep the same numerators but divide it by the sum, which you just computed. So I'm going fast. I'll review in a second. But this is what you will end up getting. So to recap, I did 1 plus 2 plus 1, which is 4, to get these denominators. And so I skipped a step here. This is really 2/4, which is 1/2, obviously. So you add these guys to get 4. And then you keep the numerators and just divide them by 4. So 1/4, 2/4, which is 1/2 and 1/4. And that's what we mean by preserving the relative frequency. Except so this thing now sums to 1, which is what we want. OK, so we're done with part b. 
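(As a short hedged aside, not from the recitation itself: the marginalization in part (a), the renormalization shortcut in part (b), and the expectation in part (c) can be checked with a few lines of Python. Only the x = 1 column of the joint PMF is written out below, since those are the values actually given in the figure.)

from fractions import Fraction as F

# joint PMF values read off the figure for the x = 1 column; the starred
# entries elsewhere are unknown, so they are simply left out here
p_xy = {(1, 1): F(1, 12), (1, 2): F(2, 12), (1, 3): F(1, 12)}

# part (a): marginalization, P(X = 1) = sum over y of p_XY(1, y)
p_x1 = sum(p for (x, y), p in p_xy.items() if x == 1)
print(p_x1)  # 1/3

# part (b): conditional PMF of Y given X = 1 -- divide each entry by P(X = 1),
# which is the same as the "divide by the sum of the numerators" shortcut
p_y_given_x1 = {y: p / p_x1 for (x, y), p in p_xy.items() if x == 1}
print(p_y_given_x1)  # values 1/4, 1/2, 1/4

# part (c): conditional expectation E[Y | X = 1]
print(sum(y * p for y, p in p_y_given_x1.items()))  # 2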
Part c actually follows almost immediately from part b. In part c we're interested in computing the conditional expectation of y given that x is equal to 1. So we've already done most of the legwork because we have the conditional PMF that we need. And so expectation-- you guys have calculated a bunch of these by now. So I'm just going to appeal to your intuition and to symmetry. Expectation acts like center of mass. This is a symmetrical distribution of mass. And so the center is right here at 2. So this is simply 2. And if that went too fast, just convince yourselves. Use the normal formula for expectations. And your answer will agree with ours. OK, so d is a really cool question. Because you can do a lot of math, or you can think and ask yourself, at the most fundamental level, what is independence? And if you think that way, you'll come to the answer very easily. So essentially, I rephrased this to truncate it from the problem statement that you guys are reading. But the idea is that these stars are unknown probability masses. And this question is asking: can you figure out a way of assigning numbers between 0 and 1 to these values such that you end up with a valid probability mass function, so everything sums to 1, and such that x and y are independent? So it seems hard a priori. But let's think about it a bit. And in the meantime I'm going to erase this so I have more room. What does it mean for x and y to be independent? Well, it means that they don't, essentially, have information about each other. So if I tell you something about x, and if x and y are independent, your belief about y shouldn't change. In other words, if you're a rational person, x shouldn't change your belief about y. So let's look more closely at this diagram. Now, the number 0 should be popping out to you. Because this essentially means that the outcome (3, 1) can't happen. Or it happens with 0 probability. So let's say we fix x equal to 3. If you condition on x is equal to 3, as I just said, this outcome can't happen. So y could only take on values of 2 or 3. However, if you condition on x is equal to 1, y could take on a value of 1 with probability 1/4, as we computed in part b. It could take on a value of 2 with probability 1/2. Or it could take on a value of 3 with probability 1/4. So these are actually very different cases, right? Because if you observe x is equal to 3, y can only be 2 or 3. But if you observe x is equal to 1, y can be 1, 2, or 3. So actually, no matter what values these stars take on, x always tells you something about y. Therefore, the answer to this, part d, is no. So let's put a no with an exclamation point. So I like that problem a lot. And hopefully it clarifies independence for you guys. So in parts e and f, we're going to be thinking about independence again. To go over what the problem statement gives you, we defined this event, B, which is the event that x is less than or equal to 2 and y is less than or equal to 2. So let's get some colors. Let's do bright pink. So that means we're essentially living in this world. There's only those four dots. And we're also told a very important piece of information: conditioned on B, x and y are conditionally independent. OK, so part e, now that we have this. And by the way, these two assumptions apply to both part e and part f. So in part e, we want to find out Pxy of 2, 2. Or in English, what is the probability that x takes on a value of 2 and y takes on a value of 2? So determine the value of this star.
And the whole trick here is that the possible values that this star could take on are constrained by the fact that we need to make sure that x and y are conditionally independent given B. So my claim is that if two random variables are independent and you condition on one of them, say x, then no matter which value of x you condition on, the relative frequencies of y should be the same. So here, conditioned on x equal to 1, the relative frequencies of y are 2 to 1. This outcome is twice as likely to happen as this one. If we condition on x equal to 2, this outcome needs to be twice as likely to happen as this outcome. If they weren't, x would tell you information about y. Because you would know that the distribution over y equal to 2 and y equal to 1 would be different. OK? So because the relative frequencies have to be the same, and 2/12 is 2 times 1/12, this guy must also be 2 times 2/12. So that gives us our answer for part e. Let me write it up here. Part e, we need Pxy of 2, 2 to be equal to 4/12. And again, the way we got this is simply that we need x and y to be conditionally independent given B. And if this were anything other than 4/12, the 2-to-1 relative frequency of y would be different from over here. So here, conditioned on x equal to 1, the outcome y equal to 2 is twice as likely as y equal to 1. Here, if we put a value of 4/12 and you condition on x is equal to 2, the outcome y is equal to 2 is still twice as likely as the outcome y is equal to 1. And if you put any other number there, the relative frequencies would be different. So x would be telling you something about y. So they would not be independent conditioned on B. OK, that was a mouthful. But hopefully you guys have it now. And lastly, we have part f, which follows pretty directly from part e. So we were still in the unconditional universe. In part e, we were figuring out the value of the star in the whole unconditional universe. Now, in part f, we want the value of the star in the conditional universe where B occurred. So let's come over here and plot a new graph so we don't confuse ourselves. So we have x and y. x can be 1 or 2. y can be 1 or 2. So we have a plot that looks something like this. And so again, same argument as before. Let me just fill this in. From part e, we have that this is 4/12. And we're going to use my algorithm again. So in the conditional world, the relative frequencies of these four dots should be the same. But you need to scale them up so that if you sum over all of them the probability sums to 1. So you have a valid PMF. So my algorithm from before was to add up all the numerators. So 1 plus 2 plus 4 plus 2 gives you 9. And then to preserve the relative frequency you keep the same numerator. So here we had a numerator of 1. That becomes 1/9. Here we had a numerator of 2. This becomes 2/9. Here we had a numerator of 4. That becomes 4/9. Here we had a numerator of 2, so 2/9. And indeed, the relative frequencies are preserved. And they all sum to 1. So our answer for part f-- let's box it here-- is that Pxy of 2, 2 conditioned on B is equal to 4/9; it's just that guy. So we're done. Hopefully that wasn't too painful. And this is a good drill problem, because we got more comfortable working with PMFs, joint PMFs. We went over marginalization. We went over conditioning. We went over independence. And I also gave you this quick algorithm for figuring out what conditional PMFs are if you don't want to use the formulas.
Namely, you sum all of the numerators to get a new denominator and then divide all the old numerators by the new denominator you computed. So I hope that was helpful. I'll see you next time.
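(Another hedged sketch, an editorial addition: parts (e) and (f) can be checked by taking the four masses inside B, with the value 4/12 that the conditional-independence argument forces, renormalizing, and verifying that the conditional joint PMF factors into the product of its conditional marginals.)

from fractions import Fraction as F

# the four probability masses inside B = {X <= 2, Y <= 2}, with p(2, 2) = 4/12
# being the value that part (e) argued is forced by conditional independence
inside_B = {(1, 1): F(1, 12), (1, 2): F(2, 12), (2, 1): F(2, 12), (2, 2): F(4, 12)}

p_B = sum(inside_B.values())
p_given_B = {xy: p / p_B for xy, p in inside_B.items()}  # renormalize: divide by P(B)
print(p_given_B[(2, 2)])  # 4/9, the answer to part (f)

# sanity check of conditional independence given B: the conditional joint
# should equal the product of the conditional marginals for every (x, y) in B
px = {x: sum(p for (a, y), p in p_given_B.items() if a == x) for x in (1, 2)}
py = {y: sum(p for (x, b), p in p_given_B.items() if b == y) for y in (1, 2)}
print(all(p_given_B[(x, y)] == px[x] * py[y] for x in (1, 2) for y in (1, 2)))  # True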
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Coin_Tossing_Puzzle.txt
Hi. In this problem, we'll be going over practice with the calculation of conditional probabilities. We'll start with a game where our friend Alice will be tossing a coin with certain bias of having a head, and tosses this coin twice. And we're interested in knowing, what's the probability that both coin tosses will end up being a head? The first step we're going to do is to convert the problem into a mathematical form by defining two events as the following. Event A is where the first coin toss is a head. And similarly, event B will be having the second coin toss also being a head. Having these two events will allow us to say, well, the event that A intersection B will be the event that both coin tosses are a head. And we'd like to know the probability of such an event. In particular, the probability of A and B will be calculated under two types of information. In the first case, we'll be conditioning on that we know the first coin toss is a head. I'd like to know what the probability of A and B is. In the second case, we know that at least one of the two coin tosses is a head expressed in the form A union B. And under this conditioning, what is the probability of A and B, A intersection B? So Alice, in this problem, says-- well, her guess will be that the first quantity is no smaller than the second quantity. Namely, knowing that the first coin toss is a head somehow more strongly implies that both coin tosses will be a head, compared to the case that we only know at least one of the two coin tosses is a head. And we'd like to verify if this inequality is indeed true. To do so, let's just use the basic calculation of conditional probability. Now, from the lectures, you've already learned that to calculate this quantity, we'll write out a fraction where the numerator is the probability of the intersection of these two events. So we have A intersect B intersection A divided by the probability of the event that we're conditioning on, which is A. Now, the top quantity, since we know that A and B is a subset of event A, then taking the intersection of these two quantities will just give us the first event. So we have A and B. And the bottom is still probability of A. Let's do the same thing for the second quantity here. We have the top probability of A and B intersection the event A union B, and on the bottom, probability of the event A and B. Again, we see the event A and B is a subset of the event A union B. So the top will be A and B. And the bottom-- A union B. OK, now let's stop for a little bit. We've computed the probability for each expression in the following fractional form. And we observed that for both fractions, the numerator is the same. So the numerator is a probability of A and B. And the denominator in the first case is probably of A, and the second case, probably of A union B. Since we know that A is a subset of the event A union B, and by the monotonicity of probabilities, we know that the probability of A is hence no greater than a probability of A union B. Substituting this back into these expressions, we know that because they lie in the denominators, the first expression is indeed no smaller than the second expression. So our friend Alice was correct. So throughout this problem, we never used the fact that the probability of a particular coin toss results, let's say, in a head is a certain number. Actually, this bias for the coin is irrelevant. Whether the coin is fair or unfair, this fact is always true. So indeed, it does not depend on the probability of the coin. 
But if you're really curious what happens when the coin is fair, we can plug in the numbers. And here, we're assuming the coin is fair, which means the probability of having a head is 1/2. Then, we'll see after going through the calculations that the first probability is 1/2, whereas the second probability is 1/3, which means, in this case, the inequality actually is strict. So the first one is strictly greater than the second one, OK? So this completes the first part of the problem. How do we generalize this into more general settings? There are multiple ways, but we'll go over one particular form. And to do so, we'll be defining three events somewhat more abstractly. Let's say we have three events-- C, D, and E. They can be any events, but all three events have to satisfy the following conditions. First, event D will be a subset of E. And second, the intersection of C and D is equal to the intersection of C and E, OK? So these will be our chosen events. And let's see a particular example. Let's say you have a sample space here and some event E. Now, by the first condition, D will have to lie somewhere in E. For the second condition, we'll pick some event C such that this is true. And one way to do so is simply picking C so that it lies within both D and E. And you can see C intersection D will be C. And C intersection E will still be C. Hence, the second equality is true. So if both conditions are true, we have the following relationship: the probability of C conditional on D will be no smaller than the probability of C conditional on event E. And this will be the more general form of the inequality that we saw before. So first of all, the way to prove this is in fact the same. We simply write out the value of each side in the fractional form. And based on these two facts, we can arrive at this equation, which I shall not go over here. But just to see why this form is more general, say we let C be the event A intersection B, D be the event A, and E be the event A union B, where A and B are the events that we defined earlier. We can verify that, indeed, these conditions are true, namely D is a subset of E, because A is a subset of A union B, and C is a subset of both D and E. And hence, condition two is also true. And if that's the case, we will actually recover the result we got earlier for events A and B. And hence, this equation here is a more general form. So that's the end of the problem. See you next time.
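(A hedged check, added editorially: the two conditional probabilities can be computed by brute force for any bias p, confirming both the fair-coin numbers and the fact that the first quantity is never smaller than the second. The function and variable names are illustrative.)

from fractions import Fraction as F
from itertools import product

def conditional_probs(p):
    # enumerate the four outcomes of two independent tosses with P(head) = p
    outcomes = {tosses: (p if tosses[0] else 1 - p) * (p if tosses[1] else 1 - p)
                for tosses in product([True, False], repeat=2)}
    both = outcomes[(True, True)]
    first_head = sum(pr for (a, b), pr in outcomes.items() if a)
    at_least_one = sum(pr for (a, b), pr in outcomes.items() if a or b)
    # returns P(A and B | A) and P(A and B | A union B)
    return both / first_head, both / at_least_one

print(conditional_probs(F(1, 2)))   # 1/2 and 1/3 for a fair coin
for p in (F(1, 10), F(3, 10), F(9, 10)):
    a, b = conditional_probs(p)
    print(p, a >= b)                # the first is never smaller, whatever the bias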
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Rooks_on_a_Chessboard.txt
Today, we're going to do a fun problem called rooks on a chessboard. And rooks on a chessboard is a problem that's going to test your ability on counting. So hopefully by now in class, you've learned a few tricks to approach counting problems. You've learned about permutations, you've learned about k-permutations, you've learned about combinations, and you've learned about partitions. And historically for students that we've taught in the past and many people, counting can be a tricky topic. So this is just one drill problem to help you get those skills under your belt. So what does the rooks on a chessboard problem ask you? Well, you're given an 8-by-8 chessboard, which I've tried to draw here. It's not very symmetrical. Sorry about that. And you're told that you have eight rooks. I'm sure most of you guys are familiar with chess. But if any of you aren't, chess is a sophisticated board game. And one of the types of pieces you have in this game is called a rook. And in this particular problem, there are eight rooks. And your job is to place all eight rooks onto this 8-by-8 chessboard. Now, you're told in the problem statement that all placements of rooks are equally likely. And you are tasked with finding the probability that you get a safe arrangement. So that is to say, you place your eight rooks on the board. What is the probability that the way you placed them is safe? So what do I mean by "safe"? Well, if you're familiar with the way chess works, so if you place a rook here, it can move vertically or it can move horizontally. Those are the only two legal positions. So if you place a rook here and you have another piece here, then this is not a safe arrangement, because the rook can move this way and kill you. Similarly, if you have a rook here and another piece here, the rook can move horizontally and kill you that way. So two rooks on this board are only safe from each other if they are neither in the same column nor in the same row. And that's going to be key for us to solve this problem. So let's see-- where did my marker go? I've been talking a lot, and I haven't really been writing anything. So our job is again, to find the probability that you get a safe arrangement. So I'm just going to do "arrange" for short. Now, I talked about this previously, and you guys have heard it in lecture. Hopefully you remember something called the discrete uniform law. So the discrete uniform law is applicable when your sample space is discrete and all outcomes are equally likely. So let's do a quick check here. What is our sample space for this problem? Well, a logical choice would be that the set of all possible outcomes is the set of all possible spatial arrangements of rooks. And hopefully it's clear to you that that is discrete. And the problem statement furthermore gives us that they're equally likely. So the discrete uniform law is in fact applicable in our setting. So I'm going to go ahead and write what this means. So when your sample space is discrete and all outcomes are equally likely, then you can compute the probability of any event, A, simply by counting the number of outcomes in A and then dividing it by the total number of outcomes in your sample space. So here we just have to find the number of total safe arrangements and then divide it by the total number of arrangements. So again, as you've seen in other problems, the discrete uniform law is really nice, because you reduce the problem of computing probabilities to the problem of counting. 
And so here's where we're going to exercise those counting skills, as I promised earlier. Now, I would like to start with computing the denominator, or the total number of arrangements, because I think it's a slightly easier computation. So we don't care about the arrangements being safe. We just care about how many possible arrangements are there. Now, again, we have eight rooks, and we need to place all of them. And we have this 8-by-8 board. So pretty quickly, you guys could probably tell me that the total number of square is 64, because this is just 8 times 8. Now, I like to approach problems sequentially. That sort of really helps me think clearly about them. So I want you to imagine a sequential process during which we place each rook one at a time. So pick a rook. The chessboard is currently empty. So how many squares can you place that rook in? Well, nobody's on the board. You can place it in 64 spots. So for the first rook that you pick, there are 64 spots. Now, once you place this rook, you need to place the second rook, because again, we're not done until all eight are placed. So how many possible spots are left. Well, I claim that there are 63, because one rule of chess is that if you put a piece in a particular square, you can no longer put anything else on that square. You can't put two or more things. So the first rook is occupying one spot, so there's only 63 spots left. So the second rook has 63 spots that it could go in. Similarly, the third rook has 62 spots. Hopefully you see the pattern. You can continue this down. And remember, we have to place all eight rooks. So you could do it out yourself or just do the simple math. You'll figure out that the eighth rook only has 57 spots that it could be in. So this is a good start. We've sort of figured out if we sequentially place each rook, how many options do we have. But we haven't combined these numbers in any useful way yet. We haven't counted the number of total arrangements. And this may already be obvious to some, but it wasn't obvious to me when I was first learning this material, so I want to go through this slowly. You have probably heard in lecture by now about the counting principle. And what the counting principle tells you is that whenever you have a process that is done in stages and in each stage, you have a particular number of choices, to get the total number of choices available at the end of the process, you simply multiply the number of choices at each stage. This might be clear to you, again, simply from the statement, for some of you. But for others, it might still not be clear. So let's just take a simple example. Forget about the rook problem for a second. Let's say you're at a deli, and you want to make a sandwich. And to make a sandwich, you need a choice of bread and you need a choice of meat. So we have a sandwich-building process, and there's two stages. First, you have to pick the bread, and then you have to pick the meat. So let's say for the choice of bread, you can choose wheat or rye. So again, you can always use a little decision tree-- wheat or rye. And then let's say that for the meats, you have three options. You have ham, turkey, and salami. So you can have ham, turkey, or salami-- ham, turkey, or salami. How many total possible sandwiches can you make? Well, six. And I got to that by 2 times 3. And hopefully this makes sense for you, because there's two options in the first stage. Freeze an option. Given this choice, there's three options at the second stage. 
But you have to also realize that for every other option you have at the first stage, you have to add an additional three options for the second stage. And this is the definition of multiplication. If you add three two times, you know that's 3 times 2. So if you extrapolate this example to a larger, more general picture, you will have derived for yourself the counting principle. And we're going to use the counting principle here to determine what the total number of arrangements are. So we have a sequential process, because we're placing the first rook and then the second rook, et cetera. So at the first stage, we have 64 choices. At the second stage, we have 63 choices. At the third stage, we have 62 choices, et cetera. And so I'm just multiplying these numbers together, because the counting principle says I can do this. So my claim is that this product is equal to the total number of arrangements. And we could stop here, but I'm going to actually write this in a more useful way. You guys should have been introduced to the factorial function. So you can express this equivalently as 64 factorial divided by 56 factorial. And this is not necessary for your problem solution, but sometimes it's helpful to express these types of products in factorials, because you can see cancellations more easily. So if it's OK with everybody, I'm going to erase this work to give myself more room. So we'll just put our answer for the denominator up here, and then we're going to get started on the numerator. So for the numerator, thanks to the discrete uniform law, we only need to count the number of safe arrangements. But this is a little bit more tricky, because now, we have to apply our definition of what "safe" means. But we're going to use the same higher-level strategy, which is realizing that we can place rooks sequentially. So we can think of it as a sequential process. And then if we figure out how many choices you have in each stage that sort of maintain the "safeness" of the setup, then you can use the counting principle to multiply all those numbers together and get your answer. So we have to place eight rooks. Starting the same way we did last time, how many spots are there for the first rook that are safe? Nobody is on the board yet, so nobody can harm the first rook we put down. So I claim that it's just our total of 64. Now, let's see what happens. Let's pick a random square in here. Let's say we put our first rook here. Now, I claim a bunch of spots get invalidated because of the rules of chess. So before, I told you a rook can kill anything in the same column or in the same row. So you can't put a rook here, because they'll kill each other, and you can't put a rook here. So by extension, you can see that everything in the column and the row that I'm highlighting in blue, it's no longer an option. You can't place a rook in there. Otherwise, we will have violated our "safety" principle. So where can our second rook go? Well, our second rook can go in any of the blank spots, any of the spots that are not highlighted by blue. And let's stare at this a little bit. Imagine that you were to take scissors to your chessboard and cut along this line and this line and this line and this line. So you essentially sawed off this cross that we created. Then you would have four free-floating chessboard pieces-- this one, this one, this one, and this one. So this is a 3-by-4 piece, this is 3-by-3, this is 4-by-3, and this is 4-by-4. 
Well, because you cut this part out, you can now slide those pieces back together. And hopefully you can convince yourself that that would leave you with a 7-by-7 chessboard. And you can see that the dimensions match up here. So essentially, the second rook can be placed anywhere in the remaining 7-by-7 chessboard. And of course, there are 49 spots in a 7-by-7 chessboard. So you get 49. So let's do this experiment again. Let me rewrite the reduced 7-by-7 chessboard. You're going to have to forgive me if the lines are not perfect-- one, two, three, four, five, six, seven; one, two, three, four, five, six, seven. Yep, I did that right. And then we have one, two, three, four, five, six, seven. That's not too bad for my first attempt. So again, how did I get this chessboard from this one? Well, I took scissors and I cut off of the blue strips, and then I just merged the remaining four pieces. So now, I'm placing my second rook. So I know that I can place my second rook in any of these squares, and it'll be safe from this rook. Of course, in reality, you wouldn't really cut up your chessboard. I'm just using this as a visual aid to help you guys see why there are 49 spots. Another way you could see 49 spots is literally just by counting all the white squares, but I think it takes time to count 49 squares. And this is a faster way of seeing it. So you can put your second rook anywhere here. Let's actually put in the corner, because the corner is a nice case. If you put your rook in the corner, immediately, all the spots in here and all the spots in here become invalid for the third rook, because otherwise, the rooks can hurt each other. So again, you'll see that if you take scissors and cut off the blue part, you will have reduced the dimension of the chessboard again. And you can see pretty quickly that what you're left with is a 6-by-6 chessboard. So for the third rook, you get a 6-by-6 chessboard, which has 36 free spots. And I'm not going to insult your intelligence. You guys can see the pattern-- 64, 49, 36. These are just perfect squares decreasing. So you know that the fourth rook will have 25 spots. I'm going to come over here because I'm out of room. The fifth rook will have 16 spots. The sixth rook will have nine spots. The seventh rook will have four spots. And the eighth rook will just have one spot. And now, here we're going to invoke the counting principle again. Remember the thing that I just defined to you by talking about sandwiches. And we'll see that to get the total number of safe arrangements, we can just multiply these numbers together. So I'm going to go ahead and put that up here. You get 64 times 49 times 36 times 25 times 16 times 9 times 4. And in fact, this is our answer. So we're all done. So I really like this problem, because we don't normally ask you to think about different spatial arrangements. So it's a nice exercise, because it lets you practice your counting skills in a new and creative way. And in particular, the thing that we've been using for a while now is the discrete uniform law. But now, I also introduced the counting principle. And we used the counting principle twice-- once to compute the numerator and once to compute the denominator. Counting can take a long time for you to absorb it. So if you still don't totally buy the counting principle, that's OK. I just recommend you do some more examples and try to convince yourself that it's really counting the right number of things. So counting principle is the second takeaway. 
And then the other thing that is just worth mentioning is, you guys should get really comfortable with these factorials, because they will just show up again and again. So that's the end of the problem, and I'll see you next time.
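(For completeness, here is a short hedged sketch, not part of the recitation, that reproduces both counts and the resulting probability of a safe arrangement.)

from math import factorial

# denominator: total number of ways to place 8 rooks on 64 squares when we
# place them sequentially, i.e. 64 * 63 * ... * 57 = 64!/56!
total = factorial(64) // factorial(56)

# numerator: safe placements -- each successive rook sees a shrinking board,
# giving 64 * 49 * 36 * 25 * 16 * 9 * 4 * 1 choices
safe = 1
for side in range(8, 0, -1):   # 8, 7, ..., 1
    safe *= side * side

print(safe / total)            # roughly 9.1e-06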
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Chess_Tournament_Problem.txt
Hi. Welcome back. Today, we're going to do a fun problem called the chess tournament problem. Now, it's a very long problem, so I just want to jump straight in. Essentially, the problem statement describes a very special chess tournament, which involves players named Al, Bo, and Chi. Now Al is the current reigning championship, and Bo and Chi are this year's contenders, and, of course, they're vying with each other to beat out Al and become the new champion. And so essentially, the tournament is divided into two rounds-- a first round, during which Bo and Chi play against each other, and then a second round, during which the surviving contender from the first round plays against Al. And the problem statement also gives you a bunch of information like what's the probability that Bo beats Chi in a particular game, et cetera. So without further ado, let's get started on part a. In part a, the first thing we're asked to compute is the probability that a second round is required. Now, to save myself some writing, I used the notation R2 to denote that event. So we are interested in probability of R2. Now, I claim that this problem is very sequential in nature so I would like to draw a tree to describe what's happening. So in the first part of the tournament, when Bo and Chi play their first game, exactly one of two things can happen-- either Bo can win or Chi can win. And we're told by the problem statement that Bo wins with the probability of 0.6 and, therefore, Chi must win with the probability of 0.4, right? Because these two possibilities must sum to 1, because either this must happen or this happen. Now, let's imagine that the first game has been played and that Bo won. Well, during the second game, there's still two options for the outcome-- Bo could win the second game or Chi could win the second game. And because the problem statement says that in every scenario Bo always wins against Chi with the probability of 0.6, we can go ahead and put a 0.6 along this branch as well. Similarly, 0.4 here. And similar logic, you've got a tree that looks like this. And for those of you who haven't seen trees before, it's just a structure that looks something like this. And it helps us do better accounting. It helps us keep straight in our head what are the various outcomes, so that we don't get confused. And so very quickly here, you can see that there's four possible outcomes. So each node in this tree corresponds to an outcome. And the leaves are those nodes at the furthest stage. And it's convention to draw the probability of a particular-- so for instance, the probability that Bo wins the first game-- it's just convention to draw that probability over the corresponding branch. And the reason why such diagrams are so useful is because to compute the probability of a particular outcome, if you've designed your tree correctly, all you have to do is multiply the probabilities along the branches that get into that outcome. So let's see that in action. When is a second round required? Well, a second round is required here, right? Because in this case, Bo would be the surviving challenger and he'd play the next round against Al. It's also required here. But of course, it's not required here or here, because no second round is played. And so these two outcomes comprise the event R2. And now, to get the probability of this outcome, you multiply along the branches. So 0.6 times 0.6 give you 0.36. And 0.4 times 0.4 gives you 0.16. And we're almost done. 
We know that these two events are disjoint, because if Bo won the first two games, then, certainly, Chi couldn't have won the first two games. And so you can just sum the probabilities to get the probability of the union. So the probability of R2 is equal to the probability that Bo won the first two games or Chi won the first two games. And that's equal to 0.36 plus 0.16, which is equal to 0.52. OK, now the second part of part a asks for the probability that Bo wins the first round. This is a very straightforward one. Bo winning the first round corresponds only to this particular outcome. And we already know the probability associated with that outcome is equal to 0.36. So we're done with that one. And now the last part is sort of an interesting one. It asks for the probability that Al retains his championship this year. So I'm going to just call that A for short. A is the event that Al retains his championship this year. And for that we're going to need a larger tree, because Al has a lot of activity in the second round, and so far our tree only describes what happens in the first round. Now, to save time, I've actually drawn the rest of the tree over there up in the corner. So let's get rid of this one and let's look at the full tree. So let's see, when does Al retain his championship? Well, Al certainly retains his championship here, right? Because no second round is required. Similarly, here. Al retains his championship here, because the second round was required, but Al beat Bo. And similarly, here Bo didn't win both games in the second round against Al, so Al wins. Here, Bo is the new champion. So we don't want to include that one. And sort of by symmetry, we also get this one and this one. So by my argument before, we know that the outcomes that comprise our event of interest are this one, this one, this one, this one, this one, and this one. So we could multiply the probabilities along each branch and sum them, because they're disjoint, to get the total probability. But we're not going to do that because that's a lot of algebra. Instead, we're going to look at the complement of the event. So we're going to notice, there's only two branches on which Al does not retain his current championship. So P of A is, of course, equal to 1 minus P of A. And we're going to get P of A by inspection. I'm sorry, P of A complement. I'm just testing you, guys. So P of A complement corresponds to here and to here, because those are the outcomes where Al didn't win. And so again, you multiply along the branches to get the probabilities. So you get 0.6 squared times 0.5 squared plus 0.4 squared times 0.3 squared. And if you do all the algebra and subtract the result from 1, you should get around 0.8956. So we're cruising through this problem. Let's go to part b. Part b is a little bit less straightforward than part a, because it starts asking you for conditional probabilities, as opposed to a priori probabilities. So in the first part-- and again, I'm going to continue my notation with R2-- we want the probability that Bo is the surviving challenger-- so I'm just going to use B to denote that-- given R2. Now, by definition, you should remember from lecture that this is equal to probability of B and R2 divided by the probability of R2. And of course, we've already computed this value right up here in part a. We know it's 0.52. So we don't have to do any more work there. We only have to look at the numerator.
So we need to go and figure out what nodes in that tree correspond to the event B intersect R2. So let's use a new color. Let's see, Bo is the surviving challenger here only, right? And R2 is automatically satisfied, right? Because a second round is required there and there, not on those two. But here Chi is the surviving challenger, not Bo, so we're really only interested in that node. And you multiply along the branches to get the probabilities. So we have 0.36 over 0.52, which is approximately equal to 0.6923. OK, now, the next part wants the conditional probability that Al retains his championship, conditioned, again, on R2. So we already have A being the event that Al retains his championship. So we want the probability of A, given R2. And let's just apply the direct definition of conditional probability again. You get P of A and R2 divided by the probability of R2. Of course, we have the probability of R2 already, so we just need to find the nodes in the tree that correspond to A and R2. So where is R2? R2 is going to correspond to every node to the right that is not one of these two. So a second round is required here, here, here, here, here, and here. Now, where does Al retain his championship? So Al retains his championship here. He retains his championship here. He retains his championship here and here, but no second round is required, so these guys don't belong in the intersection. But this does, and this does. So we can again multiply the probabilities along the branches and then sum them. So let's see, we get-- this marker's not working very well, so I'm going to switch back to the pink-- so you get 0.6 squared times 0.5. That gets rid of this one. And then we want 0.6 squared times 0.5 squared. That gets rid of that one. And then plus-- let's see-- 0.4 squared times 0.7, which takes care of this one. And then lastly, 0.4 squared times 0.3 times 0.7. And that is a long expression. But it happens to be about 0.7992. OK, so we are done with part b and we can move along to part c. And since we're running out of room, I'm actually just going to erase this. And hopefully you guys have had a chance to copy it down by now. If not, you can always pause the video and go back. So let's see, part c asks us: given that the second round is required and that it comprised one game only, what happened in the first round? So let's let I be the event that the second round was one game only. So essentially, in math: conditioned on R2 and I, what is the probability that it was Bo who won the first round? So let's let B be the event that Bo won the first round. OK, so again translating the English to math, we just want the probability of B given R2 and I. Now, I am once again going to use the definition of conditional probability. You might be concerned that we haven't yet explicitly defined conditional probability when what lies behind the conditioning bar is not a single event, but rather an intersection of events. And so my claim to you is that it doesn't matter and that the same exact definition applies. But we'll go through it slowly. So R2 is an event, I is an event, and we know that the intersection of two events is itself an event. So I'm going to make up a new letter, and I'm going to call this event W. So just using the new notation, this is equal to probability of B, given W. Now, this is the normal definition that we know. We know that this is probability of B intersect W over probability of W. And then we just resubstitute what the definition of W was.
And so if you do that over here, you get probability of B and R2 and I divided by probability of R2 and I. So hopefully, jumping from here ahead to here, you see that the definitions act exactly the same way. But these are two very short intermediate steps that should help you convince yourself that the same definition still works. So let's start with the denominator, because the denominator looks a little bit easier. Where is R2 and I in our tree? Well, let's see. Here, a second round was required, but it comprised two games. Same with this one. Here, a second round was required and it was comprised only of one game. So this is good. This is one of the outcomes that we're looking for. Here, no second round was required. So this doesn't qualify. Same with this one. Here, a second round was required, and there was only one game, so that's good. And then these don't qualify for the same reasons as we set up there. So we just have to multiply the probabilities along those branches. And we see that it's 0.4 squared times 0.7 plus 0.6 squared times 0.5. OK, we're almost done. We just need to look at the intersection of R2 and I. So R2 and I are the ones we've already circled. But now, we want to add one more constraint, which is that Bo had to have won the first round. And so we see here that Chi won the first round, if we're looking at this outcome. And so he's no good. Let's use a different color. Let's see, maybe this one. But here Bo did win the first round. So we're going to get 0.6 squared times 0.5. And I got that, of course, just by multiplying the probabilities along the right branches. And this, if you're curious, comes out to be about 0.6164. OK, so I know that was a lengthy problem, but you should feel really comfortable now doing sort of basic probability manipulations. One thing that this problem emphasized a lot was your ability to compute conditional probabilities. So you saw me apply the definition of conditional probability twice in part b. And then you saw me apply the definition again in part c in a sort of slightly modified way. So that's one thing that you should have gotten out of this problem. And then another thing is that hopefully, you noticed that by using a tree diagram, we made the problem much easier. We almost didn't even have to think about computing probabilities anymore. We reduced the problem to just saying, OK, what are the outcomes that comprise our event of interest? And then once you select those, to compute their probability you multiply the probabilities along the branches. You have the right to just add those together, because if you draw your tree correctly, all of these guys should be disjoint from one another. So you have to be careful, of course, to set up your tree appropriately. But once you do set up your tree appropriately, your life is much simpler. So that's it for today. And we'll see you next time.
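(To tie the whole tree together, here is a hedged Python sketch, an editorial addition rather than the recitation's own code, that enumerates the leaves of the tree and recomputes the answers. The per-game probabilities 0.6, 0.5, and 0.3 are the ones used on the tree in the video; the event encoding is mine.)

import itertools

P_BO_BEATS_CHI = 0.6
P_BO_BEATS_AL = 0.5    # per game in the second round
P_CHI_BEATS_AL = 0.3

leaves = []  # list of (probability, description) pairs, one per leaf of the tree
for g1, g2 in itertools.product(["Bo", "Chi"], repeat=2):
    p1 = ((P_BO_BEATS_CHI if g1 == "Bo" else 1 - P_BO_BEATS_CHI)
          * (P_BO_BEATS_CHI if g2 == "Bo" else 1 - P_BO_BEATS_CHI))
    if g1 != g2:
        # split first round: no second round, Al keeps the title
        leaves.append((p1, {"round2": False, "games2": 0, "champ": "Al", "challenger": None}))
        continue
    challenger = g1
    w = P_BO_BEATS_AL if challenger == "Bo" else P_CHI_BEATS_AL
    # second round: Al wins game 1 (one game), Al wins game 2 (two games), or the challenger wins both
    leaves.append((p1 * (1 - w), {"round2": True, "games2": 1, "champ": "Al", "challenger": challenger}))
    leaves.append((p1 * w * (1 - w), {"round2": True, "games2": 2, "champ": "Al", "challenger": challenger}))
    leaves.append((p1 * w * w, {"round2": True, "games2": 2, "champ": challenger, "challenger": challenger}))

def prob(pred):
    return sum(p for p, leaf in leaves if pred(leaf))

p_r2 = prob(lambda L: L["round2"])
print(p_r2)                                                              # 0.52
print(prob(lambda L: L["champ"] == "Al"))                                # about 0.8956
print(prob(lambda L: L["round2"] and L["challenger"] == "Bo") / p_r2)    # about 0.6923
p_one = prob(lambda L: L["games2"] == 1)
print(prob(lambda L: L["games2"] == 1 and L["challenger"] == "Bo") / p_one)  # about 0.6164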
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Competing_Exponentials.txt
Hi, in this problem, we're going to look at competing exponentials. So we have three exponential random variables: X with parameter lambda, Y with parameter mu, and Z with parameter nu. And we want to calculate some probability. And the probability that we want to calculate is the probability that X is less than Y, which is less than Z. Now we can reinterpret this in terms of three Poisson processes. Because the link between exponentials and Poisson processes is that the inter-arrival times of a Poisson process are exponentially distributed. So you can think of X as being the time until the first arrival in a Poisson process with parameter lambda. And same thing for Y: it is the first arrival time of a Poisson process with parameter mu. The same thing for Z and nu. And so in that interpretation, X less than Y less than Z, you could interpret as a race, meaning that X finishes first, followed by Y, and then Z comes in last. So with that interpretation, let's see if we can calculate what this probability is. We can rewrite this probability as a combination of two things occurring. One is that X is less than the minimum of Y and Z. And then the other is that Y is less than Z. So what does this first event mean? This first event means that X comes in first. And it doesn't matter whether Y comes in second or Z comes in second. So we first say that X has to come in first, which means it has to beat the better of Y and Z. And then that, combined with the fact that Y does better than Z, is the same thing as saying that X is first, Y is second, and Z is third. And now, let's try to argue, using Poisson processes, that these two events are actually independent. So this event occurring means that X is smaller than Y and Z. So let's take these Poisson processes, and because these random variables are assumed to be independent, these are independent Poisson processes. So we can merge them. So let's merge these two. And we'll get a Poisson process that has rate mu plus nu. And we can also merge this first one and that one. And we'll get another Poisson process with rate lambda plus mu plus nu. So in that context, what does it mean that X is less than the minimum of Y and Z? It just means that in this merged process, the first arrival came from the X process. In that case, if that's true, then X is less than the minimum of Y and Z. Well, let's say that event does occur, that the first arrival is from the X process. Now we're interested in what the order of the second two arrivals is. Is it Y first and then Z? Or Z first and then Y? Well, it doesn't matter what happened before, because of the fresh start property. Because after this arrival comes, and say it is from the X process, the Poisson processes start anew, and they're still independent. And so what happens after that is independent of what happened here, when X arrived. And so whether Y came first followed by Z, or Z came first followed by Y, is independent of what happened here. And so because of that, these two events are independent, and so when we have the probability of the intersection of two independent events, we can write that as the product of those two probabilities. Now, what is the probability of this first event, the probability that X is less than the minimum of Y and Z? Well, we just said that that corresponds to the first arrival of this merged process being from the X process. Well, that probability is lambda over lambda plus mu plus nu. So it's equal to this ratio, where the rate of the process that you're interested in comes in the numerator.
And then the merged rate is in the denominator. And what about the second one? What's the probability that Y is less than Z? Well, let's go now to this merged process where we merged just the Y and Z processes, and see which one comes first. Well, in that case what we want to know is, in this merged process, what is the probability that the first arrival came from the Y process? Well, analogously, that probability is going to be mu over mu plus nu. And that gives us our answer. And so we see that what looked like a pretty complex calculation becomes relatively easy to solve when we reinterpret it in terms of Poisson processes. But this still seems like a complicated expression. So let's try to check to see whether it actually makes sense. So one way to do that is to look at a specific example of the choice of lambda, mu, and nu, and see if it actually makes sense. So one example is to suppose that all three of these parameters are the same. Well, if they're all the same, then the first part of this probability becomes 1 over 3, 1/3. And the second one is 1 over 2. And so if all three parameters are the same, the probability becomes 1/6. And let's see if that makes sense. If all three parameters are the same, that means that these arrival rates are all the same. And what that means is that any ordering of these three arrivals is as likely as any other ordering. And what we're interested in is the probability of one particular ordering happening, which is X first, then Y, then Z. But if everything is symmetric, then any of the orderings is as likely as any other one. And how many orderings are there? Well, there's three choices for who comes in first, two for who comes in second, and one for who comes in last. So there's a total of six possible orders in which these three contestants, if you think of it that way, could finish this race. And out of all of those, we want the probability that one particular outcome happens. And so the probability should be 1/6. And that's what our formula tells us. So as I said, in this problem, we saw how to reinterpret exponentials in the context of Poisson processes, which helped us solve a--
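(A quick Monte Carlo check, added as a hedged sketch: for one arbitrary choice of rates, chosen here just for illustration, the simulated frequency of X < Y < Z should sit close to lambda/(lambda + mu + nu) times mu/(mu + nu).)

import random

random.seed(0)
lam, mu, nu = 2.0, 1.0, 3.0      # arbitrary illustrative rates
n = 500000

count = 0
for _ in range(n):
    x = random.expovariate(lam)  # exponential with rate lam
    y = random.expovariate(mu)
    z = random.expovariate(nu)
    if x < y < z:
        count += 1

formula = lam / (lam + mu + nu) * mu / (mu + nu)
print(count / n, formula)        # the two numbers should be close (about 0.083)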
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Ambulance_Travel_Time.txt
In this problem, we'll be looking at an ambulance that is traveling back and forth in interval of size l. Say from 0 to l. At some point in time, there's an accident occurring, let's say at location x. And we'll assume the accident occurs in a random location so that x is uniformly distributed between 0 and l. Now, at this point in time, let's say the ambulance turns out to be at location y. Again, we'll assume that y is a uniform random variable between 0 and l, and also that x and y are independently distributed. The question we're interested in answering is how long it would take an ambulance to respond to travel from point y to point x. Let's call this time T. And in particular, we want to know everything about distribution of T. For example, what is the CDF of T given by the probability of big T, less than or equal to little t, or the PDF, which is done by differentiating the CDF once we have it. Now, to start, we'll express T you as a function of X and Y. Since we know that the ambulance travels at a speed V-- V meters or V units of distance per second-- then we can write that big T is simply equal to Y minus X, absolute value the distance between X and Y, divided by the speed at which the ambulance is traveling at, V. So now if we look at the probability of T less than or equal to little t, this is then equal to the probability that Y minus X divided by V less than or equal to little t. We now take off the absolute value by writing the expression as negative vt less equal to Y minus X less equal to positive vt. Here we multiply v on the other side of t, and then took out the absolute value sign. As a final step, we'll also move X to the other side of inequalities by writing this as X minus vt less equal to y less equal to x plus vt. To compute this quantity, we'll define a set A as a set of all points that satisfies this condition right here. In particular, it's a pair of all X and Y such that X minus vt less equal to little y less equal to X plus vt, and also that X is within 0 and l, and so is Y. So the set A will be the set of values we'll be integrating over. Now that we have A, we can express the above probability as the integral of all X and Y, this pair within the set A, integrating the PDF of f of X, Y, little x, little y. Let's now evaluate this expression right here in a graphical way. On the right, we're plotting out what we just illustrated here, where the shaded region is precisely the set A. As we can see, this is a set of values of X and Y where Y is sandwiched between two lines, the upper one being X plus vt right here, and the lower line being X minus vt, right here. So these are the values that correspond to the set A. Now that we have A, let's look f of x, y. We know that both x and y are uniform random variables between 0 and l, and therefore, since they're independent, the probability density of x and y being at any point between 0 and l is precisely 1 over l squared, where l squared is the size of this square box right here. So given this picture, all we need to do is to multiply by 1 over l squared the area of the region A. And depending on the value of T, we'll get different answers as right here. If T is less than 0, obviously, the area of A diminishes to nothing, so we get 0. If T is greater than l over V, the area of A fills up the entire square, and we get 1. Now, if T is somewhere in between 0 and l over v, we will have 1 over l squared, multiply by the area looking like something like that right here-- the shaded region. 
Now, if you wonder how we arrive at exactly this expression right here, here is a simple way to calculate it. What we want is 1 over l squared times the area of A. Now, the area of A can be viewed as the entire square, l squared, minus whatever's not in A, which is these two triangles right here. Now, each triangle has area 1/2 times the quantity l minus vt, squared. Multiplying this by 2 and subtracting from l squared, after some algebra, will give the answer right here. At this point, we have obtained the probability of big T less equal to little t. Namely, we have gotten the CDF for T. And as a final step, we can also compute the probability density function for T. We'll call it little f of t. And we do so by simply differentiating the CDF in the different regions of t. To begin, we'll look at t between 0 and l over v right here, and differentiate the expression right here with respect to t. And doing so will give us 2v over l minus 2 v squared t over l squared. And this applies to t greater or equal to 0, less than l/v. Now, in any other region, either t less than 0 or t greater than l/v, we have a constant for the CDF, and hence its derivative will be 0. So this is for any other t. We call it otherwise. Now, this completely characterizes the PDF of big T, and hence, we've also finished the problem.
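As a quick sanity check on this result, here is a small simulation sketch in Python. It is not part of the original solution, and the parameter choices l = 1 and v = 1 are arbitrary assumptions; it simply compares an empirical estimate of the CDF against the closed-form answer 1 - (l - vt)^2 / l^2 derived above.

import random

def empirical_cdf(t, l=1.0, v=1.0, n=100_000):
    # Estimate P(T <= t) by sampling X and Y uniformly and independently on [0, l].
    count = 0
    for _ in range(n):
        x = random.uniform(0, l)
        y = random.uniform(0, l)
        if abs(y - x) / v <= t:
            count += 1
    return count / n

def exact_cdf(t, l=1.0, v=1.0):
    # Closed form derived in the lecture: 0 for t < 0, 1 for t > l/v,
    # and 1 - (l - v*t)**2 / l**2 in between.
    if t < 0:
        return 0.0
    if t > l / v:
        return 1.0
    return 1 - (l - v * t) ** 2 / l ** 2

for t in [0.1, 0.25, 0.5, 0.9]:
    print(t, empirical_cdf(t), exact_cdf(t))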
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Conditional_Probability_Example.txt
Hi. Today we're going to do another fun problem that involves rolling two dice. So if you guys happen to frequent casinos, this problem might be really useful for you. I'm just kidding. But in all seriousness, this problem is a good problem, because it's going to remind us how and when to use the discrete uniform law. Don't worry, I'll review what that says. And it's also going to exercise your understanding of conditional probability. So quick recap. The discrete uniform law says that when your sample space is discrete, and when the outcomes in your sample space are equally likely, then to compute the probability of any event A, you can simply count the number of outcomes in A and divide it by the total number of possible outcomes. OK, so coming back to our problem. The problem statement tells us that we roll two fair six-sided dice. And it also tells us that each one of the 36 possible outcomes is assumed to be equally likely. So, you know, alarm bells should be going off in your head. Our sample space is clearly discrete. And it says explicitly that all outcomes are equally likely. So clearly, we can use the discrete uniform law. And again, this is helpful because it reduces a problem of computing probabilities to a problem of counting. OK, and before we go any further, I just want to review what this graph is plotting. You've seen it a few times, but just to clarify, on one axis, we're plotting the outcome of the first die roll, and on the second axis, we're plotting the outcome of the second die roll. So if you got a 4 on your first die, and you get a 1 on your second die, that corresponds to this point over 4 and up 1. OK, so part a asks us to find the probability that doubles are rolled. So let's use some shorthand. We're going to let D be the event that doubles are rolled. And we want to compute the probability of D. I argued before that we can use the discrete uniform law. So if we apply that, we just get the number of outcomes that comprise the event "doubles rolled" divided by 36, because there are 36 possible outcomes, which you can see just by counting the dots in this graph. Six possible outcomes for the first die, six possible outcomes for the second die. That's how you-- 6 times 6 is 36. So I've been assuming this entire time that you know what doubles are. For those of you who don't know, doubles is essentially when the number on the first die matches the number on the second die. So this outcome here 1-1 is part of the event "doubles rolled." Similarly, 2-2, 3-3, 4-4, 5-5, and 6-6-- these six points comprise the event "doubles rolled." So we can go ahead and put 6 over 36, which is equal to 1/6. So we're done with part a. We haven't seen any conditioning yet. The conditioning comes in part b. So in part b we're still interested in the event D, the event that doubles are rolled. But now we want to compute this probability conditioned on the event that the sum of the results is less than or equal to 4. So I'm going to use this shorthand, sum less than or equal to 4, to denote the event that the roll results in a sum of 4 or smaller. So there's two ways we're going to go about solving part b. Let's just jump right into the first way. The first way is applying the definition of conditional probability. So hopefully you remember that this is just probability of D intersect sum less than or equal to 4, divided by probability of sum less than or equal to 4. Now, sum less than or equal to 4 and D intersect sum less than or equal to 4 are just two events.
And so we can apply the discrete uniform law to calculate both the numerator and the denominator. So let's start with the denominator first because it seems a little bit easier. So sum less than or equal to 4, let's figure this out. Well, 1-1 gives us a sum of 2, that's less than or equal to 4. 2-1 gives us 3. 3-1 gives us 4. 4-1 gives us 5, so we don't want to include this or this, or this point. And you can sort of convince yourself that the next point we want to include is this one. That corresponds to 2-2, which is 4, so it makes sense that these guys should form the boundary, because all dots sort of up and to the right will have a bigger sum. 1-3 gives us 4. And 1-2 gives us 3. So these six points-- 1, 2, 3, 4, 5, 6-- are the outcomes that comprise the event sum less than or equal to 4. So we can go ahead and write in the denominator, 6 over 36, because we just counted the outcomes in sum less than or equal to 4, and divided it by the number of outcomes in omega. Now, let's compute the numerator. D intersect sum less than or equal to 4. So we already found the blue check marks. Those correspond to sum less than or equal to 4. Out of the points that have blue check marks, which ones correspond to doubles? Well, they're actually already circled. It's just these two points. So we don't even need to circle those, so we get 2 over 36, using the discrete uniform law. And you see that these two 36s cancel each other. So you just get 2/6 or 1/3. So that is one way of solving part b, but I want to take you guys through a different way, which I think is important, and that makes sure you really understand what conditioning means. So another way that you can solve part b is to say, OK, we are now in the universe, we are in the conditional universe, where we know the sum of our results is 4 or smaller. And so that means our new sample space is really just this set of six points. And one thing that it's worth noting is that conditioning never changes the relative frequencies or relative likelihoods of the different outcomes. So because all outcomes were equally likely in our original sample space omega, in the conditional world the outcomes are also equally likely. So using that argument, we could say that in our sort of blue conditional universe all of the outcomes are equally likely. And therefore, we can apply a conditional version of the discrete uniform law. So namely, to compute the probability of some event in that conditional world-- here, the conditional probability that doubles are rolled-- we need only count the number of outcomes in that event and divide it by the total number of outcomes. So in the conditional world, there's only two outcomes that comprise the event "doubles rolled." These are the only two circles in the blue region, right? So applying the conditional version of the discrete uniform law, we have 2 in the numerator. And then we need to divide by the size of the new omega. Our conditional universe, we've already said, has six possible dots. So we just divide by 6, and you see that we get the same answer of 1/3. And so again, we used two different strategies. I happen to prefer the second one, because it's slightly faster and it makes you think about what conditioning really means. Conditioning means you're now restricting your attention to a conditional universe. And given that you're in this conditional universe where the sum was less than or equal to 4, what is then the probability that doubles also happened? OK, hopefully you guys are following. Let's move on to part c.
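Both of these counting arguments for parts a and b are easy to verify by brute-force enumeration, since all 36 outcomes are equally likely. Here is a minimal sketch (the variable names are my own, not from the recitation):

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

doubles = [o for o in outcomes if o[0] == o[1]]
print(len(doubles) / len(outcomes))              # part a: 6/36 = 1/6

small_sum = [o for o in outcomes if o[0] + o[1] <= 4]
doubles_and_small = [o for o in small_sum if o[0] == o[1]]
# part b: P(doubles | sum <= 4) = (2/36) / (6/36) = 2/6 = 1/3
print(len(doubles_and_small) / len(small_sum))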
So part c asks for the probability that at least one die roll is a 6. So I'm going to use the letter S to denote this, the probability that at least one die roll is a 6. So let's go back to our picture and we'll use a green marker. So hopefully you agree that anything in this column corresponds to at least one 6. So this point, this point, this point, this point, this point, and this point your first die landed on a 6, so at least one 6 is satisfied. Similarly, if your second die has a 6, then we're also OK. So I claim we want to look at these 11 points. Let me just check that, yeah, 6 plus 5-- 11. So using the discrete uniform law again, we get 11 divided by 36. OK, last problem, we're almost done. So again, we're interested in the event S again, so the event that at least one die roll is a 6. But now we want to compute the probability of that event in the conditional world where the two dice land on different numbers. So I'm going to call this probability of S. Let's see, I'm running out of letters. Let's for lack of a better letter, my name is Katie, so we'll just use a K. We want to compute the probability of S given K. And instead of using the definition of conditional probability, like we did back in part b, we're going to use the faster route. So essentially, we're going to find the number of outcomes in the conditional world. And then we're also going to compute the number of outcomes that comprise S in the conditional world. So let's take a look at this. We are conditioning on the event that the two dice land on different numbers. So hopefully you agree with me that every single dot that is not on the diagonal, so every single dot that doesn't correspond to doubles, is a dot that we care about. So our conditional universe of that the two dice land on "different numbers", that corresponds to these dots. And it corresponds to these dots. I don't want to get this one. OK, that's good. So let's see, how many outcomes do we have in our conditional world? And I'm sorry I don't know why I didn't include this. This is absolutely included. I'm just testing to see if you, guys, are paying attention. So we counted before that there are six dots on the diagonal, and we know that there are 36 dots total. So the number of dots, or outcomes to use the proper word, in our conditional world is 36 minus 6, or 30. So we get a 30 on the denominator. And now we're sort of using a conditional version of our discrete uniform law, again. And the reason why we can do this is, as I argued before, that conditioning doesn't change the relative frequency of the outcomes. So in this conditional world, all of the outcomes are still equally likely, hence we can apply this law again. So now we need to count the number of outcomes that are in the orange conditional world, but that also satisfy at least one die roll is a 6. So you can see-- 1-- we just need to count the green circles that are also in the orange. So that's 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. So we get a 10, so our answer is 10 over 30, or 1/3. So now we're done with this problem. As you see, hopefully, it wasn't too painful. And what are the important takeaways here for this problem? Well, one is that whenever you have a discrete sample space, in which all of outcomes are equally likely, you should think about using the discrete uniform law, because this law lets you reduce the problem from computing probabilities to just counting outcomes within events. And the second takeaway is the way we thought about conditioning. 
So we talked about one thing, which is that in your conditional world, when you condition, the relative likelihoods of the various outcomes don't change. So in our original universe, all of the outcomes were equally likely. So in our conditional universe, all of the outcomes are equally likely. And we saw it was much faster to apply a conditional version of the discrete uniform law. So that's it for today. And we'll do more problems next time.
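The same brute-force check works for parts c and d; again this is just an illustrative sketch with made-up variable names, not part of the recitation:

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

at_least_one_six = [o for o in outcomes if 6 in o]
print(len(at_least_one_six) / len(outcomes))     # part c: 11/36

different = [o for o in outcomes if o[0] != o[1]]
six_and_different = [o for o in different if 6 in o]
print(len(six_and_different) / len(different))   # part d: 10/30 = 1/3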
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Using_the_Conditional_Expectation_and_Variance.txt
Hey guys. Welcome back. Today we're going to do a fun problem that will test your knowledge of the law of total variance. And in the process, we'll also get more practice dealing with joint PDFs and computing conditional expectations and conditional variances. So in this problem, we are given a joint PDF for x and y. So we're told that x and y can take on the following values in the shape of this parallelogram, which I've drawn. And moreover, that x and y are uniformly distributed. So the joint PDF is just flat over this parallelogram. And because the parallelogram has an area of 1, the height of the PDF must also be 1 so that the PDF integrates to 1. OK. And then we are asked to compute the variance of x plus y. So you can think of x plus y as a new random variable whose variance we want to compute. And moreover, we're told we should compute this variance by using something called the law of total variance. So from lecture, you should remember or you should recall that the law of total variance can be written in these two ways. And the reason why there's two different forms for this case is because the formula always has you conditioning on something. Here we condition on x, here we condition on y. And for this problem, the logical choice you have for what to condition on is x or y. So again, we have this option. And my claim is that we should condition on x. And the reason has to do with the geometry of this diagram. So notice that if you freeze an x and then you sort of vary x, the width of this parallelogram stays constant. However, if you condition on y and look at the width this way, you see that the width of the slices you get by conditioning vary with y. So to make our lives easier, we're going to condition on x. And I'm going to erase this bottom one, because we're not using it. So this really can seem quite intimidating, because we have nested variances and expectations going on, but we'll just take it slowly step by step. So first, I want to focus on this term-- the conditional expectation of x plus y conditioned on x. So coming back over to this picture, if you fix an arbitrary x in the interval, 0 to 1, we're restricting ourselves to this universe. So y can only vary between this point and this point. Now, I've already written down here that the formula for this line is given by y is equal to x. And the formula for this line is given by y is equal to x plus 1. So in particular, when we condition on x, we know that y varies between x and x plus 1. But we actually know more than that. We know that in the unconditional universe, x and y were uniformly distributed. So it follows that in the conditional universe, y should also be uniformly distributed, because conditioning doesn't change the relative frequency of outcomes. So that reasoning means that we can draw the conditional PDF of y conditioned on x as this. We said it varies between x and x plus 1. And we also said that it's uniform, which means that it must have a height of 1. So this is py given x, y given x. Now, you might be concerned, because, well, we're trying to compute the expectation of x plus y and this is the conditional PDF of y, not of the random variable, x plus y. But I claim that we're OK, this is still useful, because if we're conditioning on x, this x just acts as a constant. It's not really going to change anything except shift the expectation of y by an amount of x. So what I'm saying in math terms is that this is actually just x plus the expectation of y given x. 
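Written out symbolically (the board work isn't visible in the transcript, so this just restates the step in LaTeX), the computation is

\mathbf{E}[X + Y \mid X = x] = x + \mathbf{E}[Y \mid X = x] = x + \frac{x + (x + 1)}{2} = 2x + \frac{1}{2},

using the fact that, conditioned on X = x, Y is uniform on the interval [x, x + 1].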
And now our conditional PDF comes into play. Conditioned on x, this is the PDF of y. And because it's uniformly distributed and because expectation acts like center of mass, we know that the expectation should be the midpoint, right? And so to compute this point, we simply take the average of the endpoints, x plus 1 plus x over 2, which gives us 2x plus 1 over 2. So plugging this back up here, we get 2x/2 plus 2x plus 1 over 2, which is 4x plus 1 over 2, or 2x plus 1/2. OK. So now I want to look at the next term, the next inner term, which is this guy. So this computation is going to be very similar in nature, actually. So we already discussed that the joint-- sorry, not the joint, the conditional PDF of y given x is this guy. So the variance of x plus y conditioned on x, we sort of have a similar phenomenon occurring. x now in this conditional world just acts like a constant that shifts the PDF but doesn't change the width of the distribution at all. So this is actually just equal to the variance of y given x, because constants don't affect the variance. And now we can look at this conditional PDF to figure out what this is. So we're going to take a quick tangent over here, and I'm just going to remind you guys that we have a formula for computing the variance of a random variable when it's uniformly distributed between two endpoints. So say we have a random variable whose PDF looks something like this. Let's call it, let's say, w. This is pww. We have a formula that says variance of w is equal to b minus a squared over 12. So we can apply that formula over here. b is x plus 1, a is x. So b minus a squared over 12 is just 1/12. So we get 1/12. So we're making good progress, because we have this inner quantity and this inner quantity. So now all we need to do is take the outer variance and the outer expectation. So writing this all down, we get variance of x plus y is equal to variance of this guy, 2x plus 1/2 plus the expectation of 1/12. So this term is quite simple. We know that the expectation of a constant or of a scalar is simply that scalar. So this evaluates to 1/12. And this one is not bad either. So similar to our discussion up here, we know constants do not affect variance. You know they shift your distribution, they don't change the variance. So we can ignore the 1/2. This scaling factor of 2, however, will change the variance. But we know how to handle this already from previous lectures. We know that you can just take out this scalar scaling factor as long as we square it. So this becomes 2 squared, or 4 times the variance of x plus 1/12. And now to compute the variance of x, we're going to use that formula again, and we're going to use this picture. So here we have the joint PDF of x and y, but really we want now the PDF of x, so we can figure out what the variance is. So hopefully you remember a trick we taught you called marginalization. To get the PDF of x given a joint PDF, you simply marginalize over the values of y. So if you freeze x is equal to 0, you get the probability density line over x by integrating over this interval, over y. So if you integrate over this strip, you get 1. If you move x over a little bit and you integrate over this strip, you get 1. This is the argument I was making earlier that the width of this interval stays the same, and hence, the variance stays the same. So based on that argument, which was slightly hand wavy, let's come over here and draw it. We're claiming that the PDF of x, px of x, looks like this. 
It's just uniformly distributed between 0 and 1. And if you buy that, then we're done, we're home free, because we can apply this formula, b minus a squared over 12, gives us the variance. So b is 1, a is 0, which gives variance of x is equal to 1/12. So coming back over here, we get 4 times 1/12 plus 1/12, which is 5/12. And that is our answer. So this problem was straightforward in the sense that our task was very clear. We had to compute this, and we had to do so by using the law of total variance. But we sort of reviewed a lot of concepts along the way. We saw how, given a joint PDF, you marginalize to get the PDF of x. We saw how constants don't change variance. We got a lot of practice finding conditional distributions and computing conditional expectations and variances. And we also saw this trick. And it might seem like cheating to memorize formulas, but there's a few important ones you should know. And it will help you sort of become faster at doing computations. And that's important, especially if you guys take the exams. So that's it. See you next time.
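As a sanity check on the final answer, here is a small Monte Carlo sketch of my own (not part of the lecture). It samples the parallelogram by drawing X uniform on [0, 1] and then Y uniform on [x, x + 1], which matches the conditional distribution used above, and estimates the variance of X + Y.

import random

n = 200_000
samples = []
for _ in range(n):
    x = random.uniform(0, 1)
    y = random.uniform(x, x + 1)   # Y given X = x is uniform on [x, x + 1]
    samples.append(x + y)

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(var)                         # should be close to 5/12, about 0.4167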
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
18_Markov_Chains_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN TSITSIKLIS: So what we're going to do is to review what we have discussed last time. Then we're going to talk about the classic application of Markov chains to analyze how do you dimension a phone system. And finally, there will be two new things today. We will see how we can calculate certain interesting quantities that have to do with Markov chains. So let us start. We've got our Markov chain and let's make the assumption that our chain is kind of nice. And by nice we mean that we've got maybe some transient states. And then we've got a single recurrent class of recurrent states. So this is a single recurrent class in the sense that from any state in that class you can get to any other state. So once you're in here you're going to circulate and keep visiting all of those states. Those states appear transient. The trajectory may move around here, but eventually one of these transitions will happen and you're going to end up in this lump. Let's make the assumption that the single recurrent class is not periodic. These are the nicest kind of Markov chains. And they're nicest because they have the following property, the probability that you find yourself at some particular state j at the time n when that time is very large. That probability settles to a steady state value that we denote by pi sub j. And there are two parts in the statement. One part is that this limit exists. So the probability of state j settles to something, and furthermore that probability is not affected by i. It doesn't matter where you started, no matter where you started, the probability of state j is going to be the same in the long run. Maybe a clearer notation could be of this form. The probability of being at state j given the initial state being i is equal to pi(j) in the limit. Now, if I don't tell you where you started and you look at the unconditional probability of being at state i, you can average over the initial states, use the total expectation theorem and you're going to get the same answer pi(j) in the limit. So this tells you that to the conditional probability given the initial state in the limit is the same as the unconditional probability. And that's a situation that we recognize as being one where we have independence. So what this result tells us is that Xn and Xi are approximately independent. They become independent in the limit as n goes to infinity. So that's what the steady state theorem tells us. The initial conditions don't matter, so your state at some large time n has nothing to do, is not affected by what your initial state was. Knowing the initial state doesn't tell you anything about your state at time n, therefore the states at the times-- sorry that should be a 1, or it should be a 0 -- so the state is not affected by where the process started. So if the Markov chain is to operate for a long time and we're interested in the question where is the state, then your answer would be, I don't know, it's random. But it's going to be a particular j with this particular probability. So the steady state probabilities are interesting to us and that raises the question of how do we compute them. 
The way we compute them is by solving a linear system of equations, which are called the balance equations, together with an extra equation, the normalization equation, that has to be satisfied by probabilities, because probabilities must always add up to 1. We talked about the interpretation of this equation last time. It's basically a conservation of probability flow in some sense. What comes in must get out. The probability of finding yourself at state j at a particular time is the total probability of the last transition taking me into state j. The last transition takes me into state j in various ways. It could be that the previous time I was at some particular state k, and I made a transition from k into j. So this number here, we interpret as the frequency with which transitions of this particular type, k to j, occur. And then by adding over all k's we consider transitions of all types that lead us inside state j. So the probability of being at state j is the sum total of the probabilities of getting into j. What if we had multiple recurrent classes? So if we take this picture and change it to this. So here we've got a second recurrent class. If you're here, you cannot get there. If you are here, you cannot get there. What happens in the long run? Well, in the long run, if you start from here you're going to make a transition eventually, either of this type and you would end up here, or you will make a transition of that type and you will end up there. If you end up here, the long term statistics of your chain, that is, the probabilities of the different states, will be the steady state probabilities of this chain regarded in isolation. So you go ahead and you solve this system of equations just for this chain, and these will be your steady state probabilities, if you happened to get in here. If, on the other hand, it happens that you went there, given that event, then what happens in the long run has to do with just this chain running by itself. So you find the steady state probabilities inside that sub chain. So you solve the linear system, the steady state equations, for this chain separately and for that chain separately. If you happen to start inside here then the steady state probabilities for this sub chain are going to apply. Now of course this raises the question, if I start here, how do I know whether I'm going to get here or there? Well, you don't know, it's random. It may turn out that you get to here, it may turn out that you get there. So we will be interested in calculating the probability that eventually you end up here versus the probability that eventually you end up there. This is something that we're going to do towards the end of today's lecture. So, as a warm up, just to see how we interpret those steady state probabilities, let us look at our familiar example. This is a 2-state Markov chain. Last time we did write down the balance equations for this chain and we found the steady state probabilities to be 2/7 and 5/7 respectively. So let us try to calculate some quantities. Suppose that you start at state 1, and you want to calculate this particular probability. So since we're assuming that we're starting at state 1, essentially here we are conditioning on the initial state being equal to 1. Now the conditional probability of two things happening is the probability that the first thing happens. But we're living in the world where we said that the initial state was 1. And then given that this thing happened, the probability that the second thing happens.
But again, we're talking about conditional probabilities given that the initial state was 1. So what is this quantity? This one is the transition probability from state 1 to state 1, so it's P11. How about the second probability? So given that you started at 1 and the next time you were at 1, what's the probability that at the time 100 you are at 1? Now because of the Markov property, if I tell you that at this time you are at 1, it doesn't matter how you get there. So this part of the conditioning doesn't matter. And what we have is the 99 step transition probability from state 1 to state 1. So the probability that you get to 1 and then 99 steps later you find yourself again at one is the probability that the first transition takes you to 1 times the probability that over the next 99 transitions starting from 1, after 99 steps you end up again at state 1. Now, 99 is possibly a big number, and so we approximate this quantity. We're using the steady state probability of state 1. And that gives us an approximation for this particular expression. We can do the same thing to calculate something of the same kind. So you start at state 1. What's the probability that 100 steps later you are again at state 1? So that's going to be P11-- not P -- R11. The 100 step transition probability that starting from 1 you get to 1, and then after you get to 1 at time 100 what's the probability that the next time you find yourself at state 2? This is going to be the probability P12. And approximately, since 100 is a large number, this is approximately pi(1) times P12. OK. So that's how we can use steady state probabilities to make approximations. Or you could, for example, if you continue doing examples of this kind, you could ask for what's the probability that X at time 100 is 1, and also X at time 200 is equal to 1. Then this is going to be the transition probability from 1 to 1 in 100 steps, and then over the next 100 steps from 1 you get again to 1. And this is going to be approximately pi(1) times pi(1). So we approximate multi-step transition probabilities by the steady state probabilities when the number n that's involved in here is big. Now I said that's 99 or 100 is big. How do we know that it's big enough so that the limit has taken effect, and that our approximation is good? This has something to do with the time scale of our Markov chain, and by time scale, I mean how long does it take for the initial states to be forgotten. How long does it take for there to be enough randomness so that things sort of mix and it doesn't matter where you started? So if you look at this chain, it takes on the average, let's say 5 tries to make a transition of this kind. It takes on the average 2 tries for a transition of that kind to take place. So every 10 time steps or so there's a little bit of randomness. Over 100 times steps there's a lot of randomness, so you expect that the initial state will have been forgotten. It doesn't matter. There's enough mixing and randomness that happens over 100 time steps. And so this approximation is good. On the other hand, if the numbers were different, the story would have been different. Suppose that this number is 0.999 and that number is something like 0.998, so that this number becomes 0.002, and that number becomes 0.001. Suppose that the numbers were of this kind. How long does it take to forget the initial state? If I start here, there's a probability of 1 in 1,000 that next time I'm going to be there. 
So on the average it's going to take me about a thousand tries just to leave that state. So, over roughly a thousand time steps my initial state really does matter. If I tell you that you started here, you're pretty certain that, let's say over the next 100 time steps, you will still be here. So the initial state has a big effect. In this case we say that this Markov chain has a much slower time scale. It takes a much longer time to mix, it takes a much longer time for the initial state to be forgotten, and this means that we cannot do this kind of approximation if the number of steps is just 99. Here we might need n to be as large as, let's say, 10,000 or so before we can start using the approximation. So when one uses that approximation, one needs to have some sense of how quickly does the state move around and take that into account. So there's a whole sub-field that deals with estimating or figuring out how quickly different Markov chains mix, and that's the question of when can you apply those steady state approximations. So now let's get a little closer to the real world. We're going to talk about a famous problem that was posed, started, and solved by a Danish engineer by the name of Erlang. This is the same person whose name is given to the Erlang distribution that we saw in the context of the Poisson processes. So this was more than 100 years ago, when phones had just started existing. And he was trying to figure out what it would take to set up a phone system that how many lines should you set up for a community to be able to communicate to the outside world. So here's the story. You've got a village, and that village has a certain population, and you want to set up phone lines. So you want to set up a number of phone lines, let's say that number is B, to the outside world. And how do you want to do that? Well, you want B to be kind of small. You don't want to set up too many wires because that's expensive. On the other hand, you want to have enough wires so that if a reasonable number of people place phone calls simultaneously, they will all get a line and they will be able to talk. So if B is 10 and 12 people want to talk at the same time, then 2 of these people would get a busy signal, and that's not something that we like. We would like B to be large enough so that there's a substantial probability, that there's almost certainty that, under reasonable conditions, no one is going to get a busy signal. So how do we go about modeling a situation like this? Well, to set up a model you need two pieces, one is to describe how do phone calls get initiated, and once a phone call gets started, how long does it take until the phone call is terminated? So we're going to make the simplest assumptions possible. Let's assume that phone calls originate as a Poisson process. That is, out of that population people do not really coordinate. At completely random times, different people with decide to pick up the phone. There's no dependencies between different people, there's nothing special about different times, different times are independent. So a Poisson model is a reasonable way of modeling this situation. And it's going to be a Poisson process with some rate lambda. Now, the rate lambda would be easy to estimate in practice. You observe what happens in that village just over a couple of days, and you figure out what's the rate at which people attempt to place phone calls. 
Now, about phone calls themselves, we're going to make the assumption that the duration of a phone call is a random variable that has an exponential distribution with a certain parameter mu. So 1/mu is the mean duration of a phone call. So the mean duration, again, is easy to estimate. You just observe what's happening, see on the average how long these phone calls are. Is the exponential assumption a good assumption? Well, it means that most phone calls will be kind of short, but there's going to be a fraction of phone calls that are going to be longer, and then a very small fraction that are going to be even longer. So it sounds plausible. It's not exactly realistic, that is, phone calls that last less than, say, 15 seconds are not that common. So either nothing happens or you have to say a few sentences and so on. Also, back in the days when people used to connect to the internet using dial up modems, that assumption was completely destroyed, because people would dial up and then keep their phone line busy for a few hours, if the phone call was a free one. So at those times the exponential assumption for the phone call duration was completely destroyed. But leaving that detail aside, it's sort of a reasonable assumption to just get started with this problem. All right, so now that we have those assumptions, let's try to come up with the model. And we're going to set up a Markov process model. Now the Poisson process runs in continuous time, and call durations, being exponential random variables, also are continuous random variables, so it seems that we are in a continuous time universe. But we have only studied Markov chains for the discrete time case. What are we going to do? We can either develop the theory of continuous time Markov chains, which is possible. But we are not going to do that in this class. Or we can discretize time and work with a discrete time model. So we're going to discretize time in the familiar way, the way we did it when we studied the Poisson process. We're going to take the time axis and split it into little discrete mini slots, where every mini slot has a duration delta. So this delta is supposed to be a very small number. So what is the state of the system? So, you look at the situation in the system at some particular time and I ask you what is going on right now, what's the information you would tell me? Well, you would tell me that right now out of these capital B lines, 10 of them are busy, or 12 of them are busy. That describes the state of the system, that tells me what's happening at this point. So we set up our state space as being the numbers from 0 to B. 0 corresponds to a state in which all the phone lines are free, no one is talking. Capital B corresponds to a case where all the phone lines are busy. And then you've got states in between. And now let's look at the transition probabilities. Suppose that right now we have i-1 lines that are busy. Or maybe, let me look here. Suppose that there's i lines that are busy. What can happen the next time? What can happen is that a new phone call gets placed, in which case my state moves up by 1, or an existing call terminates, in which case my state goes down by 1, or none of the two happens, in which case I stay at the same state. Well, it's also possible that a phone call gets terminated and a new phone call gets placed sort of simultaneously. But when you take your time slots to be very, very small, this is going to have a negligible probability, order of delta squared, so we ignore this.
So what's the probability of an upwards transition? That's the probability that the Poisson process records an arrival during a mini slot of duration delta. By the definition of the Poisson process, the probability of this happening is just lambda delta. So each one of these upwards transitions has the same probability of lambda delta. So you've got lambda deltas everywhere in this diagram. How about, now, phone call terminations? If you had a single call that was active, so if you were here, what's the probability that the phone call terminates? So the phone call has an exponential duration with parameter mu. And we discussed before that an exponential random variable can be thought of as the first arrival time in a Poisson process. So the probability that you get this event to happen over a delta time interval is just mu times delta. So if you have a single phone call that's happening right now, with probability mu times delta, that call is going to terminate. But suppose that we have i phone calls that are currently active. Each one of them has a probability of mu delta of terminating, but collectively the probability that one of them terminates becomes i times mu delta. So that's because you get the mu delta contribution -- the probability of termination -- from each one of the different phone calls. OK, now this is an approximate calculation, because it ignores the possibility that two phone calls terminate at the same time. Again, the way to think of why this is the correct rate: when you have i phone calls that are simultaneously running and waiting for one of them to terminate, this is like having i separate Poisson processes that are running in parallel, and you ask for the probability that one of those processes records an event. Now when you put all those processes together, it's like having a Poisson process with total rate i times mu, and so i times mu delta is the overall probability that something happens in terms of phone call terminations at those times. So in any case, this is the transition probability for downwards transitions. Now that we've got this, we can analyze this chain. This chain has the birth death form that we discussed towards the end of last lecture. And for birth death chains, it's easy to write out and find the steady state probabilities. Instead of writing down the balance equations in the general form, we think in terms of a conservation of probabilities or of transitions by looking at what happens across a particular cut in this diagram. The number of transitions in the chain that cross from here to there has to be approximately equal to the number of transitions from there to here, because whatever comes up must come down and then come up and so on. So the frequency with which transitions of this kind are observed has to be the same as the frequency of transitions of that kind. What's the frequency with which transitions of this kind happen? And by frequency I mean what percentage of the mini slots involve a transition of this kind. Well, for a transition of that kind to happen we need to be at state i-1, which happens this much of the time. And then there's probability lambda delta that the transition is of this kind. So the frequency with which this kind of transition is observed is lambda delta times pi(i-1). This is the fraction of time steps at which a transition from specifically this state to specifically that state is observed.
This has to be the same as the frequency with which transitions of that kind are observed, and that frequency is going to be i mu delta times pi(i), and then we cancel the deltas, and we are left with this equation here. So this equation expresses pi(i) in terms of pi(i-1). So if we knew pi(0) we could use that equation to determine pi(1). Once we know pi(1), we can use that equation to determine pi(2), and so on, you keep going. And the general formula that comes out of this, I will not do the algebra, it's a straightforward substitution, you find that pi(i), the steady state probability of state i, is given by this expression, which involves the pi(0) from which we started. Now what is pi(0)? Well, we don't know yet, but we can find it by using the normalization equation. The sum of the pi(i) has to be equal to 1. So the sum of all of those numbers has to be equal to 1. And the only way that this can happen is by setting pi(0) to be equal to that particular number. So if I tell you the value of capital B, you can set up this Markov chain, you can calculate pi(0), and then you can calculate pi(i), and so you know the steady state probabilities of this chain, so you can answer the question: if I drop in at a random time, how likely is it that I'm going to find the state to be here, or the state to be there? So the steady state probabilities are probabilities, but we also interpret them as frequencies. So once I find pi(i), it also tells me what fraction of the time the state is equal to i. And you can answer that question for every possible i. Now, why did we do this exercise? We're interested in the probability that the system is busy. So if a person, a new phone call gets placed, it just drops out of the sky. According to that Poisson process, that new phone call is going to find the system at a random state. That random state is described in steady state by the probabilities pi(i). And the probability that you find the system to be busy is the probability that when you drop in the state happens to be that particular number B. So pi sub B is the probability of being busy. And this is the probability that you would like to be small in a well engineered system. So you ask the question: given my lambda and mu, my design question is to determine capital B, the number of phone lines, so that this number is small. Could we have done, could we figure out a good value for B by doing a back of the envelope calculation? Let's suppose that lambda is 30 and that mu is 1/3. So let's say these rates are per minute. And this mu, again, is a rate per minute. Again, the units of mu are going to be calls per minute. So since our time unit is minutes, the mean duration of calls is 1/mu minutes. So a typical call, or on the average a call, lasts for 3 minutes. So you get 30 calls per minute. Each call lasts for 3 minutes on the average. So on the average, if B was infinite, every call goes through. How many calls would be active on the average? So you get 30 per minute. If a call lasted exactly 1 minute, then at any time you would have 30 calls being active. Now a call lasts on the average for 3 minutes. So during each minute you generate 90 minutes of talking time. So by thinking in terms of averages you would expect that at any time there would be about 90 calls that are active. And if 90 calls are active on the average, you could say OK, I'm going to set up my capital B to be 90.
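Here is a small computational sketch of the calculation just described. The numbers lambda = 30 and mu = 1/3 are the ones used in the lecture; the function name and the choice to evaluate B = 90 are my own illustration, not something stated in the transcript.

def blocking_probability(B, lam=30.0, mu=1/3):
    # Steady state of the birth-death chain: iterating pi(i) = pi(i-1) * lam / (i * mu)
    # gives pi(i) proportional to (lam/mu)**i / i!, and pi(0) comes from normalization.
    weights = [1.0]
    for i in range(1, B + 1):
        weights.append(weights[-1] * lam / (i * mu))
    total = sum(weights)
    pi = [w / total for w in weights]    # normalized steady-state probabilities
    return pi[B]                         # probability that all B lines are busy

print(blocking_probability(90))          # blocking probability when B equals the average load of 90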
But that's not very good, because if the average number of phone calls that want to happen is 90, sometimes you're going to have 85, sometimes you will have 95. And to be sure that the phone calls will go through you probably want to choose your capital B to be a number a little larger than 90. How much larger than 90? Well, this is a question that you can answer numerically. So you go through the following procedure. I try different values of capital B. For any given value of capital B, I do this numerical calculation, I find the probability that the system is busy, and then I ask what's the value of B that makes my probability of being busy, let's say, roughly 1%. And if you do that calculation with the parameters that I gave you, you find that B would be something like 106. So with these parameters, where you have, on the average, 90 phone calls being active, you actually need some margin to protect against the fluctuations, if suddenly by chance more people want to talk. And if you want to have a good guarantee that an incoming person will have a very small probability of finding a busy system, then you will need about 106 phone lines. So that's the calculation and the argument that Erlang went through a long time ago. It's actually interesting that Erlang did this calculation before Markov chains were invented. So Markov's work, and the beginning of work on Markov chains, happens about 10-15 years after Erlang. So obviously he didn't call that a Markov chain. But it was something that he could study from first principles. So this is a pretty useful thing. These probabilities that come out of that model, at least in the old days, would all be very well tabulated in handbooks that every decent phone company engineer would sort of have with them. So this is about as practical as it gets. It's one of the sort of standard real world applications of Markov chains. So now, to close our subject, we're going to consider a couple of new skills and see how we can calculate a few additional interesting quantities that have to do with a Markov chain. So the problem we're going to deal with here is the one I hinted at when I was talking about this picture. You start at a transient state, and you're going to eventually end up here or there. We want to find the probabilities of one option happening or the other happening. So in this picture we have a class of states that are transient. These are transient because you're going to move around those states, but there's a transition that you can make, and you go to a state from which you cannot escape afterwards. Are you going to end up here or are you going to end up there? You don't know. It's random. Let's try to calculate the probability that you end up at state 4. Now, the probability that you end up at state 4 will depend on where you start. Because if you start here, you probably have more chances of getting to 4 because you get that chance immediately, whereas if you start here there's more chances that you're going to escape that way, because it kind of takes you time to get there, and it's more likely that you exit right away. So the probability of exiting and ending up at state 4 will depend on the initial state. That's why when we talk about these absorption probabilities we include an index i that tells us what the initial state is. And we want to find this absorption probability, the probability that we end up here, for the different initial states.
Now for some initial states this is very easy to answer. If you start at state 4, what's the probability that eventually you end up in this part of the chain? It's 1. You're certain to be there, that's where you started. If you start at state 5, what's the probability that you end up eventually at state 4? It's probability 0, there's no way to get there. Now, how about if you start at a state like state 2? If you start at state 2 then there's a few different things that can happen. Either you end up at state 4 right away and this happens with probability 0.2, or you end up at state 1, and this happens with probability 0.6. So if you end up at state 4, you are done. We are there. If you end up at state 1, then what? Starting from state 1 there's two possibilities. Either eventually you're going to end up at state 4, or eventually you're going to end up at state 5. What's the probability of this happening? We don't know what it is, but it's what we defined to be a1. This is the probability -- a1 is the probability -- that eventually you settle in state 4 given that the initial state was 1. So this probability is a1. So our event of interest can happen in two ways. Either I go there directly, or I go here with probability 0.6. And given that I go there, eventually I end up at state 4, which happens with probability a1. So the total probability of ending up at state 4 is going to be the sum of the probabilities of the different ways that this event can happen. So our equation, in this case, is going to be, that's a2, is going to be 0.2 (that's the probability of going there directly) plus with probability 0.8 I end up at state 1, and then from state 1 I will end up at state 4 with probability a1. So this is one particular equation that we've got for what happens if we start from this state. We can do a similar argument starting from any other state. Starting from state i the probability that eventually I end up at state 4 is, we consider the different possible scenarios of where do I go next, which is my state j, with probability Pij. Next time I go to j, and given that I started at j, this is the probability that I end up at state 4. So this equation that we have here is just an abstract version in symbols of what we wrote down for the particular case where the initial state was 2. So you write down an equation of this type for every state inside here. You'll have a separate equation for a1, a2, and a3. And that's going to be a system of 3 equations with 3 unknowns, the a's inside the transient states. So you can solve that 3 by 3 system of equations. Fortunately, it turns out to have a unique solution, and so once you solve it you have found the probabilities of absorption and the probability that eventually you get absorbed at state 4. Now, in the picture that we had here, this was a single state, and that one was a single state. How do things change if our recurrent, or trapping sets consist of multiple states? Well, it doesn't really matter that we have multiple states. All that matters is that this is one lump and once we get there we are stuck in there. So if the picture was, let's say, like this, 0.1 and 0.2, that basically means that whenever you are in that state there's a total probability of 0.3 of ending in that lump and getting stuck inside that lump. So you would take that picture and change it and make it instead a total probability of 0.3, of ending somewhere inside that lump. 
And similarly, you take this lump and you view it as just one entity, and from any state you record the total probability that, given that I'm here, I end up in that entity. So basically, if the only thing you care about is the probability that you're going to end up in this lump, you can replace that lump with a single state, view it as a single state, and calculate probabilities using this formula. All right, so now we know where the chain is going to get to. At least we know probabilistically. We know with what probability it is going to go here, and that also tells us the probability that eventually it's going to get there. The other question is, how long is it going to take until we get to either this state or that state? We can call that event absorption, meaning that the state got somewhere into a recurrent class from which it could not get out. Okay. Let's deal with that question for the case where we have only 1 absorbing state. So here our Markov chain is a little simpler than the one in the previous slide. We've got our transient states, we've got our recurrent state, and once you get into the recurrent state you just stay there. So here we're certain that no matter where we start we're going to end up here. How long is it going to take? Well, we don't know. It's a random variable. The expected value of that random variable, let's call it mu. But how long it takes to get there certainly depends on where we start. So let's put in our notation again this index i that indicates where we started from. And now the argument is going to be of the same type as the one we used before. We can think in terms of a tree once more, that considers all the possible options. So suppose that you start at state 1. Starting from state 1, the expected time until you end up at the absorbing state is mu1. Now, starting from state 1, what are the possibilities? You make your first transition, and that first transition is going to take you either to state 2 or to state 3. It takes you to state 2 with probability 0.6, it takes you to state 3 with probability 0.4. Starting from state 2, eventually you're going to get to state 4. How long does it take? We don't know, it's a random variable. But the expected time until this happens is mu2. Starting from state 2, how long does it take you to get to state 4? And similarly starting from state 3, it's going to take you on the average mu3 time steps until you get to state 4. So what's the expected value of the time until I end at state 4? So with probability 0.6, I'm going to end up at state 2 and from there on it's going to be expected time mu2, and with probability 0.4 I'm going to end up at state 3, and from there it's going to take me so much time. So this is the expected time it's going to take me after the first transition. But we also spent 1 time step for the first transition. The total time to get there is the time of the first transition, which is 1, plus the expected time starting from the next state. This expression here is the expected time starting from the next state, but we also need to account for the first transition, so we add 1. And this is going to be our mu1. So once more we have a linear equation that ties together the different mu's. And the equation starting from state 4 in this case, of course, is going to be simple: starting from that state, the expected number of steps it takes you to get there for the first time is, of course, 0, because you're already there. So for that state this is fine, and for all the other states you get an equation of this form.
Now we're going to have an equation for every state. It's a system of linear equations, once more we can solve them, and this gives us the expected times until our chain gets absorbed in this absorbing state. And it's nice to know that this system of equations always has a unique solution. OK, so this was the expected time to absorption, for the case where we had a single absorbing state. Suppose that we have our transient states and that we have multiple recurrent classes, or multiple absorbing states. Suppose you've got a picture like this. And we want to calculate the expected time until we get here or there. Expected time until we get to an absorbing state. What's the trick? Well, we can lump both of these states together and think of them as just one big state, one place for which we're interested in how long it takes us to get there. So lump them as one state, and accordingly kind of merge all of those probabilities. So starting from here, the probability that next time I end up in this lump and get absorbed is going to be this probability plus that probability. So we would change that picture. Think of this as being just one big state. And sort of add those two probabilities together to come up with a single probability, which is the probability that starting from here, next time I find myself at some absorbing state. So once you know how to deal with a situation like this, you can also find expected times to absorption for the case where you've got multiple absorbing states. You just lump all of those multiple absorbing states into a single one. Finally, there's a kind of related quantity that's of interest. The question is almost the same as in the previous slide, except that here we do not have any absorbing states. Rather, we have a single recurrent class of states. You start at some state i. You have a special state, that is state s. And you ask the question, how long is it going to take me until I get to s for the first time? It's a single recurrent class of states. So you know that the state keeps circulating here and it keeps visiting all of the possible states. So eventually this state will be visited. How long does it take for this to happen? OK. So we're interested in how long it takes for this to happen, how long it takes until we get to s for the first time. And we don't care about what happens afterwards. So we might as well change this picture and remove the transitions out of s and make them self-transitions. Is the answer going to change? No. The only thing that we changed was what happens after you get to s. But what happens after you get to s doesn't matter. The question we're dealing with is how long does it take us to get to s. So essentially, after we do this transformation, it's the same question as before: what's the time it takes until eventually we hit this state? And now in this new picture, this state is an absorbing state. Or you can just think from first principles. Starting from the state itself, s, it takes you 0 time steps until you get to s. Starting from anywhere else, you need one transition, and then after the first transition you find yourself at state j with probability Pij, and from then on you are going to take expected time Tj until you get to that terminal state s. So once more these equations have a unique solution, you can solve them and find the answer. And finally, there's a related question, which is the mean recurrence time of s.
In that question you start at s, the chain will move randomly, and you ask how long it is going to take until I come back to s for the first time. So notice the difference. Here we're talking about the first time after time 0, whereas here it's just the first time anywhere. So here if you start from s, Ts* is not 0. You have to make at least one transition, and we count how long it takes until I get back to s. Well, how long does it take me until I get back to s? I do my first transition, and then after my first transition I calculate, from the next state, the expected time until I come back to s. So all of these equations that I wrote down, they all kind of look the same. But they are different. So you can either memorize all of these equations, or instead, what's better, is to just get the basic idea. That is, to calculate probabilities or expected values you use the total probability or total expectation theorem, condition on the first transition, and take it from there. So you're going to get a little bit of practice with these skills in recitation tomorrow, and of course it's in your problem set as well.
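As one last illustration, here is the mean recurrence time computed for the same made-up chain as in the previous sketch: one transition out of s, plus the expected first-passage time back to s from wherever we land.

```python
import numpy as np

# Mean recurrence time of s: start at s, make one transition, then add the
# mean first-passage time t_j back to s from wherever we land:
#   t*_s = 1 + sum_j P_sj t_j.
# Same made-up 3-state chain as in the first-passage sketch above.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])
s = 2
others = [i for i in range(3) if i != s]
Q = P[np.ix_(others, others)]
t = np.zeros(3)
t[others] = np.linalg.solve(np.eye(2) - Q, np.ones(2))  # first-passage times

t_star = 1 + P[s, :] @ t
print(f"mean recurrence time of state {s}: {t_star:.3f}")
```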
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
7_Discrete_Random_Variables_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, good morning. So today, we're going to have a fairly packed lecture. We are going to conclude with chapter two, discrete random variables. And we will be talking mostly about multiple random variables. And this is also the last lecture as far as quiz one is concerned. So it's going to cover the material until today, and of course the next recitation and tutorial as well. OK, so we're going to review quickly what we introduced at the end of last lecture, where we talked about the joint PMF of two random variables. We're going to talk about the case of more than two random variables as well. We're going to talk about the familiar concepts of conditioning and independence, but applied to random variables instead of events. We're going to look at the expectations once more, talk about a few properties that they have, and then solve a couple of problems and calculate a few things in somewhat clever ways. So the first point I want to make is that, to a large extent, whatever is happening in our chapter on discrete random variables is just an exercise in notation. There is stuff and concepts that you are already familiar with-- probabilities, probabilities of two things happening, conditional probabilities. And all that we're doing, to some extent, is rewriting those familiar concepts in new notation. So for example, this is the joint PMF of two random variable. It gives us, for any pair or possible values of those random variables, the probability that that pair occurs simultaneously. So it's the probability that simultaneously x takes that value, and y takes that other value. And similarly, we have the notion of the conditional PMF, which is just a list of the -- condition of -- the various conditional probabilities of interest, conditional probability that one random variable takes this value given that the other random variable takes that value. Now, a remark about conditional probabilities. Conditional probabilities generally are like ordinary probabilities. You condition on something particular. So here we condition on a particular y. So think of little y as a fixed quantity. And then look at this as a function of x. So given that y, which we condition on, given our new universe, we're considering the various possibilities for x and the probabilities that they have. Now, the probabilities over all x's, of course, needs to add to 1. So we should have a relation of this kind. So they're just like ordinary probabilities over the different x's in a universe where we are told the value of the random variable y. Now, how are these related? So we call these the marginal, these the joint, these the conditional. And there are some relations between these. For example, to find the marginal from the joint, it's pretty straightforward. The probability that x takes a particular value is the sum of the probabilities of all of the different ways that this particular value may occur. What are the different ways? Well, it may occur together with a certain y, or together with some other y, or together with some other y. So you look at all the possible y's that can go together with this x, and add the probabilities of all of those pairs for which we get this particular value of x. 
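As a concrete illustration, here is a small Python sketch of these bookkeeping operations on a made-up joint PMF table; none of the numbers come from the lecture.

```python
import numpy as np

# A made-up joint PMF p_{X,Y}(x, y) on x in {1,2,3}, y in {1,2}, stored as
# a table whose entries sum to 1.  Rows index x, columns index y.
pXY = np.array([
    [0.10, 0.20],
    [0.05, 0.25],
    [0.15, 0.25],
])

# Marginal of X: for each x, add over all y's that can go together with it.
pX = pXY.sum(axis=1)
# Marginal of Y: add over all x's.
pY = pXY.sum(axis=0)

# Conditional PMF of X given Y = y (here y is the column index):
# p_{X|Y}(x | y) = p_{X,Y}(x, y) / p_Y(y); each column then sums to 1.
pX_given_Y = pXY / pY

print("p_X:", pX)                        # [0.30, 0.30, 0.40]
print("p_Y:", pY)                        # [0.30, 0.70]
print("p_{X|Y}(. | y=1):", pX_given_Y[:, 0])
print("columns sum to 1:", pX_given_Y.sum(axis=0))
```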
And then there's a relation between that connects these two probabilities with the conditional probability. And it's this relation. It's nothing new. It's just new notation for writing what we already know, that the probability of two things happening is the probability that the first thing happens, and then given that the first thing happens, the probability that the second one happened. So how do we go from one to the other? Think of A as being the event that X takes the value, little x, and B being the event that Y takes the value, little y. So the joint probability is the probability that these two things happen simultaneously. It's the probability that X takes this value times the conditional probability that Y takes this value, given that X took that first value. So it's the familiar multiplication rule, but just transcribed in our new notation. So nothing new so far. OK, why did we go through this exercise and this notation? It's because in the experiments where we're interested in the real world, typically there's going to be lots of uncertain quantities. There's going to be multiple random variables. And we want to be able to talk about them simultaneously. Okay. Why two and not more than two? How about three random variables? Well, if you understand what's going on in this slide, you should be able to kind of automatically generalize this to the case of multiple random variables. So for example, if we have three random variables, X, Y, and Z, and you see an expression like this, it should be clear what it means. It's the probability that X takes this value and simultaneously Y takes that value and simultaneously Z takes that value. I guess that's an uppercase Z here, that's a lowercase z. And if I ask you to find the marginal of X, if I tell you the joint PMF of the three random variables and I ask you for this value, how would you find it? Well, you will try to generalize this relation here. The probability that x occurs is the sum of the probabilities of all events that make X to take that particular value. So what are all the events? Well, this particular x can happen together with some y and some z. We don't care which y and z. Any y and z will do. So when we consider all possibilities, we need to add here over all possible values of y's and z's. So consider all triples, x, y, z. Fix x and consider all the possibilities for the remaining variables, y and z, add these up, and that gives you the marginal PMF of X. And then there's other things that you can do. This is the multiplication rule for two events. We saw back in chapter one that there's a multiplication rule when you talk about more than two events. And you can write a chain of conditional probabilities. We can certainly do the same in our new notation. So let's look at this rule up here. Multiplication rule for three random variables, what does it say? The probability of three things happening simultaneously, X, Y, Z taking specific values, little x, little y, little z, that probability is the probability that the first thing happens, that X takes that value. Given that X takes that value, we multiply it with the probability that Y takes also a certain value. And now, given that X and Y have taken those particular values, we multiply with a conditional probability that the third thing happens, given that the first two things happen. So this is just the multiplication rule for three events, which would be probability of A intersection B intersection C equals-- you know the rest of the formula. 
You just rewrite this formula in PMF notation. Probability of A intersection B intersection C is the probability of A, which corresponds to this term, times the probability of B given A, times the probability of C given A and B. So what else is there that's left from chapter one that we can or should generalize to random variables? Well, there's the notion of independence. So let's define what independence means. Instead of talking about just two random variables, let's go directly to the case of multiple random variables. When we talked about events, things were a little complicated. We had a simple definition for independence of two events. Two events are independent if the probability of both is equal to the product of the probabilities. But for three events, it was kind of messy. We needed to write down lots of conditions. For random variables, things in some sense are a little simpler. We only need to write down one formula and take this as the definition of independence. Three random variables are independent if and only if, by definition, their joint probability mass function factors out into individual probability mass functions. So the probability that all three things happen is the product of the individual probabilities that each one of these three things is happening. So independence means mathematically that you can just multiply probabilities to get to the probability of several things happening simultaneously. So with three events, we have to write a huge number of equations, of equalities that have to hold. How can it be that with random variables we can only manage with one equality? Well, the catch is that this is not really just one equality. We require this to be true for every little x, y, and z. So in some sense, this is a bunch of conditions that are being put on the joint PMF, a bunch of conditions that we need to check. So this is the mathematical definition. What is the intuitive content of this definition? The intuitive content is the same as for events. Random variables are independent if knowing something about the realized values of some of these random variables does not change our beliefs about the likelihood of various values for the remaining random variables. So independence would translate, for example, to a condition such as the conditional PMF of X , given y, should be equal to the marginal PMF of X. What is this saying? That you have some original beliefs about how likely it is for X to take this value. Now, someone comes and tells you that Y took on a certain value. This causes you, in principle, to revise your beliefs. And your new beliefs will be captured by the conditional PMF, or the conditional probabilities. Independence means that your revised beliefs actually will be the same as your original beliefs. Telling you information about the value of Y doesn't change what you expect for the random variable X. Why didn't we use this definition for independence? Well, because this definition only makes sense when this conditional is well-defined. And this conditional is only well-defined if the events that Y takes on that particular value has positive probability. We cannot condition on events that have zero probability, so conditional probabilities are only defined for y's that are likely to occur, that have a positive probability. Now, similarly, with multiple random variables, if they're independent, you would have relations such as the conditional of X, given y and z, should be the same as the marginal of X. What is this saying? 
Again, that if I tell you the values, the realized values of random variables Y and Z, this is not going to change your beliefs about how likely x is to occur. Whatever you believed in the beginning, you're going to believe the same thing afterwards. So it's important to keep that intuition in mind, because sometimes this way you can tell whether random variables are independent without having to do calculations and to check this formula. OK, so let's check our concepts with a simple example. Let's look at two random variables that are discrete, take values between one and for each. And this is a table that gives us the joint PMF. So it tells us the probability that X equals to 2 and Y equals to 1 happening simultaneously. It's an event that has probability 1/20. Are these two random variables independent? You can try to check a condition like this. But can we tell directly from the table? If I tell you a value of Y, could that give you useful information about X? Certainly. If I tell you that Y is equal to 1, this tells you that X must be equal to 2. But if I tell you that Y was equal to 3, this tells you that, still, X could be anything. So telling you the value of Y kind of changes what you expect or what you consider possible for the values of the other random variable. So by just inspecting here, we can tell that the random variables are not independent. Okay. What's the other concept we introduced in chapter one? We introduced the concept of conditional independence. And conditional independence is like ordinary independence but applied to a conditional universe where we're given some information. So suppose someone tells you that the outcome of the experiment is such that X is less than or equal to 2 and Y is larger than or equal to 3. So we are given the information that we now live inside this universe. So what happens inside this universe? Inside this universe, our random variables are going to have a new joint PMF which is conditioned on the event that we were told that it has occurred. So let A correspond to this sort of event here. And now we're dealing with conditional probabilities. What are those conditional probabilities? We can put them in a table. So it's a two by two table, since we only have two possible values. What are they going to be? Well, these probabilities show up in the ratios 1, 2, 2, and 4. Those ratios have to stay the same. The probabilities need to add up to one. So what should the denominators be since these numbers add up to nine? These are the conditional probabilities. So this is the conditional PMF in this example. Now, in this conditional universe, is x independent from y? If I tell you that y takes this value, so we live in this universe, what do you know about x? What you know about x is at this value is twice as likely as that value. If I condition on y taking this value, so we're living here, what do you know about x? What you know about x is that this value is twice as likely as that value. So it's the same. Whether we live here or we live there, this x is twice as likely as that x. So the conditional PMF in this new universe, the conditional PMF of X given y, in the new universe is the same as the marginal PMF of X, but of course in the new universe. So no matter what y is, the conditional PMF of X is the same. And that conditional PMF is 1/3 and 2/3. This is the conditional PMF of X in the new universe no matter what y occurs. So Y does not give us any information about X, doesn't cause us to change our beliefs inside this little universe. 
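Here is a small sketch of that conditional-universe computation. The transcript does not reproduce the full 4-by-4 table, so most of the entries below are made up; the only thing kept from the lecture is that inside the event {X <= 2, Y >= 3} the four probabilities are in the ratio 1 : 2 : 2 : 4.

```python
import numpy as np

# Conditioning a joint PMF on an event and testing independence inside the
# conditional universe.  Entries outside the event A are assumed filler.
x_vals = np.array([1, 2, 3, 4])
y_vals = np.array([1, 2, 3, 4])
pXY = np.zeros((4, 4))
pXY[0, 2], pXY[0, 3] = 1/20, 2/20     # (x=1, y=3), (x=1, y=4)
pXY[1, 2], pXY[1, 3] = 2/20, 4/20     # (x=2, y=3), (x=2, y=4)
pXY[2, 0] = 3/20                      # assumed filler entries
pXY[3, 1] = 4/20
pXY[3, 3] = 4/20
assert np.isclose(pXY.sum(), 1.0)

# Condition on the event A = {X <= 2, Y >= 3}.
in_A = np.zeros((4, 4), dtype=bool)
in_A[np.ix_(x_vals <= 2, y_vals >= 3)] = True
pXY_given_A = np.where(in_A, pXY, 0.0) / pXY[in_A].sum()

# Inside A: compare the conditional joint with the product of the
# conditional marginals.
pX_A = pXY_given_A.sum(axis=1)
pY_A = pXY_given_A.sum(axis=0)
print(np.allclose(pXY_given_A, np.outer(pX_A, pY_A)))   # True: independent in A
```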
And therefore the two random variables are independent. Now, the other way that you can verify that we have independence is to find the marginal PMFs of the two random variables. The marginal PMF of X, you find it by adding those two terms. You get 1/3. Adding those two terms, you get 2/3. Marginal PMF of Y, you find it, you add these two terms, and you get 1/3. And the marginal PMF of Y here is going to be 2/3. And then you ask the question, is the joint the product of the marginals? And indeed it is. This times this gives you 1/9. This times this gives you 2/9. So the values in the table with the joint PMFs is the product of the marginal PMFs of X and Y in this universe, so the two random variables are independent inside this universe. So we say that they're conditionally independent. All right. Now let's move to the new topic, to the new concept that we introduce in this chapter, which is the concept of expectations. So what are the things to know here? One is the general idea. The way to think about expectations is that it's something like the average value for random variable if you do an experiment over and over, and if you interpret probabilities as frequencies. So you get x's over and over with a certain frequency -- P(x) -- a particular value, little x, gets realized. And each time that this happens, you get x dollars. How many dollars do you get on the average? Well, this formula gives you that particular average. So first thing we do is to write down a definition for this sort of concept. But then the other things you need to know is how to calculate expectations using shortcuts sometimes, and what properties they have. The most important shortcut there is is that, if you want to calculate the expected value, the average value for a random variable, you do not need to find the PMF of that random variable. But you can work directly with the x's and the y's. So you do the experiment over and over. The outcome of the experiment is a pair (x,y). And each time that a certain (x,y) happens, you get so many dollars. So this fraction of the time, a certain (x,y) happens. And that fraction of the time, you get so many dollars, so this is the average number of dollars that you get. So what you end up, since it is the average, then that means that it corresponds to the expected value. Now, this is something that, of course, needs a little bit of mathematical proof. But this is just a different way of accounting. And it turns out we give you the right answer. And it's a very useful shortcut. Now, when we're talking about functions of random variables, in general, we cannot speak just about averages. That is, the expected value of a function of a random variable is not the same as the function of the expected values. A function of averages is not the same as the average of a function. So in general, this is not true. But what it's important to know is to know the exceptions to this rule. And the important exceptions are mainly two. One is the case of linear functions of a random variable. We discussed this last time. So the expected value of temperature in Celsius is, you first find the expected value of temperature in Fahrenheit, and then you do the conversion to Celsius. So whether you first average and then do the conversion to the new units or not, it shouldn't matter when you get the result. The other property that turns out to be true when you talk about multiple random variables is that expectation still behaves linearly. 
So let X, Y, and Z be the score of a random student at each one of the three sections of the SAT. So the overall SAT score is X plus Y plus Z. This is the average score, the average total SAT score. Another way to calculate that average is to look at the first section of the SAT and see what was the average. Look at the second section, look at what was the average, and so the third, and add the averages. So you can do the averages for each section separately, add the averages, or you can find total scores for each student and average them. So I guess you probably believe that this is correct if you talk just about averaging scores. Since expectations are just the variation of averages, it turns out that this is also true in general. And the derivation of this is very simple, based on the expected value rule. And you can look at it in the notes. So this is one exception, which is linearity. The second important exception is the case of independent random variables, that the product of two random variables has an expectation which is the product of the expectations. In general, this is not true. But for the case where we have independence, the expectation works out as follows. Using the expected value rule, this is how you calculate the expected value of a function of a random variable. So think of this as being your g(X, Y) and this being your g(little x, y). So this is something that's generally true. Now, if we have independence, then the PMFs factor out, and then you can separate this sum by bringing together the x terms, bring them outside the y summation. And you find that this is the same as expected value of X times the expected value of Y. So independence is used in this step here. OK, now what if X and Y are independent, but instead of taking the expectation of X times Y, we take the expectation of the product of two functions of X and Y? I claim that the expected value of the product is still going to be the product of the expected values. How do we show that? We could show it by just redoing this derivation here. Instead of X and Y, we would have g(X) and h(Y), so the algebra goes through. But there's a better way to think about it which is more conceptual. And here's the idea. If X and Y are independent, what does it mean? X does not convey any information about Y. If X conveys no information about Y, does X convey information about h(Y)? No. If X tells me nothing about Y, nothing new, it shouldn't tell me anything about h(Y). Now, if X tells me nothing about h of h(Y), could g(X) tell me something about h(Y)? No. So the idea is that, if X is unrelated to Y, doesn't have any useful information, then g(X) could not have any useful information for h(Y). So if X and Y are independent, then g(X) and h(Y) are also independent. So this is something that one can try to prove mathematically, but it's more important to understand conceptually why this is so. It's in terms of conveying information. So if X tells me nothing about Y, X cannot tell me anything about Y cubed, or X cannot tell me anything by Y squared, and so on. That's the idea. And once we are convinced that g(X) and h(Y) are independent, then we can apply our previous rule, that for independent random variables, expectations multiply the right way. Apply the previous rule, but apply it now to these two independent random variables. And we get the conclusion that we wanted. Now, besides expectations, we also introduced the concept of the variance. 
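Before moving on to the variance, here is a quick numerical check of the product rule just derived, using the expected value rule on a small made-up example.

```python
import numpy as np

# Check that E[g(X) h(Y)] = E[g(X)] E[h(Y)] when X and Y are independent.
x_vals = np.array([0.0, 1.0, 2.0]);  pX = np.array([0.2, 0.5, 0.3])
y_vals = np.array([1.0, 4.0]);       pY = np.array([0.6, 0.4])

g = lambda x: x**2 + 1.0
h = lambda y: np.sqrt(y)

# Independence: the joint PMF is the product of the marginals.
pXY = np.outer(pX, pY)

# Expected value rule: sum over all (x, y) of g(x) h(y) p_{X,Y}(x, y).
lhs = np.sum(np.outer(g(x_vals), h(y_vals)) * pXY)
rhs = np.sum(g(x_vals) * pX) * np.sum(h(y_vals) * pY)
print(lhs, rhs, np.isclose(lhs, rhs))   # the two agree
```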
And if you remember the definition of the variance, let me write down the formula for the variance of aX. It's the expected value of the random variable that we're looking at minus the expected value of the random variable that we're looking at. So this is the difference of the random variable from its mean. And we take that difference and square it, so it's the squared distance from the mean, and then take expectations of the whole thing. So when you look at that expression, you realize that a can be pulled out of those expressions. And because there is a squared, when you pull out the a, it's going to come out as an a-squared. So that gives us the rule for finding the variance of a scale or product of a random variable. The variance captures the idea of how wide, how spread out a certain distribution is. Bigger variance means it's more spread out. Now, if you take a random variable and the constants to it, what does it do to its distribution? It just shifts it, but it doesn't change its width. So intuitively it means that the variance should not change. You can check that mathematically, but it should also make sense intuitively. So the variance, when you add the constant, does not change. Now, can you add variances is the way we added expectations? Does variance behave linearly? It turns out that not always. Here, we need a condition. It's only in special cases-- for example, when the two random variables are independent-- that you can add variances. The variance of the sum is the sum of the variances if X and Y are independent. The derivation of this is, again, very short and simple. We'll skip it, but it's an important fact to remember. Now, to appreciate why this equality is not true always, we can think of some extreme examples. Suppose that X is the same as Y. What's going to be the variance of X plus Y? Well, X plus Y, in this case, is the same as 2X, so we're going to get 4 times the variance of X, which is different than the variance of X plus the variance of X. So that expression would give us twice the variance of X. But actually now it's 4 times the variance of X. The other extreme would be if X is equal to -Y. Then the variance is the variance of the random variable, which is always equal to 0. Now, a random variable which is always equal to 0 has no uncertainty. It is always equal to its mean value, so the variance, in this case, turns out to be 0. So in both of these cases, of course we have random variables that are extremely dependent. Why are they dependent? Because if I tell you something about Y, it tells you an awful lot about the value of X. There's a lot of information about X if I tell you Y, in this case or in that case. And finally, a short drill. If I tell you that the random variables are independent and you want to calculate the variance of a linear combination of this kind, then how do you argue? You argue that, since X and Y are independent, this means that X and 3Y are also independent. X has no information about Y, so X has no information about -Y. X has no information about -Y, so X should not have any information about -3Y. So X and -3Y are independent. So the variance of Z should be the variance of X plus the variance of -3Y, which is the variance of X plus 9 times the variance of Y. The important thing to note here is that no matter what happens, you end up getting a plus here, not a minus. So that's the sort of important thing to remember in this type of calculation. So this has been all concepts, reviews, new concepts and all that. 
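Here is a short simulation that checks these variance rules; the particular distributions chosen for X and Y are arbitrary.

```python
import numpy as np

# Simulation check of the variance rules, with arbitrary independent X and Y.
rng = np.random.default_rng(0)
N = 1_000_000
X = rng.uniform(0, 2, size=N)      # variance 1/3
Y = rng.exponential(1.0, size=N)   # variance 1

print(np.var(5 * X), 25 * np.var(X))                  # var(aX) = a^2 var(X)
print(np.var(X + 7), np.var(X))                       # adding a constant: unchanged
print(np.var(X - 3 * Y), np.var(X) + 9 * np.var(Y))   # independent: variances add
print(np.var(X + X), 4 * np.var(X))                   # dependent case: NOT var(X) + var(X)
```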
It's the usual fire hose. Now let's use them to do something useful finally. So let's revisit our old example, the binomial distribution, which counts the number of successes in independent trials of a coin. It's a biased coin that has a probability of heads, or probability of success, equal to p at each trial. Finally, we can go through the exercise of calculating the expected value of this random variable. And there's the way of calculating that expectation that would be the favorite of those people who enjoy algebra, which is to write down the definition of the expected value. We add over all possible values of the random variable, over all the possible k's, and weigh them according to the probabilities that this particular k occurs. The probability that X takes on a particular value k is, of course, the binomial PMF, which is this familiar formula. Clearly, that would be a messy and challenging calculation. Can we find a shortcut? There's a very clever trick. There's lots of problems in probability that you can approach really nicely by breaking up the random variable of interest into a sum of simpler and more manageable random variables. And if you can make it to be a sum of random variables that are just 0's or 1's, so much the better. Life is easier. Random variables that take values 0 or 1, we call them indicator variables. They indicate whether an event has occurred or not. In this case, we look at each coin flip one at a time. For the i-th flip, if it resulted in heads or a success, we record it 1. If not, we record it 0. And then we look at the random variable. If we take the sum of the Xi's, what is it going to be? We add one each time that we get a success, so the sum is going to be the total number of successes. So we break up the random variable of interest as a sum of really nice and simple random variables. And now we can use the linearity of expectations. We're going to find the expectation of X by finding the expectation of the Xi's and then adding the expectations. What's the expected value of Xi? Well, Xi takes the value 1 with probability p, and takes the value 0 with probability 1-p. So the expected value of Xi is just p. So the expected value of X is going to be just n times p. Because X is the sum of n terms, each one of which has expectation p, the expected value of the sum is the sum of the expected values. So I guess that's a pretty good shortcut for doing this horrendous calculation up there. So in case you didn't realize it, that's what we just established without doing any algebra. Good. How about the variance of X, of Xi? Two ways to calculate it. One is by using directly the formula for the variance, which would be -- let's see what it would be. With probability p, you get a 1. And in this case, you are so far from the mean. That's your squared distance from the mean. With probability 1-p, you get a 0, which is so far away from the mean. And then you can simplify that formula and get an answer. How about a slightly easier way of doing it. Instead of doing the algebra here, let me indicate the slightly easier way. We have a formula for the variance that tells us that we can find the variance by proceeding this way. That's a formula that's generally true for variances. Why is this easier? What's the expected value of Xi squared? Backtrack. What is Xi squared, after all? It's the same thing as Xi. Since Xi takes value 0 and 1, Xi squared also takes the same values, 0 and 1. 
So the expected value of Xi squared is the same as the expected value of Xi, which is equal to p. And the square of the expected value of Xi is p squared, so we get the final answer, p times (1-p). If you were to work through and do the cancellations in this messy expression here, after one line you would also get to the same formula. But this sort of illustrates that working with this formula for the variance, sometimes things work out a little faster. Finally, are we in business? Can we calculate the variance of the random variable X as well? Well, we have the rule that for independent random variables, the variance of the sum is the sum of the variances. So to find the variance of X, we just need to add the variances of the Xi's. We have n Xi's, and each one of them has variance p times (1-p), so the variance of X is n times p times (1-p). And we are done. So this way, we have calculated both the mean and the variance of the binomial random variable. It's interesting to look at this particular formula and see what it tells us. If you are to plot the variance of X as a function of p, it has this shape. And the maximum is here at 1/2. p times (1-p) is 0 when p is equal to 0 and when p is equal to 1. It's a quadratic, so it must have this particular shape. So what does it tell us? If you think about variance as a measure of uncertainty, it tells you that coin flips are most uncertain when your coin is fair. When p is equal to 1/2, that's when you have the most randomness. And this is kind of intuitive. If, on the other hand, I tell you that the coin is extremely biased, p very close to 1, which means it almost always gives you heads, then that would be a case of low variance. There's low variability in the results. There's little uncertainty about what's going to happen. It's going to be mostly heads with some occasional tails. So p equals 1/2. Fair coin, that's the coin which is the most uncertain of all coins, in some sense. And it corresponds to the biggest variance. It corresponds to an X that has the widest distribution. Now that we're on a roll and we can calculate such hugely complicated sums in simple ways, let us try to push our luck and do a problem with this flavor, but a little harder than that. So you go to one of those old-fashioned cocktail parties. All males at least will have those standard big hats which look identical. They check them in when they walk in. And when they walk out, since they look pretty identical, they just pick a random hat and go home. So n people, they pick their hats completely at random, quote, unquote, and then leave. And the question is to say something about the number of people who end up, by accident or by luck, getting back their own hat, the exact same hat that they checked in. OK, first what do we mean completely at random? Completely at random, we basically mean that any permutation of the hats is equally likely. Any way of distributing those n hats to the n people, any particular way is as likely as any other way. So there's complete symmetry between hats and people. So what we want to do is to calculate the expected value and the variance of this random variable X. Let's start with the expected value. Let's reuse the trick from the binomial case. So we're going to think of the total number of correct picks as a sum of (0, 1) random variables. X1 tells us whether person 1 got their own hat back. If they did, we record a 1. X2, the same thing. By adding all the Xi's, we count how many 1's we got, which is how many people selected their own hats.
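Before continuing with the hat problem, here is a quick simulation check of the binomial formulas just derived, E[X] = np and var(X) = np(1-p); the values n = 20 and p = 0.3 are arbitrary choices.

```python
import numpy as np

# For X = X_1 + ... + X_n with independent Bernoulli(p) indicators,
# E[X] = n p and var(X) = n p (1 - p).
rng = np.random.default_rng(0)
n, p, trials = 20, 0.3, 200_000

flips = rng.random((trials, n)) < p     # each row: n independent coin flips
X = flips.sum(axis=1)                   # number of successes in each trial

print(X.mean(), n * p)                  # ~6.0 vs 6.0
print(X.var(), n * p * (1 - p))         # ~4.2 vs 4.2
```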
So we broke down the random variable of interest, the number of people who get their own hats back, as a sum of random variables. And these random variables, again, are easy to handle, because they're binary. The only take two values. What's the probability that Xi is equal to 1, the i-th person has a probability that they get their own hat? There's n hats by symmetry. The chance is that they end up getting their own hat, as opposed to any one of the other n - 1 hats, is going to be 1/n. So what's the expected value of Xi? It's one times 1/n. With probability 1/n, you get your own hat, or you get a value of 0 with probability 1-1/n, which is 1/n. All right, so we got the expected value of the Xi's. And remember, we want to do is to calculate the expected value of X by using this decomposition? Are the random variables Xi independent of each other? You can try to answer that question by writing down a joint PMF for the X's, but I'm sure that you will not succeed. But can you think intuitively? If I tell you information about some of the Xi's, does it give you information about the remaining ones? Yeah. If I tell you that out of 10 people, 9 of them got their own hat back, does that tell you something about the 10th person? Yes. If 9 got their own hat, then the 10th must also have gotten their own hat back. So the first 9 random variables tell you something about the 10th one. And conveying information of this sort, that's the case of dependence. All right, so the random variables are not independent. Are we stuck? Can we still calculate the expected value of X? Yes, we can. And the reason we can is that expectations are linear. Expectation of a sum of random variables is the sum of the expectations. And that's always true. There's no independence assumption that's being used to apply that rule. So we have that the expected value of X is the sum of the expected value of the Xi's. And this is a property that's always true. You don't need independence. You don't care. So we're adding n terms, each one of which has expected value 1/n. And the final answer is 1. So out of the 100 people who selected hats at random, on the average, you expect only one of them to end up getting their own hat back. Very good. So since we are succeeding so far, let's try to see if we can succeed in calculating the variance as well. And of course, we will. But it's going to be a little more complicated. The reason it's going to be a little more complicated is because the Xi's are not independent, so the variance of the sum is not the same as the sum of the variances. So it's not enough to find the variances of the Xi's. We'll have to do more work. And here's what's involved. Let's start with the general formula for the variance, which, as I mentioned before, it's usually the simpler way to go about calculating variances. So we need to calculate the expected value for X-squared, and subtract from it the expectation squared. Well, we already found the expected value of X. It's equal to 1. So 1-squared gives us just 1. So we're left with the task of calculating the expected value of X-squared, the random variable X-squared. Let's try to follow the same idea. Write this messy random variable, X-squared, as a sum of hopefully simpler random variables. So X is the sum of the Xi's, so you square both sides of this. And then you expand the right-hand side. When you expand the right-hand side, you get the squares of the terms that appear here. And then you get all the cross-terms. 
For every pair of (i,j) that are different, i different than j, you're going to have a cross-term in the sum. So now, in order to calculate the expected value of X-squared, what does our task reduce to? It reduces to calculating the expected value of this term and calculating the expected value of that term. So let's do them one at a time. Expected value of Xi squared, what is it going to be? Same trick as before. Xi takes value 0 or 1, so Xi squared takes just the same values, 0 or 1. So that's the easy one. That's the same as the expected value of Xi, which we already know to be 1/n. So this gives us a first contribution down here. The expected value of this term is going to be what? We have n terms in the summation. And each one of these terms has an expectation of 1/n. So we did a piece of the puzzle. So now let's deal with the second piece of the puzzle. Let's find the expected value of Xi times Xj. Now by symmetry, the expected value of Xi times Xj is going to be the same no matter what i and j you see. So let's just think about X1 and X2 and try to find the expected value of X1 times X2. X1 times X2 is a random variable. What values does it take? Only 0 or 1. Since X1 and X2 are 0 or 1, their product can only take the values of 0 or 1. So to find the probability distribution of this random variable, it's just sufficient to find the probability that it takes the value of 1. Now, what does X1 times X2 equal to 1 mean? It means that X1 was 1 and X2 was 1. The only way that you can get a product of 1 is if both of them turned out to be 1's. So that's the same as saying, persons 1 and 2 both picked their own hats. The probability that person 1 and person 2 both pick their own hats is the probability of two things happening, which is the product of the probability of the first thing happening times the conditional probability of the second, given that the first happened. And in words, this is the probability that the first person picked their own hat times the probability that the second person picks their own hat, given that the first person already picked their own. So what's the probability that the first person picks their own hat? We know that it's 1/n. Now, how about the second person? If I tell you that one person has their own hat, and that person takes their hat and goes away, from the point of view of the second person, there's n - 1 people left looking at n - 1 hats. And they're getting just hats at random. What's the chance that I will get my own? It's 1/(n - 1). So think of it as person 1 goes, picks a hat at random, it happens to be their own, and then leaves. You're left with n - 1 people, and there are n - 1 hats out there. Person 2 goes and picks a hat at random, and with probability 1/(n - 1) is going to pick his own hat. So the expected value now of this random variable is, again, that same number, 1/n times 1/(n - 1), because this is a 0, 1 random variable. So this is the same as the expected value of Xi times Xj when i is different than j. So here, all that's left to do is to add the expectations of these terms. Each one of these terms has an expected value that's 1/n times 1/(n - 1). And how many terms do we have? How many of these are we adding up? It's n-squared - n. When you expand the quadratic, there's a total of n-squared terms. Some are self-terms, n of them. And the remaining number of terms is n-squared - n. So here we got n-squared - n terms. And so we need to multiply here with n-squared - n.
And after you realize that this number here is 1, and you realize that this is the same as the denominator, you get the answer that the expected value of X squared equals 2. And then, finally, going up to the top formula, the variance is the expected value of X squared minus 1, which is 2 - 1, so the variance is just equal to 1. So the variance of this random variable, the number of people who get their own hats back, is also equal to 1, equal to the mean. Looks like magic. Why is this the case? Well, there's a deeper explanation why these two numbers should come out to be the same. But this is something that would probably have to wait a couple of chapters before we could actually explain it. And so I'll stop here.
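For the curious, here is a short simulation of the hat problem that checks both answers; n = 10 is an arbitrary choice, and the claim is that the mean and the variance come out close to 1 for any n.

```python
import numpy as np

# Hat problem: n people receive a uniformly random permutation of their own
# hats; X counts how many get their own hat back.  Claim: E[X] = var(X) = 1.
rng = np.random.default_rng(0)
n, trials = 10, 100_000

matches = np.empty(trials)
for t in range(trials):
    perm = rng.permutation(n)                  # hat received by each person
    matches[t] = np.sum(perm == np.arange(n))  # how many fixed points

print(matches.mean())   # close to 1
print(matches.var())    # close to 1
```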
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Hypergeometric_Probabilities.txt
In this problem, we're given an urn with n balls in it, out of which m balls are red balls. To visualize it, we can draw a box that represents the set of all n balls. Somewhere in the middle or somewhere else we have a cut, such that to the left we have all the red balls (there are m), and non-red balls. Let's for now call it black balls. That is n minus m. Now, from this box, we are to draw k balls, and we'd like to know the probability that i out of those k balls are red balls. For the rest of the problem, we'll refer to this probability as p-r, where r stands for the red balls. So from this picture, we know that we're going to draw a subset of the balls, such that i of them are red, and the remaining k minus i are black. And we'll like to know what is the probability that this event would occur. To start, we define our sample space, omega, as the set of all ways to draw k balls out of n balls. We found a simple counting argument -- we know that size of our sample space has n-choose-k, which is the total number of ways to draw k balls out of n balls. Next, we'd like to know how many of those samples correspond to the event that we're interested in. In particular, we would like to know c, which is equal to the number of ways to get i red balls after we draw the k balls. To do so, we'll break c into a product of two numbers -- let's call it a times b -- where a is the total number of ways to select i red balls out of m red balls. So the number of ways to get i out of m red balls. Going back to the picture, this corresponds to the total number of ways to get these balls. And similarly, we define b as the total number of ways to get the remaining k minus i balls out of the set n minus m black balls. This corresponds to the total number of ways to select the subset right here in the right side of the box. Now as you can see, once we have a and b, we multiply them together, and this yields the total number of ways to get i red balls. To compute what these numbers are, we see that a is equal to m-choose-i number of ways to get i red balls, and b is n minus m, the total number of black balls, choose k minus i, the balls that are not red within those k balls. Now putting everything back, we have p-r, the probability we set out to compute, is equal to c, the size of the event, divided by the size of the entire sample space. From the previous calculations, we know that c is equal to a times b, which is then equal to m-choose-i times (n minus m)-choose-(k minus i). And on the denominator, we have the entire sample space is a size n-choose-k. And that completes our derivation. Now let's look at a numerical example of this problem. Here, let's say we have a deck of 52 cards. And we draw a box with n equals 52, out of which we know that there are 4 aces. So we'll call these the left side of the box, which is we have m equals 4 aces. Now if we were to draw seven cards-- call it k equal to 7-- and we'd like to know what is the probability that out of the 7 cards, we have 3 aces. Using the notation we did earlier, if we were to draw a circle representing the seven cards, we want to know what is the probability that we have 3 aces in the left side of the box and 4 non-aces for the remainder of the deck. In particular, we'll call i equal to 3. So by this point, we've cast the problem of drawing cards from the deck in the same way as we did earlier of drawing balls from an urn. And from the expression right here, which we computed earlier, we can readily compute the probability of having 3 aces. 
In particular, we just have to substitute into the expression right here the value of m equal to 4, n equal to 52, k equal to 7, finally, i equal to 3. So we have 4-choose-3 times n minus m, in this case would be 48, choose k minus i, will be 4, and on the denominator, we have 52 total number of cards, choosing 7 cards. That gives us [the] numerical answer [for] the probability of getting 3 aces when we draw 7 cards.
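Here is a small Python sketch of this formula, applied to the card example; math.comb computes the binomial coefficients.

```python
from math import comb

def hypergeometric_pmf(n, m, k, i):
    """Probability of drawing exactly i red balls when k balls are drawn
    without replacement from an urn of n balls, m of which are red."""
    return comb(m, i) * comb(n - m, k - i) / comb(n, k)

# The card example from the video: 52 cards, 4 aces, draw 7, want 3 aces.
p = hypergeometric_pmf(n=52, m=4, k=7, i=3)
print(p)   # roughly 0.0058
```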
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Convergence_in_Probability_Example.txt
In this problem, we're given a random variable X which has a uniform distribution in the interval negative 1 to 1. In other words, if we were to draw out the PDF of X, we see that in the interval negative 1 to 1, it has value 1/2. Now we're given a sequence of random variables X1, X2, and so on, where each Xi has the same distribution as X and different Xi's are independent. For part a, we would like to know if the sequence Xi converges to some number-- let's call it c-- in probability as i goes to infinity-- whether this is true. Let's first recall the definition of convergence in probability. If this does happen, then by definition, we'll have that for every epsilon greater than 0, the probability that Xi minus c, in absolute value, is greater than or equal to epsilon will go to 0 in the limit as i goes to infinity. In other words, with very high probability we would find Xi to be very concentrated around the number c, if we were to draw the PDF of Xi. Now, can this be true? Well, we know that each Xi simply has a uniform distribution over negative 1 to 1. It doesn't really change as we increase i. So intuitively, the concentration around any number c is not going to happen. So we should not expect convergence in probability in this sense. For part b, we would like to know whether the sequence Yi, defined as Xi divided by i, converges to anything in probability. Well, by just looking at the shape of Yi, we know that since the absolute value of Xi is less than 1, the absolute value of Yi is less than 1/i. So eventually, Yi gets very close to 0 as i goes to infinity. So it's safe to bet that maybe Yi will converge to 0 in probability. Let's see if this is indeed the case. The probability that Yi minus 0 is greater than or equal to epsilon in absolute value is equal to the probability that the absolute value of Yi is greater than or equal to epsilon. Now, we know that the absolute value of Yi is at most 1/i by the definition of Yi. And hence the probability right here is upper bounded by the probability that 1/i is greater than or equal to epsilon. Notice that in this expression, there is nothing random. i is simply a number. Hence this is either 1, if i is less than or equal to 1/epsilon, or 0, if i is greater than 1/epsilon. Now, this tells us that, as long as i is large enough-- big enough compared to 1/epsilon-- this quantity here is equal to 0. And that tells us that, in the limit as i goes to infinity, the probability of Yi deviating from 0 by epsilon or more goes to 0. And that shows that indeed, Yi converges to 0 in probability, because the expression right here, this limit, holds for all epsilon. Now, in the last part of the problem, we are looking at a sequence Zi defined by Xi raised to the i-th power. Again, since we know Xi is some number between negative 1 and 1, this number raised to the i-th power is likely to be very small-- small in the sense that it will have absolute value close to 0. So a safe guess would be that the sequence Zi converges to 0 as well, as i goes to infinity. How do we prove this formally? We'll start again with the probability that Zi stays away from 0 by epsilon or more and see how that evolves. And this is equal to the probability that the absolute value of Xi raised to the i-th power is greater than or equal to epsilon. Or again, by removing the absolute value, we can write this as the probability that Xi is less than or equal to negative epsilon raised to the 1 over i-th power, or Xi is greater than or equal to epsilon raised to the 1 over i-th power. So here, we'll divide into two cases, depending on the value of epsilon. In the first case, epsilon is greater than 1.
Well, if that's the case, then we know epsilon raised to some positive power is still greater than 1. But again, Xi cannot have any positive density be on the interval negative 1 or 1. And hence we know the probability above, which is Xi less than some number smaller than negative 1 or greater than some number bigger than 1 is 0. So that case is handled. Now let's look at a case where epsilon is less than 1, greater than 0. So in this case, epsilon to the 1/i will be less than 1. And it's not that difficult to check that since Xi has uniform density between negative 1 and 1 of magnitude 1/2, then the probability here was simply 2 times 1/2 times the distance between epsilon to the 1 over i-th power and 1. So in order to prove this quantity converge to 0, we simply have to justify why does epsilon to the 1/i converge to 1 as i goes to infinity. For that, we'll recall the properties of exponential functions. In particular, if a is a positive number and x is its exponent, if we were to take the limit as x goes to 0 and look at the value of a to the power of x, we see that this goes to 1. So in this case, we'll let a be equal to epsilon and x be equal to 1/i. As we can see that as i goes to infinity, the value of x, which is 1/i, does go to 0. And therefore, in the limit i going to infinity, the value of epsilon to the 1 over i-th power goes to 1. And that shows if we plug this limit into the expression right here that indeed, the term right here goes to 0 as i goes to infinity. And all in all, this implies the probability of Zi minus 0 absolute value greater equal to epsilon in the limit of i going to infinity converges to 0 for all positive epsilon. And that completes our proof that indeed, Zi converges to 0 in probability.
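Here is a short simulation that illustrates all three parts: the probability that Xi, Yi, or Zi is at least epsilon away from 0, estimated from many samples, for a few values of i. The choices epsilon = 0.1 and the particular i's are arbitrary.

```python
import numpy as np

# X_i ~ Uniform(-1, 1), Y_i = X_i / i, Z_i = X_i ** i.
# For Y_i and Z_i the deviation probability should drop to 0 as i grows,
# while for X_i it should not.
rng = np.random.default_rng(0)
eps, samples = 0.1, 200_000

for i in [1, 5, 20, 80]:
    X = rng.uniform(-1.0, 1.0, size=samples)
    pX = np.mean(np.abs(X) >= eps)
    pY = np.mean(np.abs(X / i) >= eps)
    pZ = np.mean(np.abs(X ** i) >= eps)
    print(f"i={i:3d}  P(|X_i|>=eps)={pX:.3f}  "
          f"P(|Y_i|>=eps)={pY:.3f}  P(|Z_i|>=eps)={pZ:.3f}")
```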
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Widgets_and_Crates.txt
Hi. In this problem, we'll get more practice using conditioning to help us calculate expectations of variances. We'll see that in this problem, which deals with widgets and crates, it's actually similar in flavor to an earlier problem that we did, involving breaking a stick twice. And you'll see that in this problem, we'll again use the law of iterated expectations and the law of total variance to help us calculate expectations of variances. And again, we'll be taking the approach of attacking the problem by splitting into the stages and building up from the bottom up. So in this problem, what we have is a crate, which contains some number of boxes. And we don't know how many boxes are. It's random. And it's given by some discrete random variable, n. And in each box, there are some number of widgets. And again, this is also random. And in each box, say for Box I, there are xi number of widgets in each one. What we're really interested in in this problem is, how many widgets are there total in this crate? So in the crate, there are boxes, and in the boxes, there are widgets. How many widgets are there total within the crate? And we'll call that a random variable, t. And the problem gives us some information. It tells us that the expectation of the number of widgets in each box for all the boxes is the same. It's 10. And also, the expectation of the number of boxes is also 10. And furthermore, the variance of x of the number of widgets and the number of boxes is all 16. And lastly, an important fact is that all the xi's, so all the widgets for each box, and the total number of boxes, these random variables are all independent. So to calculate t, t is just a sum of x1 through xn. So x1 is the number of widgets in Box 1, z2 is the number of widgets in Box 2, and all the way through Box n. So what makes this difficult is that the n is actually random. We don't actually know how many boxes there are. So we don't even know how many terms there are in the sum. Well, let's take a slightly simpler problem. Let's pretend that we actually know there are exactly 12 boxes. And in that case, the only thing that's random now is how many widgets there are in each box. And so let's call [? sum ?] a new random variable, s, the sum of x1 through x12. So this would tell us, this is the number of widgets in 12 boxes. All right. And because each of these xi's are independent, and they have the same expectation, just by linearity of expectations, we know that the expectation of s is just 12 copies of the same expectation of xi. And similarly, because we also assume that all the xi's are independent, the variance of s, we can just add the variances of each of these terms. So again, there are 12 copies of the variance of xi. So we've done a simpler version of this problem, where we've assumed we know what n is, that n is 12. And we've seen that in this simpler case, it's pretty simple to calculate what the expectation of the sum is. So let's try to use that knowledge to help us calculate the actual problem, where n is actually random. So what we'll do is use the law of iterated expectations. And so this is written in terms of x and y, but we can very easily just substitute in for the random variables that we care about. Where in this case, what we see is that in order to build things up, it would be helpful if we condition on something that is useful. And in this case, it's fairly clear that it would be helpful if we condition on n, the number of boxes. 
So if we knew how many boxes there were, then we can drop down to the level of widgets within each box. And then once we have that, we can build up and average over the total number of boxes. So what we should do is condition on n, the number of boxes. So what have we discovered through this simpler exercise earlier? Well, we've discovered that if we knew the number of boxes, then the expectation of the total number of widgets is just the number of boxes times the number of widgets in each one, or the expectation of the number of widgets in each one. So we can use that information to help us here. Because now, this is basically the same scenario, except that the number of boxes is now random. Instead of being 12, it could be anything. But if we just condition on the number of boxes being equal to n, then we know that there are exactly n copies of this. But notice that n here is still random. And so what we get is that the expectation is n times the expectation of the number of widgets in each box, which we know is 10. So it's expectation of 10 times n or 10 times the expectation of n, which gives us 100. Because there are, on expectation, 10 boxes. So this, again, makes intuitive sense. Because we know that on average, there are 10 boxes. And on average, each box has 10 widgets inside. And so on average, we expect that there will be 100 widgets. And the key thing here is that we actually relied on this independence. So if the number of widgets in each box vary depending on-- or if the distribution of the number of widgets in each box vary depending on how many boxes there were, then we wouldn't be able to do it this simply. OK, so that gives us the answer to the first part, the expectation of the total number of widgets. Now let's do the second part, which is the variance. The variance, we'll again use this idea of conditioning and splitting things up, and use the law of total variance. So the variance of t is going to be equal to the expectation of the conditional variance plus the variance of the conditional expectation. So what we have to do now is just to calculate what all of these pieces are. So let's start with this thing here, the conditional variance. So what is the conditional variance? Well, again, let's go back to our simpler case. We know that if we knew what n is, then the variance would just be n times the variance of each xi. So what does that tell us? That tells us that, well, if we knew what n was, so condition on n, the variance would just be n times the variance of each xi. So we've just taken this analogy and generalized it to the case where we don't actually know what n is. We just condition on n, and we still have a random variable. So then from that, we know that the expectation now, to get this first term, take the expectation of this conditional variance, it's just the expectation of n and the variance of xi, we're given that. That's equal to 16. So it's n times 16, which we know is 160, because the expectation of n, we also know, is 10. All right, let's do this second term now. We need the variance of the conditional expectation of t given n. Well, what is the conditional expectation of t given n? We've already kind of used that here. And again, it's using the fact that if we knew what n was, the expectation would just be n times the expectation of the number of widgets in each box. So it would be n times the expectation of each xi. Now, to get the second term, we just take the variance of this. 
So the variance is the variance of n times the expectation of each xi. And the expectation of each xi is 10. So it's n times 10. And now remember, when you calculate variances, if you have a constant term inside, when you pull it out, you have to square it. So you get 100 times the variance of n. And we know that the variance of n is also 16. So this gives us 1600. All right. So now we've calculated both terms here. The first term is equal to 160. The second term is equal to 1600. So to get the final answer, all we have to do is add these up. So we get that the final answer is equal to 1760. And this is not as obvious as the expectation, where you could have just kind of guessed that it was equal to 100. So again, this was just another example of using conditioning and the laws of total variance and iterated expectations in order to help you solve a problem. And in this case, you could kind of see that there is a hierarchy, where you start with widgets. Widgets are contained in boxes, and then crates contain some number of boxes. And so it's easy to just condition and do it level by level. So you condition on the number of boxes. If you know what the number of boxes is, then you can easily calculate how many widgets there are, on average. And then you average over the number of boxes to get the final answer. So I hope that was helpful. And we'll see you next time.
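As a closing check on the numbers above, here is a small simulation sketch (not part of the original recitation). It assumes, purely for illustration, that both the number of boxes and each widget count take the values 6 or 14 with equal probability-- one hypothetical choice of distribution with mean 10 and variance 16; any distributions with those moments would give the same answers of 100 and 1760.

```python
import numpy as np

# Hypothetical distribution: the values 6 or 14 with probability 1/2 each
# have mean 10 and variance 16, matching the moments given in the problem.
rng = np.random.default_rng(0)

def draw(size=None):
    return rng.choice([6, 14], size=size)

num_trials = 200_000
totals = np.empty(num_trials)
for trial in range(num_trials):
    n = draw()                # random number of boxes
    widgets = draw(size=n)    # widgets in each box (iid, independent of n)
    totals[trial] = widgets.sum()

print("simulated E[T]   =", totals.mean())  # close to 10 * 10 = 100
print("simulated var(T) =", totals.var())   # close to 10*16 + 10^2*16 = 1760
```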
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Random_Walker.txt
In this problem, we'll be working with an object called a random walk, where we have a person on a line-- or a tightrope, according to the problem. The person starts from the origin, and at each time step, randomly either goes forward or backward with a certain probability. In our case, with probability P, the person goes forward, and with probability 1 minus P, backward. Now, the walk is random in the following sense-- the choice of going forward or backward in each step is random, and it's completely independent from all past history. So let's look at the problem. It has three parts. In the first part, we'd like to know, what's the probability that after two steps the person returns to the starting point, which in this case is 0? Now, throughout this problem, I'm going to be using the following notation. F indicates the action of going forward and B indicates the action of going backward. A sequence such as F followed by B means the outcome where the person first goes forward, and then backward. If I add another F, it will mean forward, backward, then forward again. OK? So in order for the person to go somewhere after two steps and return to the origin, the following must happen. Either the person went forward followed by backward, or backward followed by forward. And indeed, this event-- namely, the union of these two possibilities-- defines the event of interest in our case. And we'd like to know the probability of A, which we'll break down into the probability of forward, backward and the probability of backward, forward. Now, forward, backward and backward, forward-- they are two completely different outcomes. And because they're disjoint, this would just be the sum of the two probabilities-- the probability of forward/backward plus the probability of backward/forward. Here's where the independence will come in. When we try to compute the probability of going forward and then backward, because each step is completely independent from the past, we know this is the same as saying, in the first step, we have probability P of going forward, and in the next step, probability 1 minus P of going backward. We can do so-- namely, writing the probability of forward, backward as the probability of going forward times the probability of going backward-- because these actions are independent. And similarly, for the second one, we have the probability of going backward first, times the probability of going forward the second time. Adding these two up, we have 2 times P times 1 minus P. And that will be the answer to the first part of the problem. In the second part of the problem, we're interested in the probability that after three steps, the person ends up in position 1, or one step forward compared to where he started. Now, the only possibilities here are that among the three steps, exactly two steps are forward, and one step is backward, because otherwise there's no way the person will end up in position 1. To do so, there are, again, three possibilities, in which we go forward, forward, backward, or forward, backward, forward, or backward, forward, forward. And that exhausts all the possibilities for the person to end up in position 1 after three steps. And we'll define the collection of all these outcomes as event C. The probability of event C-- same as before-- is simply the sum of the probabilities of each individual outcome. Now, based on the independence assumption that we used before, each outcome here has the same probability, which is equal to P squared times 1 minus P.
The P squared comes from the fact that two forward steps are taken, and 1 minus P is the probability of the one backward step. And since there are three of them, we multiply by 3 in front, and that will give us the probability. In the last part of the problem, we're asked to compute, conditional on event C having already taken place, the probability that the first step he took was a forward step. Without going into the details, let's take a look at C, in which we have three elements, and only the first two elements correspond to a forward step in the first step. So we can define event D as simply the first two outcomes-- forward, forward, backward, and forward, backward, forward. Now, the probability we're interested in is simply the probability of D conditional on C. We write it out using the definition of conditional probability-- the probability of D intersection C divided by the probability of C. Now, because D is a subset of C, we have the probability of D divided by the probability of C. Again, because all outcomes here have the same probability, all we need to do is to count the number of outcomes here, which is 2, and divide by the number of outcomes here, which is 3. So we end up with 2 over 3. And that concludes the problem. See you next time.
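As a quick sanity check on these three answers, here is a short sketch (not part of the original problem) that enumerates every path explicitly; the value p = 0.6 is an arbitrary illustrative choice.

```python
from itertools import product

p = 0.6  # illustrative probability of a forward step

def path_prob(path):
    # Steps are independent: multiply p for each 'F' and (1 - p) for each 'B'.
    prob = 1.0
    for step in path:
        prob *= p if step == 'F' else (1 - p)
    return prob

# Part 1: back at the origin after two steps (exactly one forward step).
p_A = sum(path_prob(w) for w in product('FB', repeat=2) if w.count('F') == 1)
print(p_A, 2 * p * (1 - p))        # the two values agree

# Part 2: at position +1 after three steps (exactly two forward steps).
p_C = sum(path_prob(w) for w in product('FB', repeat=3) if w.count('F') == 2)
print(p_C, 3 * p**2 * (1 - p))     # the two values agree

# Part 3: given C, the probability that the first step was forward.
p_DC = sum(path_prob(w) for w in product('FB', repeat=3)
           if w.count('F') == 2 and w[0] == 'F')
print(p_DC / p_C)                  # 2/3, independent of p
```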
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Random_Incidence_Under_Erlang_Arrivals.txt
Hi. In this problem, we're going to look at random incidence under Erlang arrivals. First, let's parse what that means. In a Poisson process, remember, the time between arrivals, or the inter-arrival time, is distributed as an exponential random variable. And random incidence for a Poisson process refers to the somewhat surprising result that when you consider a specific time, say, T-star, then the length of the inter-arrival interval that contains that time T-star is not distributed as an exponential random variable. It's actually distributed as an Erlang random variable of order 2-- that is, it's distributed as a sum of two exponential random variables. And the reason for that is that it comprises two parts. One is the time since the last arrival until T-star, which is exponentially distributed, and the other is the time from T-star until the next arrival, which is also exponentially distributed. So that brings us to a review of what Erlang random variables are. An Erlang random variable of order k is just the sum of k independent and identically distributed exponential random variables. So to be more specific, if Ti is an exponential random variable with parameter lambda, then if you take k iid copies of Ti and add them up, and call that Yk, then Yk is an Erlang random variable of order k. And one other fact is that the mean of Yk, the mean of an Erlang random variable of order k, is just k, the order, over lambda, which is the rate of the underlying exponential random variables. So as a consequence, if you have an Erlang random variable of order two, and that random variable also has a mean of two over lambda, we can interpret that random variable as just being the sum of two exponential random variables-- two iid exponential random variables, T1 and T2, where each one is exponential with rate lambda. OK, so in this problem now, we're dealing with random incidence not under a Poisson process, but under something else, which we call here an Erlang process, with Erlang arrival times. So to be more specific, what we're saying is that, instead of the inter-arrival times being exponentially distributed, in this process an inter-arrival time is actually distributed as an Erlang random variable of order 2 with mean 2 over lambda. So to be explicit, this is no longer a Poisson process. It's some other process, because the inter-arrival times are not exponential. So let's make use of this fact that we talked about earlier, because now we know that the inter-arrival times of this Erlang process are Erlang of order 2 with mean 2 over lambda. But we know that that can just be re-interpreted as a sum of two simple exponentials, each with parameter lambda. So let's just draw another picture and imagine that for each of these arrivals, so say we have three sample arrivals in this Erlang process, we can fill in, kind of, the gaps between these with additional arrivals. And then think of each one of these times as all being exponential with parameter lambda. So this is a valid interpretation because when we connect these, these inter-arrival times correspond to the combination of two of the smaller inter-arrival times, which we know we can split into just two exponentials. So each one of these is an exponential random variable. And when you combine them, you get an Erlang of order 2.
But the nice thing about this is that if we look at this diagram, it actually is just exactly a Poisson process with rate lambda, because now the inter-arrival times are exactly exponential random variables. And so this is, in fact, now just a simple Poisson process. And we can also just think of it as: we take the Poisson process, and take every other arrival, say, all the even-numbered arrivals, and make those correspond to arrivals in the Erlang process. OK, so now let's think about some specific time T-star. We want to know, what is the distribution of the length of the specific inter-arrival interval that T-star is in? Well, what we can do is take it down to the level of this Poisson process and look at it from there. We do that because, for a Poisson process, we know about random incidence for Poisson processes. And we know how to deal with Poisson processes. So let's think about this now. Well, T-star is here. And what we know from random incidence for Poisson processes is that the length of this inter-arrival interval for the Poisson process, we know that this is an exponential plus an exponential. So combined, this is an Erlang of order 2. But that only covers from here to here. And what we want is actually from here to there. Well now, we tack on an extra exponential, because we know that the inter-arrival time-- the time between this arrival and that arrival in the Poisson process-- is just another exponential. And now all of these are disjoint time intervals. And they're all independent. And so the length of this inter-arrival interval in the Erlang process is just going to be the sum of three independent exponentials within the underlying Poisson process. And so the answer here is actually that it's going to be an Erlang of order 3. Now this is one possible scenario for how this might occur. Another scenario is actually that T-star is somewhere else. So let's draw this again. And suppose now, in this case, T-star landed between an even-numbered arrival in the Poisson process and an odd-numbered arrival. Now it could also land between an odd-numbered and an even-numbered arrival. So it could be that T-star is actually here. Well, but in this case, it's actually more or less the same thing, because now what we want is the length of this entire inter-arrival interval, which, in the Poisson world, we can break down into random incidence within this interval, this inter-arrival interval, which is two exponentials, or an Erlang of 2, plus this interval, which is just a standard inter-arrival time, which is another exponential. So in this case as well, we have the sum of three independent exponential random variables. And so, in either case, we have that the inter-arrival time in the Erlang process is an Erlang of order 3. And so the final answer is, in fact, that the inter-arrival interval for random incidence under these Erlang-type arrivals is an Erlang of order 3. OK, so in this problem we looked at random incidence under a different type of arrival process, not Poisson, but with Erlang inter-arrival times. But we used the insight that an Erlang random variable really can be re-interpreted as the sum of independent and identically distributed exponential random variables. And those exponential random variables can be viewed as one way of interpreting and building a Poisson process.
And so by going through those steps, we were able to use what we know about random incidence under Poisson processes to help us solve this problem of random incidence under Erlang arrivals. So I hope that was helpful. And I'll see you next time.
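Here is a small simulation sketch of the same result (not part of the original recitation). It builds the underlying Poisson process from iid exponentials, keeps every other arrival as an arrival of the Erlang process, and measures the length of the Erlang inter-arrival interval containing a fixed time t-star. The rate, the inspection time, and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0         # rate of the underlying exponentials (illustrative)
t_star = 50.0     # fixed inspection time, well away from 0 (illustrative)
num_trials = 100_000

lengths = []
for _ in range(num_trials):
    # Arrival times of the underlying Poisson process with rate lam.
    arrivals = np.cumsum(rng.exponential(1 / lam, size=200))
    # Every other arrival is an arrival of the Erlang process.
    erlang_arrivals = arrivals[1::2]
    # Locate the Erlang inter-arrival interval that contains t_star.
    k = np.searchsorted(erlang_arrivals, t_star)
    left = erlang_arrivals[k - 1] if k > 0 else 0.0
    lengths.append(erlang_arrivals[k] - left)

print("mean length of the interval containing t*:", np.mean(lengths))  # about 3/lam
print("mean of a plain Erlang-2 inter-arrival time:", 2 / lam)         # for comparison
```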
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
16_Markov_Chains_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we're going to start now with a new chapter. We're going to talk about Markov processes. The good news is that this is a subject that is a lot more intuitive and simple in many ways than, let's say, the Poisson processes. So hopefully this will be enjoyable. So Markov processes is, a general class of random processes. In some sense, it's more elaborate than the Bernoulli and Poisson processes, because now we're going to have dependencies between difference times, instead of having memoryless processes. So the basic idea is the following. In physics, for example, you write down equations for how a system evolves that has the general form. The new state of a system one second later is some function of old state. So Newton's equations and all that in physics allow you to write equations of this kind. And so if that a particle is moving at a certain velocity and it's at some location, you can predict when it's going to be a little later. Markov processes have the same flavor, except that there's also some randomness thrown inside the equation. So that's what Markov process essentially is. It describes the evolution of the system, or some variables, but in the presence of some noise so that the motion itself is a bit random. So this is a pretty general framework. So pretty much any useful or interesting random process that you can think about, you can always described it as a Markov process if you define properly the notion of the state. So what we're going to do is we're going to introduce the class of Markov processes by, example, by talking about the checkout counter in a supermarket. Then we're going to abstract from our example so that we get a more general definition. And then we're going to do a few things, such as how to predict what's going to happen n time steps later, if we start at the particular state. And then talk a little bit about some structural properties of Markov processes or Markov chains. So here's our example. You go to the checkout counter at the supermarket, and you stand there and watch the customers who come. So customers come, they get in queue, and customers get served one at a time. So the discussion is going to be in terms of supermarket checkout counters, but the same story applies to any service system. You may have a server, jobs arrive to that server, they get put into the queue, and the server processes those jobs one at a time. Now to make a probabilistic model, we need to make some assumption about the customer arrivals and the customer departures. And we want to keep things as simple as possible to get started. So let's assume that customers arrive according to a Bernoulli process with some parameter b. So essentially, that's the same as the assumption that the time between consecutive customer arrivals is a geometric random variable with parameter b. Another way of thinking about the arrival process-- that's not how it happens, but it's helpful, mathematically, is to think of someone who's flipping a coin with bias equal to b. And whenever the coin lands heads, then a customer arrives. So it's as if there's a coin flip being done by nature that decides the arrivals of the customers. 
So we know that coin flipping to determine the customer arrivals is the same as having geometric inter-arrival times. We know that from our study of the Bernoulli process. OK. And now how about the customer service times. We're going to assume that-- OK. If there is no customer in queue, no one being served, then of course, no one is going to depart from the queue. But if there a customer in queue, then that customer starts being served, and is going to be served for a random amount of time. And we make the assumption that the time it takes for the clerk to serve the customer has a geometric distribution with some known parameter q. So the time it takes to serve a customer is random, because it's random how many items they got in their cart, and how many coupons they have to unload and so on. So it's random. In the real world, it has some probability distribution. Let's not care exactly about what it would be in the real world, but as a modeling approximation or just to get started, let's pretend that customer service time are well described by a geometric distribution, with a parameter q. An equivalent way of thinking about the customer service, mathematically, would be, again, in terms of coin flipping. That is, the clerk has a coin with a bias, and at each time slot the clerk flips the coin. With probability q, service is over. With probability 1-q, you continue the service process. An assumption that we're going to make is that the coin flips that happen here to determine the arrivals, they're all independent of each other. The coin flips that determine the end of service are also independent from each other. But also the coin flips involved here are independent from the coin flips that happened there. So how arrivals happen is independent with what happens at the service process. OK. So suppose now you want to answer a question such as the following. The time is 7:00 PM. What's the probability that the customer will be departing at this particular time? Well, you say, it depends. If the queue is empty at that time, then you're certain that you're not going to have a customer departure. But if the queue is not empty, then there is probability q that a departure will happen at that time. So the answer to a question like this has something to do with the state of the system at that time. It depends what the queue is. And if I ask you, will the queue be empty at 7:10? Well, the answer to that question depends on whether at 7 o'clock whether the queue was huge or not. So knowing something about the state of the queue right now gives me relevant information about what may happen in the future. So what is the state of the system? Therefore we're brought to start using this term. So the state basically corresponds to anything that's relevant. Anything that's happening right now that's kind of relevant to what may happen in the future. Knowing the size of the queue right now, is useful information for me to make predictions about what may happen 2 minutes later from now. So in this particular example, a reasonable choice for the state is to just count how many customers we have in the queue. And let's assume that our supermarket building is not too big, so it can only hold 10 people. So we're going to limit the states. Instead of going from 0 to infinity, we're going to truncate our model at ten. So we have 11 possible states, corresponding to 0 customers in queue, 1 customer in queue, 2 customers, and so on, all the way up to 10. 
So these are the different possible states of the system, assuming that the store cannot handle more than 10 customers. So this is the first step, to write down the set of possible states for our system. Then the next thing to do is to start describing the possible transitions between the states. At any given time step, what are the things that can happen? We can have a customer arrival, which moves the state 1 higher. We can have a customer departure, which moves the state 1 lower. There's a possibility that nothing happens, in which case the state stays the same. And there's also the possibility of having simultaneously an arrival and a departure, in which case the state again stays the same. So let's write some representative probabilities. If we have 2 customers, the probability that during this step we go down, this is the probability that we have a service completion, but to no customer arrival. So this is the probability associated with this transition. The other possibility is that there's a customer arrival, which happens with probability p, and we do not have a customer departure, and so the probability of that particular transition is this number. And then finally, the probability that we stay in the same state, this can happen in 2 possible ways. One way is that we have an arrival and a departure simultaneously. And the other possibility is that we have no arrival and no departure, so that the state stays the same. So these transition probabilities would be the same starting from any other states, state 3, or state 9, and so on. Transition probabilities become a little different at the borders, at the boundaries of this diagram, because if you're in a state 0, then you cannot have any customer departures. There's no one to be served, but there is a probability p that the customer arrives, in which case the number of customers in the system goes to 1. Then probability 1-p, nothing happens. Similarly with departures, if the system is full, there's no room for another arrival. But we may have a departure that happens with probability q, and nothing happens with probability 1-q. So this is the full transition diagram annotated with transition probabilities. And this is a complete description of a discrete time, finite state Markov chain. So this is a complete probabilistic model. Once you have all of these pieces of information, you can start calculating things, and trying to predict what's going to happen in the future. Now let us abstract from this example and come up with a more general definition. So we have this concept of the state which describes the current situation in the system that we're looking at. The current state is random, so we're going to think of it as a random variable Xn is the state, and transitions after the system started operating. So the system starts operating at some initial state X0, and after n transitions, it moves to state Xn. Now we have a set of possible states. State 1 state 2, state 3, and in general, state i and state j. To keep things simple, we assume that the set of possible states is a finite set. As you can imagine, we can have systems in which the state space is going to be infinite. It could be discrete, or continuous. But all that is more difficult and more complicated. It makes sense to start from the simplest possible setting where we just deal with the finite state space. And time is discrete, so we can think of this state in the beginning, after 1 transition, 2 transitions, and so on. 
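As a side note to the checkout-counter example just described, here is a sketch of how its transition matrix could be built numerically; the values p = 0.3 and q = 0.5 are illustrative assumptions rather than numbers from the lecture.

```python
import numpy as np

def checkout_chain(p, q, max_customers=10):
    """Transition matrix for the checkout-counter chain sketched above.

    States 0..max_customers count customers in the queue; p is the per-slot
    arrival probability and q is the per-slot service-completion probability.
    """
    m = max_customers
    P = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        if i == 0:
            up, down = p, 0.0        # empty queue: only an arrival can happen
        elif i == m:
            up, down = 0.0, q        # full queue: only a departure can happen
        else:
            up = p * (1 - q)         # arrival and no departure
            down = q * (1 - p)       # departure and no arrival
        if i < m:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1.0 - up - down    # nothing happens, or both happen at once
    return P

P = checkout_chain(p=0.3, q=0.5)     # illustrative parameter values
print(P.sum(axis=1))                 # every row sums to 1
```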
So we're in discrete time and we have finite in many states. So the system starts somewhere, and at every time step, the state is, let's say, here. A whistle blows, and the state jumps to a random next state. So it may move here, or it may move there, or it may move here, or it might stay in the place. So one possible transition is the transition before you jump, and just land in the same place where you started from. Now we want to describe the statistics of these transitions. If I am at that state, how likely is it to that, next time, I'm going to find myself at that state? Well, we describe the statistics of this transition by writing down a transition probability, the transition probability of going from state 3 to state 1. So this transition probability is to be thought of as a conditional probability. Given that right now I am at state i what is the probability that next time I find myself at state j? So given that right now I am at state 3, P31 is the probability that the next time I'm going to find myself at state 1. Similarly here, we would have a probability P3i, which is the probability that given that right now I'm at state 3, next time I'm going to find myself at state i. Now one can write such conditional probabilities down in principle, but we need to make-- so you might think of this as a definition here, but we need to make one additional big assumption, and this is the assumption that to make a process to be a Markov process. This is the so-called Markov property, and here's what it says. Let me describe it first in words here. Every time that I find myself at state 3, the probability that next time I'm going to find myself at state 1 is this particular number, no matter how I got there. That is, this transition probability is not affected by the past of the process. It doesn't care about what path I used to find myself at state 3. Mathematically, it means the following. You have this transition probability that from state i jump to state j. Suppose that I gave you some additional information, that I told you everything else that happened in the past of the process, everything that happened, how did you get to state i? The assumption we're making is that this information about the past has no bearing in making predictions about the future, as long as you know where you are right now. So if I tell you, right now, you are at state i, and by the way, you got there by following a particular path, you can ignore the extra information of the particular path that you followed. You only take into account where you are right now. So every time you find yourself at that state, no matter how you got there, you will find yourself next time at state 1 with probability P31. So the past has no bearing into the future, as long as you know where you are sitting right now. For this property to happen, you need to choose your state carefully in the right way. In that sense, the states needs to include any information that's relevant about the future of the system. Anything that's not in the state is not going to play a role, but the state needs to have all the information that's relevant in determining what kind of transitions are going to happen next. So to take an example, before you go to Markov process, just from the deterministic world, if you have a ball that's flying up in the air, and you want to make predictions about the future. 
If I tell you that the state of the ball is the position of the ball at the particular time, is that enough for you to make predictions where the ball is going to go next? No. You need to know both the position and the velocity. If you know position and velocity, you can make predictions about the future. So the state of a ball that's flying is position together with velocity. If you were to just take position, that would not be enough information, because if I tell you current position, and then I tell you past position, you could use the information from the past position to complete the trajectory and to make the prediction. So information from the past is useful if you don't know the velocity. But if both position and velocity, you don't care how you got there, or what time you started. From position and velocity, you can make predictions about the future. So there's a certain art, or a certain element of thinking, a non-mechanical aspect into problems of this kind, to figure out which is the right state variable. When you define the state of your system, you need to define it in such a way that includes all information that has been accumulated that has some relevance for the future. So the general process for coming up with a Markov model is to first make this big decision of what your state variable is going to be. Then you write down if it may be a picture of the different states. Then you identify the possible transitions. So sometimes the diagram that you're going to have will not include all the possible arcs. You would only show those arcs that correspond to transitions that are possible. For example, in the supermarket example, we did not have a transition from state 2 to state 5, because that cannot happen. You can only have 1 arrival at any time. So in the diagram, we only showed the possible transitions. And for each of the possible transitions, then you work with the description of the model to figure out the correct transition probability. So you got the diagram by writing down transition probabilities. OK, so suppose you got your Markov model. What will you do with it? Well, what do we need models for? We need models in order to make predictions, to make probabilistic predictions. So for example, I tell you that the process started in that state. You let it run for some time. Where do you think it's going to be 10 time steps from now? That's a question that you might want to answer. Since the process is random, there's no way for you to tell me exactly where it's going to be. But maybe you can give me probabilities. You can tell me, with so much probability, the state would be there. With so much probability, the state would be there, and so on. So our first exercise is to calculate those probabilities about what may happen to the process a number of steps in the future. It's handy to have some notation in here. So somebody tells us that this process starts at the particular state i. We let the process run for n transitions. It may land at some state j, but that state j at which it's going to land is going to be random. So we want to give probabilities. Tell me, with what probability the state, n times steps later, is going to be that particular state j? The shorthand notation is to use this symbol here for the n-step transition probabilities that you find yourself at state j given that you started at state i. So the way these two indices are ordered, the way to think about them is that from i, you go to j. 
So the probability that from i you go to j if you have n steps in front of you. Some of these transition probabilities are, of course easy to write. For example, in 0 transitions, you're going to be exactly where you started. So this probability is going to be equal to 1 if i is equal to j, And 0 if i is different than j. That's an easy one to write down. If you have only 1 transition, what's the probability that 1 step later you find yourself in state j given that you started at state i? What is this? These are just the ordinary 1-step transition probabilities that we are given in the description of the problem. So by definition, the 1-step transition probabilities are of this form. This equality is correct just because of the way that we defined those two quantities. Now we want to say something about the n-step transition probabilities when n is a bigger number. OK. So here, we're going to use the total probability theorem. So we're going to condition in two different scenarios, and break up the calculation of this quantity, by considering the different ways that this event can happen. So what is the event of interest? The event of interest is the following. At time 0 we start i. We are interested in landing at time n at the particular state j. Now this event can happen in several different ways, in lots of different ways. But let us group them into subgroups. One group, or one sort of scenario, is the following. During the first n-1 time steps, things happen, and somehow you end up at state 1. And then from state 1, in the next time step you make a transition to state j. This particular arc here actually corresponds to lots and lots of different possible scenarios, or different spots, or different transitions. In n-1 time steps, there's lots of possible ways by which you could end up at state 1. Different paths through the state space. But all of them together collectively have a probability, which is the (n-1)-step transition probability, that from state i, you end up at state 1 And then there's other possible scenarios. Perhaps in the first n-1 time steps, you follow the trajectory that took you at state m. And then from state m, you did this transition, and you ended up at state j. So this diagram breaks up the set of all possible trajectories from i to j into different collections, where each collection has to do with which one happens to be the state just before the last time step, just before time n. And we're going to condition on the state at time n-1. So the total probability of ending up at state j is the sum of the probabilities of the different scenarios -- the different ways that you can get to state j. If we look at that type of scenario, what's the probability of that scenario happening? With probability Ri1(n-1), I find myself at state 1 at time n-1. This is just by the definition of these multi-step transition probabilities. This is the number of transitions. The probability that from state i, I end up at state 1. And then given that I found myself at state 1, with probability P1j, that's the transition probability, next time I'm going to find myself at state j. So the product of these two is the total probability of my getting from state i to state j through state 1 at the time before. Now where exactly did we use the Markov assumption here? No matter which particular path we used to get from i to state 1, the probability that next I'm going to make this transition is that same number, P1j. 
So that number does not depend on the particular path that I followed in order to get there. If we didn't have the Markov assumption, we should have considered all possible individual trajectories here, and then we would need to use the transition probability that corresponds to that particular trajectory. But because of the Markov assumption, the only thing that matters is that right now we are at state 1. It does not matter how we got there. So now once you see this scenario, then this scenario, and that scenario, and you add the probabilities of these different scenarios, you end up with this formula here, which is a recursion. It tells us that once you have computed the (n-1)-step transition probabilities, then you can compute also the n-step transition probabilities. This is a recursion that you execute or you run for all i's and j's simultaneously. That is fixed. And for a particular n, you calculate this quantity for all possible i's, j's, k's. You have all of those quantities, and then you use this equation to find those numbers again for all the possible i's and j's. Now this is formula which is always true, and there's a big idea behind the formula. And now there's variations of this formula, depending on whether you're interested in something that's slightly different. So for example, if you were to have a random initial state, somebody gives you the probability distribution of the initial state, so you're told that with probability such and such, you're going to start at state 1. With that probability, you're going to start at state 2, and so on. And you want to find the probability at the time n you find yourself at state j. Well again, total probability theorem, you condition on the initial state. With this probability you find yourself at that particular initial state, and given that this is your initial state, this is the probability that n time steps later you find yourself at state j. Now building again on the same idea, you can run every recursion of this kind by conditioning at different times. So here's a variation. You start at state i. After 1 time step, you find yourself at state 1, with probability pi1, and you find yourself at state m with probability Pim. And once that happens, then you're going to follow some trajectories. And there is a possibility that you're going to end up at state j after n-1 time steps. This scenario can happen in many possible ways. There's lots of possible paths from state 1 to state j. There's many paths from state 1 to state j. What is the collective probability of all these transitions? This is the event that, starting from state 1, I end up at state j in n-1 time steps. So this one has here probability R1j of n-1. And similarly down here. And then by using the same way of thinking as before, we get the formula that Rij(n) is the sum over all k's of Pik, and then the Rkj(n-1). So this formula looks almost the same as this one, but it's actually different. The indices and the way things work out are a bit different, but the basic idea is the same. Here we use the total probability theory by conditioning on the state just 1 step before the end of our time horizon. Here we use total probability theorem by conditioning on the state right after the first transition. So this generally idea has different variations. They're all valid, and depending on the context that you're dealing with, you might want to work with one of these or another. So let's illustrate these calculations in terms of an example. 
So in this example, we just have 2 states, and somebody gives us transition probabilities to be those particular numbers. Let's write down the equations. So the probability that, starting from state 1, I find myself at state 1 n time steps later-- this can happen in 2 ways. At time n-1, I might find myself at state 2. And then from state 2, I make a transition back to state 1, which happens with probability 0.2. And another way is that from state 1, I go to state 1 in n-1 steps, and then from state 1 I stay where I am, which happens with probability 0.5. So this is for R11(n). Now R12(n), we could write a similar recursion for this one. On the other hand, since these are probabilities, the state at time n is going to be either state 1 or state 2. So these 2 numbers need to add to 1, so we can just write this as 1 - R11(n). And this is enough of a recursion to propagate R11 and R12 as time goes on. So after n-1 transitions, either I find myself in state 2, and then there's a 0.2 probability that I go to 1, or I find myself in state 1 with the corresponding probability, and from there, I have probability 0.5 of staying where I am. Now let's start calculating. As we discussed before, if I start at state 1, after 0 transitions I'm certain to be at state 1, and I'm certain not to be at state 2-- I'm certain that where I am right now is state 1. After I make one transition, starting from state 1, there's probability 0.5 that I stay at state 1. And there's probability 0.5 that I move to state 2. If I were to start from state 2, the probability that I go to 1 in 1 time step is this transition that has probability 0.2, and the other is 0.8. OK. So the calculation now becomes more interesting if we want to calculate the next term. How likely is it that at time 2, I find myself at state 1? In order to be here at state 1, this can happen in 2 ways. One way is that the first transition kept me at state 1, and the second transition did the same. So this corresponds to 0.5 times 0.5. That's one possibility. But there's another scenario. In order to be at state 1 at time 2-- this can also happen this way. So that's the event that, after 1 transition, I got to state 2. And the next transition happened to be this one. So this corresponds to 0.5 times 0.2. It corresponds to taking the 1-step transition probability of getting there, times the probability that from state 2 I move to state 1, which in this case is 0.2. So basically we take this number, multiply it with 0.2, and then add those 2 numbers. And after you add them, you get 0.35. And similarly here, you're going to get 0.65. And now to continue with the recursion, we keep doing the same thing. We take this number times 0.5 plus this number times 0.2. Add them up, you get the next entry. Keep doing that, keep doing that, and eventually you will notice that the numbers start settling into a limiting value of 2/7. And let's verify this. If this number is 2/7, what is the next number going to be? The next number is going to be 2/7-- that's the probability that I find myself at that state-- times 0.5-- that's the next transition that takes me to state 1-- plus 5/7-- that would be the remaining probability, that I find myself in state 2-- times 1/5. And so that gives me, again, 2/7.
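Here is a short sketch that automates the hand iteration just carried out, using the recursion Rij(n) = sum over k of Rik(n-1) times Pkj for this two-state example.

```python
import numpy as np

# Transition matrix of the two-state example:
# from state 1: stay with probability 0.5, move to state 2 with 0.5;
# from state 2: move to state 1 with probability 0.2, stay with 0.8.
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

R = np.eye(2)           # R(0): with 0 transitions you stay where you started
for n in range(1, 31):
    R = R @ P           # R(n) = R(n-1) P, the n-step transition probabilities
    if n in (1, 2, 5, 30):
        print(f"n = {n}:\n{R}")

# Both rows approach [2/7, 5/7], so the limit does not depend on the start state.
print(2 / 7, 5 / 7)
```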
So this calculation basically illustrates that if this number has become 2/7, then the next number is also going to be 2/7. And of course this number here is going to have to be 5/7. And this one would have to be, again, the same, 5/7. So the probability that I find myself at state 1, after a long time has elapsed, settles into some steady-state value. So that's an interesting phenomenon. We just make this observation. Now we can also do the calculation for the probabilities starting from state 2. And here, you do the calculations-- I'm not going to do them. But after you do them, you find this probability also settles to 2/7 and this one also settles to 5/7. So these numbers here are the same as those numbers. What's the difference between these? This is the probability that I find myself at state 1 given that I started at state 1. This is the probability that I find myself at state 1 given that I started at state 2. These probabilities are the same, no matter where I started from. So this numerical example sort of illustrates the idea that after the chain has run for a long time, where the state of the chain is does not depend on the initial state of the chain. So if you start here, you know that you're going to stay here for some time, a few transitions, because this probability is kind of small. So the initial state does tell you something. But in the very long run, transitions of this kind are going to happen. Transitions of that kind are going to happen. There's a lot of randomness that comes in, and that randomness washes out any information that could come from the initial state of the system. We describe this situation by saying that the Markov chain eventually enters a steady state. But what does a steady state mean? Does it mean the state itself becomes steady and stops at one place? No, the state of the chain keeps jumping forever. The state of the chain will keep making transitions, will keep going back and forth between 1 and 2. So the state itself, the Xn, does not become steady in any sense. What becomes steady are the probabilities that describe Xn. That is, after a long time elapses, the probability that you find yourself at state 1 becomes a constant 2/7, and the probability that you find yourself in state 2 becomes a constant 5/7. So jumps will keep happening, but at any given time, if you ask what's the probability that right now I am at state 1, the answer is going to be 2/7. Incidentally, do the numbers sort of make sense? Why is this number bigger than that number? Well, this state is a little more sticky than that state. Once you enter here, it's kind of harder to get out. So when you enter here, you spend a lot of time here. This one is easier to get out of, because the probability is 0.5, so when you enter there, you tend to get out faster. So you keep moving from one to the other, but you tend to spend more time in that state, and this is reflected in this probability being bigger than that one. So no matter where you start, there's 5/7 probability of being here, and 2/7 probability of being there. So there were some really nice things that happened in this example. The question is whether things are always as nice for general Markov chains. The two nice things that happened were the following-- as we keep doing this calculation, this number settles to something. The limit exists. The other thing that happens is that this number is the same as that number, which means that the initial state does not matter. Is this always the case?
Is it always the case that as n goes to infinity, the transition probabilities converge to something? And if they do converge to something, is it the case that the limit is not affected by the initial state i at which the chain started? So mathematically speaking, the question we are raising is whether Rij(n) converges to something. And whether that something to which it converges to has only to do with j. It's the probability that you find yourself at state j, and that probability doesn't care about the initial state. So it's the question of whether the initial state gets forgotten in the long run. So the answer is that usually, or for nice chains, both of these things will be true. You get the limit which does not depend on the initial state. But if your chain has some peculiar or unique structure, this might not happen. So let's think first about the issue of convergence. So convergence, as n goes to infinity at a steady value, really means the following. If I tell you a lot of time has passed, then you tell me, OK, the state of the probabilities are equal to that value without having to consult your clock. If you don't have convergence, it means that Rij can keep going up and down, without settling to something. So in order for you to tell me the value of Rij, you need to consult your clock to check if, right now, it's up or is it down. So there's some kind of periodic behavior that you might get when you do not get convergence, and this example here illustrates it. So what's happened in this example? Starting from state 2, next time you go here, or there, with probability half. And then next time, no matter where you are, you move back to state 2. So this chain has some randomness, but the randomness is kind of limited type. You go out, you come in. You go out, you come in. So there's a periodic pattern that gets repeated. It means that if you start at state 2 after an even number of steps, you are certain to be back at state 2. So this probability here is 1. On the other hand, if the number of transitions is odd, there's no way that you can be at your initial state. If you start here, at even times you would be here, at odd times you would be there or there. So this probability is 0. As n goes to infinity, these probabilities, the n-step transition probability does not converge to anything. It keeps alternating between 0 and 1. So convergence fails. This is the main mechanism by which convergence can fail if your chain has a periodic structure. And we're going to discuss next time that, if periodicity absent, then we don't have an issue with convergence. The second question if we have convergence, whether the initial state matters or not. In the previous chain, where you could keep going back and forth between states 1 and 2 numerically, one finds that the initial state does not matter. But you can think of situations where the initial state does matter. Look at this chain here. If you start at state 1, you stay at state 1 forever. There's no way to escape. So this means that R11(n) is 1 for all n. If you start at state 3, you will be moving between stage 3 and 4, but there's no way to go in that direction, so there's no way that you go to state 1. And for that reason, R31 is 0 for all n. OK So this is a case where the initial state matters. R11 goes to a limit, as n goes to infinity, because it's constant. It's always 1 so the limit is 1. R31 also has a limit. It's 0 for all times. So these are the long term probabilities of finding yourself at state 1. 
But those long-term probabilities are affected by where you started. If you start here, you're sure that, in the long term, you'll be here. If you start here, you're sure that, in the long term, you will not be there. So the initial state does matter here. And this is a situation where certain states are not accessible from certain other states, so it has something to do with the graph structure of our Markov chain. Finally, let's answer this question here, at least for large n's. What do you think is going to happen in the long term if you start at state 2? If you start at state 2, you may stay at state 2 for a random amount of time, but eventually this transition will happen, or that transition will happen. Because of the symmetry, you are as likely to escape from state 2 in this direction, or in that direction, so there's probability 1/2 that, when the transition happens, the transition happens in that direction. So for large n, you're certain that the transition does happen. And given that the transition has happened, there is probability 1/2 that it has gone that particular way. So clearly here, you see that the probability of finding yourself in a particular state is very much affected by where you started from. So what we want to do next is to abstract from these two examples and describe the general structural properties that have to do with periodicity, and that have to do with what happened here, with certain states not being accessible from the others. We're going to leave periodicity for next time. But let's talk about the second kind of phenomenon that we have. So here, what we're going to do is to classify the states in a transition diagram into two types, recurrent and transient. So a state is said to be recurrent if the following is true. If you start from the state i, you can go to some places, but no matter where you go, there is a way of coming back. So what's an example of a recurrent state? This one. Starting from here, you can go elsewhere. You can go to state 7. You can go to state 6. That's all the places you can go to. But no matter where you go, there is a path that can take you back there. So no matter where you go, there is a chance, and there is a way, of returning to where you started. Those states we call recurrent. And by this definition, 8 is recurrent. All of these are recurrent. So this is recurrent, this is recurrent. And this state 5 is also recurrent. You cannot go anywhere from 5 except to 5 itself. Wherever you can go, you can go back to where you started. So this is recurrent. If it is not recurrent, we say that it is transient. So what does transient mean? You need to take this definition and reverse it. Transient means that, starting from i, there is a place to which you could go, and from which you cannot return. If it's recurrent, anywhere you go, you can always come back. Transient means there are places where you can go from which you cannot come back. So state 1 is transient, because starting from here, there's a possibility that you get there, and then there's no way back. State 4 is also transient: starting from 4, there are places where you could go, and from which you cannot come back. And in this particular diagram, all these 4 states are transient. Now if a state is transient, it means that there is a way to go somewhere where you're going to get stuck and not be able to come back.
As long as your state keeps circulating around here, eventually one of these transitions is going to happen, and once that happens, then there's no way that you can come back. So that transient state will be visited only a finite number of times. You will not be able to return to it. And in the long run, you're certain that you're going to get out of the transient states, and get to some class of recurrent states, and get stuck forever. So, let's see, in this diagram, if I start here, could I stay in this lump of states forever? Well as long as I'm staying in this type of states, I would keep visiting states 1 and 2 Each time that I visit state 2, there's going to be positive probability that I escape. So in the long run, if I were to stay here, I would visit state 2 an infinite number of times, and I would get infinite chances to escape. But if you have infinite chances to escape, eventually you will escape. So you are certain that with probability 1, starting from here, you're going to move either to those states, or to those states. So starting from transient states, you only stay at the transient states for random but finite amount of time. And after that happens, you end up in a class of recurrent states. And when I say class, what they mean is that, in this picture, I divide the recurrent states into 2 classes, or categories. What's special about them? These states are recurrent. These states are recurrent. But there's no communication between the 2. If you start here, you're stuck here. If you start here, you are stuck there. And this is a case where the initial state does matter, because if you start here, you get stuck here. You start here, you get stuck there. So depending on the initial state, that's going to affect the long term behavior of your chain. So the guess you can make at this point is that, for the initial state to not matter, we should not have multiple recurrent classes. We should have only 1. But we're going to get back to this point next time.
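As a sketch of how this recurrent/transient classification could be checked mechanically, the following uses reachability on the transition graph: a state i is recurrent exactly when every state reachable from i can reach i back. The example matrix is made up for illustration and is not the diagram from the lecture.

```python
import numpy as np

def reachable(P, i):
    """States reachable from i (including i) along positive-probability arcs."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t in np.nonzero(P[s] > 0)[0]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def classify(P):
    reach = [reachable(P, i) for i in range(len(P))]
    labels = []
    for i in range(len(P)):
        # i is recurrent iff every state it can reach can also reach i back
        recurrent = all(i in reach[j] for j in reach[i])
        labels.append("recurrent" if recurrent else "transient")
    return labels

# Illustrative chain: states 0 and 1 form a transient class that can leak into
# the absorbing state 2 or into the recurrent class {3, 4}.
P = np.array([[0.5, 0.4, 0.1, 0.0, 0.0],
              [0.6, 0.3, 0.0, 0.1, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])
print(classify(P))  # ['transient', 'transient', 'recurrent', 'recurrent', 'recurrent']
```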
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Derived_Distribution_Example.txt
Hi. In this problem we'll work through an example of calculating a distribution for a random variable using the method of derived distributions. So in general, the process goes as follows. We know the distribution for some random variable X, and what we want is the distribution for another random variable, Y, which is somehow related to X through some function g. So Y is g of X. And the steps that we follow-- we can actually just kind of summarize them using these four steps. The first step is to write out the CDF of Y. So Y is the thing that we want. And what we'll do is we'll write out the CDF first. So remember, the CDF of Y, capital F of Y evaluated at little y, is the probability that the random variable Y is less than or equal to some value, little y. The next thing we'll do is, we'll use this relationship that we know between Y and X. And we'll substitute in-- instead of writing the random variable Y in here, we'll write it in terms of X. So we'll plug in-- instead of Y, we'll plug in X. And we'll use this function g in order to do that. So what we have now is that, up to here, we would have that the CDF of Y is now the probability that g of the random variable X is less than or equal to some value, little y. Next what we'll do is we'll actually rewrite this probability as a CDF of X. So the CDF of X, remember, would be-- F of X evaluated at little x is the probability that X is less than or equal to some value, little x. And then once we have that, if we differentiate this-- when we differentiate the CDF of X, we get the PDF of X. And what we presume is that we know this PDF already. And from that, what we get is, when we differentiate this thing, we get the PDF of Y. So through this whole process, what we get is the relationship between the PDF of Y and the PDF of X. So that is the process for calculating the PDF of Y using X. So let's go into our specific example. In this case, what we're told is that X, the one that we know, is a standard normal random variable, meaning that it has mean 0 and variance 1. And so we know the form of the PDF. The PDF of X is this: 1 over the square root of 2 pi, times e to the minus x squared over 2. And then the next thing that we're told is this relationship between X and Y. So what we're told is, if X is negative, then Y is minus X. If X is positive, then Y is the square root of X. So this is a graphical representation of the relationship between X and Y. All right, so we have everything that we need. And now let's just go through this process and calculate what the PDF of Y is. So the first thing we do is we write out the CDF of Y. So the CDF of Y is what we've written. It's the probability that the random variable Y is less than or equal to some little y. Now the next step that we do is we have to substitute in-- instead of in terms of Y, we want to substitute it in terms of X. Because we actually know stuff about X, but we don't know anything about Y. So what is the probability that Y, the random variable Y, is less than or equal to little y? Well, let's go back to this relationship and see if we can figure that out. So let's pretend that here is our little y. Well, if the random variable Y is less than or equal to little y, it has to be underneath this horizontal line. And in order for it to be underneath this horizontal line, that means that X has to be within this range. And what is this range? This range goes from minus y to y squared. So why is that? It's because in this portion X and Y are related as Y is negative X, and here it's Y is the square root of X.
So if X is y squared, then Y, which is the square root of X, would be y. And if X is negative y, then Y, which is minus X, would also be y. All right, so this is the range that we're looking for. So if the random variable Y is less than or equal to little y, then this is the same as if the random variable X is between negative y and y squared. So let's plug that in. This is the same as the probability that X is between negative y and y squared. So those are the first two steps. Now the third step is, we have to rewrite this as the CDF of X. So right now we have it in terms of a probability of some event related to X. Let's actually transform that to be explicitly in terms of the CDF of X. So how do we do that? Well, this is just the probability that X is within some range. So we can turn that into the CDF by writing it as a difference of two CDFs. So this is the same as the probability that X is less than or equal to y squared minus the probability that X is less than or equal to negative y. So in order to find the probability that X is within this range, we take the probability that it's less than y squared, which is everything here. And then we subtract the probability that it's less than negative y. So what we're left with is just within this range. So these actually are now exactly CDFs of X. So this is F of X evaluated at y squared and this is F of X evaluated at negative y. So now we've completed step three. And the last step that we need to do is differentiate. So if we differentiate both sides of this equation with respect to y, we'll get on the left side what we want, which is the PDF of Y. When we differentiate the right side, we'll have to invoke the chain rule. So the first thing that we do is, well, this is a CDF of X. So when we differentiate we'll get the PDF of X. But then we also have to invoke the chain rule for this argument inside. So the derivative of y squared would give us an extra factor, 2y. And then similarly this would give us the PDF of X evaluated at negative y, and the chain rule would give us an extra factor of negative 1, which cancels the minus sign in front. So let's just clean this up a little bit. So it's 2y times the PDF of X evaluated at y squared, plus the PDF of X evaluated at negative y. All right, so now we're almost done. We've differentiated. We have the PDF of Y, which is what we're looking for. And we've written it in terms of the PDF of X. And fortunately we know what that is, so once we plug that in, then we're essentially done. So what is the PDF? Well, the PDF of X evaluated at y squared is going to give us 1 over square root of 2 pi e to the minus-- so in this case, x is y squared-- so we get y to the fourth over 2. And then we get another 1 over square root of 2 pi e to the minus y squared over 2. OK, and now we're almost done. The last thing that we need to take care of is, what is the range? Now remember, it's important when you calculate out PDFs to always think about the ranges where things are valid. So when we think about this, what is the range where this actually is valid? Well, Y, remember, is related to X in this relationship. So as we look at this, we see that Y can never be negative. Because no matter what X is, it gets transformed into some non-negative value of Y. So what we know is that this is now actually valid only for y greater than 0, and for y less than 0, the PDF is 0. So this gives us the final PDF of Y. All right, so it seems like at first, when you start doing these derived distribution problems, that it's pretty difficult.
But if we just remember that there are these pretty straightforward steps that we follow, and as long as you go through these steps and do them methodically, then you can actually come up with the solution for any of these problems. And one last thing to remember is to always think about what are the ranges where these things are valid? Because the relationship between these two random variables could be pretty complicated and you need to always be aware of when things are non-zero and when they are 0.
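The algebra above can be sanity-checked numerically. Below is a minimal Python sketch, with an arbitrary sample size and a few arbitrary probe points, that simulates X as a standard normal, applies the transformation, and compares the empirical density of Y against the derived formula 2y f_X(y squared) plus f_X(minus y).

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

y = np.empty_like(x)
neg = x < 0
y[neg] = -x[neg]                 # Y = -X when X is negative
y[~neg] = np.sqrt(x[~neg])       # Y = sqrt(X) when X is non-negative

def f_X(t):
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal PDF

def f_Y(t):
    return 2 * t * f_X(t ** 2) + f_X(-t)              # derived PDF, valid for t > 0

for t in [0.5, 1.0, 2.0]:
    emp = np.mean(np.abs(y - t) < 0.01) / 0.02        # empirical density in a small window around t
    print(t, emp, f_Y(t))                             # should agree up to sampling noise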
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Mixed_Distribution_Example.txt
In this video, we'll look at an example in which we compute the expectation and cumulative distribution function of a mixed random variable. The problem is as follows. Al arrives at some bus stand or taxi stand at a given time-- let's say time t equals 0. He finds a taxi waiting for him with probability 2/3, in which case he takes it. Otherwise, he takes the next arriving taxi or bus. The time at which the next taxi arrives is uniformly distributed between 0 and 10 minutes. The next bus leaves exactly in 5 minutes. So the question is, if X is Al's waiting time, what is the CDF and expectation of X? So one way to view this problem that's convenient is the tree structure. So I've drawn it for you here in which the events of interest are B1, B2, and B3, B1 being Al catches the waiting taxi, B2 being Al catches the next taxi, which arrives between 0 and 5 minutes, and B3 being Al catches the bus at time 5. Notice that these three events are disjoint. So Al catching the waiting taxi means he can't catch the bus or the next arriving taxi. And together they also cover the entire set of outcomes. So, in fact, B1, B2, and B3 are a partition. So let's look at the relevant probabilities. Whether or not B1 happens depends on whether or not the taxi's waiting for Al. So if the taxi is waiting for him, which happens with 2/3 probability, B1 happens. Otherwise, with 1/3 probability, we see whether or not a taxi is going to arrive between 0 and 5 minutes. If it arrives, which is going to happen with what probability? Well, we know that the next taxi is going to arrive between 0 and 10 minutes. It's a uniform distribution. And so half the mass is going to be between 0 and 5. And the other half is going to be between 5 and 10. And so this is going to be 1/2 and 1/2. And let's look at what X looks like. If B1 happens, Al isn't waiting at all, so X is going to be equal to 0. If B3 happens, which is the other easy case, Al's going to be waiting for 5 minutes exactly. And if B2 happens, well, it's going to be some value between 0 and 5. We can actually draw the density, so let's see if we can do that here. The arrival time of the next taxi was originally uniformly distributed between 0 and 10. But now, we're told two pieces of information. We're told that B2 happens, which means that there's no taxi waiting, and the next taxi arrives between 0 and 5 minutes. Well, the fact that there was no taxi waiting has no bearing on that density. But the fact that the next taxi arrives between 0 and 5 does make a difference, because the density then is going to be definitely 0 in any region outside 0 and 5. Now, the question is, how is it going to look between 0 and 5? Well, it's not going to look crazy. It's not going to look like something different. It's simply going to be a scaled version of the original density between 0 and 5. You can verify this by looking at the actual formula for conditioning a random variable on an event. Here, it's going to be 1/5 in order for this to integrate out to 1. And now we can jump right into figuring out the expectation. Now, notice that X is actually a mixed random variable. What does that mean? Well, X takes on values according to either a discrete probability law or a continuous one. So if B1 happens, for example, X is going to be exactly equal to 0 with probability 1, which is a discrete probability law. On the other hand, if B2 happens, then the value of X depends on the density, which is going to be continuous. So X is going to be a continuous random variable here.
So how do you define an expectation in this case? Well, you can do it so that it satisfies the total expectation theorem, which means that the expectation of X is the probability of B1 times the expectation given B1 plus the probability of B2 times the expectation given B2 plus the probability of B3 times the expectation given B3. So this will satisfy the total expectation theorem. So the probability of B1 is going to be exactly 2/3. It's simply the probability of a taxi waiting for Al. The expected value of X-- well, when B1 happens, X is going to be exactly equal to 0. So the expected value is also going to be 0. The probability of B2 happening is the probability of a taxi not being there times the probability of a taxi arriving between 0 and 5. It's going to be 1/3 times 1/2. And the expected value of X given B2 is going to be the expected value of this density. The expected value of this density is the midpoint between 0 and 5. And so it's going to be 5/2. And the probability of B3 is going to be 1/3 times 1/2. Finally, the expected value of X given B3. Well, when B3 happens, X is going to be exactly equal to 5. So the expected value is also going to be 5. Now we're left with 5/12 plus 5/6, which is going to be 15/12. And we can actually fill that in here so that we can clear up the board to do the other part. Now we want to compute the CDF of X. Well, what is the CDF? Well, the CDF of X is going to be equal to the probability that the random variable X is less than or equal to some little x, where little x is a constant. Before we jump right in, let's try to understand what's the form of the CDF. And let's consider some interesting cases. You know that the random variable X, the waiting time, is going to be somewhere between 0 and 5, right? So let's consider what happens if little x is less than 0. That's basically saying, what's the probability of the random variable X being less than some number that's less than 0? Waiting time can't be negative, so the probability of this is going to be 0. Now, what if little x is greater than or equal to 0 and strictly less than 5? In that case, either X can fall between 0 and 5 according to this case, in the case of B2, or X can be exactly equal to 0. It's not clear. So let's do that later. Let's fill that in later. What about if x is greater than or equal to 5? Little x, right? That's the probability that the random variable X is less than some number that's bigger than or equal to 5. The waiting time X, the random variable, is definitely going to be less than or equal to 5, so the probability of this is going to be 1. So now this case. How do we do it? Well, let's try to use a similar kind of approach that we did for the expected value and use the total probability theorem in this case. So let's work through this. First of all, let's assume that this is true, that little x is between 0 and 5, including 0. And let's use the total probability theorem, and use the partition B1, B2, and B3. So what's the probability of B1? It's the probability that Al catches the waiting taxi, which happens with probability 2/3. What's the probability that the random variable X is less than or equal to little x under this condition, when B1 happens? Well, if B1 happens, then the random variable X is going to be exactly equal to 0, right? So in that case, it's definitely going to be less than or equal to any value of x, including 0. So the probability will be 1. What's the probability that B2 happens now?
The probability that B2 happens is 1/3 times 1/2, as we did before. And the probability that the random variable X is less than or equal to little x when B2 happens. Well, if B2 happens, this is your density. And this is our condition. And so x is going to be somewhere in between these spots. And we'd like to compute the probability that the random variable X is less than or equal to little x. So we want this area. And that area is going to have a height of 1/5 and a width of x. And so the area's going to be 1/5 times x. And finally, the probability that B3 happens is going to be 1/3 times 1/2 again, times the probability that the random variable X is less than or equal to little x given B3. Well, when B3 happens, X is going to be exactly 5 as a random variable. But little x, you know-- we're assuming in this condition-- is going to be between 0 and 5, but strictly less than 5. So there's no way that if the random variable X is 5 and this is strictly less than 5, this is going to be true. And so that probability will be 0. So we're now left with 2/3 plus 1/30 times x. And now we can fill this in: 2/3 plus 1/30 x. And this is our CDF. So now we've finished the problem, computed the expected value here and then the CDF here, and this was a great illustration of how you would do so for a mixed random variable.
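Here is a small Python sketch that double-checks both answers by simulation; the sample size and the probe points for the CDF are arbitrary choices. It draws Al's waiting time according to the tree above and compares the sample mean with 15/12 and the empirical CDF with 2/3 plus x/30.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
taxi_waiting = rng.random(n) < 2 / 3                  # is a taxi already there?
next_taxi = rng.uniform(0, 10, size=n)                # arrival time of the next taxi
x = np.where(taxi_waiting, 0.0, np.minimum(next_taxi, 5.0))   # wait 0, or the next taxi, or the bus at 5

print(x.mean(), 15 / 12)                              # empirical mean vs 15/12
for c in [1.0, 2.5, 4.0]:
    print(c, np.mean(x <= c), 2 / 3 + c / 30)         # empirical CDF vs 2/3 + x/30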
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
9_Multiple_Continuous_Random_Variables.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN TSITSIKLIS: OK, let's start. So we've had the quiz. And I guess there's both good and bad news in it. Yesterday, as you know, you got the bad news. The average was a little lower than what we would have wanted. On the other hand, the good news is that the distribution was nicely spread. And that's the main purpose of this quiz: basically for you to calibrate and see roughly where you are standing. The other piece of the good news is that, as you know, this quiz doesn't count for very much in your final grade. So it's really a matter of calibration and to get your mind set appropriately to prepare for the second quiz, which counts a lot more. And it's more substantial. And we'll make sure that the second quiz will have a higher average. All right. So let's go to our material. We're talking now these days about continuous random variables. And I'll remind you what we discussed last time. I'll remind you of the concept of the probability density function of a single random variable. And then we're going to rush through all the concepts that we covered for the case of discrete random variables and discuss their analogs for the continuous case. And talk about notions such as conditioning, independence, and so on. So the big picture is here. We have all those concepts that we developed for the case of discrete random variables. And now we will just talk about their analogs in the continuous case. We already discussed this analog last week, the density of a single random variable. Then there are certain concepts that show up both in the discrete and the continuous case. So we have the cumulative distribution function, which is a description of the probability distribution of a random variable and which applies whether you have a discrete or continuous random variable. Then there's the notion of the expected value. And in the two cases, the expected value is calculated in a slightly different way, but not very different. We have sums in one case, integrals in the other. And this is the general pattern that we're going to have. Formulas for the discrete case translate to corresponding formulas or expressions in the continuous case. We generically replace sums by integrals, and we replace mass functions with density functions. Then the new pieces for today are going to be mostly the notion of a joint density function, which is how we describe the probability distribution of two random variables that are somehow related, in general, and then the notion of a conditional density function that tells us the distribution of one random variable X when you're told the value of another random variable Y. There's another concept, which is the conditional PDF given that a certain event has happened. This is a concept that's in some ways simpler. You've already seen a little bit of that in last week's recitation and tutorial. The idea is that we have a single random variable. It's described by a density. Then you're told that a certain event has occurred. Your model changes; the universe that you are dealing with changes. In the new universe, you are dealing with a new density function, the one that applies given the knowledge that this event has occurred. All right.
So what exactly did we say about continuous random variables? The first thing is the definition, that a random variable is said to be continuous if we are given a certain object that we call the probability density function and we can calculate interval probabilities given this density function. So the definition is that the random variable is continuous if you can calculate probabilities associated with that random variable given that formula. So this formula tells you that the probability that your random variable falls inside this interval is the area under the density curve. OK. There's a few properties that a density function must satisfy. Since we're talking about probabilities, and probabilities are non-negative, we have that the density function is always a non-negative function. The total probability over the entire real line must be equal to 1. So the integral when you integrate over the entire real line has to be equal to 1. That's the second property. Another property that you get is that if you let a equal to b, this integral becomes 0. And that tells you that the probability of a single point in the continuous case is always equal to 0. So these are formal properties. When you want to think intuitively, the best way to think about what the density function is to think in terms of little intervals, the probability that my random variable falls inside the little interval. Well, inside that little interval, the density function here is roughly constant. So that integral becomes the value of the density times the length of the interval over which you are integrating, which is delta. And so the density function basically gives us probabilities of little events, of small events. And the density is to be interpreted as probability per unit length at a certain place in the diagram. So in that place in the diagram, the probability per unit length around this neighborhood would be the height of the density function at that point. What else? We have a formula for calculating expected values of functions of random variables. In the discrete case, we had the formula where here we had the sum, and instead of the density, we had the PMF. The same formula is also valid in the continuous case. And it's not too hard to derive, but we will not do it. But let's think of the intuition of what this formula says. You're trying to figure out on the average how much g(X) is going to be. And then you reason, and you say, well, X may turn out to take a particular value or a small interval of values. This is the probability that X falls inside the small interval. And when that happens, g(X) takes that value. So this fraction of the time, you fall in the little neighborhood of x, and you get so much. Then you average over all the possible x's that can happen. And that gives you the average value of the function g(X). OK. So this is the easy stuff. Now let's get to the new material. We want to talk about multiple random variables simultaneously. So we want to talk now about two random variables that are continuous, and in some sense that they are jointly continuous. And let's see what this means. The definition is similar to the definition we had for a single random variable, where I take this formula here as the definition of continuous random variables. Two random variables are said to be jointly continuous if we can calculate probabilities by integrating a certain function that we call the joint density function over the set of interest. So we have our two-dimensional plane. This is the x-y plane. 
There's a certain event S that we're interested in. We want to calculate the probability. How do we do that? We are given this function f_(X,Y), the joint density. It's a function of the two arguments x and y. So think of that function as being some kind of surface that sits on top of the two-dimensional plane. The probability of falling inside the set S, we calculate it by looking at the volume under the surface, that volume that sits on top of S. So the surface underneath it has a certain total volume. What should that total volume be? Well, we think of these volumes as probabilities. So the total probability should be equal to 1. The total volume under this surface, should be equal to 1. So that's one property that we want our density function to have. So when you integrate over the entire space, this is of the volume under your surface. That should be equal to 1. Of course, since we're talking about probabilities, the joint density should be a non-negative function. So think of the situation as having one pound of probability that's spread all over your space. And the height of this joint density function basically tells you how much probability tends to be accumulated in certain regions of space as opposed to other parts of the space. So wherever the density is big, that means that this is an area of the two-dimensional plane that's more likely to occur. Where the density is small, that means that those x-y's are less likely to occur. You have already seen one example of continuous densities. That was the example we had in the very beginning of the class with a uniform distribution on the unit square. That was a special case of a density function that was constant. So all places in the unit square were roughly equally likely as any other places. But in other models, some parts of the space may be more likely than others. And we describe those relative likelihoods using this density function. So if somebody gives us the density function, this determines for us probabilities of all the subsets of the two-dimensional plane. Now for an intuitive interpretation, it's good to think about small events. So let's take a particular x here and then x plus delta. So this is a small interval. Take another small interval here that goes from y to y plus delta. And let's look at the event that x falls here and y falls right there. What is this event? Well, this is the event that will fall inside this little rectangle. Using this rule for calculating probabilities, what is the probability of that rectangle going to be? Well, it should be the integral of the density over this rectangle. Or it's the volume under the surface that sits on top of that rectangle. Now, if the rectangle is very small, the joint density is not going to change very much in that neighborhood. So we can treat it as a constant. So the volume is going to be the height times the area of the base. The height at that point is whatever the function happens to be around that point. And the area of the base is delta squared. So this is the intuitive way to understand what a joint density function really tells you. It specifies for you probabilities of little squares, of little rectangles. And it allows you to think of the joint density function as probability per unit area. So these are the units of the density, its probability per unit area in the neighborhood of a certain point. So what do we do with this density function once we have it in our hands? Well, we can use it to calculate expected values. 
Suppose that you have a function of two random variables described by a joint density. You can find, perhaps, the distribution of this random variable and then use the basic definition of the expectation. Or you can calculate expectations directly, using the distribution of the original random variables. This is a formula that's again identical to the formula that we had for the discrete case. In the discrete case, we had a double sum here, and we had PMFs. So the intuition behind this formula is the same that one had for the discrete case. It's just that the mechanics are different. Then something that we did in the discrete case was to find a way to go from the joint density of the two random variables taken together to the density of just one of the random variables. So we had a formula for the discrete case. Let's see how things are going to work out in the continuous case. So in the continuous case, we have here our two random variables. And we have a density for them. And let's say that we want to calculate the probability that x falls inside this interval. So we're looking at the probability that our random variable X falls in the interval from little x to x plus delta. Now, by the properties that we already have for interpreting the density function of a single random variable, the probability of a little interval is approximately the density of that single random variable times delta. And now we want to find a formula for this marginal density in terms of the joint density. OK. So this is the probability that x falls inside this interval. In terms of the two-dimensional plane, this is the probability that (x,y) falls inside this strip. So to find that probability, we need to calculate the probability that (x,y) falls in here, which is going to be the double integral over the interval over this strip, of the joint density. And what are we integrating over? y goes from minus infinity to plus infinity. And the dummy variable x goes from little x to x plus delta. So to integrate over this strip, what we do is for any given y, we integrate in this dimension. This is the x integral. And then we integrate over the y dimension. Now what is this inner integral? Because x only varies very little, this is approximately constant in that range. So the integral with respect to x just becomes delta times f(x,y). And then we've got our dy. So this is what the inner integral will evaluate to. We are integrating over the little interval. So we're keeping y fixed. Integrating over here, we take the value of the density times how much we're integrating over. And we get this formula. OK. Now, this expression must be equal to that expression. So if we cancel the deltas, we see that the marginal density must be equal to the integral of the joint density, where we have integrated out the value of y. So this formula should come as no surprise at this point. It's exactly the same as the formula that we had for discrete random variables. But now we are replacing the sum with an integral. And instead of using the joint PMF, we are using the joint PDF. Then, continuing going down the list of things we did for discrete random variables, we can now introduce a definition of the notion of independence of two random variables. And by analogy with the discrete case, we define independence to be the following condition. Two random variables are independent if and only if their joint density function factors out as a product of their marginal densities. And this property needs to be true for all x and y. 
So this is the formal definition. Operationally and intuitively, what does it mean? Well, intuitively it means the same thing as in the discrete case. Knowing anything about X shouldn't tell you anything about Y. That is, information about X is not going to change your beliefs about Y. We are going to come back to this statement in a second. The other thing that it allows you to do-- I'm not going to derive this-- is it allows you to calculate probabilities by multiplying individual probabilities. So if you ask for the probability that x falls in a certain set A and y falls in a certain set B, then you can calculate that probability by multiplying individual probabilities. This takes just two lines of derivation, which I'm not going to do. But it comes back to the usual notion of independence of events. Basically, operationally independence means that you can multiply probabilities. So now let's look at an example. There's a sort of pretty famous and classical one. It goes back a lot more than a 100 years. And it's the famous Needle of Buffon. Buffon was a French naturalist who, for some reason, also decided to play with probability. And look at the following problem. So you have the two-dimensional plane. And on the plane we draw a bunch of parallel lines. And those parallel lines are separated by a length. And the lines are apart at distance d. And we throw a needle at random, completely at random. And we'll have to give a meaning to what "completely at random" means. And when we throw a needle, there's two possibilities. Either the needle is going to fall in a way that does not intersect any of the lines, or it's going to fall in a way that it intersects one of the lines. We're taking the needle to be shorter than this distance, so the needle cannot intersect two lines simultaneously. It either intersects 0, or it intersects one of the lines. The question is to find the probability that the needle is going to intersect a line. What's the probability of this? OK. We are going to approach this problem by using our standard four-step procedure. Set up your sample space, describe a probability law on that sample space, identify the event of interest, and then calculate. These four steps basically correspond to these three bullets and then the last equation down here. So first thing is to set up a sample space. We need some variables to describe what happened in the experiment. So what happens in the experiment is that the needle lands somewhere. And where it lands, we can describe this by specifying the location of the center of the needle. And what do we mean by the location of the center? Well, we can take as our variable to be the distance from the center of the needle to the nearest line. So it tells us the vertical distance of the center of the needle from the nearest line. The other thing that matters is the orientation of the needle. So we need one more variable, which we take to be the angle that the needle is forming with the lines. We can put the angle here, or you can put in there. Yes, it's still the same angle. So we have these two variables that described what happened in the experiment. And we can take our sample space to be the set of all possible x's and theta's. What are the possible x's? The lines are d apart, so the nearest line is going to be anywhere between 0 and d/2 away. So that tells us what the possible x's will be. As for theta, it really depends how you define your angle. 
We are going to define our theta to be the acute angle that's formed between the needle and a line, if you were to extend it. So theta is going to be something between 0 and pi/2. So I guess these red pieces really correspond to the part of setting up the sample space. OK. So that's part one. Second part is we need a model. OK. Let's take our model to be that we basically know nothing about how the needle falls. It can fall in any possible way, and all possible ways are equally likely. Now, if you have those parallel lines, and you close your eyes completely and throw a needle completely at random, any x should be equally likely. So we describe that situation by saying that X should have a uniform distribution. That is, it should have a constant density over the range of interest. Similarly, if you kind of spin your needle completely at random, any angle should be as likely as any other angle. And we decide to model this situation by saying that theta also has a uniform distribution over the range of interest. And finally, where we put it should have nothing to do with how much we rotate it. And we capture this mathematically by saying that X is going to be independent of theta. Now, this is going to be our model. I'm not deriving the model from anything. I'm only saying that this sounds like a model that does not assume any knowledge or preference for certain values of x rather than other values of theta. In the absence of any other particular information you might have in your hands, that's the most reasonable model to come up with. So you model the problem that way. So what's the formula for the joint density? It's going to be the product of the densities of X and Theta. Why is it the product? This is because we assumed independence. And the density of X, since it's uniform, and since it needs to integrate to 1, that density needs to be 2/d. That's the density of X. And the density of Theta needs to be 2/pi. That's the value for the density of Theta so that the overall probability over this interval ends up being 1. So now we do have our joint density in our hands. The next thing to do is to identify the event of interest. And this is best done in a picture. And there's two possible situations that one could have. Either the needle falls this way, or it falls this way. So how can we tell if one or the other is going to happen? It has to do with whether this interval here is smaller than that or bigger than that. So we are comparing the height of this interval to that interval. This interval here is capital X. This interval here, what is it? This is half of the length of the needle, which is l/2. To find this height, we take l/2 and multiply it with the sine of the angle that we have. So the length of this interval up here is l/2 times sine theta. If this is smaller than x, the needle does not intersect the line. If this is bigger than x, then the needle intersects the line. So the event of interest, that the needle intersects the line, is described this way in terms of x and theta. And now that we have the event of interest described mathematically, all that we need to do is to find the probability of this event, we integrate the joint density over the part of (x, theta) space in which this inequality is true. So it's a double integral over the set of all x's and theta's where this is true. The way to do this integral is we fix theta, and we integrate for x's that go from 0 up to that number. And theta can be anything between 0 and pi/2. 
So the integral over this set is basically this double integral here. We already have a formula for the joint density. It's 4 over pi d, so we put it here. And now, fortunately, this is a pretty easy integral to evaluate. The integral with respect to x -- there's nothing in here. So the integral is just the length of the interval over which we're integrating. It's l/2 sine theta. And then we need to integrate this with respect to theta. We know that the integral of a sine is a negative cosine. You plug in the values for the negative cosine at the two end points. I'm sure you can do this integral. And we finally obtain the answer, which is amazingly simple for such a pretty complicated-looking problem. It's 2l over pi d. So some people a long, long time ago, after they looked at this answer, they said that maybe that gives us an interesting way where one could estimate the value of pi, for example, experimentally. How do you do that? Fix l and d, the dimensions of the problem. Throw a million needles on your piece of paper. See how often your needles do intersect the line. That gives you a number for this quantity. You know l and d, so you can use that to infer pi. And there's an apocryphal story about a wounded soldier in a hospital after the American Civil War who actually had heard about this and was spending his time in the hospital throwing needles on pieces of paper. I don't know if it's true or not. But let's do something similar here. So let's look at this diagram. We fix the dimensions. This is supposed to be our little d. That's supposed to be our little l. We have the formula from the previous slide that p is 2l over pi d. In this instance, we choose d to be twice l. So this number is 1/pi. So the probability that the needle hits the line is 1/pi. So I need needles that are 3.1 centimeters long. I couldn't find such needles. But I could find paper clips that are 3.1 centimeters long. So let's start throwing paper clips at random and see how many of them will end up intersecting the lines. Good. OK. So out of eight paper clips, we have exactly four that intersected the line. So our estimate for the probability of intersecting the line is 1/2, which gives us an estimate for the value of pi, which is two. Well, I mean, within an engineering approximation, we're in the right ballpark, right? So this might look like a silly way of trying to estimate pi. And it probably is. On the other hand, this kind of methodology is being used especially by physicists and also by statisticians. It's used a lot. When is it used? If you have an integral to calculate, such as this integral, but you're not lucky, and your functions are not so simple where you can do your calculations by hand, and maybe the dimensions are larger-- instead of two random variables you have 100 random variables, so it's a 100-fold integral-- then there's no way to do that directly in the computer. But the way that you can actually do it is by generating random samples of your random variables, doing that simulation over and over many times. That is, by interpreting an integral as a probability, you can use simulation to estimate that probability. And that gives you a way of calculating integrals. And physicists do actually use that a lot, as well as statisticians, computer scientists, and so on. It's the so-called Monte Carlo method for evaluating integrals. And it's a basic piece of the toolbox in science these days. Finally, the harder concept of the day is the idea of conditioning.
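Here is a minimal Python version of the paper-clip experiment; the line spacing, needle length, and sample size are arbitrary choices. It draws X and Theta exactly as in the model above, applies the intersection condition x less than or equal to (l/2) sine theta, and backs out an estimate of pi from the empirical hit frequency.

import numpy as np

rng = np.random.default_rng(0)
d, l = 2.0, 1.0                                  # line spacing and needle length, with l <= d
n = 1_000_000
x = rng.uniform(0, d / 2, size=n)                # distance from the needle's center to the nearest line
theta = rng.uniform(0, np.pi / 2, size=n)        # acute angle between the needle and the lines
hits = x <= (l / 2) * np.sin(theta)              # the intersection condition from the lecture

p_hat = hits.mean()
print(p_hat, 2 * l / (np.pi * d))                # empirical hit frequency vs 2l / (pi d)
print(2 * l / (p_hat * d))                       # the implied estimate of pi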
And here things become a little subtle when you deal with continuous random variables. OK. First, remember again our basic interpretation of what a density is. A density gives us probabilities of little intervals. So how should we define conditional densities? Conditional densities should again give us probabilities of little intervals, but inside a conditional world where we have been told something about the other random variable. So what we would like to be true is the following. We would like to define a concept of a conditional density of a random variable X given the value of another random variable Y. And it should behave the following way, that the conditional density gives us the probability of little intervals-- same as here-- given that we are told the value of y. And here's where the subtleties come. The main thing to notice is that here I didn't write "equal," I wrote "approximately equal." Why do we need that? Well, the thing is that conditional probabilities are not defined when you condition on an event that has 0 probability. So we need the conditioning event here to have positive probability. So instead of saying that Y is exactly equal to little y, we want to instead say we're in a new universe where capital Y is very close to little y. And then this notion of "very close" kind of takes the limit and takes it to be infinitesimally close. So this is the way to interpret conditional probabilities. That's what they should mean. Now, in practice, when you actually use probability, you forget about that subtlety. And you say, well, I've been told that Y is equal to 1.3. Give me the conditional distribution of X. But formally or rigorously, you should say I'm being told that Y is infinitesimally close to 1.3. Tell me the distribution of X. Now, if this is what we want, what should this quantity be? It's a conditional probability, so it should be the probability of two things happening-- X being close to little x, Y being close to little y. And that's basically given to us by the joint density divided by the probability of the conditioning event, which has something to do with the density of Y itself. And if you do things carefully, you see that the only way to satisfy this relation is to define the conditional density by this particular formula. OK. Big discussion to come down in the end to what you should have probably guessed by now. We just take any formulas and expressions from the discrete case and replace PMFs by PDFs. So the conditional PDF is defined by this formula where here we have the joint PDF and the marginal PDF, as opposed to the discrete case where we had the joint PMF and the marginal PMF. So in some sense, it's just a syntactic change. In another sense, it's a little subtler on how you actually interpret it. Speaking about interpretation, what are some ways of thinking about the conditional density? Well, the best way to think about it is that somebody has fixed little y for you. So little y is being fixed here. And we look at this density as a function of X. I've told you what Y is. Tell me what you know about X. And you tell me that X has a certain distribution. What does that distribution look like? It has exactly the same shape as the joint density. Remember, we fixed Y. So this is a constant. So the only thing that varies is X. So we get the function that behaves like the joint density when you fix y, which is really you take the joint density, and you take a slice of it. You fix a y, and you see how it varies with x.
So in that sense, the conditional PDF is just a slice of the joint PDF. But we need to divide by a certain number, which just rescales it, changing its height but not its shape. We're coming back to a picture in a second. But before going to the picture, let's go back to the interpretation of independence. If the two random variables are independent, according to our definition in the previous slide, the joint density is going to factor as the product of the marginal densities. The density of Y in the numerator cancels the density in the denominator. And we're just left with the density of X. So in the case of independence, what we get is that the conditional is the same as the marginal. And that solidifies our intuition that in the case of independence, being told something about the value of Y does not change our beliefs about how X is distributed. So whatever we expected about X is going to remain true even after we are told something about Y. So let's look at some pictures. Here is what the joint PDF might look like. Here we've got our x and y-axis. And if you want to calculate the probability of a certain event, what you do is you look at that event and you see how much of that mass is sitting on top of that event. Now let's start slicing. Let's fix a value of x and look along that slice where we obtain this function. Now what does that slice do? That slice tells us for that particular x what the possible values of y are going to be and how likely they are. If we integrate over all y's, what do we get? Integrating over all y's just gives us the marginal density of X. It's the calculation that we did here. By integrating over all y's, we find the marginal density of X. So the total area under that slice gives us the marginal density of X. And by looking at the different slices, we find how likely the different values of x are going to be. How about the conditional? If we're interested in the conditional of Y given X, how would you think about it? This refers to a universe where we are told that capital X takes on a specific value. So we put ourselves in the universe where this line has happened. There's still possible values of y that can happen. And this shape kind of tells us the relative likelihoods of the different y's. And this is indeed going to be the shape of the conditional distribution of Y given that value of X. On the other hand, the conditional distribution must add up to 1. So the total probability over all of the different y's in this universe, that total probability should be equal to 1. Here it's not equal to 1. The total area is the marginal density. To make it equal to 1, we need to divide by the marginal density, which is basically to renormalize this shape so that the total area under that slice, under that shape, is equal to 1. So we start with the joint. We take the slices. And then we adjust the slices so that every slice has an area underneath equal to 1. And this gives us the conditional. So for example, down here-- you cannot even see it in this diagram-- but after you renormalize it so that its total area is equal to 1, you get this sort of narrow spike that goes up. And so this is a plot of the conditional distributions that you get for the different values of x. Given a particular value of x, you're going to get this certain conditional distribution. So this picture is worth about as much as anything else in this particular chapter. Make sure you kind of understand exactly all these pieces of the picture.
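To make the slice-and-renormalize picture concrete, here is a small Python sketch. The density f(x, y) = x + y on the unit square is an assumed example, not one from the lecture; the sketch computes the marginal of X at a fixed point as the area under the slice, then renormalizes the slice to get the conditional density of Y given that value of X.

import numpy as np

# Assumed example density: f(x, y) = x + y on the unit square (it integrates to 1).
m = 1000
ys = (np.arange(m) + 0.5) / m            # grid of midpoints on [0, 1]
dy = 1 / m

def joint(x, y):
    return x + y

x0 = 0.3                                 # fix a value of x and look along that slice
slice_ = joint(x0, ys)

f_X_at_x0 = slice_.sum() * dy            # area under the slice = marginal density of X at x0
print(f_X_at_x0, x0 + 0.5)               # numerical value vs the exact marginal x + 1/2

cond = slice_ / f_X_at_x0                # renormalized slice = conditional density of Y given X = x0
print(cond.sum() * dy)                   # integrates to 1, as a conditional density must
print(cond[250], (x0 + ys[250]) / (x0 + 0.5))   # matches (x0 + y) / (x0 + 1/2) pointwise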
And finally, let's go, in the remaining time, through an example where we're going to throw in the bucket all the concepts and notations that we have introduced so far. So the example is as follows. We start with a stick that has a certain length. And we break it at a completely random location. And-- yes, this 1 should be l. OK. So it has length l. And we're going to break it at a random place. And we call that random place where we break it, we call it X. X can be anywhere, uniform distribution. So this means that X has a density that goes from 0 to l. I guess this capital L is supposed to be the same as the lower-case l. So that's the density of X. And since the density needs to integrate to 1, the height of that density has to be 1/l. Now, having broken the stick and given that we are left with this piece of the stick, I'm now going to break it again at a completely random place, meaning I'm going to choose a point where I break it uniformly over the length of the stick. What does this mean? And let's call Y the location where I break it. So Y is going to range between 0 and x. x is the stick that I'm left with. So I'm going to break it somewhere in between. So I pick a y between 0 and x. And of course, x is less than l. And I'm going to break it there. So y is uniform between 0 and x. What does that mean, that the density of y, given that you have already told me x, ranges from 0 to little x? If I told you that the first break happened at a particular x, then y can only range over this interval. And I'm assuming a uniform distribution over that interval. So we have this kind of shape. And that fixes for us the height of the conditional density. So what's the joint density of those two random variables? By the definition of conditional densities, the conditional was defined as the ratio of this divided by that. So we can find the joint density by taking the marginal and then multiplying by the conditional. This is the same formula as in the discrete case. This is our very familiar multiplication rule, but adjusted to the case of continuous random variables. So Ps become Fs. OK. So we do have a formula for this. What is it? It's 1/l-- that's the density of X -- times 1/x, which is the conditional density of Y. This is the formula for the joint density. But we must be careful. This is a formula that's not valid everywhere. It's only valid for the x's and y's that are possible. And the x's and y's that are possible are given by these inequalities. So x can range from 0 to l, and y can only be smaller than x. So this is the formula for the density on this part of our space. The density is 0 everywhere else. So what does it look like? It's basically a 1/x function. So it's sort of constant along that dimension. But as x goes to 0, your density goes up and can even blow up. It sort of looks like a sail that's raised and somewhat curved and has a point up there going to infinity. So this is the joint density. Now once you have in your hands a joint density, then you can answer in principle any problem. It's just a matter of plugging in and doing computations. How about calculating something like a conditional expectation of Y given a value of x? OK. That's a concept we have not defined so far. But how should we define it? In the reasonable way. We'll define it the same way as ordinary expectations except that since we're given some conditioning information, we should use the probability distribution that applies to that particular situation.
So in a situation where we are told the value of x, the distribution that applies is the conditional distribution of Y. So it's going to be the conditional density of Y given the value of x. Now, we know what this is. It's given by 1/x. So we need to integrate y times 1/x dy. And what should we integrate over? Well, given the value of x, y can only range from 0 to x. So this is what we get. And you do your integral, and you get that this is x/2. Is it a surprise? It shouldn't be. This is just the expected value of Y in a universe where X has been realized and Y is given by this distribution. Y is uniform between 0 and x. The expected value of Y should be the midpoint of this interval, which is x/2. Now let's do fancier stuff. Since we have the joint distribution, we should be able to calculate the marginal. What is the distribution of Y? After breaking the stick twice, how big is the little piece that I'm left with? How do we find this? To find the marginal, we just take the joint and integrate out the variable that we don't want. A particular y can happen in many ways. It can happen together with any x. So we consider all the possible x's that can go together with this y and average over all those x's. So we plug in the formula for the joint density from the previous slide. We know that it's 1/lx. And what's the range of the x's? So to find the density of Y for a particular y up here, I'm going to integrate over x's. The density is 0 here and there. The density is nonzero only in this part. So I need to integrate over x's going from here to there. So what's the "here"? This line goes up at the slope of 1. So this is the line x equals y. So if I fix y, it means that my integral starts from a value of x that is also equal to y. So where the integral starts from is at x equals y. And it goes all the way until the end of the length of our stick, which is l. So we need to integrate from little y up to l. So that's something that almost always comes up. It's not enough to have just this formula for integrating the joint density. You need to keep track of different regions. And if the joint density is 0 in some regions, then you exclude those regions from the range of integration. So the range of integration is only over those values where the particular formula is valid, the places where the joint density is nonzero. All right. The integral of 1/x dx, that gives you a logarithm. So we evaluate this integral, and we get an expression of this kind. So the density of Y has a somewhat unexpected shape. So it's a logarithmic function. And it goes this way. It's for y going all the way to l. When y is equal to l, the logarithm of 1 is equal to 0. But when y approaches 0, logarithm of something big blows up, and we get a shape of this form. OK. Finally, we can calculate the expected value of Y. And we can do this by using the definition of the expectation. So integral of y times the density of y. We already found what that density is, so we can plug it in here. And we're integrating over the range of possible y's, from 0 to l. Now this involves the integral for y log y, which I'm sure you have encountered in your calculus classes but maybe do not remember how to do it. In any case, you look it up in some integral tables or do it by parts. And you get the final answer of l/4. And at this point, you say, that's a really simple answer. Shouldn't I have expected it to be l/4? I guess, yes. 
I mean, when you break it once, the expected value of what you are left with is going to be 1/2 of what you started with. When you break it the next time, the expected length of what you're left with should be 1/2 of the piece that you are now breaking. So each time that you break it at random, you expected it to become smaller by a factor of 1/2. So if you break it twice, you are left something that's expected to be 1/4. This is reasoning on the average, which happens to give you the right answer in this case. But again, there's the warning that reasoning on the average doesn't always give you the right answer. So be careful about doing arguments of this type. Very good. See you on Wednesday.
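Here is a short Python sketch that checks this example by simulation; the stick length and sample size are arbitrary. It simulates the two uniform breaks, compares the sample mean of Y with l/4, and spot-checks the logarithmic marginal density at one point.

import numpy as np

rng = np.random.default_rng(0)
l = 1.0
n = 1_000_000
x = rng.uniform(0, l, size=n)        # first break point, uniform on (0, l)
y = rng.uniform(0, x)                # second break point, uniform on (0, x) given x

print(y.mean(), l / 4)               # sample mean of Y vs l/4

t = 0.2                              # spot-check the marginal density at one point
emp = np.mean(np.abs(y - t) < 0.005) / 0.01
print(emp, (1 / l) * np.log(l / t))  # empirical density vs (1/l) log(l/y)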
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Probability_Distribution_Function_PDF_of_X.txt
Hi. In this problem, we'll be looking at the PDF of the absolute value of x. So if we know a random variable, x, and we know its PDF, how can we use that information to help us find the PDF of another random variable-- the absolute value of x? And so throughout this problem, we'll define a new random variable called y. And we'll define y to be equal to the absolute value of x, just to make things simpler. So we'll do a couple of concrete examples, and then we'll try to generalize at the end. The first example that we'll deal with in part A is this PDF for x. So we're told that the PDF of x is 1/3 between negative 2 and 1, and 0 otherwise. And here's a picture of what it looks like. It's just a rectangle from negative 2 to 1. So now we want to find out what is the PDF of the absolute value of x, which we've called y. And at this point, it may be helpful to step back and think about this problem from the discrete point of view again. So if x were a discrete random variable, the problem would be, what is the probability that the absolute value of x is equal to, say, 1/2? Well, the probability that the absolute value of x is equal to 1/2-- that can occur in two different ways. One is that x itself is 1/2. Or x could be negative 1/2, in which case, the absolute value of x would still be 1/2. So those two events are mutually exclusive. And so to get the probability of either one of them happening, you can just add them up. And so the probability of the absolute value of x being 1/2 would have two contributions, one from x being 1/2, and one from x being negative 1/2. The analogous idea carries over to the continuous case, when you have a PDF. So now let's say that we're interested in the case where we want to know the PDF of y at 1/2. Well, that again, is going to have two contributions, one from where x is 1/2, and one from where x is minus 1/2. And so you can just imagine that each one of these values for y-- and remember, y has to be non-negative, because it's an absolute value-- has two contributions, one from the right side of 0, and one from the left, or negative, side of 0. So you can come up and write an algebraic expression for this, and we'll do that in Part C. But you can also look at this from a visual point of view. And you can take the PDF diagram itself and imagine transforming it to find out what the PDF of the absolute value of x would look like. So the way to do it would be you take what's on the negative side. You flip it over and take the mirror image, and then you stack it on top of what you have on the right-hand side, or the positive side. So take this, flip it over, and stack it on top. You can imagine just taking this block, flipping it over. And just think of it as like a Tetris block that's falling down from above. And it stacks on top of wherever it lands. So it'll turn it into something that looks like this. So there's already a block of height 1/3 from 0 to 1. That's from the original one. And now we take this, and flip it over, and drop it on top. Well, this part is going to fall on top of the segment from 0 to 1. And then this part gets flipped over and dropped over here. And it falls down here. And so the final PDF actually looks like this kind of staircase, where this is 2/3 now, because this has two contributions of 1/3 each, and this is 1/3. So that is the graphical way of approaching this. And for completeness, the PDF of y would be 2/3 for y between 0 and 1, 1/3 for y from 1 to 2, and 0 otherwise.
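Before moving on, here is a small Python sketch that checks the part A answer by simulation; the sample size and the probe points are arbitrary. It draws X uniformly on the interval from minus 2 to 1 and compares the empirical density of the absolute value with the two-contribution formula f_X(y) plus f_X(minus y).

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 1, size=1_000_000)   # X uniform on [-2, 1], so its PDF is 1/3 there
y = np.abs(x)

def f_X(t):
    return np.where((t >= -2) & (t <= 1), 1 / 3, 0.0)

def f_Y(t):
    return f_X(t) + f_X(-t)              # the two contributions, from +t and -t

for t in [0.5, 1.5]:
    emp = np.mean(np.abs(y - t) < 0.01) / 0.02   # empirical density in a small window around t
    print(t, emp, f_Y(t))                        # should be close to 2/3, then 1/3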
All right, so let's move on to part B, and get some more practice. In part B, we're given that the PDF of x now is 2 times e to the negative 2x for x positive, and 0 otherwise. Now you may just recognize this as an exponential random variable with a parameter of 2. And again, we can graph this and see what it looks like. And it turns out that it's going to start out at 2 and fall off exponentially. So in this case, this is actually quite simple. Because if you look at it, x is already positive. It doesn't have any negative parts. So in fact, the absolute value of x is the same as x itself, because x is never negative. And so y is just the same thing as x. And so in this case, actually, the PDF of y is exactly the same as the PDF of x. It's just 2e to the minus 2y, for y positive and zero otherwise. You can also see this graphically, because to the left of 0, the negative part, there is no PDF. The PDF is 0. And so if you were to take this, flip it over, and drop it on top, you wouldn't get anything, because there's nothing there. And so the entire PDF, even after you take the absolute value, is just the original one. So to generalize, what I said at the beginning was that, remember, in the discrete case, if you wanted the probability that the absolute value of a random variable equals something, that would just be the probability that the random variable equals that value of little x, or the random variable equals negative little x. In either of those two cases, the absolute value would equal x. So you get those two contributions. And so to generalize in the continuous case with PDFs, you get something that looks very similar. So in this case, the PDF of y is just the PDF of x at y. So this is the case where x is just equal to y, plus the PDF of x evaluated at negative y. So you, again, have both of these two contributions. And we can rewrite this top one to make it look more similar. So the PMF of some discrete random variable y, where y is equal to the absolute value of x, would be the PMF of x evaluated at y, plus the PMF of x evaluated at negative y. So in both the discrete and continuous cases, you have the same thing. So the overall summary of this problem is that, when you take a transformation-- in this case, an absolute value-- you can reason about it and figure out how to decompose that into arguments about the original random variable, just plain old x. And for the specific case of the absolute value, it just becomes taking a mirror image and popping it on top of what you originally had. So remember, you always have these two contributions. And so if you ever have a random variable that you need to take an absolute value of, you don't have to be scared. All you have to do is consider both of these contributions and add them up, and you have the PDF that you want. So I'll see you next time.
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
An_Inference_Example.txt
Hi. In this session, we're going to cover a nice review problem that will look at how to infer one random variable based on another. And in this problem, we're given two random variables-- X and Y-- and we're also given their joint pdf, which we're told is a constant 2/3 within the region bounded by these orange lines. And outside of the region, the joint pdf is 0. So the first thing we're going to look at, or the first thing that we're asked to do, is find the LMS estimator of Y based on X. Now, remember that LMS estimator is really just a conditional expectation. So the LMS estimator of Y based on X is the conditional expectation of Y, given X. Now, when we have a plot of the joint pdf and we're dealing with these two random variables, and especially when the joint pdf is constant like this, it's often easy to calculate this conditional expectation visually. So what we really need to do is just say, given any particular value of X, what is the conditional expectation of Y? So what we can do is we can just pick some values of X and see visually what that conditional expectation is. So for example, if X is 1/2, given that X is 1/2, and since this whole joint pdf is uniform, then the conditional slice of Y will be from here to here. And that slice, the conditional distribution of Y, given that X is 1/2, will also be uniform. So it'll be uniform from here to here. And if it's uniform, we know that the conditional expectation will just be the midpoint here. And so that would be what the conditional expectation of Y would be, given that X is 1/2. And we could do the same thing for X equals 1. And we'll see that again, because everything is uniform, this slice is also going to be uniform. And so the conditional expectation will again be the midpoint, which is there. And then if we just look at it within this region, it's always going to be the midpoint. And so we get that the conditional expectation of Y, given X, will just look like that line, which you can think of as just bisecting this angle formed by these two parts of the region. But things are a little bit different, though, when we move to the region where X is between 1 and 2. Between 1 and 2, say at 1 and 1/2, this line doesn't continue. Because now, the slice of Y goes from here to here, and again, it's still uniform. So the midpoint would be there. And similarly for X equals 2, it would be here. And so for X between 1 and 2, the conditional expectation actually looks like this. So you see that there's actually two linear parts of it, but there's a kink at X equals 1. And so by looking at this visually and taking advantage of the fact that everything is uniform, we can pretty easily figure out what this conditional expectation is. So now, let's actually just write it out algebraically. So for X between 0 and 1, we said that it's this line, which if we look at it, that's just 1/2 of X. Now, this is for X between 0 and 1. And if X is between 1 and 2, it's going to be this line, which is a slope of 1. And if we extend this down, it hits the y-axis at negative 1/2. So it's X minus 1/2, if X is between 1 and 2. And otherwise, it's undefined. So we'll focus on these two cases here. Now, the second part of the question, now that we know what the LMS estimator is, we're asked to find what is the conditional mean squared error of this estimator? So we want to know how good is it. And one way of capturing that is to look at the mean squared error. And so recall that the conditional mean squared error is given by this expression.
So what we're saying is this is what we estimate Y to be. This is what y really is, so this difference is how wrong, or the error in our estimate. We square it, because otherwise, positive and negative errors might cancel each other out, so we square it. And then this just looking at each individual value of x for now. So this is why it's the conditional mean squared error. So how do we calculate this? Well, remember that this g of X, we said the LMS estimator is just a conditional expectation. So it's just expectation of Y, given X. Well, then if you look at this, what this reminds you of, it reminds you of the definition of what a conditional variance is. A variance is just, you take the random variable, subtract its mean, square it, and take the expectation of that. This is no different, except that everything is now the conditional world of X. So this is actually the conditional variance of Y, given X is little x. What is the conditional variance of Y, given that X is little x? Now, we can again go back to this plot to try to help us out. We can split this up into regions again. So just take some little x as an example and see what the variance is. So if little x is 1/2, then we know that the conditional distribution of Y would be uniform, we said, from 0 up to here. Well, that point is this from 0 to 1/2. And remember, the variance of a uniform distribution is just the width of the uniform distribution squared, divided by 12. And so in this case, the width would be 1/2 squared over 12. And in general, for the region of X between 0 and 1, the width of the conditional distribution of Y will always be X, because the width will go from 0 to wherever X is. So because of that, the conditional variance will just be X squared, the width squared, over 12, when X is between 0 and 1. Now, let's think about the other case, where X is between 1 and 2. Well, if X is between 1 and 2, we're over here. And now, if we take the conditional distribution of Y, it's again uniform. But the width now, instead of varying with Y, it's always going to be the same width. Each of these slices have the same width, and the width goes from here-- this is X minus 1, and that's X. So if the width is always going to be a constant of 1. And so this variance is going to be 1/12. And from that, we get our answer for the conditional mean squared error. Now, part c asks us to find the mean squared error, which is given by this expression. And we'll see that it looks very similar to this, which was the conditional mean squared error. And now, given what we know from part b, this is easy to calculate. We can just apply total expectation, because this is just equal to the integral of the conditional mean squared error. And then we need to also multiply this by the pf of x, and then integrate over X. And that integral will should be from X equals 0 to 2, because that's the only range that applies for x, given this joint pdf. Now, in order to do this first, though, we need to figure out what the pdf of X is. In order to do that, we can go back to our original joint pdf of X and Y and marginalize it. So marginalizing, you could think of it as taking this joint pdf and collapsing it onto the x-axis so that you take everything and integrate out Y. Now to do that, let's do that up here. We can split it up into two sections. So the section of X between 0 and 1, we integrate the joint pdf from Y equals 0 to Y equals X, which is this portion of this line. So we integrate Y. 
The joint pdf is 2/3, and we integrate Y out from Y equals 0 to X. And then for the portion of X from 1 to 2, we again integrate Y out. Now we integrate Y from X minus 1 up to X. So this is X between 0 and 1, and this is X between 1 and 2. So we just do a little bit of calculus, and we get that this is going to be 2/3 X when X is between 0 and 1. And it's going to be 2/3 when X is between 1 and 2. So now that we have what the marginal pdf of X is, we can plug that into this, and plug in what we had from part b, and then calculate what this actually is. So remember, we need to take care of these two cases, these two regions-- X between 0 and 1, and X between 1 and 2. So the conditional mean squared error for X between 0 and 1 is X squared over 12. So between 0 and 1, this first part is X squared over 12. The pdf of X in that same region is 2/3 x. And we integrate that in the region from x equals 0 to 1. And then, we also have the second region which is X from 1 to 2. In that region, the conditional mean squared error from part b is 1/12. The marginal pdf of X is 2/3, and we do this integral. And if you just carry out some calculus here, you'll get that the final answer is equal to 5/72. Now, the last part of this question asks us, is this mean squared error the same thing-- does it equal the expectation of the conditional variance? And it turns out that yes, it does. And to see that, we can just take this, and use the law of iterated expectations, because iterated expectations tells us this is in fact equal to the expectation of Y minus g of X squared, given X. That's just applying the law of iterated expectations. And then, if we look at this, this part that's inside is exactly equal to the conditional variance of Y, given X. And so these two are, in fact, the same. In part c, we calculated what the marginal pdf of X is, and it'll actually be used later on in this problem. So for future reference, let's just write it down here in this corner. Now, so far in this problem, we've looked at the LMS estimator. And of course, there are other estimators that you can use as well. And now in part d, let's look at the linear LMS estimator. Now remember, the linear LMS estimator is special, because it forces the estimator to have a linear relationship. So the estimator is going to be a linear function of X. Now, compare that to what the LMS estimator was in this case. It was two linear pieces, but there was a kink. And so the entire estimator wasn't actually linear in X. Now, the LLMS estimator, or the linear LMS estimator, will give us the linear estimator. It's going to be a linear function of X. And we know that we have a formula for this. It is the expectation of Y plus the covariance of X and Y over the variance of X, times X minus the expectation of X. All right, so in order to calculate what this is, we just need to calculate four things. Now, let's start with this last one, the expected value of X. To calculate the expected value of X, we just use a formula. And from before, we know what the pdf of X is. And so we know that this is just going to be X times fx of x dx. And in particular, this will give us that from 0 to 1, it's going to be X times the pdf of X, which is 2/3 X, so it's 2/3 X squared. And from 1 to 2, it's going to be X times the pdf of X, which is just 2/3, so it's 2/3 X dx. And when you calculate this out, you'll get that it is equal to 11/9. Now, let's calculate the variance of X next.
In order to calculate that, let's use the formula that variance is equal to the expectation of X squared minus the expectation of X quantity squared. We had the expectation of X, so let's calculate what the expectation of X squared is. Now, it's the same idea. Instead, we have X squared times f of X dx. And we'll get the same sort of formula. We'll split it up again into two different parts from X equals 0 to X equals 1. It's going to be X squared times pdf, so it's 2/3 X cubed dx. And then from X equals 1 to 2, it's going to be X squared times 2/3. So it's 2/3 X squared dx. And when we calculate this out, we'll get that it's equal to 31/18. From that, we know that the variance is going to be equal to expectation of X squared minus expectation of X quantity squared. Now, expectation of X squared is 31/18. Expectation of X is 11/9. And when we calculate this, we get that the variance is equal to 37/162. So now we have this, and we have that. Let's calculate what expectation of Y is. Expectation of Y, let's calculate it using the law of iterated expectations. The law of iterated expectations tells us that this is equal to the expectation of Y conditioned on X. Now, we already know what expectation of Y conditioned on X is. That was the LMS estimator that we calculated earlier. It's this. So now we just need to calculate this out, and we can do that. So we know that in the range from X between 0 and 1, it's equal to 1/2 X. So in the range from 0 to 1, it's equal to 1/2 X. But then, we have to use total expectation, so we have to multiply by the pdf of X in that region which is 2/3 X dx. And then in the range from X equals 1 to 2, this conditional expectation is X minus 1/2. And the pdf of X in that region is 2/3. Now, when we calculate out this value, we'll get that it's equal to 7/9. And now, the last piece is the covariance of X and Y. Remember, the covariance, we can calculate that as the expectation of X times Y minus the expectation of X times the expectation of Y. We already know the expectation of X and the expectation of Y, so we just need to calculate the expectation of X times Y, the product of the two. And for that, we'll use the definition, and we'll use the joint pdf that we have. So this is going to be a double integral of X times Y times the joint pdf. And the tricky part here is just figuring out what these limits are. So we'll integrate in this order-- X and Y. Now, let's split this up. So let's focus on splitting X up. So for X between 0 and 1, we just need to figure out what's the rate right range of Y to integrate over such that this is actually non-zero. Because remember, the joint pdf is easy. It's just a constant 2/3. But it's only a constant 2/3 within this region. So the difficult part is just figuring out what the limits are in order to specify that region. So for X between 0 and 1, Y has to be between 0 and X, because this line is Y equals X. So we need to integrate from 0 to X-- X times Y times the joint pdf, which is 2/3. And now, let's do the other part, which is X from 1 to 2. Well, if X is from 1 to 2, in order to fall into this region, Y has to be between X minus 1 and X. So we integrate Y from X minus 1 to X. Against, it's X times Y times the joint pdf, which is 2/3. And now, once we have this set up, the rest of it we can just do some calculus. And what we find is that the final answer is equal to 41/36. 
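All of the moments computed above can be spot-checked with a short Monte Carlo over the region where the joint PDF equals 2/3. This sketch is not part of the recitation; it assumes NumPy and uses rejection sampling from the bounding box [0, 2] x [0, 2], which is a convenient but arbitrary choice:

```python
import numpy as np

# The joint PDF is constant 2/3 on {0<x<1, 0<y<x} union {1<x<2, x-1<y<x},
# so points uniform over that region follow the joint distribution.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 2, size=(4_000_000, 2))
x, y = pts[:, 0], pts[:, 1]
inside = ((x <= 1) & (y <= x)) | ((x > 1) & (y >= x - 1) & (y <= x))
x, y = x[inside], y[inside]

print(x.mean())            # ~ 11/9  = 1.222
print(x.var())             # ~ 37/162 = 0.228
print(y.mean())            # ~ 7/9   = 0.778
print((x * y).mean())      # ~ 41/36 = 1.139
print(np.cov(x, y)[0, 1])  # ~ 61/324 = 0.188
```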
Now, what that tells us is the covariance of X and Y, which is just the expectation of X times Y, the product, minus the expectation of X times the expectation of Y. We know the expectation of X times Y now. It's 41/36. Expectation of X is 11/9. Expectation of Y is 7/9. So when we substitute all of that in, we get that this covariance is 61/324. All right, so now we have everything we need. Expectation of Y is here. Covariance is here. Variance of X is here. And expectation of X is here. So let's substitute that in, and we can figure out what the actual LLMS estimator is. So expectation of Y we know is 7/9. Expectation of X is 11/9. And when you divide the covariance, which is 61/324, by the variance, which is 37/162, we'll get 61/74. And so that is the LLMS estimator that we calculated. And notice that it is, in fact, linear in X. So let's plot that and see what it looks like. So it's going to be a line, and it's going to look like this. So at X equals 2, it's actually a little bit below 1 and 1/2, which is what the LMS estimator would be. At X equals 1, it's actually a little bit above 1/2, which is what the LMS estimator would be. And then it crosses 0 around roughly 1/4, and it drops actually below 0. So if we connect the dots, it's going to look something like this. So notice that it's actually not too far away from the LMS estimator here. But it doesn't have the kink because it is a line. And note also that it actually drops below. So when X is very small, you actually estimate negative values of Y, which is actually impossible, given the joint pdf that we're given. And that is sometimes a feature or artifact of the linear LMS estimator, that you'll get values that don't necessarily seem to make sense. So now that we've calculated the linear LMS estimator in part d, which is this, and the LMS estimator in part a, which is this, we've also compared them visually. The linear LMS estimator is the one in pink, the straight line. And the LMS estimator is the one in black with the kink. It's an interesting question to now ask, which one of these is better? And in order to judge that, we need to come up with some sort of criterion to compare the two with. And the one that we're going to look at in part e is the mean squared error. Which one gives the lower mean squared error? And so specifically, we're going to ask ourselves which of these two estimators gives us the smaller mean squared error? Is it the linear LMS estimator given by l of X? Or is it the LMS estimator, given by g of X? Now, we know that the LMS estimator is the one that actually minimizes this. The LMS estimator is designed to minimize the mean squared error. And so we know that given any estimator of X, this one will have the smallest mean squared error. And so the linear LMS estimator, its mean squared error has to be at least as large as the LMS estimator's. And the last part of the question now asks us to look at a third type of estimator, which is the MAP (maximum a posteriori probability) estimator. Now, we want to ask, why is it that we haven't been using the MAP estimator in this problem? Well, remember what the MAP estimator does. In this case, what we would do is it would take the conditional distribution of Y given any value of X. And then it would pick the value of Y that gives the highest value in the conditional distribution. And that would be the MAP estimate of Y.
But the problem in this case is that if you take any slice here, so we condition on any value of X, any of these slices, if you just take this out and look at it, it's going to be uniform. This is what the conditional distribution of Y given X is. It's going to be uniform between 0 and X. Now, what the MAP rule tells us is we're going to pick the value of Y that gives us the highest point in this conditional distribution. You can think of it as a posterior distribution. Now, what's the problem here? Well, every single point gives us exactly the same value for this conditional distribution. And so there's no unique MAP rule. Every single value of Y has just the same conditional density. So there's no sensible way of choosing a value based on the MAP rule in this case. But compare that with the LMS estimator, which is just the conditional expectation. In that case, we can always find a conditional expectation. In this case, the conditional expectation is the midpoint, which is X/2, just as we had found in part a. OK, so in this problem, we reviewed a bunch of different ideas in terms of inference, and we took a joint pdf of X and Y, and we used that to calculate the LMS estimator, the linear LMS estimator. We compared the two, and then we also looked at why in this case, the MAP estimator doesn't really make sense. All right, so I hope that was helpful, and we'll see you next time.
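For completeness, here is a numerical comparison of the two estimators discussed above. The sampling scheme is the same rejection-sampling sketch used earlier (an assumption of this check, not something from the recitation); the estimator formulas are the ones derived in parts (a) and (d):

```python
import numpy as np

# Sample (X, Y) uniformly over the region where the joint PDF is 2/3.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 2, size=(4_000_000, 2))
x, y = pts[:, 0], pts[:, 1]
inside = ((x <= 1) & (y <= x)) | ((x > 1) & (y >= x - 1) & (y <= x))
x, y = x[inside], y[inside]

g = np.where(x <= 1, x / 2, x - 0.5)    # LMS estimator E[Y | X], with the kink
l = 7/9 + (61/74) * (x - 11/9)          # linear LMS estimator

print(np.mean((y - g) ** 2))   # ~ 5/72 = 0.0694, the smallest possible MSE
print(np.mean((y - l) ** 2))   # slightly larger, as argued in part (e)
```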
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
12_Iterated_Expectations.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN TSITSIKLIS: So today we're going to finish with the core material of this class. That is the material that has to do with probability theory in general. And then for the rest of the semester we're going to look at some special types of models, talk about inference. Well, there's also going to be a small module of core material coming later. But today we're basically finishing chapter four. And what we're going to do is we're going to look at a somewhat familiar concept, the concept of the conditional expectation. But we're going to look at it from a slightly different angle, from a slightly more sophisticated angle. And together with the conditional expectation we will also talk about conditional variances. It's something that we're going to denote this way. And we're going to see what they are, and there are some subtle concepts that are involved here. And we're going to apply some of the tools we're going to develop to deal with a special type of situation in which we're adding random variables. But we're adding a random number of random variables. OK, so let's start talking about conditional expectations. I guess you know what they are. Suppose we are in the discrete world. x and y are discrete random variables. We defined the conditional expectation of x given that I told you the value of the random variable y. And the way we define it is the same way as an ordinary expectation, except that we're using the conditional PMF. So we're using the probabilities that apply to the new universe where we are told the value of the random variable y. So this is still a familiar concept so far. If we're dealing with a continuous random variable x, the formula is the same, except that here we have an integral, and we have to use the conditional density function of x. Now what I'm going to do, I want to introduce it gently through the example that we talked about last time. So last time we talked about having a stick that has a certain length. And we take that stick, and we break it at some point that we choose uniformly at random. And let's denote by y the place where we chose to break it. Having chosen y, then we're left with a piece of the stick. And I'm going to choose a place to break it once more uniformly at random between 0 and y. So this is the second place at which we are going to break it, and we call that place x. OK, so what's the conditional expectation of x if I tell you the value of y? I tell you that capital Y happens to take a specific numerical value. So this capital Y is now a specific numerical value, x is chosen uniformly over this range. So the expected value of x is going to be half of this range between 0 and y. So the conditional expectation is little y over 2. The important thing to realize here is that this quantity is a number. I told you that the random variable took a certain numerical value, let's say 3.5. And then you tell me given that the random variable took the numerical value 3.5 the expected value of x is 1.75. So this is an equality between numbers. On the other hand, before you do the experiment you don't know what y is going to turn out to be.
So this little y is the numerical value that has been observed when you start doing the experiments and you observe the value of capital Y. So in some sense this quantity is not known ahead of time, it is random itself. So maybe we can start thinking of it as a random variable. So to put it differently, before we do the experiment I ask you what's the expected value of x given y? You're going to answer me well I don't know, it depends on what y is going to turn out to be. So the expected value of x given y itself can be viewed as a random variable, because it depends on the random variable capital Y. So hidden here there's some kind of statement about random variables instead of numbers. And that statement about random variables, we write it this way. By thinking of the expected value, the conditional expectation, as a random variable instead of a number. It's a random variable when we do not specify a specific number, but we think of it as an abstract object. The expected value of x given the random variable y is the random variable y over 2 no matter what capital Y turns out to be. So we turn and take a statement that deals with equality of two numbers, and we make it a statement that's an equality between two random variables. OK so this is clearly a random variable because capital Y is random. What exactly is this object? I didn't yet define it for you formally. So let's now give the formal definition of this object that's going to be denoted this way. The conditional expectation of x given the random variable y is a random variable. Which random variable is it? It's the random variable that takes this specific numerical value whenever capital Y happens to take the specific numerical value little y. In particular, this is a random variable, which is a function of the random variable capital Y. In this instance, it's given by a simple formula in terms of capital Y. In other situations it might be a more complicated formula. So again, to summarize, it's a random. The conditional expectation can be thought of as a random variable instead of something that's just a number. So in any specific context when you're given the value of capital Y the conditional expectation becomes a number. This is the realized value of this random variable. But before the experiment starts, before you know what capital Y is going to be, all that you can say is that the conditional expectation is going to be 1/2 of whatever capital Y turns out to be. This is a pretty subtle concept, it's an abstraction, but it's a useful abstraction. And we're going to see today how to use it. All right, I have made the point that the conditional expectation, the random variable that takes these numerical values is a random variable. If it is a random variable this means that it has an expectation of its own. So let's start thinking what the expectation of the conditional expectation is going to turn out to be. OK, so the conditional expectation is a random variable, and in general it's some function of the random variable y that we are observing. In terms of numerical values if capital Y happens to take a specific numerical value then the conditional expectation also takes a specific numerical value, and we use the same function to evaluate it. The difference here is that this is an equality of random variables, this is an equality between numbers. Now if we want to calculate the expected value of the conditional expectation we're basically talking about the expected value of a function of a random variable. 
And we know how to calculate expected values of a function. If we are in the discrete case, for example, this would be a sum over all y's of the function who's expected value we're taking times the probability that y takes on a specific numerical value. OK, but let's remember what g is. So g is the numerical value of the conditional expectation of x with y. And now when you see this expression you recognize it. This is the expression that we get in the total expectation theorem. Did I miss something? Yes, in the total expectation theorem to find the expected value of x, we divide the world into different scenarios depending on what y happens. We calculate the expectation in each one of the possible worlds, and we take the weighted average. So this is a formula that you have seen before, and you recognize that this is the expected value of x. So this is a longer, more detailed derivation of what I had written up here, but the important thing to keep in mind is the moral of the story, the punchline. The expected value of the conditional expectation is the expectation itself. So this is just our total expectation theorem, but written in more abstract notation. And it comes handy to have this more abstract notation, as as we're going to see in a while. OK, we can apply this to our stick example. If we want to find the expected value of x how much of the stick is left at the end? We can calculate it using this law of iterated expectations. It's the expected value of the conditional expectation. We know that the conditional expectation is y over 2. So expected value of y is l over 2, because y is uniform so we get l over 4. So this gives us the same answer that we derived last time in a rather long way. All right, now that we have mastered conditional expectations, let's raise the bar a little more and talk about conditional variances. So the conditional expectation is the mean value, or the expected value, in a conditional universe where you're told the value of y. In that same conditional universe you can talk about the conditional distribution of x, which has a mean-- the conditional expectation-- but the conditional distribution of x also has a variance. So we can talk about the variance of x in that conditional universe. The conditional variance as a number is the natural thing. It's the variance of x, except that all the calculations are done in the conditional universe. In the conditional universe the expected value of x is the conditional expectation. This is the distance from the mean in the conditional universe squared. And we take the average value of the squared distance, but calculate it again using the probabilities that apply in the conditional universe. This is an equality between numbers. I tell you the value of y, once you know that value for y you can go ahead and plot the conditional distribution of x. And for that conditional distribution you can calculate the number which is the variance of x in that conditional universe. So now let's repeat the mental gymnastics from the previous slide, and abstract things, and define a random variable-- the conditional variance. And it's going to be a random variable because we leave the numerical value of capital Y unspecified. So ahead of time we don't know what capital Y is going to be, and because of that we don't know ahead of time what the conditional variance is going to be. So before the experiment starts if I ask you what's the conditional variance of x? 
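The stick example lends itself to a quick simulation. The sketch below (assuming NumPy; the length L = 1 is an arbitrary choice) checks both that E[X | Y] is about Y/2 and that E[X] comes out near L/4, as the law of iterated expectations predicts:

```python
import numpy as np

# Break a stick of length L = 1 at a uniform point Y, then break the remaining
# piece at a point X uniform on (0, Y).
rng = np.random.default_rng(0)
L = 1.0
y = rng.uniform(0, L, size=1_000_000)
x = rng.uniform(0, y)                  # X | Y = y is uniform on (0, y)

print(x.mean())                        # ~ L/4 = 0.25, law of iterated expectations

# E[X | Y] should be Y/2: check it on a thin slice of Y values near 0.6.
slice_ = np.abs(y - 0.6) < 0.01
print(x[slice_].mean())                # ~ 0.3, i.e. y/2 for y near 0.6
```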
You're going to tell me well I don't know, It depends on what y is going to turn out to be. It's going to be something that depends on y. So it's a random variable, which is a function of y. So more precisely, the conditional variance when written in this notation just with capital letters, is a random variable. It's a random variable whose value is completely determined once you learned the value of capital Y. And it takes a specific numerical value. If capital Y happens to get a realization that's a specific number, then the variance also becomes a specific number. And it's just a conditional variance of y over x in that universe. All right, OK, so let's continue what we did in the previous slide. We had the law of iterated expectations. That told us that expected value of a conditional expectation is the unconditional expectation. Is there a similar rule that might apply in this context? So you might guess that the variance of x could be found by taking the expected value of the conditional variance. It turns out that this is not true. There is a formula for the variance in terms of conditional quantities. But the formula is a little more complicated. If involves two terms instead of one. So we're going to go quickly through the derivation of this formula. And then, through examples we'll try to get some interpretation of what the different terms here correspond to. All right, so let's try to prove this formula. And the proof is sort of a useful exercise to make sure you understand all the symbols that are involved in here. So the proof is not difficult, it's 4 and 1/2 lines of algebra, of just writing down formulas. But the challenge is to make sure that at each point you understand what each one of the objects is. So we go into formula for the variance affects. We know in general that the variance of x has this nice expression that we often use to calculate it. The expected value of the squared of the random variable minus the mean squared. This formula, for the variances, of course it should apply to conditional universes. I mean it's a general formula about variances. If we put ourselves in a conditional universe where the random variable y is given to us the same math should work. So we should have a similar formula for the conditional variances. It's just the same formula, but applied to the conditional universe. The variance of x in the conditional universe is the expected value of x squared-- in the conditional universe-- minus the mean of x-- in the conditional universe-- squared. So this formula looks fine. Now let's take expected values of both sides. Remember the conditional variance is a random variable, because its value depends on whatever realization we get for capital Y. So we can take expectations here. We get the expected value of the variance. Then we have the expected value of a conditional expectation. Here we use the fact that we discussed before. The expected value of a conditional expectation is the same as the unconditional expectation. So this term becomes this. And finally, here we just have some weird looking random variable, and we take the expected value of it. All right, now we need to do something about this term. Let's use the same rule up here to write down this variance. So variance of an expectation, that's kind of strange, but you remember that the conditional expectation is random, because y is random. So this thing is a random variable, so this thing has a variance. What is the variance of this thing? 
It's the expected value of the thing squared minus the square of the expected value of the thing. Now what's the expected value of that thing? By the law of iterated expectations, once more, the expected value of this thing is the unconditional expectation. And that's why here I put the unconditional expectation. So I'm using again this general rule about how to calculate variances, and I'm applying it to calculate the variance of the conditional expectation. And now you notice that if you add these two expressions c and d we get this plus that, which is this. It's equal to-- these two terms cancel, we're left with this minus that, which is the variance of x. And that's the end of the proof. This one of those proofs that do not convey any intuition. This, as I said, it's a useful proof to go through just to make sure you understand the symbols. It starts to get pretty confusing, and a little bit on the abstract side. So it's good to understand what's going on. Now there is intuition behind this formula, some of which is better left for later in the class when we talk about inference. The idea is that the conditional expectation you can interpret it as an estimate of the random variable that you are trying to-- an estimate of x based on measurements of y, you can think of these variances as having something to do with an estimation error. And once you start thinking in those terms an interpretation will come about. But again as I said this is better left for when we start talking about inference. Nevertheless, we're going to get some intuition about all these formulas by considering a baby example where we're going to apply the law of iterated expectations, and the law of total variance. So the baby example is that we do this beautiful experiment of giving a quiz to a class consisting of many sections. And we're interested in two random variables. So we have a number of students, and they're all allocated to sections. The experiment is that I pick a student at random, and I look at two random variables. One is the quiz score of the randomly selected student, and the other random variable is the section number of the student that I have selected. We're given some statistics about the two sections. Section one has 10 students, section two has 20 students. The quiz average in section one was 90. Quiz average in section two was 60. What's the expected value of x? What's the expected quiz score if I pick a student at random? Well, each student has the same probability of being selected. I'm making that assumption out of the 30 students. I need to add the quiz scores of all of the students. So I need to add the quiz scores in section one, which is 90 times 10. I need to add the quiz scores in that section, which is 60 times 20. And we find that the overall average was 70. So this is the usual unconditional expectation. Let's look at the conditional expectation, and let's look at the elementary version where we're talking about numerical values. If I tell you that the randomly selected student was in section one what's the expected value of the quiz score of that student? Well, given this information, we're picking a random student uniformly from that section in which the average was 90. The expected value of the score of that student is going to be 90. So given the specific value of y, the specific section, the conditional expectation or the expected value of the quiz score is a specific number, the number 90. 
Similarly for the second section the expected value is 60, that's the average score in the second section. This is the elementary version. What about the abstract version? In the abstract version the conditional expectation is a random variable because it depends. In which section is the student that I picked? And with probability 1/3, I'm going to pick a student in the first section, in which case the conditional expectation will be 90, and with probability 2/3 I'm going to pick a student in the second section. And in that case the conditional expectation will take the value of 60. So this illustrates the idea that the conditional expectation is a random variable. Depending on what y is going to be, the conditional expectation is going to be one or the other value with certain probabilities. Now that we have the distribution of the conditional expectation we can calculate the expected value of it. And the expected value of such a random variable is 1/3 times 90, plus 2/3 times 60, and it comes out to equal 70. Which miraculously is the same number that we got up there. So this tells you that you can calculate the overall average in a large class by taking the averages in each one of the sections and weighting each one of the sections according to the number of students that it has. So this section had an average of 90 but only 1/3 of the students, so it gets a weight of 1/3. So the law of iterated expectations, once more, is nothing too complicated. It's just that you can calculate the overall class average by looking at the section averages and combining them. Now since the conditional expectation is a random variable, of course it has a variance of its own. So let's calculate the variance. How do we calculate variances? We look at all the possible numerical values of this random variable, which are 90 and 60. We look at the difference of those possible numerical values from the mean of this random variable, and the mean of that random variable, we found that it's 70. And then we weight the different possible numerical values according to their probabilities. So with probability 1/3 the conditional expectation is 90, which is 20 away from the mean. And we get this squared distance. With probability 2/3 the conditional expectation is 60, which is 10 away from the mean, has this squared distance and gets weighted by 2/3, which is the probability of 60. So you do the numbers, and you get the value for the variance equal to 200. All right, so now we want to move towards using that more complicated formula involving the conditional variances. OK, suppose someone goes and calculates the variance of the quiz scores inside each one of the sections. So someone gives us these two pieces of information. In section one we take the differences from the mean in that section, and let's say that the variance turns out to be a number equal to 10, and similarly in the second section. So these are the variances of the quiz scores inside individual sections. The variance in one conditional universe, the variance in the other conditional universe. So if I pick a student in section one and I don't tell you anything more about the student, what's the variance of the random score of that student? The variance is 10. I know y, but I don't know the student. So the score is still a random variable in that universe. It has a variance, and that's the variance. Similarly, in the other universe, the variance of the quiz scores is this number, 20. Once more, this is an equality between numbers. I have fixed the specific value of y.
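A tiny numerical check of the section example so far, treating E[X | Y] as a random variable that equals 90 with probability 1/3 and 60 with probability 2/3 (this illustration assumes NumPy and is not part of the lecture):

```python
import numpy as np

# E[X | Y] takes the value 90 (section 1) w.p. 1/3 and 60 (section 2) w.p. 2/3.
probs = np.array([1/3, 2/3])
cond_exp = np.array([90.0, 60.0])

overall_mean = probs @ cond_exp
print(overall_mean)                              # 70: law of iterated expectations
print(probs @ (cond_exp - overall_mean) ** 2)    # 200: var(E[X | Y])
```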
So I put myself in a specific universe, I can calculate the variance in that specific universe. If I don't specify a numerical value for capital Y, and say I don't know what Y is going to be, it's going to be random. Then what kind of section variance I'm going to get itself will be random. With probability 1/3, I pick a student in the first section in which case the conditional variance given what I have picked is going to be 10. Or with probability 2/3 I pick y equal to 2, and I place myself in that universe. And in that universe the conditional variance is 20. So you see again from here that the conditional variance is a random variable that takes different values with certain probabilities. And which value it takes depends on the realization of the random variable capital Y. So this happens if capital Y is one, this happens if capital Y is equal to 2. Once you have something of this form-- a random variable that takes values with certain probabilities-- then you can certainly calculate the expected value of that random variable. Don't get intimidated by the fact that this random variable, it's something that's described by a string of eight symbols, or seven, instead of just a single letter. Think of this whole string of symbols there as just being a random variable. You could call it z for example, use one letter. So z is a random variable that takes these two values with these corresponding probabilities. So we can talk about the expected value of Z, which is going to be 1/3 times 10, 2/3 times 20, and we get a certain number from here. And now we have all the pieces to calculate the overall variance of x. The formula from the previous slide tells us this. Do we have all the pieces? The expected value of the variance, we just calculated it. The variance of the expected value, this was the last calculation in the previous slide. We did get a number for it, it was 200. You add the two, you find the total variance. Now the useful piece of this exercise is to try to interpret these two numbers, and see what they mean. The variance of x given y for a specific y is the variance inside section one. This is the variance inside section two. The expected value is some kind of average of the variances inside individual sections. So this term tells us something about the variability of the scores, how widely spread they are within individual sections. So we have three sections, and the scores happen to be-- OK, let's say the sections are really different. So here you have undergraduates and here you have post-doctoral students. And these are the quiz scores, that's section one, section two, section three. Here's the mean of the first section. And the variance has something to do with the spread. The variance in the second section has something to do with the spread, similarly with the third section. And the expected value of the conditional variances is some weighted average of the three variances that we get from individual sections. So variability within sections definitely contributes something to the overall variability of the scores. But if you ask me about the variability over the entire class there's a second effect. That has to do with the fact that different sections are very different from each other. That these scores here are far away from those scores. And this term is the one that does the job. This one looks at the expected values inside each section, and these expected values are this, this, and that. And asks a question how widely spread are they?
It asks how different from each other are the means inside individual sections? And in this picture it would be a large number because the different section means are quite different. So the story that this formula is telling us is that the overall variability of the quiz scores consists of two factors that can be quantified and added. One factor is how much variability is there inside individual sections? And the other factor is how different are the sections from each other? Both effects contribute to the overall variability of the scores. Let's continue with just one more numerical example. Just to get the hang of doing these kinds of calculations, and apply this formula to do a divide and conquer calculation of the variance of a random variable. Just for variety now we're going to take a continuous random variable. Somebody gives you a PDF of this form, and they ask you for the variance. And you say oh that's too complicated, I don't want to do integrals. Can I divide and conquer? And you say OK, let me do the following trick. Let me define a random variable, y. Which takes the value 1 if x falls in here, and takes the value 2 if x falls in the second interval. And let me try to work in the conditional world where things might be easier, and then add things up to get the overall variance. So I have defined y this particular way. In this example y becomes a function of x. y is completely determined by x. And I'm going to calculate the overall variance by trying to calculate all of the terms that are involved here. So let's start calculating. First observation is that this event has probability 1/3, and this event has probability 2/3. The expected value of x given that we are in this universe is 1/2, because we have a uniform distribution from 0 to 1. Here we have a uniform distribution from 1 to 2, so the conditional expectation of x in that universe is 3/2. How about conditional variances? In the world where y is equal to 1, x has a uniform distribution on a unit interval. What's the variance of x? By now you've probably seen that formula, it's 1 over 12. 1 over 12 is the variance of a uniform distribution over a unit interval. When y is equal to 2 the variance is again 1 over 12. Because in this instance again x has a uniform distribution over an interval of unit length. What's the overall expected value of x? The way you find the overall expected value is to consider the different numerical values of the conditional expectation. And weigh them according to their probabilities. So with probability 1/3 the conditional expectation is 1/2. And with probability 2/3 the conditional expectation is 3 over 2. And this turns out to be 7 over 6. So this is the advance work we need to do, now let's calculate a few things here. What's the variance of the expected value of x given y? Expected value of x given y is a random variable that takes these two values with these probabilities. So to find the variance we take the probability that the conditional expectation equals 1/2, times the square of 1/2 minus the mean of the conditional expectation. What's the mean of the conditional expectation? It's the unconditional expectation. So it's 7 over 6. We just did that calculation. So I'm putting here that number, 7 over 6 squared. And then there's a second term with probability 2/3, the conditional expectation takes this value of 3 over 2, which is this much away from the mean, and we get this contribution. So this way we have calculated the variance of the conditional expectation, this is this term.
What is this? Any guesses what this number is? It's 1 over 12, why? The conditional variance just happened in this example to be 1 over 12 no matter what. So the conditional variance is a deterministic random variable that takes a constant value. So the expected value of this random variable is just 1 over 12. So we got the two pieces that we need, and so we do have the overall variance of the random variable x. So this was just an academic example in order to get the hang of how to manipulate various quantities. Now let's use what we have learned and the tools that we have to do something a little more interesting. OK, so by now you're all in love with probabilities. So over the weekend you're going to bookstores to buy probability books. So you're going to visit a random number bookstores, and at each one of the bookstores you're going to spend a random amount of money. So let n be the number of stores that you are visiting. So n is an integer-- non-negative random variable-- and perhaps you know the distribution of that random variable. Each time that you walk into a store your mind is clear from whatever you did before, and you just buy a random number of books that has nothing to do with how many books you bought earlier on the day. It has nothing to do with how many stores you are visiting, and so on. So each time you enter as a brand new person, and buy a random number of books, and spend a random amount of money. So what I'm saying, more precisely, is that I'm making the following assumptions. That for each store i-- if you end up visiting the i-th store-- the amount of money that you spend is a random variable that has a certain distribution. That distribution is the same for each store, and the xi's from store to store are independent from each other. And furthermore, the xi's are all independent of n. So how much I'm spending at the store-- once I get in-- has nothing to do with how many stores I'm visiting. So this is the setting that we're going to look at. y is the total amount of money that you did spend. It's the sum of how much you spent in the stores, but the index goes up to capital N. And what's the twist here? It's that we're dealing with the sum of independent random variables except that how many random variables we have is not given to us ahead of time, but it is chosen at random. So it's a sum of a random number of random variables. We would like to calculate some quantities that have to do with y, in particular the expected value of y, or the variance of y. How do we go about it? OK, we know something about the linearity of expectations. That expectation of a sum is the sum of the expectations. But we have used that rule only in the case where it's the sum of a fixed number of random variables. So expected value of x plus y plus z is expectation of x, plus expectation of y, plus expectation of z. We know this for a fixed number of random variables. We don't know it, or how it would work for the case of a random number. Well, if we know something about the case for fixed random variables let's transport ourselves to a conditional universe where the number of random variables we're summing is fixed. So let's try to break the problem divide and conquer by conditioning on the different possible values of the number of bookstores that we're visiting. So let's work in the conditional universe, find the conditional expectation in this universe, and then use our law of iterated expectations to see what happens more generally. 
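Before following the random-sum calculation, here is a quick sanity check of the two-interval example that was just finished, done by simulation (this sketch and the use of NumPy are additions, not part of the lecture):

```python
import numpy as np

# The PDF puts probability 1/3 uniformly on (0, 1) and 2/3 uniformly on (1, 2).
rng = np.random.default_rng(0)
n = 1_000_000
which = rng.choice([1, 2], size=n, p=[1/3, 2/3])            # the variable y
x = np.where(which == 1, rng.uniform(0, 1, n), rng.uniform(1, 2, n))

# Law of total variance: var(x) = E[var(x|y)] + var(E[x|y]) = 1/12 + 2/9 = 11/36.
print(x.var())          # ~ 0.306
print(1/12 + 2/9)       # 0.3055...
```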
If I told you that I visited exactly little n stores, where little n now is a number, let's say 10. Then the amount of money you're spending is x1 plus x2 all the way up to x10 given that we visited 10 stores. So what I have done here is that I've replaced the capital N with little n, and I can do this because I'm now in the conditional universe where I know that capital N is little n. Now little n is fixed. We have assumed that n is independent from the xi's. So in this universe of a fixed n this information here doesn't tell me anything new about the values of the x's. If you're conditioning random variables that are independent from the random variables you are interested in, the conditioning has no effect, and so it can be dropped. So in this conditional universe where you visit exactly 10 stores the expected amount of money you're spending is the expectation of the amount of money spent in 10 stores, which is the sum of the expected amount of money in each store. Each one of these is the same number, because the random variables have identical distributions. So it's n times the expected value of money you spent in a typical store. This is almost obvious without doing it formally. If I'm telling you that you're visiting 10 stores, what you expect to spend is 10 times the amount you expect to spend in each store individually. Now let's take this equality here and rewrite it in our abstract notation, in terms of random variables. This is an equality between numbers. Expected value of y given that you visit 10 stores is 10 times this particular number. Let's translate it into random variables. In random variable notation, the expected value of money you're spending given the number of stores-- but without telling you a specific number-- is whatever that number of stores turns out to be times the expected value of x. So this is a random variable that takes this as a numerical value whenever capital N happens to be equal to little n. This is a random variable, which by definition takes this numerical value whenever capital N is equal to little n. So no matter what capital N happens to be what specific value, little n, it takes this is equal to that. Therefore the value of this random variable is going to be equal to that random variable. So as random variables, these two random variables are equal to each other. And now we use the law of iterated expectations. The law of iterated expectations tells us that the overall expected value of y is the expected value of the conditional expectation. We have a formula for the conditional expectation. It's n times expected value of x. Now the expected value of x is a number. Expected value of something random times a number is expected value of the random variable times the number itself. We can take a number outside the expectation. So expected value of x gets pulled out. And that's the conclusion, that overall the expected amount of money you're going to spend is equal to how many stores you expect to visit on the average, and how much money you expect to spend on each one on the average. You might have guessed that this is the answer. If you expect to visit 10 stores, and you expect to spend $100 on each store, then yes, you expect to spend $1,000 today. You're not going to impress your Harvard friends if you tell them that story. It's one of the cases where reasoning, on the average, does give you the plausible answer. 
But you will be able to impress your Harvard friends if you tell them that I can actually calculate the variance of how much I can spend. And we're going to work by applying this formula that we have, and the difficulty is basically sorting out all those terms here, and what they mean. So let's start with this term. So the expected value of y given that you're visiting n stores is n times the expected value of x. That's what we did in the previous slide. So this thing is a random variable, it has a variance. What is the variance? Is the variance of n times the expected value of x. Remember expected value of x is a number. So we're dealing with the variance of n times a number. What happens when you multiply a random variable by a constant? The variance becomes the previous variance times the constant squared. So the variance of this is the variance of n times the square of that constant that we had here. So this tells us the variance of the expected value of y given n. This is the part of the variability of how much money you're spending, which is attributed to the randomness, or the variability, in the number of stores that you are visiting. So the interpretation of the two terms is there's randomness in how much you're going to spend, and this is attributed to the randomness in the number of stores together with the randomness inside individual stores. Well, after I tell you how many stores you're visiting. So now let's deal with this term-- the variance inside individual stores. Let's take it slow. If I tell you that you're visiting exactly little n stores, then y is how much money you spent in those little n stores. You're dealing with the sum of little n random variables. What is the variance of the sum of little n random variables? It's the sum of their variances. So each store contributes a variance of x, and you're adding over little n stores. That's the variance of money spent if I tell you the number of stores. Now let's translate this into random variable notation. This is a random variable that takes this numerical value whenever capital N is equal to little n. This is a random variable that takes this numerical value whenever capital N is equal to little n. This is equal to that. Therefore, these two are always equal, no matter what happens to y. So we have an equality here between random variables. Now we take expectations of both. Expected value of the variance is expected value of this. OK it may look confusing to think of the expected value of the variance here, but the variance of x is a number, not a random variable. You think of it as a constant. So its expected value of n times a constant gives us the expected value of n times the constant itself. So now we got the second term as well, and now we put everything together, this plus that to get an expression for the overall variance of y. Which again, as I said before, the overall variability in y has to do with the variability of how much you spent inside the typical store. And the variability in the number of stores that you are visiting. OK, so this is it for today. We'll change subjects quite radically from next time.
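The two formulas for the random sum -- expected value equal to E[N] times E[X], and variance equal to E[N] var(X) plus (E[X]) squared times var(N) -- can be verified by simulation. The specific choices below (N Poisson with mean 5, each X_i exponential with mean 20 dollars) are made-up parameters for illustration only, and NumPy is assumed:

```python
import numpy as np

# Y = X_1 + ... + X_N, with N independent of the X_i.
rng = np.random.default_rng(0)
trials = 200_000
n_stores = rng.poisson(5, size=trials)                        # N per trial
y = np.array([rng.exponential(20, size=k).sum() for k in n_stores])

mean_n, var_n = 5.0, 5.0            # Poisson(5): mean and variance both 5
mean_x, var_x = 20.0, 400.0         # exponential with mean 20: variance 400

print(y.mean(), mean_n * mean_x)                        # both ~ 100
print(y.var(), mean_n * var_x + mean_x**2 * var_n)      # both ~ 4000
```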
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
17_Markov_Chains_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right. So today, we're going to start by taking stock of what we discussed last time, review the definition of Markov chains. And then most of the lecture, we're going to concentrate on their steady-state behavior. Meaning, we're going to look at what does a Markov chain do if it has run for a long time. What can we say about the probabilities of the different states? So what I would like to repeat is a statement I made last time that Markov chains is a very, very useful class of models. Pretty much anything in the real world can be approximately modeled by a Markov chain provided that you set your states in the proper way. So we're going to see some examples. You're going to see more examples in the problems you're going to do in homework and recitation. On the other hand, we're not going to go too deep into examples. Rather, we're going to develop the general methodology. OK. All right. Markov models can be pretty general. They can run in continuous or discrete time. They can have continuous or discrete state spaces. In this class, we're going to stick just to the case where the state space is discrete and time is discrete because this is the simplest case. And also, it's the one where you build your intuition before going to more general cases perhaps in other classes. So the state is discrete and finite. There's a finite number of states. At any point in time, the process is sitting on one of those states. Time is discrete, so at each unit of time, somebody whistles and then the state jumps. And when it jumps, it can either land in the same place, or it can land somewhere else. And the evolution of the process is described by transition probabilities. Pij is the probability that the next state is j given that the current state is i. And the most important property that the Markov chain has, the definition of a Markov chain or Markov process, is that this probability, Pij, is the same every time that you land at state i -- no matter how you got there and also no matter what time it is. So the model we have is time homogeneous, which basically means that those transition probabilities are the same at every time. So the model is time invariant in that sense. So we're interested in what the chain or the process is going to do in the longer run. So we're interested, let's say, in the probability that starting at a certain state, n times steps later, we find ourselves at some particular state j. Fortunately, we can calculate those probabilities recursively. Of course, at the first time 1, the probability of being 1 time later at state j given that we are right now at state i, by definition, this is just the transition probabilities. So by knowing these, we can start a recursion that tells us the transition probabilities for more than n steps. This recursion, it's a formula. It's always true. You can copy it or memorize it. But there is a big idea behind that formula that you should keep in mind. And basically, the divide and conquer idea. It's an application of the total probability law. So let's fix i. The probability that you find yourself at state j, you break it up into the probabilities of the different ways that you can get to state j. What are those different ways? 
The different ways are the different states k at which you might find yourself the previous time. So with some probability, with this probability, you find yourself at state k the previous time. And then with probability Pkj, you make a transition to state j. So this is a possible scenario that takes you to state j after n transitions. And by summing over all the k's, then we have considered all the possible scenarios. Now, before we move to the more serious stuff, let's do a little bit of warm up to get a handle on how we use transition probabilities to calculate more general probabilities, then talk about some structural properties of Markov chains, and then eventually get to the main business of today, which is a steady-state behavior. So somebody gives you this chain, and our convention is that those arcs that are not shown here corresponds to 0 probabilities. And each one of the arcs that's shown has a non-zero probability, and somebody gives it to us. Suppose that the chain starts at state 1. We want to calculate the probability that it follows this particular path. That is, it goes to 2, then to 6, then to 7. How do we calculate the probability of a particular trajectory? Well, this is the probability-- so it's the probability of the trajectory from 1 that you go to 2, then to 6, then to 7. So the probability of this trajectory is we use the multiplication rule. The probability of several things happening is the probability that the first thing happens, which is a transition from 1 to 2. And then given that we are at state 2, we multiply with a conditional probability that the next event happens. That is, that X2 is equal to 6 given that right now, we are at state 1. And that conditional probability is just P26. And notice that this conditional probability applies no matter how we got to state 2. This is the Markov assumption. So we don't care about the fact that we came in in a particular way. Given that we came in here, this probability P26, that the next transition takes us to 6. And then given that all that stuff happened, so given that right now, we are at state 6, we need to multiply with a conditional probability that the next transition takes us to state 7. And this is just the P67. So to find the probability of following a specific trajectory, you just multiply the transition probabilities along the particular trajectory. Now, if you want to calculate something else, such as for example, the probability that 4 time steps later, I find myself at state 7 given that they started, let's say, at this state. How do you calculate this probability? One way is to use the recursion for the Rijs that we know that it is always valid. But for short and simple examples, and with a small time horizon, perhaps you can do this in a brute force way. What would be the brute force way? This is the event that 4 time steps later, I find myself at state 7. This event can happen in various ways. So we can take stock of all the different ways, and write down their probabilities. So starting from 2. One possibility is to follow this trajectory, 1 transition, 2 transitions, 3 transitions, 4 transitions. And that takes me to state 7. What's the probability of this trajectory? It's P26 times P67 times P76 and then times P67. So this is a probability of a particular trajectory that takes you to state 7 after 4 time steps. But there's other trajectories as well. What could be it? I might start from state 2, go to state 6, stay at state 6, stay at state 6 once more. And then from state 6, go to state 7. 
And so there must be one more. What's the other one? I guess I could go 1, 2, 6, 7. OK. That's the other trajectory. Plus P21 times P12 times P26 and times P67. So the transition probability, the overall probability of finding ourselves at state 7, is broken down as the sum of the probabilities of all the different ways that I can get to state 7 in exactly 4 steps. So we could always do that without knowing much about Markov chains or the general formula for the Rij's that we had. What's the trouble with this procedure? The trouble with this procedure is that the number of possible trajectories becomes quite large if this index is a little bigger. If this 4 was 100, and you ask how many different trajectories of length 100 are there to take me from here to there, that number of trajectories would be huge. It grows exponentially with the time horizon. And this kind of calculation would be impossible. The basic equation, the recursion that have for the Rij's is basically a clever way of organizing this computation so that the amount of computation that you do is not exponential in the time horizon. Rather, it's sort of linear with the time horizon. For each time step you need in the time horizon, you just keep repeating the same iteration over and over. OK. Now, the other thing that we discussed last time, briefly, was a classification of the different states of the Markov chain in two different types. A Markov chain, in general, has states that are recurrent, which means that from a recurrent state, I can go somewhere else. But from that somewhere else, there's always some way of coming back. So if you have a chain of this form, no matter where you go, no matter where you start, you can always come back where you started. States of this kind are called recurrent. On the other hand, if you have a few states all this kind, a transition of this type, then these states are transient in the sense that from those states, it's possible to go somewhere else from which place there's no way to come back where you started. The general structure of a Markov chain is basically a collection of transient states. You're certain that you are going to leave the transient states eventually. And after you leave the transient states, you enter into a class of states in which you are trapped. You are trapped if you get inside here. You are trapped if you get inside there. This is a recurrent class of states. From any state, you can get to any other state within this class. That's another recurrent class. From any state inside here, you can get anywhere else inside that class. But these 2 classes, you do not communicate. If you start here, there's no way to get there. If you have 2 recurrent classes, then it's clear that the initial conditions of your Markov chain matter in the long run. If you start here, you will be stuck inside here for the long run and similarly about here. So the initial conditions do make a difference. On the other hand, if this class was not here and you only had that class, what would happen to the chain? Let's say you start here. You move around. At some point, you make that transition. You get stuck in here. And inside here, you keep circulating, because of the randomness, you keep visiting all states over and over. And hopefully or possibly, in the long run, it doesn't matter exactly what time it is or where you started, but the probability of being at that particular state is the same no matter what the initial condition was. 
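Going back to the recursion for the Rij's for a moment: in code it is nothing more than repeated multiplication by the transition matrix, so the work grows linearly with the time horizon rather than exponentially. A minimal sketch follows; the 3-state transition matrix is a made-up placeholder, since the lecture's numbers live on the slide rather than in the transcript.

```python
import numpy as np

def n_step_probs(P, n):
    """R(n), where R(1) = P and R(n) = R(n-1) @ P (the recursion from lecture)."""
    R = P.copy()
    for _ in range(n - 1):
        R = R @ P
    return R

# Hypothetical 3-state chain; each row must sum to 1.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

R4 = n_step_probs(P, 4)
print(R4[0, 2])   # probability of being at state 2, four steps after starting at state 0
```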
So with a single recurrent class, we hope that the initial conditions do not matter. With 2 or more recurrent classes, initial conditions will definitely matter. So how many recurrent classes we have is something that has to do with the long-term behavior of the chain and the extent to which initial conditions matter. Another way that initial conditions may matter is if a chain has a periodic structure. There are many ways of defining periodicity. The one that I find sort of the most intuitive and with the least amount of mathematical symbols is the following. The state space of a chain is said to be periodic if you can lump the states into a number, d, of clusters or groups. And the transition diagram has the property that from a cluster, you always make a transition into the next cluster. So here d is equal to 2. We have two subsets of the state space. Whenever we're here, next time we'll be there. Whenever we're here, next time we will be there. So this chain has a periodic structure. There may still be some randomness. When I jump from here to here, the state to which I jump may be random, but I'm sure that I'm going to be inside here. And then next time, I will be sure that I'm inside here. This would be the structure of a diagram in which we have a period of 3. If you start in this lump, you know that the next time, you will be in a state inside here. Next time, you'll be in a state inside here, and so on. So these chains certainly have a periodic structure. And that periodicity gets maintained. If I start, let's say, at this lump, at even times, I'm sure I'm here. At odd times, I'm sure I am there. So the exact time does matter in determining the probabilities of the different states. And in particular, the probability of being at a particular state cannot converge to a steady value. The probability of being at a state inside here is going to be 0 for odd times. In general, it's going to be some positive number for even times. So it goes 0, positive, 0, positive, 0, positive. It doesn't settle to anything. So when we have periodicity, we do not expect the state probabilities to converge to something, but rather, we expect them to oscillate. Now, how can we tell whether a Markov chain is periodic or not? There are systematic ways of doing it, but usually with the types of examples we see in this class, we just eyeball the chain, and we tell whether it's periodic or not. So this chain down here, is it periodic or not? How many people think it's periodic? No one. One. How many people think it's not periodic? OK. Not periodic? Let's see. Let me do some drawing here. OK. Is it periodic? It is. From a red state, you can only get to a white state. And from a white state, you can only get to a red state. So this chain, even though it's not apparent from the picture, actually has this structure. We can group the states into red states and white states. And from reds, we always go to a white, and from a white, we always go to a red. So this tells you that sometimes eyeballing is not so easy. If you have lots and lots of states, you might have some trouble doing this exercise. On the other hand, something very useful to know: sometimes it's extremely easy to tell that the chain is not periodic. What's that case? Suppose that your chain has a self-transition somewhere. Then automatically, you know that your chain is not periodic. So remember, the definition of periodicity requires that if you are in a certain group of states, next time, you will be in a different group.
But if you have self-transitions, that property is not true. If you have a possible self-transition, it's possible that you stay inside your own group for the next time step. So whenever you have a self-transition, this implies that the chain is not periodic. And usually that's the simplest and easy way that we can tell most of the time that the chain is not periodic. So now, we come to the big topic of today, the central topic, which is the question about what does the chain do in the long run. The question we are asking and which we motivated last time by looking at an example. It's something that did happen in our example of last time. So we're asking whether this happens for every Markov chain. We're asking the question whether the probability of being at state j at some time n settles to a steady-state value. Let's call it pi sub j. That these were asking whether this quantity has a limit as n goes to infinity, so that we can talk about the steady-state probability of state j. And furthermore, we asked whether the steady-state probability of that state does not depend on the initial state. In other words, after the chain runs for a long, long time, it doesn't matter exactly what time it is, and it doesn't matter where the chain started from. You can tell me the probability that the state is a particular j is approximately the steady-state probability pi sub j. It doesn't matter exactly what time it is as long as you tell me that a lot of time has elapsed so that n is a big number. So this is the question. We have seen examples, and we understand that this is not going to be the case always. For example, as I just discussed, if we have 2 recurrent classes, where we start does matter. The probability pi(j) of being in that state j is going to be 0 if we start here, but it would be something positive if we were to start in that lump. So the initial state does matter if we have multiple recurrent classes. But if we have only a single class of recurrent states from each one of which you can get to any other one, then we don't have that problem. Then we expect initial conditions to be forgotten. So that's one condition that we need. And then the other condition that we need is that the chain is not periodic. If the chain is periodic, then these Rij's do not converge. They keep oscillating. If we do not have periodicity, then there is hope that we will get the convergence that we need. It turns out this is the big theory of Markov chains-- the steady-state convergence theorem. It turns out that yes, the rijs do converge to a steady-state limit, which we call a steady-state probability as long as these two conditions are satisfied. We're not going to prove this theorem. If you're really interested, the end of chapter exercises basically walk you through a proof of this result, but it's probably a little too much for doing it in this class. What is the intuitive idea behind this theorem? Let's see. Let's think intuitively as to why the initial state doesn't matter. Think of two copies of the chain that starts at different initial states, and the state moves randomly. As the state moves randomly starting from the two initial states a random trajectory. as long as you have a single recurrent class at some point, and you don't have periodicity at some point, those states, those two trajectories, are going to collide. Just because there's enough randomness there. Even though we started from different places, the state is going to be the same. 
After the state becomes the same, then the future of these trajectories, probabilistically, is the same because they both started at the same state. So this means that the initial conditions stopped having any influence. That's sort of the high-level idea of why the initial state gets forgotten. Even if you started at different initial states, at some time, you may find yourself to be in the same state as the other trajectory. And once that happens, your initial conditions cannot have any effect into the future. All right. So let's see how we might calculate those steady-state probabilities. The way we calculate the steady-state probabilities is by taking this recursion, which is always true for the end-step transition probabilities, and take the limit of both sides. The limit of this side is the steady-state probability of state j, which is pi sub j. The limit of this side, we put the limit inside the summation. Now, as n goes to infinity, n - also goes to infinity. So this Rik is going to be the steady-state probability of state k starting from state i. Now where we started doesn't matter. So this is just the steady-state probability of state k. So this term converges to that one, and this gives us an equation that's satisfied by the steady-state probabilities. Actually, it's not one equation. We get one equation for each one of the j's. So if we have 10 possible states, we're going to get the system of 10 linear equations. In the unknowns, pi(1) up to pi(10). OK. 10 unknowns, 10 equations. You might think that we are in business. But actually, this system of equations is singular. 0 is a possible solution of this system. If you plug pi equal to zero everywhere, the equations are satisfied. It does not have a unique solution, so maybe we need one more condition to get the uniquely solvable system of linear equations. It turns out that this system of equations has a unique solution. If you impose an additional condition, which is pretty natural, the pi(j)'s are the probabilities of the different states, so they should add to 1. So you want this one equation to the mix. And once you do that, then this system of equations is going to have a unique solution. And so we can find the steady-state probabilities of the Markov chain by just solving these linear equations, which is numerically straightforward. Now, these equations are quite important. I mean, they're the central point in the Markov chain. They have a name. They're called the balance equations. And it's worth interpreting them in a somewhat different way. So intuitively, one can sometimes think of probabilities as frequencies. For example, if I toss an unbiased coin, probability 1/2 of heads, you could also say that if I keep flipping that coin, in the long run, 1/2 of the time, I'm going to see heads. Similarly, let's try an interpretation of this pi(j), the steady-state probability, the long-term probability of finding myself at state j. Let's try to interpret it as the frequency with which I find myself at state j if I run a very, very long trajectory over that Markov chain. So the trajectory moves around, visits states. It visits the different states with different frequencies. And let's think of the probability that you are at a certain state as being sort of the same as the frequency of visiting that state. This turns out to be a correct statement. If you were more rigorous, you would have to prove it. But it's an interpretation which is valid and which gives us a lot of intuition about what these equation is saying. 
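Returning for a moment to the numerical side: solving the balance equations together with the normalization condition is a small linear-algebra exercise. Here is a sketch of one standard way to do it, rewriting pi P = pi as (P transpose minus I) pi = 0 and stacking the row of ones for the normalization. The function name and the least-squares solve are one reasonable implementation choice, not something prescribed in the lecture.

```python
import numpy as np

def steady_state(P):
    """Solve the balance equations pi = pi P together with sum(pi) = 1."""
    m = P.shape[0]
    A = np.vstack([P.T - np.eye(m),    # pi P = pi, written as (P^T - I) pi = 0
                   np.ones((1, m))])   # normalization: the pi's add up to 1
    b = np.zeros(m + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical chain with a single recurrent class and self-transitions (so not periodic).
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 0.5, 0.5]])
print(steady_state(P))   # the rows of P^n converge to this same vector
```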
So let's think as follows. Let's focus on a particular state j, and think of transitions into the state j versus transitions out of the state j, or transitions into j versus transitions starting from j. So transition starting from that includes a self-transition. Ok. So how often do we get a transition, if we interpret the pi(j)'s as frequencies, how often do we get a transition into j? Here's how we think about it. A fraction pi(1) of the time, we're going to be at state 1. Whenever we are at state 1, there's going to be a probability, P1j, that we make a transition of this kind. So out of the times that we're at state 1, there's a frequency, P1j with which the next transition is into j. So out of the overall number of transitions that happen at the trajectory, what fraction of those transitions is exactly of that kind? That fraction of transitions is the fraction of time that you find yourself at 1 times the fraction with which out of one you happen to visit next state j. So we interpreted this number as the frequency of transitions of this kind. At any given time, our chain can do transitions of different kinds, transitions of the general form from some k, I go to some l. So we try to do some accounting. How often does a transition of each particular kind happen? And this is the frequency with which transitions of that particular kind happens. Now, what's the total frequency of transitions into state j? Transitions into state j can happen by having a transition from 1 to j, from 2 to j, or from state m to j. So to find the total frequency with which we would observe transitions into j is going to be this particular sum. Now, you are at state j if and only if the last transition was into state j. So the frequency with which you are at j is the frequency with which transitions into j happen. So this equation expresses exactly that statement. The probability of being at state j is the sum of the probabilities that the last transition was into state j. Or in terms of frequencies, the frequency with which you find yourself at state j is the sum of the frequencies of all the possible transition types that take you inside state j. So that's a useful intuition to have, and we're going to see an example a little later that it gives us short cuts into analyzing Markov chains. But before we move, let's revisit the example from last time. And let us write down the balance equations for this example. So the steady-state probability that I find myself at state 1 is the probability that the previous time I was at state 1 and I made a self-transition-- So the probability that I was here last time and I made a transition of this kind, plus the probability that the last time I was here and I made a transition of that kind. So plus pi(2) times 0.2. And similarly, for the other states, the steady-state probably that I find myself at state 2 is the probability that last time I was at state 1 and I made a transition into state 2, plus the probability that the last time I was at state 2 and I made the transition into state 1. Now, these are two equations and two unknowns, pi(1) and pi(2). But you notice that both of these equations tell you the same thing. They tell you that 0.5pi(1) equals 0.2pi(2). Either of these equations tell you exactly this if you move terms around. So these two equations are not really two equations. It's just one equation. They are linearly dependent equations, and in order to solve the problem, we need the additional condition that pi(1) + pi(2) is equal to 1. 
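For concreteness, here is that little two-by-two system done numerically, using the 0.5 and 0.2 transition probabilities from this example. The code itself is an editorial sketch, not part of the lecture.

```python
import numpy as np

# One of the (dependent) balance equations, 0.5*pi1 = 0.2*pi2, plus pi1 + pi2 = 1.
A = np.array([[0.5, -0.2],
              [1.0,  1.0]])
b = np.array([0.0, 1.0])
pi1, pi2 = np.linalg.solve(A, b)
print(pi1, pi2)   # 0.2857... and 0.7142..., that is, 2/7 and 5/7
```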
Now, we have our system of two equations, which you can solve. And once you solve it, you find that pi(1) is 2/7 and pi(2) is 5/7. So these are the steady state probabilities of the two different states. If we start this chain, at some state, let's say state 1, and we let it run for a long, long time, the chain settles into steady state. What does that mean? It does not mean that the state itself enters steady state. The state will keep jumping around forever and ever. It will keep visiting both states once in a while. So the jumping never ceases. The thing that gets into steady state is the probability of finding yourself at state 1. So the probability that you find yourself at state 1 at time one trillion is approximately 2/7. The probability you find yourself at state 1 at time two trillions is again, approximately 2/7. So the probability of being in that state settles into a steady value. That's what the steady-state convergence means. It's convergence of probabilities, not convergence of the process itself. And again, the two main things that are happening in this example, and more generally, when we have a single class and no periodicity, is that the initial state does not matter. There's enough randomness here so that no matter where you start, the randomness kind of washes out any memory of where you started. And also in this example, clearly, we do not have periodicity because we have self arcs. And this, in particular, implies that the exact time does not matter. So now, we're going to spend the rest of our time by looking into a special class of chains that's a little easier to deal with, but still, it's an important class. So what's the moral from here? This was a simple example with two states, and we could find the steady-state probabilities by solving a simple system of two-by-two equations. If you have a chain with 100 states, it's no problem for a computer to solve a system of 100-by-100 equations. But you can certainly not do it by hand, and usually, you cannot get any closed-form formulas, so you do not necessarily get a lot of insight. So one looks for special structures or models that maybe give you a little more insight or maybe lead you to closed-form formulas. And an interesting subclass of Markov chains in which all of these nice things do happen, is the class of birth/death processes. So what's a birth/death process? It's a Markov chain who's diagram looks basically like this. So the states of the Markov chain start from 0 and go up to some finite integer m. What's special about this chain is that if you are at a certain state, next time you can either go up by 1, you can go down by 1, or you can stay in place. So it's like keeping track of some population at any given time. One person gets born, or one person dies, or nothing happens. Again, we're not accounting for twins here. So we're given this structure, and we are given the transition probabilities, the probabilities associated with transitions of the different types. So we use P's for the upward transitions, Q's for the downward transitions. An example of a chain of this kind was the supermarket counter model that we discussed last time. That is, a customer arrives, so this increments the state by 1. Or a customer finishes service, in which case, the state gets decremented by 1, or nothing happens in which you stay in place, and so on. In the supermarket model, these P's inside here were all taken to be equal because we assume that the arrival rate was sort of constant at each time slot. 
But you can generalize a little bit by assuming that these transition probabilities P1 here, P2 there, and so on may be different from state to state. So in general, from state i, there's going to be a transition probability Pi that the next transition is upwards. And there's going to be a probability Qi that the next transition is downwards. And so from that state, the probability that the next transition is downwards is going to be Q_(i+1). So this is the structure of our chain. As I said, it's a crude model of what happens at the supermarket counter but it's also a good model for lots of types of service systems. Again, you have a server somewhere that has a buffer. Jobs come into the buffer. So the buffer builds up. The server processes jobs, so the buffer keeps going down. And the state of the chain would be the number of jobs that you have inside your buffer. Or you could be thinking about active phone calls out of a certain city. Each time that the phone call is placed, the number of active phone calls goes up by 1. Each time that the phone call stops happening, is terminated, then the count goes down by 1. So it's for processes of this kind that a model with this structure is going to show up. And they do show up in many, many models. Or you can think about the number of people in a certain population that have a disease. So 1 more person gets the flu, the count goes up. 1 more person gets healed, the count goes down. And these probabilities in such an epidemic model would certainly depend on the current state. If lots of people already have the flu, the probability that another person catches it would be pretty high. Whereas, if no one has the flu, then the probability that you get a transition where someone catches the flu, that probability would be pretty small. So the transition rates, the incidence of new people who have the disease definitely depends on how many people already have the disease. And that motivates cases where those P's, the upward transition probabilities, depend on the state of the chain. So how do we study this chain? You can sit down and write the system of n linear equations in the pi's. And this way, find the steady-state probabilities of this chain. But this is a little harder. It's more work than one actually needs to do. There's a very clever shortcut that applies to birth/death processes. And it's based on the frequency interpretation that we discussed a little while ago. Let's put a line somewhere in the middle of this chain, and focus on the relation between this part and that part in more detail. So think of the chain continuing in this direction, that direction. But let's just focus on 2 adjacent states, and look at this particular cut. What is the chain going to do? Let's say it starts here. It's going to move around. At some point, it makes a transition to the other side. And that's a transition from i to i+1. It stays on the other side for some time. It gets here, and eventually, it's going to make a transition to this side. Then it keeps moving and so on. Now, there's a certain balance that must be obeyed here. The number of upward transitions through this line cannot be very different from the number of downward transitions. Because we cross this way, then next time, we'll cross that way. Then next time, we'll cross this way. We'll cross that way. So the frequency with which transitions of this kind occur has to be the same as the long-term frequency that transitions of that kind occur. You cannot go up 100 times and go down only 50 times. 
If you have gone up 100 times, it means that you have gone down 99, or 100, or 101, but nothing much more different than that. So the frequency with which transitions of this kind get observed. That is, out of a large number of transitions, what fraction of transitions are of these kind? That fraction has to be the same as the fraction of transitions that happened to be of that kind. What are these fractions? We discussed that before. The fraction of times at which transitions of this kind are observed is the fraction of time that we happen to be at that state. And out of the times that we are in that state, the fraction of transitions that happen to be upward transitions. So this is the frequency with which transitions of this kind are observed. And with the same argument, this is the frequency with which transitions of that kind are observed. Since these two frequencies are the same, these two numbers must be the same, and we get an equation that relates the Pi to P_(i+1). This has a nice form because it gives us a recursion. If we knew pi(i), we could then immediately calculate pi(i+1). So it's a system of equations that's very easy to solve almost. But how do we get started? If I knew pi(0), I could find by pi(1) and then use this recursion to find pi(2), pi(3), and so on. But we don't know pi(0). It's one more unknown. It's an unknown, and we need to actually use the extra normalization condition that the sum of the pi's is 1. And after we use that normalization condition, then we can find all of the pi's. So you basically fix pi(0) as a symbol, solve this equation symbolically, and everything gets expressed in terms of pi(0). And then use that normalization condition to find pi(0), and you're done. Let's illustrate the details of this procedure on a particular special case. So in our special case, we're going to simplify things now by assuming that all those upward P's are the same, and all of those downward Q's are the same. So at each point in time, if you're sitting somewhere in the middle, you have probability P of moving up and probability Q of moving down. This rho, the ratio of P/Q is frequency of going up versus frequency of going down. If it's a service system, you can think of it as a measure of how loaded the system is. If P is equal to Q, it's means that if you're at this state, you're equally likely to move left or right, so the system is kind of balanced. The state doesn't have a tendency to move in this direction or in that direction. If rho is bigger than 1 so that P is bigger than Q, it means that whenever I'm at some state in the middle, I'm more likely to move right rather than move left, which means that my state, of course it's random, but it has a tendency to move in that direction. And if you think of this as a number of customers in queue, it means your system has the tendency to become loaded and to build up a queue. So rho being bigger than 1 corresponds to a heavy load, where queues build up. Rho less than 1 corresponds to the system where queues have the tendency to drain down. Now, let's write down the equations. We have this recursion P_(i+1) is Pi times Pi over Qi. In our case here, the P's and the Q's do not depend on the particular index, so we get this relation. And this P over Q is just the load factor rho. Once you look at this equation, clearly you realize that by pi(1) is rho times pi(0). pi(2) is going to be -- So we'll do it in detail. So pi(1) is pi(0) times rho. pi(2) is pi(1) times rho, which is pi(0) times rho-squared. 
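(A short editorial aside before finishing the special case: the cut equation above is already a complete numerical recipe for a general birth/death chain with state-dependent probabilities. A sketch follows, with made-up p's and q's; the line inside the loop is the balance equation across the cut between i and i+1, and the final division is the normalization step.)

```python
import numpy as np

def birth_death_steady_state(p, q):
    """p[i]: probability of moving up from state i     (i = 0, ..., m-1).
       q[i]: probability of moving down from state i+1 (i = 0, ..., m-1).
       Cut balance: pi[i] * p[i] = pi[i+1] * q[i]."""
    m = len(p)
    pi = np.ones(m + 1)                     # start with pi[0] = 1 as a placeholder
    for i in range(m):
        pi[i + 1] = pi[i] * p[i] / q[i]
    return pi / pi.sum()                    # normalize so the probabilities add up to 1

p = [0.4, 0.3, 0.3, 0.2]   # hypothetical upward probabilities from states 0..3
q = [0.5, 0.5, 0.6, 0.6]   # hypothetical downward probabilities from states 1..4
print(birth_death_steady_state(p, q))
```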
And then you continue doing this calculation. And you find that you can express every pi(i) in terms of pi(0), and you get this factor of rho^i. And then you use the last equation that we have -- that the sum of the probabilities has to be equal to 1. And that equation is going to tell us that the sum over all i's from 0 to m of pi(0) rho to the i is equal to 1. And therefore, pi(0) is 1 over (the sum of rho to the i for i going from 0 to m). So now we have found pi(0), and by plugging it into this expression, we have the steady-state probabilities of all of the different states. Let's look at some special cases of this. Suppose that rho is equal to 1. If rho is equal to 1, then pi(i) is equal to pi(0). It means that all the steady-state probabilities are equal. It means that every state is equally likely in the long run. So this is an example. It's called a symmetric random walk. It's a very popular model for modeling people who are drunk. So you start at a state at any point in time. Either you stay in place, or you have an equal probability of going left or going right. There's no bias in either direction. You might think that in such a process, you will tend to kind of get stuck near one end or the other end. Well, it's not really clear what to expect. It turns out that in such a model, in the long run, the drunk person is equally likely to be at any one of those states. The steady-state probability is the same for all i's if rho is equal to 1. And so if you show up at a random time, and you ask where is my state, you will be told it's equally likely to be at any one of those places. So let's make that note. Rho equal to 1 implies that all the pi(i)'s are 1/(M+1) -- M+1 because that's how many states we have in our model. Now, let's look at a different case. Suppose that M is a huge number. So essentially, our supermarket has a very large space, a lot of space to store their customers. But suppose that the system is on the stable side. P is less than Q, which means that there's a tendency for customers to be served faster than they arrive. The drift in this chain tends to be in that direction. So when rho is less than 1, which is this case, and when M goes to infinity, this infinite sum is the sum of a geometric series. And you recognize it (hopefully) -- this series sums to 1/(1-rho). And because it's in the denominator, pi(0) ends up being 1-rho. So by taking the limit as M goes to infinity, in this case, and when rho is less than 1 so that this series is convergent, we get this formula. So we get a closed-form formula for the pi(i)'s. In particular, pi(i) is (1 - rho)(rho to the i). So these pi(i)'s are essentially a probability distribution. They tell us, if we show up at time 1 billion and we ask, where is my state? You will be told that the state is 0, that is, your system is empty, with probability 1 minus rho. Or there's one customer in the system, and that happens with probability (1 minus rho) times rho. And it keeps going down this way. And it's pretty much a geometric distribution except that it has been shifted so that it starts at 0, whereas the usual geometric distribution starts at 1. So this is a mini introduction into queuing theory. This is the first and simplest model that one encounters when you start studying queuing theory. This is clearly a model of a queueing phenomenon such as the supermarket counter, with the P's corresponding to arrivals, the Q's corresponding to departures.
And this particular queuing system when M is very, very large and rho is less than 1, has a very simple and nice solution in closed form. And that's why it's very much liked. And let me just take two seconds to draw one last picture. So this is the probability of the different i's. It gives you a PMF. This PMF has an expected value. And the expectation, the expected number of customers in the system, is given by this formula. And this formula, which is interesting to anyone who tries to analyze a system of this kind, tells you the following. That as long as a rho is less than 1, then the expected number of customers in the system is finite. But if rho becomes very close to 1 -- So if your load factor is something like .99, you expect to have a large number of customers in the system at any given time. OK. All right. Have a good weekend. We'll continue next time.
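As a final editorial check on the formulas in this last part (not from the lecture), the truncated geometric steady state and the rho over (1 minus rho) expected queue length are easy to verify numerically; rho = 0.8 and M = 200 below are arbitrary illustrative choices.

```python
import numpy as np

rho, M = 0.8, 200                     # illustrative load factor < 1 and a large buffer

pi = rho ** np.arange(M + 1)
pi /= pi.sum()                        # pi[i] proportional to rho^i, then normalized

print(pi[0], 1 - rho)                          # pi(0) is essentially 1 - rho
print(np.arange(M + 1) @ pi, rho / (1 - rho))  # expected number in system, about 4 here
```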
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Difference_of_Two_Independent_Exponential_Random_Variables.txt
In this problem, Romeo and Juliet are to meet up for a date, where Romeo arrives at time x and Juliet at time y, where x and y are independent exponential random variables with parameter lambda. And we're interested in the difference between the two times of arrival; we'll call it z, written as x minus y. And we'd like to know what the distribution of z is, expressed by the probability density function, f of z. Now, we'll do so by using the so-called convolution formula that we learned in lecture. Recall that if we have a random variable w that is the sum of two independent random variables, x plus y, then we can write the probability density function, f of w, evaluated at little w, as the following integration: from negative infinity to infinity of f of x, little x, times f of y, w minus x, integrated over x. And to use this expression to calculate f of z, we need to do a bit more work. Notice w is expressed as a sum of two random variables, whereas z is expressed as the subtraction of y from x. But that's fairly easy to fix. Now, we can write z, instead of as a subtraction, as the addition of x plus negative y. So in the expression of the convolution formula, we'll simply replace y by negative y, as we will show on the next slide. Using the convolution formula, we can write f of z, little z, as the integration of f of x, little x, times f of negative y, z minus x, dx. Now, we will use the fact that f of negative y, evaluated at z minus x, is simply equal to f of y evaluated at x minus z. To see why this is true, let's consider, let's say, a discrete random variable y. Now, the probability that negative y is equal to negative 1 is simply the same as the probability that y is equal to 1. And the same is true for probability density functions. With this fact in mind, we can further write the equality as the integration of f of x, little x, times f of y, x minus z, dx. We're now ready to compute. We'll first look at the case where z is less than 0. On the right, I'm writing out the distribution of an exponential random variable with parameter lambda. In this case, using the integration above, we can write it as the integral from 0 to infinity of lambda e to the negative lambda x times lambda e to the negative lambda (x minus z) dx. Now, the reason we chose the region of integration from 0 to positive infinity is that anywhere else, as we can verify from the expression for f of x right here, the product of f of x times f of y is 0. Follow this through. We'll pull out the constant, lambda e to the lambda z, times the integral from 0 to infinity of lambda e to the negative 2 lambda x dx. This will give us lambda e to the lambda z times negative 1/2 e to the negative 2 lambda x, evaluated at infinity, minus this expression evaluated at 0. And this will give us lambda over 2 e to the lambda z. So now, we have an expression for f of z evaluated at little z when little z is less than 0. Now that we have the distribution of f of z when z is less than 0, we'd like to know what happens when z is greater than or equal to 0. In principle, we can go through the same procedure of integration and calculate that value. But it turns out there's something much simpler. z is the difference between x and y, and negative z is simply the difference between y and x. Now, x and y are independent and identically distributed. And therefore, x minus y has the same distribution as y minus x. So that tells us z and negative z have the same distribution. What that means is that the distribution of z must be symmetric around 0.
In other words, if we know that the shape of f of z below 0 is something like that, then the shape of it above 0 must be symmetric. So here's the origin. For example, if we were to evaluate f of z at 1, well, this will be equal to the value of f of z at negative 1. So this will equal to f of z at negative 1. Well, with this information in mind, we know that in general, f of z little z is equal to f of z negative little z. So what this allows us to do is to get all the information for z less than 0 and generalize it to the case where z is greater or equal to 0. In particular, by the symmetry here, we can write, for the case z greater or equal to 0, as lambda over 2 e to the negative lambda z. So the negative sign comes from the fact that the distribution of f of z is symmetric around 0. And simply, we can go back to the expression here to get the value. And all in all, this implies that f of z little z is equal to lambda over 2 e to the negative lambda absolute value of z. This completes our problem.
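A quick simulation (an editorial addition, with lambda = 1.5 chosen arbitrarily) confirms that the density of x minus y really is the two-sided exponential, lambda over 2 times e to the negative lambda absolute value of z:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1.5                                     # illustrative rate parameter

x = rng.exponential(1 / lam, 1_000_000)       # Romeo's arrival time
y = rng.exponential(1 / lam, 1_000_000)       # Juliet's arrival time
z = x - y

points = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
h = 0.05                                      # bin width for a crude density estimate
empirical = np.array([(np.abs(z - z0) < h / 2).mean() / h for z0 in points])
theory = lam / 2 * np.exp(-lam * np.abs(points))

print(np.round(empirical, 3))
print(np.round(theory, 3))
```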
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
10_Continuous_Bayes_Rule_Derived_Distributions.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So today's agenda is to say a few more things about continuous random variables. Mainly we're going to talk a little bit about inference. This is a topic that we're going to revisit at the end of the semester. But there's a few things that we can already say at this point. And then the new topic for today is the subject of derived distributions. Basically if you know the distribution of one random variable, and you have a function of that random variable, how to find a distribution for that function. And it's a fairly mechanical skill, but that's an important one, so we're going to go through it. So let's see where we stand. Here is the big picture. That's all we have done so far. We have talked about discrete random variables, which we described by probability mass function. So if we have multiple random variables, we describe them with the a joint mass function. And then we define conditional probabilities, or conditional PMFs, and the three are related according to this formula, which is, you can think of it either as the definition of conditional probability. Or as the multiplication rule, the probability of two things happening is the product of the probabilities of the first thing happening, and then the second happening, given that the first has happened. There's another relation between this, which is the probability of x occurring, is the sum of the different probabilities of the different ways that x may occur, which is in conjunction with different values of y. And there's an analog of all that in the continuous world, where all you do is to replace p's by f's, and replace sums by integrals. So the formulas all look the same. The interpretations are a little more subtle, so the f's are not probabilities, they're probability densities. So they're probabilities per unit length, or in the case of joint PDf's, these are probabilities per unit area. So they're densities of some sort. Probably the more subtle concept to understand what it really is the conditional density. In some sense, it's simple. It's just the density of X in a world where you have been told the value of the random variable Y. It's a function that has two arguments, but the best way to think about it is to say that we fixed y. We're told the value of the random variable Y, and we look at it as a function of x. So as a function of x, the denominator is a constant, and it just looks like the joint density. when we keep y fixed. So it's really a function of one argument, just the argument x. And it has the same shape as the joint's density when you take that slice of it. So conditional PDFs are just slices of joint PDFs. There's a bunch of concepts, expectations, variances, cumulative distribution functions that apply equally well for to both universes of discrete or continuous random variables. So why is probability useful? Probability is useful because, among other things, we use it to make sense of the world around us. We use it to make inferences about things that we do not see directly. And this is done in a very simple manner using the base rule. We've already seen some of that, and now we're going to revisit it with a bunch of different variations. 
And the variations come because sometimes our random variable are discrete, sometimes they're continuous, or we can have a combination of the two. So the big picture is that there's some unknown random variable out of there, and we know the distribution that's random variable. And in the discrete case, it's going to be given by PMF. In the continuous case, it's given a PDF. Then we have some phenomenon, some noisy phenomenon or some measuring device, and that measuring device produces observable random variables Y. We don't know what x is, but we have some beliefs about how X is distributed. We observe the random variable Y. We need a model of this box. And the model of that box is going to be either a PMF, for the random variable Y. And that model tells us, if the true state of the world is X, how do we expect to Y to be distributed? That's for the case where Y is this discrete. If Y is a continuous, you might instead have a density for Y, or something of that form. So in either case, this should be a function that's known to us. This is our model of the measuring device. And now having observed y, we want to make inferences about x. What does it mean to make inferences? Well the most complete answer in the inference problem is to tell me the probability distribution of the unknown quantity. But when I say the probability distribution, I don't mean this one. I mean the probability distribution that takes into account the measurements that you got. So the output of an inference problem is to come up with the distribution of X, the unknown quantity, given what we have already observed. And in the discrete case, it would be an object like that. If X is continuous, it would be an object of this kind. OK, so we're given conditional probabilities of this type, and we want to get conditional distributions of the opposite type where the order of the conditioning is being reversed. So the starting point is always a formula such as this one. The probability of x happening, and then y happening given that x happens. This is the probability that a particular x and y happen simultaneously. But this is also equal to the probability that y happens, and then that x happens, given that y has happened. And you take this expression and send one term to the denominator of the other side, and this gives us the base rule for the discrete case. Which is this one that you have already seen, and you have played with it. So this is what the formula looks like in the discrete case. And the typical example where both random variables are discrete is the one we discussed some time ago. X is, let's say, a binary variable, or whether an airplane is present up there or not. Y is a discrete measurement, for example, whether our radar beeped or it didn't beep. And we make inferences and calculate the probability that the plane is there, or the probability that the plane is not there, given the measurement that we have made. And of course X and Y do not need to be just binary. They could be more general discrete random variables. So how does the story change in the continuous case? First, what's a possible application of the continuous case? Well, think of X as being some signal that takes values over a continuous range. Let's say X is the current through a resistor. And then you have some measuring device that measures currents, but that device is noisy, it gets hit, let's say for example, by Gaussian noise. And the Y that you observe is a noisy version of X. 
But your instruments are analog, so you measure things on a continuous scale. What are you going to do in that case? Well the inference problem, the output of the inference problem, is going to be the conditional distribution of X. What do you think your current is based on a particular value of Y that you have observed? So the output of our inference problem is, given the specific value of Y, to calculate this entire function as a function of x, and then go and plot it. How do we calculate it? You go through the same calculation as in the discrete case, except that all of the x's gets replaced by p's. In the continuous case, it's equally true that the joint's density is the product of the marginal density with the conditional density. So the formula is still valid with just a little change of notation. So we end up with the same formula here, except that we replace x's with p's. So all of these functions are known to us. We have formulas for them. We fix a specific value of y, we plug it in, so we're left with a function of x. And that gives us the posterior distribution. Actually there's also a denominator term that's not necessarily given to us, but we can always calculate it if we have the marginal of X, and we have the model for measuring device. Then we can always find the marginal distribution of Y. So this quantity, that number, is in general a known one, as well, and doesn't give us any problems. So to complicate things a little bit, we can also look into situations where our two random variables are of different kinds. For example, one random variable could be discrete, and the other it might be continuous. And there's two versions. Here one version is when X is discrete, but Y is continuous. What's an example of this? Well suppose that I send a single bit of information so my X is 0 or 1. And what I measure is Y, which is X plus, let's say, Gaussian noise. This is the standard example that shows up in any textbook on communication, or signal processing. You send a single bit, but what you observe is a noisy version of that bit. You start with a model of your x's. These would be your prior probabilities. For example, you might be believe that either 0 or 1 are equally likely, in which case your PMF gives equal weight to two possible values. And then we need a model of our measuring device. This is one specific model. The general model would have a shape such as follows. Y has a distribution, its density. And that density, however, depends on the value of X. So when x is 0, we might get a density of this kind. And when x is 1, we might get the density of a different kind. So these are the conditional densities of y in a universe that's specified by a particular value of x. And then we go ahead and do our inference. OK, what's the right formula for doing this inference? We need a formula that's sort of an analog of this one, but applies to the case where we have two random variables of different kinds. So let me just redo this calculation here. Except that I'm not going to have a probability of taking specific values. It will have to be something a little different. So here's how it goes. Let's look at the probability that X takes a specific value that makes sense in the discrete case, but for the continuous random variable, let's look at the probability that it takes values in some little interval. And now this probability of two things happening, I'm going to write it as a product. And I'm going to write this as a product in two different ways. 
So one way is to say that this is the probability that X takes that value and then given that X takes that value, the probability that Y falls inside that interval. So this is our usual multiplication rule for multiplying probabilities, but I can use the multiplication rule also in a different way. It's the probability that Y falls in the range of interest. And then the probability that X takes the value of interest given that Y satisfies the first condition. So this is something that's definitely true. We're just using the multiplication rule. And now let's translate it into PMF is PDF notation. So the entry up there is the PMF of X evaluated at x. The second entry, what is it? Well probabilities of little intervals are given to us by densities. But we are in the conditional universe where X takes on a particular value. So it's going to be the density of Y given the value of X times delta. So probabilities of little intervals are given by the density times the length of the little interval, but because we're working in the conditional universe, it has to be the conditional density. Now let's try the second expression. This is the probability that the Y falls into the little interval. So that's the density of Y times delta. And then here we have an object which is the conditional probability X in a universe where the value of Y is given to us. Now this relation is sort of approximate. This is true for very small delta in the limit. But we can cancel the deltas from both sides, and we're left with a formula that links together PMFs and PDFs. Now this may look terribly confusing because there's both p's and f's involved. But the logic should be clear. If a random variable is discrete, it's described by PMF. So here we're talking about the PMF of X in some particular universe. X is discrete, so it has a PMF. Similarly here. Y is continuous so it's described by a PDF. And even in the conditional universe where I tell you the value of X, Y is still a continuous random variable, so it's been described by a PDF. So this is the basic relation that links together PMF and PDFs. In this mixed the world. And now in this inequality, you can take this term and send it to the new denominator to the other side. And what you end up with is the formula that we have up here. And this is a formula that we can use to make inferences about the discrete random variable X when we're told the value of the continuous random variable Y. The probability that X takes on a particular value has something to do with the prior. And other than that, it's proportional to this quantity, the conditional of Y given X. So these are the quantities that we plotted here. Suppose that the x's are equally likely in your prior, so we don't really care about that term. It tells us that the posterior of X is proportional to that particular density under the given x's. So in this picture, if I were to get a particular y here, I would say that x equals 1 has a probability that's proportional to this quantity. x equals 0 has a probability that's proportional to this quantity. So the ratio of these two quantities gives us the relative odds of the different x's given the y that we have observed. So we're going to come back to this topic and redo plenty of examples of these kinds towards the end of the class, when we spend some time dedicated to inference problems. But already at this stage, we sort of have the basic skills to deal with a lot of that. And it's useful at this point to pull all the formulas together. 
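To make the single-bit example concrete, here is a small editorial sketch of the posterior computation for X in {0, 1} observed through additive Gaussian noise. The noise standard deviation of 1 and the 50/50 prior are illustrative assumptions, not values given in the lecture.

```python
import numpy as np

def prob_x_is_1_given_y(y, sigma=1.0, prior1=0.5):
    """P(X = 1 | Y = y) when Y = X + noise, noise ~ N(0, sigma^2), X in {0, 1}."""
    lik0 = np.exp(-(y - 0.0) ** 2 / (2 * sigma ** 2))   # f_{Y|X}(y | 0), up to a common constant
    lik1 = np.exp(-(y - 1.0) ** 2 / (2 * sigma ** 2))   # f_{Y|X}(y | 1)
    num = prior1 * lik1
    return num / (num + (1 - prior1) * lik0)

for y in (-0.5, 0.2, 0.5, 0.8, 1.5):
    print(y, round(prob_x_is_1_given_y(y), 3))   # exactly 0.5 at y = 0.5 with equal priors
```

The ratio of the two likelihood terms plays exactly the role described above: it gives the relative odds of x equal to 0 versus x equal to 1, reweighted by the prior.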
So finally let's look at the last case that's remaining. Here we have a continuous phenomenon that we're trying to measure, but our measurements are discrete. What's an example where this might happen? So you have some device that emits light, and you drive it with a current that has a certain intensity. You don't know what that current is, and it's a continuous random variable. But the device emits light by sending out individual photons. And your measurement is some other device that counts how many photons you get in a single second. So if we have devices that emit a very low intensity, you can actually start counting individual photons as they're being observed. So we have a discrete measurement, which is the number of photons, and we have a continuous hidden random variable that we're trying to estimate. What do we do in this case? Well we start again with a formula of this kind, and send the p term to the denominator. And that's the formula that we use there, except that the roles of x's and y's are interchanged. So since here we have Y being discrete, we should change all the subscripts. It would be p_Y, f_X given Y, f_X, and p_Y given X. So just change all those subscripts. Because now what used to be continuous became discrete, and vice versa. Take that formula, send the other terms to the denominator, and we have a formula for the density of X, given the particular measurement of Y that we have obtained. In some sense that's all there is in Bayesian inference. It's using these very simple one line formulas. But why are there people then who make their living solving inference problems? Well, the devil is in the details. As we're going to discuss, there are some real world issues of how exactly you design your f's, how you model your system, then how you do your calculations. This might not be always easy. For example, there's certain integrals or sums that have to be evaluated, which may be hard to do and so on. So this subject is a lot richer than just these formulas. On the other hand, at the conceptual level, that's the basis for Bayesian inference, that these are the basic concepts. All right, so now let's change gear and move to the new subject, which is the topic of finding the distribution of a function of a random variable. We call those distributions derived distributions, because we're given the distribution of X. We're interested in a function of X. We want to derive the distribution of that function based on the distribution that we already know. So it could be a function of just one random variable. It could be a function of several random variables. So one example that we are going to solve at some point, let's say you have two random variables X and Y. Somebody tells you their distribution, for example, is uniform on the square. For some reason, you're interested in the ratio of these two random variables, and you want to find the distribution of that ratio. You can think of lots of cases where your random variable of interest is created by taking some other random variables and taking a function of them. And so it's legitimate to care about the distribution of that random variable. A caveat, however. There's an important case where you don't need to find the distribution of that random variable. And this is when you only want to calculate expectations.
If all you care about is the expected value of this function of the random variables, you can work directly with the distribution of the original random variables without ever having to find the PDF of g. So you don't do unnecessary work if it's not needed, but if it's needed, or if you're asked to do it, then you just do it. So how do we find the distribution of the function? As a warm-up, let's look at the discrete case. Suppose that X is a discrete random variable and takes certain values. We have a function g that maps x's into y's. And we want to find the probability mass function for Y. So for example, if I'm interested in finding the probability that Y takes on this particular value, how would they find it? Well I ask, what are the different ways that these particular y value can happen? And the different ways that it can happen is either if x takes this value, or if X takes that value. So we identify this event in the y space with that event in the x space. These two events are identical. X falls in this set if and only if Y falls in that set. Therefore, the probability of Y falling in that set is the probability of X falling in that set. The probability of X falling in that set is just the sum of the individual probabilities of the x's in this set. So we just add the probabilities of the different x's where the summation is taken over all x's that leads to that particular value of y. Very good. So that's all there is in the discrete case. It's a very nice and simple. So let's transfer these methods to the continuous case. Suppose we are in the continuous case. Suppose that X and Y now can take values anywhere. And I try to use same methods and I ask, what is the probability that Y is going to take this value? At least if the diagram is this way, you would say this is the same as the probability that X takes this value. So I can find the probability of Y being this in terms of the probability of X being that. Is this useful? In the continuous case, it's not. Because in the continuous case, any single value has 0 probability. So what you're going to get out of this argument is that the probability Y takes this value is 0, is equal to the probability that X takes that value which also 0. That doesn't help us. We want to do something more. We want to actually find, perhaps, the density of Y, as opposed to the probabilities of individual y's. So to find the density of Y, you might argue as follows. I'm looking at an interval for y, and I ask what's the probability of falling in this interval. And you go back and find the corresponding set of x's that leads to those y's, and equate those two probabilities. The probability of all of those y's collectively should be equal to the probability of all of the x's that map into that interval collectively. And this way you can relate the two. As far as the mechanics go, in many cases it's easier to not to work with little intervals, but instead to work with cumulative distribution functions that used to work with sort of big intervals. So you can instead do a different picture. Look at this set of y's. This is the set of y's that are smaller than a certain value. The probability of this set is given by the cumulative distribution of the random variable Y. Now this set of y's gets produced by some corresponding set of x's. Maybe these are the x's that map into y's in that set. And then we argue as follows. The probability that the Y falls in this interval is the same as the probability that X falls in that interval. 
So the event of Y falling here and the event of X falling there are the same, so their probabilities must be equal. And then I do the calculations here. And I end up getting the cumulative distribution function of Y. Once I have the cumulative, I can get the density by just differentiating. So this is the general cookbook procedure that we will be using to calculate it derived distributions. We're interested in a random variable Y, which is a function of the x's. We will aim at obtaining the cumulative distribution of Y. Somehow, manage to calculate the probability of this event. Once we get it, and what I mean by get it, I don't mean getting it for a single value of little y. You need to get this for all little y's. So you need to get the function itself, the cumulative distribution. Once you get it in that form, then you can calculate the derivative at any particular point. And this is going to give you the density of Y. So a simple two-step procedure. The devil is in the details of how you carry the mechanics. So let's do one first example. Suppose that X is a uniform random variable, takes values between 0 and 2. We're interested in the random variable Y, which is the cube of X. What kind of distribution is it going to have? Now first notice that Y takes values between 0 and 8. So X is uniform, so all the x's are equally likely. You might then say, well, in that case, all the y's should be equally likely. So Y might also have a uniform distribution. Is this true? We'll find out. So let's start applying the cookbook procedure. We want to find first the cumulative distribution of the random variable Y, which by definition is the probability that the random variable is less than or equal to a certain number. That's what we want to find. What we have in our hands is the distribution of X. That's what we need to work with. So the first step that you need to do is to look at this events and translate it, and write it in terms of the random variable about which you know you have information. So Y is X cubed, so this event is the same as that event. So now we can forget about the y's. It's just an exercise involving a single random variable with a known distribution and we want to calculate the probability of some event. So we're looking at this event. X cubed being less than or equal to Y. We massage that expression so that's it involves X directly, so let's take cubic roots of both sides of this inequality. This event is the same as the event that X is less than or equal to Y to the 1/3. Now with a uniform distribution on [0,2], what is that probability going to be? It's the probability of being in the interval from 0 to y to the 1/3, so it's going to be in the area under the uniform going up to that point. And what's the area under that uniform? So here's x. Here is the distribution of X. It goes up to 2. The distribution of X is this one. We want to go up to y to the 1/3. So the probability for this event happening is this area. And the area is equal to the base, which is y to the 1/3 times the height. What is the height? Well since the density must integrate to 1, the total area under the curve has to be 1. So the height here is 1/2, and that explains why we get the 1/2 factor down there. So that's the formula for the cumulative distribution. And then the rest is easy. You just take derivatives. You differentiate this expression with respect to y 1/2 times 1/3, and y drops by one power. So you get y to 2/3 in the denominator. So if you wish to plot this, it's 1/y to the 2/3. 
So when y goes to 0, it sort of blows up and it goes on this way. Is this picture correct the way I've drawn it? What's wrong with it? AUDIENCE: [INAUDIBLE] PROFESSOR: Yes. y only takes values from 0 to 8. This formula that I wrote here is only correct when the previous picture applies. I took my y to the 1/3 to be between 0 and 2. So this formula here is only correct for y between 0 and 8. And for that reason, the formula for the derivative is also true only for y between 0 and 8. And any other values of y are impossible, so they get zero density. So to complete the picture here, the PDF of y has a cut-off at 8, and it's also 0 everywhere else. And one thing that we see is that the distribution of Y is not uniform. Certain y's are more likely than others, even though we started with a uniform random variable X. All right. So we will keep doing examples of this kind, a sequence of progressively more interesting or more complicated ones. So that's going to continue in the next lecture. You're going to see plenty of examples in your recitations and tutorials and so on. So let's do one that's pretty similar to the one that we did, but it's going to add just a small twist in how we do the mechanics. OK so you set your cruise control when you start driving. And you keep driving at that constant speed. Where you set your cruise control is somewhere between 30 and 60. You're going to drive a distance of 200. And so the time it's going to take for your trip is 200 over the setting of your cruise control. So it's 200/V. Somebody gives you the distribution of V, and they tell you not only is it between 30 and 60, it's roughly equally likely to be anything between 30 and 60, so we have a uniform distribution over that range. So we have a distribution of V. We want to find the distribution of the random variable T, which is the time it takes till your trip ends. So how are we going to proceed? We'll use the exact same cookbook procedure. We're going to start by finding the cumulative distribution of T. What is this? By definition, the cumulative distribution is the probability that T is less than a certain number. OK. Now we don't know the distribution of T, so we cannot work with this event directly. But we take that event and translate it into V-space. So we replace T by what we know T to be in terms of V, or the v's. All right. So we have the distribution of V. So now let's calculate this quantity. OK. Let's massage this event and rewrite it as the probability that V is larger than or equal to 200/T. So what is this going to be? So let's say that 200/T is some number that falls inside the range. So that's going to be true if 200/T is bigger than 30, and less than 60. Which means that t is less than 200/30 and bigger than 200/60. So for t's inside that range, this number 200/t falls inside that range. This is the range of t's that are possible, given the description of the problem that we have set up. So for t's in that range, what is the probability that V is bigger than this number? So V being bigger than that number is the event of interest, so its probability is going to be the area under this curve. So the area under that curve is the height of the curve, which is 1 over 30, times the base. How big is the base? Well it's from that point to 60, so the base has a length of 60 minus 200/t. And this is a formula which is valid for those t's for which this picture is correct.
And this picture is correct if 200/T happens to fall in this interval, which is the same as T falling in that interval, which are the t's that are possible. So finally let's find the density of T, which is what we're looking for. We find this by taking the derivative of this expression with respect to t. We only get one term from here. And this is going to be 200/30 times 1 over t squared. And this is the formula for the density for t's in the allowed range. OK, so that's the end of the solution to this particular problem as well. I said that there was a little twist compared to the previous one. What was the twist? Well the twist was that in the previous problem we dealt with the X cubed function, which was monotonically increasing. Here we dealt with a function that was monotonically decreasing. So when we had to find the probability that T is less than something, that translated into an event that V was bigger than something. Your time is less than something if and only if your velocity is bigger than something. So when you're dealing with a monotonically decreasing function, at some point some inequalities will have to get reversed. Finally let's look at a very useful one. Which is the case where we take a linear function of a random variable. So X is a random variable with a given distribution, and we consider a linear function of it. So in this particular instance, we take a to be equal to 2 and b equal to 5. And let us first argue just by picture. So X is a random variable that has a given distribution. Let's say it's this weird shape here. And x ranges from -1 to +2. Let's do things one step at a time. Let's first find the distribution of 2X. What do we know about 2X? Well if x ranges from -1 to 2, then the random variable 2X is going to range from -2 to +4. So that's what the range is going to be. Now dealing with the random variable 2X, as opposed to the random variable X, in some sense it's just changing the units in which we measure that random variable. It's just changing the scale on which we draw and plot things. So if it's just a scale change, then intuition should tell you that the random variable 2X should have a PDF of the same shape, except that it's scaled out by a factor of 2, because our random variable 2X now has a range that's twice as large. So we take the same PDF and scale it up by stretching the x-axis by a factor of 2. So what does scaling correspond to in terms of a formula? So the distribution of 2X as a function of, let's say, a generic argument z, is going to be the distribution of X, but scaled by a factor of 2. So taking a function and replacing its argument by the argument over 2, what it does is it stretches it by a factor of 2. You have probably been tortured ever since middle school to figure out, when you need to stretch a function, whether you need to put 2z or z/2. And the one that actually does the stretching is to put the z/2 in that place. So that's what the stretching does. Could that be the full answer? Well there's a catch. If you stretch this function by a factor of 2, what happens to the area under the function? It's going to get doubled. But the total probability must add up to 1, so we need to do something else to make sure that the area under the curve stays at 1. So we need to take that function and scale it down by this factor of 2.
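Here is a quick numerical check of this stretch-and-scale rule for Y = 2X. The particular density used for X below is just an illustrative choice supported on [-1, 2], not the shape drawn in lecture.

```python
# Check that f_Y(z) = (1/2) * f_X(z/2) when Y = 2X: the claimed density should
# still integrate to 1, and should match probabilities estimated from samples.
import random

def f_X(x):
    # an arbitrary valid density on [-1, 2]: rises linearly, integrates to 1
    return 2.0 * (x + 1.0) / 9.0 if -1.0 <= x <= 2.0 else 0.0

def f_Y(z):
    # claimed density of Y = 2X: stretch the argument, scale down the height
    return 0.5 * f_X(z / 2.0)

# Riemann-sum check that f_Y integrates to 1 over its range [-2, 4].
dz = 0.001
total = sum(f_Y(-2.0 + k * dz) * dz for k in range(int(6 / dz)))
print("area under f_Y:", round(total, 3))   # should be close to 1.0

# Monte Carlo sanity check: P(1 <= Y <= 2) from samples vs. from the formula.
def sample_X():
    # inverse-CDF sampling for f_X: F_X(x) = (x+1)^2 / 9, so x = 3*sqrt(U) - 1
    return 3.0 * random.random() ** 0.5 - 1.0

samples = [2.0 * sample_X() for _ in range(200_000)]
empirical = sum(1 <= y <= 2 for y in samples) / len(samples)
analytic = sum(f_Y(1.0 + k * dz) * dz for k in range(int(1 / dz)))
print("P(1 <= Y <= 2):", round(empirical, 3), "vs", round(analytic, 3))
```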
So when you're dealing with a multiple of a random variable, what happens to the PDF is you stretch it according to the multiple, and then scale it down by the same number so that you preserve the area under that curve. So now we found the distribution of 2X. How about the distribution of 2X + 5? Well what does adding 5 to random variable do? You're going to get essentially the same values with the same probability, except that those values all get shifted by 5. So all that you need to do is to take this PDF here, and shift it by 5 units. So the range used to be from -2 to 4. The new range is going to be from 3 to 9. And that's the final answer. This is the distribution of 2X + 5, starting with this particular distribution of X. Now shifting to the right by b, what does it do to a function? Shifting to the right to by a certain amount, mathematically, it corresponds to putting -b in the argument of the function. So I'm taking the formula that I had here, which is the scaling by a factor of a. The scaling down to keep the total area equal to 1. And then I need to introduce this extra term to do the shifting. So this is a plausible argument. The proof by picture that this should be the right answer. But just in order to keep our skills tuned and refined, let us do this derivation in a more formal way using our two-step cookbook procedure. And I'm going to do it under the assumption that a is positive, as in the example that's we just did. So what's the two-step procedure? We want to find the cumulative of Y, and after that we're going to differentiate. By definition the cumulative is the probability that the random variable takes values less than a certain number. And now we need to take this event and translate it, and express it in terms of the original random variables. So Y is, by definition, aX + b, so we're looking at this event. And now we want to express this event in a clean form where X shows up in a straight way. Let's say I'm going to massage this event and write it in this form. For this inequality to be true, x should be less than or equal to (y minus b) divided by a. OK, now what is this? This is the cumulative distribution of X evaluated at the particular point. So we got a formula for the cumulative Y based on the cumulative of X. What's the next step? Next step is to take derivatives of both sides. So the density of Y is going to be the derivative of this expression with respect to y. OK, so now here we need to use the chain rule. It's going to be the derivative of the F function with respect to its argument. And then we need to take the derivative of the argument with respect to y. What is the derivative of the cumulative? The derivative of the cumulative is the density itself. And we evaluate it at the point of interest. And then the chain rule tells us that we need to take the derivative of this with respect to y, and the derivative of this with respect to y is 1/a. And this gives us the formula which is consistent with what I had written down here, for the case where a is a positive number. What if a was a negative number? Could this formula be true? Of course not. Densities cannot be negative, right? So that formula cannot be true. Something needs to change. What should change? Where does this argument break down when a is negative? So when I write this inequality in this form, I divide by a. But when you divide by a negative number, the direction of an inequality is going to change. So when a is negative, this inequality becomes larger than or equal to. 
And in that case, the expression that I have up there would change. When this is "larger than or equal to" here, instead of getting the cumulative, I would get 1 minus the cumulative of (y minus b) divided by a. So this is the probability that X is bigger than this particular number. And now when you take the derivatives, there's going to be a minus sign that shows up. And that minus sign will end up being here. And so we're taking the negative of a negative number, and that basically is equivalent to taking the absolute value of that number. So all that happens when we have a negative a is that we have to take the absolute value of the scaling factor instead of the factor itself. All right, so this general formula is quite useful for dealing with linear functions of random variables. And one nice application of it is to take the formula for a normal random variable, consider a linear function of a normal random variable, plug it into this formula, and what you will find is that Y also has a normal distribution. So using this formula, now we can prove a statement that I had made a couple of lectures ago, that a linear function of a normal random variable is also normal. That's how you would prove it. I think this is it for today.
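As a quick check of that last claim, here is a small Python sketch. The particular mean, standard deviation, a, and b are illustrative; the point is that plugging the normal density into the general formula reproduces the normal density with mean a*mu + b and standard deviation |a|*sigma.

```python
# f_Y(y) = (1/|a|) * f_X((y - b) / a) applied to a normal X, compared against
# the normal density with the transformed mean and standard deviation.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.0, 2.0          # X ~ Normal(1, 2^2), an illustrative choice
a, b = -3.0, 5.0              # Y = -3X + 5 (note that a is negative here)

for y in [-4.0, 0.0, 2.0, 7.5]:
    via_formula = normal_pdf((y - b) / a, mu, sigma) / abs(a)
    direct = normal_pdf(y, a * mu + b, abs(a) * sigma)
    print(y, round(via_formula, 6), round(direct, 6))   # the two columns agree
```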
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Setting_Up_a_Markov_Chain.txt
Hi. In this problem, we're going to practice setting up a Markov chain by going fishing in this lake, which has n fish in it, some of which are green. And the rest of the fish are blue. So, what we do is, every day we go to this lake, and we catch exactly 1 fish. And all the fish are equally likely to be the 1 that's caught. Now, if we catch a green fish, we paint it blue, and we throw back into the lake. And if we catch a blue fish, we just keep it blue, and we also throw it back. Now, what we're interested in modeling is, how does this lake evolve over time? And specifically what we're interested in is the number of green fish that are left in the lake. So, let's let Gi be the event that there are i green fish left in the lake. And we want to know, how does Gi evolve over time? Now, one thing that we've learned that we can use to model this is a Markov chain. But before we can use it, we need to make sure that this actually satisfies the Markov property. Now, recall that the Markov property essentially says that, given the current state of the system, that's all you need in order to predict the future states. So, any past history of the previous states that it was in, that's all irrelevant. All you need is the current state. Now, in the context of this particular problem, what that means is that if I tell you that there are 10 green fish left, that's all the information you need in order to predict how many fish there will be tomorrow. So, why is that? Well, it's because what influences the number of green fish that are left? What influences it is which fish you catch because, depending on which fish you catch, you may paint the green fish blue, in which case the number of green fish decrease. But what affects which fish you catch? Well, that probability is dictated solely based on just the number of green fish in the lake right now, today. So, it doesn't matter that there were 20 fish yesterday. All that matters is how many green fish there are in the lake today. And so, because of that argument, the number of green fish-- this does satisfy the Markov property, so we can use this and model it as a Markov chain. So, like we alluded to just now, the key dynamic that we need to look at is, how does the number of green fish change? And if we look at it, we notice that after each day, the number of green fish can only have two possible transitions. One possible transition is that it goes down by exactly 1, which happens if you happen to catch a green fish and paint it blue. So, that green fish is no longer green, so the number of green fish goes down by 1. The other possible transition is that Gi doesn't change because you caught a blue fish that day. So, all the green fish are still green. So, now given that, let's see if we can come up with a Markov chain. So, the first thing we've done is we've written down all the different states, right? So, this represents the number of green fish left in the lake. So, there could be 0 green fish left, 1 green fish, all the way through n, which means that every single fish in the lake is green. Now, we have the states. What we need to do now is to fill in the transition probabilities, which are the Pij's. And remember, the Pij is the probability of transitioning from state i to state j in the next transition. So, what that means in this context is, what's the probability that there will be j green fish tomorrow given that there are i green fish today? 
Now, if we go back to our earlier argument, we see that for any given i, you can only transition to two possible j's. One of them is you stay at i because the number of green fish doesn't change because you caught a blue fish. And the other is that you'd go from i to i minus 1. The number of green fish decreases by 1. Now, what we need to do now is fill in what those probabilities are. So, if j equals i, meaning that the number of green fish doesn't change, well, what's the probability that you have the same number of green fish tomorrow as you do today? Well, if you have i green fish today, that happens if you catch 1 of the n minus i blue fish. So, what's the probability of catching one of the n minus i blue fish? Well, it's n minus i over n. Now, the other possible transition is you go from a i to j equals i minus 1, so i goes down by 1. And that happens when you catch a green fish. So, given that there are i green fish, what's the probability that you catch 1 of those? Well, it's going to be i/n. And finally, every other transition has 0 probability. All right. So, now we can add those transitions on to our Markov chain. So, for example, we have these. So, let's look at this general case i. So, if you're state i, you have i green fish left. You will transition to i minus 1 green fish left if that day you caught a green fish. And we said that that probability is i/n. And the self transition probability is you caught a blue fish that day, so you still stay a i green fish. And that probability, we said, was n minus i over n. All right. Now, it's helpful to verify that this formula works by looking at some cases where it's intuitive to calculate what these probabilities should be. So, let's look at state n. That is the state where every single fish in the lake is green. So, if ever single fish in the lake is green, then no matter what fish you catch, it's going to be green. And you're going to paint it blue and return it, so you're guaranteed to go down to n minus 1 green fish. And so, this transition probability down to n minus 1 is guaranteed to be 1. And so, the self transition probability has to be 0. Now, let's go back to our formula and verify that actually gives us the right value. So, if i is n, then there's only these transition probabilities. So, if i is n, then the transition probability to j, for j is also n, is n minus n over n, which is 0. And that's exactly what we said. We argued that the self transition probability should be 0. And also, if i is in, the probability of transitioning to n minus 1 should be n over n, which is 1. And that's exactly what we argued here. So, it seems like these transition probabilities do make sense. And if we wanted to, we could fill in the rest of these. So, for example, this would be 2/n, 1/n, n minus 1 over n, n minus 2 over n. And now, let's also consider the case of state 0, which means that every single fish is blue. There are 0 green fish left. Well, if that's the case, then what's the probability of staying at 0? Well, that's n minus 0 over n is 1, all right? So, the self transition probability is 1. And that makes sense because if you have 0 green fish, there's no way to generate more green fish because you don't paint blue fish green. And so, you're going to stay at 0 green fish forever. All right. So, we've characterized the entire Markov chain now. And so, now let's just answer some simple questions about this. So, the problem asks us to identify, what are the recurrent and transient states? 
So, remember that recurrent state means that if you start out at that state, no matter where you go, what other states you end up at, there is some positive probability path that will take you back to your original state. And if you're not recurrent, then you're transient, which means that if you're transient, if you start out at the transient state, there is some other state that you can go to, from which there's no way to come back to the original transient state. All right. So, now let's look at this and see which states are recurrent and which are transient. And we can fill this in more. And if we look at it, let's look at state n. Well, we're guaranteed to go from state n to state n minus 1. And once we're in state n minus 1, there's no way for us to go back to state n because we can't generate more green fish. And so, n is transient. And similarly, we can use the same argument to show that everything from 1 through n, all of these states, are transient for the same reason because there's no way to generate more green fish. And so, the chain can only stay at a given state or go down 1. And so, it always goes down. It can only go left, and it can never go right. So, once you leave a certain state, there's no way to come back. And so, states 1 through n are all transient. And 0 the only recurrent state because, well, the only place you go from 0 is itself. So, you always stay at 0. And in fact, 0 is not only recurrent, it's absorbing because every single other state, no matter where you start out at, you will always end up at 0. So, this was just an example of how to set up a Markov chain. You just think about the actual dynamics of what's going on and make sure that it satisfies the Markov property. Then, figure out what all the states are and calculate all the transition probabilities. And once you have that, you've specified your Markov chain.
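As a companion to this walkthrough, here is a minimal sketch of the transition matrix in Python, assuming an illustrative lake with n = 5 fish; the value of n is arbitrary.

```python
# State i means "i green fish remain"; P[i][j] is the probability of going
# from i green fish today to j green fish tomorrow.
n = 5
P = [[0.0] * (n + 1) for _ in range(n + 1)]
for i in range(n + 1):
    P[i][i] = (n - i) / n           # caught a blue fish: count stays at i
    if i > 0:
        P[i][i - 1] = i / n         # caught a green fish: count drops to i - 1

# Every row should sum to 1, state 0 should be absorbing, and from state n we
# are certain to move down to n - 1, matching the special cases checked above.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
print("P[0][0] =", P[0][0])          # 1.0: no green fish can ever reappear
print("P[n][n-1] =", P[n][n - 1])    # 1.0: with all fish green, we must catch one
```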
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Calculating_a_Cumulative_Distribution_Function_CDF.txt
Hi. In this problem, we'll get some practice working with PDFs and also using PDFs to calculate CDFs. So the PDF that we're given in this problem is here. So we have a random variable, z, which is a continuous random variable. And we're told that the PDF of this random variable, z, is given by gamma times 1 plus z squared in the range of z between negative 2 and 1. And outside of this range, it's 0. All right, so first thing we need to do and the first part of this problem is we need to figure out what gamma is because it's not really a fully specified PDF yet. We need to figure out exactly what the value gamma is. And how do we do that? Well, we've done analogous things before for the discrete case. So the tool that we use is that the PDF must integrate to 1. So in the discrete case, the analogy was that the PMF had to sum to 1. So what do we know? We know that when you integrate this PDF from negative infinity to infinity, fz of z, it has to equal 1. All right, so what do we do now? Well, we know what the PDF is-- partially, except for gamma-- so let's plug that in. And the first thing that we'll do is we'll simplify this because we know that the PDF is actually only non-zero in the range negative 2 to 1. So instead of integrating from negative infinity to infinity, we'll just integrate from negative 2 to 1. And now let's plug in this gamma times 1 plus z squared dz. And now the rest of the problem is just applying calculus and integrating this. So let's just go through that process. So we get gamma times z plus 1/3 z cubed from negative 2 to 1. And now we'll plug in the limits. And we get gamma, and that's 1 plus 1/3 minus minus 2 plus 1/3 times minus 2 cubed. And then if we add this all up, you get 4/3 plus 2 plus 8/3, which will give you 6. So what we end up with in the end is that 1 is equal to 6 gamma. So what does that tell us? That tells us that, in this case, gamma is 1/6. OK, so we've actually figured out what this PDF really is. And let's just substitute that in. So we know what gamma is. So it's 1/6. So from this PDF, we can calculate anything that we want to. This PDF, basically, fully specifies everything that we need to know about this random variable, z. And one of the things that we can calculate from the PDF is the CDF. So the next part of the problem asks us to calculate the CDF. So remember the CDF, we use capital F. And the definition is that you integrate from negative infinity to this z. And what do you integrate? You integrate the PDF. And we'll use some dummy variable, y, here in the integration. So what is it really doing? It's basically just taking the PDF and taking everything to the left of it. So another way to think about this-- this is the probability that the random variable is less than or equal to some little z. It's just accumulating probability as you go from left to right. So the hardest part about calculating the CDFs, really, is actually just keeping track of the ranges, because unless the PDF is really simple, you'll have cases where the PDF could be 0 in some ranges and non-zero in other ranges. And then what you really have to keep track of is where those ranges are and where you actually have non-zero probability. So in this case, we actually break things down into three different ranges because this PDF actually looks something like this. So it's non-zero between negative 2 and 1, and it's 0 everywhere else.
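Before moving on to the CDF, here is a quick numerical sanity check of the normalization step in Python; the step size is an arbitrary choice.

```python
# Integrating 1 + z^2 over [-2, 1] should give 6, which forces gamma = 1/6
# so that the PDF gamma * (1 + z^2) integrates to 1.
dz = 1e-5
steps = int(3 / dz)
integral = sum((1 + (-2 + k * dz) ** 2) * dz for k in range(steps))
gamma = 1 / integral
print(round(integral, 3), round(gamma, 4))   # close to 6.0 and 0.1667 (i.e. 1/6)
```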
So then what that means is that our job is a little simpler because everything to the left of negative 2, the CDF will be 0 because there's no probability density to the left. And then everything to the right of 1, well we've accumulated all the probability in the PDF because we know that when you integrate from negative 2 to 1, you capture everything. So anything to the right of 1, the CDF will be 1. So the only hard part is calculating what the CDF is in this intermediate range, between negative 2 and 1. So let's do that case first-- so the case where z is between negative 2 and 1. So what is the CDF in that case? Well, the definition is to integrate from negative infinity to z. But we know that everything to the left of negative 2, there's no probability density. So we don't need to include that. So we can actually change this lower limit to negative 2. And the upper limit is wherever this z is. So that becomes our integral. And the inside is still the PDF. So let's just plug that in. We know that it's 1/6 times 1 plus-- we'll make this y squared-- dy. And now it's just calculus again. And in fact, it's more or less the same integral, so what we get is y plus 1/3 y cubed from negative 2 to z. Notice the only thing that's different here is that we're integrating from negative 2 to z instead of negative 2 to 1. And when we calculate this out, what we get is 1/6 times z plus 1/3 z cubed minus-- minus 2 plus 1/3 times minus 2 cubed-- which gives us 1/6 times z plus 1/3 z cubed plus 14/3, since 2 plus 8/3 gives us 14/3. So that actually is our CDF between the range of negative 2 to 1. So for full completeness, let's actually write out the entire CDF, because there's two other parts in the CDF. So the first part is that it's 0 if z is less than negative 2. And it's 1 if z is greater than 1. And in between, it's this thing that we've just calculated. So it's 1/6 times z plus 1/3 z cubed plus 14/3 if z is between minus 2 and 1. So that is our final answer. So the main point of this problem was to drill a little bit more on the concepts of PDFs and CDFs. So for the PDF, the important thing to remember is that in order to be a valid PDF, the PDF has to integrate to 1. And you can use that fact to help you calculate any unknown constants in the PDF. And then to calculate the CDF, it's just integrating the PDF from negative infinity to whatever point that you want to cut off at. And the tricky part, as I said earlier, was really just keeping track of the ranges. In this case, we broke it down into three ranges. If we had a slightly more complicated PDF, then you would have to keep track of even more ranges. All right, so I hope that was helpful, and we'll see you next time.
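As a supplement to this problem, here is a short sketch of the derived CDF with its endpoint checks; the cross-check points are arbitrary.

```python
# The CDF derived above, checked at the endpoints (F(-2) = 0, F(1) = 1) and
# against direct numerical integration of the PDF at a few sample points.
def pdf(z):
    return (1 + z * z) / 6.0 if -2.0 <= z <= 1.0 else 0.0

def cdf(z):
    if z < -2.0:
        return 0.0
    if z > 1.0:
        return 1.0
    return (z + z ** 3 / 3.0 + 14.0 / 3.0) / 6.0

print(cdf(-2.0), cdf(1.0))          # 0.0 and 1.0

for z0 in [-1.0, 0.0, 0.5]:
    dz = 1e-4
    numeric = sum(pdf(-2.0 + k * dz) * dz for k in range(int((z0 + 2.0) / dz)))
    print(z0, round(cdf(z0), 4), round(numeric, 4))   # the two columns agree
```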
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Uniform_Probabilities_on_a_Square.txt
In this problem, we will be helping Romeo and Juliet meet up for a date. And in the process, also we'll review some concepts in basic probability theory, including sample spaces and probability laws. This problem, the basic setup is that Romeo and Juliet are trying to meet up for a date. And let's say they're trying to meet up for lunch tomorrow at noon. But they're not necessarily punctual. So they may arrive on time with a delay of 0, or they may actually be up to 1 hour late and arrive at 1:00 PM. So the other thing that we assume in this problem is that all pairs of arrival times-- so the time that Romeo arrives paired with the time they Juliet arrives-- all of these pairs are equally likely. And I've put this in quotes, because we haven't really specify exactly what this means. And we'll come back to that in a little bit. The last important thing is that each person will wait for 15 minutes for the other person to arrive. If within that 15-minute window the other person doesn't arrive, then they'll give up and they'll end up not meeting up for lunch. So to solve this problem, let's first try to set up a sample space and come up with a probability law to describe this scenario. And let's actually start with a simpler version of this problem. And instead of assuming that they can arrive at any delay between 0 and 1 hour, let's pretend instead that Romeo and Juliet can only arrive in 15-minute increments. So Romeo can arrive on time with a delay 0, or be 15 minutes late, 30 minutes late, 45 minutes late, or one hour late. But none of the other times are possible. And the same thing for Juliet. Let's start out with just the simple case first, because it helps us get the intuition for the problem, and it's an easier case to analyze. So it's actually easy to visualize this. It's a nice visual tool to group this sample space into a grid. So the horizontal axis here represents the arrival time of Romeo, and the vertical axis represents the arrival time of Juliet. And so, for example, this point here would represent Romeo arriving 15 minutes late and Juliet arriving 30 minutes late. So this is our sample space now. This is our omega. And now let's try to assign a probability law. And we'll continue to assume that all pairs of arrival times are equally likely. And now we can actually specifically specify what this term means. And in particular, we'll be invoking the discrete uniform law, which basically says that all of these points, which are just outcomes in our probabilistic experiment-- all of these outcomes are equally likely. And so since there are 25 of them, each one of these outcomes has a probability of 1 over 25. So now we've specified our sample space and our probability law. So now let's try to answer the question, what is the probability that Romeo and Juliet will meet up for their date? So all that amounts to now is just identifying which of these 25 outcomes results in Romeo and Juliet arriving within 15 minutes of each other. So let's start with this one that I've picked out. If Romeo arrives 15 minutes late and Juliet arrives 30 minutes late, then they will arrive within 15 minutes of each other. So this outcome does result in the two of them meeting. And so we can actually highlight all of these. And it turns out that these outcomes that I'm highlighting result in the two them arriving within 15 minutes of each other. So because each one has a probability of 1 over 25, all we really need to do now is just count how many outcomes there are. 
So there's 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13. So the probability in the end is for the discrete case. The discrete case-- I'm referring to the case where we simplified it and considered only arrival times with increments of 15 minutes. In this case, the probability is 13 over 25. So now we have an idea of how to solve this problem. It amounts to basically coming up with a sample space, a probability law, and then identifying the events of interest in calculating the probability of that event. So now let's actually solve the problem that we really are interested in, which is that instead of confining Romeo and Juliet to arrive in only 15-minute minute increments, really, time is continuous, and Romeo and Juliet can arrive at any time. So they don't necessarily have to arrive 15 minutes late. Romeo could arrive 15 minutes and 37 seconds late if he wanted to. So now our new sample space is actually just, instead of only these 25 points in the grid, it's this entire square. So any point within the square could be a possible pair of meeting times between Romeo and Juliet. So that is our new sample space, our new omega. And now let's assign a new probability law. And now, instead of being in the discrete world, we're in the continuous world. And the analogy here is to consider probabilities as areas. So the area of this entire square is one. And that also corresponds to the probability of omega, the sample space. And imagine just spreading probability evenly across this square so that the probability of any event-- which in this case would just be any shape within this square-- is exactly equal to the area of that shape. So now that is our new sample space and our new probability law. So what we have to do now is just to identify the event of interest, which is still the event that Romeo and Juliet arrive within 15 minutes of each other. So let's do that. If Romeo and Juliet arrive both on time, then obviously they'll meet. And if Romeo's on time and Juliet is 15 minutes late, then they will still meet. And in fact, any pairs of meeting times between these would still work, because now Romeo can be on time, and Juliet can arrive at any time between 0 and 15 minutes late. But you notice that if Juliet is even a tiny bit later than 15 minutes, then they won't end up meeting. So this segment here is part of the event of interest. And similarly, this segment here is also part of the event. And if you take this exercise and extend it, you can actually verify that the event of interest is this strip shape in the middle of the square. Which, if you think about it, makes sense, because you want the arrival times between Romeo and Juliet to be close to each other, so you would be expect it to be somewhere close to a diagonal in this square. So now we have our event of interest. We have our sample space and our probability law. So all we have to do now is just calculate what this probability is. And we've already said that the probability in this probability law is just areas. So now it actually just boils down to not a probability problem, but a problem in geometry. So to calculate this area, you can do it in lots of ways. One way is to calculate the area of the square, which is 1, and subtract the areas of these two triangles. So let's do that. So in the continuous case, the probability of meeting is going to be 1 minus the area of this triangle. The base here is 3/4 and 3/4, so it's 1/2 times 3/4 times 3/4. That's the area of one of these triangles. 
There's two of them, so we'll multiply by two. And we end up with 1 minus 9/16, or 7/16 as our final answer. So in this problem, we've reviewed some basic concepts of probability, and that's also helped us solve this problem of helping Romeo and Juliet meet up for a date. And if you wanted to, you could even extend this problem even further and turn it on its head. And instead of calculating given that they arrive within 15 minutes of each other, what is the probability that they'll meet, let's say that Romeo really wants to meet up with Juliet, and he wants to assure himself a least, say, a 90% chance of meeting Juliet. Then you can ask, if he wants to have at least a 90% chance of meeting her, how long should he be willing to wait? And so that's the flip side of the problem. And you can see that with just some basic concepts of probability, you can answer some already pretty interesting problems. So I hope this problem was interesting, and we'll see you next time.
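As a supplement to this problem, here is a small sketch reproducing both answers numerically; the Monte Carlo sample size is an arbitrary choice.

```python
# Discrete 15-minute-increment version by direct counting, and continuous
# version by Monte Carlo (which should come out near 7/16 = 0.4375).
import random
from fractions import Fraction

# Discrete case: delays in {0, 15, 30, 45, 60} minutes, all 25 pairs equally likely.
times = [0, 15, 30, 45, 60]
meet = sum(abs(r - j) <= 15 for r in times for j in times)
print(Fraction(meet, len(times) ** 2))      # 13/25

# Continuous case: delays uniform on [0, 1] hour; they meet if within 1/4 hour.
trials = 1_000_000
hits = sum(abs(random.random() - random.random()) <= 0.25 for _ in range(trials))
print(hits / trials)                        # roughly 0.4375
```

For the follow-up question mentioned at the end, one way to set it up is to assume both are willing to wait t hours, in which case the meeting probability is 1 minus (1 minus t) squared; that first reaches 90% at t = 1 minus the square root of 0.1, roughly 0.68 hours.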
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Inferring_a_Parameter_of_Uniform_Part_1.txt
Hi. In this problem, Romeo and Juliet are back and they're still looking to meet up for a date. Remember, the last time we met up with them, it was back in the beginning of the course and they were trying to meet up for a date but they weren't always punctual. So we modeled their delay as uniformly distributed between 0 and 1 hour. So now in this problem, we're actually going to look at a variation. And we're going to ask the question, how do we actually know that the distribution is uniformly distributed between 0 and 1 hour? Or it could also be the case that it is uniformly distributed between 0 and half an hour, or zero and two hours. How do we actually know what this parameter of the uniform distribution is? OK, so let's put ourselves in the shoes of Romeo who's tired of being stood up by Juliet on all these dates. And fortunately, he's learned some probability since the beginning of the course, and so have we. And in particular we've learned Bayesian inference. And so in this problem, we're actually going to use basically all the concepts and tools of Bayesian inference that we learned in chapter eight and apply them. So it's a nice review problem, and so let's get started. The setup of the problem is similar to the first Romeo and Juliet problem that we dealt with. They are meeting up for a date, and they're not always punctual and they have a delay. But instead of the delay being uniformly distributed between 0 and 1 hour, now we have an extra layer of uncertainty. So if we know some theta, then we know that the delay, which we'll call x, is uniformly distributed between 0 and that theta. So here's one possible theta, theta 1. But we don't actually know what this theta is. So in the original problem we knew that theta was exactly one hour. But in this problem we don't know what theta is. So theta could also be like this, some other theta 2. And we don't know what this theta is. And we choose to model it as being uniformly distributed between 0 and 1. So like I said, we have two layers now. We have uncertainty about theta, which is the parameter of the uniform distribution. And then we have uncertainty in regards to the actual delay, x. OK, so let's actually write out what these distributions are. So theta, the unknown parameter, we're told in the problem that we're going to assume that is uniformly distributed between 0 and 1. And so the PDF is just 1, when theta is between 0 and 1, and 0 otherwise. And we're told that, given what theta is, given what this parameter is, the delay is uniformly distributed between 0 and this theta. So what that means is that we know this conditional PDF, the conditional PDF of x given theta is going to be 1 over theta if x is between 0 and theta, and 0 otherwise. All right, because we know that given a theta, x is uniformly distributed between 0 and theta. So in order to make this a uniform distribution, the normalization-- or the height, you can think of it-- has to be exactly 1 over theta. So just imagine for a concrete case, if theta were 1, 1 hour in the original problem, then this would just be a PDF of 1 or a standard uniform distribution between 0 and 1. OK, so now we have the necessary fundamentals for this problem. And what do we do in inference? Well the objective is to try to infer some unknown parameter. And what we have is we have a prior which is our initial belief for what this parameter might be. And then we have some data. So in this case, the data that we collect is the actual observed delay for Juliet, x.
And this model tells us how this data is essentially generated. And now what we do is, we want to use the data and our prior belief, combined them somehow, and use it to update our belief into what we call our posterior. In order to do that, we use Bayes' rule, which is why this is called Bayesian inference. So when we use Bayes' rule, remember the Bayes' rule is just, we want to now find the posterior which is the conditional PDF of theta, the unknown parameter, given x. So essentially just flip this condition. And remember Bayes' rule is given as the following. It's just the prior times this conditional PDF of x given theta divided by the PDF of x. All right, and we know what most of these things are. The prior or just the PDF of theta is 1. The condition PDF of x given theta is 1 over theta. And then of course we have this PDF of x. But we always have to be careful because these two values are only valid for certain ranges of theta and x. So in order for this to be valid we need theta to be between 0 and 1 because otherwise it would be 0. So we need theta to be between 0 and 1. And we need x to be between 0 and theta. And otherwise this would be 0. So now we're almost done. One last thing we need to do is just calculate what this denominator is, f x of x. Well the denominator, remember, is just a normalization. And it's actually relatively less important because what we'll find out is that this has no dependence on theta. It will only depend on x. So the importance, the dependence on theta, will be captured just by the numerator. But for completeness let's calculate out what this is. So it's just a normalization. So it's actually just the integral of the numerator. You can think of it as an application of kind of total probability. So we have this that we integrate over and what do we integrate this over? Well we know that we're integrating over theta. And we know that theta has to be between x-- has to be greater than x and it has to be less than 1. So we integrate from theta equals x to 1. And this is just the integral from x to 1 of the numerator, right? This is just 1 and this is 1 over theta. So it's the integral of 1 over theta, d theta from x to 1. Which when you do it out, this is the integral, this is log of theta. So it's log of 1 minus log of x. Log of 1 is 0. X, remember x is between 0 and theta. Theta is less than 1. So x has to be between 0 and 1. The log of something between 0 and 1 is negative. So this is a negative number. This is 0. And then we have a negative sign. So really what we can write this as is the absolute value of log of x. This is just so that it would actually be negative log of x. But because log of x is negative we can just-- we know that this is actually going to be a positive number. So this is just to make it look more intuitive. OK so now to complete this we can just plug that back in and the final answer is-- this is going to be the absolute value log of x or you could also rewrite this as 1 over theta times absolute value log of x. And of course, remember that the actual limits for where this is valid are very important. OK, so what does this actually mean? Let's try to interpret what this answer is. So what we have is this is the posterior distribution. And now what have we done? Well we started out with the prior, which was that theta is uniform between 0 and between 0 and 1. This is our prior belief. Now we observed some data. And this allows us to update our belief. And this is the update that we get. 
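As a quick numerical sketch of this posterior, using an illustrative observed delay of x = 0.5 hours (the same value used in the discussion that follows), one can check that 1 over theta times the absolute value of log x really integrates to 1 over the range from x to 1, and that it decreases in theta.

```python
# Posterior f(theta | x) = 1 / (theta * |ln x|) for x <= theta <= 1, checked
# by a simple Riemann sum; the observed x = 0.5 is an illustrative choice.
import math

x = 0.5
def posterior(theta):
    return 1.0 / (theta * abs(math.log(x))) if x <= theta <= 1.0 else 0.0

dtheta = 1e-5
total = sum(posterior(x + k * dtheta) * dtheta
            for k in range(int((1 - x) / dtheta)))
print(round(total, 4))                                        # close to 1.0
print(round(posterior(0.5), 3), round(posterior(1.0), 3))     # decreasing in theta
```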
So let's just assume that we observe that Juliet is late by half an hour. Well if she's late by half an hour, what does that tell us about what theta can be? Well what we know from that at least is that theta cannot be anything less than half an hour because if theta were less than half an hour there's no way that her delay-- remember her delay we know has to be distributed between 0 and theta. There's no way that her delay could be half an hour if theta were less than half an hour. So automatically we know that now theta has to be somewhere between x and one which is where this limit comes in. So we know that theta have to be between x and 1 now instead of just 0 and 1. So by observing an x that cuts down and eliminates part of the range of theta, the range that theta can take on. Now what else do we know? Well this, we can actually plot this. This is a function of theta. The log x, we can just think of it as some sort of scaling factor. So it's something like 1 over theta scaled. And so that's going to look something like this. And so what we've done is we've transformed the prior, which looks like flat and uniform into something that looks like this, the posterior. So we've eliminated small values of x because we know that those can't be possible. And now what's left is everything between x and 1. So now why is it also that it becomes not uniform between x and 1? Well it's because, if you think about it, when theta is close to x, so say x is half an hour. If theta is half an hour, that means that there's higher probability that you will actually observe something, a delay of half an hour because there's only a range between 0 and half an hour that x can be drawn from. Now if theta was actually 1 then x could be drawn anywhere from 0 to 1 which is a wider range. And so it's less likely that you'll get a value of x equal to half an hour. And so because of that values of theta closer to x are more likely. That's why you get this decreasing function. OK, so now let's continue and now what we have is this is the case for if you observe one data point. So you arrange a date with Juliet, you observe how late she is, and you get one value of x. And now suppose you want to get collect more data so you arrange say 10 dates with Juliet. And for each one you observe how late she was. So now we can collect multiple samples, say n samples of delays. So x1 is her delay on the first date. Xn is her delay on the nth date. And x we can now just call a variable that's a collection of all of these. And now the question is, how do you incorporate in all this information into updating your belief about theta? And it's actually pretty analogous to what we've done here. The important assumption that we make in this problem is that conditional on theta, all of these delays are in fact conditionally independent. And that's going to help us solve this problem. So the set up is essentially the same. What we still need is a-- we still need the prior. And the prior hasn't changed. The prior is still uniform between 0 and 1. The way the actual delays are generated is we still assume to be the same given conditional on theta, each one of these is conditionally independent, and each one is uniformly distributed between 0 and theta. And so what we get is that this is going to be equal to-- you can also imagine this as a big joint PDF, joint conditional PDF of all the x's. 
And because we said that they are conditionally independent given theta, then we can actually split this joint PDF into the product of a lot of individual conditional PDFs. So this we can actually rewrite as PDF of x1 given theta times all the way through the condition PDF of xn given theta. And because we assume that each one of these is-- for each one of these it's uniformly distributed between 0 and theta, they're all the same. So in fact what we get is 1 over theta for each one of these. And there's n of them. So it's 1 over theta to the n. But what values of x is this valid for? What values of x and theta? Well what we need is that for each one of these, we need that theta has to be at least equal to whatever x you get. Whatever x you observe, theta has to at least that. So we know that theta has to at least equal to x1 and all the way through xn. And so theta has to be at least greater than or equal to all these x's and otherwise this would be 0. So let's define something that's going to help us. Let's define x bar to be the maximum of all the observed x's. And so what we can do is rewrite this condition as theta has to be at least equal to the maximum, equal to x bar. All right, and now we can again apply Bayes' rule. Bayes' rule will tell us what this posterior distribution is. So again the numerator will be the prior times this conditional PDF over PDF of x. OK, so the numerator again, the prior is just one. This distribution we calculated over here. It's 1 over theta to the n. And then we have this denominator. And again, we need to be careful to write down when this is actually valid. So it's actually valid when x bar is greater than theta-- I'm sorry, x bar is less than or equal to theta, and otherwise it's zero. So this is actually more or less complete. Again we need to calculate out what exactly this denominator is but just like before it's actually just a scaling factor which is independent of what theta is. So if we wanted to, we could actually calculate this out. It would be just like before. It would be the integral of the numerator, which is 1 over theta to the n d theta. And we integrate theta from before, it was from x to 1. But now we need to integrate from x bar to 1. And if we wanted to, we can actually do others. It's fairly simple calculus to calculate what this normalization factor would be. But the main point is that the shape of it will be dictated by this 1 over theta to the n term. And so now we know that with n pieces of data, it's actually going to be 1-- the shape will be 1 over theta to the n, where theta has to be at least greater than or equal to x bar. Before it was actually just 1 over theta and has to be between x and 1. So you can kind of see how the problem generalizes when you collect more data. So now imagine that this is the new-- when you collect n pieces of data, the maximum of all the x's is here. Well, it turns out that it's the posterior now is going to look something like this. So it becomes steeper because it's 1 over theta to the n as opposed to 1 over theta. And it's limited to be between x bar and 1. And so with more data you're more sure of the range that theta can take on because each data points eliminates parts of theta, the range of theta that theta can't be. And so you're left with just x bar to 1. And you're also more certain. So you have this kind of distribution. 
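Here is a small sketch of that n-observation posterior, using made-up delay data (the list below is hypothetical); it confirms that the density integrates to 1 and piles up near x bar.

```python
# Posterior proportional to 1/theta^n on [x_bar, 1], where x_bar is the
# largest observed delay. Normalizing constant computed in closed form.
delays = [0.31, 0.12, 0.47, 0.08, 0.41]     # hypothetical observed delays (hours)
n = len(delays)
x_bar = max(delays)

# Integral of theta^(-n) from x_bar to 1, valid here since n > 1.
Z = (x_bar ** (1 - n) - 1.0) / (n - 1)

def posterior(theta):
    return theta ** (-n) / Z if x_bar <= theta <= 1.0 else 0.0

dtheta = 1e-5
total = sum(posterior(x_bar + k * dtheta) * dtheta
            for k in range(int((1.0 - x_bar) / dtheta)))
# Total is close to 1, and the density at x_bar is much larger than at 1,
# which is the steep, concentrated shape described above.
print(round(total, 4), round(posterior(x_bar), 2), round(posterior(1.0), 2))
```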
OK, so this is kind of the posterior distribution which tells you the entire distribution of what the unknown parameter-- the entire distribution of the unknown parameter given all the data that you have plus the prior distribution that you have. But if someone were to come to ask you, your manager asks you, well what is your best guess of what theta is? It's less informative or less clear when you tell them, here's the distribution. Because you still have a big range of what theta could be, it could be anything between x and 1 or x bar and 1. So if you wanted to actually come up with a point estimate which is just one single value, there's different ways you can do it. The first way that we'll talk about is the MAP rule. What the MAP rule does is it takes the posterior distribution and just finds the value of the parameter that gives the maximum posterior distribution, the maximum point in the posterior distribution. So if you look at this posterior distribution, the MAP rule will just take the highest value. And in this case, because the posterior looks like this, the highest value is in fact x. And so theta hat MAP is actually just x. And if you think about it, this is kind of an optimistic estimate, because if Juliet were 30 minutes late then you assume that her delay is uniformly distributed between 0 and 30 minutes. Well in fact, even though she arrived 30 minutes late, that could have been because her delay is actually distributed between 0 and 1 hour and you just happened to get 30 minutes. But what you do is you always take kind of the optimistic view, and just give her the benefit of the doubt, and say that was actually kind of the worst case scenario given her distribution. So another way to take this entire posterior distribution and come up with just a single number, a point estimate, is to take the conditional expectation. So you have an entire distribution. So there's two obvious ways of getting a number out of this. One is to take the maximum and the other is to take the expectation. So take everything in the distribution, combine it and come up with an estimate. So if you think about it, it will probably be something like here-- this would be the conditional expectation. So this is called the LMS estimator. And the way to calculate it is just like we said, you take the conditional expectation. So how do we take the conditional expectation? Remember it is just the value and you weight it by the correct distribution, in this case it's the conditional PDF of theta given x which is the posterior distribution. And what do we integrate theta from? Well we integrate it from x to 1. Now if we plug this in, we integrate from x to 1, theta times the posterior. The posterior we calculated earlier, it was 1 over theta times the absolute value of log x. So the thetas just cancel out, and you just have 1 over absolute value of log x. Well that doesn't depend on theta. So what you get is just 1 minus x over absolute value of log x. All right, so we can actually plot this, so we have two estimates now. One is that the estimate is just theta-- the estimate is just x. The other one is that it's 1 minus x over absolute value of log x. So we can plot this and compare the two. So here's x, and here is theta hat, theta hat of x for the two different estimates. So here's the estimate from the MAP rule, which is whatever x is, we estimate that theta is equal to x. So it just looks like this. Now if we plot this, turns out that it looks something like this.
And so whatever x is, this will tell you what the estimate, the LMS estimate of theta would be. And it turns out that it's always higher than the map estimate. So it's less optimistic. And it kind of factors in the entire distribution. So because there are several parts to this problem, we're going to take a pause for a quick break and we'll come back and finish the problem in a little bit.
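Before the break, here is a small sketch (my own, using the single-observation formulas just derived) comparing the two point estimates numerically: the MAP estimate x and the LMS estimate (1 - x) / |log x|.

```python
# Sketch: for one observed delay x in (0, 1), compare
#   MAP estimate = x                      (peak of the posterior)
#   LMS estimate = (1 - x) / |ln x|       (posterior mean)
import math

def map_estimate(x):
    return x

def lms_estimate(x):
    return (1.0 - x) / abs(math.log(x))

for x in (0.1, 0.3, 0.5, 0.9):
    print(f"x = {x:.1f}   MAP = {map_estimate(x):.3f}   LMS = {lms_estimate(x):.3f}")
# The LMS column is always the larger of the two, matching the plot described above.
```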
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
6_Discrete_Random_Variables_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK so let's start. So today, we're going to continue the subject from last time. And the subject is random variables. As we discussed, random variables basically associate numerical values with the outcomes of an experiment. And we want to learn how to manipulate them. Now to a large extent, what's going to happen, what's happening during this chapter, is that we are revisiting the same concepts we have seen in chapter one. But we're going to introduce a lot of new notation, but really dealing with the same kind of stuff. The only difference where we go beyond the new notation, the new concept in this chapter is the concept of the expectation or expected values. And we're going to learn how to manipulate expectations. So let us start with a quick review of what we discussed last time. We talked about random variables. Loosely speaking, random variables are random quantities that result from an experiment. More precisely speaking, mathematically speaking, a random variable is a function from the sample space to the real numbers. That is, you give me an outcome, and based on that outcome, I can tell you the value of the random variable. So the value of the random variable is a function of the outcome that we have. Now given a random variable, some of the numerical outcomes are more likely than others. And we want to say which ones are more likely and how likely they are. And the way we do that is by writing down the probabilities of the different possible numerical outcomes. Notice here, the notation. We use uppercase to denote the random variable. We use lowercase to denote real numbers. So the way you read this, this is the probability that the random variable, capital X, happens to take the numerical value, little x. This is a concept that's familiar from chapter one. And this is just the new notation we will be using for that concept. It's the Probability Mass Function of the random variable, capital X. So the subscript just indicates which random variable we're talking about. And it's the probability assigned to a particular outcome. And we want to assign such probabilities for all possibly numerical values. So you can think of this as being a function of little x. And it tells you how likely every little x is going to be. Now the new concept we introduced last time is the concept of the expected value for random variable, which is defined this way. You look at all the possible outcomes. And you form some kind of average of all the possible numerical values over the random variable capital X. You consider all the possible numerical values, and you form an average. In fact, it's a weighted average where, to every little x, you assign a weight equal to the probability that that particular little x is going to be realized. Now, as we discussed last time, if you have a random variable, you can take a function of a random variable. And that's going to be a new random variable. So if capital X is a random variable and g is a function, g of X is a new random variable. You do the experiment. You get an outcome. This determines the value of X. And that determines the value of g of X. So the numerical value of g of X is determined by whatever happens in the experiment. It's random. 
And that makes it a random variable. Since it's a random variable, it has an expectation of its own. So how would we calculate the expectation of g of X? You could proceed by just using the definition, which would require you to find the PMF of the random variable g of X. So find the PMF of g of X, and then apply the formula for the expected value of a random variable with known PMF. But there is also a shortcut, which is just a different way of doing the counting and the calculations, in which we do not need to find the PMF of g of X. We just work with the PMF of the original random variable. And what this is saying is that the average value of g of X is obtained as follows. You look at all the possible results, the X's, how likely they are. And when that particular X happens, this is how much you get. And so this way, you add these things up. And you get the average amount that you're going to get, the average value of g of X, where you average over the likelihoods of the different X's. Now expected values have some properties that are always true and some properties that sometimes are not true. So the property that is not always true is that this would be the same as g of the expected value of X. So in general, this is not true. You cannot interchange function and expectation, which means you cannot reason on the average, in general. But there are some exceptions. When g is a linear function, then the expected value for a linear function is the same as that same linear function of the expectation. So for linear functions, so for random variable, the expectation behaves nicely. So this is basically telling you that, if X is degrees in Celsius, alpha X plus b is degrees in Fahrenheit, you can first do the conversion to Fahrenheit and take the average. Or you can find the average temperature in Celsius, and then do the conversion to Fahrenheit. Either is valid. So the expected value tells us something about where is the center of the distribution, more specifically, the center of mass or the center of gravity of the PMF, when you plot it as a bar graph. Besides the average value, you may be interested in knowing how far will you be from the average, typically. So let's look at this quantity, X minus expected value of X. This is the distance from the average value. So for a random outcome of the experiment, this quantity in here measures how far away from the mean you happen to be. This quantity inside the brackets is a random variable. Why? Because capital X is random. And what we have here is capital X, which is random, minus a number. Remember, expected values are numbers. Now a random variable minus a number is a new random variable. It has an expectation of its own. We can use the linearity rule, expected value of something minus something else is just the difference of their expected value. So it's going to be expected value of X minus the expected value over this thing. Now this thing is a number. And the expected value of a number is just the number itself. So we get from here that this is expected value minus expected value. And we get zero. What is this telling us? That, on the average, the assigned difference from the mean is equal to zero. That is, the mean is here. Sometimes X will fall to the right. Sometimes X will fall to the left. On the average, the average distance from the mean is going to be zero, because sometimes the realized distance will be positive, sometimes it will be negative. Positives and negatives cancel out. 
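A tiny sketch of these three facts with a made-up PMF (the numbers are arbitrary, not from the lecture): the expected value rule for E[g(X)], linearity for a linear function of X, and the fact that the deviation X minus E[X] averages out to zero.

```python
# Sketch with an arbitrary PMF: the expected value rule, linearity, and E[X - E[X]] = 0.
pmf = {1: 0.2, 2: 0.5, 3: 0.3}                      # made-up PMF of X

def expect(g):
    return sum(g(x) * p for x, p in pmf.items())

mean_x = expect(lambda x: x)
print(expect(lambda x: 2 * x + 5), 2 * mean_x + 5)  # linear g: the two numbers agree
print(expect(lambda x: x ** 2), mean_x ** 2)        # nonlinear g: the two numbers differ
print(expect(lambda x: x - mean_x))                 # average deviation from the mean ~ 0
```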
So if we want to capture the idea of how far are we from the mean, just looking at the assigned distance from the mean is not going to give us any useful information. So if we want to say something about how far we are, typically, we should do something different. One possibility might be to take the absolute values of the differences. And that's a quantity that sometimes people are interested in. But it turns out that a more useful quantity happens to be the variance of a random variable, which actually measures the average squared distance from the mean. So you have a random outcome, random results, random numerical value of the random variable. It is a certain distance away from the mean. That certain distance is random. We take the square of that. This is the squared distance from the mean, which is again random. Since it's random, it has an expected value of its own. And that expected value, we call it the variance of X. And so we have this particular definition. Using the rule that we have up here for how to calculate expectations of functions of a random variable, why does that apply? Well, what we have inside the brackets here is a function of the random variable, capital X. So we can apply this rule where g is this particular function. And we can use that to calculate the variance, starting with the PMF of the random variable X. And then we have a useful formula that's a nice shortcut, sometimes, if you want to do the calculation. Now one thing that's slightly wrong with the variance is that the units are not right, if you want to talk about the spread a of a distribution. Suppose that X is a random variable measured in meters. The variance will have the units of meters squared. So it's a kind of a different thing. If you want to talk about the spread of the distribution using the same units as you have for X, it's convenient to take the square root of the variance. And that's something that we define. And we call it to the standard deviation of X, or the standard deviation of the distribution of X. So it tells you the amount of spread in your distribution. And it is in the same units as the random variable itself that you are dealing with. And we can just illustrate those quantities with an example that's about as simple as it can be. So consider the following experiment. You're going to go from here to New York, let's say, 200 miles. And you have two alternatives. Either you'll get your private plane and go at a speed of 200 miles per hour, constant speed during your trip, or otherwise, you'll decide to walk really, really slowly, at the leisurely pace of one mile per hour. So you pick the speed at random by doing this experiment, by flipping a coin. And with probability one-half, you do one thing. With probably one-half, you do the other thing. So your V is a random variable. In case you're interested in how much time it's going to take you to get there, well, time is equal to distance divided by speed. So that's the formula. The time itself is a random variable, because it's a function of V, which is random. How much time it's going to take you depends on the coin flip that you do in the beginning to decide what speed you are going to have. OK, just as a warm up, the trivial calculations. To find the expected value of V, you argue as follows. With probability one-half, V is going to be one. And with probability one-half, V is going to be 200. And so the expected value of your speed is 100.5. If you wish to calculate the variance of V, then you argue as follows. 
With probability one-half, I'm going to travel at the speed of one, whereas, the mean is 100.5. So this is the distance from the mean, if I decide to travel at the speed of one. We take that distance from the mean squared. That's one contribution to the variance. And with probability one-half, you're going to travel at the speed of 200, which is this much away from the mean. You take the square of that. OK, so approximately how big is this number? Well, this is roughly 100 squared. That's also 100 squared. So approximately, the variance of this random variable is 100 squared. Now if I tell you that the variance of this distribution is 10,000, it doesn't really help you to relate it to this diagram. Whereas, the standard deviation, where you take the square root, is more interesting. It's the square root of 100 squared, which is a 100. And the standard deviation, indeed, gives us a sense of how spread out this distribution is from the mean. So the standard deviation basically gives us some indication about this spacing that we have here. It tells us the amount of spread in our distribution. OK, now let's look at what happens to time. V is a random variable. T is a random variable. So now let's look at the expected values and all of that for the time. OK, so the time is a function of a random variable. We can find the expected time by looking at all possible outcomes of the experiment, the V's, weigh them according to their probabilities, and for each particular V, keep track of how much time it took us. So if V is one, which happens with probability one-half, the time it takes is going to be 200. If we travel at speed of one, it takes us 200 time units. And otherwise, if our speed is equal to 200, the time is one. So the expected value of T is once more the same as before. It's 100.5. So the expected speed is 100.5. The expected time is also 100.5. So the product of these expectations is something like 10,000. How about the expected value of the product of T and V? Well, T times V is 200. No matter what outcome you have in the experiment, in that particular outcome, T times V is total distance traveled, which is exactly 200. And so what do we get in this simple example is that the expected value of the product of these two random variables is different than the product of their expected values. This is one more instance of where we cannot reason on the average. So on the average, over a large number of trips, your average time would be 100. On the average, over a large number of trips, your average speed would be 100. But your average distance traveled is not 100 times 100. It's something else. So you cannot reason on the average, whenever you're dealing with non-linear things. And the non-linear thing here is that you have a function which is a product of stuff, as opposed to just linear sums of stuff. Another way to look at what's happening here is the expected value of the time. Time, by definition, is 200 over the speed. Expected value of the time, we found it to be about a 100. And so expected value of 200 over V is about a 100. But it's different from this quantity here, which is roughly equal to 2, and so 200. Expected value of V is about 100. So this quantity is about equal to two. Whereas, this quantity up here is about 100. So what do we have here? We have a non-linear function of V. And we find that the expected value of this function is not the same thing as the function of the expected value. 
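Here is a quick sketch of the whole trip example in code (the two speeds and the half-half probabilities are exactly the ones above); it reproduces the numbers quoted in the lecture and shows both failures of reasoning on the average.

```python
# Sketch of the trip example: V = 1 or 200 mph with probability 1/2 each, and T = 200 / V.
pmf_v = {1: 0.5, 200: 0.5}

e_v = sum(v * p for v, p in pmf_v.items())                    # 100.5
var_v = sum((v - e_v) ** 2 * p for v, p in pmf_v.items())     # 9900.25, roughly 100 squared
std_v = var_v ** 0.5                                          # 99.5

e_t = sum((200 / v) * p for v, p in pmf_v.items())            # 100.5
e_tv = sum((200 / v) * v * p for v, p in pmf_v.items())       # exactly 200

print(e_v, var_v, std_v)
print(e_tv, e_t * e_v)        # 200 versus ~10100: E[TV] differs from E[T] E[V]
print(e_t, 200 / e_v)         # ~100.5 versus ~1.99: E[200/V] differs from 200 / E[V]
```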
So again, that's an instance where you cannot interchange expected values and functions. And that's because things are non-linear. OK, so now let us introduce a new concept. Or maybe it's not quite a new concept. So we discussed, in chapter one, that we have probabilities. We also have conditional probabilities. What's the difference between them? Essentially, none. Probabilities are just an assignment of probability values to give different outcomes, given a particular model. Somebody comes and gives you new information. So you come up with a new model. And you have a new probabilities. We call these conditional probabilities, but they taste and behave exactly the same as ordinary probabilities. So since we can have conditional probabilities, why not have conditional PMFs as well, since PMFs deal with probabilities anyway. So we have a random variable, capital X. It has a PMF of its own. For example, it could be the PMF in this picture, which is a uniform PMF that takes for possible different values. And we also have an event. And somebody comes and tells us that this event has occurred. The PMF tells you the probability that capital X equals to some little x. Somebody tells you that a certain event has occurred that's going to make you change the probabilities that you assign to the different values. You are going to use conditional probabilities. So this part, it's clear what it means from chapter one. And this part is just the new notation we're using in this chapter to talk about conditional probabilities. So this is just a definition. So the conditional PMF is an ordinary PMF. But it's the PMF that applies to a new model in which we have been given some information about the outcome of the experiment. So to make it concrete, consider this event here. Take the event that capital X is bigger than or equal to two. In the picture, what is the event A? The event A consists of these three outcomes. OK, what is the conditional PMF, given that we are told that event A has occurred? Given that the event A has occurred, it basically tells us that this outcome has not occurred. There's only three possible outcomes now. In the new universe, in the new model where we condition on A, there's only three possible outcomes. Those three possible outcomes were equally likely when we started. So in the conditional universe, they will remain equally likely. Remember, whenever you condition, the relative likelihoods remain the same. They keep the same proportions. They just need to be re-scaled, so that they add up to one. So each one of these will have the same probability. Now in the new world, probabilities need to add up to 1. So each one of them is going to get a probability of 1/3 in the conditional universe. So this is our conditional model. So our PMF is equal to 1/3 for X equals to 2, 3 and 4. All right. Now whenever you have a probabilistic model involving a random variable and you have a PMF for that random variable, you can talk about the expected value of that random variable. We defined expected values just a few minutes ago. Here, we're dealing with a conditional model and conditional probabilities. And so we can also talk about the expected value of the random variable X in this new universe, in this new conditional model that we're dealing with. And this leads us to the definition of the notion of a conditional expectation. The conditional expectation is nothing but an ordinary expectation, except that you don't use the original PMF. You use the conditional PMF. 
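As a concrete companion to this definition, here is a short sketch (my own) of the uniform-on-{1, 2, 3, 4} example: condition on the event A that X is at least 2, renormalize, and take the expectation in the conditional model.

```python
# Sketch: uniform PMF on {1, 2, 3, 4}, conditioned on the event A = {X >= 2}.
pmf = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

event = [x for x in pmf if x >= 2]
p_a = sum(pmf[x] for x in event)

cond_pmf = {x: pmf[x] / p_a for x in event}          # {2: 1/3, 3: 1/3, 4: 1/3}
cond_mean = sum(x * p for x, p in cond_pmf.items())  # conditional expectation = 3

print(cond_pmf, cond_mean)
```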
You use the conditional probabilities. It's just an ordinary expectation, but applied to the new model that we have to the conditional universe where we are told that the certain event has occurred. So we can now calculate the condition expectation, which, in this particular example, would be 1/3. That's the probability of a 2, plus 1/3 which is the probability of a 3 plus 1/3, the probability of a 4. And then you can use your calculator to find the answer, or you can just argue by symmetry. The expected value has to be the center of gravity of the PMF we're working with, which is equal to 3. So conditional expectations are no different from ordinary expectations. They're just ordinary expectations applied to a new type of situation or a new type of model. Anything we might know about expectations will remain valid about conditional expectations. So for example, the conditional expectation of a linear function of a random variable is going to be the linear function of the conditional expectations. Or you can take any formula that you might know, such as the formula that expected value of X is equal to the-- sorry-- expected value of g of X is the sum over all X's of g of X times the PMF of X. So this is the formula that we already know about how to calculate expectations of a function of a random variable. If we move to the conditional universe, what changes? In the conditional universe, we're talking about the conditional expectation, given that event A has occurred. And we use the conditional probabilities, given that A has occurred. So any formula has a conditional counterpart. In the conditional counterparts, expectations get replaced by conditional expectations. And probabilities get replaced by conditional probabilities. So once you know the first formula and you know the general idea, there's absolutely no reason for you to memorize a formula like this one. You shouldn't even have to write it on your cheat sheet for the exam, OK? OK, all right, so now let's look at an example of a random variable that we've seen before, the geometric random variable, and this time do something a little more interesting with it. Do you remember from last time what the geometric random variable is? We do coin flips. Each time there's a probability of P of obtaining heads. And we're interested in the number of tosses we're going to need until we observe heads for the first time. The probability that the random variable takes the value K, this is the probability that the first K appeared at the K-th toss. So this is the probability of K minus 1 consecutive tails followed by a head. So this is the probability of having to weight K tosses. And when we plot this PMF, it has this kind of shape, which is the shape of a geometric progression. It starts at 1, and it goes all the way to infinity. So this is a discrete random variable that takes values over an infinite set, the set of the positive integers. So it's a random variable, therefore, it has an expectation. And the expected value is, by definition, we'll consider all possible values of the random variable. And we weigh them according to their probabilities, which leads us to this expression. You may have evaluated that expression some time in your previous life. And there are tricks for how to evaluate this and get a closed-form answer. But it's sort of an algebraic trick. You might not remember it. How do we go about doing this summation? 
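Before the probabilistic trick that follows, here is a brute-force numerical check (my own sketch, with an arbitrary value of p) that this infinite sum does converge to the 1/p answer derived later in the lecture.

```python
# Sketch: geometric PMF p_X(k) = (1-p)^(k-1) * p, truncated at a large K,
# as a numerical check that sum_k k * p_X(k) comes out to 1/p.
p = 0.3
pmf = [((1 - p) ** (k - 1)) * p for k in range(1, 2001)]

print(sum(pmf))                                    # ~ 1: the PMF sums to 1
print(sum(k * q for k, q in enumerate(pmf, 1)))    # ~ 1/p = 3.333...
```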
Well, we're going to use a probabilistic trick and manage to evaluate the expectation of X, essentially, without doing any algebra. And in the process of doing so, we're going to get some intuition about what happens in coin tosses and with geometric random variables. So we have two people who are going to do the same experiment, flip a coin until they obtain heads for the first time. One of these people is going to use the letter Y to count how many heads it took. So that person starts flipping right now. This is the current time. And they are going to obtain tails, tails, tails, until eventually they obtain heads. And this random variable Y is, of course, geometric, so it has a PMF of this form. OK, now there is a second person who is doing that same experiment. That second person is going to take, again, a random number, X, until they obtain heads for the first time. And of course, X is going to have the same PMF as Y. But that person was impatient. And they actually started flipping earlier, before the Y person started flipping. They flipped the coin twice. And they were unlucky, and they obtained tails both times. And so they have to continue. Looking at the situation at this time, how do these two people compare? Who do you think is going to obtain heads first? Is one more likely than the other? So if you play at the casino a lot, you'll say, oh, there were two tails in a row, so a head should be coming up sometime soon. But this is a wrong argument, because coin flips, at least in our model, are independent. The fact that these two happened to be tails doesn't change anything about our beliefs about what's going to be happening here. So what's going to be happening to that person is they will be flipping independent coin flips. That person will also be flipping independent coin flips. And both of them wait until the first head occurs. They're facing an identical situation, starting from this time. OK, now what's the probabilistic model of what this person is facing? The time until that person obtains heads for the first time is X. So this number of flips until they obtain heads for the first time is going to be X minus 2. So X is the total number until the first head. X minus 2 is the number or flips, starting from here. Now what information do we have about that person? We have the information that their first two flips were tails. So we're given the information that X was bigger than 2. So the probabilistic model that describes this piece of the experiment is that it's going to take a random number of flips until the first head. That number of flips, starting from here until the next head, is that number X minus 2. But we're given the information that this person has already wasted 2 coin flips. Now we argued that probabilistically, this person, this part of the experiment here is identical with that part of the experiment. So the PMF of this random variable, which is X minus 2, conditioned on this information, should be the same as that PMF that we have down there. So the formal statement that I'm making is that this PMF here of X minus 2, given that X is bigger than 2, is the same as the PMF of X itself. What is this saying? Given that I tell you that you already did a few flips and they were failures, the remaining number of flips until the first head has the same geometric distribution as if you were starting from scratch. Whatever happened in the past, it happened, but has no bearing what's going to happen in the future. 
Remaining coin flips until a head has the same distribution, whether you're starting right now, or whether you had done some other stuff in the past. So this is a property that we call the memorylessness property of the geometric distribution. Essentially, it says that whatever happens in the future is independent from whatever happened in the past. And that's true almost by definition, because we're assuming independent coin flips. Really, independence means that information about one part of the experiment has no bearing on what's going to happen in the other parts of the experiment. The argument that I tried to give using the intuition of coin flips, you can make it formal by just manipulating PMFs formally. So this is the original PMF of X. Suppose that you condition on the event that X is bigger than 2. This conditioning information, what it does is it tells you that this piece did not happen. You're conditioning just on this event. When you condition on that event, what's left is the conditional PMF, which has the same shape as this one, except that it needs to be re-normalized up, so that the probabilities add up to one. So you take that picture, but you need to change the height of it, so that these terms add up to 1. And this is the conditional PMF of X, given that X is bigger than 2. But we're talking here not about X. We're talking about the remaining number of tosses. The remaining number of tosses is X minus 2. If we have the PMF of X, can we find the PMF of X minus 2? Well, if X is equal to 3, that corresponds to X minus 2 being equal to 1. So this probability here should be equal to that probability. The probability that X is equal to 4 should be the same as the probability that X minus 2 is equal to 2. So basically, the PMF of X minus 2 is the same as the PMF of X, except that it gets shifted by these 2 units. So this way, we have formally derived the conditional PMF of the remaining number of coin tosses, given that the first two flips were tails. And we see that it's exactly the same as the PMF that we started with. And so this is the formal proof of this statement here. So it's useful here to digest both these formal statements and understand them and understand the notation that is involved here, but also to really appreciate the intuitive argument and what this is really saying. OK, all right, so now we want to use this observation, this memorylessness, to eventually calculate the expected value for a geometric random variable. And the way we're going to do it is by using a divide and conquer tool, which is an analog of what we have already seen sometime before. Remember our story that there's a number of possible scenarios about the world? And there's a certain event, B, that can happen under any of these possible scenarios. And we have the total probability theorem. And that tells us that, to find the probability of this event, B, you consider the probabilities of B under each scenario. And you weigh those probabilities according to the probabilities of the different scenarios that we have. So that's a formula that we already know and have worked with. What's the next step? Is it something deep? No, it's just a translation into different notation. This is exactly the same formula, but with PMFs. The event that capital X is equal to little x can happen in many different ways. It can happen under either scenario. And within each scenario, you need to use the conditional probabilities of that event, given that this scenario has occurred.
So this formula is identical to that one, except that we're using conditional PMFs, instead of conditional probabilities. But conditional PMFs, of course, are nothing but conditional probabilities anyway. So nothing new so far. Then what I do is to take this formula here and multiply both sides by X and take the sum over all X's. What do we get on this side? We get the expected value of X. What do we get on that side? Probability of A1. And then here, sum over all X's of X times P. That's, again, the same calculation we have when we deal with expectations, except that, since here, we're dealing with conditional probabilities, we're going to get the conditional expectation. And this is the total expectation theorem. It's a very useful way for calculating expectations using a divide and conquer method. We figure out the average value of X under each one of the possible scenarios. The overall average value of X is a weighted linear combination of the expected values of X in the different scenarios where the weights are chosen according to the different probabilities. OK, and now we're going to apply this to the case of a geometric random variable. And we're going to divide and conquer by considering separately the two cases where the first toss was heads, and the other case where the first toss was tails. So the expected value of X is the probability that the first toss was heads, so that X is equal to 1, and the expected value if that happened. What is the expected value of X, given that X is equal to 1? If X is known to be equal to 1, then X becomes just a number. And the expected value of a number is the number itself. So this first line here is the probability of heads in the first toss times the number 1. So the probability that X is bigger than 1 is 1 minus P. And then we need to do something about this conditional expectation. What is it? I can write it in, perhaps, a more suggested form, as expected the value of X minus 1, given that X minus 1 is bigger than 1. Ah. OK, X bigger than 1 is the same as X minus 1 being positive, this way. X minus 1 is positive plus 1. What did I do here? I added and subtracted 1. Now what is this? This is the expected value of the remaining coin flips, until I obtain heads, given that the first one was tails. It's the same story that we were going through down there. Given that the first coin flip was tails doesn't tell me anything about the future, about the remaining coin flips. So this expectation should be the same as the expectation faced by a person who was starting just now. So this should be equal to the expected value of X itself. And then we have the plus 1 that's come from there, OK? Remaining coin flips until a head, given that I had a tail yesterday, is the same as expected number of flips until heads for a person just starting now and wasn't doing anything yesterday. So the fact that they I had a coin flip yesterday doesn't change my beliefs about how long it's going to take me until the first head. So once we believe that relation, than we plug this here. And this red term becomes expected value of X plus 1. So now we didn't exactly get the answer we wanted, but we got an equation that involves the expected value of X. And it's the only unknown in that equation. Expected value of X equals to P plus (1 minus P) times this expression. You solve this equation for expected value of X, and you get the value of 1/P. The final answer does make intuitive sense. If P is small, heads are difficult to obtain. 
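A simulation sketch (mine, with an arbitrary p) that ties the two pieces together: it checks the key memorylessness step used above, that the remaining flips given a wasted first flip behave like a fresh X, and it confirms the 1/p answer.

```python
# Sketch: simulate geometric random variables and check both the memorylessness step
# used above (E[X - 1 | X > 1] = E[X]) and the final answer E[X] = 1/p.
import numpy as np

rng = np.random.default_rng(1)
p = 0.25
x = rng.geometric(p, size=1_000_000)      # number of flips until the first head

remaining = x[x > 1] - 1                  # flips still needed, given the first was tails
print(x.mean(), remaining.mean(), 1 / p)  # all three come out around 4
```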
So you expect that it's going to take you a long time until you see heads for the first time. So it is definitely a reasonable answer. Now the trick that we used here, the divide and conquer trick, is a really nice one. It gives us a very good shortcut in this problem. But you must definitely spend some time making sure you understand why this expression here is the same as that expression there. Essentially, what it's saying is that, if I tell you that X is bigger than 1, that the first coin flip was tails, all I'm telling you is that that person has wasted a coin flip, and they are starting all over again. So they've wasted 1 coin flip. And they're starting all over again. If I tell you that the first flip was tails, that's the only information that I'm basically giving you, a wasted flip, and then starts all over again. All right, so in the few remaining minutes now, we're going to quickly introduce a few new concepts that we will be playing with in the next ten days or so. And you will get plenty of opportunities to manipulate them. So here's the idea. A typical experiment may have several random variables associated with that experiment. So a typical student has height and weight. If I give you the PMF of height, that tells me something about distribution of heights in the class. I give you the PMF of weight, it tells me something about the different weights in this class. But if I want to ask a question, is there an association between height and weight, then I need to know a little more how height and weight relate to each other. And the PMF of height individuality and PMF of weight just by itself do not tell me anything about those relations. To be able to say something about those relations, I need to know something about joint probabilities, how likely is it that certain X's go together with certain Y's. So these probabilities, essentially, capture associations between these two random variables. And it's the information I would need to have to do any kind of statistical study that tries to relate the two random variables with each other. These are ordinary probabilities. This is an event. It's the event that this thing happens and that thing happens. This is just the notation that we will be using. It's called the joint PMF. It's the joint Probability Mass Function of the two random variables X and Y looked at together, jointly. And it gives me the probability that any particular numerical outcome pair does happen. So in the finite case, you can represent joint PMFs, for example, by a table. This particular table here would give you information such as, let's see, the joint PMF evaluated at 2, 3. This is the probability that X is equal to 3 and, simultaneously, Y is equal to 3. So it would be that number here. It's 4/20. OK, what is a basic property of PMFs? First, these are probabilities, so all of the entries have to be non-negative. If you adopt the probabilities over all possible numerical pairs that you could get, of course, the total probability must be equal to 1. So that's another thing that we want. Now suppose somebody gives me this model, but I don't care about Y's. All I care is the distribution of the X's. So I'm going to find the probability that X takes on a particular value. Can I find it from the table? Of course, I can. If you ask me what's the probability that X is equal to 3, what I'm going to do is to add up those three probabilities together. And those probabilities, taken all together, give me the probability that X is equal to 3. 
These are all the possible ways that the event X equals to 3 can happen. So we add these, and we get the 6/20. What I just did, can we translate it to a formula? What did I do? I fixed the particular value of X. And I added up the values of the joint PMF over all the possible values of Y. So that's how you do it. You take the joint. You take one slice of the joint, keeping X fixed, and adding up over the different values of Y. The moral of this example is that, if you know the joint PMFs, then you can find the individual PMFs of every individual random variable. And we have a name for these. We call them the marginal PMFs. We have the joint that talks about both together, and the marginal that talks about them one at the time. And finally, since we love conditional probabilities, we will certainly want to define an object called the conditional PMF. So this quantity here is a familiar one. It's just a conditional probability. It's the probability that X takes on a particular value, given that Y takes a certain value. For our example, let's take little y to be equal to 2, which means that we're conditioning to live inside this universe. This red universe here is the y equal to 2 universe. And these are the conditional probabilities of the different X's inside that universe. OK, once more, just an exercise in notation. This is the chapter two version of the notation of what we were denoting this way in chapter one. The way to read this is that it's a conditional PMF having to do with two random variables, the PMF of X conditioned on information about Y. We are fixing a particular value of capital Y, that's the value on which we are conditioning. And we're looking at the probabilities of the different X's. So it's really a function of two arguments, little x and little y. But the best way to think about it is to fix little y and think of it as a function of X. So I'm fixing little y here, let's say, to y equal to 2. So I'm considering only this. And now, this quantity becomes a function of little x. For the different little x's, we're going to have different conditional probabilities. What are those conditional probabilities? OK, conditional probabilities are proportional to original probabilities. So it's going to be those numbers, but scaled up. And they need to be scaled so that they add up to 1. So we have 1, 3 and 1. That's a total of 5. So the conditional PMF would have the shape zero, 1/5, 3/5, and 1/5. This is the conditional PMF, given a particular value of Y. It has the same shape as those numbers, where by shape, I mean try to visualize a bar graph. The bar graph associated with those numbers has exactly the same shape as the bar graph associated with those numbers. The only thing that has changed is the scaling. Big moral, let me say in different words, the conditional PMF, given a particular value of Y, is just a slice of the joint PMF where you maintain the same shape, but you rescale the numbers so that they add up to 1. Now mathematically, of course, what all of this is doing is it's taking the original joint PDF and it rescales it by a certain factor. This does not involve X, so the shape, is a function of X, has not changed. We're keeping the same shape as a function of X, but we divide by a certain number. And that's the number that we need, so that the conditional probabilities add up to 1. Now where does this formula come from? Well, this is just the definition of conditional probabilities. 
Probability of something conditioned on something else is the probability of both things happening, the intersection of the two divided by the probability of the conditioning event. And last remark is that, as I just said, conditional probabilities are nothing different than ordinary probabilities. So a conditional PMF must sum to 1, no matter what you are conditioning on. All right, so this was sort of quick introduction into our new notation. But you get a lot of practice in the next days to come.
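To make the mechanics concrete, here is a sketch with a small made-up joint PMF table (the actual table from the slide is not reproduced here, but the y = 2 slice is chosen so the conditional PMF comes out 0, 1/5, 3/5, 1/5 as in the example): a marginal is computed by summing over y, and a conditional PMF by slicing and rescaling.

```python
# Sketch: a hypothetical joint PMF p_{X,Y}(x, y), given in twentieths; the y = 2 column
# is picked to match the 0, 1/5, 3/5, 1/5 conditional slice discussed above.
joint = {
    (1, 1): 1/20, (1, 2): 0/20,
    (2, 1): 2/20, (2, 2): 1/20,
    (3, 1): 2/20, (3, 2): 3/20,
    (4, 1): 10/20, (4, 2): 1/20,
}

# Marginal of X: for each x, add the joint PMF over all y.
marginal_x = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0) + p

# Conditional PMF of X given Y = 2: take the y = 2 slice and rescale so it sums to 1.
p_y2 = sum(p for (x, y), p in joint.items() if y == 2)
cond_x_given_y2 = {x: p / p_y2 for (x, y), p in joint.items() if y == 2}

print(marginal_x)
print(cond_x_given_y2)             # {1: 0.0, 2: 0.2, 3: 0.6, 4: 0.2}
```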
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
19_Weak_Law_of_Large_Numbers.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN TSITSIKLIS: We're going to start today a new unit. so we will be talking about limit theorems. So just to introduce the topic, let's think of the following situation. There's a population of penguins down at the South Pole. And if you were to pick a penguin at random and measure their height, the expected value of their height would be the average of the heights of the different penguins in the population. So suppose when you pick one, every penguin is equally likely. Then the expected value is just the average of all the penguins out there. So your boss asks you to find out what that the expected value is. One way would be to go and measure each and every penguin. That might be a little time consuming. So alternatively, what you can do is to go and pick penguins at random, pick a few of them, let's say a number n of them. So you measure the height of each one. And then you calculate the average of the heights of those penguins that you have collected. So this is your estimate of the expected value. Now, we called this the sample mean, which is the mean value, but within the sample that you have collected. This is something that's sort of feels the same as the expected value, which is again, the mean. But the expected value's a different kind of mean. The expected value is the mean over the entire population, whereas the sample mean is the average over the smaller sample that you have measured. The expected value is a number. The sample mean is a random variable. It's a random variable because the sample you have collected is random. Now, we think that this is a reasonable way of estimating the expectation. So in the limit as n goes to infinity, it's plausible that the sample mean, the estimate that we are constructing, should somehow get close to the expected value. What does this mean? What does it mean to get close? In what sense? And is this statement true? This is the kind of statement that we deal with when dealing with limit theorems. That's the subject of limit theorems, when what happens if you're dealing with lots and lots of random variables, and perhaps take averages and so on. So why do we bother about this? Well, if you're in the sampling business, it would be reassuring to know that this particular way of estimating the expected value actually gets you close to the true answer. There's also a higher level reason, which is a little more abstract and mathematical. So probability problems are easy to deal with if you're having in your hands one or two random variables. You can write down their mass functions, joints density functions, and so on. You can calculate on paper or on a computer, you can get the answers. Probability problems become computationally intractable if you're dealing, let's say, with 100 random variables and you're trying to get the exact answers for anything. So in principle, the same formulas that we have, they still apply. But they involve summations over large ranges of combinations of indices. And that makes life extremely difficult. But when you push the envelope and you go to a situation where you're dealing with a very, very large number of variables, then you can start taking limits. And when you take limits, wonderful things happen. 
Many formulas start simplifying, and you can actually get useful answers by considering those limits. And that's sort of the big reason why looking at limit theorems is a useful thing to do. So what we're going to do today, first we're going to start with a useful, simple tool that allows us to relates probabilities with expected values. The Markov inequality is the first inequality we're going to write down. And then using that, we're going to get the Chebyshev's inequality, a related inequality. Then we need to define what do we mean by convergence when we talk about random variables. It's a notion that's a generalization of the notion of the usual convergence of limits of a sequence of numbers. And once we have our notion of convergence, we're going to see that, indeed, the sample mean converges to the true mean, converges to the expected value of the X's. And this statement is called the weak law of large numbers. The reason it's called the weak law is because there's also a strong law, which is a statement with the same flavor, but with a somewhat different mathematical content. But it's a little more abstract, and we will not be getting into this. So the weak law is all that you're going to get. All right. So now we start our digression. And our first tool will be the so-called Markov inequality. So let's take a random variable that's always non-negative. No matter what, it gets no negative values. To keep things simple, let's assume it's a discrete random variable. So the expected value is the sum over all possible values that a random variable can take. The values of the random variables that can take weighted according to their corresponding probabilities. Now, this is a sum over all x's. But x takes non-negative values. And the PMF is also non-negative. So if I take a sum over fewer things, I'm going to get a smaller value. So the sum when I add over everything is less than or equal to the sum that I will get if I only add those terms that are bigger than a certain constant. Now, if I'm adding over x's that are bigger than a, the x that shows up up there will always be larger than or equal to a. So we get this inequality. And now, a is a constant. I can pull it outside the summation. And then I'm left with the probabilities of all the x's that are bigger than a. And that's just the probability of being bigger than a. OK, so that's the Markov inequality. Basically tells us that the expected value is larger than or equal to this number. It relates expected values to probabilities. It tells us that if the expected value is small, then the probability that x is big is also going to be small. So it's translates a statement about smallness of expected values to a statement about smallness of probabilities. OK. What we actually need is a somewhat different version of this same statement. And what we're going to do is to apply this inequality to a non-negative random variable of a special type. And you can think of applying this same calculation to a random variable of this form, (X minus mu)-squared, where mu is the expected value of X. Now, this is a non-negative random variable. So, the expected value of this random variable, which is the variance, by following the same thinking as we had in that derivation up to there, is bigger than the probability that this random variable is bigger than some-- let me use a-squared instead of an a times the value a-squared. 
So now of course, this probability is the same as the probability that the absolute value of X minus mu is bigger than a times a-squared. And this side is equal to the variance of X. So this relates the variance of X to the probability that our random variable is far away from its mean. If the variance is small, then it means that the probability of being far away from the mean is also small. So I derived this by applying the Markov inequality to this particular non-negative random variable. Or just to reinforce, perhaps, the message, and increase your confidence in this inequality, let's just look at the derivation once more, where I'm going, here, to start from first principles, but use the same idea as the one that was used in the proof out here. Ok. So just for variety, now let's think of X as being a continuous random variable. The derivation is the same whether it's discrete or continuous. So by definition, the variance is the integral, is this particular integral. Now, the integral is going to become smaller if I integrate, instead of integrating over the full range, I only integrate over x's that are far away from the mean. So mu is the mean. Think of c as some big number. These are x's that are far away from the mean to the left, from minus infinity to mu minus c. And these are the x's that are far away from the mean on the positive side. So by integrating over fewer stuff, I'm getting a smaller integral. Now, for any x in this range, this distance, x minus mu, is at least c. So that squared is at least c squared. So this term over this range of integration is at least c squared. So I can take it outside the integral. And I'm left just with the integral of the density. Same thing on the other side. And so what factors out is this term c squared. And inside, we're left with the probability of being to the left of mu minus c, and then the probability of being to the right of mu plus c, which is the same as the probability that the absolute value of the distance from the mean is larger than or equal to c. So that's the same inequality that we proved there, except that here I'm using c. There I used a, but it's exactly the same one. This inequality was maybe better to understand if you take that term and send it to the other side and write it this form. What does it tell us? It tells us that if c is a big number, it tells us that the probability of being more than c away from the mean is going to be a small number. When c is big, this is small. Now, this is intuitive. The variance is a measure of the spread of the distribution, how wide it is. It tells us that if the variance is small, the distribution is not very wide. And mathematically, this translates to this statement that when the variance is small, the probability of being far away is going to be small. And the further away you're looking, that is, if c is a bigger number, that probability also becomes small. Maybe an even more intuitive way to think about the content of this inequality is to, instead of c, use the number k, where k is positive and sigma is the standard deviation. So let's just plug k sigma in the place of c. So this becomes k sigma squared. These sigma squared's cancel. We're left with 1 over k-square. Now, what is this? This is the event that you are k standard deviations away from the mean. So for example, this statement here tells you that if you look at the test scores from a quiz, what fraction of the class are 3 standard deviations away from the mean? 
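Here is a simulation sketch (my own, with arbitrarily chosen distributions) of both inequalities: the Markov bound, that P(X >= a) is at most E[X]/a for a nonnegative X, and the Chebyshev bound in its k-standard-deviations form, that P(|X - mu| >= k sigma) is at most 1 over k squared.

```python
# Sketch checking the Markov and Chebyshev inequalities on simulated data.
import numpy as np

rng = np.random.default_rng(2)

# Markov: nonnegative X (here exponential with mean 2), P(X >= a) <= E[X] / a.
x = rng.exponential(scale=2.0, size=1_000_000)
for a in (4.0, 10.0):
    print("Markov   ", a, (x >= a).mean(), x.mean() / a)

# Chebyshev: any X with finite variance, P(|X - mu| >= k * sigma) <= 1 / k^2.
y = rng.normal(loc=5.0, scale=2.0, size=1_000_000)
mu, sigma = y.mean(), y.std()
for k in (2, 3):
    print("Chebyshev", k, (np.abs(y - mu) >= k * sigma).mean(), 1 / k ** 2)
```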
It's possible, but it's not going to be a lot of people. It's going to be at most, 1/9 of the class that can be 3 standard deviations or more away from the mean. So the Chebyshev inequality is a really useful one. It comes in handy whenever you want to relate probabilities and expected values. So if you know that your expected values or, in particular, that your variance is small, this tells you something about tailed probabilities. So this is the end of our first digression. We have this inequality in our hands. Our second digression is talk about limits. We want to eventually talk about limits of random variables, but as a warm up, we're going to start with limits of sequences. So you're given a sequence of numbers, a1, a2, a3, and so on. And we want to define the notion that a sequence converges to a number. You sort of know what this means, but let's just go through it some more. So here's a. We have our sequence of values as n increases. What do we mean by the sequence converging to a is that when you look at those values, they get closer and closer to a. So this value here is your typical a sub n. They get closer and closer to a, and they stay closer. So let's try to make that more precise. What it means is let's fix a sense of what it means to be close. Let me look at an interval that goes from a - epsilon to a + epsilon. Then if my sequence converges to a, this means that as n increases, eventually the values of the sequence that I get stay inside this band. Since they converge to a, this means that eventually they will be smaller than a + epsilon and bigger than a - epsilon. So convergence means that given a band of positive length around the number a, the values of the sequence that you get eventually get inside and stay inside that band. So that's sort of the picture definition of what convergence means. So now let's translate this into a mathematical statement. Given a band of positive length, no matter how wide that band is or how narrow it is, so for every epsilon positive, eventually the sequence gets inside the band. What does eventually mean? There exists a time, so that after that time something happens. And the something that happens is that after that time, we are inside that band. So this is a formal mathematical definition, which actually translates what I was telling in the wordy way before, and showing in terms of the picture. Given a certain band, even if it's narrow, eventually, after a certain time n0, the values of the sequence are going to stay inside this band. Now, if I were to take epsilon to be very small, this thing would still be true that eventually I'm going to get inside of the band, except that I may have to wait longer for the values to get inside here. All right, that's what it means for a deterministic sequence to converge to something. Now, how about random variables. What does it mean for a sequence of random variables to converge to a number? We're just going to twist a little bit of the word definition. For numbers, we said that eventually the numbers get inside that band. But if instead of numbers we have random variables with a certain distribution, so here instead of a_n we're dealing with a random variable that has a distribution, let's say, of this kind, what we want is that this distribution gets inside this band, so it gets concentrated inside here. What does it means that the distribution gets inside this band? I mean a random variable has a distribution. 
It may have some tails, so maybe not the entire distribution gets concentrated inside of the band. But we want that more and more of this distribution is concentrated in this band. So that -- in a sense that -- the probability of falling outside the band converges to 0 -- becomes smaller and smaller. So in words, we're going to say that the sequence random variables or a sequence of probability distributions, that would be the same, converges to a particular number a if the following is true. If I consider a small band around a, then the probability that my random variable falls outside this band, which is the area under this curve, this probability becomes smaller and smaller as n goes to infinity. The probability of being outside this band converges to 0. So that's the intuitive idea. So in the beginning, maybe our distribution is sitting everywhere. As n increases, the distribution starts to get concentrating inside the band. When a is even bigger, our distribution is even more inside that band, so that these outside probabilities become smaller and smaller. So the corresponding mathematical statement is the following. I fix a band around a, a +/- epsilon. Given that band, the probability of falling outside this band, this probability converges to 0. Or another way to say it is that the limit of this probability is equal to 0. If you were to translate this into a complete mathematical statement, you would have to write down the following messy thing. For every epsilon positive -- that's this statement -- the limit is 0. What does it mean that the limit of something is 0? We flip back to the previous slide. Why? Because a probability is a number. So here we're talking about a sequence of numbers convergent to 0. What does it mean for a sequence of numbers to converge to 0? It means that for any epsilon prime positive, there exists some n0 such that for every n bigger than n0 the following is true -- that this probability is less than or equal to epsilon prime. So the mathematical statement is a little hard to parse. For every size of that band, and then you take the definition of what it means for the limit of a sequence of numbers to converge to 0. But it's a lot easier to describe this in words and, basically, think in terms of this picture. That as n increases, the probability of falling outside those bands just become smaller and smaller. So the statement is that our distribution gets concentrated in arbitrarily narrow little bands around that particular number a. OK. So let's look at an example. Suppose a random variable Yn has a discrete distribution of this particular type. Does it converge to something? Well, the probability distribution of this random variable gets concentrated at 0 -- there's more and more probability of being at 0. If I fix a band around 0 -- so if I take the band from minus epsilon to epsilon and look at that band-- the probability of falling outside this band is 1/n. As n goes to infinity, that probability goes to 0. So in this case, we do have convergence. And Yn converges in probability to the number 0. So this just captures the facts obvious from this picture, that more and more of our probability distribution gets concentrated around 0, as n goes to infinity. Now, an interesting thing to notice is the following, that even though Yn converges to 0, if you were to write down the expected value for Yn, what would it be? It's going to be n times the probability of this value, which is 1/n. So the expected value turns out to be 1. 
And if you were to look at the expected value of Yn-squared, this would be 0 squared times this probability, plus n-squared times this probability, which is 1/n, and that is equal to n. And this actually goes to infinity. So we have this, perhaps, strange situation where a random variable goes to 0, but the expected value of this random variable does not go to 0. And the second moment of that random variable actually goes to infinity. So this tells us that convergence in probability tells you something, but it doesn't tell you the whole story. Convergence to 0 of a random variable doesn't imply anything about convergence of expected values or of variances and so on. The reason is that convergence in probability tells you that this tail probability here is very small. But it doesn't tell you how far that tail goes. As in this example, the tail probability is small, but that tail sits far away, so it gives a disproportionate contribution to the expected value or to the expected value of the square. OK. So now we've got everything that we need to go back to the sample mean and study its properties. So the setting is that we have a sequence of random variables. They're independent. They have the same distribution. And we assume that they have a finite mean and a finite variance. We're looking at the sample mean. Now in principle, you can calculate the probability distribution of the sample mean, because we know how to find the distributions of sums of independent random variables. You use the convolution formula over and over. But this is pretty complicated, so let's not look at that. Let's just look at expected values, variances, and the probabilities that the sample mean is far away from the true mean. So what is the expected value of this random variable? The expected value of a sum of random variables is the sum of the expected values. And then we have this factor of n in the denominator. Each one of these expected values is mu, so we get mu. So the sample mean, the average value of this Mn in expectation, is the same as the true mean inside our population. Now here, there is a fine conceptual point: there are two kinds of averages involved when you write down this expression. We understand that expectations are some kind of average. The sample mean is also an average over the values that we have observed. But they're two different kinds of averages. The sample mean is the average of the heights of the penguins that we collected over a single expedition. The expected value is to be thought of as follows: my probabilistic experiment is one expedition to the South Pole. Expected value here means thinking of the average over a huge number of expeditions. So my expedition is a random experiment, I collect random samples, and I record Mn. The average result of an expedition is what we would get if we were to carry out a zillion expeditions and average the averages that we get at each particular expedition. So this Mn is the average during a single expedition. This expectation is the average over an imagined infinite sequence of expeditions. And of course, the other thing to always keep in mind is that expectations give you numbers, whereas the sample mean is actually a random variable. All right. So this random variable, how random is it? How big is its variance? So the variance of a sum of random variables is the sum of the variances. But since we're dividing by n, when you calculate variances this brings in a factor of n-squared. So the variance is sigma-squared over n.
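A quick numerical check of these two facts -- a minimal Python sketch for illustration only; the exponential distribution for the measurements, the sample size, and the number of simulated "expeditions" are arbitrary choices, not from the lecture:

import random, statistics

n = 100          # sample size within one "expedition"
trials = 20000   # number of simulated expeditions
# an exponential(1) measurement has mean mu = 1 and variance sigma^2 = 1

sample_means = []
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]  # one expedition's data
    sample_means.append(sum(xs) / n)                  # the sample mean Mn

print(statistics.mean(sample_means))      # close to mu = 1
print(statistics.variance(sample_means))  # close to sigma^2 / n = 0.01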
And in particular, the variance of the sample mean becomes smaller and smaller. It means that when you estimate the average height of penguins, if you take a large sample, then your estimate is not going to be too random. The randomness in your estimate becomes small if you have a large sample size. Having a large sample size kind of removes the randomness from your experiment. Now let's apply the Chebyshev inequality to say something about tail probabilities for the sample mean. The probability that you are more than epsilon away from the true mean is less than or equal to the variance of this quantity divided by this number squared. So that's just the translation of the Chebyshev inequality to the particular context we've got here. We found the variance. It's sigma-squared over n. So we end up with this expression. So what does this expression do? For any given epsilon, if I fix epsilon, then this probability, which is less than sigma-squared over n epsilon-squared, converges to 0 as n goes to infinity. And this is just the definition of convergence in probability. If this happens, that the probability of being more than epsilon away from the mean goes to 0, and this is true no matter how I choose my epsilon, then by definition we have convergence in probability. So we have proved that the sample mean converges in probability to the true mean. And this is what the weak law of large numbers tells us. So in some vague sense, it tells us that the sample mean, when you take the average of many, many measurements in your sample, is a good estimate of the true mean, in the sense that it approaches the true mean as your sample size increases. It approaches the true mean, but of course in a very specific sense, in probability, according to this notion of convergence that we have used. So since we're talking about sampling, let's go over an example, which is the typical situation faced by someone who's constructing a poll. So you're interested in some property of the population. So what fraction of the population prefers Coke to Pepsi? So there's a number f, which is that fraction of the population. And so this is an exact number. So if, out of a population of 100 million, 20 million prefer Coke, then f would be 0.2. We want to find out what that fraction is. We cannot ask everyone. What we're going to do is to take a random sample of people and ask them for their preferences. So the ith person either says yes for Coke or no. And we record that by putting a 1 each time that we get a yes answer. And then we form the average of these x's. What is this average? It's the number of 1's that we got divided by n. So this is a fraction, but calculated only on the basis of the sample that we have. So you can think of this as being an estimate, f_hat, based on the sample that we have. Now, even though we used the lower case letter here, this f_hat is, of course, a random variable. f is a number. This is the true fraction in the overall population. f_hat is the estimate that we get by using our particular sample. OK. So your boss told you, I need to know what f is, so go and do some sampling. What are you going to respond? Unless I ask everyone in the whole population, there's no way for me to know f exactly. Right? There's no way. OK, so the boss tells you, well OK, then tell me f within some accuracy. I want an answer from you that is close to the correct answer within 1 percentage point.
So if the true f is 0.4, your answer should be somewhere between 0.39 and 0.41. I want a really accurate answer. What are you going to say? Well, there's no guarantee that my answer will be within 1 percentage point. Maybe I'm unlucky and I just happen to sample the wrong set of people, and my answer comes out to be wrong. So I cannot give you a hard guarantee that this inequality will be satisfied. But perhaps I can give you a guarantee that this inequality, this accuracy requirement, will be satisfied with high confidence. That is, there's going to be a small probability that things go wrong, that I'm unlucky and I use a bad sample. But leaving aside that small probability of being unlucky, my answer will be accurate within the accuracy requirement that you have. So these two numbers are the usual specs that one has when designing polls. So this number is the accuracy that we want. It's the desired accuracy. And this number has to do with the confidence that we want. So 1 minus that number, we could call it the confidence that we want out of our sample. So this is really 1 minus the confidence. So now your job is to figure out how large an n, how large a sample, should you be using in order to satisfy the specs that your boss gave you. All you know at this stage is the Chebyshev inequality. So you just try to use it. The probability of getting an answer that's more than 0.01 away from the true answer is, by Chebyshev's inequality, at most the variance of this random variable divided by this number squared. The variance, as we argued a little earlier, is the variance of the x's divided by n. So we get this expression. So we would like this number to be less than or equal to 0.05. OK, here we hit a little bit of a difficulty. The variance, (sigma_x)-squared, what is it? (Sigma_x)-squared, if you remember the variance of a Bernoulli random variable, is this quantity, f times (1-f). But we don't know it. f is what we're trying to estimate in the first place. So the variance is not known, so I cannot plug in a number inside here. What I can do is to be conservative and use an upper bound on the variance. How large can this number get? Well, you can plot f times (1-f). It's a parabola. It has a root at 0 and at 1. So the maximum value is going to be, by symmetry, at 1/2, and when f is 1/2, then this variance becomes 1/4. So I don't know (sigma_x)-squared, but I'm going to use the worst case value for (sigma_x)-squared, which is 1/4. And this is now an inequality that I know to be always true. I've got my specs, and my specs tell me that I want this number to be less than 0.05. And given what I know, the best thing I can do is to say, OK, I'm going to take this number and make it less than 0.05. If I choose my n so that this is less than 0.05, then I'm certain that this probability is also less than 0.05. What does it take for this inequality to be true? You can solve for n here, and you find that to satisfy this inequality, n should be larger than or equal to 50,000. So you can just let n be equal to 50,000. So the Chebyshev inequality tells us that if you take n equal to 50,000, then we're guaranteed to satisfy the specs that we were given. OK. Now, 50,000 is a bit of a large sample size. Right? If you read anything in the newspapers where they say such-and-such a fraction of the voters think this and that, this was determined on the basis of a sample of 1,200 likely voters or so.
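The arithmetic behind that 50,000 figure takes only a couple of lines -- a small sketch using the numbers from this example (the only assumption is the worst-case variance bound of 1/4 just derived):

import math

accuracy = 0.01   # want |Mn - f| < 0.01
risk = 0.05       # allowed probability of missing the accuracy target
var_bound = 0.25  # worst case of f(1-f), attained at f = 1/2

# Chebyshev: P(|Mn - f| >= accuracy) <= var_bound / (n * accuracy**2) <= risk
print(math.ceil(var_bound / (risk * accuracy**2)))  # 50000

# with +/- 3 percentage points instead (the relaxation discussed next): 5556
print(math.ceil(var_bound / (risk * 0.03**2)))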
So the numbers that you will typically see in these news items about polling usually involve sample sizes of about 1,000 or so. You will never see a sample size of 50,000. That's too much. So where can we cut some corners? Well, we can cut corners basically in three places. This requirement is a little too tight. Newspaper stories will usually tell you, we have an accuracy of +/- 3 percentage points, instead of 1 percentage point. And because this number comes in squared, making it 3 percentage points instead of 1 saves you roughly a factor of 10 -- a factor of 9, to be exact. Then, the five percent confidence, I guess that's usually OK. If we use the factor that we gain from here, then we get a sample size of around 5,500. And that's, again, a little too big. So where can we fix things? Well, it turns out that this inequality that we're using here, Chebyshev's inequality, is just an inequality. It's not that tight. It's not very accurate. Maybe there's a better way of calculating or estimating this quantity, which is smaller than this. And using a more accurate inequality or a more accurate bound, we can convince ourselves that we can settle for a smaller sample size. This more accurate kind of inequality comes out of a different limit theorem, which is the next limit theorem we're going to consider. We're going to start the discussion today, but we're going to continue with it next week. Before I tell you exactly what that other limit theorem says, let me give you the big picture of what's involved here. We're dealing with sums of i.i.d. random variables. Each X has a distribution of its own. So suppose that X has a distribution which is something like this. This is the density of X. If I add lots of X's together, what kind of distribution do I expect? The mean is going to be n times the mean of an individual X. So if this is mu, I'm going to get a mean of n times mu. But my variance will also increase. When I add the random variables, I'm adding the variances. So since the variance increases, we're going to get a distribution that's pretty wide. So this is the density of X1 plus all the way to Xn. So as n increases, my distribution shifts, because the mean is positive. So I keep adding things. And also, my distribution becomes wider and wider. The variance increases. Well, we already tried a different scaling. We looked at a scaled version of this quantity when we studied the weak law of large numbers. In the weak law of large numbers, we take this random variable and divide it by n. And what the weak law tells us is that we're going to get a distribution that's very highly concentrated around the true mean, which is mu. So this here would be the density of X1 plus ... plus Xn, divided by n. Because I've divided by n, the mean has become the original mean, which is mu. But the weak law of large numbers tells us that the distribution of this random variable is very concentrated around the mean. So we get a distribution that's very narrow, like this. In the limit, this distribution becomes one that's just concentrated on top of mu. So it's sort of a degenerate distribution. So these are two extremes: no scaling for the sum, and a scaling where we divide by n. In this extreme, we get the trivial case of a distribution that flattens out completely. In this scaling, we get a distribution that gets concentrated around a single point. So now we look at some intermediate scaling that makes things more interesting. Things do become interesting if we scale by dividing the sum by square root of n instead of dividing by n.
What effect does this have? When we scale by dividing by square root of n, the variance of Sn over square root of n is going to be the variance of Sn divided by n. That's how variances behave. The variance of Sn is n sigma-squared; divided by n, that's sigma-squared, which means that when we scale in this particular way, as n changes, the variance doesn't change. So the width of our distribution will be sort of constant. The distribution changes shape, but it doesn't become narrower, as was the case here. It doesn't become wider; it kind of keeps the same width. So perhaps in the limit, this distribution is going to take an interesting shape. And that's indeed the case. So let's do what we did before. So we're looking at the sum, and we want to divide the sum by something that goes like square root of n. So the variance of Sn is n sigma squared. The standard deviation of Sn is the square root of that. It's this number. So effectively, we're scaling by order of square root n. Now, I'm doing another thing here. If my random variable has a positive mean, then this quantity is going to have a mean that's positive and growing. It's going to be shifting to the right. Why is that? Sn has a mean that's proportional to n. When I divide by square root n, then it means that the mean scales like square root of n. So my distribution would still keep shifting after I do this division. I want to keep my distribution in place, so I subtract out the mean of Sn. So what we're doing here is a standard technique or transformation where you take a random variable and you so-called standardize it. I remove the mean of that random variable and I divide by the standard deviation. This results in a random variable that has 0 mean and unit variance. What Zn measures is the following: Zn tells me how many standard deviations I am away from the mean. Sn minus (n times expected value of X) tells me how much Sn is away from the mean value of Sn. And by dividing by the standard deviation of Sn, this tells me how many standard deviations away from the mean I am. So we're going to look at this random variable Zn, which is just a linear transformation of Sn. And we're going to compare this random variable to a standard normal random variable. So a standard normal is the random variable that you are familiar with, given by the usual formula, and for which we have tables. This Zn has 0 mean and unit variance. So in that respect, it has the same statistics as the standard normal. The distribution of Zn could be anything -- it can be pretty messy. But there is this amazing theorem called the central limit theorem that tells us that the distribution of Zn approaches the distribution of the standard normal in the following sense: probabilities of this type that you can calculate for Zn become, in the limit, the same as the probabilities that you would get from the standard normal tables for Z. It's a statement about the cumulative distribution functions. This quantity, as a function of c, is the cumulative distribution function of the random variable Zn. This is the cumulative distribution function of the standard normal. The central limit theorem tells us that the cumulative distribution function of the sum of a number of random variables, after it is appropriately standardized, approaches the cumulative distribution function of the standard normal distribution.
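A minimal simulation sketch of this statement (the Bernoulli(1/2) choice for the X's, the value of n, and the evaluation point c are arbitrary choices for illustration): it estimates the CDF of Zn at c by simulation and compares it with the standard normal CDF.

import math, random

def phi(c):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(c / math.sqrt(2.0)))

n = 30
mu, sigma = 0.5, 0.5   # mean and standard deviation of one Bernoulli(1/2) X
c = 1.0
trials = 100000

count = 0
for _ in range(trials):
    s = sum(random.randint(0, 1) for _ in range(n))   # Sn
    z = (s - n * mu) / (math.sqrt(n) * sigma)         # standardized Zn
    if z <= c:
        count += 1

print(count / trials)  # estimate of P(Zn <= c)
print(phi(c))          # P(Z <= c), about 0.84; already reasonably close for this moderate n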
In particular, this tells us that we can calculate probabilities for Zn when n is large by calculating instead probabilities for Z. And that's going to be a good approximation. Probabilities for Z are easy to calculate because they're well tabulated. So we get a very nice shortcut for calculating probabilities for Zn. Now, it's not Zn that you're interested in. What you're interested in is Sn. And Sn -- inverting this relation here -- Sn is square root n sigma Zn plus n expected value of X. All right. Now, if you can calculate probabilities for Zn, even approximately, then you can certainly calculate probabilities for Sn, because one is a linear function of the other. And we're going to do a little bit of that next time. You're going to get, also, some practice in recitation. At a more vague level, you could describe the central limit theorem as saying the following, when n is large, you can pretend that Zn is a standard normal random variable and do the calculations as if Zn was standard normal. Now, pretending that Zn is normal is the same as pretending that Sn is normal, because Sn is a linear function of Zn. And we know that linear functions of normal random variables are normal. So the central limit theorem essentially tells us that we can pretend that Sn is a normal random variable and do the calculations just as if it were a normal random variable. Mathematically speaking though, the central limit theorem does not talk about the distribution of Sn, because the distribution of Sn becomes degenerate in the limit, just a very flat and long thing. So strictly speaking mathematically, it's a statement about cumulative distributions of Zn's. Practically, the way you use it is by just pretending that Sn is normal. Very good. Enjoy the Thanksgiving Holiday.
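As a small companion to the scaling discussion in this lecture, here is a sketch comparing the three scalings numerically (again with an arbitrary Bernoulli(1/2) choice for the X's): the variance of Sn grows with n, the variance of Sn/n shrinks, and the variance of Sn/sqrt(n) stays put.

import random, statistics

def scaled_variances(n, trials=5000):
    sums = [sum(random.randint(0, 1) for _ in range(n)) for _ in range(trials)]
    return (statistics.variance(sums),                        # Var(Sn), about n/4
            statistics.variance([s / n for s in sums]),       # Var(Sn/n), about 1/(4n)
            statistics.variance([s / n**0.5 for s in sums]))  # Var(Sn/sqrt(n)), about 1/4

for n in (10, 100, 1000):
    print(n, scaled_variances(n))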
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Convergence_in_Probability_and_in_the_Mean_Part_1.txt
In this exercise, we'll be working with the notion of convergence in probability, as well as some other notion of convergence of random variables that we'll introduce later. The first type of random variable is xn, where xn has probability 1 minus 1 over n of being 0 and probability 1 over n of being 1. And graphically, we see that we have a pretty big mass, 1 minus 1 over n, at location 0, and a tiny bit somewhere here, only 1 over n, at 1. So this will be the PMF for xn. On the other hand, we have the sequence of random variables yn. Fairly similar to xn, with a slight tweak. The similar part is that it also has a very high probability of being at 0, mass 1 minus 1 over n. But on the off chance that yn is not at 0, it has a pretty big value, n. So it has probability 1 over n of being somewhere out there, at n. So to contrast the two graphs, we see at 0 they have the same amount of mass, 1 minus 1 over n, but for yn, it's all the way out there that it has a small mass of 1 over n. So this will be our p_yn of y. And for the remainder of the problem, we'll be looking at the regime where the number n tends to infinity, and study what will happen to these two sequences of random variables. In part A, we're to compute the expected value and variance for both xn and yn. Let's get started. The expected value of xn is given by the probability that it's at 1, which is 1 over n, times 1, plus the probability of being at 0, 1 minus 1 over n, times the value 0. And that gives us 1 over n. To calculate the variance of xn, recall that the variance is simply the expected value of (xn minus the expected value of xn) squared, where the expected value in this case is 1 over n, from the previous calculation we have here. We take the square of this difference and compute the whole expectation, and this gives us 1 over n times (1 minus 1 over n) squared, plus the remaining probability, 1 minus 1 over n, of xn being at 0, times (0 minus 1 over n) squared. And if we carry out the calculations here, we'll get (n minus 1) over n squared. Now, let's turn to yn. The expected value of yn is equal to the probability of being at 0 times 0, plus the probability of being at n, which is 1 over n, times the value n. And this gives us 1. The variance of yn: we do the same thing as before. We have probability 1 minus 1 over n of being at 0, multiplied by (0 minus 1) squared, where 1 is the expected value of yn. And with probability 1 over n it's out there, equal to n, and this is multiplied by (n minus 1) squared. And this gives us n minus 1. Already, we can see that while the expected value for xn was 1 over n, the expected value for yn is sitting right at 1. It does not decrease as n increases. And also, while the variance for xn is (n minus 1) over n squared, the variance for yn is much bigger. It is actually increasing to infinity as n goes to infinity. So these intuitions will be helpful for the remainder of the problem. In part B, we're asked to use Chebyshev's Inequality and see whether xn or yn converges to any number in probability. Let's first recall what the inequality is about. It says that if we have a random variable, in our case xn, then the probability that xn deviates from the expected value of xn, in our case 1 over n -- that the absolute value of this difference is greater than or equal to epsilon -- is bounded above by the variance of xn divided by epsilon squared. Well, in our case, we know the variance is (n minus 1) over n squared, hence this whole term is that expression divided by epsilon squared. Now, we know that as n gets really big, the probability of xn being at 0 is very big. It's 1 minus 1 over n.
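Before moving on, here is a quick numerical check of these part A answers -- a sketch that simulates xn and yn for one value of n (the value of n and the number of samples are arbitrary choices):

import random, statistics

n = 50
samples = 200000

xs = [1 if random.random() < 1 / n else 0 for _ in range(samples)]  # xn
ys = [n if random.random() < 1 / n else 0 for _ in range(samples)]  # yn

print(statistics.mean(xs), 1 / n)                # E[xn] = 1/n
print(statistics.pvariance(xs), (n - 1) / n**2)  # var(xn) = (n-1)/n^2
print(statistics.mean(ys), 1.0)                  # E[yn] = 1
print(statistics.pvariance(ys), n - 1)           # var(yn) = n-1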
So a safe bet to guess is that if xn work to converge anywhere on the real line, it might just converge to the point 0. And let's see if that is true. Now, to show that xn converges to 0 in probability, formally we need to show that for every fixed epsilon greater than 0, the probability that xn minus 0 greater than epsilon has to be 0, and the limit has n going to infinity. And hopefully, the inequalities above will help us achieve this goal. And let's see how that is done. I would like to have an estimate, in fact, an upper bound of the probability xn absolute value greater or equal to epsilon. And now, we're going to do some massaging to this equation so that it looks like what we know before, which is right here. Now, we see that this equation is in fact, less than probability xn minus 1 over n greater or equal to epsilon plus 1 over n. Now, I will justify this inequality in one second. But suppose that you believe me for this inequality, we can simply plug-in the value right here, namely substituting epsilon plus 1 over n, in the place of epsilon right here and use the Chebyshev Inequality we did earlier to arrive at the following inequality, which is n minus 1 over n squared times, instead of epsilon, now we have epsilon plus 1 over n squared. Now, if we take n to infinity in this equation, see what happens. Well, this term here converges to 0 because n squared is much bigger than n minus 1. And this term here converges to number 1 over epsilon squared. So it becomes 0 times 1 over epsilon squared, hence the whole term converges to 0. And this proves that indeed, the limit of the term here as n going to infinity is equal to 0, and that implies xn converges to 0 in probability. Now, there is the one thing I did not justify in the process, which is why is probability of absolute value xn greater than epsilon less then the term right here? So let's take a look. Well, the easiest way to see this is to see what ranges of xn are we talking about in each case. Well, in the first case, we're looking at interval around 0 plus minus epsilon and xn can lie anywhere here. While in the second case, right here, we can see that the set of range values for xn is precisely this interval here, which was the same as before, but now, we actually have less on this side, where the starting point and the interval on the right is epsilon plus 2 over n. And therefore, the right hand style captures strictly less values of xn than the left hand side, hence the inequality is true. Now, we wonder if we can use the same trick, Chebyshev Inequality, to derive the result for yn as well. Let's take a look. The probability of yn minus it's mean, 1, greater or equal to epsilon. From the Chebyshev Inequality, we have variance of yn divided by epsilon squared. Now, there is a problem. The variance of yn is very big. In fact, it is equal to n minus 1. And we calculated in part A, divided by epsilon squared. And this quantity here diverges as n going to infinity to infinity itself. So in this case, the Chebyshev Inequality does not tell us much information of whether yn converges or not. Now, going to part C, the question is although we don't know anything about yn from just the Chebyshev Inequality, does yn converge to anything at all? Well, it turns out it does. In fact, we don't have to go through anything more complicated than distribution yn itself. So from the distribution yn, we know that absolute value of yn greater or equal to epsilon is equal to 1 over n whenever epsilon is less than n. 
And this is true because we know yn has a lot of mass at 0 and a tiny bit of mass, 1 over n, at location n. So if we draw the cutoff here at epsilon, then the probability of yn landing to the right of epsilon is simply equal to 1 over n. And this tells us, if we take the limit as n goes to infinity and measure the probability that yn -- just to write it clearly -- deviates from 0 by more than epsilon, this is equal to the limit as n goes to infinity of 1 over n. And that is equal to 0. From this calculation, we know that yn does converge to 0 in probability as n goes to infinity. For part D, we'd like to know whether convergence in probability implies convergence in expectation. That is, if we have a sequence of random variables, let's call it zn, that converges to a number c in probability as n goes to infinity, does it also imply that the limit as n goes to infinity of the expected value of zn is also c? Is that true? Well, intuitively it seems true, because in the limit, zn almost looks like it concentrates on c solely, hence we might expect that the expected value also goes to c itself. Well, unfortunately, that is not quite true. In fact, we have a counterexample right here by looking at yn. For yn, we know that the expected value of yn is equal to 1 for all n. It does not matter how big n gets. But we also know from part C that yn does converge to 0 in probability. And this means somehow yn can get very close to 0, yet its expected value still stays away from 0, at 1. And the reason, again, goes back to the way yn was constructed. Now, as n goes to infinity, the probability of yn being at 0, 1 minus 1 over n, approaches 1. So it's very likely that yn takes the value 0, but whenever, on the off chance, yn takes a value other than 0, it's a huge number. It is n, even though it has a small probability of 1 over n. Putting these two factors together tells us the expected value of yn always stays at 1. And yet, in probability, it's very likely that yn is around 0. So this example tells us that convergence in probability is not that strong. It tells us something about the random variables, but it does not tell us whether the mean values of the random variables converge to the same number.
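The tension described here is easy to see numerically -- a small sketch (the grid of n values, the band epsilon, and the number of samples are arbitrary choices) that estimates both the tail probability and the mean of yn as n grows:

import random, statistics

eps = 0.5
for n in (10, 100, 1000, 10000):
    ys = [n if random.random() < 1 / n else 0 for _ in range(100000)]
    tail = sum(1 for y in ys if abs(y) >= eps) / len(ys)  # about 1/n, goes to 0
    print(n, tail, statistics.mean(ys))                   # the mean stays near 1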
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Variance_in_the_Stick_Breaking_Problem.txt
Hi. In this problem, we'll get a chance to see the usefulness of conditioning in helping us to calculate quantities that would otherwise be difficult to calculate. Specifically, we'll be using the law of iterated expectations and the law of total variance. Before we get started, let's just take a quick moment to interpret what these two laws are saying. Really, what they're saying is: in order to calculate the expectation or the variance of some random variable x, if that's difficult to do directly, we'll instead attack the problem in stages. So the first stage is, we'll condition on some related random variable, y. And the hope is that by conditioning on this and reducing it to this conditional universe, the expectation of x will be easier to calculate. Now, recall that this conditional expectation is really a random variable, which is a function of the random variable y. So what we've done is we first average out x given some y. What remains is some new random variable, which is a function of y. And now what we have is randomness in y, which we then average out again to get the final expectation of x. OK, so in this problem, we'll actually see an example of how this plays out. One more thing before we get started that's useful to recall is that if y is a uniform random variable, distributed between a and b, then the variance of y is (b minus a) squared over 12, and the expectation of y is just the midpoint, (a plus b) over 2. All right, so let's get started on the problem. So what we have is a stick of some fixed length, l, and what we do is break it uniformly at random. So we choose a point uniformly at random along this stick. And we break it there, and then we keep the left portion of that stick. So let's call the length of this left portion after the first break random variable y. So it's random because the point where we break it is random. And then what we do is we repeat this process. We'll take the left side of the stick that's left, and we'll pick another point, uniformly at random, along this left remaining side. And we'll break it again, and keep the left side of that break. And we'll call the length of the final remaining piece x, which again is random. The problem is really asking us to calculate the expectation and variance of x. So at first, it seems difficult to do, because the expectation and variance of x depend on where you break it the second time and also on where you break it the first time. So let's see if conditioning can help us here. So the first thing that we'll notice is that, if we just consider y, the length of the stick after the first break, it's actually pretty easy to calculate the expectation and variance of y. Because y, when you think about it, is actually just a simple uniform random variable, uniformly distributed between 0 and l, the length of the stick. And this is because we're told that we choose the point of the break uniformly at random between 0 and l. And so wherever we choose it, that's going to be the length of the left side of the stick. And so because of this, we know that the expectation of y is just l/2, and the variance of y is l squared over 12. But unfortunately, calculating the expectation and variance of x is not quite as simple, because x isn't just uniformly distributed between 0 and some fixed number. It's actually uniformly distributed between 0 and y, wherever the first break was. But where the first break is is random too. And so we can't just say that x is a uniformly distributed random variable.
So what do we do instead? Well, we'll make the nice observation that, let's pretend that we actually know what y is. If we knew what y was, then calculating the expectation of x would be simple, right? So if we were given that y is just some little y, then x would in fact just be uniformly distributed between 0 and little y. And then if that's the case, then our calculation is simple, because the expectation of x would just be y/2, and the variance would just be y squared over 12. All right, so let's make that a little bit more formal. What we're saying is that the expectation of x, if we knew what y was, would just be y/2. And the variance of x, if we knew what y was, would just be y squared over 12. All right, so notice what we've done. We've taken the second stage and we've said, let's pretend we know what happens in the first stage where we break it. And we know what y, the first break, was. Then the second stage becomes simple, because the average of x is just going to be the midpoint. Now, what we do to calculate the actual expectation of x, well, we'll invoke the law of iterated expectations. So the expectation of x is the expectation of the conditional expectation of x given y, which in this case is just the expectation of y/2. And we know what the expectation of y is. It's l/2. And so this is just l/4. All right, and so notice what we've done. We've taken this calculation and done it in stages. So we assume we know where the first break is. Given that, the average location of the second break becomes simple. It's just at the midpoint. And then we move up to the higher stage, and now we average out over where the first break could have been. And that gives us our final answer. And notice that this actually makes sense if we just think about it intuitively, because on average, the first break will be somewhere in the middle. And then that will leave us with half the stick left, and we break it again. On average, that will leave us with another half. So on average, you get a quarter of the original stick left, which makes sense. All right, so that's the first part, where we use the law of iterated expectations. Now, let's go to part B, where we're actually asked to find the variance. The variance is given by the law of total variance. So let's do it in stages. We'll first calculate the first term, the expectation of the conditional variance. Well, what is the expectation of the conditional variance? We've already calculated what this conditional variance is. The conditional variance is y squared over 12. So let's just plug that in. It's the expectation of y squared over 12. All right, now this looks like it could be a little difficult to calculate. But let's just first pull out the 1/12. And then remember, one way to calculate the expectation of the square of a random variable is to use the variance. So recall that the variance of any random variable is just the expectation of the square minus the square of the expectation. So if we want to calculate the expectation of the square, we can just take the variance and add the square of the expectation. So this actually we can get pretty easily. It's just the variance of y plus the square of the expectation of y. And we know what these two terms are. The variance of y is l squared over 12. And the expectation of y is l/2. So when you square that, you get l squared over 4. So l squared over 12 plus l squared over 4 gives you l squared over 3. And you get that the first term is l squared over 36.
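A quick Monte Carlo check of the l/4 result -- a sketch where the stick length and the number of trials are arbitrary choices:

import random, statistics

l = 1.0
trials = 500000

xs = []
for _ in range(trials):
    y = random.uniform(0, l)  # first break: the left piece has length y
    x = random.uniform(0, y)  # second break: the remaining piece has length x
    xs.append(x)

print(statistics.mean(xs), l / 4)  # both close to 0.25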
All right, now let's calculate the second term. Second term is the variance of the conditional expectation. So the variance of expectation of x given y. Well, what is the expectation of x given y? We've already calculated that. That's y/2. So what we really want is the variance of y/2. And remember, when you have a constant inside the variance, you pull it out but you square it. So what you get is 1/4 the variance of y, which we know that the variance of y is l squared over 12. So we get that this is l squared over 48. OK, so we've calculated both terms of this conditional variance. So all we need to do to find the final answer is just to add them. So it's l squared over 36 plus l squared over 48. And so, the final answer is 7 l squared over 144. OK, and so this is the first, the expectation of x, maybe you could have guessed intuitively. But the variance of x is not something that looks like something that you could have calculated off the top of your head. And so I guess the lesson from this example is that it is often very helpful if you condition on some things, because it allows you to calculate things in stages and build up from the bottom. But it's important to note that the choice of what you condition on-- so the choice of y-- is actually very important, because you could choose lots of other y's that wouldn't actually help you at all. And so how to actually choose this y is something that you can learn based on just having practiced with these kinds of problems. So again, the overall lesson is, conditioning can often help when you calculate these problems. And so you should look to see if that could be a possible solution. So I hope that was helpful, and see you next time.
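And the same kind of Monte Carlo check for the variance, keeping the two terms of the law of total variance separate -- a sketch under the same arbitrary choices as before:

import random, statistics

l = 1.0
trials = 500000
ys, xs = [], []
for _ in range(trials):
    y = random.uniform(0, l)
    ys.append(y)
    xs.append(random.uniform(0, y))

var_x = statistics.pvariance(xs)
term1 = statistics.mean([y**2 / 12 for y in ys])   # E[var(X|Y)] = E[Y^2]/12
term2 = statistics.pvariance([y / 2 for y in ys])  # var(E[X|Y]) = var(Y/2)

print(var_x, 7 * l**2 / 144)  # about 0.0486
print(term1, l**2 / 36)       # about 0.0278
print(term2, l**2 / 48)       # about 0.0208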
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
20_Central_Limit_Theorem.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We're going to finish today our discussion of limit theorems. I'm going to remind you what the central limit theorem is, which we introduced briefly last time. We're going to discuss what exactly it says and its implications. And then we're going to apply it to a couple of examples, mostly on the binomial distribution. OK, so the situation is that we are dealing with a large number of independent, identically distributed random variables. And we want to look at the sum of them and say something about the distribution of the sum. We might want to say that the sum is distributed approximately as a normal random variable, although, formally, this is not quite right. As n goes to infinity, the distribution of the sum becomes very spread out, and it doesn't converge to a limiting distribution. In order to get an interesting limit, we need first to take the sum and standardize it. By standardizing it, what we mean is to subtract the mean and then divide by the standard deviation. Now, the mean is, of course, n times the expected value of each one of the X's. And the standard deviation is the square root of the variance. The variance is n times sigma squared, where sigma squared is the variance of the X's -- so the square root of n times sigma squared is the standard deviation. And after we do this, we obtain a random variable that has 0 mean -- it's centered -- and the variance is equal to 1. And so the variance stays the same, no matter how large n is going to be. So the distribution of Zn keeps changing with n, but it cannot change too much. It stays in place. The mean is 0, and the width remains also roughly the same because the variance is 1. The surprising thing is that, as n grows, the distribution of Zn kind of settles into a certain asymptotic shape. And that's the shape of a standard normal random variable. So standard normal means that it has 0 mean and unit variance. More precisely, what the central limit theorem tells us is a relation between the cumulative distribution function of Zn and the cumulative distribution function of the standard normal. So for any given number c, the probability that Zn is less than or equal to c, in the limit, becomes the same as the probability that the standard normal is less than or equal to c. And of course, this is useful because these probabilities are available from the normal tables, whereas the distribution of Zn might be a very complicated expression if you were to calculate it exactly. So some comments about the central limit theorem. First thing is that it's quite amazing that it's universal. It doesn't matter what the distribution of the X's is. It can be any distribution whatsoever, as long as it has finite mean and finite variance. And when you go and do your approximations using the central limit theorem, the only things that you need to know about the distribution of the X's are the mean and the variance. You need those in order to standardize Sn. I mean -- to subtract the mean and divide by the standard deviation -- you need to know the mean and the variance. But these are the only things that you need to know in order to apply it. In addition, it's a very accurate computational shortcut.
So the distribution of this Zn's, in principle, you can calculate it by convolution of the distribution of the X's with itself many, many times. But this is tedious, and if you try to do it analytically, it might be a very complicated expression. Whereas by just appealing to the standard normal table for the standard normal random variable, things are done in a very quick way. So it's a nice computational shortcut if you don't want to get an exact answer to a probability problem. Now, at a more philosophical level, it justifies why we are really interested in normal random variables. Whenever you have a phenomenon which is noisy, and the noise that you observe is created by adding the lots of little pieces of randomness that are independent of each other, the overall effect that you're going to observe can be described by a normal random variable. So in a classic example that goes 100 years back or so, suppose that you have a fluid, and inside that fluid, there's a little particle of dust or whatever that's suspended in there. That little particle gets hit by molecules completely at random -- and so what you're going to see is that particle kind of moving randomly inside that liquid. Now that random motion, if you ask, after one second, how much is my particle displaced, let's say, in the x-axis along the x direction. That displacement is very, very well modeled by a normal random variable. And the reason is that the position of that particle is decided by the cumulative effect of lots of random hits by molecules that hit that particle. So that's a sort of celebrated physical model that goes under the name of Brownian motion. And it's the same model that some people use to describe the movement in the financial markets. The argument might go that the movement of prices has to do with lots of little decisions and lots of little events by many, many different actors that are involved in the market. So the distribution of stock prices might be well described by normal random variables. At least that's what people wanted to believe until somewhat recently. Now, the evidence is that, actually, these distributions are a little more heavy-tailed in the sense that extreme events are a little more likely to occur that what normal random variables would seem to indicate. But as a first model, again, it could be a plausible argument to have, at least as a starting model, one that involves normal random variables. So this is the philosophical side of things. On the more accurate, mathematical side, it's important to appreciate exactly quite kind of statement the central limit theorem is. It's a statement about the convergence of the CDF of these standardized random variables to the CDF of a normal. So it's a statement about convergence of CDFs. It's not a statement about convergence of PMFs, or convergence of PDFs. Now, if one makes additional mathematical assumptions, there are variations of the central limit theorem that talk about PDFs and PMFs. But in general, that's not necessarily the case. And I'm going to illustrate this with-- I have a plot here which is not in your slides. But just to make the point, consider two different discrete distributions. This discrete distribution takes values 1, 4, 7. This discrete distribution can take values 1, 2, 4, 6, and 7. So this one has sort of a periodicity of 3, this one, the range of values is a little more interesting. The numbers in these two distributions are cooked up so that they have the same mean and the same variance. 
Now, what I'm going to do is to take eight independent copies of the random variable and plot the PMF of the sum of eight random variables. Now, if I plot the PMF of the sum of 8 of these, I get the plot, which corresponds to these bullets in this diagram. If I take 8 random variables, according to this distribution, and add them up and compute their PMF, the PMF I get is the one denoted here by the X's. The two PMFs look really different, at least, when you eyeball them. On the other hand, if you were to plot the CDFs of them, then the CDFs, if you compare them with the normal CDF, which is this continuous curve, the CDF, of course, it goes up in steps because we're looking at discrete random variables. But it's very close to the normal CDF. And if we, instead of n equal to 8, we were to take 16, then the coincidence would be even better. So in terms of CDFs, when we add 8 or 16 of these, we get very close to the normal CDF. We would get essentially the same picture if I were to take 8 or 16 of these. So the CDFs sit, essentially, on top of each other, although the two PMFs look quite different. So this is to appreciate that, formally speaking, we only have a statement about CDFs, not about PMFs. Now in practice, how do you use the central limit theorem? Well, it tells us that we can calculate probabilities by treating Zn as if it were a standard normal random variable. Now Zn is a linear function of Sn. Conversely, Sn is a linear function of Zn. Linear functions of normals are normal. So if I pretend that Zn is normal, it's essentially the same as if we pretend that Sn is normal. And so we can calculate probabilities that have to do with Sn as if Sn were normal. Now, the central limit theorem does not tell us that Sn is approximately normal. The formal statement is about Zn, but, practically speaking, when you use the result, you can just pretend that Sn is normal. Finally, it's a limit theorem, so it tells us about what happens when n goes to infinity. If we are to use it in practice, of course, n is not going to be infinity. Maybe n is equal to 15. Can we use a limit theorem when n is a small number, as small as 15? Well, it turns out that it's a very good approximation. Even for quite small values of n, it gives us very accurate answers. So n over the order of 15, or 20, or so give us very good results in practice. There are no good theorems that will give us hard guarantees because the quality of the approximation does depend on the details of the distribution of the X's. If the X's have a distribution that, from the outset, looks a little bit like the normal, then for small values of n, you are going to see, essentially, a normal distribution for the sum. If the distribution of the X's is very different from the normal, it's going to take a larger value of n for the central limit theorem to take effect. So let's illustrates this with a few representative plots. So here, we're starting with a discrete uniform distribution that goes from 1 to 8. Let's add 2 of these random variables, 2 random variables with this PMF, and find the PMF of the sum. This is a convolution of 2 discrete uniforms, and I believe you have seen this exercise before. When you convolve this with itself, you get a triangle. So this is the PMF for the sum of two discrete uniforms. Now let's continue. Let's convolve this with itself. These was going to give us the PMF of a sum of 4 discrete uniforms. And we get this, which starts looking like a normal. 
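The PMFs being described can be reproduced by repeated convolution -- a minimal sketch using the discrete uniform on 1 through 8, as in these plots (the printed summaries are just one way to inspect the result; plotting p against range(n, 8*n + 1) gives the triangle for n = 2 and the bell shape for large n):

import numpy as np

base = np.full(8, 1 / 8)  # PMF of one X, uniform on {1, ..., 8}

def pmf_of_sum(n):
    # convolve the PMF with itself n - 1 times to get the PMF of X1 + ... + Xn
    p = base.copy()
    for _ in range(n - 1):
        p = np.convolve(p, base)
    return p

for n in (1, 2, 4, 32):
    p = pmf_of_sum(n)
    # the support of the sum is {n, ..., 8n}; p.sum() should be 1
    print(n, round(p.max(), 4), p.sum())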
If we go to n equal to 32, then it looks, essentially, exactly like a normal. And it's an excellent approximation. So this is the PMF of the sum of 32 discrete random variables with this uniform distribution. If we start with a PMF which is not symmetric -- this one is symmetric around the mean -- so if we start with a PMF which is non-symmetric, such as this truncated geometric PMF here, then things do not work out as nicely when I add 8 of these. That is, if I convolve this with itself 8 times, I get this PMF, which maybe resembles the normal one a little bit. But you can really tell that it's different from the normal if you focus on the details here and there. Here it sort of rises sharply. Here it tails off a bit more slowly. So there's an asymmetry here that's present, which is a consequence of the asymmetry of the distribution we started with. If we go to 16, it looks a little better, but still you can see the asymmetry between this tail and that tail. If you get to 32, there's still a little bit of asymmetry, but at least now it starts looking like a normal distribution. So the moral from these plots is that it might vary a little bit what kind of values of n you need before you get a really good approximation. But for values of n in the range of 20 to 30 or so, usually you expect to get a pretty good approximation. At least that's what the visual inspection of these graphs tells us. So now that we know that we have a good approximation in our hands, let's use it. Let's use it by revisiting an example from last time. This is the polling problem. We're interested in the fraction of the population that has a certain habit. And we try to find what f is. And the way we do it is by polling people at random and recording the answers that they give, whether they have the habit or not. So for each person, we get a Bernoulli random variable. With probability f, a person is going to respond 1, or yes -- so this is with probability f. And with the remaining probability 1-f, the person responds no. We record this number, which is how many people answered yes, divided by the total number of people that we asked. This is the fraction inside our sample that answered yes. And as we discussed last time, you might start with some specs for the poll. And the specs have two parameters -- the accuracy that you want, and the confidence that you want to have that you did really obtain the desired accuracy. So the spec here is that we want probability 95% that our estimate is within 1 percentage point of the true answer. So the event of interest is this: that the distance between the result of the poll and the true answer is bigger than 1 percentage point. And we're interested in calculating or approximating this particular probability. So we want to do it using the central limit theorem. And one way of arranging the mechanics of this calculation is to take the event of interest and massage it by subtracting and dividing things on both sides of this inequality, so that you bring into the picture the standardized random variable Zn, and then apply the central limit theorem. So the event of interest -- let me write it in full -- Mn is this quantity, so I'm putting it here, minus f, which I write as nf divided by n. So this is the same as that event. We're going to calculate the probability of this. This is not exactly in the form in which we apply the central limit theorem. To apply the central limit theorem, we need, down here, to have sigma square root n.
So how can I put sigma square root n here? I can divide both sides of this inequality by sigma. And then I can take a factor of square root n from here and send it to the other side. So this event is the same as that event. This will happen if and only if that will happen. So calculating the probability of this event here is the same as calculating the probability that this event happens. And now we are in business, because the random variable that we've got in here is Zn -- or the absolute value of Zn -- and we're talking about the probability that the absolute value of Zn is bigger than a certain number. Since Zn is to be approximated by a standard normal random variable, our approximation is going to be, instead of asking for the absolute value of Zn being bigger than this number, we will ask for the absolute value of Z being bigger than this number. So this is the probability that we want to calculate. And now Z is a standard normal random variable. There's a small difficulty, the one that we also encountered last time. And the difficulty is that the standard deviation, sigma, of the Xi's is not known. Sigma squared, in this example, is f times (1-f), and the only thing that we know about sigma is that it's going to be a number less than or equal to 1/2. OK, so we're going to have to use an inequality here. We're going to use a conservative value of sigma, the value of sigma equal to 1/2, and use that instead of the exact value of sigma. And this gives us an inequality going this way. Let's just make sure why the inequality goes this way. We've got, on our axis, two numbers. One number is 0.01 square root n divided by sigma. And the other number is 0.02 square root of n. And my claim is that the numbers are related to each other in this particular way. Why is this? Sigma is less than or equal to 1/2. So 1/sigma is at least 2. And since 1/sigma is at least 2, this means that this number sits to the right of that number. So here we have the probability that Z is bigger than this number. The probability of falling out there is less than the probability of falling beyond this smaller number. So that's what that last inequality is saying -- this probability is smaller than that probability. This is the probability that we're interested in, but since we don't know sigma, we take the conservative value, and we use an upper bound in terms of the probability of this interval here. And now we are in business. We can start using our normal tables to calculate probabilities of interest. So for example, let's say that we take n to be 10,000. How is the calculation going to go? We want to calculate the probability that the absolute value of Z is bigger than 0.02 times the square root of 10,000, which is the probability that the absolute value of Z is larger than or equal to 2. And here let's do some mechanics, just to stay in shape. The probability that you're larger than or equal to 2 in absolute value, since the normal is symmetric around the mean, is going to be twice the probability that Z is larger than or equal to 2. Can we use the cumulative distribution function of Z to calculate this? Well, almost. The cumulative gives us probabilities of being less than something, not bigger than something. So we need one more step, and write this as 1 minus the probability that Z is less than or equal to 2. And this probability, now, you can read off from the normal tables. And the normal tables will tell you that this probability is 0.9772. And you do get an answer. And the answer is 0.0456. OK, so we tried 10,000.
And we find that our probability of error is 4.5%, so we're doing better than the spec that we had. So this tells us that maybe we have some leeway. Maybe we can use a smaller sample size and still stay within our specs. Let's try to find how much we can push the envelope. How much smaller can we take n? To answer that question, we need to do this kind of calculation, essentially, going backwards. We're going to fix this number to be 0.05 and work backwards. (Just to double-check the previous calculation: n was 10,000, but we take the square root, so that's 100, and 0.02 times 100 gives us the 2 that we used.) So we'll have to do this calculation now backwards: figure out, if this is 0.05, what kind of number we're going to need here and then here, and from this we will be able to tell what value of n we need. OK, so we want to find n such that the probability that Z is bigger than 0.02 square root n is 0.05. OK, so Z is a standard normal random variable. And we want the probability that we are outside this range. We want the probability of those two tails together. Those two tails together should have probability 0.05. This means that this tail, by itself, should have probability 0.025. And this means that this probability should be 0.975. Now, if this probability is to be 0.975, what should that number be? You go to the normal tables, and you find which is the entry that corresponds to that number. I actually brought a normal table with me. And 0.975 is down here. And it tells you that the number that corresponds to it is 1.96. So this tells us that this number should be equal to 1.96. And now, from here, you do the calculations. And you find that n is 9604. So with a sample of 10,000, we got probability of error 4.5%. With a slightly smaller sample size of 9,600, we can get the probability of a mistake to be 0.05, which was exactly our spec. So these are essentially the two ways that you're going to be using the central limit theorem. Either you're given n and you try to calculate probabilities. Or you're given the probabilities, and you want to work backwards to find n itself. So in this example, the random variable that we dealt with was, of course, a binomial random variable. The Xi's were Bernoulli, so the sum of the Xi's was binomial. So the central limit theorem certainly applies to the binomial distribution. To be more precise, of course, it applies to the standardized version of the binomial random variable. So here's what we did, essentially, in the previous example. We fix the number p, which is the probability of success in our experiments. p corresponds to f in the previous example. We let every Xi be a Bernoulli random variable, and our standing assumption is that these random variables are independent. When we add them, we get a random variable that has a binomial distribution. We know the mean and the variance of the binomial, so we take Sn, we subtract the mean, which is this, and divide by the standard deviation. The central limit theorem tells us that the cumulative distribution function of this random variable converges to that of a standard normal random variable in the limit. So let's do one more example of a calculation. Let's choose some specific numbers to work with: take n to be 36 and p to be 1/2, and ask for the probability that Sn is less than or equal to 21. So in this example, the first thing to do is to find the expected value of Sn, which is n times p. It's 18. Then we need to write down the standard deviation. The variance of Sn is the sum of the variances.
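Before working through the new example, here is a sketch of that backwards calculation done in software; the 0.975 quantile and the resulting n = 9604 match the numbers just read off the table.

```python
from scipy.stats import norm

z = norm.ppf(0.975)            # the 1.96 read off the normal table
n = (z / 0.02) ** 2            # solve 0.02 * sqrt(n) = 1.96 for n
print(round(z, 3), round(n))   # 1.96 and roughly 9604
```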
The variance of Sn, being the sum of the variances, is np times (1-p). And in this particular example, p times (1-p) is 1/4, n is 36, so this is 9. And that tells us that the standard deviation of Sn is equal to 3. So what we're going to do is to take the event of interest, which is Sn less than or equal to 21, and rewrite it in a way that involves the standardized random variable. So to do that, we need to subtract the mean. So we write this as Sn minus 18 should be less than or equal to 21 minus 18. This is the same event. And then divide by the standard deviation, which is 3, and we end up with this, which gives us a nice number on the right-hand side, which is 1. So the event of interest, that Sn is less than or equal to 21, is the same as the event that Zn is less than or equal to 1, which we then approximate by the event that a standard normal random variable is less than or equal to 1. And once more, you can look this up in the normal tables. And you find that the answer that you get is 0.8413. Now it's interesting to compare this answer that we got through the central limit theorem with the exact answer. The exact answer involves the exact binomial distribution. What we have here is the binomial probability that Sn is equal to k, which is given by this formula. And we add over all values of k going from 0 up to 21, we write two lines of code to calculate this sum, and we get the exact answer, which is 0.8785. So there's pretty good agreement between the two, although you wouldn't necessarily call it excellent agreement. Can we do a little better than that? OK. It turns out that we can. And here's the idea. So our random variable Sn has a mean of 18. It has a binomial distribution. It's described by a PMF that has a shape roughly like this and which keeps going on. Using the central limit theorem is basically pretending that Sn is normal with the right mean and variance. So pretending that Zn has 0 mean and unit variance, we approximate it with Z, which has 0 mean and unit variance. If you were to pretend that Sn is normal, you would approximate it with a normal that has the correct mean and correct variance. So it would still be centered at 18. And it would have the same variance as the binomial PMF. So using the central limit theorem essentially means that we keep the mean and the variance what they are, but we pretend that our distribution is normal. We want to calculate the probability that Sn is less than or equal to 21. I pretend that my random variable is normal, so I draw a line here and I calculate the area under the normal curve going up to 21. That's essentially what we did. Now, a smart person comes around and says, Sn is a discrete random variable. So the event that Sn is less than or equal to 21 is the same as Sn being strictly less than 22, because nothing in between can happen. So I'm going to use the central limit theorem approximation by pretending again that Sn is normal and finding the probability of this event while pretending that Sn is normal. So what this person would do would be to draw a line here, at 22, and calculate the area under the normal curve all the way to 22. Who is right? Which one is better? Well, neither, but we can do better than both if we sort of split the difference. So another way of writing the same event for Sn is to write it as Sn being less than 21.5. In terms of the discrete random variable Sn, all three of these are exactly the same event. But when you do the continuous approximation, they give you different probabilities.
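The comparison just made can be reproduced with a few lines; this is only a sketch of the "two lines of code" mentioned above, not the original code.

```python
from math import comb
from scipy.stats import norm

n, p = 36, 0.5
exact = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(22))   # exact P(Sn <= 21)
clt = norm.cdf((21 - 18) / 3)                                              # plain CLT answer, Phi(1)
print(round(exact, 4), round(clt, 4))                                      # 0.8785 and 0.8413
```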
It's a matter of whether you integrate the area under the normal curve up to 21, up to the midway point, or up to 22. It turns out that integrating up to the midpoint is what gives us the better numerical results. So we take here 21 and 1/2, and we integrate the area under the normal curve up to here. So let's do this calculation and see what we get. What would we change here? Instead of 21, we would now write 21 and 1/2. The 18 stays what it is, but this 21 becomes 21 and 1/2. And so this number becomes (21.5 minus 18) divided by 3, which is about 1.17. So we now look up into the normal tables and ask for the probability that Z is less than 1.17. So this here gets approximated by the probability that the standard normal is less than 1.17. And the normal tables will tell us this is 0.879. Going back to the previous slide, what we got this time with this improved approximation is 0.879. This is a really good approximation of the correct number. This is what we got using the 21. This is what we get using the 21 and 1/2. And it's an approximation that's sort of right on-- a very good one. The moral from this numerical example is that doing this 1/2 correction does give us better approximations. In fact, we can use this 1/2 idea to even calculate individual probabilities. So suppose you want to approximate the probability that Sn is equal to 19. If you were to pretend that Sn is normal and calculate this probability, the probability that the normal random variable is equal to 19 is 0. So you don't get an interesting answer. You get a more interesting answer by writing this event, Sn equal to 19, as being the same as the event of falling between 18 and 1/2 and 19 and 1/2, and using the normal approximation to calculate this probability. In terms of our previous picture, this corresponds to the following. We are interested in the probability that Sn is equal to 19. So we're interested in the height of this bar. We're going to consider the area under the normal curve going from here to here, and use this area as an approximation for the height of that particular bar. So what we're basically doing is, we take the probability under the normal curve that's assigned over a continuum of values and attribute it to the different discrete values. Whatever is above the midpoint gets attributed to 19. Whatever is below that midpoint gets attributed to 18. So this green area is our approximation of the value of the PMF at 19. So similarly, if you wanted to approximate the value of the PMF at this point, you would take this interval and integrate the area under the normal curve over that interval. It turns out that this gives a very good approximation of the PMF of the binomial. And actually, this was the context in which the central limit theorem was proved in the first place, when this business started. So this business goes back a few hundred years. And the central limit theorem was first proved by considering the PMF of a binomial random variable when p is equal to 1/2. People did the algebra, and they found out that the exact expression for the PMF is quite well approximated by the expression that you would get from a normal distribution. Then the proof was extended to binomials for more general values of p. So here we talk about this as a refinement of the general central limit theorem, but, historically, that refinement was where the whole business got started in the first place. All right, so let's go through the mechanics of approximating the probability that Sn is equal to 19-- exactly 19.
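A quick sketch of the half-correction, with the same mean 18 and standard deviation 3 as above:

```python
from scipy.stats import norm

mean, std = 18, 3
plain = norm.cdf((21 - mean) / std)         # Phi(1), about 0.8413
half  = norm.cdf((21.5 - mean) / std)       # about 0.878, essentially on top of the exact 0.8785
print(round(plain, 4), round(half, 4))
```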
As we said, for the event that Sn is equal to 19, we're going to write it as an event that covers an interval of unit length, from 18 and 1/2 to 19 and 1/2. This is the event of interest. The first step is to massage the event of interest so that it involves our Zn random variable. So subtract 18 from all sides. Divide by the standard deviation of 3 on all sides. That's the equivalent representation of the event. This is our standardized random variable Zn. These are just these numbers. And to do an approximation, we want to find the probability of this event, but Zn is approximately normal, so we plug in here the Z, which is the standard normal. So we want to find the probability that the standard normal falls inside this interval. You find this using CDFs, because this is the probability that you're less than this but not less than that. So it's a difference between two cumulative probabilities. Then you look up your normal tables. You find two numbers for these quantities, and, finally, you get a numerical answer for an individual entry of the PMF of the binomial. This is a pretty good approximation, it turns out. If you were to do the calculations using the exact formula, you would get something which is pretty close-- an error in the third digit-- so this is pretty good. So I guess what we did here with our discussion of the binomial slightly contradicts what I said before-- that the central limit theorem is a statement about cumulative distribution functions. In general, it doesn't tell you what to do to approximate PMFs themselves. And that's indeed the case in general. On the other hand, for the special case of a binomial distribution, the central limit theorem approximation, with this 1/2 correction, is a very good approximation even for the individual PMF. All right, so we spent quite a bit of time on mechanics. So let's spend the last few minutes today thinking a bit and looking at a small puzzle. So the puzzle is the following. Consider a Poisson process that runs over a unit interval, where the arrival rate is equal to 1. So this is the unit interval. And let X be the number of arrivals. And this is Poisson, with mean 1. Now, let me take this interval and divide it into n little pieces. So each piece has length 1/n. And let Xi be the number of arrivals during the i-th little interval. OK, what do we know about the random variables Xi? They are themselves Poisson. Each is the number of arrivals during a small interval. We also know that when n is big, so the length of the interval is small, these Xi's are approximately Bernoulli, with mean 1/n. I guess it doesn't matter whether we model them as Bernoulli or not. What matters is that the Xi's are independent. Why are they independent? Because, in a Poisson process, disjoint intervals are independent of each other. So the Xi's are independent. And they also have the same distribution. And we have that X, the total number of arrivals, is the sum of the Xi's. So the central limit theorem tells us that, approximately, the sum of independent, identically distributed random variables, when we have lots of these random variables, behaves like a normal random variable. So by using this decomposition of X into a sum of i.i.d. random variables, and by using values of n that are bigger and bigger, by taking the limit, it should follow that X has a normal distribution. On the other hand, we know that X has a Poisson distribution. So something must be wrong in this argument here. Can we really use the central limit theorem in this situation?
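Before turning to the puzzle, here is a quick numerical check of the PMF-entry approximation just worked out; it is only a sketch, with the same n = 36 and p = 1/2.

```python
from math import comb
from scipy.stats import norm

n, p, mean, std = 36, 0.5, 18, 3
exact  = comb(n, 19) * p ** 19 * (1 - p) ** (n - 19)                     # exact P(Sn = 19)
approx = norm.cdf((19.5 - mean) / std) - norm.cdf((18.5 - mean) / std)   # area between the midpoints
print(round(exact, 4), round(approx, 4))                                 # both about 0.125
```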
So, back to the puzzle: what do we need for the central limit theorem? We need to have independent, identically distributed random variables. We have that here. We want them to have a finite mean and finite variance. We also have that here; the means and variances are finite. What is another assumption that was never made explicit, but essentially was there? Or in other words, what is the flaw in this argument that uses the central limit theorem here? Any thoughts? So in the central limit theorem, we said, consider-- fix a probability distribution, and let the Xi's be distributed according to that probability distribution, and add a larger and larger number of Xi's. But the underlying, unstated assumption is that we fix the distribution of the Xi's. As we let n increase, the statistics of each Xi do not change. Whereas here, I'm playing a trick on you. As I'm taking more and more random variables, I'm actually changing what those random variables are. When I take a larger n, the Xi's are random variables with a different mean and different variance. So I'm adding more of these, but at the same time, in this example, I'm changing their distributions. That's something that doesn't fit the setting of the central limit theorem. In the central limit theorem, you first fix the distribution of the X's. You keep it fixed, and then you consider adding more and more according to that particular fixed distribution. So that's the catch. That's why the central limit theorem does not apply to this situation. And we're lucky that it doesn't apply because, otherwise, we would have a huge contradiction destroying probability theory. OK, but now that still leaves us with a little bit of a dilemma, because here, essentially, we're adding independent Bernoulli random variables. The issue is that the central limit theorem has to do with asymptotics as n goes to infinity. And if we consider a binomial, and somebody gives us specific numbers for the parameters of that binomial, it might not necessarily be obvious what kind of approximation to use. In particular, we do have two different approximations for the binomial. If we fix p, then the binomial is the sum of Bernoullis that come from a fixed distribution, and we consider more and more of these. When we add them, the central limit theorem tells us that we get the normal distribution. There's another sort of limit, which has the flavor of this example, in which we still deal with a binomial, a sum of n Bernoullis. We let the number of Bernoullis go to infinity, but each Bernoulli has a probability of success that goes to 0, and we do this in a way such that np, the expected number of successes, stays finite. This is the situation that we dealt with when we first defined our Poisson process. We have a very, very large number of time slots, but during each time slot, there's a tiny probability of obtaining an arrival. Under that setting, in discrete time, we have a binomial distribution, or Bernoulli process, but when we take the limit, we obtain the Poisson process and the Poisson approximation. So these are two equally valid approximations of the binomial. But they're valid in different asymptotic regimes. In one regime, we fix p and let n go to infinity. In the other regime, we let both n and p change simultaneously. Now, in real life, you're never dealing with the limiting situations. You're dealing with actual numbers.
So if somebody tells you that the numbers are like this, then you should probably say that this is the situation that fits the Poisson description-- a large number of slots, with each slot having a tiny probability of success. On the other hand, if p is something like 0.1, and n is 500, then the distribution of the number of successes is going to have a mean of 50 and a fair amount of spread around there. It turns out that the normal approximation would be better in this context. As a rule of thumb, if n times p is bigger than 10 or 20, you can start using the normal approximation. If n times p is a small number, then you prefer to use the Poisson approximation. But there are no hard theorems or rules about how to go about this. OK, so next time we're going to switch gears again. And we're going to put together everything we have learned in this class to start solving inference problems.
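The rule of thumb can be seen numerically. The sketch below uses illustrative parameter choices (they are not the lecture's numbers) for the two regimes: small np, where the Poisson approximation is the natural one, and large np, where the normal approximation with the 1/2 correction does well.

```python
from math import comb, exp, factorial
from scipy.stats import norm

def binom_pmf(n, p, k):
    """Exact binomial probability P(k successes in n trials)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Regime 1: n large, p tiny, n*p small -> Poisson approximation with mean n*p
n, p, k = 100, 0.01, 2
poisson = exp(-n * p) * (n * p) ** k / factorial(k)
print(round(binom_pmf(n, p, k), 4), round(poisson, 4))     # about 0.185 and 0.184

# Regime 2: n*p large -> normal approximation with the 1/2 correction
n, p, k = 500, 0.1, 50
mean, std = n * p, (n * p * (1 - p)) ** 0.5
normal = norm.cdf((k + 0.5 - mean) / std) - norm.cdf((k - 0.5 - mean) / std)
print(round(binom_pmf(n, p, k), 4), round(normal, 4))      # both about 0.059
```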
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Random_Number_of_Coin_Flips.txt
Hey, everyone. Welcome back. Today, we're going to do another fun problem that has to do with a random number of coin flips. So the experiment we're going to run is as follows. We're given a fair six-sided die, and we roll it. And then we take a fair coin, and we flip it the number of times indicated by the die. That is to say, if I roll a four on my die, then I flip the coin four times. And then we're interested in some statistics regarding the number of heads that show up in our sequence. In particular, we want to compute the expectation and the variance of the number of heads that we see. So the first step of this problem is to translate the English to the math. So we have to define some notation. I went ahead and did that for us. I defined n to be the outcome of the die role. Now, since we flip the coin the number of times shown by the die roll, n is equivalently the number of flips that we perform. And n, of course, is a random variable, and I've written its PMF up here. So Pn of n is just a discrete uniform random variable between 1 and 6, because we're told that the die has six sides and that it's fair. Now, I also defined h to be the number of heads that we see. So that's the quantity of interest. And it turns out that Bernoulli random variables will be very helpful to us in this problem. So I defined x sub i as 1 if the ith flip is heads, and 0 otherwise. And what we're going to do now is, we're going to use these x sub i's to come up with an expression for h. So if you want to count the number of heads, one possible thing you could do is start with 0 and then look at the first coin flip. If it's heads, you add 1 to 0, which I'm going to call your running sum. If the first flip is tails, you add 0. And similarly, after that, after every trial, if you see heads, you add 1 to your running sum. If you see a tails, you add 0. And in that way, we can precisely compute h. So the mathematical statement of what I just said is that h is equal to x1 plus x2 plus x3, all the way through x sub n. So now, we are interested in computing e of h, the expectation of h. So your knee jerk reaction might be to say, oh, well, by linearity of expectation, we know that this is an expectation of x1, et cetera through the expectation of xn. But in this case, you would actually be wrong. Don't do that. And the reason that this is not going to work for us is because we're dealing with a random number of random variables. So each xi is a random variable. And we have capital n of them. But capital n is a random variable. It denotes the outcome of our die roll. So we actually cannot just take the sum of these expectations. Instead, we're going to have to condition on n and use iterated expectation. So this is the mathematical statement of what I just said. And the reason why this works is because conditioning on n will take us to the case that we already know how to deal with, where we have a known number of random variables. And of course, iterated expectations holds, as you saw in lecture. I will briefly mention here that the formula we're going to derive is derived in the book. And it was probably derived in lecture. So if you want, you can just go to that formula immediately. But I think the derivation of the formula that we need is quick and is helpful. So I'm going to go through it quickly. Let's do it over here. Plugging in our running sum for h, we get this expression-- x1 plus x2 et cetera plus xn, conditioned on n. And this, of course, is n times the expectation of x sub i. 
So again, I'm going through this quickly, because it's in the book. But this step holds, because each of these xi's have the same statistics. They're all Bernoulli with parameter of 1/2, because our coin is fair. And so I used x sub i to say it doesn't really matter which integer you pick for i, because the expectation of xi is the same for all i. So this now, the expectation of x sub i, this is just a number, it's just some constant, so you can pull it out of the expectation. So you get the expectation of x sub i times the expectation of n. So I gave away the answer to this a second ago. But x sub i is just a Bernoulli random variable with parameter of success of 1/2. And we know already that the expectation of such a random variable is just p, or 1/2. So this is 1/2 times expectation of n. And now n we know is a discrete uniform random variable. And there's a formula that I'm going to use, which hopefully some of you may remember. If you have a discrete uniform random variable that takes on values between a and b-- let's use w-- if you call this random variable w, then we have that the variance of w is equal to b minus a times b minus a plus 2 divided by 12. So that's the variance. We don't actually need the variance, but we will need this later. And the expectation of w-- actually, let's just do it up here right ahead for this problem. Because we have a discrete uniform random variable, the expectation is just the middle. So you agree hopefully that the middle is right at 3.5, which is also 7/2. So this is times 7/2, which is equal to 7/4. So we are done with part of part a. I'm going to write this answer over here, so I can erase. And we're going to do something very similar to compute the variance. To compute the variance, we are going to also condition on n. So we get rid of this source of randomness. And then we're going to use law of total variance, which you've also seen in lecture. And again, the formula for this variance is derived in the book. So I'm going to go through it quickly. But make sure you understand this derivation, because it exercises a lot of stuff we taught you. So this, just using law of total variance, is the variance of expectation of h given n, plus the expectation of the variance of h given n. And now, plugging in this running sum for h, you get this. It's a mouthful to write. Bear with me. x1 through xn given n-- so I didn't do anything fancy. I just plugged this into here. So this term is similar to what we saw in a previous problem. By linearity of expectation and due to the fact that all of the x i's are distributed in the same way, they have the same expectation, this becomes n times the expectation of x sub i. And let's do this term over here. This term-- well, conditioned on n, this n is known. So we essentially have a finite known sum of independent random variables. We know that the variance of a sum of independent random variables is the sum of the variances. So this is the variance of x1 plus the variance of x2 et cetera, plus the variance of xn. And furthermore, again, because all of these xi's have the same distribution, the variance is the same. So we can actually write this as n times the variance of x sub i, where x sub i just corresponds to one of the trials. It doesn't matter which one, because they all have the same variance and expectation. So now, we're almost home free. This is just some scaler. So we can take it out of the variance, but we have to square it. So this becomes expectation of xi squared times the variance of n. 
And then this variance is also just a scalar, so we can take it outside. So then we get variance of x sub i times expectation of n. Now, we know that the expectation of x sub i is just the probability of success, which is 1/2. So we have 1/2 squared, or 1/4, times the variance of n. So that's where this formula comes in handy. b is equal to 6, a is equal to 1. So we get that the variance of n is equal to 5 times-- and then 5 plus 2 is 7-- divided by 12. So this is just a formula from the book that you guys hopefully remember. So we get 35/12. And then the variance of xi, we know the variance of a Bernoulli random variable is just p times 1 minus p. So in our case, that's 1/2 times 1/2, which is 1/4. So we get 1/4. And then the expectation of n, we remember from our previous computation, is just 7/2. So I will let you guys do this arithmetic on your own time. But the answer comes out to be 77/48. So I will go ahead and put our answer over here-- 77/48-- so that I can erase. So I want you guys to start thinking about part b while I erase. Essentially, you do the same experiment that we did in part a, except now we use two dice instead of one. So in part b, just to repeat, you now have two dice. You roll them. You look at the outcome. If you have an outcome of four on one die and six on another die, then you flip the coin 10 times. So it's the same exact experiment. We're interested in the number of heads we want the expectation and the variance. But this step is now a little bit different. Again, let's approach this by defining some notation first. Now, I want to let n1 be the outcome of the first die. And then you can let n2 be the outcome of the second die. And we'll start with just that. So one way you could approach this problem is say, OK, if n1 is the outcome of my first die and n2 is the outcome of my second die, then the number of coin flips that I'm going to make is n1 plus n2. This is the total coin flips. So you could just repeat the same exact math that we did in part a, except everywhere that you see an n, you replace that n with n1 plus n2. So that will get you to your answer, but it will require slightly more work. We're going to think about this problem slightly differently. So the way we are thinking about it just now, we roll two dice at the same time. We add the results of the die rolls. And then we flip the coin that number of times. But another way you can think about this is, you roll one die, and then you flip the coin the number of times shown by that die and count the number of heads. And then you take the second die and you roll it. And then you flip the coin that many more times and count the number of heads after that. So you could define h1 to be number of heads in the first n1 coin flips. And you could just let h2 be the number of heads in the last n2 coin flips. So hopefully that terminology is not confusing you. Essentially, what I'm saying is, n1 plus n2 means you'll have n1 flips, followed by n2 flips, for a total of n1 plus n2 flips. And then within the first n1 flips, you can get some number of heads, which we're calling h1. And in the last n2 flips, you can get some number of heads, which is h2. So the total number of heads that we get at the end-- I'm going to call it h star-- is equal to h1 plus h2. And what part b is really asking us for is the expectation of h star and the variance of h star. But here's where something really beautiful happens. h1 and h2 are independent, and they are statistically the same. 
So the reason why they're independent is because-- well, first of all, all of our coin flips are independent. And they're statistically the same, because the experiment is exactly the same. And everything's independent. So instead of imagining one person rolling two dice and then summing the outcomes and flipping a coin that many times and counting heads, you can imagine one person takes one die and goes into one room. A second person takes a second die and goes into another room. They run their experiments. Then they report back to a third person the number of heads. And that person adds them together to get h star. And in that scenario, everything is very clearly independent. So the expectation of h star-- you actually don't need independence for this part, because linearity of expectation always holds. But you get the expectation of h1 plus the expectation of h2. And because these guys are statistically equivalent, this is just two times the expectation of h. And the expectation of h we calculated in part a. So this is 2 times 7 over 4. Now, for the variance, here's where the independence comes in. I'm actually going to write this somewhere where I don't have to bend over. So the variance of h star is equal to the variance of h1 plus the variance of h2, by independence. And that's equal to 2 times the variance of h, because they are statistically the same. And the variance of h we computed already. So this is just 2 times 77 over 48. So the succinct answer to part b is that both the mean and the variance double from part a. So hopefully you guys enjoyed this problem. We covered a bunch of things. So we saw how to deal with having a random number of random variables. Usually we have a fixed number of random variables. In this problem, the number of random variables we were adding together was itself random. So to handle that, we conditioned on n. And to compute the expectation, we used iterated expectations. To compute the variance, we used the law of total variance. And then in part b, we were just a little bit clever. We thought about how we could reinterpret this experiment to reduce computation. And we realized that part b is essentially two independent trials of part a. So both the mean and the variance should double.
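One way to double-check all of these answers is brute-force enumeration of the joint distribution of the dice and the coin flips. The sketch below is not part of the original solution; it just recomputes the mean and variance of the number of heads exactly, for one die and for two dice, using exact fractions.

```python
from fractions import Fraction
from math import comb

def flips_pmf(num_dice):
    """PMF of the total number of coin flips: the sum of num_dice fair six-sided dice."""
    pmf = {0: Fraction(1)}
    for _ in range(num_dice):
        new = {}
        for total, pr in pmf.items():
            for face in range(1, 7):
                new[total + face] = new.get(total + face, Fraction(0)) + pr / 6
        pmf = new
    return pmf

def heads_pmf(num_dice):
    """PMF of the total number of heads, conditioning on the number of flips."""
    out = {}
    for n, pn in flips_pmf(num_dice).items():
        for h in range(n + 1):
            out[h] = out.get(h, Fraction(0)) + pn * comb(n, h) * Fraction(1, 2) ** n
    return out

def mean_and_variance(pmf):
    mean = sum(k * p for k, p in pmf.items())
    var = sum((k - mean) ** 2 * p for k, p in pmf.items())
    return mean, var

print(mean_and_variance(heads_pmf(1)))   # (7/4, 77/48), the part (a) answers
print(mean_and_variance(heads_pmf(2)))   # (7/2, 77/24), both exactly doubled, as in part (b)
```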
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Convergence_in_Probability_and_in_the_Mean_Part_2.txt
For parts E and F of the problem, we'll be introducing a new notion of convergence, the so-called convergence in the mean squared sense. We say that xn converges to a number c in mean squared if, as we take n to infinity, the expected value of (xn minus c) squared goes to 0. To get a sense of what this looks like, let's say we let c equal the expected value of xn, and let's say the expected value of xn is always the same. So the sequence of random variables has the same mean. Well, if that is true, then mean square convergence simply says the limit of the variance of xn is 0. So as you can imagine, as n becomes big, the variance of xn is very small, so xn is basically highly concentrated around c-- and by this I mean that the density function of xn is highly concentrated around c. So that's the notion of convergence we'll be working with. Our first task here is to show that mean square convergence is, in some sense, stronger than the convergence in probability that we have been working with from part A to part D. That is, if I know that xn converges to some number c in mean squared, then this must imply that xn converges to c in probability. And now we'll show that, for part E. Well, let's start with the definition of convergence in probability. We want to show that for a fixed constant epsilon, the probability that xn minus c, in absolute value, is greater than epsilon goes to 0 as n goes to infinity. To do so, we look at the value of this term. Well, the probability that the absolute value of xn minus c is greater than epsilon stays the same if we square both sides of the inequality. So that is equal to the probability that (xn minus c) squared is greater than epsilon squared. We can do this because both sides are positive, hence this goes through. Now, to bound this quantity, we'll invoke Markov's inequality, which says that the probability of a random variable being greater than epsilon squared is less than or equal to the expected value of that random variable-- in this case, the expected value of (xn minus c) squared-- divided by the threshold that we're trying to cross. So that is Markov's inequality. Now, since we know xn converges to c in mean squared, by the definition of mean square convergence we know this precise expectation right here goes to 0. And therefore, the whole expression goes to 0 as n goes to infinity, because the denominator here is a constant and the numerator here goes to 0. So now we have it. We know that the probability that xn minus c, in absolute value, is greater than epsilon goes to 0 as n goes to infinity, for every fixed value of epsilon, and this is the definition of convergence in probability. So now we know that if xn converges to c in mean squared, this implies that xn converges to c in probability. One might wonder whether the reverse is true. Namely, if we know something converges in probability to a constant, does the same sequence of random variables converge to the same constant in mean squared? It turns out that is not quite the case. The notion of convergence in probability is not as strong as the notion of convergence in mean squared. Again, to look for a counterexample, we do not have to go further than the yn's we have been working with. So here we know that yn converges to 0 in probability. But it turns out it does not converge to 0 in the mean squared sense. And to see why this is the case, we can take the expected value of (yn minus 0) squared, and see how that goes.
Well, the value of this can be computed easily: it is 0 squared times the probability 1 minus 1/n that yn is equal to 0, plus n squared times the probability 1 over n that yn takes the value n. The whole expression evaluates to n, which blows up to infinity as n goes to infinity. As a result, the limit as n goes to infinity of E of (yn minus 0) squared is infinity and is not equal to 0. And there we have it: even though yn converges to 0 in probability, because the variance of yn, in some sense, is too big, it does not converge in the mean squared sense.
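A small numerical illustration of this example, with yn taking the value 0 with probability 1 - 1/n and the value n with probability 1/n: the tail probability shrinks while the second moment blows up.

```python
# yn = 0 with probability 1 - 1/n, and yn = n with probability 1/n
for n in [10, 100, 1000, 10_000]:
    tail = 1 / n                                               # P(|yn - 0| > epsilon), for 0 < epsilon < n
    second_moment = 0 ** 2 * (1 - 1 / n) + n ** 2 * (1 / n)    # E[(yn - 0)^2] = n
    print(n, tail, second_moment)                              # tail -> 0, second moment -> infinity
```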
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Sum_of_Discrete_and_Continuous_Random_Variables.txt
In this video, we're going to do an example in which we derive the probability density function of the sum of two random variables. The problem tells us the following. We're given that X and Y are independent random variables. X is a discrete random variable with PMF pX. Y is continuous with PDF fY. And we'd like to compute the PDF of Z, which is equal to X plus Y. We're going to use the standard approach here-- compute the CDF of Z and then take the derivative to get the PDF. So in this case, the CDF, which is FZ, by definition is the probability that the random variable Z is less than or equal to little z. But Z is just X plus Y. So now, instead of having to deal with two random variables, X and Y, we'd like to deal with one at a time. And the total probability theorem allows us to do this by conditioning on one of the two random variables. Conditioning on Y here is a bit tricky, because Y is continuous, and you have to be careful with your definitions. So conditioning on X seems like the way to go. So let's do that. The weighting term is just the probability that X equals little x, which is exactly equal to the PMF of X evaluated at x. Now we're fixing X equal to little x, so we can actually replace every instance of the random variable X with little x. And now I'm going to just rearrange this so that it looks a little nicer. So I'm going to have Y on the left and say Y is less than z minus x, where z minus x is just a constant. Now, remember that X and Y are independent. So telling us something about X shouldn't change our beliefs about Y. So in this case, we can actually drop the conditioning. And this is exactly the CDF of Y evaluated at z minus x. So now we've simplified as far as we could. So let's take the derivative and see where that takes us. So the PDF of Z is, by definition, the derivative of the CDF, which we just computed here. This is the derivative with respect to z of the sum over x of FY(z minus x) times pX(x). What next? Interchange the derivative and the summation. And a note of caution here. So if X took on a finite number of values, you'd have a finite number of terms here. And this would be completely valid. You can just do this. But if X took on, for example, a countably infinite number of values-- a geometric random variable, for example-- this would actually require some formal justification. But I'm not going to get into that. So here, the derivative with respect to z-- you use the chain rule. pX doesn't matter, because it's not a function of z. So the derivative of the CDF FY evaluated at z minus x gives us the PDF fY evaluated at z minus x, times the derivative of the inner quantity, z minus x, which is just 1. So we don't need to put anything there. And we get pX of x. So there we go. We've derived the PDF of Z. Notice that this looks quite similar to the convolution formula when you assume that both X and Y are either continuous or discrete. And so that tells us that this looks right. So in summary, we've basically computed the PDF of X plus Y where X is discrete and Y is continuous. And we've used the standard two-step approach-- compute the CDF and then take the derivative to get the PDF.
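As a sketch of the formula in action, here is the mixed convolution fZ(z) = sum over x of pX(x) times fY(z - x), evaluated for one assumed choice of distributions: X Bernoulli with parameter 0.3 and Y standard normal. Both choices are purely illustrative.

```python
from math import exp, pi, sqrt

p_X = {0: 0.7, 1: 0.3}                    # assumed Bernoulli PMF, for illustration only

def f_Y(y):
    """Standard normal PDF, an assumed choice for the continuous variable Y."""
    return exp(-y ** 2 / 2) / sqrt(2 * pi)

def f_Z(z):
    """The derived formula: sum over x of p_X(x) * f_Y(z - x)."""
    return sum(px * f_Y(z - x) for x, px in p_X.items())

for z in (-1.0, 0.0, 0.5, 1.0, 2.0):
    print(z, round(f_Z(z), 4))            # a mixture of two shifted normal bumps
```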
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
13_Bernoulli_Process.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So by now you have seen pretty much every possible trick there is in basic probability theory, about how to calculate distributions, and so on. You have the basic tools to do pretty much anything. So what's coming after this? Well, probability is useful for developing the science of inference, and this is a subject to which we're going to come back at the end of the semester. Another chapter, which is what we will be doing over the next few weeks, is to deal with phenomena that evolve in time. So so-called random processes or stochastic processes. So what is this about? So in the real world, you don't just throw two random variables and go home. Rather the world goes on. So you generate the random variable, then you get more random variables, and things evolve in time. And random processes are supposed to be models that capture the evolution of random phenomena over time. So that's what we will be doing. Now when we have evolution in time, mathematically speaking, you can use discrete time or continuous time. Of course, discrete time is easier. And that's where we're going to start from. And we're going to start from the easiest, simplest random process, which is the so-called Bernoulli process, which is nothing but just a sequence of coin flips. You keep flipping a coin and keep going forever. That's what the Bernoulli process is. So in some sense it's something that you have already seen. But we're going to introduce a few additional ideas here that will be useful and relevant as we go along and we move on to continuous time processes. So we're going to define the Bernoulli process, talk about some basic properties that the process has, and derive a few formulas, and exploit the special structure that it has to do a few quite interesting things. By the way, where does the word Bernoulli come from? Well the Bernoulli's were a family of mathematicians, Swiss mathematicians and scientists around the 1700s. There were so many of them that actually-- and some of them had the same first name-- historians even have difficulty of figuring out who exactly did what. But in any case, you can imagine that at the dinner table they were probably flipping coins and doing Bernoulli trials. So maybe that was their pass-time. OK. So what is the Bernoulli process? The Bernoulli process is nothing but a sequence of independent Bernoulli trials that you can think of as coin flips. So you can think the result of each trial being heads or tails. It's a little more convenient maybe to talk about successes and failures instead of heads or tails. Or if you wish numerical values, to use a 1 for a success and 0 for a failure. So the model is that each one of these trials has the same probability of success, p. And the other assumption is that these trials are statistically independent of each other. So what could be some examples of Bernoulli trials? You buy a lottery ticket every week and you win or lose. Presumably, these are independent of each other. And if it's the same kind of lottery, the probability of winning should be the same during every week. Maybe you want to model the financial markets. And a crude model could be that on any given day the Dow Jones is going to go up or down with a certain probability. 
Well that probability must be somewhere around 0.5, or so. This is a crude model of financial markets. You say, probably there is more into them. Life is not that simple. But actually it's a pretty reasonable model. It takes quite a bit of work to come up with more sophisticated models that can do better predictions than just pure heads and tails. Now more interesting, perhaps to the examples we will be dealing with in this class-- a Bernoulli process is a good model for streams of arrivals of any kind to a facility. So it could be a bank, and you are sitting at the door of the bank. And at every second, you check whether a customer came in during that second or not. Or you can think about arrivals of jobs to a server. Or any other kind of requests to a service system. So requests, or jobs, arrive at random times. You split the time into time slots. And during each time slot something comes or something does not come. And for many applications, it's a reasonable assumption to make that arrivals on any given slot are independent of arrivals in any other time slot. So each time slot can be viewed as a trial, where either something comes or doesn't come. And different trials are independent of each other. Now there's two assumptions that we're making here. One is the independence assumption. The other is that this number, p, probability of success, is constant. Now if you think about the bank example, if you stand outside the bank at 9:30 in the morning, you'll see arrivals happening at a certain rate. If you stand outside the bank at 12:00 noon, probably arrivals are more frequent. Which means that the given time slot has a higher probability of seeing an arrival around noon time. This means that the assumption of a constant p is probably not correct in that setting, if you're talking about the whole day. So the probability of successes or arrivals in the morning is going to be smaller than what it would be at noon. But if you're talking about a time period, let's say 10:00 to 10:15, probably all slots have the same probability of seeing an arrival and it's a good approximation. So we're going to stick with the assumption that p is constant, doesn't change with time. Now that we have our model what do we do with it? Well, we start talking about the statistical properties that it has. And here there's two slightly different perspectives of thinking about what a random process is. The simplest version is to think about the random process as being just a sequence of random variables. We know what random variables are. We know what multiple random variables are. So it's just an experiment that has associated with it a bunch of random variables. So once you have random variables, what do you do instinctively? You talk about the distribution of these random variables. We already specified for the Bernoulli process that each Xi is a Bernoulli random variable, with probability of success equal to p. That specifies the distribution of the random variable X, or Xt, for general time t. Then you can calculate expected values and variances, and so on. So the expected value is, with probability p, you get a 1. And with probability 1 - p, you get a 0. So the expected value is equal to p. And then we have seen before a formula for the variance of the Bernoulli random variable, which is p times 1-p. So this way we basically now have all the statistical properties of the random variable Xt, and we have those properties for every t. Is this enough of a probabilistic description of a random process? Well, no. 
You need to know how the different random variables relate to each other. If you're talking about a general random process, you would like to know things. For example, the joint distribution of X2, with X5, and X7. For example, that might be something that you're interested in. And the way you specify it is by giving the joint PMF of these random variables. And you have to do that for every collection, or any subset, of the random variables you are interested in. So to have a complete description of a random processes, you need to specify for me all the possible joint distributions. And once you have all the possible joint distributions, then you can answer, in principle, any questions you might be interested in. How did we get around this issue for the Bernoulli process? I didn't give you the joint distributions explicitly. But I gave them to you implicitly. And this is because I told you that the different random variables are independent of each other. So at least for the Bernoulli process, where we make the independence assumption, we know that this is going to be the product of the PMFs. And since I have told you what the individual PMFs are, this means that you automatically know all the joint PMFs. And we can go to business based on that. All right. So this is one view of what a random process is, just a collection of random variables. There's another view that's a little more abstract, which is the following. The entire process is to be thought of as one long experiment. So we go back to the chapter one view of probabilistic models. So there must be a sample space involved. What is the sample space? If I do my infinite, long experiment of flipping an infinite number of coins, a typical outcome of the experiment would be a sequence of 0's and 1's. So this could be one possible outcome of the experiment, just an infinite sequence of 0's and 1's. My sample space is the set of all possible outcomes of this kind. Here's another possible outcome, and so on. And essentially we're dealing with a sample space, which is the space of all sequences of 0's and 1's. And we're making some sort of probabilistic assumption about what may happen in that experiment. So one particular sequence that we may be interested in is the sequence of obtaining all 1's. So this is the sequence that gives you 1's forever. Once you take the point of view that this is our sample space-- its the space of all infinite sequences-- you can start asking questions that have to do with infinite sequences. Such as the question, what's the probability of obtaining the infinite sequence that consists of all 1's? So what is this probability? Let's see how we could calculate it. So the probability of obtaining all 1's is certainly less than or equal to the probability of obtaining 1's, just in the first 10 tosses. OK. This is asking for more things to happen than this. If this event is true, then this is also true. Therefore the probability of this is smaller than the probability of that. This event is contained in that event. This implies this. So we have this inequality. Now what's the probability of obtaining 1's in 10 trials? This is just p to the 10th because the trials are independent. Now of course there's no reason why I chose 10 here. The same argument goes through if I use an arbitrary number, k. And this has to be true for all k. So this probability is less than p to the k, no matter what k I choose. Therefore, this must be less than or equal to the limit of this, as k goes to infinity. 
This is smaller than that for all k's. Let k go to infinity, take k arbitrarily large, this number is going to become arbitrarily small. It goes to 0. And that proves that the probability of an infinite sequence of 1's is equal to 0. So take limits of both sides. It's going to be less than or equal to the limit-- I shouldn't take a limit here. The probability is less than or equal to the limit of p to the k, as k goes to infinity, which is 0. So this proves in a formal way that the sequence of all 1's has 0 probability. If you have an infinite number of coin flips, what's the probability that all of the coin flips result in heads? The probability of this happening is equal to zero. So this particular sequence has 0 probability. Of course, I'm assuming here that p is less than 1, strictly less than 1. Now the interesting thing is that if you look at any other infinite sequence, and you try to calculate the probability of that infinite sequence, you would get a product of (1-p) times 1, 1-p times 1, 1-p, times p times p, times 1-p and so on. You keep multiplying numbers that are less than 1. Again, I'm making the assumption that p is between 0 and 1. So 1-p is less than 1, p is less than 1. You keep multiplying numbers less than 1. If you multiply infinitely many such numbers, the infinite product becomes 0. So any individual sequence in this sample space actually has 0 probability. And that is a little bit counter-intuitive perhaps. But the situation is more like the situation where we deal with continuous random variables. So if you could draw a continuous random variable, every possible outcome has 0 probability. And that's fine. But all of the outcomes collectively still have positive probability. So the situation here is very much similar. So the space of infinite sequences of 0's and 1's, that sample space is very much like a continuous space. If you want to push that analogy further, you could think of this as the expansion of a real number. Or the representation of a real number in binary. Take a real number, write it down in binary, you are going to get an infinite sequence of 0's and 1's. So you can think of each possible outcome here essentially as a real number. So the experiment of doing an infinite number of coin flips is sort of similar to the experiment of picking a real number at random. When you pick real numbers at random, any particular real number has 0 probability. So similarly here, any particular infinite sequence has 0 probability. So if we were to push that analogy further, there would be a few interesting things we could do. But we will not push it further. This is just to give you an indication that things can get pretty subtle and interesting once you start talking about random processes that involve forever, over the infinite time horizon. So things get interesting even in this context of the simple Bernoulli process. Just to give you a preview of what's coming further, today we're going to talk just about the Bernoulli process. And you should make sure before the next lecture-- I guess between the exam and the next lecture-- to understand everything we do today. Because next time we're going to do everything once more, but in continuous time. And in continuous time, things become more subtle and a little more difficult. But we are going to build on what we understand for the discrete time case. Now both the Bernoulli process and its continuous time analog has a property that we call memorylessness, whatever happened in the past does not affect the future. 
Later on in this class we're going to talk about more general random processes, so-called Markov chains, in which there are certain dependencies across time. That is, what has happened in the past will have some bearing on what may happen in the future. So it's like having coin flips where the outcome of the next coin flip has some dependence on the previous coin flip. And that gives us a richer class of models. And once we get there, essentially we will have covered all possible models. So for random processes that are practically useful and which you can manipulate, Markov chains are a pretty general class of models. And almost any real world phenomenon that evolves in time can be approximately modeled using Markov chains. So even though this is a first class in probability, we will get pretty far in that direction. All right. So now let's start doing a few calculations and answer some questions about the Bernoulli process. So again, the best way to think about models that correspond to the Bernoulli process is in terms of arrivals of jobs to a facility. And there's two types of questions that you can ask. In a given amount of time, how many jobs arrived? Or conversely, for a given number of jobs, how much time did it take for them to arrive? So we're going to deal with these two questions, starting with the first. For a given amount of time-- that is, for a given number of time periods-- how many arrivals have we had? How many of those Xi's happen to be 1's? We fix the number of time slots-- let's say n time slots-- and you measure the number of successes. Well, this is a very familiar random variable. The number of successes in n independent coin flips-- or in n independent trials-- is a binomial random variable. So we know its distribution is given by the binomial PMF, and it's just this, for k going from 0 up to n. And we know everything by now about this random variable. We know its expected value is n times p. And we know the variance, which is n times p, times 1-p. So there's nothing new here. That's the easy part. So now let's look at the opposite kind of question. Instead of fixing the time and asking how many arrivals, now let us fix the number of arrivals and ask how much time did it take. And let's start with the time until the first arrival. So the process starts. We've got our slots. And we see, perhaps, a sequence of 0's, and then at some point we get a 1. The number of trials it took until we got a 1, we're going to call it T1. And it's the time of the first arrival. OK. What is the probability distribution of T1? What kind of random variable is it? We've gone through this before. The event that the first arrival happens at time little t is the event that the first t-1 trials were failures, and trial number t happens to be a success. So for the first success to happen at time slot number 5, it means that the first 4 slots had failures and the 5th slot had a success. So the probability of this happening is the probability of having failures in the first t-1 trials, and having a success at trial number t. And this is the formula for t equal to 1, 2, and so on. So we know what this distribution is. It's the so-called geometric distribution. Let me just run through this for a minute. In the past, we did calculate the expected value of the geometric distribution, and it's 1/p. Which means that if p is small, you expect it to take a long time until the first success.
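A small sketch of these two basic descriptors, with illustrative numbers that are not from the lecture (n = 10 slots and an arrival probability p = 0.3 per slot): the binomial PMF for the number of arrivals in n slots, and the geometric PMF for the time of the first arrival.

```python
from math import comb

n, p = 10, 0.3          # illustrative choices: 10 slots, arrival probability 0.3 per slot

def binomial_pmf(k):
    """P(k arrivals in the n slots)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def geometric_pmf(t):
    """P(first arrival happens at slot t)."""
    return (1 - p) ** (t - 1) * p

print(sum(k * binomial_pmf(k) for k in range(n + 1)))      # mean number of arrivals: n*p = 3.0
print(sum(t * geometric_pmf(t) for t in range(1, 500)))    # mean of T1: close to 1/p = 3.33
```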
Back to T1: there's a formula also for the variance of T1, which we never formally derived in class, but it was in your textbook and it just happens to be this. All right. So nothing new until this point. Now, let's talk about this property, the memorylessness property. We kind of touched on this property when we did the derivation in class of the expected value of T1. Now what is the memoryless property? It's essentially a consequence of independence. If I tell you the results of my coin flips up to a certain time, this, because of independence, doesn't give you any information about the coin flips after that time. So knowing that we had lots of 0's here does not change what I believe about the future coin flips, because the future coin flips are going to be just independent coin flips with a given probability, p, of success. So this is a statement that I made about a specific time. That is, you do coin flips until 12 o'clock. And then at 12 o'clock, you start watching. No matter what happens before 12 o'clock, after 12:00, what you're going to see is just a sequence of independent Bernoulli trials with the same probability, p. Whatever happened in the past is irrelevant. Now instead of talking about the fixed time at which you start watching, let's think about a situation where your sister sits in the next room, flips the coins until she observes the first success, and then calls you inside. And you start watching after this time. What are you going to see? Well, you're going to see a coin flip with probability p of success. You're going to see another trial that has probability p of success, and these are all independent of each other. So what you're going to see starting at that time is going to be just a sequence of independent Bernoulli trials, as if the process was starting at this time. How long it took for the first success to occur doesn't have any bearing on what is going to happen afterwards. What happens afterwards is still a sequence of independent coin flips. And this story is actually even more general. So your sister watches the coin flips and at some point tells you, oh, something really interesting is happening here. I got this string of a hundred 1's in a row. Come and watch. Now when you go in there and you start watching, do you expect to see something unusual? There were unusual things that happened before you were called in. Does this mean that you're going to see unusual things afterwards? No. Afterwards, what you're going to see is, again, just a sequence of independent coin flips. The fact that some strange things happened before doesn't have any bearing as to what is going to happen in the future. So if the roulette wheels in the casino are properly made, the fact that there were 3 reds in a row doesn't affect the odds of whether on the next roll it's going to be a red or a black. So whatever happens in the past-- no matter how unusual it is-- at the time when you're called in, what's going to happen in the future is going to be just independent Bernoulli trials, with the same probability, p. The only case where this story changes is if your sister has a little bit of foresight. So suppose your sister can look ahead into the future and knows that the next 10 coin flips will be heads, and calls you in before those 10 flips happen. If she calls you in, then what are you going to see? You're not going to see independent Bernoulli trials, since she has psychic powers and she knows that the next ones will be 1's.
She called you in and you will see a sequence of 1's. So it's no longer independent Bernoulli trials. So what's the subtle difference here? The future is independent from the past, provided that the time that you are called and asked to start watching is determined by someone who doesn't have any foresight, who cannot see the future. If you are called in just on the basis of what has happened so far, then you don't have any information about the future. And one special case is the picture here. You have your coin flips. Once you see a 1 happen, once you see a success, you are called in. You are called in on the basis of what happened in the past, but without any foresight. OK. And this subtle distinction is what's going to make our next example interesting and subtle. So here's the question. You buy a lottery ticket every day, so we have a Bernoulli process that's running in time. And you're interested in the length of the first string of losing days. What does that mean? So suppose that a typical sequence of events could be this one. So what are we discussing here? We're looking at the first string of losing days, where losing days means 0's. So the string of losing days is this string here. Let's call the length of that string L. We're interested in the random variable which is the length of this interval. What kind of random variable is it? OK. Here's one possible way you might think about the problem. Starting from this time, and looking until this time here, what are we looking at? We're looking at the time, starting from here, until the first success. So the past doesn't matter. Starting from here we have coin flips until the first success. The time until the first success in a Bernoulli process-- we just discussed that it's a geometric random variable. So your first conjecture would be that this random variable here, which is 1 longer than the one we are interested in, perhaps is a geometric random variable. And if this were so, then you could say that the random variable L is a geometric, minus 1. Can that be the correct answer? A geometric random variable, what values does it take? It takes values 1, 2, 3, and so on. A geometric minus 1 would take values 0, 1, 2, and so on. Can the random variable L be 0? No. The random variable L is the length of a string of losing days. So the shortest that L could be would be just 1. If you get just one losing day and then you start winning, L would be equal to 1. So L cannot be 0 by definition, which means that L + 1 cannot be 1, by definition. But if L + 1 were geometric, it could be equal to 1. Therefore this random variable, L + 1, is not a geometric. OK. Why is it not geometric? I started watching at this time. From this time until the first success, that should be a geometric random variable. Where's the catch? If I'm asked to start watching at this time, it's because my sister knows that the next one was a failure. This is the time where the string of failures starts. In order to know that I should start watching here, it's the same as if I'm told that the next one is a failure. So to be asked to start watching at this time requires that someone looked into the future. And in that case, it's no longer true that these will be independent Bernoulli trials. In fact, they're not. If you start watching here, you're certain that the next one is a failure. The next one is not an independent Bernoulli trial. That's why the argument that would claim that this L + 1 is geometric would be incorrect.
So if this is not the correct answer, which is the correct answer? The correct answer goes as follows. Your sister is watching. Your sister sees the first failure, and then tells you, OK, the failures-- or losing days-- have started. Come in and watch. So you start watching at this time. And you keep watching until the first success comes. This will be a geometric random variable. So from here to here, this will be geometric. So things happen. You are asked to start watching. After you start watching, the future is just a sequence of independent Bernoulli trials. And the time until the first success occurs, this is going to be a geometric random variable with parameter p. And then you notice that the interval of interest is exactly the same as the length of this interval. This starts one time step later, and ends one time step later. So the conclusion is that L is actually geometric, with parameter p. OK, it looks like I'm missing one slide. Can I cheat a little from here? OK. So now that we've dealt with the time until the first arrival, we can start talking about the time until the second arrival, and so on. How do we define these? After the first arrival happens, we're going to have a sequence of time slots with no arrivals, and then the next arrival is going to happen. So we call this time that elapses-- or number of time slots after the first arrival until the next one-- we call it T2. This is the second inter-arrival time, that is, time between arrivals. Once this arrival has happened, then we wait and see how many more it takes until the third arrival. And we call this time here T3. We're interested in the time of the k-th arrival, which is going to be just the sum of the first k inter-arrival times. So for example, let's say Y3 is the time that the third arrival comes. Y3 is just the sum of T1, plus T2, plus T3. So we're interested in this random variable, Y3, and it's the sum of inter-arrival times. To understand what kind of random variable it is, I guess we should understand what kind of random variables these are going to be. So what kind of random variable is T2? Your sister is doing her coin flips until a success is observed for the first time. Based on that information about what has happened so far, you are called into the room. And you start watching until a success is observed again. So after you start watching, what you have is just a sequence of independent Bernoulli trials. So each one of these has probability p of being a success. The time it's going to take until the first success, this number, T2, is going to be again just another geometric random variable. It's as if the process just started. After you are called into the room, you have no foresight, you don't have any information about the future, other than the fact that these are going to be independent Bernoulli trials. So T2 itself is going to be geometric with the same parameter p. And then you can continue the argument and argue that T3 is also geometric with the same parameter p. Furthermore, whatever happened, however long it took until you were called in, it doesn't change the statistics of what's going to happen in the future. So whatever happens in the future is independent from the past. So T1, T2, and T3 are independent random variables. So the conclusion is that the time until the third arrival is the sum of 3 independent geometric random variables, with the same parameter. And this is true more generally. The time until the k-th arrival is going to be the sum of k independent random variables.
So in general, Yk is going to be T1 plus T2, all the way up to Tk, where the Ti's are geometric, with the same parameter p, and independent. So now, what's more natural than trying to find the distribution of the random variable Yk? How can we find it? So I fix k for you. Let's say k is 100. I'm interested in how long it takes until 100 customers arrive. How can we find the distribution of Yk? Well, one way of doing it is to use this lovely convolution formula. Take a geometric, convolve it with another geometric, you get something. Take that something that you got, convolve it with a geometric once more, do this 99 times, and this gives you the distribution of Yk. So that's definitely doable, but it's extremely tedious. Let's try to find the distribution of Yk using a shortcut. So, the probability that Yk is equal to t. So we're trying to find the PMF of Yk. k has been fixed for us. And we want to calculate this probability for the various values of t, because this is going to give us the PMF of Yk. OK. What is this event? What does it take for the k-th arrival to be at time t? For that to happen, we need two things. In the first t - 1 slots, how many arrivals should we have gotten? k - 1. And then in the last slot, we get one more arrival, and that's the k-th one. So this is the probability that we have k - 1 arrivals in the time interval from 1 up to t - 1, and then an arrival at time t. That's the only way that it can happen, that the k-th arrival happens at time t. We need to have an arrival at time t. And before that time, we need to have exactly k - 1 arrivals. Now, this is an event that refers to the first t - 1 time slots: in the previous t - 1 time slots we had exactly k - 1 arrivals. And then at the last time slot we get one more arrival. Now, the interesting thing is that this event here has to do with what happened from time 1 up to time t - 1. This event has to do with what happened at time t. Different time slots are independent of each other. So this event and that event are independent. So this means that we can multiply their probabilities. So take the probability of this. What is that? Well, the probability of having a certain number of arrivals in a certain number of time slots, these are just the binomial probabilities. So this is, out of t - 1 slots, choose k - 1 arrivals, times p to the (k - 1), times (1 - p) to the power (t - 1) minus (k - 1), which gives us (1 - p) to the (t - k). And then we multiply with this probability, the probability of an arrival at time t, which is equal to p. And so this is the formula for the PMF of the time it takes until the k-th arrival happens. Does it agree with the formula in your handout? Or it's not there? It's not there. OK. Yeah. OK. So that's the formula, and it is true for what values of t? [INAUDIBLE]. It takes at least k time slots in order to get k arrivals, so this formula should be true for t larger than or equal to k. All right. So this gives us the PMF of the random variable Yk. Of course, we may also be interested in the mean and variance of Yk. But this is a lot easier. Since Yk is the sum of independent random variables, the expected value of Yk is going to be just k times the expected value of your typical T. So the expected value of Yk is going to be just k times 1/p, where 1/p is the mean of the geometric. And similarly for the variance, it's going to be k times the variance of a geometric. So we have everything there is to know about the distribution of how long it takes until the k-th arrival comes. OK. Finally, let's do a few more things about the Bernoulli process.
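As a quick numerical sanity check of the PMF just derived (a sketch rather than anything from the lecture; the values of p and k below are arbitrary, since the lecture keeps them generic), you can verify that the formula sums to 1 over t = k, k+1, ... and that a direct simulation of the arrival process reproduces the mean k/p.

```python
import math, random

p, k = 0.3, 4   # assumed values; the lecture keeps p and k generic

def pmf_Yk(t, k, p):
    # P(Yk = t): k-1 arrivals in the first t-1 slots, then an arrival at slot t
    return math.comb(t - 1, k - 1) * p**k * (1 - p)**(t - k)

# the PMF should sum to (essentially) 1 over t = k, k+1, ...
print(sum(pmf_Yk(t, k, p) for t in range(k, 2000)))

def simulate_Yk(k, p):
    # run the Bernoulli process until the k-th arrival and report the slot number
    t, arrivals = 0, 0
    while arrivals < k:
        t += 1
        if random.random() < p:
            arrivals += 1
    return t

samples = [simulate_Yk(k, p) for _ in range(20000)]
print("simulated mean:", sum(samples) / len(samples), "  formula k/p:", k / p)
```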
It's interesting to talk about several processes at the time. So in the situation here of splitting a Bernoulli process is where you have arrivals that come to a server. And that's a picture of which slots get arrivals. But actually maybe you have two servers. And whenever an arrival comes to the system, you flip a coin and with some probability, q, you send it to one server. And with probability 1-q, you send it to another server. So there is a single arrival stream, but two possible servers. And whenever there's an arrival, you either send it here or you send it there. And each time you decide where you send it by flipping an independent coin that has its own bias q. The coin flips that decide where do you send it are assumed to be independent from the arrival process itself. So there's two coin flips that are happening. At each time slot, there's a coin flip that decides whether you have an arrival in this process here, and that coin flip is with parameter p. And if you have something that arrives, you flip another coin with probabilities q, and 1-q, that decides whether you send it up there or you send it down there. So what kind of arrival process does this server see? At any given time slot, there's probability p that there's an arrival here. And there's a further probability q that this arrival gets sent up there. So the probability that this server sees an arrival at any given time is p times q. So this process here is going to be a Bernoulli process, but with a different parameter, p times q. And this one down here, with the same argument, is going to be Bernoulli with parameter p times (1-q). So by taking a Bernoulli stream of arrivals and splitting it into two, you get two separate Bernoulli processes. This is going to be a Bernoulli process, that's going to be a Bernoulli process. Well actually, I'm running a little too fast. What does it take to verify that it's a Bernoulli process? At each time slot, it's a 0 or 1. And it's going to be a 1, you're going to see an arrival with probability p times q. What else do we need to verify, to be able to tell-- to say that it's a Bernoulli process? We need to make sure that whatever happens in this process, in different time slots, are statistically independent from each other. Is that property true? For example, what happens in this time slot whether you got an arrival or not, is it independent from what happened at that time slot? The answer is yes for the following reason. What happens in this time slot has to do with the coin flip associated with the original process at this time, and the coin flip that decides where to send things. What happens at that time slot has to do with the coin flip here, and the additional coin flip that decides where to send it if something came. Now all these coin flips are independent of each other. The coin flips that determine whether we have an arrival here is independent from the coin flips that determined whether we had an arrival there. And you can generalize this argument and conclude that, indeed, every time slot here is independent from any other time slot. And this does make it a Bernoulli process. And the reason is that, in the original process, every time slot is independent from every other time slot. And the additional assumption that the coin flips that we're using to decide where to send things, these are also independent of each other. So we're using here the basic property that functions of independent things remain independent. There's a converse picture of this. 
Instead of taking one stream and splitting it into two streams, you can do the opposite. You could start from two streams of arrivals. Let's say you have arrivals of men and you have arrivals of women, but you don't care about gender. And the only thing you record is whether, in a given time slot, you had an arrival or not. Notice that here we may have an arrival of a man and the arrival of a woman. We just record it with a 1, by saying there was an arrival. So in the merged process, we're not keeping track of how many arrivals we had total. We just record whether there was an arrival or not an arrival. So an arrival gets recorded here if, and only if, one or both of these streams had an arrival. So that we call a merging of two Bernoull-- of two processes, of two arrival processes. So let's make the assumption that this arrival process is independent from that arrival process. So what happens at the typical slot here? I'm going to see an arrival, unless none of these had an arrival. So the probability of an arrival in a typical time slot is going to be 1 minus the probability of no arrival. And the event of no arrival corresponds to the first process having no arrival, and the second process having no arrival. So there's no arrival in the merged process if, and only if, there's no arrival in the first process and no arrival in the second process. We're assuming that the two processes are independent and that's why we can multiply probabilities here. And then you can take this formula and it simplifies to p + q, minus p times q. So each time slot of the merged process has a certain probability of seeing an arrival. Is the merged process a Bernoulli process? Yes, it is after you verify the additional property that different slots are independent of each other. Why are they independent? What happens in this slot has to do with that slot, and that slot down here. These two slots-- so what happens here, has to do with what happens here and there. What happens in this slot has to do with whatever happened here and there. Now, whatever happens here and there is independent from whatever happens here and there. Therefore, what happens here is independent from what happens there. So the independence property is preserved. The different slots of this merged process are independent of each other. So the merged process is itself a Bernoulli process. So please digest these two pictures of merging and splitting, because we're going to revisit them in continuous time where things are little subtler than that. OK. Good luck on the exam and see you in a week.
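To see the splitting and merging probabilities come out of a simulation, here is a short sketch (the particular values of p and q are arbitrary assumptions; the split uses one Bernoulli(p) stream plus an independent routing coin with bias q, and the merge combines two independent streams with parameters p and q, exactly as in the discussion above).

```python
import random

p, q = 0.4, 0.3        # assumed parameters for the arrival process and the routing coin
n_slots = 200_000

up = down = 0          # arrivals seen by each server after splitting
merged = 0             # arrivals in a merged stream built from two independent processes

for _ in range(n_slots):
    # splitting: a Bernoulli(p) arrival routed up with probability q, down otherwise
    if random.random() < p:
        if random.random() < q:
            up += 1
        else:
            down += 1
    # merging: two independent streams; record an arrival if either (or both) has one
    if (random.random() < p) or (random.random() < q):
        merged += 1

print("split up:  ", up / n_slots,     "  expected", p * q)
print("split down:", down / n_slots,   "  expected", p * (1 - q))
print("merged:    ", merged / n_slots, "  expected", p + q - p * q)
```

With enough slots, the three empirical frequencies settle near p times q, p times (1 - q), and p + q - pq.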
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Normal_Probability_Calculation.txt
Hi. In this video, we're going to do standard probability calculations for normal random variables. We're given that x is standard normal with mean 0 and variance 1. And y is normal with mean 1 and variance 4. And we're asked for a couple of probabilities. For the normal CDF, we don't have a closed-form expression. And so people generally tabulate values for the standard normal case. So if we want little x equal to 3.49, we just look for 3.4 along the rows and 0.09 along the columns, and then pick the value appropriately. So for part A, we're asked, what's the probability that x is less than or equal to 1.5? That's exactly phi of 1.5, and we can look that up-- 1.5 directly-- and that's 0.9332. Then we're asked, what's the probability that x is less than or equal to negative 1? Notice that negative values are not on this table. And the reason is that the standard normal is symmetric around zero, so we don't really need them. We just recognize that the area in this region is exactly the area in this region. And so that's equal to the probability that x is greater than or equal to 1. This is equal to 1 minus the probability that x is less than 1. And we can put the equal sign in here because x is continuous, it doesn't matter. And so we're going to get, this is equal to 1 minus phi of 1. And we can look up phi of 1, which is at 1.00, and that's 0.8413. OK. For part B, we're asked for the distribution of y minus 1 over 2. So any linear function of a normal random variable is also normal. And you can see that by using the derived distribution for linear functions of random variables. So in this case, we only need to figure out what's the mean and the variance of this normal random variable. So for the mean in this case, I'm going to write the random variable as y over 2 minus 1/2. The expectation operator is linear, and the expectation of y in this case is 1, so the mean is going to be 1/2 minus 1/2, which is 0. Now the variance. The shift doesn't affect the spread. And so the variance is exactly going to be the same without the minus 1/2. And for the constant, you can just pull that out and square it. And the variance of y we know is 4. And so that's 1/4 times 4, that's 1. OK. So now we know that y minus 1 over 2 is actually standard normal. Actually, for any normal random variable, you can follow the same procedure. You just subtract its mean, which is 1 in this case, and divide by its standard deviation, and you will get a standard normal distribution. All right, so for part C we want the probability that y is between negative 1 and 1. So let's try to massage it so that we can use the standard normal table. And we already know that this is standard normal, so let's subtract 1 from each part and divide by 2. And that's equal to-- I'm going to call this standard normal z, so that's easier to write. And that's equal to negative 1 less than or equal to z, less than or equal to zero. So we're looking for this region, between negative 1 and 0. So that's just the probability that it's less than zero, minus the probability that it's less than negative 1. Well, for a standard normal, half the mass is below zero and half the mass is above. And so the first part is just going to be 0.5 directly. And for the second, we've already computed this for a standard normal, which was x in our case. And that was 1 minus 0.8413. Done. So we basically calculated a few standard probabilities for normal distributions. And we did that by looking them up from the standard normal table.
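If you'd rather not read the table, the same three answers can be computed numerically; here is a small sketch using the standard relation between the normal CDF and the error function (this is just a convenient way to evaluate phi, not something used in the video itself).

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# part A: P(X <= 1.5) and P(X <= -1) for X ~ N(0, 1)
print(Phi(1.5))            # ~0.9332
print(1 - Phi(1))          # ~0.1587, same as Phi(-1) by symmetry

# part C: P(-1 <= Y <= 1) for Y ~ N(1, 4); standardize with Z = (Y - 1)/2
print(Phi(0) - Phi(-1))    # ~0.3413
```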
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Inferring_a_Parameter_of_Uniform_Part_2.txt
Welcome back. So now we're going to finish the rest of this problem. For part (e), we've calculated what the MAP and LMS estimators are. And now we're going to calculate what the conditional mean squared error is. So it's a way to measure how good these estimators are. So let's start out generically. For any estimator theta hat, the conditional MSE-- the conditional mean squared error-- is equal to this: it's the estimator minus the actual value, squared, conditioned on X being equal to some little x. So the mean squared error. So you take the error, which is the difference between your estimator and the true value, square it, and then take the mean. And it's conditioned on the actual value of what x is. Or, conditioned on the data that you get. So to calculate this, we use our standard definition of what a conditional expectation would be. So it's theta hat minus theta, squared. And we weight that by the appropriate conditional PDF, which in this case would be the posterior. And we integrate this from theta equals x to theta equals 1. Now, we can go through some algebra, and this will tell us that this is theta hat squared, minus 2 theta hat theta, plus theta squared. And this posterior we know from before is 1 over theta times absolute value of log x, d theta. And when we do out this integral, we can split it up into three different terms. So there's theta hat squared times this, and you integrate it. But in fact, this is just a conditional density. When you integrate it from x to 1, this will just integrate up to 1, because it is a valid density. So the first term is just theta hat squared. Now, for the second term, you can pull out the minus 2 theta hat, and integrate theta times 1 over theta times absolute value of log of x, d theta, from x to 1. And then the last one is the integral of theta squared times 1 over theta times absolute value of log x, d theta, from x to 1. OK, so with some more calculus, we get the final answer. The middle integral will integrate to (1 minus x) over absolute value of log x. And the last one will integrate to (1 minus x squared) over 2 times absolute value of log x. So this tells us, for any generic estimate theta hat, the conditional mean squared error is theta hat squared, minus 2 theta hat times (1 minus x) over absolute value of log x, plus (1 minus x squared) over 2 times absolute value of log x. Now, let's calculate what it actually is for the specific estimates that we actually came up with. So for the MAP rule, the estimate theta hat is just equal to x. So when we plug that into this, we get that the conditional MSE is just equal to x squared, minus 2x times (1 minus x) over absolute value of log x, plus (1 minus x squared) over 2 times absolute value of log of x. And for the LMS estimate, remember this was equal to-- theta hat was (1 minus x) over absolute value of log x. And so when you plug this particular theta hat into this formula, what you get is that the conditional mean squared error is equal to (1 minus x squared) over 2 times absolute value of log of x, minus the quantity (1 minus x) over absolute value of log of x, squared. So these two expressions tell us what the mean squared error is for the MAP rule and the LMS rule. And it's kind of hard to actually interpret exactly which one is better based on just these expressions. So it's helpful to plot out what the conditional mean squared error is. So we're plotting against x. For each possible data point that we observe, what is the mean squared error? So let's do the MAP rule first. The MAP rule would look something like this.
And it turns out that the LMS rule is better, and it will look like this dotted line here on the bottom. And so it turns out that if your metric for how good your estimate is is the conditional mean squared error, then LMS is better than MAP. And this is true because LMS is actually designed to minimize what this mean squared error is. And so in this case, the LMS estimator should have a better mean squared error than the MAP estimator. OK, now for the last part of the problem, we calculate one more type of estimator, which is the linear LMS estimator. So notice that the LMS estimator was this one. It was 1 minus x over absolute value of log of x. And this is not linear in x, which means sometimes it's difficult to calculate. And so what we do is we try to come up with a linear form of this, something that is like ax plus b, where a and b are some constant numbers, but that also does well in terms of having a small mean squared error. And we know from the class that in order to calculate the linear LMS, we just need to calculate a few different parts. So it's equal to the expectation of the parameter, plus the covariance of theta and x over the variance of x, times x minus the expectation of x. Now, in order to do this, we just need to calculate four things. We need the expectation of theta, the covariance, the variance, and the expectation of x. OK, so let's calculate what these things are. Expectation of theta. We know that theta is uniformly distributed between 0 and 1. And so the expectation of theta is the easiest one to calculate. It's just 1/2. What about the expectation of x? Well, the expectation of x is a little bit more complicated. But remember, like in previous problems, it's helpful when you have a hierarchy of randomness to try to use the law of iterated expectations. So the delay, which is x, is random. But its randomness depends on the parameter theta, which itself is random. And so let's try to condition on theta and see if that helps us. OK, so if we knew what theta was, then what is the expectation of x? Well, we know that given theta, x is uniformly distributed between 0 and theta. And so the mean would be just theta over 2. And so this would just be the expectation of theta over 2. And we know this is just 1/2 times the expectation of theta, which is 1/2. So this is just 1/4. Now, let's calculate the variance of x. The variance of x takes some more work, because we need to use the law of total variance, which is this: the variance of x is equal to the expectation of the conditional variance, plus the variance of the conditional expectation. Let's see if we can figure out what these different parts are. What is the conditional variance of x given theta? Well, given theta, x we know is uniformly distributed between 0 and theta. And remember, for a uniform distribution of width c, the variance of that uniform distribution is just c squared over 12. And so in this case, what is the width of this uniform distribution? Well, it's uniformly distributed between 0 and theta, so the width is theta. So this variance should be theta squared over 12. OK, what about the expectation of x given theta? Well, we already argued earlier that the expectation of x given theta is just theta over 2. So now let's fill in the rest. What's the expectation of theta squared over 12? Well, that takes a little bit more work, because this is just-- you can think of it as 1/12. You could pull the 1/12 out, times the expectation of theta squared.
Well, the expectation of theta squared we can calculate from the variance of theta plus the expectation of theta quantity squared. Because that is just the definition of variance. Variance is equal to expectation of theta squared minus expectation of theta quantity squared. So we've just reversed the formula. Now, the second half is the variance of theta over 2. Well, remember when you pull out a constant from a variance, you have to square it. So this is just equal to 1/4 times the variance of theta. Well, what is the variance of theta? The variance of theta is the variance of uniform between 0 and 1. So the width is 1. So you get 1 squared over 12. And the variance is 1/12. What is the mean of theta? It's 1/2 when you square that, you get 1/4. Finally for here, the variance of theta like we said, is 1/12. So you get 1/12. And now, when you combine all these, you get that the variance ends up being 7/144. Now we have almost everything. The last thing we need to calculate is this covariance term. What is the covariance of theta and x? Well, the covariance we know is just the expectation of the product of theta and x minus the product of the expectations. So the expectation of x times the expectation of theta. All right, so we already know what expectation of theta is. That's 1/2. And expectation of x was 1/4. So the only thing that we don't know is expectation of the product of the two. So once again, let's try to use iterated expectations. So let's calculate this as the expectation of this conditional expectation. So we, again, condition on theta. And minus the expectation of theta is 1/2. Times 1/4, which is the expectation of x. Now, what is this conditional expectation? Well, the expectation of theta-- if you know what theta is, then the expectation of theta is just theta. You already know what it is, so you know for sure that the expectation is just equal to theta. And what is the expectation of x given theta? Well, the expectation of x given theta we already said was theta over 2. So what you get is this entire expression is just going to be equal to theta times theta over 2, or expectation of theta squared over 2 minus 1/8. Now, what is the expectation of theta squared over 2? Well, we know that-- we already calculated out what expectation of theta squared is. So we know that expectation of theta squared is 1/12 plus 1/4. So what we get is we need a 1/2 times 1/12 plus 1/4, which is 1/3 minus 1/8. So the answer is 1/6 minus 1/8, which is 1/24. Now, let's actually plug this in and figure out what this value is. So when you get everything-- when you combine everything, you get that the LMS estimator is-- the linear LMS estimator is going to be-- expectation of theta is 1/2. The covariance is 1/24. The variance is 7/144. And when you divide that, it's equal to 6/7 times x minus 1/4 because expectation of x is 1/4. And you can simplify this a little bit and get that this is equal to 6/7 times x plus 2/7. So now we have three different types of estimators. The map estimator, which is this. Notice that it's kind of complicated. You have x squared terms. You have more x squared terms. And you have absolute value of log of x. And then you have the LMS, which is, again, nonlinear. And now you have something that looks very simple-- much simpler. It's just 6/7 x plus 2/7. And that is the linear LMS estimator. And it turns out that you can, again, plot these to see what this one looks like. So here is our original plot of x and theta hat. 
So the map estimator-- sorry, the map estimator was just theta hat equals x. This was the mean squared error of the map estimator. So the map estimator is just this diagonal straight line. The LMS estimator looked like this. And it turns out that the linear LMS estimator will look something like this. So it is fairly close to the LMS estimator, but not quite the same. And note, especially that depending on what x is, if x is fairly close to the 1, you might actually get an estimate of theta that's greater than 1. So for example, if you observe that Julian is actually an hour late, then x is 1 and your estimate of theta from the linear LMS estimator would be 8/7, which is greater than 1. That doesn't quite make sense because we know that theta is bounded to be only between 0 and 1. So you shouldn't get an estimate of theta that's greater than 1. And that's one of the side effects of having the linear LMS estimator. So that sometimes you will have an estimator that doesn't quite make sense. But what you get instead when sacrificing that is you get a simple form of the estimator that's linear. And now, let's actually consider what the performance is. And it turns out that the performance in terms of the conditional mean squared error is actually fairly close to the LMS estimator. So it looks like this. Pretty close, pretty close, until you get close to 1. In which case, it does worse. And it does worse precisely because it will come up with estimates of theta which are greater than 1, which are too large. But otherwise, it does pretty well with a estimator that is much simpler in form than the LMS estimator. So in this problem, which had several parts, we actually went through, basically, all the different concepts and tools within Chapter Eight for Bayesian inference. We talked about the prior, the posterior, calculating the posterior using the Bayes' rule. We calculated the MAP estimator. We calculated the LMS estimator. From those, we calculated what the mean squared error for each one of those and compared the two. And then, we looked at the linear LMS estimator as another example and calculated what that estimator is, along with the mean squared error for that and compared all three of these. So I hope that was a good review problem for Chapter Eight, and we'll see you next time.
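If you want to reproduce these comparison plots numerically, here is a short sketch (not part of the original solution) that plugs each of the three estimators into the generic conditional mean squared error expression derived at the start of this video; the grid of x values below is an arbitrary choice.

```python
import math

def cond_mse(theta_hat, x):
    # E[(theta_hat - Theta)^2 | X = x] for the posterior 1/(theta*|log x|) on [x, 1]
    L = abs(math.log(x))
    return theta_hat**2 - 2 * theta_hat * (1 - x) / L + (1 - x**2) / (2 * L)

def map_est(x):   return x
def lms_est(x):   return (1 - x) / abs(math.log(x))
def llms_est(x):  return 6 / 7 * x + 2 / 7

print("x     MAP      LMS      linear LMS")
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(x,
          round(cond_mse(map_est(x), x), 4),
          round(cond_mse(lms_est(x), x), 4),
          round(cond_mse(llms_est(x), x), 4))
```

The printed values show the same ordering as the plots: LMS is smallest everywhere, the linear LMS tracks it closely except near x = 1, and MAP is the worst of the three.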
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Uniform_Probabilities_on_a_Triangle.txt
Hi. In this problem, we're going to get a bunch of practice working with multiple random variables together. And so we'll look at joint PDFs, marginal PDFs, conditional PDFs, and also get some practice calculating expectations as well. So the problem gives us a pair of random variables-- x and y. And we're told that the joint distribution is uniformly distributed on this triangle here, with the vertices being 0, 0 1, 0, and 0, 1. So it's uniform in this triangle. And the first part of the problem is just to figure out what exactly is disjoint PDF of the two random variables. So in this case, it's pretty easy to calculate, because we have a uniform distribution. And remember, when you have a uniform distribution, you can just imagine it being a sort of plateau coming out of the board. And it's flat. And so the height of the plateau, in order to calculate it, you just need to figure out what the area of this thing is, of this triangle is. So remember, when you had single random variables, what we had to do was calculate, for uniform distribution, we had to integrate to 1. So you took the length, and you took 1 over the length was the correct scaling factor. Here, you take the area. And the height has to make it so that the entire volume here integrates to 1. So the joint PDF is just going to be 1 over whatever this area is. And the area is pretty simple to calculate. It's 1/2 base times height. So it's 1/2. And so what we have is that the area is 1/2. And so the joint PDF of x and y is going to equal 2. But remember, you always have to be careful when writing these things to remember the ranges when these things are valid. So it's only 2 within this triangle. And outside of the triangle, it's 0. So what exactly does inside the triangle mean? Well, we can write it more mathematically. So this diagonal line, it's given by x plus y equals 1. So everything in the triangle is really x plus y is less than or equal to 1. It means everything under this triangle. And so we need x plus y to be less then or equal to 1 and also x to be non-negative and y to be non-negative. So with these inequalities, that captures everything within this triangle. And otherwise, the joint PDF is going to be 0. The next part asks us to find, using this joint PDF, the marginal of y. And remember, when you have a joint PDF of two random variables, you essentially have everything that you need, because from this joint PDF, you can calculate marginals, you can calculate from the margins, you can calculate conditionals. The joint PDF captures everything that there is to know about this pair of random variables. Now, to calculate a marginal PDF of y, remember a marginal really just means collapsing the other random variable down. And so you can just imagine taking this thing and collapsing it down onto the y-axis. And mathematically, that is just saying that we integrate out the other random variable. So the other random variable in this case will be x. We take x and we get rid of it by integrating out from negative infinity to infinity. Of course, this joint PDF is 0 in a lot of places. And so a lot of these will be 0. And only for a certain range of x's will this integral actually be non-zero. And so again, the other time when we have to be careful is when we have these limits of integration, we need to make sure that we have the right limits. And so we know that the joint PDF is 2. It's nonzero only within this triangle. And so it's only 2 within this triangle, which means what for x? 
Well, depending on what x and y are, this will be either 2 or 0. So let's just fix some value of y. Pretend that we've picked some value y, let's say here. We want this value of y. Well, what are the values of x such that the joint PDF for that value y is actually nonzero, it's actually 2? Well, it's everything from x equals 0 to whatever x value this is. But this x value, actually, if you think about it, is just 1 minus y, because this line is x plus y equals 1. So whatever y is, x is going to be 1 minus that. And so the correct limits would actually be from 0 to 1 minus y. And then the rest of that is pretty simple. You integrate this. This is a pretty simple integral. And you get that it's actually two times 1 minus y. That's a y. But of course, again, we need to make sure that we have the right regions. So this is not always true for y, of course. This is only true for y between 0 and 1. And otherwise, it's actually 0, because when you take a y down here, well, there's no values of x that will give you a nonzero joint PDF. And if you take a value of y higher than this, the same thing happens. So we can actually draw this out and see what it looks like. So let's actually draw a small picture here. Here's y. Here's the marginal PDF of y. And here's 2. And it actually looks like this. It's a triangle and a 0 outside this range. So does that make sense? Well, first of all, you see that actually does in fact integrates to 1, which is good. And the other thing we notice is that there is a higher density for smaller values of y. So why is that? Why are smaller values of y more likely than larger values of y? Well, because when you have smaller values of y, you're down here. And it's more likely because there are more values of x that go along with it that make that value of y more likely to appear. Say you have a large value of y. Then you're up here at the tip. Well, there aren't very many combinations of x and y that give you that large a value of y. And so that large value of y becomes less likely. Another way to think about it is, when you collapse this down, there's a lot more stuff to collapse down its base. There's a lot of x's to collapse down. But up here, there's only a very little bit of x to collapse down. And the PDF of y becomes more skewed towards smaller values of y. So now, the next thing that we want to do is calculate the conditional PDF of x, given y. Well, let's just recall what that means. This is what we're looking for-- the conditional PDF of x, given y. And remember, this is calculated by taking the joint and dividing by the marginal of y. So we actually have the top and the bottom. We have to joint PDF from part A. And from part B, we calculated the marginal PDF of y. So we have both pieces. So let's actually plug them in. Again, the thing that you have to be careful here is about the ranges of x and y where these things are valid, because this is only non-zero when x and y fall within this triangle. And this is only non-zero when y is between 0 and 1. So we need to be careful. So the top, when it's non-zero, it's 2. And the bottom, when it's non-zero, it's 2 times 1 minus y. So we can simplify that to be 1 over 1 minus y. And when is this true? Well, it's true when x and y are in the triangle and y is between 0 and 1. So put another way, that means that this is valid when y is between 0 and 1 and x is between 0 and 1 minus y, because whatever x has to be, it has to be such that they actually still fall within this triangle. And outside of this, it's 0. 
So let's see what this actually looks like. So this is x, and this is the conditional PDF of x, given y. Let's say this is 1 right here. Then what it's saying is, let's say we're given that y is some little y. Let's say it's somewhere here. Then it's saying that the conditional PDF of x given y is this thing. But notice that this value, 1 over 1 minus y, does not depend on x. So in fact, it actually is uniform. So it's uniform between 0 and 1 minus y. And the height is something like 1 over 1 minus y. And this is so that the scaling makes it so that actually is a valid PDF, because the integral is to 1. So why is the case? Why is that when you condition on y being some value, you get that the PDF of x is actually uniform? Well, when you look over here, let's again just pretend that you're taking this value of y. Well, when you're conditioning on y being this value, you're basically taking a slice of this joint PDF at this point. But remember, the original joint PDF was uniform. So when you take a slice of a uniform distribution, joint uniform distribution, you still get something that is uniform. Just imagine that you have a cake that is flat. Now, you take a slice at this level. Then whatever slice you have is also going to be imagine being a flat rectangle. So it's still going to be uniform. And that's why the conditional PDF of x given y is also uniform. Part D now asks us to find a conditional expectation of x. So we want to find the expectation of x, given that y is some little y. And for this, we can use the definition. Remember, expectations are really just weighted sums. Or in the [? continuous ?] case, it's an integral. So you take the value. And then you weight it by the density. And in this case, because we're taking conditional a expectation, what we weight it by is the conditional density. So it's the conditional density of x given that y is little y. We integrate with respect to x. And fortunately, we know what this conditional PDF is, because we calculated it earlier in part C. And we know that it's this-- 1 over 1 minus y. But again, we have to be careful, because this formula, 1 over 1 minus y, is only valid certain cases. So let's think about this first. Let's think about some extreme cases. What if y, little y, is negative? If little y is negative, we're conditioning on something over here. And so there is no density for y being negative or for y, say, in other cases when y is greater than 1. And so in those cases, this expectation is just undefined, because conditioning on that doesn't really make sense, because there's no density for those values of y. Now, let's consider the case that actually makes, sense where y is between 0 and 1. Now, we're in business, because that is the range where this formula is valid. So this formula is valid, and we can plug it in. So it's 1 over 1 minus y dx. And then the final thing that we again need to check is what the limits of this integration is. So we're integrating with respect to x. So we need to write down what values of x, what ranges of x is this conditional PDF valid. Well, luckily, we specified that here. x has to be between 0 and 1 minus y. So let's actually calculate this integral. This 1 over 1 minus y is a constant with respect to x. You can just pull that out. And then now, you're really just integrating x from 0 to 1 minus y. So the integral of x is [? 1 ?], 1/2x squared. So you get a 1/2x squared, and you integrate that from 0 to 1 minus y. And so when you plug in the limits, you'll get a 1 minus y squared. 
That will cancel out the 1 over 1 minus y. And what you're left with is just 1 minus y over 2. And again, we have to specify that this is only true for y between 0 and 1. Now, we can again actually verify that this makes sense. What we're really looking for is the conditional expectation of x given some value of y. And we already said that condition on y being some value of x is uniformly distributed between 0 and 1 minus y. And so remember for our uniform distribution, the expectation is simple. It's just the midpoint. So the midpoint of 0 and 1 minus y is exactly 1 minus y/2. So that's a nice way of verifying that this answer is actually correct. Now, the second part of part D asks us to do a little bit more. We have to use the total expectation theorem in order to somehow write the expectation of x in terms of the expectation of y. So the first thing we'll do is use the total expectation theorem. So the total expectation theorem is just saying, well, we can take these conditional expectations. And now, we can integrate this by the marginal density of y, then we'll get the actual expectation of x. You can think of it as just kind of applying the law of iterated expectations as well. So this integral is going to look like this. You take the conditional expectation. So this is the expectation of x if y were equal to little y. And now, what is that probability? Well, now we just multiply that by the density of y at that actual value of little y. And we integrate with respect to y. Now, we've already calculated what this conditional expectation is. It's 1 minus y/2. So let's plug that in. 1 minus y/2 times the marginal of y. There's a couple ways of attacking this problem now. One way is, we can actually just plug in that marginal of y. We've already calculated that out in part B. And then we can do this integral and calculate out the expectation. But maybe we don't really want to do so much calculus. So let's do what the problem says and try a different approach. So what the problem suggests is to write this in terms of the expectation of y. And what is the expectation of y? Well, the expectation of y is going to look something like the integral of y times the marginal of y. So let's see if we can identify something like that and pull it out. Well, yeah, we actually do have that. We have y times the marginal of y, integrated. So let's isolate that. So besides that, we also have this. We have the integral of the first term, is 1/2 times the marginal of y. And then the second term is minus 1/2 times the integral of y of dy. This is just me splitting this integral up into two separate integrals. Now, we know what this is. The 1/2 we can pull out. And then the rest of it is just the integral of a marginal of a density from minus infinity to infinity. And by definition, that has to be equal to 1. So this just gives us a 1/2. And now, what is this? We get a minus 1/2. And now this, we already said that is the expectation of y. So what we have is the expectation of y. So in the second part of this part D, we've expressed the expectation of x in terms of the expectation of y. Now, maybe that seems like that's not too helpful, because we don't know what either of those two are. But if we think about this problem, and as part E suggests, we can see that there's symmetry in this problem, because x and y are essentially symmetric. So imagine this is x equals y. There's symmetry in this problem, because if you were to swap the roles of x and y, you would have exactly the same joint PDF. 
So what that suggests is that by symmetry, it must be that the expectation of x and the expectation of y are exactly the same. And that is using the symmetry argument. And that helps us now, because we can plug that in and solve for the expectation of x. So the expectation of x is 1/2 minus 1/2 times the expectation of x. So we have 3/2 times the expectation of x equals 1/2. So the expectation of x equals 1/3. And of course, the expectation of y is also 1/3. And so it turns out that the expectation is around there. So this problem had several parts. And it allowed us to start out from just a raw joint distribution, calculate marginals, calculate conditionals, and then from there, calculate all kinds of conditional expectations and expectations. And a couple of important points to remember are, when you do these joint distributions, it's very important to consider where values are valid. So you have to keep in mind, when you write out these conditional PDFs and joint PDFs and marginal PDFs, what ranges the formulas you calculated are valid for. And that also translates to when you're calculating expectations and such. When you have integrals, you need to be very careful about the limits of your integration, to make sure that they line up with the range where the values are actually valid. And the last thing, which is kind of unrelated, but is actually a common tool that's used in a lot of problems: when you see symmetry in these problems, that can help a lot, because it will simplify things and allow you to use facts like these to help you calculate what the final answer is. Of course, this also comes along with practice. You may not immediately see that there could be a symmetry argument that will help with this problem. But with practice, when you do more of these problems, you'll eventually build up that kind of intuition.
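As a quick numerical sanity check on these answers (a simulation sketch, not part of the original solution; the slice width used to approximate the conditioning is an arbitrary choice), you can sample points uniformly from the triangle and estimate the expectations directly:

```python
import random

def sample_triangle():
    # rejection sampling: uniform on the triangle x, y >= 0, x + y <= 1
    while True:
        x, y = random.random(), random.random()
        if x + y <= 1:
            return x, y

n = 200_000
xs, ys = [], []
for _ in range(n):
    x, y = sample_triangle()
    xs.append(x)
    ys.append(y)

print("E[X] ~", sum(xs) / n, "  E[Y] ~", sum(ys) / n, "  (both should be near 1/3)")

# conditional mean of X given Y near y0 should be close to (1 - y0)/2
y0, eps = 0.4, 0.02
slice_x = [x for x, y in zip(xs, ys) if abs(y - y0) < eps]
print("E[X | Y ~ 0.4] ~", sum(slice_x) / len(slice_x), "  vs", (1 - y0) / 2)
```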
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Geniuses_and_Chocolates.txt
Hi. Today, we're going to do a really fun problem called geniuses and chocolates. And what this problem is exercising is your knowledge of properties of probability laws. So let me just clarify what I mean by that. Hopefully, by this point, you have already learned what the axioms of probability are. And properties of probability laws are essentially any rules that you can derive from those axioms. So take for example the fact that the probability of A union B is equal to the probability of A plus the probability of B minus the probability of the intersection. That's an example of a property of a probability law. So enough with the preamble. Let's see what the problem is asking us. In this problem, we have a class of students. And we're told that 60% of the students are geniuses. 70% of the students love chocolate. So I would be in that category. And 40% fall into both categories. And our job is to determine the probability that a randomly selected student is neither a genius nor a chocolate lover. So first I just want to write down the information that we're given in the problem statement. So if you let G denote the event that a randomly selected student is a genius then the problem statement tells us that the probability of G is equal to 0.6. Similarly, if we let C denote the event that a randomly selected student is a chocolate lover, then we have that the probability of C is equal to 0.7. Lastly, we are told that the probability a randomly selected student falls into both categories is 0.4. And the way we can express that using the notation already on the board is probability of G intersect C is equal to 0.4. OK, now one way of approaching this problem is to essentially use this information and sort of massage it using properties of probability laws to get to our answer. Instead, I'm going to take a different approach, which I think will be helpful. So namely, we're going to use something called a Venn diagram. Now a Venn diagram is just a tool that's really useful for telling you how different sets relate to each other and how their corresponding probabilities relate to each other. So the way you usually draw this is you draw a rectangle, which denotes your sample space, which of course, we call omega. And then you draw two intersecting circles. So one to represent our geniuses and one to represent our chocolate lovers. And the reason why I drew them intersecting is because we know that there are 40% of the students in our class are both geniuses and chocolate lovers. OK, and the way you sort of interpret this diagram is the space outside these two circles correspond to students who are neither geniuses nor chocolate lovers. And so just keep in mind that the probability corresponding to these students on the outside, that's actually what we're looking for. Similarly, students in this little shape, this tear drop in the middle, those would correspond to geniuses and chocolate lovers. You probably get the idea. So this is our Venn diagram. Now I'm going to give you guys a second trick if you will. And that is to work with partitions. So I believe you've seen partitions in lecture by now. And a partition is essentially a way of cutting up the sample space into pieces. But you need two properties to be true. So the pieces that you cut up your sample space into, they need to be disjoint, so they can't overlap. So for instance, G and C are not disjoint because they overlap in this tear drop region. 
Now the second thing that a partition has to satisfy is that if you put all the pieces together, they have to comprise the entire sample space. So I'm just going to put these labels down on my graph. X, Y, Z, and W. So X is everything outside the two circles but inside the rectangle. And just note, again, that what we're actually trying to solve in this problem is the probability of X, the probability that you're neither genius, because you're not in this circle, and you're not a chocolate lover, because you're not in this circle. So Y I'm using to refer to this sort of crescent moon shape. Z, I'm using to refer to this tear drop. And W, I'm using to refer to this shape. So, hopefully, you agree that X, Y, Z, and W form a partition because they don't overlap. So they are disjoint. And together they form omega. So now we're ready to do some computation. The first step is to sort of get the information we have written down here in terms of these new labels. So hopefully, you guys buy that G is just the union of Y and Z. And because Y and Z are disjoint, we get that the probability of the union is the sum of the probabilities. And, of course, we have from before that this is 0.6. Similarly, we have that the probability of C is equal to the probability of Z union W. And, again, using the fact that these two guys are disjoint, you get this expression. And that is equal to 0.7. OK, and the last piece of information, G intersects C corresponds to Z, or our tear drop, and so we have that the probability of Z is equal to 0.4. And now, if you notice, probability of Z shows up in these two equations. So we can just plug it in. So plug in 0.4 into this equation. We get P of Y plus 0.4 is 0.6. So that implies that P of Y is 0.2. That's just algebra. And similarly we have point. 0.4 plus P of W is equal to 0.7. So that implies that P of W is 0.3. Again, that's just algebra. So now we're doing really well because we have a lot of information. We know the probability of Y, the probability of Z, the probability of W. But remember we're going for, we're trying to find the probability of X. So the way we finally put all this information together to solve for X is we use the axiom that tells us that 1 is equal to the probability of the sample space. And then, again, we're going to use sort of this really helpful fact that X, Y, Z, and W form a partition of omega to go ahead and write this as probability of X plus probability of Y plus probability, oops, I made a mistake. Hopefully, you guys caught that. It's really, oh, no. I'm right. Never mind. Probability of X plus probability of Y plus probability of Z plus probability of W. And now we can go ahead and plug-in the values that we solved for previously. So we get probability of X plus 0.2 plus 0.4 plus 0.3. These guys sum to 0.9. So, again, just simple arithmetic, we get that the probability of X is equal to 0.1. So we're done because we've successfully found that the probability that a randomly selected student is neither a genius nor a chocolate lover is 0.1. So this was a fairly straightforward problem. But there are some important takeaways. The first one is that Venn diagrams are a really nice tool. Whenever the problem is asking you how different sets relate to each other or how different probabilities relate to each other, you should probably draw Venn diagram because it will help you. 
And the second takeaway is that it's frequently useful to divide your sample space into a partition mainly because sort of the pieces that compose a partition are disjoint. So we will be back soon to solve more problems.
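The same number also drops straight out of the inclusion-exclusion property mentioned at the start of this problem; here is a tiny sketch (just a restatement of the arithmetic above, with the partition pieces computed the same way) in case you want to check it:

```python
p_G, p_C, p_GC = 0.6, 0.7, 0.4

# P(neither) = 1 - P(G union C), where P(G union C) = P(G) + P(C) - P(G intersect C)
p_neither = 1 - (p_G + p_C - p_GC)
print(round(p_neither, 4))                     # 0.1

# the partition pieces from the Venn-diagram argument: Y, Z, W
p_Y, p_Z, p_W = p_G - p_GC, p_GC, p_C - p_GC
print(round(p_Y, 4), round(p_Z, 4), round(p_W, 4),
      round(1 - (p_Y + p_Z + p_W), 4))         # 0.2 0.4 0.3 0.1
```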
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Coupon_Collector_Problem.txt
In this exercise, we'll be looking at a problem also known as the coupon collector's problem. We have a set of K coupons, or grades in our case. And in each time slot we're shown one random grade. And we'd like to know how long it would take for us to collect all K grades. In our case, K is equal to 6. Now, the key to solving the problem is essentially twofold. First, we'll have to find a way to intelligently define a sequence of random variables that captures, essentially, the stopping time of this process. And then we'll employ the idea of linearity of expectations to break this value down into simpler terms. So let's get started. We'll define Yi as the number of papers till we see the i-th new grade. What does that mean? Well, let's take a look at an example. Suppose here we have a timeline from no paper yet, first paper, second paper, third paper, and so on and so forth. Now, suppose we got grade A on the first slot, grade A minus on the second slot, A again on the third slot, and, let's say there's a fourth slot, we got B. According to this process, we see that Y1 is always 1, because whatever we got on the first slot will be a new grade. Now, Y2 is 2, because the second paper is, again, a new grade. On the third paper we got a grade which is the same as the first grade, so that would not count for any Yi. And the third time we see a new grade would now be paper four. According to this notation, we're interested in knowing the expected value of Y6, which is the time it takes to receive all six grades. So far this notation isn't really helping us in solving the problem; it's kind of just stating it a different way. It turns out it's much easier to look at the following variables derived from the Yi's. We'll define Xi as the difference between Y(i+1) and Yi. And in words, it says Xi is the number of papers you need until you see the (i+1)-th new grade, after you have received i new grades so far. So in this case, if we call the starting point Y0, equal to 0, then X0 will be the difference between Y1 and Y0, which is always 1. The difference between Y2 and Y1 will be X1, the difference between Y3 and Y2 will be X2, and so on. OK? Through this notation we see that Y6 can now be written as the summation from i equal to 0 to 5 of Xi. So all I did was to break down Y6 into a sum of differences: Y6 minus Y5, Y5 minus Y4, and so on. OK? It turns out this expression will be very useful. OK. So now that we have the two variables Y and X, let's see if it will be easier to look at the distribution of X in studying this process. Let's say we have seen one new grade so far. How many trials would it take for us to see the second new grade? It turns out it's not that hard. In this case, we know there is a total of six grades, and we have seen one of them. So that leaves us five more grades that we'll potentially see. And therefore, on any random trial after that, there is a probability of 5 over 6 that we'll see a new grade. And hence, we know that X1 has a geometric distribution with success probability, or parameter, 5/6. Now, more generally, if we extend this idea further, we see that Xi will have a geometric distribution with parameter (6 minus i) over 6. And this is due to the fact that so far we have already seen i new grades. And that will be the success probability of seeing a further new grade.
From the geometric distribution above, we know that the expected value of Xi will simply be the inverse of the parameter of the geometric distribution, which is 6 over 6 minus i, or 6 times 1 over 6 minus i. And now we're ready to compute the final answer. From this expression we know the expected value of Y6 is equal to the expected value of the sum from i equal to 0 to 5 of Xi. And by the linearity of expectation, we can pull the sum out and write it as the sum from i equal to 0 to 5 of the expected value of Xi. Now, since we know that the expected value of Xi is the expression above, we see that this term is equal to 6 times the sum from i equal to 0 to 5 of 1 over 6 minus i. Or, written the other way, this is equal to 6 times the sum from i equal to 1 to 6 of 1 over i. And all I did here was, essentially, to change the variable, so that these two summations contain exactly the same terms. And this will give us the answer, which is 14.7. Now, more generally, we can see that there's nothing special about the number 6 here. We could have substituted 6 with any number, let's say K, in case there are more than six grades. And this will give us E of YK equal to K times the summation from i equal to 1 to K of 1 over i. Interestingly, it turns out this quantity has an asymptotic expression that, essentially, is roughly equal to K times the natural logarithm of K. And this is known as the scaling law for the coupon collector's problem, which says it takes, essentially, about K times log K trials until we collect all K coupons. And that'll be the end of the problem. See you next time.
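Here is the closed-form answer as a couple of lines of code (again my own sketch, assuming Python), together with the K ln K scaling mentioned at the end; the harmonic sum and the logarithm get closer in relative terms as K grows.

```python
import math

def expected_collection_time(k):
    """E[Y_K] = K * (1/1 + 1/2 + ... + 1/K), the coupon collector expectation."""
    return k * sum(1.0 / i for i in range(1, k + 1))

print(expected_collection_time(6))    # 14.7
print(6 * math.log(6))                # K ln K scaling, about 10.8 for K = 6
print(expected_collection_time(100))  # about 518.7
print(100 * math.log(100))            # about 460.5
```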
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Mean_First_Passage_and_Recurrence_Times.txt
In this problem, we are looking at a student whose performance from day to day oscillates according to a Markov chain. In particular, the student can either be in state 1, which is the state of being up to date, or in state 2, which is the state of having fallen behind. Now, the transition probabilities between these two states are given by the numbers here, which is 0.2 from state 1 to 2, 0.6 from 2 to 1, 0.4 from 2 back to 2, and 0.8 from 1 back to state 1. The quantity we're interested in calculating is this notion of first passage time. Let me define what that means. Suppose we are looking at a time horizon of time 0, 1, 2, 3. And let's call the state of the Markov chain x of t. Suppose we start from the chain being in state 2 here. Now, if we look at a particular sample path, let's say 2, and 2 again on day 1, and 2 again on day 2, and on day 3 the student enters state 1. So in this sample path, we start from time 0, and time 3 is the first time we enter state 1. And we'll say that the first passage time, namely, the first time we enter state 1 in this case, is equal to 3. More formally, we'll define Tj as the first passage time to state 1, conditional on starting from state j at time 0. Now, this quantity, of course, is random. Depending on the realization, we get different numbers. And we are interested in calculating the expected value of T2. That is, on average, if we start from state 2 here, how long would it take for us to enter state 1? Now, to calculate this quantity, the following recursion will be very important. The idea is we don't know exactly what t2 is, but t2 has to satisfy a certain recurrence equation, namely, t2 must be equal to 1 plus the summation over j equal to 1 to 2 of P2j tj. Now let me explain what this equation means. Let's say we are at state 2. Well, we don't actually know how long it's going to take for us to enter state 1. But we do know that after one step, we will go into some other state. Let's call it state j. And from state j, it's going to take some time to enter state 1 finally. So this equation essentially says the time for us to first enter state 1 from 2 is 1-- which is the next step-- plus the expected time from that point on to enter 1. So that constitutes our recurrence relationship. Now, by this definition, we can see that this is simply 1 plus P21 times t1 plus P22 times t2. Now, the definition of tj says t1 must be 0 because, by definition, if we start from state 1, we are already in state 1. So the time to reach state 1 is simply 0. So this term disappears. And we end up with 1 plus P22 t2. If we plug in the number for P22-- which is 0.4 right here-- we get 1 plus 0.4 t2. Now, we started from t2 and we ended up with another expression involving numbers and only one unknown, which is t2. Combining these together and solving for t2, we get t2 equals 1 divided by 1 minus 0.4, which is 5/3. And that is the answer for the first part of the problem. In the second part of the problem, we are asked to do something similar as before, but with a slight twist. Here, I copied over the definition for Tj, which is the first time to visit state 1 starting from state j at time t equals 0, and little tj is its expectation. And here we're going to define a similar quantity, which is T1, let's say, star, defined as the first time to visit state 1 again-- so that's the recurrence part-- starting from state 1 at t equals 0. So this is the recurrence time from state 1 back to state 1 again. As an example, again, we look at t equals 0, 1, 2, 3, 4.
And here, if we start from state 1 at time 0, we went to state 2, 2, then 1, then 1 again. Now here, again, time 3 will be the first time to visit state 1 after time 0-- and we don't count the very first 0. And that will be our T1 star. So T1 star in this particular case is equal to 3. OK. Same as before, we'd like to calculate the expected time to revisit state 1. Define little t1 star as the expected value of T1 star. And we'll be using the same recurrence trick through the following equation. We say that t1 star is equal to 1 plus the summation over j from 1 to 2-- now, since we started from state 1, this goes from state 1 to state j-- of P1j times tj. Again, the interpretation is we started at state 1 at time t equals 0, we went to some other state-- call it j-- and from state j, it goes around, and after an expected time tj, it comes back to state 1. Here, as before, this equation works because we are working with a Markov chain, whereby the time to reach some other state only depends on the current state. And that's why we're able to break down the recursion as follows. If we write out the recursion, we get 1 plus P11 t1 plus P12 t2. As before, t1 now is just the expected first passage time from state 1, and by definition, it is 0. Because if we start from state 1, it's already in state 1, and it takes 0 time to get there. So again, like before, this term goes away. And we have 1 plus 0.2 times 5/3. And this number came from the previous calculation of t2. And this gives us 4/3. So this completes the problem. And just to remind ourselves, the crux of the problem is this type of recursion, which expresses a certain quantity in terms of one incremental step followed by the expected time to reach a certain destination after that one step. And we can do so because the dynamics are modeled by a Markov chain, and hence, the time to reach a certain destination after this first step only depends on where you start again-- in this case, state j.
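The two recursions are simple enough to solve by hand, but here is a small sketch (my own, not part of the recitation; it assumes Python with NumPy) that plugs in the transition probabilities, reproduces the answers 5/3 and 4/3, and double-checks the first one by simulation.

```python
import numpy as np

# Transition probabilities; index 0 is state 1 (up to date), index 1 is state 2 (behind).
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# Mean first passage time to state 1 from state 2: t2 = 1 + P22 * t2, since t1 = 0.
t2 = 1.0 / (1.0 - P[1, 1])
# Mean recurrence time of state 1: t1* = 1 + P12 * t2.
t1_star = 1.0 + P[0, 1] * t2
print(t2, t1_star)   # 1.666... (= 5/3) and 1.333... (= 4/3)

# Simulation check of t2: start in state 2 and count steps until state 1 is reached.
rng = np.random.default_rng(0)
def steps_to_state_1():
    state, steps = 1, 0           # start in state 2 (index 1)
    while True:
        steps += 1
        state = 0 if rng.random() < P[state, 0] else 1
        if state == 0:
            return steps

print(np.mean([steps_to_state_1() for _ in range(100_000)]))   # close to 5/3
```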
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
25_Classical_Inference_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu PROFESSOR: OK, if you have not yet done it, please take a moment to go through the course evaluation website and enter your comments for the class. So what we're going to do today to wrap things up is go through a tour of the world of hypothesis testing. See a few examples of hypothesis tests, starting from simple ones, such as the setting that we discussed last time in which you just have two hypotheses and you're trying to choose between them. But also look at more complicated situations in which you have one basic hypothesis. Let's say that you have a fair coin and you want to test it against the hypothesis that your coin is not fair, but that alternative hypothesis is really lots of different hypotheses. So is my coin fair? Is my die fair? Do I have the correct distribution for a random variable, and so on. And I'm going to end up with a few general comments about this whole business. So the setting in simple hypothesis testing problems is the following-- we have two possible models, and this is the classical world, so we do not have any prior probabilities on the two hypotheses. Usually we want to think of these hypotheses as not being completely symmetrical, but rather one is the default hypothesis, and usually it's referred to as the null hypothesis. And you want to check whether the null hypothesis is true, whether things are normal as you would have expected them to be, or whether it turns out to be false, in which case an alternative hypothesis would be correct. So how does one go about it? No matter what approach you use, in the end you're going to end up doing the following. You have the space of all possible observations that you may obtain. So when you do the experiment you're going to get an X vector, a vector of data, that's somewhere. And for some vectors you're going to decide that you accept H0, and for some vectors you reject H0 and you accept H1. So what you will end up doing is that you're going to have some division of the space of all X's into two parts, and one part is the rejection region, and one part is the acceptance region. So if you fall in here you accept H0, if you fall here you reject H0. So to design a hypothesis test, basically you need to come up with a division of your X space into two pieces. So figuring out how to do this involves two elements. One element is to decide what kind of shape do I want for my dividing curve? And having chosen the shape of the dividing curve, where exactly do I put it? So if you were to cut this space using, let's say, a straight cut, you might put it here, or you might put it there, or you might put it there. Where exactly are you going to put it? So let's look at those two steps. The first issue is to decide the general shape of your rejection region, which is the structure of your test. And the way this is done for the case of two hypotheses is by writing down the likelihood ratio between the two hypotheses. So let's call that quantity l of X. It's something that you can compute given the data that you have. A high value of l of X basically means that this probability here tends to be bigger than this probability.
It means that the data that you have seen are quite likely to have occurred under H1, but less likely to have occurred under H0. So if you see data that are more plausible, that can be better explained, under H1, then this ratio is big, and you're going to choose in favor of H1, or reject H0. That's what you do if you have discrete data-- you use the PMFs. If you have densities, in the case of continuous data, again you consider the ratio of the two densities. So a big l of X is evidence that your data are more compatible with H1 rather than H0. Once you accept this kind of structure, then your decision is really made in terms of that single number. That is, you had your data, which was some kind of vector, and you condense your data into a single number-- a statistic, as it's called-- in this case the likelihood ratio, and you put the dividing point somewhere here, call it xi. And in this region you accept H1, in this region you accept H0. So by committing ourselves to using the likelihood ratio in order to carry out the test, we have gone from this complicated picture of finding a dividing line in x-space to a simpler problem of just finding a dividing point on the real line. OK, how are we doing? So what's left to do is to choose this threshold, xi, or, as it's called, the critical value, for making our decision. And you can place it anywhere, but one way of deciding where to place it is the following-- look at the distribution of this random variable, l of X. It has a certain distribution under H0, and it has some other distribution under H1. If I put my threshold here, here's what's going to happen. When H0 is true, there is this much probability that I'm going to end up making an incorrect decision. If H0 is true there's still a probability that my likelihood ratio will be bigger than xi, and that's the probability of making an incorrect decision of this particular type-- that is, of making a false rejection of H0. Usually one sets this probability to a certain number, alpha. For example, alpha being 5%. And once you decide that you want this to be 5%, that determines where this number xi is going to be. So the idea here is that I'm going to reject H0 if the data that I have seen are quite incompatible with H0, if they're quite unlikely to have occurred under H0. And I take this level, 5%. So I see my data and then I say, well, if H0 were true, the probability that I would have seen data of this kind would be less than 5%. Given that I saw those data, that suggests that H0 is not true, and I end up rejecting H0. Now of course there's the other type of error probability. If I put my threshold here, if H1 is true but my likelihood ratio falls here, I'm going to make a mistake of the opposite kind. H1 is true, but my likelihood ratio turned out to be small, and I decided in favor of H0. This is an error of the other kind, and this probability of error we call beta. And you can see that there's a trade-off between alpha and beta. If you move your threshold this way, alpha becomes smaller, but beta becomes larger. And the general picture of your trade-off, depending on where you put your threshold, is as follows-- you can make this beta be 0 if you put your threshold out here, but in that case you are certain that you're going to make a mistake of the opposite kind. So beta equals 0, alpha equals 1 is one possibility. Beta equals 1, alpha equals 0 is the other possibility, if you send your threshold completely to the other side. And in general you're going to get a trade-off curve of some sort.
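To see the trade-off numerically, here is a small sketch (mine, not the lecture's; it assumes Python with SciPy, and it simplifies the setting to a single observation) that sweeps the threshold and prints alpha and beta side by side.

```python
from scipy.stats import norm

# A toy version of the trade-off: one observation X, with X ~ N(0, 1) under H0
# and X ~ N(1, 1) under H1. The likelihood ratio is monotone in X, so the test
# is "reject H0 when X exceeds a threshold". As the threshold moves right,
# alpha falls while beta rises.
for threshold in [0.0, 0.5, 1.0, 1.5, 2.0]:
    alpha = 1 - norm.cdf(threshold, loc=0, scale=1)   # P(reject H0 | H0 true)
    beta = norm.cdf(threshold, loc=1, scale=1)        # P(accept H0 | H1 true)
    print(f"threshold = {threshold:3.1f}   alpha = {alpha:.3f}   beta = {beta:.3f}")
```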
And if you want to use a specific value of alpha, for example alpha being 0.05, then that's going to determine for you the probability beta. Now there's a general, and quite important, theorem in statistics, which we are not proving, and which tells us that when we use likelihood ratio tests we get the best possible trade-off curve. You could think of other ways of making your decisions, other ways of cutting your x-space into a rejection and acceptance region. But any other way that you do it is going to end up with probabilities of error that are above this particular curve. So the likelihood ratio test turns out to give you the best possible way of dealing with this trade-off between alpha and beta. We cannot minimize alpha and beta simultaneously, there's a trade-off between them. But at least we would like to have a test that deals with this trade-off in the best possible way. For a given value of alpha we want to have the smallest possible value of beta. And the theorem says that the likelihood ratio tests do have this optimality property-- for a given value of alpha they minimize the probability of error of the other kind. So let's make all this concrete and look at a simple example. We have two normal distributions with different means. So under H0 you have a mean of 0. Under H1 you have a mean of 1. You get your data-- you actually get several data points drawn from one of the two distributions. And you want to make a decision, which one of the two is true? So what you do is you write down the likelihood ratio: the density for a vector of data if that vector was generated according to H0-- which is this one-- and the density if it was generated according to H1. Since we have multiple data points, the density of a vector is the product of the densities of the individual elements. Since we're dealing with normals we have those exponential factors. A product of exponentials gives us an exponential of the sum. I'll spare you the details, but this is the form of the likelihood ratio. The likelihood ratio test tells us that we should calculate this quantity after we get our data, and compare it with a threshold. Now you can do some algebra here and simplify, and by tracing down the inequalities, taking logarithms of both sides, and so on, one comes to the conclusion that using a test that has a threshold on this ratio is equivalent to calculating this quantity and comparing it with a threshold. Basically this quantity here is monotonic in that quantity-- this being larger than the threshold is equivalent to that being larger than a threshold. So this tells us the general structure of the likelihood ratio test in this particular case. And it's nice because it tells us that we can make our decisions by looking at this simple summary of the data. This quantity, this summary of the data on the basis of which we make our decision, is called a statistic. So you take your data, which is a multi-dimensional vector, and you condense it to a single number, and then you make a decision on the basis of that number. So this is the structure of the test. If I get a large sum of Xi's this is evidence in favor of H1, because here the mean is larger. And so I'm going to decide in favor of H1, or reject H0, if the sum is bigger than the threshold. How do I choose my threshold? Well, I would like to choose my threshold so that the probability of an incorrect decision when H0 is true-- the probability of a false rejection-- equals a certain number alpha, such as, for example, 5%.
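Here is what that threshold calculation looks like as a short sketch (my own, assuming Python with NumPy and SciPy, and assuming unit-variance normals with means 0 and 1 as in the example): under H0 the sum of n observations is normal with mean 0 and variance n, so the 5% critical value comes straight from the normal table.

```python
import numpy as np
from scipy.stats import norm

n, alpha = 25, 0.05
# Under H0 each Xi ~ N(0, 1), so the statistic sum(Xi) ~ N(0, n).
# Pick the threshold xi so that P(sum > xi | H0) = alpha.
xi = norm.ppf(1 - alpha, loc=0, scale=np.sqrt(n))
print(xi)   # about 8.22 when n = 25

# Sanity check by simulation under H0: the false rejection rate should be near 5%.
rng = np.random.default_rng(0)
sums = rng.normal(0.0, 1.0, size=(200_000, n)).sum(axis=1)
print((sums > xi).mean())
```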
So you're given here that this is 5%. You know the distribution of this random variable, it's normal. And you want to find the threshold value that makes this true. So this is a type of problem that you have seen several times. You go to the normal tables, and you figure it out. So the sum of the Xi's has some distribution, it's normal. So that's the distribution of the sum of the Xi's, and you want this probability here to be alpha. For this to happen, what is the threshold value that makes this true? So you know how to solve problems of this kind using the normal tables. A slightly different example is one in which you have two normal distributions that have the same mean-- let's take it to be 0-- but they have a different variance. So it's sort of natural that here, if the X's that you see are kind of big on either side, you would choose H1. If your X's are near 0, then that's evidence for the smaller variance, and you would choose H0. So to proceed formally you again write down the form of the likelihood ratio. So again, the density of an X vector under H0 is this one. It's the product of the densities of each one of the Xi's. A product of normal densities gives you a product of exponentials, which is an exponential of the sum, and that's the expression that you get. Under the other hypothesis the only thing that changes is the variance. And the variance, in the normal distribution, shows up here in the denominator of the exponent. So you put it there. So this is the general structure of the likelihood ratio test. And now you do some algebra. These terms are constants, so comparing this ratio to a constant is the same as just comparing the ratio of the exponentials to a constant. Then you take logarithms-- you want to compare the logarithm of this thing to a constant. You do a little bit of algebra, and in the end you find that the structure of the test is to reject H0 if the sum of the squares of the Xi's is bigger than a threshold. So by committing to a likelihood ratio test you are told that you should be making your decision according to a rule of this type. So this fixes the shape, or the structure, of the rejection region. And the only thing that's left, once more, is to pick this threshold in order to have the property that the probability of a false rejection is equal to, say, 5%. So that's the probability that H0 is true, but the sum of the squares accidentally happens to be bigger than my threshold, in which case I end up deciding H1. How do I find the value of xi prime? Well, what I need to do is to look at a picture more or less of this kind, but now I need to look at the distribution of the sum of the Xi's squared. Actually, the sum of the Xi's squared is a non-negative random variable, so it's going to have a distribution that's something like this. I look at that distribution, and once more I want this tail probability to be alpha, and that determines where my threshold is going to be. So that's again a simple exercise, provided that you know the distribution of this quantity. Do you know it? Well, we don't really know it, we have not dealt with this particular distribution in this class. But in principle you should be able to find what it is. It's a derived distribution problem. You know the distribution of Xi, it's normal. Therefore, by solving a derived distribution problem you can find the distribution of Xi squared. And the Xi squared's are independent of each other, because the Xi's are independent.
So you want to find the distribution of the sum of random variables with known distributions. And since they're independent, in principle, you can do this using the convolution formula. So in principle, and if you're patient enough, you will be able to find the distribution of this random variable. And then you plot it or tabulate it, and find where exactly is the 95th percentile of that distribution, and that determines your threshold. So this distribution actually turns out to have a nice and simple closed-form formula. Because this is a pretty common test, people have tabulated that distribution. It's called the chi-square distribution. There's tables available for it. And you look up in the tables, you find the 95th percentile of the distribution, and this way you determine your threshold. So what's the moral of the story? The structure of the likelihood ratio test tells you what kind of decision region you're going to have. It tells you that for this particular test you should be using the sum of the Xi squared's as your statistic, as the basis for making your decision. And then you need to solve a derived distribution problem to find the probability distribution of your statistic. Find the distribution of this quantity under H0, and finally, based on that distribution, after you have derived it, then determine your threshold. So now let's move on to a somewhat more complicated situation. You have a coin, and you are told that I tried to make a fair coin. Is it fair? So you have the hypothesis, which is the default-- the null hypothesis-- that the coin is fair. But maybe it isn't. So you have the alternative hypothesis that your coin is not fair. Now what's different in this context is that your alternative hypothesis is not just one specific hypothesis. Your alternative hypothesis consists of many alternatives. It includes the hypothesis that p is 0.6. It includes the hypothesis that p is 0.51. It includes the hypothesis that p is 0.48, and so on. So you're testing this hypothesis versus all this family of alternative hypothesis. What you will end up doing is essentially the following-- you get some data. That is, you flip the coin a number of times. Let's say you flip it 1,000 times. You observe some outcome. Let's say you saw 472 heads. And you ask the question if this hypothesis is true is this value really possible under that hypothesis? Or would it be very much of an outlier? If it looks like an extreme outlier under this hypothesis then I reject it, and I accept the alternative. If this number turns out to be something within the range that you would have expected then you keep, or accept your null hypothesis. OK so what does it mean to be an outlier or not? First you take your data, and you condense them to a single number. So your detailed data actually would have been a sequence of heads/tails, heads/tails and all that. Any reasonable person would tell you that you shouldn't really care about the exact sequence of heads and tails. Let's just base our decision on the number of heads that we have observed. So using some kind of reasoning which could be mathematical, or intuitive, or involving artistry-- you pick a one-dimensional, or scalar summary of the data that you have seen. In this case, the summary of the data is just the number of heads that's a quite reasonable one. And so you commit yourself to make a decision on the basis of this quantity. And you ask the quantity that I'm seeing does it look like an outlier? Or does it look more or less OK? 
OK, what does it mean to be an outlier? You want to choose the shape of this rejection region, but on the basis of that single number s. And again, the reasonable thing to do in this context would be to argue as follows-- if my coin is fair, I expect to see n over 2 heads. That's the expected value. If the number of heads I see is far from the expected number of heads, then I consider this to be an outlier. So if this distance is bigger than some threshold xi, I consider it to be an outlier, and then I'm going to reject my hypothesis. So we picked our statistic. We picked the general form of how we're going to make our decision, and then we pick a certain significance, or confidence level, that we want-- again, this famous 5% number. And we're going to declare something to be an outlier if it lies in the region that has 5% or less probability of occurring. That is, I'm picking my rejection region so that if H0 is true-- under the default, or null, hypothesis-- there's only a 5% chance that by accident I fall there, and the thing makes me think that H1 is going to be true. So now what's left to do is to pick the value of this threshold. This is a calculation of the usual kind. I want to pick my threshold, my number xi, so that the probability that s is further from the mean than an amount xi is less than 5%. Or, the probability of being inside the acceptance region-- so that the distance from the default is less than my threshold-- I want that to be 95%. So this is an equality that you can get using the central limit theorem and the normal tables. There's a 95% probability that the number of heads is going to be within 31 of the correct mean. So the way the exercise is done, of course, is that we start with this number, 5%, which translates to this number, 95%. And once we have fixed that number, then you ask the question, what number should we have here to make this equality true? It's again a problem of this kind. You have a quantity whose distribution you know. Why do you know it? The number of heads, by the central limit theorem, is approximately normal. So this here talks about the normal distribution. You set your alpha to be 5%, and you ask, where should I put my threshold so that this probability of being out there is only 5%? Now in our particular example the threshold turned out to be 31. This number turned out to be just 28 away from the correct mean. So this distance was less than the threshold, so we end up not rejecting H0. So we have our rejection region. The way we designed it is that when H0 is true there's only a small chance, 5%, that we get data out there-- data that we would call an outlier. If we see such an outlier we reject H0. If what we see is not an outlier, as in this case, where that distance turned out to be kind of small, then we do not reject H0. An interesting little piece of language here: people generally prefer to use this terminology-- to say that H0 is not rejected by the data, instead of saying that H0 is accepted. In some sense they're both saying the same thing, but the difference is sort of subtle. When I say not rejected, what I mean is that I got some data that are compatible with my hypothesis. That is, the data that I got do not falsify the hypothesis that I had, my null hypothesis. So my null hypothesis is still alive, and may be true. But from data you can never really prove that the hypothesis is correct. Perhaps my coin is not fair in some other complicated way.
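Setting that caveat aside for a moment, the numbers in this example are easy to reproduce. Here is a sketch (my own, assuming Python with SciPy): the two-sided 5% threshold for 1,000 tosses of a fair coin comes out to about 31, and the observed count of 472 heads is only 28 away from 500, so H0 is not rejected.

```python
import math
from scipy.stats import norm

n, p0, alpha = 1000, 0.5, 0.05
observed_heads = 472

# Under H0 (fair coin) the number of heads is approximately N(n/2, n/4) by the CLT.
sd = math.sqrt(n * p0 * (1 - p0))
threshold = norm.ppf(1 - alpha / 2) * sd          # about 31

distance = abs(observed_heads - n * p0)           # 28 in the example
print(round(threshold, 1), distance)
print("reject H0" if distance > threshold else "do not reject H0")
```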
There are other caveats, too. Perhaps I was just lucky, and even though my coin is not fair I ended up with an outcome that suggests that it's fair. Perhaps my coin flips are not independent, as I assumed in my model. So there are many ways that my null hypothesis could be wrong, and still I got data that tell me that my hypothesis is OK. So this is the general way that things work in science. One comes up with a model or a theory. This is the default theory, and we work with that theory, trying to find whether there are examples that violate the theory. If you find data and examples that violate the theory, your theory is falsified, and you need to look for a new one. But when you have your theory, really no amount of data can prove that your theory is correct. So we have the default theory that the speed of light is constant, and as long as we do not find any data that run counter to it, we stay with that theory, but there's no way of really proving this, no matter how many experiments we do. But there could be experiments that falsify that theory, in which case we need to look for a new one. So there's a bit of an asymmetry here in how we treat the alternative hypothesis. H0 is the default, which we'll accept until we see some evidence to the contrary. And if we see some evidence to the contrary, we reject it. As long as we do not see evidence to the contrary, then we keep working with it, but always take it with a grain of salt. You can never really prove that a coin has a bias exactly equal to 1/2. Maybe the bias is equal to 0.50001, so the bias is not 1/2, but with an experiment with 1,000 coin tosses you wouldn't be able to see this effect. OK, so that's how you go about testing whether your coin is fair. You can also think about testing whether a die is fair. So for a die, the null hypothesis would be that every possible result when you roll the die has equal probability, equal to 1/6. And you also make the hypothesis that your die rolls are statistically independent from each other. So I take my die, I roll it a number of times, little n, and I count how many 1's I got, how many 2's I got, how many 3's I got, and these are my data. I count how many times I observed a specific result in my die rolls that was equal to some i. And now I ask the question-- the Ni's that I observed, are they compatible with my hypothesis or not? What does compatible with my hypothesis mean? Under the null hypothesis, Ni should be approximately equal, or is equal in expectation, to n times little pi. And in our example this little pi is of course 1/6. So if my die is fair, the number of 1's I expect to see is equal to the number of rolls times 1/6. The number of 2's I expect to see is again that same number. Of course there's randomness, so I do not expect to get exactly that number. But I can ask how far away from the expected values the Ni's were. If my capital Ni's turn out to be very different from n/6, this is evidence that my die is not fair. If those numbers turn out to be close to n times 1/6, then I'm going to say there's no evidence that would lead me to reject this hypothesis, so this hypothesis remains alive. So someone has come up with this thought that maybe the right statistic to use, or the right way of quantifying how far away the Ni's are from their mean, is to look at this quantity. So I'm looking at the expected value of Ni under the null hypothesis, see what I got, take the square of this, and add it over all i's, but also throw in these terms in the denominator. And why that term is there, that's a longer story.
One can write down certain likelihood ratios, do certain Taylor series approximations, and there's a heuristic argument that justifies why this would be a good form for the test to use. So there's a certain art that's involved in this step-- some people somehow decided that a reasonable thing to do, once you get your results, is to calculate this one-dimensional summary of your results. This is going to be your statistic, and you compare that statistic to a threshold, and that's how you make your decision. So by this point we have fixed the type of the rejection region that we're going to have. So we've chosen the qualitative structure of our test, and the only thing that's now left is to choose the particular threshold we're going to use. And the recipe, once more, is the same. We want to set our threshold so that the probability of a false rejection is 5%. We want the probability that our data fall in here to be only 5% when the null hypothesis is true. So that's the same as setting our threshold xi so that the probability that our test statistic is bigger than that threshold-- we want that probability to be only 0.05. So to solve a problem of this kind, what is it that you need to do? You need to find the probability distribution of capital T. So once more it's the same picture. You need to do some calculations of some sort and come up with the distribution of the random variable T, where T is defined this way. You want to find this distribution under hypothesis H0. Once you find what that distribution is, then you can solve this usual problem-- I want this probability here to be 5%, what should my threshold be? So what does this boil down to? Finding the distribution of capital T is in some sense a messy, difficult, derived distribution problem. From this model we know the distribution of the capital Ni's. And actually we can even write down the joint distribution of the capital Ni's. In fact, we can make an approximation here. Capital Ni is a binomial random variable-- let's say the number of 1's that I got in little n rolls of my die. So that's a binomial random variable. When little n is big, this is going to be approximately normal. So we have normal random variables, or approximately normal, minus a constant-- they're still approximately normal. We take the squares of these, scale them, so you can solve a derived distribution problem to find the distribution of this quantity. You can do more work, more derived distribution work, and find the distribution of capital T. So this is a tedious matter, but because this test is used quite often, again people have done those calculations. They have found the distribution of capital T, and it's available in tables. And you go to those tables, and you find the appropriate threshold for making a decision of this type. Now to give you a sense of how complicated the hypotheses one might have to deal with can be, let's make things one level more complicated. So here you can think of this X as a discrete random variable-- this is the outcome of my roll. And I had a model in which the possible values of my discrete random variable have probabilities all equal to 1/6. So my null hypothesis here was a particular PMF for the random variable capital X. So another way of phrasing what happened in this problem is that the question was, is my PMF correct? So this is the PMF of the result of one die roll, and you're asking the question, is my PMF correct? Make it more complicated. How about a question of the type, is my PDF correct, when I have continuous data?
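Before moving on to continuous data, here is the die test end to end as a sketch (my own illustration, assuming Python with NumPy and SciPy): the statistic is the sum over faces of (Ni minus n pi) squared divided by n pi, and under H0 it is approximately chi-square with 5 degrees of freedom for six categories, so the 5% threshold comes from the chi-square table.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 600
rolls = rng.integers(1, 7, size=n)                 # 600 rolls of a (simulated) fair die
counts = np.array([(rolls == face).sum() for face in range(1, 7)])

expected = n / 6.0                                 # n * p_i under H0
T = np.sum((counts - expected) ** 2 / expected)    # the statistic described in the lecture

critical = chi2.ppf(0.95, df=5)                    # about 11.07 for six categories
print(counts, round(T, 2), round(critical, 2))
print("reject H0" if T > critical else "do not reject H0")
```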
So, for the continuous case, I have hypothesized that the probability distribution that I have is, let's say, a particular normal. I get lots of results from that random variable. Can I tell whether my results look normal or not? What are some ways of going about it? Well, we saw in the previous slide that there is a methodology for deciding if your PMF is correct. So you could take your results, the data that you got from your experiment, and discretize them, so now you're dealing with discrete data, and you can sort of use the previous methodology to solve a discrete problem of the type, is my PMF correct? So in practice the way this is done is that you get all your data, let's say data points of this kind. You split your space into bins, and you count how many you have in each bin. So you get this, and that, and that, and nothing. So that's a histogram that you get from the data that you have, like the very familiar histograms that you see after each one of our quizzes. So you look at this histogram, and you ask, does it look normal? OK, we need a systematic way of going about it. If it were normal, you can calculate the probability of falling in this interval, the probability of falling in that interval, the probability of falling into that interval. So you would have expected values of how many results, or data points, you would have in each interval. And you compare these expected values for each interval with the actual ones that you observed, and then take the sum of squares, and so on, exactly as in the previous slide. And this gives you a way of going about it. This is a little messy. It gets hard to do because you have the difficult decision of how to choose the bin size. If you take your bins to be very narrow, you would get lots of bins with 0's, and a few bins that only have one outcome in them-- it probably wouldn't feel right. If you choose your bins to be very wide, then you're losing a lot of information. Is there some way of making a test without creating bins? This is just to illustrate the clever ideas that statisticians have thought about. And here's a really cute way of going about a test of whether my distribution is correct or not. Here we're essentially plotting a PMF, or an approximation of a PDF, and we ask, does it look like the PDF we assumed? Instead of working with PDFs, let's work with cumulative distribution functions. So how does this go? The true distribution that I have hypothesized, the density that I'm hypothesizing-- my null hypothesis-- has a certain CDF that I can plot. So suppose that my hypothesis H0 is that the X's are standard normal, and I plot the CDF of the standard normal, which is the sort of continuous looking curve here. Now I get my data, and I plot the empirical CDF. What's the empirical CDF? In the empirical CDF you ask the question, what fraction of the data fell below 0? You get a number. What fraction of my data fell below 1? I get a number. What fraction of my data fell below 2, and so on. So you're talking about fractions of the data that fell below each particular number. And by plotting those fractions as a function of this number, you get something that looks like a CDF. And it's the CDF suggested by the data. Now, the fraction of the data that fall below 0 in my experiment is-- if my hypothesis were true-- expected to be 1/2. 1/2 is the value of the true CDF. I look at the fraction that I got-- it's expected to be that number, but there's randomness, so it might be a little different than that.
For any particular value, the fraction that I got below a certain number-- say the fraction of data that were below 2-- its expectation is the probability of falling below 2, which is the correct CDF. So if my hypothesis is true, the empirical CDF that I get based on the data should, when n is large, be very close to the true CDF. So a way of judging whether my model is correct or not is to look at the assumed CDF, the CDF under hypothesis H0, look at the CDF that I constructed based on the data, and see whether they're close enough or not. And by close enough, I mean I'm going to look at all the possible x's and look at the maximum distance between those two curves. And I'm going to have a test that decides in favor of H0 if this distance is small, and in favor of H1 if this distance is large. That still leaves me the problem of coming up with a threshold. Where exactly do I put my threshold? Because this test is important enough, and is used frequently, people have made the effort to try to understand the probability distribution of this quite difficult random variable. One needs to do lots of approximations and clever calculations, but these have led to tabulated values for the probability distribution of this random variable. And, for example, those tabulated values tell us that if we want 5% false rejection probability, then our threshold should be 1.36 divided by the square root of n. So we know where to put our threshold if we want this particular error probability. So that's about as hard and sophisticated as classical statistics gets. You want to have tests for hypotheses that are not so easy to handle. People somehow think of clever ways of doing tests of this kind-- how to compare the theoretical predictions with the observed data, come up with some measure of the difference between theory and data, and if that difference is big, then you reject your hypothesis. OK, of course that's not the end of the field of statistics, there's a lot more. In some ways, as we kept moving through today's lecture, the way that we constructed those rejection regions was more and more ad hoc. I pulled out of a hat a particular measure of fit between data and the model, and I said, let's just use a test based on this. There are attempts at more or less systematic ways of coming up with the general shape of rejection regions that have at least some desirable or favorable theoretical properties. Some more specific problems that people study-- instead of having a test, is this the correct PDF, yes or no, I just give you data, and I ask you, tell me, give me a model or a PDF for those data. Methods of this kind are of many types. One general method is you form a histogram, and then you take your histogram and plot a smooth line that kind of fits the histogram. This still leaves the question of how you choose the bins-- the bin size in your histograms. How narrow do you take them? And that depends on how much data you have, and there's a lot of theory that tells you about the best way of choosing the bin sizes, and the best ways of smoothing the data that you have. A completely different topic is in signal processing-- you want to do your inference, and not only do you want it to be good, but you also want it to be fast in a computational way. You get data in real time, lots of data. You want to keep processing and revising your estimates and your decisions as they come and go.
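Going back to the empirical-CDF comparison for a moment, here is what that test looks like as a short sketch (my own, assuming Python with NumPy and SciPy): compute the maximum vertical distance between the empirical CDF and the assumed standard normal CDF, and compare it to the 1.36 over square root of n threshold quoted above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 400
x = np.sort(rng.normal(0.0, 1.0, size=n))       # data; here actually drawn from H0

# Maximum distance between the empirical CDF (a staircase) and the assumed N(0,1) CDF.
F = norm.cdf(x)
d_plus = np.max(np.arange(1, n + 1) / n - F)    # gap just after each jump of the staircase
d_minus = np.max(F - np.arange(0, n) / n)       # gap just before each jump
gap = max(d_plus, d_minus)

threshold = 1.36 / np.sqrt(n)                   # approximate 5% critical value
print(round(gap, 4), round(threshold, 4))
print("reject H0" if gap > threshold else "do not reject H0")
```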
Another topic that was briefly touched upon in the last couple of lectures is that when you set up a model, like a linear regression model, you choose some explanatory variables, and you try to predict Y from your X variables. You have a choice of what to take as your explanatory variables. Are there systematic ways of picking the right X variables to try to estimate a Y? For example, should I try to estimate Y on the basis of X, or on the basis of X-squared? How do I decide between the two? Finally, the rage these days has to do with anything big, high-dimensional-- complicated models of complicated things, and tons and tons of data. So these days data are generated everywhere. The amounts of data are humongous. Also, the problems that people are interested in tend to be very complicated, with lots of parameters. So you need specially tailored methods that can give you good results, or decent results, even in the face of these huge amounts of data, and possibly with computational constraints. So with huge amounts of data you want methods that are simple, but can still deliver meaningful answers for you. Now, as I mentioned some time ago, this whole field of statistics is very different from the field of probability. In some sense all that we're doing in statistics is probabilistic calculations. That's what the theory kind of does. But there's a big element of art. You saw that we chose the shape of some decision regions or rejection regions in a somewhat ad hoc way. There are even more basic things. How do you organize your data? How do you think about which hypotheses you would like to test, and so on. There's a lot of art that's involved here, and there's a lot that can go wrong. So I'm going to close with a note that you can take either as pessimistic or optimistic. There is a famous paper that came out a few years ago and has been cited about 1,000 times or so. And the title of the paper is Why Most Published Research Findings Are False. And it's actually a very good argument why, in fields like psychology or the medical sciences and all that, a lot of what you see published-- that yes, this drug has an effect on that particular disease-- is actually false, because people do not do their statistics correctly. There are lots of biases in what people do. I mean, an obvious bias is that you only publish a result when you see something. So the null hypothesis is that the drug doesn't work. You do your tests, the drug didn't work, OK, you just go home and cry. But if by accident that 5% happens, and even though the drug doesn't work, you got some outlier data and it seemed to be working, then you're excited, you publish it. So that's clearly a bias that gets results published even though they do not have a solid foundation behind them. Then there's another thing, OK? I'm picking my 5%. So if H0 is true, there's a small probability that the data will look like an outlier, and in that case I publish my result. OK, it's only 5%-- it's not going to happen too often. But suppose that I go and do 1,000 different tests? Test H0 against this hypothesis, test H0 against that hypothesis, test H0 against that hypothesis. Some of these tests, just by accident, might turn out to be in favor of H1, and again these are selected to be published. So if you do lots and lots of tests, and in each one you have a 5% probability of error, when you consider the collection of all those tests, the probability of making incorrect inferences is actually a lot more than 5%.
One basic principle in being systematic about such studies is that you should first pick your hypothesis that you're going to test, then get your data, and do your hypothesis testing. What would be wrong is to get your data, look at them, and say OK I'm going now to test for these 100 different hypotheses, and I'm going to choose my hypothesis to be for features that look abnormal in my data. Well, given enough data, you can always find some abnormalities just by chance. And if you choose to make a statistical test-- is this abnormality present? Yes, it will be present. Because you first found the abnormality, and then you tested for it. So that's another way that things can go wrong. So the moral of this story is that while the world of probability is really beautiful and solid, you have your axioms. Every question has a unique answer that by now you can, all of you, find in a very reliable way. Statistics is a dirty and difficult business. And that's why the subject is not over. And if you're interested in it, it's worth taking follow-on courses in that direction. OK so have good luck in the final, do well, and have a nice vacation afterwards.
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Probability_that_Three_Pieces_Form_a_Triangle.txt
In this problem, we're going to look at the probability that when you take a stick and break it into three pieces randomly, these three pieces can actually be used to form a triangle. All right, so we start out with a stick of unit length, so-- length 1. And we'll choose a point along the stick to break, and we'll choose that point uniformly at random. So let's say that we chose it here-- that's the point where we'll break it. And then, independently of this first choice, we'll again choose a second point to break it, again uniformly at random along the entire stick. So let's say the second point we chose was here. So what we have now is, we'll break it here and here, and so we'll have three pieces-- the left one, the middle one, and the right one. And we want to know, what's the probability that when you take these three pieces you can form a triangle? So the first thing we should ask ourselves is, what conditions must be satisfied in order to actually be able to form a triangle with three pieces? So you could think about, what would stop you from being able to do that? Well, one possibility is that you have pieces that look like this. So in that case you would try to form something that looks like this, but you can't get a triangle, because these two pieces are too short and they can't touch each other. So actually the condition that must be satisfied is that when you take any two of the three pieces, their combined length has to be greater than the length of the remaining third piece. And that has to be true for any two pieces. And really that's just so that any two pieces can touch and still form a triangle. So let's try to add some probability to this. So we have a unit length stick, so let's actually give it a coordinate system. The stick goes from 0 to 1. And let's say that we break it at these two points. So the first point that we choose, we'll call that x-- that's the first point that we choose to break it. And then the second point we choose, we'll call that y. Now note that I've drawn it so that x is to the left of y. But it could actually be the case that the first point I chose is here and the second point that I chose is to the left. But for now, let's first assume that this scenario holds-- that the first point is to the left of the second point. So under this assumption, from the definition of these random variables, we can see that the pieces are given by these three lengths. So the leftmost piece has length x, the second, middle piece has length y minus x, and the last piece has length 1 minus y. And now let's recall our three conditions. The conditions were that the sum of any two lengths has to be greater than the length of the third piece. So let's do these together. So x plus y minus x has to be greater than 1 minus y-- so with these two pieces you can cover the third piece. We also need that with the first and third pieces we can cover the middle piece. And we need that with the second and third pieces we can cover the first piece. Now this looks kind of messy, but in fact we can actually simplify it. So this first one simplifies-- x minus x disappears-- to 2y has to be greater than 1, or, even more simply, y has to be greater than 1/2. What about this one? This one, we can rearrange things again-- x we can move over, y we can move over here-- and we get that 2x plus 1 has to be greater than 2y.
Or, put in other words, y is less than x plus 1/2. And for the last one, again we can simplify. The y's cancel each other out, and we're left with 2x is less than 1, or x is less than 1/2. So these are our three conditions that need to be satisfied. So now we just have to figure out, what's the probability that these are actually satisfied? Now let's go back to the original definition and see what the actual distributions for these random variables, x and y, are. Remember, we defined them so that x is the location of the first break and y is the location of the second break. And as we said in the problem, these are chosen uniformly at random and they're independent. And so we can actually draw out their joint PDF. So x and y can cover any point in this square. And moreover, it's actually uniform within the square, because each one is chosen uniformly at random and they're independent. So it's anywhere in here. And so what do we need to do? We just need to identify, what is the probability that these three conditions hold? Let's rewrite this and line these up. So these are our three conditions that we need. And now remember, we're still working under the assumption that the first point that we chose is actually to the left of the second point. So what does that mean? That means that we are actually in this top triangle, the top half-- x is less than y. All right, so what do we need? We need y to be at least 1/2, so here's 1/2. So we need y to be above this line. We need x to be less than 1/2. So we need x to be to the left of here. So now, so far, we're stuck in this upper square. And the last thing we need is y to be less than x plus 1/2. The line y equals x plus 1/2 is this one. So y has to be less than that, so it would have to be in this triangle here. So these three conditions tell us that in order for us to have a triangle, we need x and y to fall jointly in this small triangle here. Now because the joint distribution is uniform, we know that the density is just 1, right? Because the area here is just 1, so the height is just 1 as well. And so the probability of falling within this small triangle is just going to be the area of this triangle. And what is the area of this triangle? Well, you can fit 8 of these triangles in here, or you could think of it as 1/2 times 1/2 times 1/2. So the area is just 1/8. So assuming that x is less than y, the probability of forming a triangle is 1/8. Now, that's only half the story, though, because it's possible that when you chose these two break points we actually had the opposite result-- that x, the point that you chose first, falls to the right of the point that you chose second. In which case everything kind of flips. Now we assume that y is less than x, which means that now we're in this lower triangle in the square. Now we can go through this whole exercise again. But really, what we can see is that all we've really done is just swap the names-- instead of having x and y, we now call x y and we call y x. And so if we just swap names, we can see that-- let's just fast forward through all these steps and see that we can just swap the names here, too, in the three conditions. So instead of needing y to be greater than 1/2, we just need x to be greater than 1/2. Instead of having x less than 1/2, we need y less than 1/2. We also swap this, so we need x to be less than y plus 1/2, or y is greater than x minus 1/2. All right, now let's figure out what this corresponds to.
We need x to be greater than 1/2, so it needs to be to the right of here. We need y to be less than 1/2, so we need it to be below this line. And we need y to be greater than x minus 1/2. What is the line y equals x minus 1/2? That is this line here. And we need y to be greater than that, so it needs to be above this line. And so we get that this is the triangle, the small triangle, that we need in this case. And notice that it's exactly the same area as this one, right? And so we get another contribution of 1/8 here. So the final answer is 1/8 plus 1/8, which is 1/4. So the probability of forming a triangle using these three pieces is exactly 1/4. And so notice what we've done: we set things up very methodically in the beginning by assigning these random variables, and we considered the different cases. Because you don't actually know the order in which x and y might fall, we just assumed one particular order and worked from there, and then did the other case as well. And it just so happened that because of the symmetry of the problem, the second case was actually very simple. We could just see that it is actually symmetric, and so we get the same answer. So this is kind of an interesting problem because it's actually a practical application of something that you might actually do. And you can see that just by applying these probability concepts you can actually--
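The 1/4 answer is easy to confirm with a quick Monte Carlo check. Here is a sketch (my own, assuming Python with NumPy) that breaks the stick at two independent uniform points and tests the triangle condition directly.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 1_000_000
x = rng.random(trials)            # first break point, uniform on [0, 1]
y = rng.random(trials)            # second break point, independent and uniform

left = np.minimum(x, y)
right = np.maximum(x, y)
a, b, c = left, right - left, 1.0 - right     # the three piece lengths

# Every pair of pieces must sum to more than the third piece; since the lengths
# add up to 1, that is the same as saying no single piece reaches length 1/2.
forms_triangle = (a < 0.5) & (b < 0.5) & (c < 0.5)
print(forms_triangle.mean())      # close to 0.25
```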
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Monty_Hall_Problem.txt
Hi. In this session, we'll be solving the Monty Hall problem. And this problem is based on an old game show that was called "Let's Make a Deal." And the host of this game show, his name was Monty Hall, which is why this problem is now known as the Monty Hall problem. And this problem is actually pretty well-known, because there was some disagreement at the time over what the right answer to this problem should be. Even some really smart people didn't agree on what the right answer should be. And part of what might explain that disagreement is that they probably were considering slightly different variations of the problem, because, as in all probability problems, the assumptions that you're working with are very important-- otherwise you may be solving an entirely different problem. And so what we'll do first is really lay out concretely what all the assumptions are, what the rules of the game are. And then we'll go through the methodology to solve for the actual answer. So the game is actually relatively simple. You're on a game show and you're presented with three doors. These doors are closed. And behind one of these doors is a prize, let's say a car. And behind the other two doors, there's nothing. You don't know which one it is. And the rules of the game are that, first, you get to choose any one of these three. So you pick one of the doors that you want. They don't show you what's behind that door, but your friend, who actually knows which door has the prize behind it, will look at the remaining doors. So let's say, just for example, that you chose door one. Your friend will look at the other two doors and open one of them. And he will make sure that the one that he opens is empty-- that is, the prize is not behind that one. And at this point, one of the doors is open and it's empty, and you have your original door plus another unopened door. And you're given an option-- you can either stay with your initial choice or you can switch to the other unopened door. And whichever one is your final choice, they will open that door. And if there's a prize behind it, you win, and if there's not, then you don't win. So the question that we're trying to answer is, what is the better strategy here? Is the better strategy to stay with your initial choice, or is it better to switch to the other unopened door? OK, so it turns out that the specific rules here actually are very important-- specifically, the rule about how your friend chooses to open doors, and the fact that he will always open one of the two other doors that you haven't picked, and he will make sure that that door doesn't have a prize behind it. And let's see how that actually plays out in this problem. So the simplest way, I think, of thinking about this problem is just to think about under what circumstances staying with your initial choice wins. So if you think about it, the only way that you can win by staying with your initial choice is if your initial choice happened to be the door that has the prize behind it. And because you're sticking with the initial choice, you can actually kind of forget about the rest of the game, about the opening of the other door and about switching. It's as if you're playing a simpler game, which is just: you have three doors, one of them has a prize behind it, and you choose one of them. And if you guessed right, then you win. If you didn't, then you don't win.
And another important assumption is that the prize has an equal probability of being behind any one of the three doors, so one third, one third, one third. Because of that, if you stay with your first choice, you win only if your first choice happened to be the right one. And that is the case with probability one third. So with that simple argument you can convince yourself that the probability of winning, given the strategy of staying with your first choice, is one third. Now, let's think about the other strategy, which is to switch. So under what circumstances does switching win for you? Well, if your first choice happened to be the right door, then switching away from that door will always lose. But that happens with probability one third. The rest of the time, with probability 2/3, your first choice would be wrong. So let's give an example here. Let's say the prize, which I'll denote by a happy face, is behind door two. And your first choice was door one. So your first choice was wrong. Now, your friend can't open door two, because door two has the prize behind it. He also doesn't open the door that you initially picked. So he has to open door three. So door three is open, and now you have the option of sticking with your first choice-- door one-- or switching to door two. So in this case, it's easy to see that switching wins for you. And now, if instead, you picked door one first, and the prize was behind door three, again, you are wrong. And again, your friend is forced to open door two. And switching, again, wins for you. And so if you think about it, switching will win for you, as long as your initial pick was wrong. If your initial pick was wrong, then the prize is behind one of the other doors. Your friend has to open one of the doors, but he can't open the door that has the prize behind it. So he has to open the other bad door, leaving the good door with the prize behind it as the one that you can switch to. And so by switching you will win in this scenario. And what is the probability of that happening? Well, that happens if your initial pick was wrong, which happens with probability 2/3. So the final answer then, it's pretty simple, the probability of winning if you stay is one third, and the probability of winning if you switch is 2/3. And so maybe counterintuitively the result is that it's actually better for you, twice as good for you, to switch rather than stay. And so that was the argument, the kind of simple argument. We can also be more methodical about this and actually list out all of the possible outcomes. Because it's a relatively small problem-- there's only three doors-- we can actually just list out all the possible outcomes. So for example, if you chose door one first, and the prize was behind door one, your friend has a choice. He can open door two or door three, because they're both empty. And then in that case, if you stay, you win, you picked the door correctly. And if you switch to two or three, then you lose. But if you chose door one and the prize is behind door two, then your friend has to open door three, he is forced to do that, and then staying would lose but switching would win. And so on for the other cases. And so again, this is just an exhaustive list of all the possible outcomes, from which you can see that, in fact, staying wins only if your first choice was correct. And switching wins in all the other cases. And so one third of the time, staying would win, 2/3 of the time switching would win. OK, so now, we have the answer.
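Here is a minimal simulation sketch (not from the original session) of the two pure strategies under exactly these rules: the prize is equally likely to be behind each door, and the friend always opens an unchosen door that does not hide the prize.

```python
# Simulate the stay and switch strategies under the stated rules.
import random

def play(switch):
    doors = [1, 2, 3]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The friend opens an empty door among the ones you did not pick.
    opened = random.choice([d for d in doors if d != first_pick and d != prize])
    final = next(d for d in doors if d not in (first_pick, opened)) if switch else first_pick
    return final == prize

trials = 200_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)   # close to 1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)    # close to 2/3
```

The simulated frequencies agree with the one third versus 2/3 answer; what follows builds the intuition for why.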
Let's try to figure out and convince ourselves that it is actually right, because you might think, before going through this process, that maybe it doesn't matter whether you stay or you switch, that they both have the same probability of winning, or maybe even that staying is better. So why is staying worse and switching better? Well, the first argument really is something that we've already talked about. By staying, you're essentially banking on your first choice being correct, which is a relatively poor bet, because you have only a one in three chance of being right. But by switching, you're actually banking on your first choice being wrong, which is a relatively better bet, because you're more likely to be wrong than right in your first choice, because you're just picking blindly. OK, so that is one intuitive explanation for why switching is better. Another slightly different way to think about it is that instead of picking single doors, you're actually picking groups of doors. So let's say that your first pick was door one. Then you're actually really deciding between door one or doors two and three combined. So why is that? It's because by staying with door one, you're staying with door one. But by switching, you're actually getting two doors for the price of one, because you know that your friend will reveal one of these to be empty, and the other one will stay closed. But switching really kind of buys you both of these. And so because it buys you two opportunities to win, you get a 2/3 chance of winning, versus a one third chance. Another way of thinking about this is to increase the scale of the problem, and maybe that will help visualize the counterintuitive answer. So instead of having three doors, imagine that you have 1,000 doors that are closed. And again, one prize is behind one of the doors. And the rules are similar-- you pick one door first, and then your friend will open 998 other doors. And these doors are guaranteed to be empty. And now you're left with your initial door plus one other door that is unopened. So now the question is should you stay with your first choice or switch to the other unopened door? And it should be more intuitively obvious now that the better decision would be to switch, because you're overwhelmingly more likely to have picked incorrectly with your first pick. You have only a 1 in 1,000 chance of getting it right. So that is kind of just taking this to a bigger extreme and really driving home the intuition. OK, so what we've really discovered is that the fact that the rules of the game are that your friend has to open one of the other two doors and cannot reveal the prize plays a big role in this problem. And that is an important assumption. OK, so now let's think about a slightly different variation, a different strategy. Instead of just always staying or always switching, we have a specific other strategy, which is that you will choose door one first and then, depending on what your friend does, you will act accordingly. So if your friend opens door two, you will not switch. And if your friend opens door three, you will switch. So let's draw out exactly what happens here. So you have door one that you've chosen. And the prize can be behind doors one, two, or three. And again, it's equally likely. So the probabilities of these branches are one third, one third, and one third. And now given that, your friend in this scenario has a choice between opening doors two or three.
And so because you chose door one and the prize actually is behind door one, doors two and three are both empty, so he can choose whichever one he wants to open. And the problem actually hasn't specified how your friend decides between these. So we'll leave it general. So we'll say that with probability p, your friend will open door two in this case. And with the remaining probability 1 minus p, he will open door three. What about in this case? Well, you chose door one. The prize is actually behind door two. So following the rules of the game, your friend is forced to open door three. So this happens with probability 1. And similarly, if the prize is behind door three, your friend is forced to open door two, which, again, happens with probability 1. So now let's see how this strategy works. When do you win? You win when, according to the strategy, your final choice is the right door. So according to the strategy, in this case, your friend opened door two. And according to your strategy, if door two is open, you don't switch. So you stay with your first choice of one. And that happens to be the right one, so you win in this case. But what about here? Your friend opened door three, and by your strategy, you do switch, which is the wrong choice here, so you lose. Here, you switch, because your friend opened door three, and you switch to the right door, so that wins. And in this one, you don't switch, and you lose. All right, so what is the final probability of winning? The final probability of winning is the probability of getting to these two outcomes, which happens with probability one third times p plus one third times 1, so one third. So the final answer is one third p plus one third. And notice now that the answer isn't just a number. In the earlier cases, the answers were one third and 2/3, and it didn't actually matter how your friend chose between these two doors when he had a choice. But in this case, it actually does matter, because p stays in the answer. But one thing that we can do is we can compare this with those strategies. So what we see is that, well, p is a probability, so it has to be between 0 and 1. So the probability of winning for this strategy is somewhere between one third times 0 plus one third, which is one third, and one third times 1 plus one third, which is 2/3. So the strategy is somewhere between one third and 2/3. So what we see is that no matter what, this strategy is at least as good as staying all the time, because that was only one third. And no matter what, it can't be any better than switching, which was 2/3. So you can also come up with lots of other different strategies and see what the probabilities of winning are in that case. OK, so what have we learned in this problem? What are the key takeaways? One important takeaway is that it's important to really understand a problem and arrive at a concrete and precise set of assumptions. So really have a precise problem that you're solving. And another important takeaway is to think about your final answer, make sure that it actually makes sense to you, make sure that you can justify it somehow intuitively. That way, you can actually convince yourself that your answer is correct, because sometimes you go through a lot of formulas, and your formula may have an error in there somewhere. But you can take the final answer and ask yourself, does this actually make sense intuitively? That's often a very good check, and sometimes you can catch errors in your calculations that way.
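As a small check of the one third p plus one third answer, here is a sketch (not from the original session) that enumerates the tree drawn above, with p standing for the probability that the friend opens door two when he has a free choice.

```python
# Exact win probability of the strategy: pick door 1, stay if the
# friend opens door 2, switch if he opens door 3.
def win_probability(p):
    total = 0.0
    for prize in (1, 2, 3):                    # each prize location has probability 1/3
        if prize == 1:
            branches = [(2, p), (3, 1 - p)]    # friend may open either empty door
        elif prize == 2:
            branches = [(3, 1.0)]              # forced: door 2 hides the prize
        else:
            branches = [(2, 1.0)]              # forced: door 3 hides the prize
        for opened, prob_open in branches:
            # Stay on door 1 if door 2 was opened; otherwise switch to the
            # remaining closed door.
            final = 1 if opened == 2 else ({2, 3} - {opened}).pop()
            if final == prize:
                total += (1 / 3) * prob_open
    return total

for p in (0.0, 0.5, 1.0):
    print(p, win_probability(p))   # matches 1/3 * p + 1/3: about 0.333, 0.5, 0.667
```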
OK so we'll see next time.
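And as one last editorial aside, here is a short sketch of the 1,000-door variant mentioned above, written for a general number of doors n, under the assumption that the friend opens n minus 2 empty doors among the ones you did not pick.

```python
# n-door variant: stay wins with probability 1/n, switch with (n - 1)/n.
import random

def play(n, switch):
    prize = random.randrange(n)
    first_pick = random.randrange(n)
    if not switch:
        return first_pick == prize
    # After the friend opens n - 2 empty unchosen doors, switching wins
    # exactly when your first pick was wrong.
    return first_pick != prize

trials = 100_000
for n in (3, 1000):
    stay = sum(play(n, False) for _ in range(trials)) / trials
    switch = sum(play(n, True) for _ in range(trials)) / trials
    print(n, "stay:", stay, "switch:", switch)
```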
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
8_Continuous_Random_Variables.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN TSITSIKLIS: OK. We can start. Good morning. So we're going to start now a new unit. For the next couple of lectures, we will be talking about continuous random variables. So this is new material which is not going to be in the quiz. You are going to have a long break next week without any lecture, just a quiz and recitation and tutorial. So what's going to happen in this new unit? Basically, we want to do everything that we did for discrete random variables, reintroduce the same sort of concepts but see how they apply and how they need to be modified in order to talk about random variables that take continuous values. At some level, it's all the same. At some level, it's quite a bit harder because when things are continuous, calculus comes in. So the calculations that you have to do on the side sometimes need a little bit more thinking. In terms of new concepts, there's not going to be a whole lot today, some analogs of things we have done. We're going to introduce the concept of cumulative distribution functions, which allows us to deal with discrete and continuous random variables, all of them in one shot. And finally, introduce a famous kind of continuous random variable, the normal random variable. OK, so what's the story? Continuous random variables are random variables that take values over the continuum. So the numerical value of the random variable can be any real number. They don't take values just in a discrete set. So we have our sample space. The experiment happens. We get some omega, a sample point in the sample space. And once that point is determined, it determines the numerical value of the random variable. Remember, random variables are functions on the sample space. You pick a sample point. This determines the numerical value of the random variable. So that numerical value is going to be some real number on that line. Now we want to say something about the distribution of the random variable. We want to say which values are more likely than others to occur in a certain sense. For example, you may be interested in a particular event, the event that the random variable takes values in the interval from a to b. And we want to say something about the probability of that event. In principle, how is this done? You go back to the sample space, and you find all those outcomes for which the value of the random variable happens to be in that interval. The probability that the random variable falls here is the same as the probability of all outcomes that make the random variable to fall in there. So in principle, you can work on the original sample space, find the probability of this event, and you would be done. But similar to what happened in chapter 2, we want to kind of push the sample space in the background and just work directly on the real axis and talk about probabilities up here. So we want now a way to specify probabilities, how they are bunched together, or arranged, along the real line. So what did we do for discrete random variables? We introduced PMFs, probability mass functions. And the way that we described the random variable was by saying this point has so much mass on top of it, that point has so much mass on top of it, and so on. 
And so we assigned a total amount of 1 unit of probability. We assigned it to different masses, which we put at different points on the real axis. So that's what you do if somebody gives you a pound of discrete stuff, a pound of mass in little chunks. And you place those chunks at a few points. Now, in the continuous case, this total unit of probability mass does not sit just on discrete points but is spread all over the real axis. So now we're going to have a unit of mass that spreads on top of the real axis. How do we describe masses that are continuously spread? The way we describe them is by specifying densities. That is, how thick is the mass that's sitting here? How dense is the mass that's sitting there? So that's exactly what we're going to do. We're going to introduce the concept of a probability density function that tells us how probabilities accumulate at different parts of the real axis. So here's an example or a picture of a possible probability density function. What does that density function kind of convey intuitively? Well, that these x's are relatively less likely to occur. Those x's are somewhat more likely to occur because the density is higher. Now, for a more formal definition, we're going to say that a random variable X is said to be continuous if it can be described by a density function in the following sense. We have a density function. And we calculate probabilities of falling inside an interval by finding the area under the curve that sits on top of that interval. So that's sort of the defining relation for continuous random variables. It's an implicit definition. And it tells us a random variable is continuous if we can calculate probabilities this way. So the probability of falling in this interval is the area under this curve. Mathematically, it's the integral of the density over this particular interval. If the density happens to be constant over that interval, the area under the curve would be the length of the interval times the height of the density, which sort of makes sense. Now, because the density is not constant but it kind of moves around, what you need is to write down an integral. Now, this formula is very much analogous to what you would do for discrete random variables. For a discrete random variable, how do you calculate this probability? You look at all x's in this interval. And you add the probability mass function over that range. So just for comparison, this would be the formula for the discrete case-- the sum over all x's in the interval from a to b over the probability mass function. And there is a syntactic analogy that's happening here and which will be a persistent theme when we deal with continuous random variables. Sums get replaced by integrals. In the discrete case, you add. In the continuous case, you integrate. Mass functions get replaced by density functions. So you can take pretty much any formula from the discrete case and translate it to a continuous analog of that formula, as we're going to see. OK. So let's take this now as our model. What is the probability that the random variable takes a specific value if we have a continuous random variable? Well, this would be the case. It's a case of a trivial interval, where the two end points coincide. So it would be the integral from a to itself. So you're integrating just over a single point. Now, when you integrate over a single point, the integral is just 0. The area under the curve, if you're only looking at a single point, it's 0. 
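To make the defining relation concrete, here is a small numerical sketch (not from the original lecture) that approximates the probability of an interval as the area under a density. The density f(x) = 2x on [0, 1] is just an illustrative choice, not one used in the lecture.

```python
# Approximate P(a <= X <= b) as the integral of the density from a to b.
def interval_probability(f, a, b, steps=100_000):
    # Simple midpoint Riemann-sum approximation of the integral.
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

f = lambda x: 2 * x if 0 <= x <= 1 else 0.0   # an example density

print(interval_probability(f, 0.2, 0.5))   # about 0.5**2 - 0.2**2 = 0.21
print(interval_probability(f, 0.0, 1.0))   # total probability, about 1
print(interval_probability(f, 0.5, 0.5))   # a single point has zero area, hence probability 0
```

The last line previews the point made next: shrinking the interval to a single point drives the area, and therefore the probability, to zero.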
So big property of continuous random variables is that any individual point has 0 probability. In particular, when you look at the value of the density, the density does not tell you the probability of that point. The point itself has 0 probability. So the density tells you something a little different. We are going to see shortly what that is. Before we get there, can the density be an arbitrary function? Almost, but not quite. There are two things that we want. First, since densities are used to calculate probabilities, and since probabilities must be non-negative, the density should also be non-negative. Otherwise you would be getting negative probabilities, which is not a good thing. So that's a basic property that any density function should obey. The second property that we need is that the overall probability of the entire real line should be equal to 1. So if you ask me, what is the probability that x falls between minus infinity and plus infinity, well, we are sure that x is going to fall in that range. So the probability of that event should be 1. So the probability of being between minus infinity and plus infinity should be 1, which means that the integral from minus infinity to plus infinity should be 1. So that just tells us that there's 1 unit of total probability that's being spread over our space. Now, what's the best way to think intuitively about what the density function does? The interpretation that I find most natural and easy to convey the meaning of a density is to look at probabilities of small intervals. So let us take an x somewhere here and then x plus delta just next to it. So delta is a small number. And let's look at the probability of the event that we get a value in that range. For continuous random variables, the way we find the probability of falling in that range is by integrating the density over that range. So we're drawing this picture. And we want to take the area under this curve. Now, what happens if delta is a fairly small number? If delta is pretty small, our density is not going to change much over that range. So you can pretend that the density is approximately constant. And so to find the area under the curve, you just take the base times the height. And it doesn't matter where exactly you take the height in that interval, because the density doesn't change very much over that interval. And so the integral becomes just base times the height. So for small intervals, the probability of a small interval is approximately the density times delta. So densities essentially give us probabilities of small intervals. And if you want to think about it a little differently, you can take that delta from here and send it to the denominator there. And what this tells you is that the density is probability per unit length for intervals of small length. So the units of density are probability per unit length. Densities are not probabilities. They are rates at which probabilities accumulate, probabilities per unit length. And since densities are not probabilities, they don't have to be less than 1. Ordinary probabilities always must be less than 1. But density is a different kind of thing. It can get pretty big in some places. It can even sort of blow up in some places. As long as the total area under the curve is 1, other than that, the curve can do anything that it wants. Now, the density prescribes for us the probability of intervals. Sometimes we may want to find the probability of more general sets. How would we do that? 
Well, for nice sets, you will just integrate the density over that nice set. I'm not quite defining what "nice" means. That's a pretty technical topic in the theory of probability. But for our purposes, usually we will take b to be something like a union of intervals. So how do you find the probability of falling in the union of two intervals? Well, you find the probability of falling in that interval plus the probability of falling in that interval. So it's the integral over this interval plus the integral over that interval. And you think of this as just integrating over the union of the two intervals. So once you can calculate probabilities of intervals, then usually you are in business, and you can calculate anything else you might want. So the probability density function is a complete description of any statistical information we might be interested in for a continuous random variable. OK. So now we can start walking through the concepts and the definitions that we have for discrete random variables and translate them to the continuous case. The first big concept is the concept of the expectation. One can start with a mathematical definition. And here we put down a definition by just translating notation. Wherever we have a sum in the discrete case, we now write an integral. And wherever we had the probability mass function, we now throw in the probability density function. This formula-- you may have seen it in freshman physics-- basically, it again gives you the center of gravity of the picture that you have when you have the density. It's the center of gravity of the object sitting underneath the probability density function. So that the interpretation still applies. It's also true that our conceptual interpretation of what an expectation means is also valid in this case. That is, if you repeat an experiment a zillion times, each time drawing an independent sample of your random variable x, in the long run, the average that you are going to get should be the expectation. One can reason in a hand-waving way, sort of intuitively, the way we did it for the case of discrete random variables. But this is also a theorem of some sort. It's a limit theorem that we're going to visit later on in this class. Having defined the expectation and having claimed that the interpretation of the expectation is that same as before, then we can start taking just any formula you've seen before and just translate it. So for example, to find the expected value of a function of a continuous random variable, you do not have to find the PDF or PMF of g(X). You can just work directly with the original distribution of the random variable capital X. And this formula is the same as for the discrete case. Sums get replaced by integrals. And PMFs get replaced by PDFs. And in particular, the variance of a random variable is defined again the same way. The variance is the expected value, the average of the distance of X from the mean and then squared. So it's the expected value for a random variable that takes these numerical values. And same formula as before, integral and F instead of summation, and the P. And the formulas that we have derived or formulas that you have seen for the discrete case, they all go through the continuous case. So for example, the useful relation for variances, which is this one, remains true. All right. So time for an example. The most simple example of a continuous random variable that there is, is the so-called uniform random variable. 
So the uniform random variable is described by a density which is 0 except over an interval. And over that interval, it is constant. What is it meant to convey? It's trying to convey the idea that all x's in this range are equally likely. Well, that doesn't say very much. Any individual x has 0 probability. So it's conveying a little more than that. What it is saying is that if I take an interval of a given length delta, and I take another interval of the same length, delta, under the uniform distribution, these two intervals are going to have the same probability. So being uniform means that intervals of same length have the same probability. So no interval is more likely than any other to occur. And in that sense, it conveys the idea of sort of complete randomness. Any little interval in our range is equally likely as any other little interval. All right. So what's the formula for this density? I only told you the range. What's the height? Well, the area under the density must be equal to 1. Total probability is equal to 1. And so the height, inescapably, is going to be 1 over (b minus a). That's the height that makes the density integrate to 1. So that's the formula. And if you don't want to lose one point in your exam, you have to say that it's also 0, otherwise. OK. All right? That's sort of the complete answer. How about the expected value of this random variable? OK. You can find the expected value in two different ways. One is to start with the definition. And so you integrate over the range of interest times the density. And you figure out what that integral is going to be. Or you can be a little more clever. Since the center-of-gravity interpretation is still true, it must be the center of gravity of this picture. And the center of gravity is, of course, the midpoint. Whenever you have symmetry, the mean is always the midpoint of the diagram that gives you the PDF. OK. So that's the expected value of X. Finally, regarding the variance, well, there you will have to do a little bit of calculus. We can write down the definition. So it's an integral instead of a sum. A typical value of the random variable minus the expected value, squared, times the density. And we integrate. You do this integral, and you find it's (b minus a) squared over that number, which happens to be 12. Maybe more interesting is the standard deviation itself. And you see that the standard deviation is proportional to the width of that interval. This agrees with our intuition, that the standard deviation is meant to capture a sense of how spread out our distribution is. And the standard deviation has the same units as the random variable itself. So it's sort of good to-- you can interpret it in a reasonable way based on that picture. OK, yes. Now, let's go up one level and think about the following. So we have formulas for the discrete case, formulas for the continuous case. So you can write them side by side. One has sums, the other has integrals. Suppose you want to make an argument and say that something is true for every random variable. You would essentially need to do two separate proofs, for discrete and for continuous. Is there some way of dealing with random variables just one at a time, in one shot, using a sort of uniform notation? Is there a unifying concept? Luckily, there is one. It's the notion of the cumulative distribution function of a random variable. And it's a concept that applies equally well to discrete and continuous random variables. 
So it's an object that we can use to describe distributions in both cases, using just one piece of notation. So what's the definition? It's the probability that the random variable takes values less than a certain number little x. So you go to the diagram, and you see what's the probability that I'm falling to the left of this. And you specify those probabilities for all x's. In the continuous case, you calculate those probabilities using the integral formula. So you integrate from here up to x. In the discrete case, to find the probability to the left of some point, you go here, and you add probabilities again from the left. So the way that the cumulative distribution function is calculated is a little different in the continuous and discrete case. In one case you integrate. In the other, you sum. But leaving aside how it's being calculated, what the concept is, it's the same concept in both cases. So let's see what the shape of the cumulative distribution function would be in the two cases. So here what we want is to record for every little x the probability of falling to the left of x. So let's start here. Probability of falling to the left of here is 0-- 0, 0, 0. Once we get here and we start moving to the right, the probability of falling to the left of here is the area of this little rectangle. And the area of that little rectangle increases linearly as I keep moving. So accordingly, the CDF increases linearly until I get to that point. At that point, what's the value of my CDF? 1. I have accumulated all the probability there is. I have integrated it. This total area has to be equal to 1. So it reaches 1, and then there's no more probability to be accumulated. It just stays at 1. So the value here is equal to 1. OK. How would you find the density if somebody gave you the CDF? The CDF is the integral of the density. Therefore, the density is the derivative of the CDF. So you look at this picture and take the derivative. Derivative is 0 here, 0 here. And it's a constant up there, which corresponds to that constant. So more generally, and an important thing to know, is that the derivative of the CDF is equal to the density-- almost, with a little bit of an exception. What's the exception? At those places where the CDF does not have a derivative-- here where it has a corner-- the derivative is undefined. And in some sense, the density is also ambiguous at that point. Is my density at the endpoint, is it 0 or is it 1? It doesn't really matter. If you change the density at just a single point, it's not going to affect the value of any integral you ever calculate. So the value of the density at the endpoint, you can leave it as being ambiguous, or you can specify it. It doesn't matter. So at all places where the CDF has a derivative, this will be true. At those places where you have corners, which do show up sometimes, well, you don't really care. How about the discrete case? In the discrete case, the CDF has a more peculiar shape. So let's do the calculation. We want to find the probability of b to the left of here. That probability is 0, 0, 0. Once we cross that point, the probability of being to the left of here is 1/6. So as soon as we cross the point 1, we get the probability of 1/6, which means that the size of the jump that we have here is 1/6. Now, question. At this point 1, which is the correct value of the CDF? Is it 0, or is it 1/6? It's 1/6 because-- you need to look carefully at the definitions, the probability of x being less than or equal to little x. 
If I take little x to be 1, it's the probability that capital X is less than or equal to 1. So it includes the event that x is equal to 1. So it includes this probability here. So at jump points, the correct value of the CDF is going to be this one. And now as I trace, x is going to the right. As soon as I cross this point, I have added another 3/6 probability. So that 3/6 causes a jump to the CDF. And that determines the new value. And finally, once I cross the last point, I get another jump of 2/6. A general moral from these two examples and these pictures. CDFs are well defined in both cases. For the case of continuous random variables, the CDF will be a continuous function. It starts from 0. It eventually goes to 1 and goes smoothly-- well, continuously from smaller to higher values. It can only go up. It cannot go down since we're accumulating more and more probability as we are going to the right. In the discrete case, again it starts from 0, and it goes to 1. But it does it in a staircase manner. And you get a jump at each place where the PMF assigns a positive mass. So jumps in the CDF are associated with point masses in our distribution. In the continuous case, we don't have any point masses, so we do not have any jumps either. Now, besides saving us notation-- we don't have to deal with discrete and continuous twice-- CDFs give us actually a little more flexibility. Not all random variables are continuous or discrete. You can cook up random variables that are kind of neither or a mixture of the two. An example would be, let's say you play a game. And with a certain probability, you get a certain number of dollars in your hands. So you flip a coin. And with probability 1/2, you get a reward of 1/2 dollars. And with probability 1/2, you are led to a dark room where you spin a wheel of fortune. And that wheel of fortune gives you a random reward between 0 and 1. So any of these outcomes is possible. And the amount that you're going to get, let's say, is uniform. So you flip a coin. And depending on the outcome of the coin, either you get a certain value or you get a value that ranges over a continuous interval. So what kind of random variable is it? Is it continuous? Well, continuous random variables assign 0 probability to individual points. Is it the case here? No, because you have positive probability of obtaining 1/2 dollar. So our random variable is not continuous. Is it discrete? It's not discrete, because our random variable can take values also over a continuous range. So we call such a random variable a mixed random variable. If you were to draw its distribution very loosely, probably you would want to draw a picture like this one, which kind of conveys the idea of what's going on. So just think of this as a drawing of masses that are sitting over a table. We place an object that weighs half a pound, but it's an object that takes zero space. So half a pound is just sitting on top of that point. And we take another half-pound of probability and spread it uniformly over that interval. So this is like a piece that comes from mass functions. And that's a piece that looks more like a density function. And we just throw them together in the picture. I'm not trying to associate any formal meaning with this picture. It's just a schematic of how probabilities are distributed, help us visualize what's going on. Now, if you have taken classes on systems and all of that, you may have seen the concept of an impulse function. 
And you may start saying that, oh, I should treat this mathematically as a so-called impulse function. But we do not need this for our purposes in this class. Just think of this as a nice picture that conveys what's going on in this particular case. So now, what would the CDF look like in this case? The CDF is always well defined, no matter what kind of random variable you have. So the fact that it's not continuous and it's not discrete shouldn't be a problem, as long as we can calculate probabilities of this kind. So the probability of falling to the left here is 0. Once I start crossing there, the probability of falling to the left of a point increases linearly with how far I have gone. So we get this linear increase. But as soon as I cross that point, I accumulate another 1/2 unit of probability instantly. And once I accumulate that 1/2 unit, it means that my CDF is going to have a jump of 1/2. And then afterwards, I still keep accumulating probability at a fixed rate, the rate being the density. And I keep accumulating, again, at a linear rate until I settle to 1. So this is a CDF that has certain pieces where it increases continuously. And that corresponds to the continuous part of our random variable. And it also has some places where it has discrete jumps. And those discrete jumps correspond to places in which we have placed a positive mass. And by the-- OK, yeah. So this little 0 shouldn't be there. So let's cross it out. All right. So finally, we're going to take the remaining time and introduce our new friend. It's going to be the Gaussian or normal distribution. So it's the most important distribution there is in all of probability theory. It plays a very central role. It shows up all over the place. We'll see later in the class in more detail why it shows up. But the quick preview is the following. If you have a phenomenon in which you measure a certain quantity, but that quantity is made up of lots and lots of random contributions-- so your random variable is actually the sum of lots and lots of independent little random variables-- then invariably, no matter what kind of distribution the little random variables have, their sum will turn out to have approximately a normal distribution. So this makes the normal distribution arise very naturally in lots and lots of contexts. Whenever you have noise that's comprised of lots of different independent pieces of noise, then the end result will be a random variable that's normal. So we are going to come back to that topic later. But that's the preview comment, basically to argue that it's an important one. OK. And there's a special case. If you are dealing with a binomial distribution, which is the sum of lots of Bernoulli random variables, again you would expect that the binomial would start looking like a normal if you have many, many-- a large number of coin flips. All right. So what's the math involved here? Let's parse the formula for the density of the normal. What we start with is the function X squared over 2. And if you are to plot X squared over 2, it's a parabola, and it has this shape-- X squared over 2. Then what do we do? We take the negative exponential of this. So when X squared over 2 is 0, then the negative exponential is 1. When X squared over 2 increases, the negative exponential of that falls off, and it falls off pretty fast. So as this goes up, the formula for the density goes down.
And because exponentials are pretty strong in how quickly they fall off, this means that the tails of this distribution actually do go down pretty fast. OK. So that explains the shape of the normal PDF. How about this factor 1 over square root 2 pi? Where does this come from? Well, the integral has to be equal to 1. So you have to go and do your calculus exercise and find the integral of this the minus X squared over 2 function and then figure out, what constant do I need to put in front so that the integral is equal to 1? How do you evaluate that integral? Either you go to Mathematica or Wolfram's Alpha or whatever, and it tells you what it is. Or it's a very beautiful calculus exercise that you may have seen at some point. You throw in another exponential of this kind, you bring in polar coordinates, and somehow the answer comes beautifully out there. But in any case, this is the constant that you need to make it integrate to 1 and to be a legitimate density. We call this the standard normal. And for the standard normal, what is the expected value? Well, the symmetry, so it's equal to 0. What is the variance? Well, here there's no shortcut. You have to do another calculus exercise. And you find that the variance is equal to 1. OK. So this is a normal that's centered around 0. How about other types of normals that are centered at different places? So we can do the same kind of thing. Instead of centering it at 0, we can take some place where we want to center it, write down a quadratic such as (X minus mu) squared, and then take the negative exponential of that. And that gives us a normal density that's centered at mu. Now, I may wish to control the width of my density. To control the width of my density, equivalently I can control the width of my parabola. If my parabola is narrower, if my parabola looks like this, what's going to happen to the density? It's going to fall off much faster. OK. How do I make my parabola narrower or wider? I do it by putting in a constant down here. So by putting a sigma here, this stretches or widens my parabola by a factor of sigma. Let's see. Which way does it go? If sigma is very small, this is a big number. My parabola goes up quickly, which means my normal falls off very fast. So small sigma corresponds to a narrower density. And so it, therefore, should be intuitive that the standard deviation is proportional to sigma. Because that's the amount by which you are scaling the picture. And indeed, the standard deviation is sigma. And so the variance is sigma squared. So all that we have done here to create a general normal with a given mean and variance is to take this picture, shift it in space so that the mean sits at mu instead of 0, and then scale it by a factor of sigma. This gives us a normal with a given mean and a given variance. And the formula for it is this one. All right. Now, normal random variables have some wonderful properties. And one of them is that they behave nicely when you take linear functions of them. So let's fix some constants a and b, suppose that X is normal, and look at this linear function Y. What is the expected value of Y? Here we don't need anything special. We know that the expected value of a linear function is the linear function of the expectation. So the expected value is this. How about the variance? We know that the variance of a linear function doesn't care about the constant term. But the variance gets multiplied by a squared. So we get these variance, where sigma squared is the variance of the original normal. 
So have we used so far the property that X is normal? No, we haven't. This calculation here is true in general when you take a linear function of a random variable. But if X is normal, we get the other additional fact that Y is also going to be normal. So that's the nontrivial part of the fact that I'm claiming here. So linear functions of normal random variables are themselves normal. How do we convince ourselves about it? OK. It's something that we will do formerly in about two or three lectures from today. So we're going to prove it. But if you think about it intuitively, normal means this particular bell-shaped curve. And that bell-shaped curve could be sitting anywhere and could be scaled in any way. So you start with a bell-shaped curve. If you take X, which is bell shaped, and you multiply it by a constant, what does that do? Multiplying by a constant is just like scaling the axis or changing the units with which you're measuring it. So it will take a bell shape and spread it or narrow it. But it will still be a bell shape. And then when you add the constant, you just take that bell and move it elsewhere. So under linear transformations, bell shapes will remain bell shapes, just sitting at a different place and with a different width. And that sort of the intuition of why normals remain normals under this kind of transformation. So why is this useful? Well, OK. We have a formula for the density. But usually we want to calculate probabilities. How will you calculate probabilities? If I ask you, what's the probability that the normal is less than 3, how do you find it? You need to integrate the density from minus infinity up to 3. Unfortunately, the integral of the expression that shows up that you would have to calculate, an integral of this kind from, let's say, minus infinity to some number, is something that's not known in closed form. So if you're looking for a closed-form formula for this-- X bar-- if you're looking for a closed-form formula that gives you the value of this integral as a function of X bar, you're not going to find it. So what can we do? Well, since it's a useful integral, we can just tabulate it. Calculate it once and for all, for all values of X bar up to some precision, and have that table, and use it. That's what one does. OK, but now there is a catch. Are we going to write down a table for every conceivable type of normal distribution-- that is, for every possible mean and every variance? I guess that would be a pretty long table. You don't want to do that. Fortunately, it's enough to have a table with the numerical values only for the standard normal. And once you have those, you can use them in a clever way to calculate probabilities for the more general case. So let's see how this is done. So our starting point is that someone has graciously calculated for us the values of the CDF, the cumulative distribution function, that is the probability of falling below a certain point for the standard normal and at various places. How do we read this table? The probability that X is less than, let's say, 0.63 is this number. This number, 0.7357, is the probability that the standard normal is below 0.63. So the table refers to the standard normal. But someone, let's say, gives us some other numbers and tells us we're dealing with a normal with a certain mean and a certain variance. And we want to calculate the probability that the value of that random variable is less than or equal to 3. How are we going to do it? 
Well, there's a standard trick, which is so-called standardizing a random variable. Standardizing a random variable stands for the following. You look at the random variable, and you subtract the mean. This makes it a random variable with 0 mean. And then if I divide by the standard deviation, what happens to the variance of this random variable? Dividing by a number divides the variance by sigma squared. The original variance of X was sigma squared. So when I divide by sigma, I end up with unit variance. So after I do this transformation, I get a random variable that has 0 mean and unit variance. It is also normal. Why is it normal? Because this expression is a linear function of the X that I started with. It's a linear function of a normal random variable. Therefore, it is normal. And it is a standard normal. So by taking a general normal random variable and doing this standardization, you end up with a standard normal to which you can then apply the table. Sometimes one calls this the normalized score. If you're thinking about test results, how would you interpret this number? It tells you how many standard deviations you are away from the mean. This is how much you are away from the mean. And you count it in terms of how many standard deviations it is. So this number being equal to 3 tells you that X happens to be 3 standard deviations above the mean. And I guess if you're looking at your quiz scores, very often that's the kind of number that you think about. So it's a useful quantity. But it's also useful for doing the calculation we're now going to do. So suppose that X has a mean of 2 and a variance of 16, so a standard deviation of 4. And we're going to calculate the probability of this event. This event is described in terms of this X that has ugly means and variances. But we can take this event and rewrite it as an equivalent event. X less than 3 is the same as X minus 2 being less than 3 minus 2, which is the same as this ratio being less than that ratio. So I'm subtracting the mean from both sides of the inequality and then dividing by the standard deviation. This event is the same as that event. Why do we like this better than that? We like it because this is the standardized, or normalized, version of X. We know that this is standard normal. And so we're asking the question, what's the probability that the standard normal is less than this number, which is 1/4? So that's the key property, that this is normal (0, 1). And so we can look up now with the table and ask for the probability that the standard normal random variable is less than 0.25. Where is that going to be? 0.2, 0.25, it's here. So the answer is 0.5987. So I guess this is just a drill that you could learn in high school. You didn't have to come here to learn about it. But it's a drill that's very useful, since we will be calculating normal probabilities all the time. So make sure you know how to use the table and how to massage a general normal random variable into a standard normal random variable. OK. So just one more minute to look at the big picture and take stock of what we have done so far and where we're going. Chapter 2 was this part of the picture, where we dealt with discrete random variables. And this time, today, we started talking about continuous random variables. And we introduced the density function, which is the analog of the probability mass function. We have the concepts of expectation and variance and CDF. And this kind of notation applies to both discrete and continuous cases.
They are calculated the same way in both cases except that in the continuous case, you use integrals, and in the discrete case, you use sums. So on that side, you have integrals; on this side, you have sums. On that side, you always have f's in your formulas; on this side, you always have p's in your formulas. So what's left for us to do is to look at these two concepts, joint probability mass functions and conditional mass functions, and figure out what would be the equivalent concepts on the continuous side. So we will need some notion of a joint density when we're dealing with multiple random variables. And we will also need the concept of conditional density, again for the case of continuous random variables. The intuition and the meaning of these objects is going to be exactly the same as here, only a little subtler, because densities are not probabilities. They're rates at which probabilities accumulate. So that adds a little bit of potential confusion here, which, hopefully, we will fully resolve in the next couple of sections. All right. Thank you.
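As a closing editorial check on the standardization drill worked above (X normal with mean 2 and variance 16, and we want the probability that X is less than or equal to 3), here is a small sketch that evaluates the standard normal CDF with the error function instead of a printed table.

```python
# Standard normal CDF via the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2.
from math import erf, sqrt

def standard_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 2.0, 4.0            # mean 2, variance 16
z = (3.0 - mu) / sigma          # standardized score, (3 - 2) / 4 = 0.25
print(standard_normal_cdf(z))       # about 0.5987, the value read off the table
print(standard_normal_cdf(0.63))    # about 0.7357, matching the table entry quoted in the lecture
```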
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Joint_Probability_Mass_Function_PMF_Drill_1.txt
Welcome back guys. Today we're going to work on a problem that tests your knowledge of joint PMFs. And we're also going to get some practice computing conditional expectations and conditional variances. So in this problem, we are given a set of points in the xy plane. And we're told that these points are equally likely. So there's eight of them. And each point has a probability of 1/8 of occurring. And we're also given this list of questions. And we're going to work through them together. So in part a, we are asked to find the values of x that maximize the conditional expectation of y given x. So jumping right in, this is the quantity we're interested in. And so this quantity is a function of x. You plug-in various values of x. And then this will spit out a scalar value. And that value will correspond to the conditional expectation of y conditioned on the value of x that you put in. So let's see, when x is equal to 0, for instance, let's figure out what this value is. Well, when x is equal to 0 we're living in a world, essentially, on this line. So that means that only these two points could have occurred. And in particular, y can only take on the values of 1 and 3. Now, since all these points in the unconditional universe were equally likely, in the conditional universe they will still be equally likely. So this happens with probability 1/2. And this happens with probability 1/2. And therefore, the expectation would just be 3/2 plus 1/2 which is 4/2, or 2. But a much faster way of seeing this-- and it's the strategy that I'm going to use for the rest of the problem-- is to remember that expectation acts like center of mass. So the center of mass, when these two points are equally likely, is just the midpoint, which of course is 2. So we're going to use that intuition on the other ones. So I'm skipping to x is equal to 2 because 1 and 3 are not possible. So when x is equal to 2, y can only take on the values of 1 or 2. Again, they're equally likely. So the center of mass is in the middle which happens at 1.5 or 3/2. Similarly, x is equal to 4. We're living in this conditional universe, where y can take on of these four points with probability 1/4 each. And so again, we expect the center of mass to be at 1.5 or 3/2. And this quantity is undefined otherwise. OK, so we're almost done. Now we just need to find which value of x maximizes this. Well, let's see, 2 is the biggest quantity out of all of these numbers. So the maximum is 2. And it occurs when x is equal to 0. So we come over here. And we found our answer. x is equal to 0 is the value, which maximizes the conditional expectation of y given x. So part b is very similar to part a. But there is slightly more computation involved. Because now we're dealing with the variance and not an expectation. And variance is usually a little bit tougher to compute. So we're going to start in the same manner. But I want you guys to see if you can figure out intuitively what the right value is. I'm going to do the entire computation now. And then you can compare whether your intuition matches with the real results. So variance of x conditioned on a particular value of y, this is now a function of y. For each value of y you plug in you're going to get out a scalar number. And that number represents the conditional variance of x when you condition on the value of y that you plugged in. So let's see, when y is equal to 0 we have a nice case. If y is equal to 0 we have no freedom about what x is. This is the only point that could have occurred. 
Therefore, x definitely takes on a value of 4. And there's no uncertainty left. So in other words, the variance is 0. Now, if y is equal to 1, x can take on a value of 0, a value of 2 or a value of 4. And these all have the same probability of occurring, of 1/3, And again, the reasoning behind that is that all eight points were equally likely in the unconditional universe. If you condition on y being equal to 1 these outcomes still have the same relative frequency. Namely, they're still equally likely. And since there are three of them they now have a probability of 1/3 each. So we're going to go ahead and use a formula that hopefully, you guys remember. So in particular, variance is the expectation of x squared minus the expectation of x all squared, the whole thing squared. So let's start by computing this number first. So conditioned on y is equal to 1-- so we're in this line-- the expectation of x is just 2, right? The same center-of-mass to argument. So this, we have a minus 2 squared over here. Now, x squared is only slightly more difficult. With probability 1/3, x squared will take on a value of 0. With probability 1/3, x squared will take on a value of 4. I'm just doing 2 squared. And with probability 1/3, x squared takes on a value of 4 squared or 16. So writing down when I just said, we have 0 times 1/3 which is 0. We have 2 squared, which is 4 times 1/3. And then we have 4 squared, which is 16 times 1/3. And then we have our minus 4 from before. So doing this math out, we get, let's see, 20/3 minus 12/3, which is equal to 8/3, or 8/3. So we'll come back up here and put 8/3. So I realize I'm going through this pretty quickly. Hopefully this step didn't confuse you. Essentially, when I was doing is, if you think of x squared as a new random variable, x squared, the possible values that it can take on are 0, 4, and 16 when you're conditioning on y is equal to 1. And so I was simply saying that that random variable takes on those values with equal probability. So let's move on to the next one. So if we condition on y is equal to 2 we're going to do a very similar computation. Oops, I shouldn't have erased that. OK, so we're going to use the same formula that we just used, which is the expectation of x given y is equal to 2. Sorry, x squared minus the expectation of x conditioned on y is equal to 2, all squared. So conditioned on y is equal to 2, the expectation of x is 3. Same center of mass argument. So 3 squared is 9. And then x squared can take on a value of 4. Or it can take on a value of 16. And it does so with equal probability. So we get 4/2, 4 plus 16 over 2. So this is 2 plus 8, which is 10, minus 9. That'll give us 1. So we get a 1 when y is equal to 2. And last computation and then we're done. I'm still recycling the same formula. But now we're conditioning on y is equal to 3. And then we'll be done with this problem, I promise. OK, so when y is equal to 3 x can take on the value of 0. Or it can take on the value of 4. Those two points happen with probability 1/2, 1/2. So the expectation is right in the middle which is 2. So we get a minus 4. And similarly, x squared can take on the value of 0. When x takes on the value of 0-- and that happens with probability 1/2-- similarly, x squared can take on the value of 16 when x takes on the value of 4. And that happens with probability 1/2. So we just have 0/2 plus 16/2 minus 4. And this gives us 8 minus 4, which is simply 4. So finally, after all that computation, we are done. We have the conditional variance of x given y. 
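Here is a short sketch (not from the original recitation) that recomputes both tables, the conditional expectation of y given x and the conditional variance of x given y. The eight equally likely points are reconstructed from the values read off in the discussion, so treat the exact coordinates as an assumption.

```python
# Recompute E[Y | X = x] and var(X | Y = y) from the joint PMF.
from collections import defaultdict

# Reconstructed equally likely points (an assumption based on the discussion).
points = [(0, 1), (0, 3), (2, 1), (2, 2), (4, 0), (4, 1), (4, 2), (4, 3)]

by_x = defaultdict(list)
by_y = defaultdict(list)
for x, y in points:
    by_x[x].append(y)
    by_y[y].append(x)

for x in sorted(by_x):
    ys = by_x[x]
    print("E[Y | X =", x, "] =", sum(ys) / len(ys))        # 2.0, 1.5, 1.5

for y in sorted(by_y):
    xs = by_y[y]
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    print("var(X | Y =", y, ") =", var)                    # 0, 8/3, 1, 4
```

The printed values reproduce the two tables computed on the board.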
Again, we're interested in when this value is largest. And we see that 4 is the biggest value in this column. And this value occurs when y takes on a value of 3. So our answer, over here, is y is equal to 3. All right, so now we're going to switch gears in part c and d a little bit. And we're going to be more concerned with PMFs, et cetera. So in part c, we're given a random variable called r which is defined as the minimum of x and y. So for instance, this is the 0.01. The minimum of 0 and 1 is 0. So r would have a value of 0 here. Now, we can be a little bit smarter about this. If we plot the line, y is equal to x. So that looks something like this. We see that all of the points below this line satisfy y being less or equal to x. And all the points above this line have y greater than or equal to x. So if y is less than or equal to x, you hopefully agree that here the min, or r, is equal to y. But over here, the min, r, is actually equal to x, since x is always smaller. So now we can go ahead quickly. And I'm going to write the value of r next each point using this rule. So here, r is the value of y, which is 1. Here, r is equal to 0. Here r is 1. Here r is 2. Here r is 3. Over here, r is the value of x. So r is equal to 0. And r is equal to 0 here. And so the only point we didn't handle is the one that lies on the line. But in that case it's easy. Because x is equal to 2. And y is equal to 2. So the min is simply 2. So with this information I claim we're now done. We can just write down what the PMF of r is. So in particular, r takes on a value of 0. When this point happens, this point happens, or this point happens. And those collectively have a probability of 3/8 of occurring. r can take on a value of 1 when either of these two points happen. So that happens with probability 2/8. r is equal to 2. This can happen in two ways. So we get 2/8. And r equal to 3 can happen in only one way. So we get 1/8. Quick sanity check, 3 plus 2 is 5, plus 2 is 7, plus 1 is 8. So our PMF sums to 1. And to be complete, we should sketch it. Because the problem asks us to sketch it. So we're plotting PR of r, 0, 1, 2, 3. So here we get, let's see, 1, 2, 3. For 0 we have 3/8. For 1 we have 2/8. For 2 we have 2/8. And for 3 we have 1/8. So this is our fully labeled sketch of Pr of r. And forgive me for erasing so quickly, but you guys can pause the video, presumably, if you need more time. Let's move on to part d. So in part d we're given an event named a, which is the event that x squared is greater than or equal to y. And then we're asked to find the expectation of xy in the unconditional universe. And then the expectation of x times y conditioned on a. So let's not worry about the conditioning for now. Let's just focus on the unconditional expectation of x times y. So I'm just going to erase all these r's so I don't get confused. But we're going to follow a very similar strategy, which is at each point I'm going to label what the value of w is. And we'll find the expectation of w that way. So let's see, here, we have 4 times 0. So w is equal to 0. Here we have 4 times 1. w is equal to 4. 4 times 2, w is equal to 8. 4 times 3, w is equal to 12. w is equal to 2. w is equal to 4. w is equal to 0. w is equal to 0. OK, so that was just algebra. And now, I claim again, we can just write down what the expectation of x times y is. And I'm sorry, I didn't announce my notation. I should mention that now. I was defining w to be the random variable x times y. 
And that's why I labeled the product of x times y as w over here. My apologies for not defining that random variable. So the expectation of w, well, w takes on a value of 0 when this happens, this happens, or that happens. And we know that those three points occur with probability 3/8. So we have 0 times 3/8. I'm just using the normal formula for expectation. w takes on a value of 2 with probability 1/8, because this is the only point at which it happens, so 2 times 1/8. Plus it can take on the value of 4 with probability 2/8, 4 times 2/8. And 8, with 1/8 probability. And similarly, 12 with 1/8 probability. So this is just algebra. The numerator sums up to 30. Yes, that's correct. So we have 30/8, which is equal to 15/4. So this is our first answer for part d. And now we have to do the slightly trickier one, which is the conditional expectation of x times y, or w, conditioned on a. So similar to what I did in part c, I'm going to draw the line y equals x squared. So y equals x squared is 0 here, 1 here. And at 2, it should take on a value of 4. So the curve should look something like this. This is the line y is equal to x squared. So we know all the points below this line satisfy y less than or equal to x squared. And all the points above this line have y greater than or equal to x squared. And a is the event that y is less than or equal to x squared. So we are in the conditional universe where only points below this line can happen. So that one, that one, that one, that one, that one and that one. So there are six of them. And again, in the unconditional world, all of the points were equally likely. So in the conditional world these six points are still equally likely. So they each happen with probability 1/6. So in this case, the expectation of w is simply 2 times 1/6, plus 0 times 1/6, but that's 0 so I'm not going to write it, plus 4 times 2/6, plus 8 times 1/6, plus 12 times 1/6. And again, the numerator sums to 30. But this time our denominator is 6. So this is simply 5. So we have, actually, finished the problem, because we've computed this value and this value. And so the important takeaways of this problem are, essentially, just to get you comfortable with computing things involving joint PMFs. We talked a lot about finding expectations quickly by thinking about center of mass and the geometry of the problem. We got practice computing conditional variances. And we did some derived distributions. And we'll do a lot more of those later.
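Here is the matching sketch for parts (c) and (d), under the same assumed point set: it tabulates the PMF of R = min(X, Y), then recomputes E[XY] and the conditional expectation of XY given the event A = {X squared is at least Y}.

```python
from fractions import Fraction

points = [(0, 1), (0, 3), (2, 1), (2, 2), (4, 0), (4, 1), (4, 2), (4, 3)]
p = Fraction(1, 8)

# Part (c): PMF of R = min(X, Y).
pmf_r = {}
for (x, y) in points:
    pmf_r[min(x, y)] = pmf_r.get(min(x, y), 0) + p
print(pmf_r)                                   # probabilities 3/8, 2/8, 2/8, 1/8 for r = 0, 1, 2, 3

# Part (d): E[XY] and E[XY | A], where A = {X^2 >= Y}.
e_w = sum(p * x * y for (x, y) in points)
in_a = [(x, y) for (x, y) in points if x * x >= y]
e_w_given_a = Fraction(sum(x * y for (x, y) in in_a), len(in_a))
print(e_w, e_w_given_a)                        # 15/4 and 5
```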
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Sampling_People_on_Buses.txt
Hi. In this problem, we're dealing with buses of students going to a job convention. And in the problem, we'll be exercising our knowledge of PMFs-- probability mass functions. So we'll get a couple of opportunities to write out some PMFs, and also calculating expectations or expected values. And also, importantly, we'll actually be exercising our intuition to help us not just rely on numbers, but also to just have a sense of what the answers to some probability questions should be. So the problem specifically deals with four buses of students. So we have buses, and in each one carries a different number of students. So the first one carries 40 students, the second one 33, the third one has 25, and the last one has 50 students for a total of 148 students. And because these students are smart, and they like probability, they are interested in a couple questions. So suppose that one of these 148 students is chosen randomly, and so we'll assume that what that means is that each one has the same probability of being chosen. So they're chosen uniformly at random. And let's assign a couple of random variables. So we'll say x corresponds to the number of students in the bus of the selected student. OK, so one of these 148 students is selected uniformly at random, and we'll let x correspond to the number of students in that student's bus. So if a student from this bus was chosen, then x would be 25, for example. OK, and then let's come up with another random variable, y, which is almost the same thing. Except instead of now selecting a random student, we'll select a random bus. Or equivalently, we'll select a random bus driver. So each bus has one driver, and instead of selecting one of the 148 students at random, we'll select one of the four bus drivers also uniformly at random. And we'll say the number of students in that driver's bus will be y. So for example, if this bus driver was selected, then y would be 33. OK, so the main problem that we're trying to answer is what do you expect the expectation-- which one of these random variables do you expect to have the higher expectation or the higher expected value? So, would you expect x to be higher on average, or y to be higher? And what would be the intuition for this? So obviously, we can actually write out the PMFs for x and y. These are just discrete random variables. And we can actually calculate out what the expectation is. But it's also useful to exercise your intuition, and your sense of what the answer should be. So it might not be immediately clear which one would be higher, or you might even say that maybe it doesn't make a difference. They're actually the same. But a useful way to approach some of these questions is to try to take things to the extreme and see how that plays out. So let's take the simpler example and take it to the extreme and say, suppose a set of four buses carrying these number of students. We have only two buses-- one bus that has only 1 student, and we have another bus that has 1,000 students. OK. And suppose we ask the same question. Well, now if you look at it, there's a total of 1,001 students now. If you select one of the students at random, it's overwhelmingly more likely that that student will be one of the 1,000 students on this huge bus. It's very unlikely that you'll get lucky and select the one student who is by himself. And so because of that, you have a very high chance of selecting the bus with the high number of students. 
And so you would expect x, the number of students, to be high-- to be almost 1,000 in the expectation. But on the other hand, if you selected the driver at random, then you have a 50/50 chance of selecting this one or that one. And so you would expect the expectation there to be roughly 500 or so. And so you can see that if you take this to the extreme, then it becomes more clear what the answer would be. And the argument is that the expectation of x should be higher than the expectation of y, and the reason here is that because you select the student at random, you're more likely to select a student who is in a large bus, because that bus just has more students to select from. And because of that, you're more biased in favor of selecting large buses, and therefore, that makes x higher in expectation. OK, so that's the intuition behind this problem. And now, as I actually go through some of the more mechanics and write out what the PMFs and the calculation for the expectation would be to verify that our intuition is actually correct. OK, so we have two random variables that are defined. Now let's just write out what their PMFs are. So the PMF-- we write it as little P of capital X and little x. So the random variable-- what we do is we say the probability that it will take on a certain value, right? So what is the probability that x will be 40? Well, x will be 40 if a student from this bus was selected. And what's the probability that a student from this bus is selected? That probability is 40/148, because there's 148 students, 40 of whom are sitting in this bus. And similarly, x will be 33 with probability 33/148, and x will be 25 with probability 25/148. And x will be 50 with probability 50/148. And it will be 0 otherwise. OK, so there is our PMF for x, and we can do the same thing for y. The PMF of y-- again, we say what is the probability that y will take on certain values? Well, y can take on the same values as x can, because we're still dealing with the number of students in each bus. So y can be 40. But the probability that y is 40, because we're selecting the driver at random now, is 1/4, right? Because there's a 1/4 chance that we'll pick this driver. And the probability that y will be 33 will also be 1/4, and the same thing for 25 and 50. And it's 0 otherwise. OK, so those are the PMFs for our two random variables, x and y. And we can also draw out what the PMFs look like. So if this is 25, 30, 35, 40, 45, and 50, then the probability that it's 25 is 25/148. So we can draw a mass right there. For 33, it's a little higher, because it's 33/148 instead of 25. For 40, it's even higher still. It's 40/148. And for 50, it is still higher, because it is 50/148. And so you can see that the PMF is more heavily favored towards the larger values. We can do the same thing for y, and we'll notice that there's a difference in how these distributions look. So if we do the same thing, the difference now is that all four of these masses will have the same height. Each one will have height 1/4, whereas this one for x, it's more heavily biased in favor of the larger ones. And so because of that, we can actually now calculate what the expectations are and figure out whether or not our intuition was correct. OK, so now let's actually calculate out what these expectations are. So as you recall, the expectation is calculated out as a weighted sum. So for each possible value of x, you take that value and you weight it by the probability of the random variable taking on that value. 
So in this case, it would be 40 times 40/148, plus 33 times 33/148, plus 25 times 25/148, plus 50 times 50/148. And if you do out this calculation, what you'll get is that it is around 39. Roughly 39. And now we can do the same thing for y. But for y, it's different, because now instead of weighting it by these probabilities, we'll weight it by these probabilities. So each one has the same weight of 1/4. So now we get 40 times 1/4, plus 33 times 1/4, plus 25 times 1/4, plus 50 times 1/4. And if you do out this arithmetic, what you get is that this expectation is 37. And so what we get is that, in fact, after we do out the calculations, the expected value of x is indeed greater than the expected value of y, which confirms our intuition. OK, so this problem, to summarize-- we've reviewed how to write out a PMF and also how to calculate expectations. But also, we've got a chance to figure out some intuition behind some of these problems. And so sometimes it's helpful to take simpler versions of things and take them to the extreme to figure out intuitively whether or not the answer makes sense. It's useful just to verify whether the numerical answer that you get in the end is correct. Does this actually make sense? It's a useful guide for when you're solving these problems. OK, so we'll see you next time.
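If you want to confirm these two weighted sums in code, here is a minimal Python sketch using the four bus sizes from the problem; the only difference between the two expectations is the set of weights.

```python
from fractions import Fraction

bus_sizes = [40, 33, 25, 50]
total = sum(bus_sizes)                               # 148 students in all

# Random student: a bus of size n is picked with probability n/148.
e_x = sum(Fraction(n, total) * n for n in bus_sizes)
# Random driver: each of the four buses is picked with probability 1/4.
e_y = sum(Fraction(1, len(bus_sizes)) * n for n in bus_sizes)

print(float(e_x), float(e_y))                        # roughly 39.28 versus 37.0
```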
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Bernoulli_Process_Practice.txt
Hi everyone. Today I'm going to talk about Bernoulli process practice number one. In this problem, you are visiting a rain forest. But unfortunately you have run out of insect repellent. As a result, the probability of you getting mosquito bites is really high. At each second, the probability that a mosquito will land on your neck is 0.5. If a mosquito lands on your neck, the probability that it will bite you is 0.2. And the probability that it will never bother you is 0.8. All of this happens independently among all mosquitoes. For part A of the problem, we're interested in finding the expected value of the time between successive mosquito bites and the variance of the time between successive mosquito bites. From the problem statement we know that the probability distributions of getting mosquito bites at different times are identically distributed and independent. Therefore, the mosquito bites occur as a Bernoulli process with parameter p, where p represents the probability of getting a mosquito bite at each second. And p can be calculated as the probability that a mosquito lands on your neck at each second multiplied by the probability that a mosquito will bite you, given that it has landed on your neck. And this is equal to 0.5 times 0.2, which is equal to 0.1. Next let us define x as the time between successive mosquito bites. Because of the memory-less property of the Bernoulli process, which means the probability of getting mosquito bites at different times are independent, x is equivalent to the time until the next mosquito bite. And x is a geometrical random variable whose PMF is like the following. For all x, let's say equal to 0, the probabilities are equal to 0. For x equal to 1, the probability that it takes 1 second to the next mosquito bite is simply equal to p. And for x equal to 2, the probability that it takes 2 seconds until the next mosquito bite is equal to 1 minus p times p. And for x equal to 3, the probability that it takes 3 seconds until the next mosquito bite is equal to 1 minus p to the power of 2 times p. Similarly, for x equal to k, the probability that it takes k seconds until the next mosquito bite is equal to 1 minus p to the power of k minus 1 times p. Therefore the expected value of x is equal to 1 over p, which is equal to 1 over 0.1, which is equal to 10. And the variance of x is equal to 1 minus p over p squared, which is equal to 1 minus 0.1 over 0.1 squared, which is equal to 90. For part B of the problem, we're considering another type of bug. Similar to the case as the mosquitoes, here at each second the probability that a tick will land on your neck is equal to 0.1. And if a tick lands on your neck, the probability that it will bite you is equal to 0.7. And the probability that it will never bother you is equal to 0.3. And all this happens independently among all ticks and all mosquitoes. So similar to the case as part A, where mosquito bites occurs as a Bernoulli process with parameter p equal to 0.1, here the tick bites also across a Bernoulli process with parameter q equal to 0.1 times 0.7, which is equal to 0.07. And q is the probability of getting a tick bite at each second. Therefore, the bug bites occurs as a merged process from the mosquito bites and the tick bites. And let r represent the parameter for the bug bites. So here r is equal to the probability of getting either a mosquito bite or a tick bite. And this is equivalent to 1 minus the probability of getting no mosquito bite and no tick bite. 
Because the mosquito bites and the tick bites happen independently, this can be written as 1 minus the probability of no mosquito bite times the probability of no tick bite at each second. And this is equal to 1 minus (1 minus p) times (1 minus q), which is p plus q minus pq, which is equal to 0.1 plus 0.07 minus 0.1 times 0.07, which is equal to 0.163. Next, let us define y as the time between successive bug bites. So similar to x in part A, here y is a geometric random variable with parameter r. And therefore the expected value of y is equal to 1 over r, which is equal to 1 over 0.163. That is approximately 6.135. And the variance of y is equal to (1 minus r) over r squared, which is equal to 1 minus 0.163 over 0.163 squared. And this is approximately 31.503. So this gives us the expected value of the time between successive bug bites and the variance of the time between successive bug bites. And this concludes today's practice on the Bernoulli process.
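Here is a short Python sketch that rechecks the numbers from both parts: the mean and variance of the time between mosquito bites in part A, and the merged bug-bite process in part B.

```python
p = 0.5 * 0.2                # probability of a mosquito bite in any one second
q = 0.1 * 0.7                # probability of a tick bite in any one second

# Part A: the time between mosquito bites is geometric with parameter p.
print(1 / p, (1 - p) / p**2)             # 10.0 and 90.0

# Part B: the merged process of bug bites is Bernoulli with parameter r.
r = 1 - (1 - p) * (1 - q)                # same as p + q - p*q = 0.163
print(r, 1 / r, (1 - r) / r**2)          # 0.163, about 6.135, about 31.503
```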
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
3_Independence.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let us start. So as always, we're to have a quick review of what we discussed last time. And then today we're going to introduce just one new concept, the notion of independence of two events. And we will play with that concept. So what did we talk about last time? The idea is that we have an experiment, and the experiment has a sample space omega. And then somebody comes and tells us you know the outcome of the experiments happens to lie inside this particular event B. Given this information, it kind of changes what we know about the situation. It tells us that the outcome is going to be somewhere inside here. So this is essentially our new sample space. And now we need to we reassign probabilities to the various possible outcomes, because, for example, these outcomes, even if they had positive probability beforehand, now that we're told that B occurred, those outcomes out there are going to have zero probability. So we need to revise our probabilities. The new probabilities are called conditional probabilities, and they're defined this way. The conditional probability that A occurs given that we're told that B occurred is calculated by this formula, which tells us the following-- out of the total probability that was initially assigned to the event B, what fraction of that probability is assigned to outcomes that also make A to happen? So out of the total probability assigned to B, we see what fraction of that total probability is assigned to those elements here that will also make A happen. Conditional probabilities are left undefined if the denominator here is zero. An easy consequence of the definition is if we bring that term to the other side, then we can find the probability of two things happening by taking the probability that the first thing happens, and then, given that the first thing happened, the conditional probability that the second one happens. Then we saw last time that we can divide and conquer in calculating probabilities of mildly complicated events by breaking it down into different scenarios. So event B can happen in two ways. It can happen either together with A, which is this probability, or it can happen together with A complement, which is this probability. So basically what we're saying that the total probability of B is the probability of this, which is A intersection B, plus the probability of that, which is A complement intersection B. So these two facts here, multiplication rule and the total probability theorem, are basic tools that one uses to break down probability calculations into a simpler parts. So we find probabilities of two things happening by looking at each one at a time. And this is what we do to break up a situation with two different possible scenarios. Then we also have the Bayes rule, which does the following. Given a model that has conditional probabilities of this kind, the Bayes rule allows us to calculate conditional probabilities in which the events appear in different order. You can think of these probabilities as describing a causal model of a certain situation, whereas these are the probabilities that you get after you do some inference based on the information that you have available. 
Now the Bayes rule, we derived it, and it's a trivial half-line calculation. But it underlies lots and lots of useful things in the real world. We had the radar example last time. You can think of more complicated situations in which there's a bunch or lots of different hypotheses about the environment. Given any particular setting in the environment, you have a measuring device that can produce many different outcomes. And you observe the final outcome out of your measuring device, and you're trying to guess which particular branch occurred. That is, you're trying to guess the state of the world based on a particular measurement. That's what inference is all about. So real world problems only differ from the simple example that we saw last time in that this kind of tree is a little more complicated. You might have infinitely many possible outcomes here and so on. So setting up the model may be more elaborate, but the basic calculation that's done based on the Bayes rule is essentially the same as the one that we saw. Now something that we discuss is that sometimes we use conditional probabilities to describe models, and let's do this by looking at a model where we toss a coin three times. And how do we use conditional probabilities to describe the situation? So we have one experiment. But that one experiment consists of three consecutive coin tosses. So the possible outcomes, our sample space, consists of strings of length 3 that tell us whether we had heads, tails, and in what sequence. So three heads in a row is one particular outcome. So what is the meaning of those labels in front of the branches? So this P here, of course, stands for the probability that the first toss resulted in heads. And let me use this notation to denote that the first was heads. I put an H in toss one. How about the meaning of this probability here? Well the meaning of this probability is a conditional one. It's the conditional probability that the second toss resulted in heads, given that the first one resulted in heads. And similarly this label here corresponds to the probability that the third toss resulted in heads, given that the first one and the second one resulted in heads. So in this particular model that I wrote down here, those probabilities, P, of obtaining heads remain the same no matter what happened in the previous toss. For example, even if the first toss was tails, we still have the same probability, P, that the second one is heads, given that the first one was tails. So we're assuming that no matter what happened in the first toss, the second toss will still have a conditional probability equal to P. So that conditional probability does not depend on what happened in the first toss. And we will see that this is a very special situation, and that's really the concept of independence that we are going to introduce shortly. But before we get to independence, let's practice once more the three skills that we covered last time in this example. So first skill was multiplication rule. How do you find the probability of several things happening? That is the probability that we have tails followed by heads followed by tails. So here we're talking about this particular outcome here, tails followed by heads followed by tails. And the way we calculate such a probability is by multiplying conditional probabilities along the path that takes us to this outcome. And so these conditional probabilities are recorded here. So it's going to be (1 minus P) times P times (1 minus P). 
So this is the multiplication rule. Second question is how do we find the probability of a mildly complicated event? So the event of interest here that I wrote down is the probability that in the three tosses, we had a total of one head. Exactly one head. This is an event that can happen in multiple ways. It happens here. It happens here. And it also happens here. So we want to find the total probability of the event consisting of these three outcomes. What do we do? We just add the probabilities of each individual outcome. How do we find the probability of an individual outcome? Well, that's what we just did. Now notice that this outcome has probability P times (1 minus P) squared. That one should not be there. So where is it? Ah. It's this one. OK, so the probability of this outcome is (1 minus P times P) times (1 minus P), the same probability. And finally, this one is again (1 minus P) squared times P. So this event of one head can happen in three ways. And each one of those three ways has the same probability of occurring. And this is the answer. And finally, the last thing that we learned how to do is to use the Bayes rule to calculate and make an inference. So somebody tells you that there was exactly one head in your three tosses. What is the probability that the first toss resulted in heads? OK, I guess you can guess the answer here if I tell you that there were three tosses. One of them was heads. Where was that head in the first, the second, or the third? Well, by symmetry, they should all be equally likely. So there should be probably just 1/3 that that head occurred in the first toss. Let's check our intuition using the definitions. So the definition of conditional probability tells us the conditional probability is the probability of both things happening. First toss is heads, and we have exactly one head divided by the probability of one head. What is the probability that the first toss is heads, and we have exactly one head? This is the same as the event heads, tails, tails. If I tell you that the first is heads, and there's only one head, it means that the others are tails. So this is the probability of heads, tails, tails divided by the probability of one head. And we know all of these quantities probability of heads, tails, tails is P times (1 minus P) squared. Probability of one head is 3 times P times (1 minus P) squared. So the final answer is 1/3, which is what you should have a guessed on intuitive grounds. Very good. So we got our practice on the material that we did cover last time. Again, think. There's basically three basic skills that we are practicing and exercising here. In the problems, quizzes, and in the real life, you may have to apply those three skills in somewhat more complicated settings, but in the end that's what it boils down to usually. Now let's focus on this special feature of this particular model that I discussed a little earlier. Think of the event heads in the second toss. Initially, the probability of heads in the second toss, you know, that it's P, the probability of success of your coin. If I tell you that the first toss resulted in heads, what's the probability that the second toss is heads? It's again P. If I tell you that the first toss was tails, what's the probability that the second toss is heads? It's again P. So whether I tell you the result of the first toss, or I don't tell you, it doesn't make any difference to you. 
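If you want a brute-force check of the three-toss calculations above (the multiplication rule, the probability of exactly one head, and the Bayes-rule answer of 1/3), here is a minimal Python sketch that enumerates all eight outcomes for an arbitrary bias p.

```python
from itertools import product

def analyze(p):
    outcomes = list(product("HT", repeat=3))
    def prob(seq):                       # multiplication rule along one branch of the tree
        result = 1.0
        for toss in seq:
            result *= p if toss == "H" else (1 - p)
        return result
    one_head = [seq for seq in outcomes if seq.count("H") == 1]
    p_one_head = sum(prob(seq) for seq in one_head)              # total probability theorem
    p_first_heads = prob(("H", "T", "T")) / p_one_head           # Bayes rule
    return p_one_head, p_first_heads

print(analyze(0.3))   # (3 * 0.3 * 0.7**2, 0.3333...); the 1/3 comes out for any value of p
```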
You would always say the probability of heads in the second toss is going to P, no matter what happened in the first toss. This is a special situation to which we're going to give a name, and we're going to call that property independence. Basically independence between two things stands for the fact that the first thing, whether it occurred or not, doesn't give you any information, does not cause you to change your beliefs about the second event. This is the intuition. Let's try to translate this into mathematics. We have two events, and we're going to say that they're independent if your initial beliefs about B are not going to change if I tell you that A occurred. So you believe something how likely B is. Then somebody comes and tells you, you know, A has happened. Are you going to change your beliefs? No, I'm not going to change them. Whenever you are in such a situation, then you say that the two events are independent. Intuitively, the fact that A occurred does not convey any information to you about the likelihood of event B. The information that A provides is not so useful, is not relevant. A has to do with something else. It's not useful for your guessing whether B is going to occur or not. So we can take this as a first attempt into a definition of independence. Now remember that we have this property, the probability of two things happening is the probability of the first times the conditional probability of the second. If we have independence, this conditional probability is the same as the unconditional probability. So if we have independence according to that definition, we get this property that you can find the probability of two things happening by just multiplying their individual probabilities. Probability of heads in the first toss is 1/2. Probability of heads in the second toss is 1/2. Probability of heads heads is 1/4. That's what happens if your two tosses are independent of each other. So this property here is a consequence of this definition, but it's actually nicer, better, simpler, cleaner, more beautiful to take this as our definition instead of that one. Are the two definitions equivalent? Well, they're are almost the same, except for one thing. Conditional probabilities are only defined if you condition on an event that has positive probability. So this definition would be limited to cases where event A has positive probability, whereas this definition is something that you can write down always. We will say that two events are independent if and only if their probability of happening simultaneously is equal to the product of their two individual probabilities. And in particular, we can have events of zero probability. There's nothing wrong with that. If A has 0 probability, then A intersection B will also have zero probability, because it's an even smaller event. And so we're going to get zero is equal to zero. A corollary of what I just said, if an event A has zero probability, it's actually independent of any other event in our model, because we're going to get zero is equal to zero. And the definition is going to be satisfied. This is a little bit harder to reconcile with the intuition we have about independence, but then again, it's part of the mathematical definition. 
So what I want you to retain is this notion that the independence is something that you can check formally using this definition, but also you can check intuitively by if, in some cases, you can reason that whatever happens and determines whether A is going to occur or not, has nothing absolutely to do with whatever happens and determines whether B is going to occur or not. So if I'm doing a science experiment in this room, and it gets hit by some noise that's causes randomness. And then five years later, somebody somewhere else does the same science experiment somewhere else, it gets hit by other noise, you would usually say that these experiments are independent. So what events happen in one experiment are not going to change your beliefs about what might be happening in the other, because the sources of noise in these two experiments are completely unrelated. They have nothing to do with each other. So if I flip a coin here today, and I flip a coin in my office tomorrow, one shouldn't affect the other. So the events that I get from these should be independent. So that's usually how independence arises. By having distinct physical phenomena that do not interact. Sometimes you also get independence even though there is a physical interaction, but you just happen to have a numerical accident. A and B might be physically related very tightly, but a numerical accident happens and you get equality here, that's another case where we do get independence. Now suppose that we have two events that are laid out like this. Are these two events independent or not? The picture kind of tells you that one is separate from the other. But separate has nothing to do with independent. In fact, these two events are as dependent as Siamese twins. Why is that? If I tell you that A occurred, then you are certain that B did not occur. So information about the occurrence of A definitely affects your beliefs about the possible occurrence or non-occurrence of B. When the picture is like that, knowing that A occurred will change drastically my beliefs about B, because now I suddenly become certain that B did not occur. So a picture like this is a case actually of extreme dependence. So don't confuse independence with disjointness. They're very different types of properties. AUDIENCE: Question. PROFESSOR: Yes? AUDIENCE: So I understand the explanation, but the probability of A intersect B [INAUDIBLE] to zero, because they're disjoint. PROFESSOR: Yes. AUDIENCE: But then the product of probability A and probability B, one of them is going to be 1. [INAUDIBLE] PROFESSOR: No, suppose that the probabilities are 1/3, 1/4, and the rest is out there. You check the definition of independence. Probability of A intersection B is zero. Probability of A times the probability of B is 1/12. The two are not equal. Therefore we do not have independence. AUDIENCE: Right. So what's wrong with the intuition of the probability of A being 1, and the other one being 0? [INAUDIBLE]. PROFESSOR: No. The probability of A given B is equal to 0. Probability of A is equal to 1/3. So again, these two are different. So we had some initial beliefs about A, but as soon as we are told that B occurred, our beliefs about A changed. And so since our beliefs changed, that means that B conveys information about A. AUDIENCE: So can you not draw independent [INAUDIBLE] on a Venn diagram? PROFESSOR: I can't hear you. AUDIENCE: Can you draw independence on a Venn diagram? PROFESSOR: No, the Venn diagram is never enough to decide independence. 
So the typical picture in which you're going to have independence would be one event this way, and another event this way. You need to take the probability of this times the probability of that, and check that, numerically, it's equal to the probability of this intersection. So it's more than a Venn diagram. Numbers need to come out right. Now we did say some time ago that conditional probabilities are just like ordinary probabilities, and whatever we do in probability theory can also be done in conditional universes. Talking about conditional probabilities. So since we have a notion of independence, then there should be also a notion of conditional independence. So independence was defined by the probability that A intersection B is equal to the probability of A times the probability of B. What would be a reasonable definition of conditional independence? Conditional independence would mean that this same property could be true, but in a conditional universe where we are told that the certain event happens. So if we're told that the event C has happened, then were transported in a conditional universe where the only thing that matters are conditional probabilities. And this is just the same plain, previous definition of independence, but applied in a conditional universe. So this is the definition of conditional independence. So it's independence, but with reference to the conditional probabilities. And intuitively it has, again, the same meaning, that in the conditional world, if I tell you that A occurred, then that doesn't change your beliefs about B. So suppose you had a picture like this. And somebody told you that events A and B are independent unconditionally. Then somebody comes and tells you that event C actually has occurred, so we now live in this new universe. In this new universe, is the independence of A and B going to be preserved or not? Are A and B independent in this new universe? The answer is no, because in the new universe, whatever is left of event A is this piece. Whatever is left of event B is this piece. And these two pieces are disjoint. So we are back in a situation of this kind. So in the conditional universe, A and B are disjoint. And therefore, generically, they're not going to be independent. What's the moral of this example? Having independence in the original model does not imply independence in a conditional model. The opposite is also possible. And let's illustrate by another example. So I have two coins, and both of them are badly biased. One coin is much biased in favor of heads. The other coin is much biased in favor of tails. So the probabilities being 90%. Let's consider independent flips of coin A. This is the relevant model. This is a model of two independent flips of the first coin. There's going to be two flips, and each one has probability 0.9 of being heads. So that's a model that describes coin A. You can think of this as a conditional model which is a model of the coin flips conditioned on the fact that they have chosen coin A. Alternatively we could be dealing with coin B In a conditional world where we chose coin B and flip it twice, this is the relevant model. The probability of two heads, for example, is the probability of heads the first time, heads the second time, and each one is 0.1. Now I'm building this into a bigger experiment in which I first start by choosing one of the two coins at random. So I have these two coins. I blindly pick one of them. And then I start flipping them. 
So the question now is, are the coin flips, or the coin tosses, are they independent of each other? If we just stay inside this sub-model here, are the coin flips independent? They are independent, because the probability of heads in the second toss is the same, 0.9, no matter what happened in the first toss. So the conditional probabilities of what happens in the second toss are not affected by the outcome of the first toss. So the second toss and the first toss are independent. So here we're just dealing with plain, independent coin flips. Similarity the coin flips within this sub-model are also independent. Now the question is, if we look at the big model as just one probability model, instead of looking at the conditional sub-models, are the coin flips independent of each other? Does the outcome of a few coin flips give you information about subsequent coin flips? Well if I observe ten heads in a row-- So instead of two coin flips, now let's think of doing more of them so that the tree gets expanded. So let's start with this. I don't know which coin it is. What's the probability that the 11th coin toss is going to be heads? There's complete symmetry here, so the answer could not be anything other than 1/2. So let's justify it, why is it 1/2? Well, the probability that the 11th toss is heads, how can that outcome happen? It can happen in two ways. You can choose coin A, which happens with probability 1/2. And having chosen coin A, there's probability 0.9 that it results in that you get heads in the 11th toss. Or you can choose coin B. And if it's coin B when you flip it, there's probably 0.1 that you have heads. So the final answer is 1/2. So each one of the coins is biased, but they're biased in different ways. If I don't know which coin it is, their two biases kind of cancel out, and the probability of obtaining heads is just in the middle, then it's 1/2. Now if someone tells you that the first ten tosses were heads, is that going to change your beliefs about the 11th toss? Here's how a reasonable person would think about it. If it's coin B the probability of obtaining 10 heads in a row is negligible. It's going to be 0.1 to the 10th. If it's coin A. The probability of 10 heads in a row is a more reasonable number. It's 0.9 to the 10th. So this event is a lot more likely to occur with coin A, rather than coin B. The plausible explanation of having seen ten heads in a row is that I actually chose coin A. When you see ten heads in a row, you are pretty certain that it's coin A that we're dealing with. And once you're pretty certain that it's coin A that we're dealing with, what's the probability that the next toss is heads? It's going to be 0.9. So essentially here I'm doing an inference calculation. Given this information, I'm making an inference about which coin I'm dealing with. I become pretty certain that it's coin A, and given that it's coin A, this probability is going to be 0.9. And I'm putting an approximate sign here, because the inference that I did is approximate. I'm pretty certain it's coin A. I'm not 100% certain that it's coin A. But in any case what happens here is that the unconditional probability is different from the conditional probability. This information here makes me change my beliefs about the 11th toss. And this means that the 11th toss is dependent on the previous tosses. So the coin tosses have now become dependent. What is the physical link that causes this dependence? Well, the physical link is the choice of the coin. 
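Here is a small Python sketch of that inference step done exactly, rather than with the "pretty certain" approximation: it applies the Bayes rule with the two coin biases 0.9 and 0.1 and a fifty-fifty prior on which coin was picked.

```python
p_a, p_b = 0.9, 0.1        # heads probabilities of coin A and coin B
prior_a = 0.5              # each coin is equally likely to be the one we picked

# Posterior probability that we are holding coin A, given ten heads in a row.
post_a = prior_a * p_a**10 / (prior_a * p_a**10 + (1 - prior_a) * p_b**10)

p_heads_next = prior_a * p_a + (1 - prior_a) * p_b            # before any observations: 0.5
p_heads_next_given_10 = post_a * p_a + (1 - post_a) * p_b     # after ten heads: about 0.9
print(post_a, p_heads_next, p_heads_next_given_10)
```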
By choosing a particular coin, I'm introducing a pattern in the future coin tosses. And that pattern is what causes dependence. OK, so I've been playing a little bit too loose with the language here, because we defined the concept of independence of two events. But here I have been referring to independent coin tosses, where I'm thinking about many coin tosses, like 10 or 11 of them. So to be proper, I should have defined for you also the notion of independence of multiple events, not just two. We don't want to just say coin toss one is independent from coin toss two. We want to be able to say something like, these 10 then coin tosses are all independent of each other. Intuitively what that means should be the same thing-- that information about some of the coin tosses doesn't change your beliefs about the remaining coin tosses. How do we translate that into a mathematical definition? Well, an ugly attempt would be to impose requirements such as this. Think of A1 being the event that the first flip was heads. A2 is the event of that the second flip was heads. A3, the third flip, was heads, and so on. Here is an event whose occurrence is not determined by the first three coin flips. And here's an event whose occurrence or not is determined by the fifth and sixth coin flip. If we think physically that all those coin flips have nothing to do with each other, information about the fifth and sixth coin flip are not going to change what we expect from the first three. So the probability of this event, the conditional probability, should be the same as the unconditional probability. And we would like a relation of this kind to be true, no matter what kind of formula you write down, as long as the events that show up here are different from the events that show up there. OK. That's sort of an ugly definition. The mathematical definition that actually does the job, and leads to all the formulas of this kind, is the following. We're going to say that the collection of events are independent if we can find the probability of their joint occurrence by just multiplying probabilities. And that will be true even if you look at sub-collections of these events. Let's make that more precise. If we have three events, the definition tells us that the three events are independent if the following are true. Probability A1 and A2 and A3, you can calculate this probability by multiplying individual probabilities. But the same is true even if you take fewer events. Just a few indices out of the indices that we have available. So we also require P(A1 intersection A2) is P(A1) times P(A2). And similarly for the other possibilities of choosing the indices. OK, so independence, mathematical definition, requires that calculating probabilities of any intersection of the events we have in our hands, that calculation can be done by just multiplying individual probabilities. And this has to apply to the case where we consider all of the events in our hands or just sub-collections of those events. Now these relations just by themselves are called pairwise independence. So this relation, for example, tells us that A1 is independent from A2. This tells us that A2 is independent from A3. This will tell us that A1 is independent from A3. But independence of all the events together actually requires a little more. One more equality that has to do with all three events being considered at the same time. And this extra equality is not redundant. It actually does make a difference. 
Independence and pairwise independence are different things. So let's illustrate the situation with an example. Suppose we have two coin flips. The coin tosses are independent, so the bias is 1/2, so all possible outcomes have a probability of 1/2 times 1/2, which is 1/4. And let's consider now a bunch of different events. One event is that the first toss is heads. This is this blue set here. Another event is the second toss is heads. And this is this black event here. OK. Are these two events independent? If you check it mathematically, yes. Probability of A is probability of B is 1/2. Probability of A times probability of B is 1/4, which is the same as the probability of A intersection B, which is this set. So we have just checked mathematically that A and B are independent. Now lets consider a third event which is that the first and second toss give the same result. I'll use a different color. First and second toss to give the same result. This is the event that we obtain heads, heads or tails, tails. So this is the probability of C. What's the probability of C? Well, C is made up of two outcomes, each one of which has probability 1/4, so the probability of C is 1/2. What is the probability of C intersection A? C intersection A is just this one outcome, and has probability 1/4. What's the probability of A intersection B intersection C? The three events intersect just this outcome, so this probability is also 1/4. OK. What's the probability of C given A and B? If A has occurred, and B has occurred, you are certain that this outcome here happened. If the first toss is H and the second toss is H, then you're certain of the first and second toss gave the same result. So the conditional probability of C given A and B is equal to 1. So do we have independence in this example? We don't. C, that we obtain the same result in the first and the second toss, has probability 1/2. Half of the possible outcomes give us two coin flips with the same result-- heads, heads or tails, tails. So the probability of C is 1/2. But if I tell you that the events A and B both occurred, then you're certain that C occurred. If I tell you that we had heads and heads, then you're certain the outcomes were the same. So the conditional probability is different from the unconditional probability. So by combining these two relations together, we get that the three events are not independent. But are they pairwise independent? Is A independent from B? Yes, because probability of A times probability of B is 1/4, which is probability of A intersection B. Is C independent from A? Well, the probability of C and A is 1/4. The probability of C is 1/2. The probability of A is 1/2. So it checks. 1/4 is equal to 1/2 and 1/2, so event C and event A are independent. Knowing that the first toss was heads does not change your beliefs about whether the two tosses are going to have the same outcome or not. Knowing that the first was heads, well, the second is equally likely to be heads or tails. So event C has just the same probability, again, 1/2, to occur. To put it the opposite way, if I tell you that the two results were the same-- so it's either heads, heads or tails, tails-- what does that tell you about the first toss? Is it heads, or is it tails? Well, it doesn't tell you anything. It could be either over the two, so the probability of heads in the first toss is equal to 1/2, and telling you C occurred does not change anything. 
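To see all four of these checks in one place, here is a minimal Python sketch that enumerates the four equally likely outcomes of the two fair tosses and tests each independence condition; the three pairwise conditions hold, but the condition involving all three events fails.

```python
from fractions import Fraction
from itertools import product

outcomes = set(product("HT", repeat=2))          # four equally likely outcomes

def P(event):
    return Fraction(len(event), len(outcomes))

A = {o for o in outcomes if o[0] == "H"}         # first toss is heads
B = {o for o in outcomes if o[1] == "H"}         # second toss is heads
C = {o for o in outcomes if o[0] == o[1]}        # the two tosses give the same result

print(P(A & B) == P(A) * P(B))                   # True
print(P(A & C) == P(A) * P(C))                   # True
print(P(B & C) == P(B) * P(C))                   # True
print(P(A & B & C) == P(A) * P(B) * P(C))        # False: pairwise but not jointly independent
```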
So this is an example that illustrates the case where we have three events in which we check that pairwise independence holds for any combination of two of these events. We have the probability of their intersection is equal to the product of their probabilities. On the other hand, the three events taken all together are not independent. A doesn't tell me anything useful, whether C is going to occur or not. B doesn't tell me anything useful. But if I tell you that both A and B occurred, the two of them together tell me something useful about C. Namely, they tell me that C certainly has occurred. Very good. So independence is this somewhat subtle concept. Once you grasp the intuition of what it really means, then things perhaps fall in place. But it's a concept where it's easy to get some misunderstanding. So just take some time to digest. So to lighten things up, I'm going to spend the remaining four minutes talking about the very nice, simple problem that involves conditional probabilities and the like. So here's the problem, formulated exactly as it shows up in various textbooks. And the formulation says the following. Well, consider one of those anachronistic places where they still have kings or queens, and where actually boys take precedence over girls. So if there is a boy-- if the royal family has a boy, then he will become the king even if he has an older sister who might be the queen. So we have one of those royal families. That royal family had two children, and we know that there is a king. There is a king, which means that at least one of the two children was a boy. Otherwise we wouldn't have a king. What is the probability that the king's sibling is female? OK. I guess we need to make some assumptions about genetics. Let's assume that every child is a boy or a girl with probability 1/2, and that different children, what they are is independent from what the other children were. So every childbirth is basically a coin flip. OK, so if you take that, you say, well, the king is a child. His sibling is another child. Children are independent of each other. So the probability that the sibling is a girl is 1/2. That's the naive answer. Now let's try to do it formally. Let's set up a model of the experiment. The royal family had two children, as we we're told, so there's four outcomes-- boy boy, boy girl, girl boy, and girl girl. Now, we are told that there is a king, which means what? This outcome here did not happen. It is not possible. There are three outcomes that remain possible. So this is our conditional sample space given that there is king. What are the probabilities for the original model? Well with the model that we assume that every child is a boy or a girl independently with probability 1/2, then the four outcomes would be equally likely, and they're like this. These are the original probabilities. But once we are told that this outcome did not happen, because we have a king, then we are transported to the smaller sample space. In this sample space, what's the probability that the sibling is a girl? Well the sibling is a girl in two out of the three outcomes. So the probability that the sibling is a girl is actually 2/3. So that's supposed to be the right answer. Maybe a little counter-intuitive. So you can play smart and say, oh I understand such problems better than you, here is a trick problem and here's why the answer is 2/3. But actually I'm not fully justified in saying that the answer is 2/3. 
I made lots of hidden assumptions when I put this model down, which I didn't yet state. So to reverse engineer this answer, let's actually think what's the probability model for which this would have been the right answer. And here's the probability model. The royal family-- the royal parents decided to have exactly two children. They went and had them. It turned out that at least one was a boy and became a king. Under this scenario-- that they decide to have exactly two children-- then this is the big sample space. It turned out that one was a boy. That eliminates this outcome. And then this picture is correct and this is the right answer. But there's hidden assumptions being there. How about if the royal family had followed the following strategy? We're going to have children until we get a boy, so that we get a king, and then we'll stop. OK, given they have two children, what's the probability that the sibling is a girl? It's 1. The reason that they had two children was because the first was a girl, so they had to have a second. So assumptions about reproductive practices actually need to come in, and they're going to affect the decisions. Or, if it's one of those ancient kingdoms where a king would always make sure too strangle any of his brothers, then the probability that the sibling is a girl is actually 1 again, and so on. So it means that one needs to be careful when you start with loosely worded problems to make sure exactly what it means and what assumptions you're making. All right, see you next week.
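Here is a small Python sketch of the first of those models, the one where the royal couple decides in advance to have exactly two children; the 2/3 answer comes out under that assumption, and the sketch makes the assumption explicit.

```python
from fractions import Fraction
from itertools import product

# Model: exactly two children, each independently a boy or a girl with probability 1/2,
# and we are told there is a king, so at least one child is a boy.
outcomes = list(product("BG", repeat=2))                 # BB, BG, GB, GG, equally likely
at_least_one_boy = [o for o in outcomes if "B" in o]     # otherwise there would be no king
sibling_is_girl = [o for o in at_least_one_boy if "G" in o]

print(Fraction(len(sibling_is_girl), len(at_least_one_boy)))   # 2/3 under this model only
```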
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
23_Classical_Statistical_Inference_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So for the last three lectures we're going to talk about classical statistics, the way statistics can be done if you don't want to assume a prior distribution on the unknown parameters. Today we're going to focus, mostly, on the estimation side and leave hypothesis testing for the next two lectures. So where there is one generic method that one can use to carry out parameter estimation, that's the maximum likelihood method. We're going to define what it is. Then we will look at the most common estimation problem there is, which is to estimate the mean of a given distribution. And we're going to talk about confidence intervals, which refers to providing an interval around your estimates, which has some properties of the kind that the parameter is highly likely to be inside that interval, but we will be careful about how to interpret that particular statement. Ok. So the big framework first. The picture is almost the same as the one that we had in the case of Bayesian statistics. We have some unknown parameter. And we have a measuring device. There is some noise, some randomness. And we get an observation, X, whose distribution depends on the value of the parameter. However, the big change from the Bayesian setting is that here, this parameter is just a number. It's not modeled as a random variable. It does not have a probability distribution. There's nothing random about it. It's a constant. It just happens that we don't know what that constant is. And in particular, this probability distribution here, the distribution of X, depends on Theta. But this is not a conditional distribution in the usual sense of the word. Conditional distributions were defined when we had two random variables and we condition one random variable on the other. And we used the bar to separate the X from the Theta. To make the point that this is not a conditioned distribution, we use a different notation. We put a semicolon here. And what this is meant to say is that X has a distribution. That distribution has a certain parameter. And we don't know what that parameter is. So for example, this might be a normal distribution, with variance 1 but a mean Theta. We don't know what Theta is. And we want to estimate it. Now once we have this setting, then your job is to design this box, the estimator. The estimator is some data processing box that takes the measurements and produces an estimate of the unknown parameter. Now the notation that's used here is as if X and Theta were one-dimensional quantities. But actually, everything we say remains valid if you interpret X and Theta as vectors of parameters. So for example, you may obtain several measurements, X1 up to 2Xn. And there may be several unknown parameters in the background. Once more, we do not have, and we do not want to assume, a prior distribution on Theta. It's a constant. And if you want to think mathematically about this situation, it's as if you have many different probabilistic models. So a normal with this mean or a normal with that mean or a normal with that mean, these are alternative candidate probabilistic models. And we want to try to make a decision about which one is the correct model. 
In some cases, we have to choose just between a small number of models. For example, you have a coin with an unknown bias. The bias is either 1/2 or 3/4. You're going to flip the coin a few times. And you try to decide whether the true bias is this one or is that one. So in this case, we have two specific, alternative probabilistic models between which we want to distinguish. But sometimes things are a little more complicated. For example, you have a coin. And you have one hypothesis that my coin is unbiased. And the other hypothesis is that my coin is biased. And you do your experiments. And you want to come up with a decision that decides whether this is true or this one is true. In this case, we're not dealing with just two alternative probabilistic models. This one is a specific model for the coin. But this one actually corresponds to lots of possible, alternative coin models. So this includes the model where Theta is 0.6, the model where Theta is 0.7, Theta is 0.8, and so on. So we're trying to discriminate between one model and lots of alternative models. How does one go about this? Well, there's some systematic ways that one can approach problems of this kind. And we will start talking about these next time. So today, we're going to focus on estimation problems. In estimation problems, theta is a quantity, which is a real number, a continuous parameter. We're going to design this box, so what we get out of this box is an estimate. Now notice that this estimate here is a random variable. Even though theta is deterministic, this is random, because it's a function of the data that we observe. The data are random. We're applying a function to the data to construct our estimate. So, since it's a function of random variables, it's a random variable itself. The distribution of Theta hat depends on the distribution of X. The distribution of X is affected by Theta. So in the end, the distribution of your estimate Theta hat will also be affected by whatever Theta happens to be. Our general objective, when designing estimators, is that we want to get, in the end, an error, an estimation error, which is not too large. But we'll have to make that specific. Again, what exactly do we mean by that? So how do we go about this problem? One general approach is to pick the Theta under which the data that we observe, that is, the X's, are most likely to have occurred. So I observe X. For any given Theta, I can calculate this quantity, which tells me, under this particular Theta, the X that you observed had this probability of occurring. Under that Theta, the X that you observe had that probability of occurring. You just choose that Theta which makes the data that you observed most likely. It's interesting to compare this maximum likelihood estimate with the estimates that you would have, if you were in a Bayesian setting, and you were using maximum a posteriori probability estimation. In the Bayesian setting, what we do is, given the data, we use the prior distribution on Theta. And we calculate the posterior distribution of Theta given X. Notice that this is sort of the opposite from what we have here. This is the probability of X for a particular value of Theta, whereas this is the probability of Theta for a particular X. So it's the opposite type of conditioning. In the Bayesian setting, Theta is a random variable. So we can talk about the probability distribution of Theta. So how do these two compare, except for this syntactic difference that the order of X's and Theta's is reversed?
Let's write down, in full detail, what this posterior distribution of Theta is. By the Bayes rule, this conditional distribution is obtained from the prior, and the model of the measurement process that we have. And we get to this expression. So in Bayesian estimation, we want to find the most likely value of Theta. And we need to maximize this quantity over all possible Theta's. First thing to notice is that the denominator is a constant. It does not involve Theta. So when you maximize this quantity, you don't care about the denominator. You just want to maximize the numerator. Now, here, things start to look a little more similar. And they would be exactly of the same kind, if that term here was absent, if the prior was absent. The two are going to become the same if that prior was just a constant. So if that prior is a constant, then maximum likelihood estimation takes exactly the same form as Bayesian maximum posterior probability estimation. So you can give this particular interpretation of maximum likelihood estimation. Maximum likelihood estimation is essentially what you would have done, if you were in a Bayesian world, and you had assumed a prior on the Theta's that's uniform, all the Theta's being equally likely. Okay. So let's look at a simple example. Suppose that the Xi's are independent, identically distributed random variables, with a certain parameter Theta. So the distribution of each one of the Xi's is this particular term. So Theta is one-dimensional. It's a one-dimensional parameter. But we have several data. We write down the formula for the probability of a particular X vector, given a particular value of Theta. But again, when I use the word, given, here it's not in the conditioning sense. It's the value of the density for a particular choice of Theta. Here, I wrote down, I defined maximum likelihood estimation in terms of PMFs. That's what you would do if the X's were discrete random variables. Here, the X's are continuous random variables, so I'm using the PDF instead of the PMF. So this definition here generalizes to the case of continuous random variables. And you use f's instead of p's, our usual recipe. So that's how the maximum likelihood estimate is defined. Now, since the Xi's are independent, the joint density of all the X's together is the product of the individual densities. So you look at this quantity. This is the density or sort of probability of observing a particular sequence of X's. And we ask the question, what's the value of Theta that makes the X's that we observe most likely? So we want to carry out this maximization. Now this maximization is just a calculational problem. We're going to do this maximization by taking the logarithm of this expression. Maximizing an expression is the same as maximizing the logarithm. So the logarithm of this expression, the logarithm of a product is the sum of the logarithms. You get contributions from this Theta term. There's n of these, so we get an n log Theta. And then we have the sum of the logarithms of these terms. It gives us minus Theta times the sum of the X's. So we need to maximize this expression with respect to Theta. The way to do this maximization is you take the derivative, with respect to Theta. And you get n over Theta equals the sum of the X's. And then you solve for Theta. And you find that the maximum likelihood estimate is this quantity. Which sort of makes sense, because this is the reciprocal of the sample mean of X's.
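Here is a minimal Python sketch of this calculation (my own illustration, assuming a simulated data set with a known true Theta): it computes the closed-form estimate n over the sum of the X's and confirms it by a brute-force search over the log-likelihood n log Theta minus Theta times the sum of the X's.

```python
import numpy as np

# A sketch (illustrative, not from the lecture): maximum likelihood estimation
# for i.i.d. exponential samples with rate theta.
# Log-likelihood: n*log(theta) - theta*sum(x); maximizer: theta_hat = n / sum(x).

rng = np.random.default_rng(1)
true_theta = 0.5
x = rng.exponential(scale=1.0 / true_theta, size=1000)   # numpy's scale is 1/theta

theta_hat = len(x) / np.sum(x)           # closed-form ML estimate (1 / sample mean)

# Sanity check: the same answer from maximizing the log-likelihood on a grid.
grid = np.linspace(0.01, 2.0, 2000)
loglik = len(x) * np.log(grid) - grid * np.sum(x)
theta_grid = grid[np.argmax(loglik)]

print(theta_hat, theta_grid)             # both should be close to true_theta = 0.5
```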
Theta, in an exponential distribution, we know is 1 over the mean of the exponential distribution. So it looks like a reasonable estimate. So in any case, this is the estimate that the maximum likelihood estimation procedure tells us that we should report. This formula here, of course, tells you what to do if you have already observed specific numbers. If you have observed specific numbers, then you report this particular number as your estimate of Theta. If you want to describe your estimation procedure more abstractly, what you have constructed is an estimator, which is a box that takes in the random variables, capital X1 up to capital Xn, and puts out your estimate, which is also a random variable. Because it's a function of these random variables and is denoted by an upper case Theta, to indicate that this is now a random variable. So this is an equality about numbers. This is a description of the general procedure, which is an equality between two random variables. And this gives you the more abstract view of what we're doing here. All right. So what can we tell about our estimate? Is it good or is it bad? So we should look at this particular random variable and talk about the statistical properties that it has. What we would like is this random variable to be close to the true value of Theta, with high probability, no matter what Theta is, since we don't know what Theta is. Let's make a little more specific the properties that we want. So we cook up the estimator somehow. So this estimator corresponds, again, to a box that takes data in, the capital X's, and produces an estimate Theta hat. This estimate is random. Sometimes it will be above the true value of Theta. Sometimes it will be below. Ideally, we would like it to not have a systematic error, on the positive side or the negative side. So a reasonable wish to have, for a good estimator, is that, on the average, it gives you the correct value. Now here, let's be a little more specific about what that expectation is. This is an expectation, with respect to the probability distribution of Theta hat. The probability distribution of Theta hat is affected by the probability distribution of the X's. Because Theta hat is a function of the X's. And the probability distribution of the X's is affected by the true value of Theta. So depending on which one is the true value of Theta, this is going to be a different expectation. So if you were to write this expectation out in more detail, it would look something like this. You need to write down the probability distribution of Theta hat. And this is going to be some function. But this function depends on the true Theta, is affected by the true Theta. And then you integrate this with respect to Theta hat. What's the point here? Again, Theta hat is a function of the X's. So the density of Theta hat is affected by the density of the X's. The density of the X's is affected by the true value of Theta. So the distribution of Theta hat is affected by the value of Theta. Another way to put it is, as I've mentioned a few minutes ago, in this business, it's as if we are considering different possible probabilistic models, one probabilistic model for each choice of Theta. And we're trying to guess which one of these probabilistic models is the true one. One way of emphasizing the fact that this expression depends on the true Theta is to put a little subscript here, expectation, under the particular value of the parameter Theta.
So depending on what value the true parameter Theta takes, this expectation will have a different value. And what we would like is that no matter what the true value is, our estimate will not have a bias on the positive or the negative side. So this is a property that's desirable. Is it always going to be true? Not necessarily; it depends on what estimator we construct. Is it true for our exponential example? Unfortunately not, the estimate that we have in the exponential example turns out to be biased. And one extreme way of seeing this is to consider the case where our sample size is 1. We're trying to estimate Theta. And the estimator from the previous slide, in that case, is just 1/X1. Now X1 has a fair amount of density in the vicinity of 0, which means that 1/X1 has significant probability of being very large. And if you do the calculation, this ultimately makes the expected value of 1/X1 infinite. Now infinity is definitely not the correct value. So our estimate is biased upwards. And it's actually biased a lot upwards. So that's how things are. Maximum likelihood estimates, in general, will be biased. But under some conditions, they will turn out to be asymptotically unbiased. That is, as you get more and more data, as your X vector is longer and longer, with independent data, the estimate that you're going to have, the expected value of your estimator, is going to get closer and closer to the true value. So you do have some nice asymptotic properties, but we're not going to prove anything like this. Speaking of asymptotic properties, in general, what we would like to have is that, as you collect more and more data, you get the correct answer, in some sense. And the sense that we're going to use here is the limiting sense of convergence in probability, since this is the only notion of convergence of random variables that we have in our hands. This is similar to what we had in the pollster problem, for example. If we had a bigger and bigger sample size, we could be more and more confident that the estimate that we obtained is close to the unknown true parameter of the distribution that we have. So this is a desirable property. If you have an infinitely large amount of data, you should be able to estimate an unknown parameter more or less exactly. So this is a desirable property of estimators. It turns out that maximum likelihood estimation, given independent data, does have this property, under mild conditions. So maximum likelihood estimation, in this respect, is a good approach. So let's see, do we have this consistency property in our exponential example? In our exponential example, we used this quantity to estimate the unknown parameter Theta. What properties does this quantity have as n goes to infinity? Well this quantity is the reciprocal of that quantity up here, which is the sample mean. We know from the weak law of large numbers, that the sample mean converges to the expectation. So this property here comes from the weak law of large numbers. In probability, this quantity converges to the expected value, which, for exponential distributions, is 1/Theta. Now, if something converges to something, then the reciprocal of that should converge to the reciprocal of that. That's a property that's certainly correct for numbers. But we're not talking about convergence of numbers. We're talking about convergence in probability, which is a more complicated notion. Fortunately, it turns out that the same thing is true, when we deal with convergence in probability.
One can show, although we will not bother doing this, that indeed, the reciprocal of this, which is our estimate, converges in probability to the reciprocal of that. And that reciprocal is the true parameter Theta. So for this particular exponential example, we do have the desirable property, that as the number of data becomes larger and larger, the estimate that we have constructed will get closer and closer to the true parameter value. And this is true no matter what Theta is. No matter what the true parameter Theta is, we're going to get close to it as we collect more data. Okay. So these are two rough qualitative properties that would be nice to have. If you want to get a little more quantitative, you can start looking at the mean squared error that your estimator gives. Now, once more, the comment I was making up there applies. Namely, that this expectation here is an expectation with respect to the probability distribution of Theta hat that corresponds to a particular value of little theta. So fix a little theta. Write down this expression. Look at the probability distribution of Theta hat, under that little theta. And do this calculation. You're going to get some quantity that depends on the little theta. And so all quantities in this equality here should be interpreted as quantities under that particular value of little theta. So if you wanted to make this more explicit, you could start throwing little subscripts everywhere in those expressions. And let's see what those expressions tell us. The expected value of the square of a random variable, we know, is always equal to the variance of this random variable, plus the square of the expectation of that random variable. This equality here is just our familiar formula, that the expected value of X squared is the variance of X plus the square of the expected value of X. So we apply this formula to X equal to Theta hat minus Theta. Now, remember that, in this classical setting, theta is just a constant. We have fixed Theta. We want to calculate the variance of this quantity, under that particular Theta. When you add or subtract a constant to a random variable, the variance doesn't change. This is the same as the variance of our estimator. And what we've got here is the bias of our estimate. It tells us, on the average, whether we fall above or below. We denote the bias by b, so this term is the bias squared, b squared. If we have an unbiased estimator, the bias term will be 0. So ideally we want Theta hat to be very close to Theta. And since Theta is a constant, if that happens, the variance of Theta hat would be very small. So Theta is a constant. If Theta hat has a distribution that's concentrated just around the true little theta, then Theta hat would have a small variance. So this is one desire that we have: we want to have a small variance. But we also want to have a small bias at the same time. So the general form of the mean squared error has two contributions. One is the variance of our estimator. The other is the bias squared. And one usually wants to design an estimator that simultaneously keeps both of these terms small. So here's an estimation method that would do very well with respect to this term, but badly with respect to that term. So suppose that my distribution is, let's say, normal with an unknown mean Theta and variance 1. And I use as my estimator something very dumb. I always produce an estimate that says my estimate is 100. So I'm just ignoring the data and reporting 100. What does this do?
The variance of my estimator is 0. There's no randomness in the estimate that I report. But the bias is going to be pretty bad. The bias is going to be Theta hat, which is 100, minus the true value of Theta. And for some Theta's, my bias is going to be horrible. If my true Theta happens to be 0, my bias squared is a huge term. And I get a large error. So what's the moral of this example? There are ways of making that variance very small, but, in those cases, you pay a price in the bias. So you want to do something a little more delicate, where you try to keep both terms small at the same time. So these types of considerations become important when you start to try to design sophisticated estimators for more complicated problems. But we will not do this in this class. This belongs to further classes on statistics and inference. For this class, for parameter estimation, we will basically stick to two very simple methods. One is the maximum likelihood method we've just discussed. And the other method is what you would do if you were still in high school and didn't know any probability. You get data. And these data come from some distribution with an unknown mean. And you want to estimate that unknown mean. What would you do? You would just take those data and average them out. So let's make this a little more specific. We have X's that come from a given distribution. We know the general form of the distribution, perhaps. We do know, perhaps, the variance of that distribution, or, perhaps, we don't know it. But we do not know the mean. And we want to estimate the mean of that distribution. Now, we can write this situation, we can represent it, in a different form. The Xi's are equal to Theta, which is the mean, plus a zero-mean random variable that you can think of as noise. So this corresponds to the usual situation you would have in a lab, where you go and try to measure an unknown quantity. You get lots of measurements. But each time that you measure it, your measurement has some extra noise in there. And you want to kind of get rid of that noise. The way to try to get rid of the measurement noise is to collect lots of data and average them out. This is the sample mean. And this is a very, very reasonable way of trying to estimate the unknown mean of the X's. So this is the sample mean. It's a reasonable, plausible, in general, pretty good estimator of the unknown mean of a certain distribution. We can apply this estimator without really knowing a lot about the distribution of the X's. Actually, we don't need to know anything about the distribution. We can still apply it, because the variance, for example, does not show up here. We don't need to know the variance to calculate that quantity. Does this estimator have good properties? Yes, it does. What's the expected value of the sample mean? The expectation of this is the expectation of the sum divided by n. The expected value for each one of the X's is Theta. So the expected value of the sample mean is just Theta itself. So our estimator is unbiased. No matter what Theta is, our estimator does not have a systematic error in either direction. Furthermore, the weak law of large numbers tells us that this quantity converges to the true parameter in probability. So it's a consistent estimator. This is good. And you may want to calculate the mean squared error corresponding to this estimator. Remember how we defined the mean squared error? It's this quantity. It's a calculation that we have done a fair number of times by now.
The mean squared error is the variance of the distribution of the X's divided by n. So as we get more and more data, the mean squared error goes down to 0. In some examples, it turns out that the sample mean is also the same as the maximum likelihood estimate. For example, if the X's are coming from a normal distribution, you can write down the likelihood, do the maximization with respect to Theta, and you'll find that the maximum likelihood estimate is the same as the sample mean. In other cases, the sample mean will be different from the maximum likelihood estimate. And then you have a choice about which one of the two you would use. Probably, in most reasonable situations, you would just use the sample mean, because it's simple, easy to compute, and has nice properties. All right. So you go to your boss. And you report and say, OK, I did all my experiments in the lab. And the average value that I got is a certain number, 2.37. So is that informative to your boss? Well your boss would like to know how much they can trust this number, 2.37. Well, I know that the true value is not going to be exactly that. But how close should it be? So give me a range of what you think are possible values of Theta. So the situation is like this. So suppose that we observe X's that are coming from a certain distribution. And we're trying to estimate the mean. We get our data. Maybe our data looks something like this. You calculate the mean. You find the sample mean. So let's suppose that the sample mean is a number, for some reason taken to be 2.37. But you want to convey something to your boss about how spread out these data were. So the boss asks you to give him or her some kind of interval in which Theta, the true parameter, might lie. So the boss asked you for an interval. So what you do is you end up reporting an interval. And you somehow use the data that you have seen to construct this interval. And you report to your boss also the endpoints of this interval. Let's give names to these endpoints, Theta_n- and Theta_n+. The n's here just play the role of keeping track of how many data we're using. So what you report to your boss is this interval as well. Are these Theta's here, the endpoints of the interval, lowercase or uppercase? What should they be? Well you construct these intervals after you see your data. You take the data into account to construct your interval. So these definitely should depend on the data. And therefore they are random variables. Same thing with your estimator, in general, it's going to be a random variable. Although, when you go and report numbers to your boss, you give the specific realizations of the random variables, given the data that you got. So instead of having just a single box that produces estimates-- our previous picture was that you have your estimator that takes X's and produces Theta hats-- now our box will also be producing Theta hat minus and Theta hat plus. It's going to produce an interval as well. The X's are random, therefore these quantities are random. Once you go and do the experiment and obtain your data, then your data will be some lowercase x, specific numbers. And then your estimates and your estimator also become lowercase. What would we like this interval to do? We would like it to be highly likely to contain the true value of the parameter. So we might impose some specs of the following kind. I pick a number, alpha. Usually that alpha, think of it as a probability of a large error.
Typical value of alpha might be 0.05, in which case this number here is 0.95. And you're given specs that say something like this. I would like, with probability at least 0.95, this to happen, which says that the true parameter lies inside the confidence interval. Now let's try to interpret this statement. Suppose that you did the experiment, and that you ended up reporting to your boss a confidence interval from 1.97 to 2.56. That's what you report to your boss. And suppose that the confidence interval has this property. Can you go to your boss and say, with probability 95%, the true value of Theta is between these two numbers? Is that a meaningful statement? So the statement is, the tentative statement is, with probability 95%, the true value of Theta is between 1.97 and 2.56. Well, what is random in that statement? There's nothing random. The true value of theta is a constant. 1.97 is a number. 2.56 is a number. So it doesn't make any sense to talk about the probability that theta is in this interval. Either theta happens to be in that interval, or it happens to not be. But there are no probabilities associated with this. Because theta is not random. Syntactically, you can see this. Because theta here is lowercase. So what kind of probabilities are we talking about here? Where's the randomness? Well the random thing is the interval. It's not theta. So the statement that is being made here is that the interval, that's being constructed by our procedure, should have the property that, with probability 95%, it's going to fall on top of the true value of theta. So the right way of interpreting what the 95% confidence interval is, is something like the following. We have the true value of theta that we don't know. I get data. Based on the data, I construct a confidence interval. I get my confidence interval. I got lucky. And the true value of theta is in here. Next day, I do the same experiment, take my data, construct a confidence interval. And I get this confidence interval, lucky once more. Next day I get data. I use my data to come up with an estimate of theta and the confidence interval. That day, I was unlucky. And I got a confidence interval out there. What the requirement here is, is that 95% of the days, where we use this certain procedure for constructing confidence intervals, 95% of those days, we will be lucky. And we will capture the correct value of theta by our confidence interval. So it's a statement about the distribution of these random confidence intervals, how likely they are to fall on top of the true theta, as opposed to how likely they are to fall outside. So it's a statement about probabilities associated with a confidence interval. They're not probabilities about theta, because theta, itself, is not random. So this is what the confidence interval is, in general, and how we interpret it. How do we construct a 95% confidence interval? Let's go through this exercise, in a particular example. The calculations are exactly the same as the ones that you did when we talked about laws of large numbers and the central limit theorem. So there's nothing new calculationally but it's, perhaps, new in terms of the language that we use and the interpretation. So we got our sample mean from some distribution. And we would like to calculate a 95% confidence interval. We know from the normal tables that the standard normal has 2.5% probability on the tail beyond 1.96. Yes, by this time, the number 1.96 should be pretty familiar.
So if this probability here is 2.5%, this number here is 1.96. Now look at this random variable here. This is the sample mean's difference from the true mean, normalized by the usual normalizing factor. By the central limit theorem, this is approximately normal. So it has probability 0.95 of being less than 1.96 in absolute value. Now take this event here and rewrite it. This is the event, well, that Theta hat minus theta is bigger than this number and smaller than that number. This event here is equivalent to that event here. And so this suggests a way of constructing our 95% confidence interval. I'm going to report the interval, which gives this as the lower end of the confidence interval, and gives this as the upper end of the confidence interval. In other words, at the end of the experiment, we report the sample mean, which is our estimate. And we report also an interval around the sample mean. And this is our 95% confidence interval. The confidence interval becomes smaller when n is larger. In some sense, we're more certain that we're doing a good estimation job, so we can have a small interval and still be quite confident that our interval captures the true value of the parameter. Also, if our data have very little noise, when you have more accurate measurements, you're more confident that your estimate is pretty good. And that results in a smaller confidence interval, smaller length of the confidence interval. And still you have 95% probability of capturing the true value of theta. So we did this exercise by taking 95% confidence intervals and the corresponding value from the normal tables, which is 1.96. Of course, you can do it more generally, if you set your alpha to be some other number. Again, you look at the normal tables. And you find the value here, so that the tail has probability alpha over 2. And instead of using this 1.96, you use whatever number you get from the normal tables. And this tells you how to construct a confidence interval. Well, to be exact, this is not necessarily a 95% confidence interval. It's approximately a 95% confidence interval. Why is this? Because we've done an approximation. We have used the central limit theorem. So it might turn out to be a 95.5% confidence interval instead of 95%, because our calculations are not entirely accurate. But for reasonable values of n, using the central limit theorem is a good approximation. And that's what people almost always do. So just take the value from the normal tables. Okay, except for one catch. I used the data. I obtained my estimate. And I want to go to my boss and report this theta hat minus and theta hat plus, which is the confidence interval. What's the difficulty? I know what n is. But I don't know what sigma is, in general. So if I don't know sigma, what am I going to do? Here, there's a few options for what you can do. And the first option is familiar from what we did when we talked about the pollster problem. We don't know what sigma is, but maybe we have an upper bound on sigma. For example, if the Xi's are Bernoulli random variables, we have seen that the standard deviation is at most 1/2. So use the most conservative value for sigma. Using the most conservative value means that you take bigger confidence intervals than necessary. So that's one option. Another option is to try to estimate sigma from the data. How do you do this estimation? In special cases, for special types of distributions, you can think of heuristic ways of doing this estimation.
For example, in the case of Bernoulli random variables, we know that the true value of sigma, the standard deviation of a Bernoulli random variable, is the square root of theta times (1 minus theta), where theta is the mean of the Bernoulli. Try to use this formula. But theta is the thing we're trying to estimate in the first place. We don't know it. What do we do? Well, we have an estimate for theta, the estimate produced by our estimation procedure, the sample mean. So I obtain my data. I get my data. I produce the estimate theta hat. It's an estimate of the mean. Use that estimate in this formula to come up with an estimate of my standard deviation. And then use that standard deviation, in the construction of the confidence interval, pretending that this is correct. Well, if the number of your data is large, then we know, from the law of large numbers, that theta hat is a pretty good estimate of theta. So sigma hat is going to be a pretty good estimate of sigma. So we're not making large errors by using this approach. So in this scenario here, things were simple, because we had an analytical formula. Sigma was determined by theta. So we could come up with a quick and dirty estimate of sigma. In general, if you do not have any nice formulas of this kind, what could you do? Well, you still need to come up with an estimate of sigma somehow. What is a generic method for estimating a standard deviation? Equivalently, what could be a generic method for estimating a variance? Well the variance is an expected value of some random variable. The variance is the mean of the random variable inside of those brackets. How does one estimate the mean of some random variable? You obtain lots of measurements of that random variable and average them out. So this would be a reasonable way of estimating the variance of a distribution. And again, the weak law of large numbers tells us that this average converges to the expected value of this, which is just the variance of the distribution. So we got a nice and consistent way of estimating variances. But now, we seem to be getting in a vicious circle here, because to estimate the variance, we need to know the mean. And the mean is something we're trying to estimate in the first place. Okay. But we do have an estimate of the mean. So a reasonable approximation, once more, is to plug in, here, since we don't know the mean, the estimate of the mean. And so you get that expression, but with a theta hat instead of theta itself. And this is another reasonable way of estimating the variance. It does have the same consistency properties. Why? When n is large, this is going to behave the same as that, because theta hat converges to theta. And when n is large, this is approximately the same as sigma squared. So for a large n, this quantity also converges to sigma squared. And we have a consistent estimate of the variance as well. And we can take that consistent estimate and use it back in the construction of the confidence interval. One little detail, here, we're dividing by n. Here, we're dividing by n-1. Why do we do this? Well, it turns out that's what you need to do for this estimate to be an unbiased estimate of the variance. One has to do a little bit of a calculation, and one finds that that's the factor that you need to have here in order to be unbiased. Of course, if you get 100 data points, whether you divide by 100 or divide by 99, it's going to make only a tiny difference in your estimate of your variance.
So it's going to make only a tiny difference in your estimate of the standard deviation. It's not a big deal. And it doesn't really matter. But if you want to show off about your deeper knowledge of statistics, you throw in the 1 over n-1 factor in there. So now one basically needs to put together this story here, how you estimate the variance. You first estimate the sample mean. And then you do some extra work to come up with a reasonable estimate of the variance and the standard deviation. And then you use your estimate, of the standard deviation, to come up with a confidence interval, which has these two endpoints. In doing this procedure, there's basically a number of approximations that are involved. There are two types of approximations. One approximation is that we're pretending that the sample mean has a normal distribution. That's something we're justified to do, by the central limit theorem. But it's not exact. It's an approximation. And the second approximation that comes in is that, instead of using the correct standard deviation, in general, you will have to use some approximation of the standard deviation. Okay so you will be getting a little bit of practice with these concepts in recitation and tutorial. And we will move on to new topics next week. But the material that's going to be covered in the final exam is only up to this point. So next week is just general education. Hopefully useful, but it's not in the exam.
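Putting the whole recipe together, here is a short Python sketch (an added illustration; the true theta, sigma, and sample size are assumptions): it repeatedly draws data, forms the sample mean, estimates sigma with the n-1 convention, builds the interval theta hat plus or minus 1.96 sigma hat over root n, and checks that roughly 95% of these random intervals capture the fixed true theta, which is exactly the interpretation of a confidence interval discussed above.

```python
import numpy as np

# A sketch of the whole procedure (illustrative values, not from the lecture):
# estimate the mean with the sample mean, estimate sigma from the data with the
# 1/(n-1) factor, form the approximate 95% interval, and check its coverage.

rng = np.random.default_rng(2)
true_theta, sigma, n, trials = 2.37, 1.5, 100, 10_000
covered = 0

for _ in range(trials):
    x = rng.normal(true_theta, sigma, size=n)
    theta_hat = x.mean()                       # sample mean estimator
    sigma_hat = x.std(ddof=1)                  # divide by n-1, as in the lecture
    half_width = 1.96 * sigma_hat / np.sqrt(n)
    if theta_hat - half_width <= true_theta <= theta_hat + half_width:
        covered += 1

print(covered / trials)                        # should be close to 0.95
```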
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
15_Poisson_Process_II.txt
JOHN TSITSIKLIS: Today we're going to finish our discussion of the Poisson process. We're going to see a few of its properties, do a few interesting problems, some more interesting than others. So we'll go through a few examples and then we're going to talk about some quite strange things that happen with the Poisson process. So the first thing is to remember what the Poisson process is. It's a model, let's say, of arrivals of customers that are, in some sense, quote unquote, completely random, that is, a customer can arrive at any point in time. All points in time are equally likely. And different points in time are sort of independent of other points in time. So the fact that I got an arrival now doesn't tell me anything about whether there's going to be an arrival at some other time. In some sense, it's a continuous time version of the Bernoulli process. So the best way to think about the Poisson process is that we divide time into extremely tiny slots. And in each time slot, there's an independent possibility of having an arrival. Different time slots are independent of each other. On the other hand, when the slot is tiny, the probability for obtaining an arrival during that tiny slot is itself going to be tiny. So we capture these properties into a formal definition of what the Poisson process is. We have a probability mass function for the number of arrivals, k, during an interval of a given length. So this is the sort of basic description of the distribution of the number of arrivals. So tau is fixed. And k is the parameter. So when we add over all k's, the sum of these probabilities has to be equal to 1. There's a time homogeneity assumption, which is hidden in this, namely, the only thing that matters is the duration of the time interval, not where the time interval sits on the real axis. Then we have an independence assumption. Intervals that are disjoint are statistically independent from each other. So any information you give me about arrivals during this time interval doesn't change my beliefs about what's going to happen during another time interval. So this is a generalization of the idea that we had in Bernoulli processes that different time slots are independent of each other. And then to specify this function, the distribution of the number of arrivals, we sort of go in stages. We first specify this function for the case where the time interval is very small. And I'm telling you what those probabilities will be. And based on these, then, we do some calculations to find the formula for the distribution of the number of arrivals for intervals of a general duration. So for a small duration, delta, the probability of obtaining 1 arrival is lambda delta. The remaining probability is assigned to the event that we get no arrivals during that interval. The probability of obtaining more than 1 arrival in a tiny interval is essentially 0. And when we say essentially, it means modulo terms that are of order delta squared. And when delta is very small, anything which is delta squared can be ignored. So up to delta squared terms, that's what happens during a little interval. Now suppose we know the probability distribution for the number of arrivals in a little interval.
We can use this to get the distribution for the number of arrivals over several intervals. How do we do that? The big interval is composed of many little intervals. Each little interval is independent from any other little interval, so it is as if we have a sequence of Bernoulli trials. Each Bernoulli trial is associated with a little interval and has a small probability of obtaining a success or an arrival during that mini-slot. On the other hand, when delta is small, and you take a big interval and chop it up, you get a large number of little intervals. So what we essentially have here is a Bernoulli process, in which the number of trials is huge but the probability of success during any given trial is tiny. The average number of trials ends up being proportional to the length of the interval. If you have twice as large an interval, it's as if you're having twice as many of these mini-trials, so the expected number of arrivals will increase proportionately. There's also this parameter lambda, which we interpret as expected number of arrivals per unit time. And it comes in those probabilities here. When you double lambda, this means that a little interval is twice as likely to get an arrival. So you would expect to get twice as many arrivals as well. That's why the expected number of arrivals during an interval of length tau also scales proportionally to this parameter lambda. Somewhat unexpectedly, it turns out that the variance of the number of arrivals is also the same as the mean. This is a peculiarity that happens in the Poisson process. So this is one way of thinking about the Poisson process, in terms of little intervals, each one of which has a tiny probability of success. And we think of the distribution associated with that process as being described by this particular PMF. So this is the PMF for the number of arrivals during an interval of a fixed duration, tau. It's a PMF that extends all over the entire range of non-negative integers. So the number of arrivals you can get during an interval of a certain length can be anything. You can get as many arrivals as you want. Of course the probability of getting a zillion arrivals is going to be tiny. But in principle, this is possible. And that's because an interval, even if it's of a fixed length, consists of an infinite number of mini-slots in some sense. You can divide, chop it up, into as many mini-slots as you want. So in principle, it's possible that every mini-slot gets an arrival. In principle, it's possible to get an arbitrarily large number of arrivals. So this particular formula here is not very intuitive when you look at it. But it's a legitimate PMF. And it's called the Poisson PMF. It's the PMF that describes the number of arrivals. So that's one way of thinking about the Poisson process, where the basic object of interest would be this PMF and you try to work with it. There's another way of thinking about what happens in the Poisson process. And this has to do with letting things evolve in time. You start at time 0. There's going to be a time at which the first arrival occurs, and we call that time T1. This time turns out to have an exponential distribution with parameter lambda. Once you get an arrival, it's as if the process starts fresh. The best way to understand why this is the case is by thinking in terms of the analogy with the Bernoulli process. If you believe that statement for the Bernoulli process, since this is a limiting case, it should also be true.
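Here is a small Python sketch of the mini-slot picture (an added illustration; the values of lambda, tau, delta, and the number of trials are assumptions): it treats an interval of length tau as tau/delta independent Bernoulli slots, each with success probability lambda delta, and compares the resulting counts to the Poisson PMF with parameter lambda tau.

```python
import numpy as np
from math import exp, factorial

# A sketch of the Bernoulli approximation to the Poisson process (illustrative values):
# chop an interval of length tau into tau/delta tiny slots, put an independent arrival
# in each slot with probability lambda*delta, and compare the distribution of the
# total count with the Poisson PMF with parameter lambda*tau.

rng = np.random.default_rng(3)
lam, tau, delta = 2.0, 3.0, 0.001
n_slots = int(tau / delta)
trials = 20_000

counts = rng.binomial(n_slots, lam * delta, size=trials)   # total arrivals per interval

for k in range(6):
    empirical = np.mean(counts == k)
    poisson = exp(-lam * tau) * (lam * tau) ** k / factorial(k)
    print(k, round(float(empirical), 4), round(poisson, 4)) # the two columns should roughly agree
```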
So starting from this time, we're going to wait a random amount of time until we get the second arrival. This random amount of time, let's call it T2. This time, T2, is also going to have an exponential distribution with the same parameter, lambda. And these two are going to be independent of each other. OK? So the Poisson process has all the same memorylessness properties that the Bernoulli process has. What's another way of thinking of this property? So think of a process where you have a light bulb. The time at which the light bulb burns out, you can model by an exponential random variable. And suppose that we are sitting at some time, t, and I tell you that the light bulb has not yet burned out. What does this tell you about the future of the light bulb? Is the fact that it didn't burn out so far good news or bad news? Would you rather keep this light bulb that has worked for t time steps and is still OK? Or would you rather use a new light bulb that starts new at that point in time? Because of the memorylessness property, the past of that light bulb doesn't matter. So the future of this light bulb is statistically the same as the future of a new light bulb. For both of them, the time until they burn out is going to be described by an exponential distribution. So one way that people describe the situation is to say that used is exactly as good as new. So a used one is no worse than a new one. A used one is no better than a new one. So a used light bulb that hasn't yet burnt out is exactly as good as a new light bulb. So that's another way of thinking about the memorylessness that we have in the Poisson process. Back to this picture. The time until the second arrival is the sum of two independent exponential random variables. So, in principle, you can use the convolution formula to find the distribution of T1 plus T2, and that would be what we call Y2, the time until the second arrival. But there's also a direct way of obtaining the distribution of Y2, and this is the calculation that we did last time on the blackboard. And actually, we did it more generally. We found the distribution of the time until the k-th arrival occurs. It has a closed form formula, which is called the Erlang distribution with k degrees of freedom. So let's see what's going on here. It's a distribution of what kind? It's a continuous distribution. It's a probability density function. This is because the time is a continuous random variable. Time is continuous. Arrivals can happen at any time. So we're talking about the PDF. This k is just the parameter of the distribution. We're talking about the k-th arrival, so k is a fixed number. Lambda is another parameter of the distribution, which is the arrival rate. So it's a PDF over the Y's, whereas lambda and k are parameters of the distribution. OK. So this was what we knew from last time. Just to get some practice, let us do a problem that's not too difficult, but just to see how we use the various formulas that we have. So Poisson was a mathematician, but Poisson also means fish in French. So Poisson goes fishing. And let's assume that fish are caught according to a Poisson process. That's not too bad an assumption. At any given point in time, you have a little probability that a fish would be caught. And whether you catch one now is sort of independent of whether at some later time a fish will be caught or not. So let's just make this assumption.
And suppose that the rules of the game are as follows. Fish are being caught at a certain rate of 0.6 per hour. You fish for 2 hours, no matter what. And then there are two possibilities. If I have caught a fish, I stop and go home. So if some fish have been caught, so there's at least 1 arrival during this interval, I go home. Or if nothing has been caught, I continue fishing until I catch something. And then I go home. So that's the description of what is going to happen. And now let's start asking questions of all sorts. What is the probability that I'm going to be fishing for more than 2 hours? I will be fishing for more than 2 hours, if and only if no fish were caught during those 2 hours, in which case, I will have to continue. Therefore, this is just the probability of catching 0 fish in the next 2 hours, and according to the formula that we have, this is going to be e to the minus lambda times how much time we have. There's another way of thinking about this. The probability that I fish for more than 2 hours is the probability that the first catch happens after time 2, which would be the integral from 2 to infinity of the density of the first arrival time. And that density is an exponential. So you do the integral of an exponential, and, of course, you would get the same answer. OK. That's easy. So what's the probability of fishing for more than 2 but less than 5 hours? What does it take for this to happen? For this to happen, we need to catch 0 fish from time 0 to 2 and catch the first fish sometime between 2 and 5. One way of thinking about what's happening here might be to say that there's a Poisson process that keeps going on forever. But as soon as I catch the first fish, instead of continuing fishing and obtaining those other fish, I just go home. Now the fact that I go home before time 5 means that, if I were to stay until time 5, I would have caught at least 1 fish. I might have caught more than 1. So the event of interest here is that the first catch happens between times 2 and 5. So one way of calculating this quantity would be as the probability that the first catch happens between times 2 and 5. Another way to deal with it is to say, this is the probability that I caught 0 fish in the first 2 hours, times the probability that I catch at least 1 fish during the next 3 hours. What is that second factor? The probability of 0 fish in the next 3 hours is this quantity, and 1 minus this is the probability of catching at least 1 fish, of having at least 1 arrival, between times 2 and 5. If there's at least 1 arrival between times 2 and 5, then I would have gone home by time 5. So both of these, if you plug in numbers and all that, of course, are going to give you the same answer. Now next, what's the probability that I catch at least 2 fish? In which scenario are we? Under this scenario, I go home when I catch my first fish. So in order to catch at least 2 fish, it must be in this case. So this is the same as the event that I catch at least 2 fish during the first 2 hours. So it's going to be the sum, from 2 to infinity, of the probability that I catch 2 fish, or that I catch 3 fish, or more than that. So it's this quantity. k is the number of fish that I catch. At least 2, so k goes from 2 to infinity. These are the probabilities of catching a number k of fish during this interval.
And if you want a simpler form without an infinite sum, this would be 1 minus the probability of catching 0 fish, minus the probability of catching 1 fish, during a time interval of length 2. Another way to think of it. I'm going to catch 2 fish, at least 2 fish, if and only if the second catch in this process happens before time 2. So that's another way of thinking about the same event. So it's going to be the probability that the random variable Y2, the arrival time of the second fish, is less than or equal to 2. OK. The next one is a little trickier. Here we need to do a little bit of divide and conquer. Overall, in this expedition, what is the expected number of fish to be caught? One way to think about it is to try to use the total expectation theorem. And think of expected number of fish, given this scenario, or expected number of fish, given this scenario. That's a little more complicated than the way I'm going to do it. The way I'm going to do it is to think as follows. Expected number of fish is the expected number of fish caught between times 0 and 2 plus expected number of fish caught after time 2. So what's the expected number caught between time 0 and 2? This is lambda times t, so it is 0.6 times 2. This is the expected number of fish that are caught between times 0 and 2. Now let's think about the expected number of fish caught afterwards. How many fish are being caught afterwards? Well it depends on the scenario. If we're in this scenario, we've gone home and we catch 0. If we're in this scenario, then we continue fishing until we catch one. So the expected number of fish to be caught after time 2 is going to be the probability of this scenario times 1. And the probability of that scenario is the probability that I caught 0 fish during the first 2 hours. We multiply that by 1, which is the number of fish I'm going to catch if I continue. The expected total fishing time we can calculate exactly the same way. I'm jumping to the last one. My total fishing time has a first period of 2 hours. I'm going to fish for 2 hours no matter what. And then if I caught 0 fish, which happens with this probability, my expected time is going to be the expected time from here onwards, which is the expected value of this exponential random variable with parameter lambda. So the expected time is 1 over lambda. And in our case, this is 1/0.6. Finally, if I tell you that I have been fishing for 4 hours and nothing has been caught so far, how much do you expect this quantity to be? Here the story is, again, that for the Poisson process used is as good as new. The process does not have any memory. What happened in the past doesn't matter for the future. It's as if the process starts new at this point in time. So this one is going to be, again, the same exponentially distributed random variable with the same parameter lambda. So the time until an arrival comes has an exponential distribution with parameter lambda, no matter what has happened in the past. Starting from now and looking into the future, it's as if the process has just started. So it's going to be 1 over lambda, which is 1/0.6. OK. Now our next example is going to be a little more complicated or subtle. But before we get to the example, let's refresh our memory about what we discussed last time about merging independent Poisson processes. Instead of drawing the picture that way, another way we could draw it could be this.
We have a Poisson process with rate lambda1, and a Poisson process with rate lambda2. Each one of these has its own arrivals. And then we form the merged process. And the merged process records an arrival whenever there's an arrival in either of the two processes. This process and that process are assumed to be independent of each other. Now different times in this process and that process are independent of each other. So what happens in these two time intervals is independent from what happens in these two time intervals. These two time intervals determine what happens here. These two time intervals determine what happens there. So because these are independent from these, this means that this is also independent from that. So the independence assumption is satisfied for the merged process. And the merged process turns out to be a Poisson process. And if you want to find the arrival rate for that process, you argue as follows. During a little interval of length delta, we have probability lambda1 delta of having an arrival in this process. We have probability lambda2 delta of an arrival in this process, plus second order terms in delta, which we're ignoring. And then you do the calculation and you find that in this process, you're going to have an arrival probability which is (lambda1 plus lambda2) times delta, again ignoring terms that are second order in delta. So the merged process is a Poisson process whose arrival rate is the sum of the arrival rates of the individual processes. And the calculation we did at the end of the last lecture-- If I tell you that a new arrival happened here, where did that arrival come from? Did it come from here or from there? If lambda1 is equal to lambda2, then by symmetry you would say that it's equally likely to have come from here or from there. But if this lambda is much bigger than that lambda, then the arrival that we saw is more likely to have come from the process with the bigger rate. And the formula that captures this is the following. This is the probability that my arrival has come from this particular stream rather than that particular stream. So when an arrival comes and you ask, what is the origin of that arrival? It's as if I'm flipping a coin with these odds. And depending on the outcome of that coin, I'm going to tell you it came from here or it came from there. So the origin of an arrival is either this stream or that stream. And this is the probability that the origin of the arrival is that one. Now if we look at 2 different arrivals, and we ask about their origins-- So let's think about the origin of this arrival and compare it with the origin of that arrival. The origin of this arrival is random. It could be either this or that. And this is the relevant probability. The origin of that arrival is random. It could be either here or there, and again, with the same relevant probability. Question. The origin of this arrival, is it dependent on or independent from the origin of that arrival? And here's how the argument goes. Separate times are independent. Whatever has happened in the process during this set of times is independent from whatever happened in the process during that set of times. Because different times have nothing to do with each other, the origin of this, of an arrival here, has nothing to do with the origin of an arrival there. So the origins of different arrivals are also independent random variables.
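Here is a short Python sketch of merging (an added illustration; the two rates and the time horizon T are assumptions): it generates the two independent arrival streams, merges them, and checks both the merged rate lambda1 plus lambda2 and the probability lambda1 over (lambda1 plus lambda2) that a merged arrival originated in the first stream.

```python
import numpy as np

# A sketch of merging two independent Poisson processes (illustrative values):
# build each stream from i.i.d. exponential interarrival times, merge the streams,
# and check the merged rate and the origin probability lambda1/(lambda1+lambda2).

rng = np.random.default_rng(4)
lam1, lam2, T = 1.0, 3.0, 10_000.0

def arrival_times(lam, horizon):
    # cumulative sums of i.i.d. exponential interarrival times, truncated at the horizon
    t = np.cumsum(rng.exponential(1.0 / lam, size=int(2 * lam * horizon)))
    return t[t <= horizon]

t1, t2 = arrival_times(lam1, T), arrival_times(lam2, T)
merged = np.sort(np.concatenate([t1, t2]))

print(len(merged) / T)            # should be close to lam1 + lam2 = 4
print(len(t1) / len(merged))      # should be close to lam1 / (lam1 + lam2) = 0.25
```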
So it as if that each time that you have an arrival in the merge process, it's as if you're flipping a coin to determine where did that arrival came from and these coins are independent of each other. OK. OK. Now we're going to use this-- what we know about merged processes to solve the problem that would be harder to do, if you were not using ideas from Poisson processes. So the formulation of the problem has nothing to do with the Poisson process. The formulation is the following. We have 3 light-bulbs. And each light bulb is independent and is going to die out at the time that's exponentially distributed. So 3 light bulbs. They start their lives and then at some point they die or burn out. So let's think of this as X, this as Y, and this as Z. And we're interested in the time until the last light-bulb burns out. So we're interested in the maximum of the 3 random variables, X, Y, and Z. And in particular, we want to find the expected value of this maximum. OK. So you can do derived distribution, use the expected value rule, anything you want. You can get this answer using the tools that you already have in your hands. But now let us see how we can connect to this picture with a Poisson picture and come up with the answer in a very simple way. What is an exponential random variable? An exponential random variable is the first act in the long play that involves a whole Poisson process. So an exponential random variable is the first act of a Poisson movie. Same thing here. You can think of this random variable as being part of some Poisson process that has been running. So it's part of this bigger picture. We're still interested in the maximum of the 3. The other arrivals are not going to affect our answers. It's just, conceptually speaking, we can think of the exponential random variable as being embedded in a bigger Poisson picture. So we have 3 Poisson process that are running in parallel. Let us split the expected time until the last burnout into pieces, which is time until the first burnout, time from the first until the second, and time from the second until the third. And find the expected values of each one of these pieces. What can we say about the expected value of this? This is the first arrival out of all of these 3 Poisson processes. It's the first event that happens when you look at all of these processes simultaneously. So 3 Poisson processes running in parallel. We're interested in the time until one of them, any one of them, gets in arrival. Rephrase. We merged the 3 Poisson processes, and we ask for the time until we observe an arrival in the merged process. When 1 of the 3 gets an arrival for the first time, the merged process gets its first arrival. So what's the expected value of this time until the first burnout? It's going to be the expected value of a Poisson random variable. So the first burnout is going to have an expected value, which is-- OK. It's a Poisson process. The merged process of the 3 has a collective arrival rate, which is 3 times lambda. So this is the parameter over the exponential distribution that describes the time until the first arrival in the merged process. And the expected value of this random variable is 1 over that. When you have an exponential random variable with parameter lambda, the expected value of that random variable is 1 over lambda. Here we're talking about the first arrival time in a process with rate 3 lambda. The expected time until the first arrival is 1 over (3 lambda). Alright. 
So at this time, this bulb, this arrival happened, this bulb has been burned. So we don't care about that bulb anymore. We start at this time, and we look forward. This bulb has been burned. So let's just look forward from now on. What have we got? We have two bulbs that are burning. We have a Poisson process that's the bigger picture of what could happen to that light bulb, if we were to keep replacing it. Another Poisson process. These two processes are, again, independent. From this time until that time, how long does it take? It's the time until either this process records an arrival or that process records and arrival. That's the same as the time that the merged process of these two records an arrival. So we're talking about the expected time until the first arrival in a merged process. The merged process is Poisson. It's Poisson with rate 2 lambda. So that extra time is going to take-- the expected value is going to be 1 over the (rate of that Poisson process). So 1 over (2 lambda) is the expected value of this random variable. So at this point, this bulb now is also burned. So we start looking from this time on. That part of the picture disappears. Starting from this time, what's the expected value until that remaining light-bulb burns out? Well, as we said before, in a Poisson process or with exponential random variables, we have memorylessness. A used bulb is as good as a new one. So it's as if we're starting from scratch here. So this is going to be an exponential random variable with parameter lambda. And the expected value of it is going to be 1 over lambda. So the beauty of approaching this problem in this particular way is, of course, that we manage to do everything without any calculus at all, without striking an integral, without trying to calculate expectations in any form. Most of the non-trivial problems that you encounter in the Poisson world basically involve tricks of these kind. You have a question and you try to rephrase it, trying to think in terms of what might happen in the Poisson setting, use memorylessness, use merging, et cetera, et cetera. Now we talked about merging. It turns out that the splitting of Poisson processes also works in a nice way. The story here is exactly the same as for the Bernoulli process. So I'm having a Poisson process. And each time, with some rate lambda, and each time that an arrival comes, I'm going to send it to that stream and the record an arrival here with some probability P. And I'm going to send it to the other stream with some probability 1 minus P. So either of this will happen or that will happen, depending on the outcome of the coin flip that I do. Each time that then arrival occurs, I flip a coin and I decide whether to record it here or there. This is called splitting a Poisson process into two pieces. What kind of process do we get here? If you look at the little interval for length delta, what's the probability that this little interval gets an arrival? It's the probability that this one gets an arrival, which is lambda delta times the probability that after I get an arrival my coin flip came out to be that way, so that it sends me there. So this means that this little interval is going to have probability lambda delta P. Or maybe more suggestively, I should write it as lambda P times delta. So every little interval has a probability of an arrival proportional to delta. The proportionality factor is lambda P. So lambda P is the rate of that process. 
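Returning for a moment to the light-bulb example above (the splitting argument picks up again right after this), here is a small simulation sketch, not from the lecture, that checks the answer E[time of last burnout] = 1/(3 lambda) + 1/(2 lambda) + 1/lambda. The rate lambda = 0.5 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, trials = 0.5, 500_000   # assumed burn-out rate, arbitrary

# Three independent exponential lifetimes; we want the time of the last burnout.
lifetimes = rng.exponential(1 / lam, size=(trials, 3))
last_burnout = lifetimes.max(axis=1)

formula = 1 / (3 * lam) + 1 / (2 * lam) + 1 / lam
print("E[time of last burnout]  sim:", last_burnout.mean(),
      "  formula 1/(3 lam) + 1/(2 lam) + 1/lam:", formula)
```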
And then you go through the mental exercise that you went through for the Bernoulli process to argue that a different intervals here are independent and so on. And that completes checking that this process is going to be a Poisson process. So when you split a Poisson process by doing independent coin flips each time that something happens, the processes that you get is again a Poisson process, but of course with a reduced rate. So instead of the word splitting, sometimes people also use the words thinning-out. That is, out of the arrivals that came, you keep a few but throw away a few. OK. So now the last topic over this lecture is a quite curious phenomenon that goes under the name of random incidents. So here's the story. Buses have been running on Mass Ave. from time immemorial. And the bus company that runs the buses claims that they come as a Poisson process with some rate, let's say, of 4 buses per hour. So that the expected time between bus arrivals is going to be 15 minutes. OK. Alright. So people have been complaining that they have been showing up there. They think the buses are taking too long. So you are asked to investigate. Is the company-- Does it operate according to its promises or not. So you send an undercover agent to go and check the interarrival times of the buses. Are they 15 minutes? Or are they longer? So you put your dark glasses and you show up at the bus stop at some random time. And you go and ask the guy in the falafel truck, how long has it been since the last arrival? So of course that guy works for the FBI, right? So they tell you, well, it's been, let's say, 12 minutes since the last bus arrival. And then you say, "Oh, 12 minutes. Average time is 15. So a bus should be coming any time now." Is that correct? No, you wouldn't think that way. It's a Poisson process. It doesn't matter how long it has been since the last bus arrival. So you don't go through that fallacy. Instead of predicting how long it's going to be, you just sit down there and wait and measure the time. And you find that this is, let's say, 11 minutes. And you go to your boss and report, "Well, it took-- I went there and the time from the previous bus to the next one was 23 minutes. It's more than the 15 that they said." So go and do that again. You go day after day. You keep these statistics of the length of this interval. And you tell your boss it's a lot more than 15. It tends to be more like 30 or so. So the bus company is cheating us. Does the bus company really run Poisson buses at the rate that they have promised? Well let's analyze the situation here and figure out what the length of this interval should be, on the average. The naive argument is that this interval is an interarrival time. And interarrival times, on the average, are 15 minutes, if the company runs indeed Poisson processes with these interarrival times. But actually the situation is a little more subtle because this is not a typical interarrival interval. This interarrival interval consists of two pieces. Let's call them T1 and T1 prime. What can you tell me about those two random variables? What kind of random variable is T1? Starting from this time, with the Poisson process, the past doesn't matter. It's the time until an arrival happens. So T1 is going to be an exponential random variable with parameter lambda. So in particular, the expected value of T1 is going to be 15 by itself. How about the random variable T1 prime. What kind of random variable is it? 
This is like the first arrival in a Poisson process that runs backwards in time. What kind of process is a Poisson process running backwards in time? Let's think of coin flips. Suppose you have a movie of coin flips. And for some accident, that fascinating movie, you happen to watch it backwards. Will it look any different statistically? No. It's going to be just the sequence of random coin flips. So a Bernoulli process that's runs in reverse time is statistically identical to a Bernoulli process in forward time. The Poisson process is a limit of the Bernoulli. So, same story with the Poisson process. If you run it backwards in time it looks the same. So looking backwards in time, this is a Poisson process. And T1 prime is the time until the first arrival in this backward process. So T1 prime is also going to be an exponential random variable with the same parameter, lambda. And the expected value of T1 prime is 15. Conclusion is that the expected length of this interval is going to be 30 minutes. And the fact that this agent found the average to be something like 30 does not contradict the claims of the bus company that they're running Poisson buses with a rate of lambda equal to 4. OK. So maybe the company can this way-- they can defend themselves in court. But there's something puzzling here. How long is the interarrival time? Is it 15? Or is it 30? On the average. The issue is what do we mean by a typical interarrival time. When we say typical, we mean some kind of average. But average over what? And here's two different ways of thinking about averages. You number the buses. And you have bus number 100. You have bus number 101, bus number 102, bus number 110, and so on. One way of thinking about averages is that you pick a bus number at random. I pick, let's say, that bus, all buses being sort of equally likely to be picked. And I measure this interarrival time. So for a typical bus. Then, starting from here until there, the expected time has to be 1 over lambda, for the Poisson process. But what we did in this experiment was something different. We didn't pick a bus at random. We picked a time at random. And if the picture is, let's say, this way, I'm much more likely to pick this interval and therefore this interarrival time, rather than that interval. Because, this interval corresponds to very few times. So if I'm picking a time at random and, in some sense, let's say, uniform, so that all times are equally likely, I'm much more likely to fall inside a big interval rather than a small interval. So a person who shows up at the bus stop at a random time. They're selecting an interval in a biased way, with the bias favor of longer intervals. And that's why what they observe is a random variable that has a larger expected value then the ordinary expected value. So the subtlety here is to realize that we're talking between two different kinds of experiments. Picking a bus number at random verses picking an interval at random with a bias in favor of longer intervals. Lots of paradoxes that one can cook up using Poisson processes and random processes in general often have to do with the story of this kind. The phenomenon that we had in this particular example also shows up in general, whenever you have other kinds of arrival processes. So the Poisson process is the simplest arrival process there is, where the interarrival times are exponential random variables. There's a larger class of models. 
They're called renewal processes, in which, again, we have a sequence of successive arrivals, interarrival times are identically distributed and independent, but they may come from a general distribution. So to make the same point of the previous example but in a much simpler setting, suppose that bus interarrival times are either 5 or 10 minutes apart. So you get some intervals that are of length 5. You get some that are of length 10. And suppose that these are equally likely. So we have -- not exactly -- In the long run, we have as many 5 minute intervals as we have 10 minute intervals. So the average interarrival time is 7 and 1/2. But if a person shows up at a random time, what are they going to see? Do we have as many 5s as 10s? But every 10 covers twice as much space. So if I show up at a random time, I have probability 2/3 falling inside an interval of duration 10. And I have one 1/3 probability of falling inside an interval of duration 5. That's because, out of the whole real line, 2/3 of it is covered by intervals of length 10, just because they're longer. 1/3 is covered by the smaller intervals. Now if I fall inside an interval of length 10 and I measure the length of the interval that I fell into, that's going to be 10. But if I fall inside an interval of length 5 and I measure how long it is, I'm going to get a 5. And that, of course, is going to be different than 7.5. OK. And which number should be bigger? It's the second number that's bigger because this one is biased in favor of the longer intervals. So that's, again, another illustration of the different results that you get when you have this random incidence phenomenon. So the bottom line, again, is that if you talk about a typical interarrival time, one must be very precise in specifying what we mean typical. So typical means sort of random. But to use the word random, you must specify very precisely what is the random experiment that you are using. And if you're not careful, you can get into apparent puzzles, such as the following. Suppose somebody tells you the average family size is 4, but the average person lives in a family of size 6. Is that compatible? Family size is 4 on the average, but typical people live, on the average, in families of size 6. Well yes. There's no contradiction here. We're talking about two different experiments. In one experiment, I pick a family at random, and I tell you the average family is 4. In another experiment, I pick a person at random and I tell you that this person, on the average, will be in their family of size 6. And what is the catch here? That if I pick a person at random, large families are more likely to be picked. So there's a bias in favor of large families. Or if you want to survey, let's say, are trains crowded in your city? Or are buses crowded? One choice is to pick a bus at random and inspect how crowded it is. Another choice is to pick a typical person and ask them, "Did you ride the bus today? Was it's crowded?" Well suppose that in this city there's one bus that's extremely crowded and all the other buses are completely empty. If you ask a person. "Was your bus crowded?" They will tell you, "Yes, my bus was crowded." There's no witness from the empty buses to testify in their favor. So by sampling people instead of sampling buses, you're going to get different result. And in the process industry, if your job is to inspect and check cookies, you will be faced with a big dilemma. Do you want to find out how many chocolate chips there are on a typical cookie? 
Are you going to interview cookies, or are you going to interview chocolate chips and ask them how many other chips were there on your cookie? And you're going to get different answers in these cases. So the moral is, one has to be very precise about how you formulate the sampling procedure that you have. And you'll get different answers.
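As a closing check on the random incidence discussion, here is a simulation sketch, not from the lecture, of both bus examples: Poisson buses with mean interarrival time 15 minutes, where a random observer sees intervals of about 30 minutes on average, and the 5-or-10 minute renewal example, where the observer sees about 25/3, roughly 8.33 minutes, instead of 7.5.

```python
import numpy as np

rng = np.random.default_rng(3)

def observed_interval_mean(interarrivals, n_probes=100_000):
    """Average length of the interarrival interval containing a uniformly random time."""
    arrivals = np.cumsum(interarrivals)
    T = arrivals[-1]
    probes = rng.uniform(0, T, size=n_probes)
    idx = np.searchsorted(arrivals, probes)   # which interval each probe falls into
    return interarrivals[idx].mean()

# Poisson buses: rate 4 per hour, i.e. mean interarrival time 15 minutes.
poisson_gaps = rng.exponential(15.0, size=1_000_000)
print("Poisson buses:   mean gap =", poisson_gaps.mean(),
      "  mean observed gap =", observed_interval_mean(poisson_gaps))   # ~15 vs ~30

# Renewal example: gaps are 5 or 10 minutes, equally likely.
renewal_gaps = rng.choice([5.0, 10.0], size=1_000_000)
print("5-or-10 buses:   mean gap =", renewal_gaps.mean(),
      "  mean observed gap =", observed_interval_mean(renewal_gaps))   # 7.5 vs ~8.33
```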
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
22_Bayesian_Statistical_Inference_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu PROFESSOR: So we're going to finish today our discussion of Bayesian Inference, which we started last time. As you probably saw there's not a huge lot of concepts that we're introducing at this point in terms of specific skills of calculating probabilities. But, rather, it's more of an interpretation and setting up the framework. So the framework in Bayesian estimation is that there is some parameter which is not known, but we have a prior distribution on it. These are beliefs about what this variable might be, and then we'll obtain some measurements. And the measurements are affected by the value of that parameter that we don't know. And this effect, the fact that X is affected by Theta, is captured by introducing a conditional probability distribution-- the distribution of X depends on Theta. It's a conditional probability distribution. So we have formulas for these two densities, the prior density and the conditional density. And given that we have these, if we multiply them we can also get the joint density of X and Theta. So we have everything that's there is to know in this second. And now we observe the random variable X. Given this random variable what can we say about Theta? Well, what we can do is we can always calculate the conditional distribution of theta given X. And now that we have the specific value of X we can plot this as a function of Theta. OK. And this is the complete answer to a Bayesian Inference problem. This posterior distribution captures everything there is to say about Theta, that's what we know about Theta. Given the X that we have observed Theta is still random, it's still unknown. And it might be here, there, or there with several probabilities. On the other hand, if you want to report a single value for Theta then you do some extra work. You continue from here, and you do some data processing on X. Doing data processing means that you apply a certain function on the data, and this function is something that you design. It's the so-called estimator. And once that function is applied it outputs an estimate of Theta, which we call Theta hat. So this is sort of the big picture of what's happening. Now one thing to keep in mind is that even though I'm writing single letters here, in general Theta or X could be vector random variables. So think of this-- it could be a collection Theta1, Theta2, Theta3. And maybe we obtained several measurements, so this X is really a vector X1, X2, up to Xn. All right, so now how do we choose a Theta to report? There are various ways of doing it. One is to look at the posterior distribution and report the value of Theta, at which the density or the PMF is highest. This is called the maximum a posteriori estimate. So we pick a value of theta for which the posteriori is maximum, and we report it. An alternative way is to try to be optimal with respects to a mean squared error. So what is this? If we have a specific estimator, g, this is the estimate it's going to produce. This is the true value of Theta, so this is our estimation error. We look at the square of the estimation error, and look at the average value. We would like this squared estimation error to be as small as possible. 
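As a concrete illustration of this framework, here is a small sketch, not from the lecture, that computes a posterior numerically on a grid and reads off both the MAP estimate and the posterior mean. The normal prior, the noise level sigma, and the observed value are all made-up choices for illustration; the question of how to choose the estimator g is taken up right below.

```python
import numpy as np

# Hypothetical setup: Theta has a prior density, X given Theta has a known
# conditional density, and we observe X = x. Compute the posterior on a grid.
theta = np.linspace(-5, 5, 2001)
prior = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)            # assumed N(0, 1) prior

x_observed = 1.3                                               # assumed measurement
sigma = 0.8                                                    # assumed noise std
likelihood = np.exp(-(x_observed - theta)**2 / (2 * sigma**2))

posterior = prior * likelihood
posterior /= np.trapz(posterior, theta)                        # normalize

theta_map = theta[np.argmax(posterior)]                        # maximum a posteriori
theta_lms = np.trapz(theta * posterior, theta)                 # conditional expectation
print("MAP estimate:", theta_map, "   posterior mean:", theta_lms)
```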
How can we design our estimator g to make that error as small as possible? It turns out that the answer is to produce, as an estimate, the conditional expectation of Theta given X. So the conditional expectation is the best estimate that you could produce if your objective is to keep the mean squared error as small as possible. So this statement here is a statement of what happens on the average over all Theta's and all X's that may happen in our experiment. The conditional expectation as an estimator has an even stronger property. Not only is it optimal on the average, but it's also optimal given that you have made a specific observation, no matter what you observe. Let's say you observe a specific value for the random variable X. After that point, if you're asked to produce a best estimate Theta hat that minimizes this mean squared error, your best estimate would be the conditional expectation given the specific value that you have observed. These two statements say almost the same thing, but this one is a bit stronger. This one tells you no matter what specific X happens, the conditional expectation is the best estimate. This one tells you, on the average over all X's that may happen, the conditional expectation is the best estimator. Now this is really a consequence of this. If the conditional expectation is best for any specific X, then it's the best one even when X is left random and you are averaging your error over all possible X's. OK, so now that we know what is the optimal way of producing an estimate, let's do a simple example to see how things work out. So we have started with an unknown random variable, Theta, which is uniformly distributed between 4 and 10. And then we have an observation model that tells us that, given the value of Theta, X is going to be a random variable that ranges between Theta - 1 and Theta + 1. So think of X as a noisy measurement of Theta: Theta plus some noise, which is between -1 and +1. So really the model that we are using here is that X is equal to Theta plus U -- where U is uniform on -1 and +1. So we have the true value of Theta, but X could be Theta - 1, or it could be all the way up to Theta + 1. And X is uniformly distributed on that interval. That's the same as saying that U is uniformly distributed over this interval. So now we have all the information that we need, and we can construct the joint density. And the joint density is, of course, the prior density times the conditional density. We know both of these. Both of these are constants, so the joint density is also going to be a constant. 1/6 times 1/2, this is one over 12. But it is a constant not everywhere, only on the range of possible X's and Thetas. So Theta can take any value between 4 and 10, so these are the values of Theta. And for any given value of Theta, X can take values from Theta minus 1 up to Theta plus 1. So here, you can imagine a line that goes with slope one, and then X can take that value of Theta plus or minus one. So this object here, this is the set of possible X and Theta pairs. So the density is equal to one over 12 over this set, and it's zero everywhere else. So outside here the density is zero; the density is only nonzero on that set. All right, so now we're asked to estimate Theta in terms of X. So we want to build an estimator which is going to be a function from the X's to the Thetas. That's why I chose the axes this way -- X to be on this axis, Theta on that axis -- because the estimator we're building is a function of X.
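Before the estimator is derived below, here is a quick sketch, not from the lecture, of how one might sample (Theta, X) pairs from this model and confirm that they all fall inside the slope-one strip where the joint density is 1/12.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

theta = rng.uniform(4, 10, size=n)          # prior: uniform on [4, 10]
x = theta + rng.uniform(-1, 1, size=n)      # X = Theta + U, with U uniform on [-1, 1]

# Every sampled pair lies in the strip theta - 1 <= x <= theta + 1, 4 <= theta <= 10,
# where the joint density is the constant (1/6) * (1/2) = 1/12.
print("x range:", x.min(), x.max())         # roughly 3 to 11
print("all points inside the strip:",
      np.all((x >= theta - 1) & (x <= theta + 1)))
```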
Based on the observation that we obtained, we want to estimate theta. So we know that the optimal estimator is the conditional expectation, given the value of x. So what is the conditional expectation? If you fix a particular value of x, let's say in this range. So this is our x, then what do we know about theta? We know that theta lies in this range. Theta can only be sampled between those two values. And what kind of distribution does theta have? What is the conditional distribution of theta given x? Well, remember how we built conditional distributions from joint distributions? The conditional distribution is just a section of the joint distribution applied to the place where we're conditioning. So the joint is constant. So the conditional is also going to be a constant density over this interval. So the posterior distribution of theta is uniform over this interval. So if the posterior of theta is uniform over that interval, the expected value of theta is going to be the meet point of that interval. So the estimate which you report-- if you observe that theta-- is going to be this particular point here, it's the midpoint. The same argument goes through even if you obtain an x somewhere here. Given this x, theta can take a value between these two values. Theta is going to have a uniform distribution over this interval, and the conditional expectation of theta given x is going to be the midpoint of that interval. So now if we plot our estimator by tracing midpoints in this diagram what you're going to obtain is a curve that starts like this, then it changes slope. So that it keeps track of the midpoint, and then it goes like that again. So this blue curve here is our g of x, which is the conditional expectation of theta given that x is equal to little x. So it's a curve, in our example it consists of three straight segments. But overall it's non-linear. It's not a single line through this diagram. And that's how things are in general. g of x, our optimal estimate has no reason to be a linear function of x. In general it's going to be some complicated curve. So how good is our estimate? I mean you reported your x, your estimate of theta based on x, and your boss asks you what kind of error do you expect to get? Having observed the particular value of x, what you can report to your boss is what you think is the mean squared error is going to be. We observe the particular value of x. So we're conditioning, and we're living in this universe. Given that we have made this observation, this is the true value of theta, this is the estimate that we have produced, this is the expected squared error, given that we have made the particular observation. Now in this conditional universe this is the expected value of theta given x. So this is the expected value of this random variable inside the conditional universe. So when you take the mean squared of a random variable minus the expected value, this is the same thing as the variance of that random variable. Except that it's the variance inside the conditional universe. Having observed x, theta is still a random variable. It's distributed according to the posterior distribution. Since it's a random variable, it has a variance. And that variance is our mean squared error. So this is the variance of the posterior distribution of Theta given the observation that we have made. OK, so what is the variance in our example? If X happens to be here, then Theta is uniform over this interval, and this interval has length 2. 
Theta is uniformly distributed over an interval of length 2. This is the posterior distribution of Theta. What is the variance? You remember the formula for the variance of a uniform random variable: it is the length of the interval squared divided by 12, so this is 1/3. So the variance of Theta -- the mean squared error -- is going to be 1/3 whenever this kind of picture applies. This picture applies when X is between 5 and 9. If X is less than 5, then the picture is a little different, and Theta is going to be uniform over a smaller interval. And so the variance of Theta is going to be smaller as well. So let's start plotting our mean squared error. Between 5 and 9 the variance of Theta -- the posterior variance -- is 1/3. Now when X falls in here, Theta is uniformly distributed over a smaller interval. The size of this interval changes linearly over that range. And so when we take the squared size of that interval, we get a quadratic function of how much we have moved from that corner. So at that corner, what is the variance of Theta? Well, if I observe an X that's equal to 3, then I know with certainty that Theta is equal to 4. Then I'm in very good shape; I know exactly what Theta is going to be. So the variance, in this case, is going to be 0. If I observe an X that's a little larger, then Theta is now random, takes values on a little interval, and the variance of Theta is going to be proportional to the square of the length of that little interval. So we get a curve that starts rising quadratically from here. It goes up to 1/3. At the other end of the picture the same is true. If you observe an X which is 11, then Theta can only be equal to 10. And so the error in Theta is equal to 0; there's 0 error variance. But as we obtain X's that are slightly less than 11, the mean squared error again rises quadratically. So we end up with a plot like this. What this plot tells us is that certain measurements are better than others. If you see X equal to 3, then you're lucky, because you know exactly what Theta is. If you see an X which is equal to 6, then you're sort of unlucky, because it doesn't tell you Theta with great precision. Theta could be anywhere on that interval. And so the variance of Theta -- even after you have observed X -- is a certain number, 1/3 in our case. So the moral to take out of that story is that the error variance -- or the mean squared error -- depends on what particular observation you happen to obtain. Some observations may be very informative, and once you see a specific number, then you know exactly what Theta is. Some observations might be less informative. You observe your X, but it could still leave a lot of uncertainty about Theta. So conditional expectations are really the cornerstone of Bayesian estimation. They're particularly popular, especially in engineering contexts. They're used a lot in signal processing, communications, control theory, and so on. So that makes it worth playing a little bit with their theoretical properties, and getting some appreciation of a few subtleties involved here. No new math, in reality, in what we're going to do here. But it's going to be a good opportunity to practice manipulation of conditional expectations. So let's look at the expected value of the estimation error that we obtain. So Theta hat, our estimator, is the conditional expectation. Theta hat minus Theta -- what kind of error do we have? If Theta hat is bigger than Theta, then we have made a positive error.
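Sticking with this example for a moment (the discussion of the error Theta hat minus Theta continues right below), here is a short sketch, not from the lecture, that computes the conditional expectation and the conditional variance for a few values of x, using the fact that given X = x, Theta is uniform on the interval from max(4, x - 1) to min(10, x + 1).

```python
import numpy as np

def posterior_interval(x):
    """Given X = x, Theta is uniform on this interval (uniform-prior example above)."""
    lo, hi = max(4.0, x - 1.0), min(10.0, x + 1.0)
    return lo, hi

def lms_estimate(x):
    lo, hi = posterior_interval(x)
    return (lo + hi) / 2            # midpoint = conditional expectation

def conditional_mse(x):
    lo, hi = posterior_interval(x)
    return (hi - lo) ** 2 / 12      # variance of a uniform = length^2 / 12

for x in [3.0, 4.0, 6.0, 9.5, 11.0]:
    print(f"x = {x:4.1f}   E[Theta|X=x] = {lms_estimate(x):5.2f}"
          f"   Var(Theta|X=x) = {conditional_mse(x):5.3f}")
# In the middle range (5 <= x <= 9) the variance is 1/3; near x = 3 or x = 11 it drops to 0.
```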
If not, if it's on the other side, we have made a negative error. Then it turns out that, on the average, the errors cancel each other out. So let's do this calculation. Let's calculate the expected value of the error given X. Now, by the definition of the error, this is the expected value of Theta hat minus Theta given X. We use linearity of expectations to break it up as expected value of Theta hat given X minus expected value of Theta given X. And now what? Our estimate is made on the basis of the data, the X's. If I tell you X, then you know what Theta hat is. Remember that the conditional expectation is a random variable which is a function of the random variable on which you're conditioning. If you know X, then you know the conditional expectation given X; you know what Theta hat is going to be. So Theta hat is a function of X. If it's a function of X, then once I tell you X, you know what Theta hat is going to be. So this conditional expectation is going to be Theta hat itself. Here this is -- just by definition -- Theta hat, and so we get that the difference is equal to 0. So what we have proved is that no matter what I have observed, given that I have observed it, on the average my error is going to be 0. This is a statement involving equality of random variables. Remember that conditional expectations are random variables because they depend on the thing you're conditioning on. 0 is sort of a trivial random variable. This tells you that this random variable is identically equal to the 0 random variable. More specifically, it tells you that no matter what value for X you observe, the conditional expectation of the error is going to be 0. And this takes us to this statement here, which is an equality between numbers. No matter what specific value for capital X you have observed, your error, on the average, is going to be equal to 0. So this is a less abstract version of these statements. This is an equality between two numbers. It's true for every value of X, so it's true in terms of this random variable being equal to that random variable. Because, remember, according to our definition this random variable is the random variable that takes this specific value when capital X happens to be equal to little x. Now this doesn't mean that your error is 0; it only means that your error is as likely, in some sense, to fall on the positive side as to fall on the negative side. So sometimes your error will be positive, sometimes negative. And on the average these things cancel out and give you 0 -- on the average. So this is a property that's sometimes given a name: we say that Theta hat is unbiased. So Theta hat, our estimate, does not have a tendency to be on the high side. It does not have a tendency to be on the low side. On the average it's just right. So let's do a little more playing here. Let's see how our error is related to an arbitrary function of the data. Let's do this in a conditional universe and look at this quantity. In a conditional universe where X is known, h of X is known. And so you can pull it outside the expectation. In the conditional universe where the value of X is given, this quantity becomes just a constant. There's nothing random about it. So you can pull it out of the expectation, and write things this way. And we have just calculated that this quantity is 0. So this number turns out to be 0 as well. Now, having done this, we can take expectations of both sides. And now let's use the law of iterated expectations.
Expectation of a conditional expectation gives us the unconditional expectation, and this is also going to be 0. So here we use the law of iterated expectations. OK. OK, why are we doing this? We're doing this because I would like to calculate the covariance between Theta tilde and Theta hat. Theta hat is, ask the question -- is there a systematic relation between the error and the estimate? So to calculate the covariance we use the property that we can calculate the covariances by calculating the expected value of the product minus the product of the expected values. And what do we get? This is 0, because of what we just proved. And this is 0, because of what we proved earlier. That the expected value of the error is equal to 0. So the covariance between the error and any function of X is equal to 0. Let's use that to the case where the function of X we're considering is Theta hat itself. Theta hat is our estimate, it's a function of X. So this 0 result would still apply, and we get that this covariance is equal to 0. OK, so that's what we proved. Let's see, what are the morals to take out of all this? First is you should be very comfortable with this type of calculation involving conditional expectations. The main two things that we're using are that when you condition on a random variable any function of that random variable becomes a constant, and can be pulled out the conditional expectation. The other thing that we are using is the law of iterated expectations, so these are the skills involved. Now on the substance, why is this result interesting? This tells us that the error is uncorrelated with the estimate. What's a hypothetical situation where these would not happen? Whenever Theta hat is positive my error tends to be negative. Suppose that whenever Theta hat is big then you say oh my estimate is too big, maybe the true Theta is on the lower side, so I expect my error to be negative. That would be a situation that would violate this condition. This condition tells you that no matter what Theta hat is, you don't expect your error to be on the positive side or on the negative side. Your error will still be 0 on the average. So if you obtain a very high estimate this is no reason for you to suspect that the true Theta is lower than your estimate. If you suspected that the true Theta was lower than your estimate you should have changed your Theta hat. If you make an estimate and after obtaining that estimate you say I think my estimate is too big, and so the error is negative. If you thought that way then that means that your estimate is not the optimal one, that your estimate should have been corrected to be smaller. And that would mean that there's a better estimate than the one you used, but the estimate that we are using here is the optimal one in terms of mean squared error, there's no way of improving it. And this is really captured in that statement. That is knowing Theta hat doesn't give you a lot of information about the error, and gives you, therefore, no reason to adjust your estimate from what it was. Finally, a consequence of all this. This is the definition of the error. Send Theta to this side, send Theta tilde to that side, you get this relation. The true parameter is composed of two quantities. The estimate, and the error that they got with a minus sign. These two quantities are uncorrelated with each other. Their covariance is 0, and therefore, the variance of this is the sum of the variances of these two quantities. So what's an interpretation of this equality? 
There is some inherent randomness in the random variable theta that we're trying to estimate. Theta hat tries to estimate it, tries to get close to it. And if Theta hat always stays close to Theta, since Theta is random Theta hat must also be quite random, so it has uncertainty in it. And the more uncertain Theta hat is the more it moves together with Theta. So the more uncertainty it removes from Theta. And this is the remaining uncertainty in Theta. The uncertainty that's left after we've done our estimation. So ideally, to have a small error we want this quantity to be small. Which is the same as saying that this quantity should be big. In the ideal case Theta hat is the same as Theta. That's the best we could hope for. That corresponds to 0 error, and all the uncertainly in Theta is absorbed by the uncertainty in Theta hat. Interestingly, this relation here is just another variation of the law of total variance that we have seen at some point in the past. I will skip that derivation, but it's an interesting fact, and it can give you an alternative interpretation of the law of total variance. OK, so now let's return to our example. In our example we obtained the optimal estimator, and we saw that it was a nonlinear curve, something like this. I'm exaggerating the corner of a little bit to show that it's nonlinear. This is the optimal estimator. It's a nonlinear function of X -- nonlinear generally means complicated. Sometimes the conditional expectation is really hard to compute, because whenever you have to compute expectations you need to do some integrals. And if you have many random variables involved it might correspond to a multi-dimensional integration. We don't like this. Can we come up, maybe, with a simpler way of estimating Theta? Of coming up with a point estimate which still has some nice properties, it has some good motivation, but is simpler. What does simpler mean? Perhaps linear. Let's put ourselves in a straitjacket and restrict ourselves to estimators that's are of these forms. My estimate is constrained to be a linear function of the X's. So my estimator is going to be a curve, a linear curve. It could be this, it could be that, maybe it would want to be something like this. I want to choose the best possible linear function. What does that mean? It means that I write my Theta hat in this form. If I fix a certain a and b I have fixed the functional form of my estimator, and this is the corresponding mean squared error. That's the error between the true parameter and the estimate of that parameter, we take the square of this. And now the optimal linear estimator is defined as one for which these mean squared error is smallest possible over all choices of a and b. So we want to minimize this expression over all a's and b's. How do we do this minimization? Well this is a square, you can expand it. Write down all the terms in the expansion of the square. So you're going to get the term expected value of Theta squared. You're going to get another term-- a squared expected value of X squared, another term which is b squared, and then you're going to get to various cross terms. What you have here is really a quadratic function of a and b. So think of this quantity that we're minimizing as some function h of a and b, and it happens to be quadratic. How do we minimize a quadratic function? We set the derivative of this function with respect to a and b to 0, and then do the algebra. 
After you do the algebra you find that the best choice for a is this 1, so this is the coefficient next to X. This is the optimal a. And the optimal b corresponds of the constant terms. So this term and this times that together are the optimal choices of b. So the algebra itself is not very interesting. What is really interesting is the nature of the result that we get here. If we were to plot the result on this particular example you would get the curve that's something like this. It goes through the middle of this diagram and is a little slanted. In this example, X and Theta are positively correlated. Bigger values of X generally correspond to bigger values of Theta. So in this example the covariance between X and Theta is positive, and so our estimate can be interpreted in the following way: The expected value of Theta is the estimate that you would come up with if you didn't have any information about Theta. If you don't make any observations this is the best way of estimating Theta. But I have made an observation, X, and I need to take it into account. I look at this difference, which is the piece of news contained in X? That's what X should be on the average. If I observe an X which is bigger than what I expected it to be, and since X and Theta are positively correlated, this tells me that Theta should also be bigger than its average value. Whenever I see an X that's larger than its average value this gives me an indication that theta should also probably be larger than its average value. And so I'm taking that difference and multiplying it by a positive coefficient. And that's what gives me a curve here that has a positive slope. So this increment-- the new information contained in X as compared to the average value we expected apriori, that increment allows us to make a correction to our prior estimate of Theta, and the amount of that correction is guided by the covariance of X with Theta. If the covariance of X with Theta were 0, that would mean there's no systematic relation between the two, and in that case obtaining some information from X doesn't give us a guide as to how to change the estimates of Theta. If that were 0, we would just stay with this particular estimate. We're not able to make a correction. But when there's a non zero covariance between X and Theta that covariance works as a guide for us to obtain a better estimate of Theta. How about the resulting mean squared error? In this context turns out that there's a very nice formula for the mean squared error obtained from the best linear estimate. What's the story here? The mean squared error that we have has something to do with the variance of the original random variable. The more uncertain our original random variable is, the more error we're going to make. On the other hand, when the two variables are correlated we explored that correlation to improve our estimate. This row here is the correlation coefficient between the two random variables. When this correlation coefficient is larger this factor here becomes smaller. And our mean squared error become smaller. So think of the two extreme cases. One extreme case is when rho equal to 1 -- so X and Theta are perfectly correlated. When they're perfectly correlated once I know X then I also know Theta. And the two random variables are linearly related. In that case, my estimate is right on the target, and the mean squared error is going to be 0. The other extreme case is if rho is equal to 0. The two random variables are uncorrelated. 
In that case the measurement does not help me estimate Theta, and the uncertainty that's left-- the mean squared error-- is just the original variance of Theta. So the uncertainty in Theta does not get reduced. So moral-- the estimation error is a reduced version of the original amount of uncertainty in the random variable Theta, and the larger the correlation between those two random variables, the better we can remove uncertainty from the original random variable. I didn't derive this formula, but it's just a matter of algebraic manipulations. We have a formula for Theta hat, subtract Theta from that formula. Take square, take expectations, and do a few lines of algebra that you can read in the text, and you end up with this really neat and clean formula. Now I mentioned in the beginning of the lecture that we can do inference with Theta's and X's not just being single numbers, but they could be vector random variables. So for example we might have multiple data that gives us information about X. There are no vectors here, so this discussion was for the case where Theta and X were just scalar, one-dimensional quantities. What do we do if we have multiple data? Suppose that Theta is still a scalar, it's one dimensional, but we make several observations. And on the basis of these observations we want to estimate Theta. The optimal least mean squares estimator would be again the conditional expectation of Theta given X. That's the optimal one. And in this case X is a vector, so the general estimator we would use would be this one. But if we want to keep things simple and we want our estimator to have a simple functional form we might restrict to estimator that are linear functions of the data. And then the story is exactly the same as we discussed before. I constrained myself to estimating Theta using a linear function of the data, so my signal processing box just applies a linear function. And I'm looking for the best coefficients, the coefficients that are going to result in the least possible squared error. This is my squared error, this is (my estimate minus the thing I'm trying to estimate) squared, and then taking the average. How do we do this? Same story as before. The X's and the Theta's get averaged out because we have an expectation. Whatever is left is just a function of the coefficients of the a's and of b's. As before it turns out to be a quadratic function. Then we set the derivatives of this function of a's and b's with respect to the coefficients, we set it to 0. And this gives us a system of linear equations. It's a system of linear equations that's satisfied by those coefficients. It's a linear system because this is a quadratic function of those coefficients. So to get closed-form formulas in this particular case one would need to introduce vectors, and matrices, and metrics inverses and so on. The particular formulas are not so much what interests us here, rather, the interesting thing is that this is simply done just using straightforward solvers of linear equations. The only thing you need to do is to write down the correct coefficients of those non-linear equations. And the typical coefficient that you would get would be what? Let say a typical quick equations would be -- let's take a typical term of this quadratic one you expanded. You're going to get the terms such as a1x1 times a2x2. When you take expectations you're left with a1a2 times expected value of x1x2. So this would involve terms such as a1 squared expected value of x1 squared. 
You would get terms such as a1a2, expected value of x1x2, and a lot of other terms here should have a too. So you get something that's quadratic in your coefficients. And the constants that show up in your system of equations are things that have to do with the expected values of squares of your random variables, or products of your random variables. To write down the numerical values for these the only thing you need to know are the means and variances of your random variables. If you know the mean and variance then you know what this thing is. And if you know the covariances as well then you know what this thing is. So in order to find the optimal linear estimator in the case of multiple data you do not need to know the entire probability distribution of the random variables that are involved. You only need to know your means and covariances. These are the only quantities that affect the construction of your optimal estimator. We could see this already in this formula. The form of my optimal estimator is completely determined once I know the means, variance, and covariance of the random variables in my model. I do not need to know how the details distribution of the random variables that are involved here. So as I said in general, you find the form of the optimal estimator by using a linear equation solver. There are special examples in which you can get closed-form solutions. The nicest simplest estimation problem one can think of is the following-- you have some uncertain parameter, and you make multiple measurements of that parameter in the presence of noise. So the Wi's are noises. I corresponds to your i-th experiment. So this is the most common situation that you encounter in the lab. If you are dealing with some process, you're trying to measure something you measure it over and over. Each time your measurement has some random error. And then you need to take all your measurements together and come up with a single estimate. So the noises are assumed to be independent of each other, and also to be independent from the value of the true parameter. Without loss of generality we can assume that the noises have 0 mean and they have some variances that we assume to be known. Theta itself has a prior distribution with a certain mean and the certain variance. So the form of the optimal linear estimator is really nice. Well maybe you cannot see it right away because this looks messy, but what is it really? It's a linear combination of the X's and the prior mean. And it's actually a weighted average of the X's and the prior mean. Here we collect all of the coefficients that we have at the top. So the whole thing is basically a weighted average. 1/(sigma_i-squared) is the weight that we give to Xi, and in the denominator we have the sum of all of the weights. So in the end we're dealing with a weighted average. If mu was equal to 1, and all the Xi's were equal to 1 then our estimate would also be equal to 1. Now the form of the weights that we have is interesting. Any given data point is weighted inversely proportional to the variance. What does that say? If my i-th data point has a lot of variance, if Wi is very noisy then Xi is not very useful, is not very reliable. So I'm giving it a small weight. Large variance, a lot of error in my Xi means that I should give it a smaller weight. If two data points have the same variance, they're of comparable quality, then I'm going to give them equal weight. The other interesting thing is that the prior mean is treated the same way as the X's. 
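Here is a sketch, not from the lecture, of the weighted-average estimator just described; the interpretation of the prior mean as an extra data point continues right after. The prior mean, prior variance, and the noise variances are arbitrary illustrative numbers, and the noises are drawn as normal here only to have something to simulate -- the estimator itself only uses means and variances.

```python
import numpy as np

rng = np.random.default_rng(5)

mu0, sigma0 = 5.0, 2.0               # prior mean and std of Theta (assumed)
sigmas = np.array([1.0, 0.5, 2.0])   # noise std of each measurement (assumed)

def linear_estimate(x, mu0, sigma0, sigmas):
    """Weighted average of the prior mean and the measurements,
    each weighted by the inverse of its variance."""
    weights = np.concatenate(([1 / sigma0**2], 1 / sigmas**2))
    values = np.concatenate(([mu0], x))
    return np.sum(weights * values) / np.sum(weights)

# One simulated experiment: draw Theta from the prior, then noisy measurements of it.
theta = rng.normal(mu0, sigma0)
x = theta + rng.normal(0, sigmas)
print("true Theta:", theta, "   estimate:", linear_estimate(x, mu0, sigma0, sigmas))
```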
So it's treated as an additional observation. So we're taking a weighted average of the prior mean and of the measurements that we are making. The formula looks as if the prior mean was just another data point. So that's the way of thinking about Bayesian estimation. You have your real data points, the X's that you observe, you also had some prior information. This plays a role similar to a data point. Interesting note that if all random variables are normal in this model these optimal linear estimator happens to be also the conditional expectation. That's the nice thing about normal random variables that conditional expectations turn out to be linear. So the optimal estimate and the optimal linear estimate turn out to be the same. And that gives us another interpretation of linear estimation. Linear estimation is essentially the same as pretending that all random variables are normal. So that's a side point. Now I'd like to close with a comment. You do your measurements and you estimate Theta on the basis of X. Suppose that instead you have a measuring device that's measures X-cubed instead of measuring X, and you want to estimate Theta. Are you going to get to different a estimate? Well X and X-cubed contained the same information. Telling you X is the same as telling you the value of X-cubed. So the posterior distribution of Theta given X is the same as the posterior distribution of Theta given X-cubed. And so the means of these posterior distributions are going to be the same. So doing transformations through your data does not matter if you're doing optimal least squares estimation. On the other hand, if you restrict yourself to doing linear estimation then using a linear function of X is not the same as using a linear function of X-cubed. So this is a linear estimator, but where the data are the X-cube's, and we have a linear function of the data. So this means that when you're using linear estimation you have some choices to make linear on what? Sometimes you want to plot your data on a not ordinary scale and try to plot a line through them. Sometimes you plot your data on a logarithmic scale, and try to plot a line through them. Which scale is the appropriate one? Here it would be a cubic scale. And you have to think about your particular model to decide which version would be a more appropriate one. Finally when we have multiple data sometimes these multiple data might contain the same information. So X is one data point, X-squared is another data point, X-cubed is another data point. The three of them contain the same information, but you can try to form a linear function of them. And then you obtain a linear estimator that has a more general form as a function of X. So if you want to estimate your Theta as a cubic function of X, for example, you can set up a linear estimation model of this particular form and find the optimal coefficients, the a's and the b's. All right, so the last slide just gives you the big picture of what's happening in Bayesian Inference, it's for you to ponder. Basically we talked about three possible estimation methods. Maximum posteriori, mean squared error estimation, and linear mean squared error estimation, or least squares estimation. And there's a number of standard examples that you will be seeing over and over in the recitations, tutorial, homework, and so on, perhaps on exams even. 
Where we take some nice priors on some unknown parameter, we take some nice models for the noise or the observations, and then you need to work out posterior distributions in the various estimates and compare them.
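As a final sketch for this lecture -- again not from the lecture itself -- here is the linear least-mean-squares estimator applied to the earlier uniform example, Theta uniform on [4, 10] and X = Theta + U with U uniform on [-1, 1]. It estimates the coefficient a as cov(Theta, X)/var(X), forms the linear estimate, and checks numerically that the resulting mean squared error agrees with (1 - rho^2) times the variance of Theta.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

# The uniform example from this lecture: Theta ~ U(4, 10), X = Theta + U(-1, 1).
theta = rng.uniform(4, 10, size=n)
x = theta + rng.uniform(-1, 1, size=n)

# Linear LMS estimator: Theta_hat = E[Theta] + cov(Theta, X) / var(X) * (X - E[X]).
a = np.cov(theta, x)[0, 1] / np.var(x)
b = theta.mean() - a * x.mean()
theta_hat = a * x + b

rho = np.corrcoef(theta, x)[0, 1]
print("empirical LLSE mean squared error:", np.mean((theta_hat - theta) ** 2))
print("(1 - rho^2) * var(Theta):         ", (1 - rho**2) * np.var(theta))
```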
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Inferring_a_Discrete_Random_Variable_from_a_Continuous_Measurement.txt
Hi. In this problem, we're going to look at how to infer a discrete random variable from a continuous measurement. And really, what it's going to give us is some practice working with a variation of Bayes' rule. So the problem tells us that we have a discrete random variable x with this PMF. It is 1 with probability P, minus 1 with probability 1 minus P, and 0 otherwise. So here is just a diagram of this PMF. And then we also have another random variable, y, which is continuous. And its PDF is given by this. It's 1/2 lambda e to the minus lambda times the absolute value of y. And so this may look familiar. It looks kind of like an exponential. And in fact, it's just a two-sided exponential. That's flattened by a factor of 1/2. And this is what it looks like, kind of like a tent that goes on both ways. And then we have a random variable z, which is equal to the sum of x and y. And the problem is going to be figuring out what x is, based on an observed value of what z is. So because x is discrete and y is random-- sorry, x is discrete, and y is continuous, z is also going to be continuous. So our measurement is z, which is continuous. And we want to infer x, which is discrete. So the problem asked us to find is this what is the probability that x equals 1, given that z is a little z. And you can write this another way, just as a conditional PMF as well. It's the conditional PMF of x, given z, evaluated to 1 conditioned on little z. All right, so now let's apply the correct variation of Bayes' rule. So remember, it's going to be this, the probability that x equals 1, or the PMF of x evaluated to 1, times the-- you flip this conditioning. So now it's going to be a conditional PDF of z, since z is continuous. It's going to be a conditional PDF of z, given x, evaluated at some little z condition on x being 1. And the bottom is the conditional PDF-- or sorry, just the regular PDF of z. And of course, we can rewrite this denominator. Remember, the denominator is always just-- you can use the law of total probability and rewrite it. And one of the terms is going to be exactly the same as the numerator. So one of the ways that z can be some little z is it's in combination with x being equal to 1. And the probability of that is exactly the same thing as the numerator. And the other way is if x is equal to negative 1. And that gives us this second term. All right. And now let's just fill in what all these different terms are. So with the PMF of x evaluated at 1, that is just P. What is the conditional PDF of z, given that x is equal to 1? Well, that takes a little bit more work. Given that x is 1, then z is just going to be-- so if x equals 1, then z is just y plus 1, which means that you can just imagine taking y-- this is what y is, the distribution of y-- and just adding 1 to it, which, in this diagram, would amount to shifting it over by one. So now, it would look like this, the distribution. And algebraically, all you would do is just change this thing in the absolute value to y minus 1. That amounts to shifting it over to the right by one. All right. So what is that? That's just 1/2 lambda, 1/2 lambda, e to the minus lambda y-- sorry, not y, z-- z minus 1. And the denominator, well, the first term is going to be exactly the same. It's just also P 1/2 lambda e to the minus lambda z minus 1. What about the second term? The second term, first we need to figure out what is the PMF of x evaluated at a negative 1. Or in other words, what's the probability that x is negative 1? 
That is given to us by the PMF. It's 1 minus P. And then the second part is, what is the conditional PDF of z, given that x is negative 1? Well, we can just do the same sort of trick here. If x is negative 1, then z is just y minus 1. In which case, the PDF of z would just look like this. You're shifted to the left by one now. And now what you have to do is change this into a plus 1. So this conditional PDF would be 1/2 lambda e to the minus lambda z plus 1, absolute value of z plus 1. All right, so this looks pretty messy. And we can try to simplify things a little bit. So we can get rid of these 1/2 lambdas. And then we can multiply the numerator and the denominator by the same term. Let's multiply it by e to the lambda absolute value of z minus 1. So what we're going to do is try to cancel out some of these exponential terms. So that will cancel out this top term. So all we have in the numerator now is just P. It will also cancel out this exponential in the denominator. And then we'll have to change this here, because it'll have an extra e to the lambda absolute value of z minus 1. All right, now let's rewrite this. And what we get is plus 1 minus P e to the minus lambda absolute value of z plus 1 minus absolute value of z minus 1. OK, so that is pretty much as far as you can go in terms of simplifying it. And now the question is, are we comfortable with this answer? And it helps always to try to interpret it a little bit, to make sure that it makes intuitive sense. And one way to do that is to try to-- some of the limiting cases of what some of the parameters can be. So in this case, the parameters are P and lambda. So P is the parameter related to x. And lambda is the parameter related to y. So let's try to see if it makes sense under some limiting cases. The first one we want to think about is when P goes to 0. So if P goes to 0, what happens to our answer? Well, the numerator is 0, this is 0, this is 1. But it doesn't matter, because the numerator is 0. So in this case, this would go to 0. Now does that make sense? Well, what does that mean when P goes to 0? When P goes to 0, that means that the probability that x is equal to 1 is 0. So even without thinking about y or z, there is already a 0 probability that x is equal to 1. Now this whole calculation, what we found is, well, if I had some more information, like what z is, does that help me find out what the probability of x being 1 is? Well, no matter what z tells me, I know for a fact that x can't be 1, because P is 0. So this posterior, or this conditional probability, should also be 0, because there's just no way that x can be 1. So in this case, this formula does check out. Now let's think about another case where P goes to 1. If P goes to 1, that means that X is for sure going to be 1. And it can't be anything else. In which case, what does our formula tell us? Well, this numerator is 1. This term is 1. 1 minus 1 is 0. So the second term gets zeroed out, and the answer is just 1/1 is 1. So what does this tell us? This tells us that if I know beforehand that x is for sure equal to 1, then, if I now give myself more information and condition on what I observe for z, that shouldn't change anything else. I should still know for sure that x is equal to 1. So the probability of this conditional probability should still be equal to 1. And it does, so our formula also works in this case. Now let's think about lambda. What about when lambda goes to 0? Well, when lambda goes to 0, that's a little harder to visualize. 
But really, what would happen is that you can imagine this distribution getting shallower, shallower and shallower, lower and lower, so that it's like it is kind of flat and goes on forever. And so what this tells you is that, basically, y-- this is the distribution y-- so when lambda goes to 0, that tells you that y has a really flat and kind of short distribution. And so what does our formula tell us in this case? Well, when lambda goes to 0, this exponent is equal to 0. And so e to the 0 is 1. So we get P over P plus 1 minus P, which is just 1. So the answer here, our formula will give us an answer of P. So what does that tell us? That tells us that, in this case, if lambda goes to 0, then our posterior probability, the probability that x equals 1 conditioned on z being some value, conditioned on our continuous measurement, is still P. So the prior, or the original probability for x being equal to 1 is P. And with this additional continuous measurement, our guess of the probability that x equal to 1 is still P. So it hasn't changed. So basically, it's telling us that this additional information was not informative. It didn't actually help us change our beliefs. And so why is that? Well, one way to think about it is that, because the distribution of y looks like this, is very flat and it could be anything, then, if you observe some value of z, then it could be that that was due to the fact that it was x equal to 1 plus some value of y that made z equal to that value. Or it could have just as equally been likely that x equal to negative 1 y equals to some other value that made it equal to z. And so, essentially, it's z-- because y has a shape, it can be likely to take on any value that complements either x being equal to 1 or x equal being to negative 1, to make z equal to whatever the value it is that you observe. And so because of that, in this case, y is not very informative. And so this probability is still just equal to P. Now the last case is when lambda goes to infinity. And now we have to break it down into the two other cases now. The first case is when-- lets write this over here-- when lambda goes to infinity. The first case, it depends on what this value is, the sine of this value. If this value, the absolute value of z plus 1 minus the absolute value of z minus 1, if that's positive, then, because lambda goes to infinity and you have a negative sign, then this entire exponential term will go to 0. In which case, the second term goes to 0. And the answer is P/P, or is 1. And so if absolute value of z plus 1 minus absolute value of z minus 1 is greater than 0, then the answer is 1. But in the other case, if this term in the exponent, if it's actually negative, if it's negative, then this negative sign turns to a positive, and lambda goes to infinity. And so this term blows up, and it dominates everything else. And so the denominator goes to infinity. The numerator is fixed at P, so this entire expression would go to 0. OK, so now let's try to interpret this case. Let's start with the first one. When is it that absolute value of z plus 1 minus absolute value of z minus 1 is greater than 0? Or you can also rewrite this as absolute value of z plus 1 is greater than absolute value of z minus 1. Well, when is that case? Well, it turns out, if you think about it, this is only true if z is positive. 
If z is positive, then adding 1-- let me draw a line here, and if this is 0-- if z is positive, something here, adding 1 to it and taking the absolute value-- the absolute value doesn't do anything-- but you will get something bigger. Where subtracting 1 will take you closer to 0, and so because of that, the absolute value, the magnitude, or the distance from 0 will be less. Now if you're on the other side, adding 1 will take you-- if you're on the other side, adding 1 will take you closer to 0. And so this magnitude would be smaller. Whereas, subtracting will take you farther away, so the absolute value actually increased the magnitude. And so this is the same as z being positive. And so this is the same as z being negative. So what this tells you is that, if z is positive, then this probability is equal to 1. And if z is negative, this probability is equal to 0. Now why does that make sense? Well, it's because when lambda goes to infinity, you have the other case. Essentially, you pull this all the way up, really, really far, and it drops off really quickly. And so when you take the limit, as lambda goes to infinity, effectively, it just becomes a spike at 0. And so, more or less, you're sure that y is going to be equal to 0. And so, effectively, z is actually going to be equal to x, effectively. And because of that, because x can only be 1 or negative 1, then, depending on if you get a z that's positive, then you know for sure that it must have been that x was equal to 1. And if you get a z that's negative, you know for sure that it must have been that x was equal to negative 1. And so because of that, you get this interpretation. And so we've looked at four different cases of the parameters. And in all four cases, our answer seems to make sense. And so we feel more confident in the answer. And so to summarize, this whole problem involved using Bayes' rule. You start out with some distributions, and you apply Bayes' rule. And you go through the steps, and you plug-in the right terms. And then, in the end, it's always helpful to try to check your answers to make sure that it makes sense in some of the limiting.
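As a quick numerical check of the answer above, here is a minimal Python sketch (an illustration added to this write-up, not part of the original exercise). It evaluates the simplified posterior P(X = 1 | Z = z) = p / (p + (1 - p) exp(-lambda (|z + 1| - |z - 1|))) and confirms the limiting cases just discussed; the particular numbers plugged in are arbitrary.

import math

def posterior_x_equals_1(z, p, lam):
    # P(X = 1 | Z = z) for X in {+1, -1} with P(X = 1) = p, Z = X + Y,
    # and Y having the two-sided exponential density (lam/2) * exp(-lam * |y|).
    return p / (p + (1 - p) * math.exp(-lam * (abs(z + 1) - abs(z - 1))))

# Limiting cases from the discussion (illustrative parameter values):
print(posterior_x_equals_1(0.5, p=0.0, lam=1.0))    # p -> 0: posterior is 0
print(posterior_x_equals_1(0.5, p=1.0, lam=1.0))    # p -> 1: posterior is 1
print(posterior_x_equals_1(0.5, p=0.3, lam=1e-9))   # lam -> 0: posterior stays at p = 0.3
print(posterior_x_equals_1(0.5, p=0.3, lam=50.0))   # lam large, z > 0: posterior near 1
print(posterior_x_equals_1(-0.5, p=0.3, lam=50.0))  # lam large, z < 0: posterior near 0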
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Using_the_Central_Limit_Theorem.txt
Hi. In this video, we're going to do some approximate calculations using the central limit theorem. We're given that Xn is the number of gadgets produced on day n by a factory. And it has a normal distribution with mean 5 and variance 9. And they're all independent and identically distributed. We're looking for the probability that the total number of gadgets in 100 days is less than 440. To start, we can first write this as the probability of the sum of the gadgets produced on each of 100 days being less than 440. Notice that this is a sum of a large number of independent random variables. So we can use the central limit theorem and approximate the sum as a normal random variable. And then, basically, in order to compute this probability, we'd basically need to standardize this and then use the standard normal table. So let's first compute the expectation and variance of the sum. So I'm going to actually sum up from 1 to n instead of 100, to do it more generally. So the linearity is preserved for the expectation operator. So this is the sum of the expected value. And since they're all identically distributed, they all have the same expectation, and there are n of them. And so we have this being n times 5. For the variance of the sum is also the sum of the variances because the independents. And so they're identically distributed to the -- so we have n times the variance of Xi, and this is n times 9. So now, we can standardize it, or make it 0 mean and variance 1. So to do that we would take these Xi's, subtract by their mean. So it's going to be 5 times 100 of them, so it's 500 over the square root of the variance, which is going to be 9 times 100 of them, so it's going to be 900. So that's going to be less than 440 minus 500 over square root of 900. So notice what we're trying to do here is-- notice that the sum of Xi's is a discrete quantity. So it's a discrete random variable, so it may have a PMF like this. And we're trying to approximate it with a normal density. So this is not drawn to scale, but let's say that this is 440 and this is 439. Basically, we're trying to say what's the probability of this being less than 440, so it's the probability that it's 439, or 438, or 437. But in the continuous case, a good approximation to this would be to take the middle, say, 439.5, and compute the area below that. So in this case, when we do the normal approximation, it works out better if we use this half correction. And so, this, in this case, probability, let's call Z the standard normal. And so this is approximately equal to a standard normal with the probability of standard normal being less than whatever that is. And if you plug that into your calculator, you get negative 2.02. So now, if we try to figure out what this-- from the table, we'll find that negative values are not tabulated. But we know that the normal, the center of normal is symmetric, and so if we want to compute the area in this region, it's the same as the area in this region, above 2.02. So this is the same as the probability that Z is bigger than 2.02. That's just 1 minus the probability that Z is less than or equal to 2.02, and so that's, by definition, phi of 2.02. And if we look it up on the table, 2.02 has probability here of 0.9783. And we can just write that in. That's the answer for Part A. So now for Part B. We're asked what's the largest n, approximately, so that it satisfies this. So again, we can use the central limit theorem. 
Use similar steps here, so that we have, in this case, the probability that the sum of the Xi's is greater than or equal to 200 plus 5n. And standardize. So we have n terms, and the mean here-- this is where doing it for general n comes in handy-- is going to be 5n, and the variance is 9n. The 5n's cancel when you subtract, and then you get 200 over the square root of 9n. And we could, again, use the half correction here, but I'm not going to do it, to keep the problem simple. And so in this case, this is approximately equal to the probability of the standard normal being greater than or equal to 200 over the square root of 9n. And so same sort of thing here. This is just 1 minus this. The equal sign doesn't matter because Z is a continuous random variable. And so we have this here. And we want this to be less than or equal to 0.05. So that means that phi of 200 over the square root of 9n has to be greater than or equal to 0.95. So we're basically looking for something here that ensures that this region's at least 0.95. So if you look at the table, 0.95 lies somewhere in between 1.64 and 1.65. And I'm going to use 1.65 to be conservative, because we want this region to be at least 0.95. So 1.65 works better here. And so we want this thing, which is 200 over the square root of 9n, to be bigger than or equal to 1.65. So n here is going to be less than or equal to 200 over 1.65, squared, times 1 over 9. If you plug this into your calculator, you might have a decimal in there. Then we just pick the largest integer n that satisfies this. So if you plug that into your calculator, you'll find that it's going to be 1,632. That's Part B. Last part. Let N be the first day when the total number of gadgets produced exceeds 1,000. What's the probability that N is greater than or equal to 220? Again, we want to use the central limit theorem, but the trick here is to recognize that this event is actually the same as the sum from i equals 1 to 219 of Xi being less than or equal to 1,000. So let's look at both directions to check this. If N is greater than or equal to 220, then this has to be true. Because if it weren't true, and this sum were greater than 1,000, then N would have been less than or equal to 219. So this direction works. The other direction: if the sum through day 219 is at most 1,000, it has to be the case that N is greater than or equal to 220, because up through trial 219 the total hasn't exceeded 1,000. And so, only at some point beyond that does it exceed 1,000, so N is going to be greater than or equal to 220. So this is the key trick here. And once you see this, you realize that this is very easy, because we do the same steps as we did before. So you're looking for this, and again, you do your standardization. So the sum has 219 terms, and you get 5 times 219 for the mean and 9 times 219 for the variance, less than or equal to 1,000 minus 5 times 219, over the square root of 9 times 219. Again, you could do the half correction here and make it 1,000.5, but I'm not going to do that in this case, for simplicity. So this is approximately equal to the probability of Z being less than whatever this is. And if you plug it in, you'll find that this is negative 2.14. So in this case-- again, we do the same thing-- this is the probability that Z is greater than or equal to 2.14, which is 1 minus the probability that Z is less than or equal to 2.14, that is, 1 minus phi of 2.14. If you look up 2.14 on the table, phi of 2.14 is 0.9838, so the answer is 1 minus 0.9838, which is about 0.016.
So we're done with Part C as well. So in this exercise, we did a lot of approximate calculations using the central limit theorem.
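For readers who want to reproduce the three numbers, here is a short Python sketch (added as an illustration; it simply evaluates the approximations described above, with the standard normal CDF computed from the error function in place of the printed table).

from math import sqrt, floor, erf

def phi(x):
    # Standard normal CDF, used in place of the printed table.
    return 0.5 * (1 + erf(x / sqrt(2)))

mu, var = 5, 9                     # mean and variance of one day's production

# Part A: P(X1 + ... + X100 < 440), with the 0.5 continuity correction.
n = 100
print("Part A:", phi((439.5 - n * mu) / sqrt(n * var)))    # about 0.022, i.e. 1 - 0.9783

# Part B: largest n with P(X1 + ... + Xn >= 200 + 5n) <= 0.05,
# i.e. 200 / sqrt(9n) >= 1.65 (using 1.65, as in the video, to be conservative).
print("Part B:", floor((200 / 1.65) ** 2 / var))           # 1632

# Part C: P(N >= 220) = P(X1 + ... + X219 <= 1000).
n = 219
print("Part C:", phi((1000 - n * mu) / sqrt(n * var)))     # about 0.016, i.e. 1 - 0.9838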
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Markov_Chain_Practice_1.txt
Hi, everyone. Today, I'm going to talk about Markov Chain Practice number one. Before we start, let's first take a look at this Markov chain. This Markov chain has six states. In this problem, we always assume the process starts from state S0. On the first trial, the process can either make a transition from S0 to S1 with probability 1/3, or from S0 to S3 with probability 1/3, or from S0 to S5 with probability 1/3. If on the first trial the process makes the transition from S0 to S1 or from S0 to S5, it will be stuck in either S1 or S5 forever, because both of the states S1 and S5 have a self-transition probability of one. On the other hand, if on the first trial the process makes the transition from S0 to S3, it can then either transition to the left, or transition to the right, or make a self-transition back to state S3. If the process ever enters the left part of the chain, it will never be able to get to the right. On the other hand, if the process ever enters the right part of the chain, it will never be able to go to the left. For part A of the problem, we have to calculate the probability that the process enters S2 for the first time at the kth trial. First, notice that it would take at least two trials for the process to make its way from S0 to S2. Therefore, for k equal to 1, the probability of A1 is simply equal to 0. Then, for k equal to 2, 3, and on, the probability that the process enters S2 for the first time at the kth trial is equivalent to the probability that the process first makes a transition from S0 to S3, then stays in S3 for the next k minus 2 trials, and finally makes a transition from S3 to S2 on the kth trial. So let's write this out. For k equal to 2, 3, and on, the probability of Ak is equal to the probability that the process first makes the transition from S0 to S3 on the first trial, which is p03, times the probability that the process makes self-transitions for the next k minus 2 trials, which is p33 to the power of k minus 2, and finally makes a transition from S3 to S2 on the kth trial, which is p32. And this gives us 1/3 times 1/4 to the power of k minus 2 times 1/4, which is equal to 1/3 times 1/4 to the power of k minus 1. For part B of the problem, we have to calculate the probability that the process never enters S4. This event can happen in three ways. The first way is that the process makes a transition from S0 to S1 on the first trial and is stuck in S1 forever. The second way is that the process makes a transition from S0 to S5 on the first trial and is stuck at S5 forever. The third way is that the process makes a transition from S0 to S3 on the first trial and then makes a transition from S3 to S2 on the next state change, so that it will never be able to go to S4. Therefore, the probability of B is equal to the sum of the probabilities of these three events. So the probability of B is equal to the probability that the process makes a transition from S0 to S1 on the first trial, which is 1/3, plus the probability that the process makes a transition from S0 to S5 on the first trial, which is also 1/3, plus the probability that the process makes a transition from S0 to S3 on the first trial times the probability that the process then makes a transition from S3 to S2 on the next state change. So, a transition to S2, given that the process is already in state S3 and there's a state change. Let's take a look at this conditional probability.
The condition that the processes are already in state S3 and there's a state change imply two possible events, which are the transition from S3 to S2 and the transition from S3 to S4. Therefore, we can write this conditional probability as the conditional probability of transition from as S3 to S2, given that another event, S3 to S2 or S3 to S4 has happened. And this is simply equal to the proportion of p32 and p32 plus p34, which is equal to 1/4 over 1/4 plus 1/2, which is equal to 1/3. Therefore, the probability of B is equal to 1/3 plus 1/3 plus 1/3 times the 1/3 here, which is equal to 7/9. For part C of the problem, we have to calculate the probability that the process enter S2 and leaves S2 on the next trial. This probability can be written as the product of two probabilities-- the probability that the process enters S2 and the probability that it leaves S2 on the next trial, given it's already in S2. Let's first look at the probability that the process enters S2. Using a similar approach as part B, we know that the probability the process ever enters S2 is equal to the probability of the event that the process first makes a transition from S0 to S3 on the first trial and then makes a transition from S3 to S2 on the next state change. So the probability that the process enters S2 is equal to the probability that it first makes a transition from S0 to S3 on the first trial, which is P03, times the probability that it makes a transition to S2, given that it's already in S3 and there is a state change. We have already calculated this conditional probability in part B. Let's then look at the second probability term, the probability that the process leaves S2 on the next trial, given that it's already in S2. So given that the process is already in S2, it can take two transitions. In can either transition from S2 to S1 or make a self-transition from S2 back to S2. Therefore, this conditional probability that it leaves S2 on the next trial, given that it was already in S2 is simply equal to the transition probability from S2 to S1, which is P21. Therefore, this is equal to P03, which is 1/3, times 1/3 from the result from part B times P21, which is 1/2, and gives us 1/18. For part D of the problem, we have to calculate the probability that the process enters S1 for the first time on the third trial. So if you take a look at this Markov chain, you'll notice that the only way for this event to happen is when a process first makes a transition from S0 to S3 on the first trial and from S3 to S2 on the second trial and from S2 to S1 on the third trial. Therefore, the probability of D is equal to the probability of the event that the process makes a transition from S0 to S3 on the first trial and from S3 to S2 on the second trial and finally from S2 to S1 on the third trial. So this is equal to P03 times P32 times P21, which is equal to 1/3 times 1/4 times 1/2, which is equal to 1/24. For part E of the problem, we have to calculate the probability that the process is in S3 immediately after the nth trial. If you take a look at this Markov chain, you'll notice that if on the first trial, the process makes a transition from S0 to S1 or from S0 to S5, it will never be able to go to S3. On the other hand, if on the first trial, the process makes a transition from S0 to S3 and if it leaves S3 at some point, it will never be able to come back to S3. 
Therefore, in order for the process to be in S3 immediately after the nth trial, we need the process to first make the transition from S0 to S3 on the first trial and then stay in S3 for the next n minus 1 trials. Therefore, the probability of the event E is simply equal to the probability of this event, which is P03 times P33 to the power of n minus 1, which is equal to 1/3 times 1/4 to the power of n minus 1. And this concludes our practice on Markov chains today.
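As a sanity check on part B, here is a small Monte Carlo sketch in Python (added for illustration, not part of the original exercise). It uses only the transition probabilities quoted above; the outgoing probabilities of S4 are never needed, because the event "never enters S4" is already decided the moment the process reaches S1, S5, S2, or S4.

import random

def never_enters_s4():
    state = "S0"
    while True:
        if state == "S0":
            state = random.choice(["S1", "S3", "S5"])   # each with probability 1/3
        elif state == "S3":
            r = random.random()
            if r < 0.25:
                state = "S3"                            # p33 = 1/4
            elif r < 0.50:
                state = "S2"                            # p32 = 1/4
            else:
                state = "S4"                            # p34 = 1/2
        elif state in ("S1", "S5", "S2"):
            return True   # S1 and S5 are absorbing; from S2 the process can never reach S4
        else:             # state == "S4"
            return False

trials = 200_000
estimate = sum(never_enters_s4() for _ in range(trials)) / trials
print(estimate, "vs the exact answer", 7 / 9)           # the estimate should land near 0.778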
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Probabilty_Bounds.txt
In this problem, we're given a collection of 10 variables, x1 through x10, where each i, xi, is a uniform random variable between 0 and 1. So each i is uniform between 0 and 1, and all 10 variables are independent. And we'd like to develop a bound on the probability that some of the 10 variables, 1 to 10, being greater than 7 using different methods. So in part A we'll be using the Markov's inequality written here. That is, if we have a random variable, positive random variable x, the probability x is greater than a, where a is again some positive number, is bounded above by the expected value of x divided by a. And let's see how that works out in our situation. In our situation, we will call x the summation of i equal to 1 to 10xi, and therefore, E of x is simply 10 times E of x1, the individual ones, and this gives us 5. Here we use used the linearity of expectation such that the expectation of the sum of the random variable is simply the sum of the expectations. Now, we can invoke Markov's Inequality. It says x greater or equal to 7. This is less than E of x over 7, and this gives us 5 over 7. For part B, let's see if we can improve the bound we got in part A using the Chebyshev inequality, which takes into account the variance of random variable x. Again, to refresh you on this, the Chebyshev Inequality says the probability that x deviates from its mean E of x, by more than a is bound above by the variance of x divided by a squared. So we have to actually do some work to transform the probability we're interested in, which is x greater or equal to 7, into the form that's convenient to use using the Chebyshev Inequality. To do so, we'll rewrite this probability as the probability of x minus 5 greater or equal to 2 simply by moving 5 from the right to the left. The reason we chose 5 is because 5 is equal to the expected value of x from part A as we know before. And in fact, this quantity is also equal to the probability that x minus 5 less or equal to negative 2. To see why this is true, recall that x is simply the summation of the xi's, the 10 xi's, and each xi is a uniform random variable between 0 and 1. And therefore, each xi, the distribution of which is symmetric around its mean 1/2. So we can see that after we add up all the xi's, the resulting distribution x is also symmetric around its mean 5. And as a result, the probability of x minus 5 greater than 2 is now equal to the probability that x minus 5 less than negative 2. And knowing these two, we can then say they're both equal to 1/2 the probability x minus 5 absolute value greater or equal to 2, because this term right here is simply the sum of both terms here and here. At this point, we have transformed the probability of x greater or equal to 7 into the form right here, such that we can apply the Chebyshev's Inequality basically directly. And we'll write the probably here being less than or equal to 1/2 times, applying the Chebyshev Inequality, variance of x divided by 2 squared. Now, 2 is the same as a right here, and this gives us 1/8 times-- now, the variance of x, we know is 10 times the variance of a uniform random variable between 0 and 1, which is 1/12, and that gives us 5/48. Now, let's compare this with the number we got earlier using the Markov Inequality, which was 5/7. 
We see that 5/48 is much smaller, and this tells us that, at least for this example, using the Chebyshev Inequality combined with the information of the variance of x, we're able to get a stronger upper bound on the probability of the event that we're interested in. Now, in part B, we saw that by using the additional information of the variance combined with the Chebyshev Inequality, we can improve upon bound given by Markov's Inequality. Now, in part C, we'll use a somewhat more powerful approach in addition to the Chebyshev Inequality, the so-called central limit theorem. Let's see if we can even get a better bound. To remind you what a central limit theorem is, let's say we have a summation of i equal to 1 to some number n of independent and identically distributed random variables xi. Now, the central limit theorem says the following. We take the sum right here, and subtract out its means, which is E of the same summation, and further, we'll divide out, what we call normalize, by the standard deviation of the summation. In other words, the square root of the variance of the sum of xi. So if we perform this procedure right here, then as the number of terms in the sums going to infinity, here as in goes to infinity, we will actually see that this random variable will converge in distribution in some way that will eventually look like a standard normal random variable with means 0 and 1. And since we know how the distribution of a standard normal looks like, we can go to table and look up certain properties of the resulting distribution. So that is a plan to do. So right now, we have about 10 variables. It's not that many compared to a huge numbering, but again, if we believe it's a good approximation, we can get some information out of it by using the central limit theorem. So we are interesting knowing that probability summation of i equal to 1 to 10 x1 greater or equal to 7. We'll rewrite this as 1 minus the probability the summation i equal to 1 to 10, and xi less equal to 7. Now, we're going to apply the scaling to the summation right here. So this is equal to 1 minus the probability summation i equal to 1 to 10xi minus 5. Because we know from previous parts that 5 is the expected value of the sum right here, and divided by square root of 10/12. Again, earlier we know that 10/12 is the variance of the sum of xi's. And we'll do the same on the other side, writing it 7 minus 5 divided by square root of 10/12. Now, if we compute out the quantity right here, we know that this quantity is roughly 2.19, and by the central limit theorem, if we believe 10 is a large enough number, then this will be roughly equal to 1 minus the CDF of a standard normal evaluated at 2.19. And we could look up the number in the table, and this gives us number roughly, 0.014. Now let's do a quick summary of what this problem is about. We're asked to compute the probability of x greater or equal to 7, where x is the sum of 10 uniform random variables between 0 and 1, so we'll call it xi. We know that because each random variable has expectation 1/2, adding 10 of them up, gives us expectation of 5. So this is essentially asking, what is the chance that x is more than two away from its expectation? So if this is a real line, and 5 is here, maybe x has some distribution around 5, so the center what the expected value is at 5, we wonder how likely is it for us to see something greater than 7? Now, let's see where do we land on the probably spectrum from 0 to 1. 
Well, without using any information, we know the probability cannot be greater than 1, so a trivial upper bound for the probability right here will be 1. Well, for the first part we use Markov's Inequality and that gives us some number, which is roughly equal to 0.7. In fact, we got number 5/7, and this is from Markov's Inequality. Oh, it's better than 1, already telling us it cannot be between 0.7 and 1, but can we do better? Well, the part B, we see that all the way, using the additional information variance, we can get this number down to 5/48, which is roughly 0.1. Already, that's much better than 0.7. Can we even do better? And this is the Chebyshev, and it turns out we can indeed do better. Using the central limit theorem, we can squeeze this number all the way down to 0.014, almost a 10 times improvement over the previous number. This is from central limit theorem. As we can see, by using different bounding techniques, we can progressively improve the bound on the probability of x exceeding 7, and from this problem we learned that even with 10 variables, the truth is more like this, which says that the distribution of x concentrates very heavily around 5, and hence, the probability of x being greater or equal to 7 could be much smaller than one might expect.
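Here is a short Python sketch (an added illustration) that evaluates the Markov bound, the Chebyshev bound, and the central limit theorem approximation side by side, and also estimates the true probability by simulation, which makes the concentration around 5 easy to see.

import random
from math import sqrt, erf

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1 + erf(x / sqrt(2)))

mean, var = 5.0, 10 / 12    # E[X] and Var(X) for X = X1 + ... + X10, each Xi uniform on (0, 1)

markov    = mean / 7                         # part (a)
chebyshev = 0.5 * var / 2 ** 2               # part (b), using the symmetry of X around 5
clt       = 1 - phi((7 - mean) / sqrt(var))  # part (c)

trials = 1_000_000
simulated = sum(sum(random.random() for _ in range(10)) >= 7 for _ in range(trials)) / trials

print("Markov bound    :", markov)      # about 0.71
print("Chebyshev bound :", chebyshev)   # about 0.10
print("CLT approx.     :", clt)         # about 0.014
print("Simulation      :", simulated)   # the CLT number should be the closest of the three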
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
A_Coin_with_Random_Bias.txt
Hi. In this problem, we're going to be dealing with a variation of the usual coin-flipping problem. But in this case, the bias itself of the coin is going to be random. So you could think of it as, you don't even know what the probability of heads for the coin is. So as usual, we're still taking one coin and we're flipping it n times. But the difference here is that the bias is because it was random variable Q. And we're told that the expectation of this bias is some mu and that the variance of the bias is some sigma squared, which we're told is positive. And what we're going to be asked is find a bunch of different expectations, covariances, and variances. And we'll see that this problem gives us some good exercise in a few concepts, a lot of iterated expectations, which, again, tells you that when you take the expectation of a conditional expectation, it's just the expectation of the inner random variable. The covariance of two random variables is just the expectation of the product minus the product of the expectations. Law of total variance is the expectation of a variance, of a conditional variance plus the variance of a conditional expectation. And the last thing, of course, we're dealing with a bunch of Bernoulli random variables, coin flips. So as a reminder, for a Bernoulli random variable, if you know what the bias is, it's some known quantity p, then the expectation of the Bernoulii is just p, and the variance of the Bernoulli is p times 1 minus p. So let's get started. The problem tells us that we're going to define some random variables. So xi is going to be a Bernoulli random variable for the i coin flip. So xi is going to be 1 if the i coin flip was heads and 0 if it was tails. And one very important thing that the problem states is that conditional on Q, the random bias, so if we know what the random bias is, then all the coin flips are independent. And that's going to be important for us when we calculate all these values. OK, so the first thing that we need to calculate is the expectation of each of these individual Bernoulli random variables, xi. So how do we go about calculating what this is? Well, the problem gives us a int. It tells us to try using the law of iterated expectations. But in order to use it, you need to figure out what you need the condition on. What this y? What takes place in y? And in this case, a good candidate for what you condition on would be the bias, the Q that we're unsure about. So let's try doing that and see what we get. So we write out the law of iterated expectations with Q. So now hopefully, we can simplify it with this inter-conditional expectation is. Well, what is it really? It's saying, given what Q is, what is the expectation of this Bernoulli random interval xi? Well, we know that if we knew what the bias was, then the expectation is just the bias itself. But in this case, the bias is random. But remember a conditional expectation is still a random variable. And so in this case, this actually just simplifies into Q. So whatever the bias is, the expectation is just equal to the bias. And so that's what it tells us. And this part is easy because we're given that the expectation of q is mu. And then the problem also defines the random variable x. X is the total number of heads within the n tosses. Or you can think of it as a sum of all these individual xi Bernoulli random variables. And now, what can we do with this? Well we can remember that linearity of expectations allows us to split up this sum. 
Expectation of a sum, we can split up into a sum of expectations. So this is actually just the expectation of x1 plus dot dot dot all the way to the expectation of xn. All right. And now, remember that we're flipping the same coin. We don't know what the bias is, but for all the n flips, it's the same coin. And so each of these expectations of xi should be the same, no matter what i is. And each one of them is mu. We already calculated that earlier. And there are n of them, so the answer would be n times mu. So let's move on to part B. Part B now asks us to find what the covariance is between xi and xj. And we have to be a little bit careful here, because there are two different scenarios, one where i and j are different indices, different tosses, and another where i and j are the same. So we have to consider both of these cases separately. Let's first do the case where i and j are different. So i does not equal j. In this case, we can just apply the formula that we talked about in the beginning. So this covariance is just equal to the expectation of xi times xj, minus the expectation of xi times the expectation of xj. All right, so we actually know what these last two are, right? The expectation of xi is mu. The expectation of xj is also mu. So this part is just mu squared. But we need to figure out what this expectation of xi times xj is. Well, for the expectation of xi times xj, we can again use the law of iterated expectations. So let's try conditioning on Q again. And remember, we said that this second part is just mu squared. All right, well, how can we simplify this inner conditional expectation? Well, we can use the fact that the problem tells us that, conditioned on Q, the tosses are independent. So that means that, conditioned on Q, xi and xj are independent. And remember, when random variables are independent, the expectation of the product can be simplified to be the product of the expectations. And because we're in the conditional world, given Q, you have to remember that it's going to be a product of two conditional expectations. So this will be the expectation of xi given Q times the expectation of xj given Q, minus mu squared still. All right, now what is this? Well, the expectation of xi given Q, we already argued earlier, should just be Q. And then the same thing for xj. That should also be Q. So this is just the expectation of Q squared, minus mu squared. All right, now if we look at this, what is the expectation of Q squared minus mu squared? Well, remember, we're told that mu is the expectation of Q. So what we have is the expectation of Q squared minus the quantity expectation of Q, squared. And what is that, exactly? That is just the formula, or the definition, of what the variance of Q should be. So this is, in fact, exactly equal to the variance of Q, which we're told is sigma squared. All right, so what we found is that for i not equal to j, the covariance of xi and xj is exactly equal to sigma squared. And remember, we're told that sigma squared is positive. So what does that tell us? That tells us that xi and xj, for i not equal to j, are correlated. And so, because they're correlated, they can't be independent. Remember, if two random variables are independent, that means they're uncorrelated. But the converse isn't true. And if we do know that two random variables are correlated, that means that they can't be independent. And now let's finish this by considering the second case. The second case is when i actually does equal j.
And in that case, well, the covariance of xi and xi is just another way of writing the variance of xi. So covariance, xi, xi, it's just the variance of xi. And what is that? That is just the expectation of xi squared minus expectation of xi quantity squared. And again, we know what the second term is. The second term is expectation of xi quantity squared. Expectation of xi we know from part A is just mu, right? So that's just second term is just mu squared. But what is the expectation of xi squared? Well, we can think about this a little bit more. And you can realize that xi squared is actually exactly the same thing as just xi. And this is just a special case because xi is a Bernoulli random variable. Because Bernoulli is either 0 or 1. And if it's 0 and you square it, it's still 0. And if it's 1 and you square it, it's still 1. So squaring it doesn't really doesn't actually change anything. It's exactly the same thing as the original random variable. And so, because this is a Bernoulli random variable, this is exactly just the expectation of xi. And we said this part is just mu squared. So this is just expectation of xi, which we said was mu. So the answer is just mu minus mu squared. OK, so this completes part B. And the answer that we wanted was that in fact, xi and xj are in fact not independent. Right. So let's write down some facts that we'll want to remember. One of them is that expectation of xi is mu. And we also want to remember what this covariance is. The covariance of xi and xj is equal to sigma squared when i does not equal j. So we'll be using these facts again later. And the variance of xi is equal to mu minus mu squared. So now let's move on to the last part, part C, which asks us to calculate the variance of x in two different ways. So the first way we'll do it is using the law of total variance. So the law of total variance will tell us that we can write the variance of x as a sum of two different parts. So the first is variance of x expectation of the variance of x conditioned on something plus the variance of the initial expectation of x conditioned on something. And as you might have guessed, what we're going to condition on is Q. Let's calculate what these two things are. So let's do the two terms separately. What is the expectation of the conditional variance of x given Q? Well, what is-- this, we can write out x. Because x, remember, is just the sum of a bunch of these Bernoulli random variables. And now what we'll do was, well, again, use the important fact that the x's, we're told, are conditionally independent, conditional on Q. And because they're independent, remember the variance of a sum is not the sum of the variance. It's only the sum of the variance if the terms in the sum are independent. In this case, they are conditionally independent given Q. So we can in fact split this up and write it as the variance of x1 given Q plus all the way to the variance of xn given Q. And in fact, all these are the same, right? So we just have n copies of the variance of, say, x1 given Q. Now, what is the variance of x1 given Q? Well, x1 is just a Bernoulli random variable. But the difference is that for x, we don't know what the bias or what the Q is. Because it's some random bias Q But just like we said earlier in part A, when we talked about the expectation of x1 given Q, this is actually just Q times 1 minus Q. Because if you knew what the bias were, it would be p times 1 minus p. So the bias times 1 minus the bias. But you don't know what it is. 
But if you did, it would just be Q. So what we do is we just plug in Q, and you get Q times 1 minus Q. All right, and now this is the expectation of n times Q times 1 minus Q. I can pull out the n. So it's n times the expectation of Q minus Q squared. We can use linearity of expectations again: the expectation of Q is mu. And the expectation of Q squared, well, we can do that on the side. The expectation of Q squared is the variance of Q plus the expectation of Q, quantity squared. So that's just sigma squared plus mu squared. And so this first term is just going to be n times, mu minus sigma squared minus mu squared. All right, so that's the first term. Now let's do the second term, the variance of the conditional expectation of x given Q. And again, what we can do is write x as the sum of all these xi's and apply linearity of expectations. So we would get n times one of these conditional expectations. And remember, we said earlier the expectation of x1 given Q is just Q. So it's the variance of n times Q. And remember now, n is not random. It's just some number. So when you pull it out of a variance, you square it. So this is n squared times the variance of Q. And the variance of Q, we're given, is sigma squared. So this is n squared times sigma squared. So the final answer is just a combination of these two terms, this one and this one. So let's write it out. The variance of x, then, is equal to-- we can combine terms a little bit. So for the first ones, let's take the mus and put them together. So it's n times, mu minus mu squared. And then we have n squared times sigma squared from this term, and minus n times sigma squared from this term. So it would be n squared minus n, times sigma squared, or n times n minus 1, times sigma squared. So that is the final answer that we get for the variance of x. And now, let's try doing it another way. So that's one way of doing it. That's using the law of total variance and conditioning on Q. Another way of finding the variance of x is to use the formula involving covariances, right? And we can use that because x is actually a sum of multiple random variables, x1 through xn. And the formula for this is, you have n variance terms, plus all these other ones. Where i is not equal to j, you have the covariance terms. And really, you can think of it as a double sum over all pairs of xi and xj, where, if i and j happen to be the same, the term simplifies to be just the variance. Now, we pulled these n terms out because they are different from the others-- they have a different value. And fortunately, we've already calculated what these values are in part B, so we can just plug them in. All the variances are the same, and there are n of them, so we get n times the variance of each one. The variance of each one, we calculated already, was mu minus mu squared. And then, we have all the terms where i is not equal to j. Well, there are actually n squared minus n of them, because you can take any one of the n random variables to be i and any one of the n to be j. So that gives you n squared pairs. But then you have to subtract out all the ones where i and j are the same, and there are n of them. So that leaves you with n squared minus n of these pairs where i is not equal to j. And the covariance for this case where i is not equal to j, we also calculated in part B. That's just sigma squared. All right, and now if we compare these two, we'll see that they are exactly the same.
So we've used two different methods to calculate the variance, one using this summation of covariances and one using the law of total variance. So what do we learn from this problem? Well, we saw, first of all, that in order to find some expectations, it's very useful to use the law of iterated expectations. But the trick is to figure out what you should condition on, and that's kind of an art that you learn through more practice. One good rule of thumb is for when you have a hierarchy, or layers, of randomness, where one layer of randomness depends on the randomness of the layer above. So in this case, whether or not you get heads or tails is random, but it depends on the randomness in the level above, which is the random bias of the coin itself. So the rule of thumb is, when you want to calculate expectations for the layer where you're talking about heads or tails, it's useful to condition on the layer above, which is, in this case, the random bias. Because once you condition on the layer above, that makes the next level much simpler: you essentially assume that you know what all the previous levels of randomness are, and that helps you calculate what the expectation is for the current level. And the rest of the problem was just going through exercises of actually applying the law of iterated expectations and the law of total variance.
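To make the two formulas concrete, here is a small Monte Carlo sketch in Python (added for illustration). The problem never specifies the distribution of Q, only its mean and variance, so the sketch assumes, purely as an example, that Q is uniform on (0, 1), which gives mu = 1/2 and sigma squared = 1/12.

import random

n = 20
mu, sigma2 = 0.5, 1 / 12          # assumed: Q uniform on (0, 1), for illustration only
trials = 200_000

samples = []
for _ in range(trials):
    q = random.random()                               # draw the random bias Q
    x = sum(random.random() < q for _ in range(n))    # n coin flips, each heads with probability q
    samples.append(x)

mean_x = sum(samples) / trials
var_x = sum((xv - mean_x) ** 2 for xv in samples) / trials

print("E[X]  :", mean_x, "vs n*mu =", n * mu)
print("Var(X):", var_x, "vs n*(mu - mu^2) + n*(n-1)*sigma^2 =",
      n * (mu - mu ** 2) + n * (n - 1) * sigma2)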
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
4_Counting.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. So today's lecture will be on the subject of counting. So counting, I guess, is a pretty simple affair conceptually, but it's a topic that can also get to be pretty tricky. The reason we're going to talk about counting is that there's a lot of probability problems whose solution actually reduces to successfully counting the cardinalities of various sets. So we're going to see the basic, simplest methods that one can use to count systematically in various situations. So in contrast to previous lectures, we're not going to introduce any significant new concepts of a probabilistic nature. We're just going to use the probability tools that we already know. And we're going to apply them in situations where there's also some counting involved. Now, today we're going to just touch the surface of this subject. There's a whole field of mathematics called combinatorics who are people who actually spend their whole lives counting more and more complicated sets. We were not going to get anywhere close to the full complexity of the field, but we'll get just enough tools that allow us to address problems of the type that one encounters in most common situations. So the basic idea, the basic principle is something that we've already discussed. So counting methods apply in situations where we have probabilistic experiments with a finite number of outcomes and where every outcome-- every possible outcome-- has the same probability of occurring. So we have our sample space, omega, and it's got a bunch of discrete points in there. And the cardinality of the set omega is some capital N. So, in particular, we assume that the sample points are equally likely, which means that every element of the sample space has the same probability equal to 1 over N. And then we are interested in a subset of the sample space, call it A. And that subset consists of a number of elements. Let the cardinality of that subset be equal to little n. And then to find the probability of that set, all we need to do is to add the probabilities of the individual elements. There's little n elements, and each one has probability one over capital N. And that's the answer. So this means that to solve problems in this context, all that we need to be able to do is to figure out the number capital N and to figure out the number little n. Now, if somebody gives you a set by just giving you a list and gives you another set, again, giving you a list, it's easy to count there element. You just count how much there is on the list. But sometimes the sets are described in some more implicit way, and we may have to do a little bit more work. There's various tricks that are involved in counting properly. And the most common one is to-- when you consider a set of possible outcomes, to describe the construction of those possible outcomes through a sequential process. So think of a probabilistic experiment that involves a number of stages, and in each one of the stages there's a number of possible choices that there may be. The overall experiment consists of carrying out all the stages to the end. And the number of points in the sample space is how many final outcomes there can be in this multi-stage experiment. 
So in this picture we have an experiment in which of the first stage we have four choices. In the second stage, no matter what happened in the first stage, the way this is drawn we have three choices. No matter whether we ended up here, there, or there, we have three choices in the second stage. And then there's a third stage and at least in this picture, no matter what happened in the first two stages, in the third stage we're going to have two possible choices. So how many leaves are there at the end of this tree? That's simple. It's just the product of these three numbers. The number of possible leaves that we have out there is 4 times 3 times 2. Number of choices at each stage gets multiplied, and that gives us the number of overall choices. So this is the general rule, the general trick that we are going to use over and over. So let's apply it to some very simple problems as a warm up. How many license plates can you make if you're allowed to use three letters and then followed by four digits? At least if you're dealing with the English alphabet, you have 26 choices for the first letter. Then you have 26 choices for the second letter. And then 26 choices for the third letter. And then we start the digits. We have 10 choices for the first digit, 10 choices for the second digit, 10 choices for the third, 10 choices for the last one. Let's make it a little more complicated, suppose that we're interested in license plates where no letter can be repeated and no digit can be repeated. So you have to use different letters, different digits. How many license plates can you make? OK, let's choose the first letter, and we have 26 choices. Now, I'm ready to choose my second letter, how many choices do I have? I have 25, because I already used one letter. I have the 25 remaining letters to choose from. For the next letter, how many choices? Well, I used up two of my letters, so I only have 24 available. And then we start with the digits, 10 choices for the first digit, 9 choices for the second, 8 for the third, 7 for the last one. All right. So, now, let's bring some symbols in a related problem. You are given a set that consists of n elements and you're supposed to take those n elements and put them in a sequence. That is to order them. Any possible ordering of those elements is called a permutation. So for example, if we have the set 1, 2, 3, 4, a possible permutation is the list 2, 3, 4, 1. That's one possible permutation. And there's lots of possible permutations, of course, the question is how many are there. OK, let's think about building this permutation by choosing one at a time. Which of these elements goes into each one of these slots? How many choices for the number that goes into the first slot or the elements? Well, we can choose any one of the available elements, so we have n choices. Let's say this element goes here, having used up that element, we're left with n minus 1 elements and we can pick any one of these and bring it into the second slot. So here we have n choices, here we're going to have n minus 1 choices, then how many we put there will have n minus 2 choices. And you go down until the end. What happens at this point when you are to pick the last element? Well, you've used n minus of them, there's only one left in your bag. You're forced to use that one. So the last stage, you're going to have only one choice. So, basically, the number of possible permutations is the product of all integers from n down to one, or from one up to n. 
And there's a symbol that we use for this number, it's called n factorial. So n factorial is the number of permutations of n objects. The number of ways that you can order n objects that are given to you. Now, a different equation. We have n elements. Let's say the elements are 1, 1,2, up to n. And it's a set. And we want to create a subset. How many possible subsets are there? So speaking of subsets means looking at each one of the elements and deciding whether you're going to put it in to subsets or not. For example, I could choose to put 1 in, but 2 I'm not putting it in, 3 I'm not putting it in, 4 I'm putting it, and so on. So that's how you create a subset. You look at each one of the elements and you say, OK, I'm going to put it in the subset, or I'm not going to put it. So think of these as consisting of stages. At each stage you look at one element, and you make a binary decision. Do I put it in the subset, or not? So therefore, how many subsets are there? Well, I have two choices for the first element. Am I going to put in the subset, or not? I have two choices for the next element, and so on. For each one of the elements, we have two choices. So the overall number of choices is 2 to the power n. So, conclusion-- the number of subsets, often n element set, is 2 to the n. So in particular, if we take n equal to 1, let's check that our answer makes sense. If we have n equal to one, how many subsets does it have? So we're dealing with a set of just one. What are the subsets? One subset is this one. Do we have other subsets of the one element set? Yes, we have the empty set. That's the second one. These are the two possible subsets of this particular set. So 2 subsets when n is equal to 1, that checks the answer. All right. OK, so having gone so far, we can do our first example now. So we are given a die and we're going to roll it 6 times. OK, let's make some assumptions about the rolls. Let's assume that the rolls are independent, and that the die is also fair. So this means that the probability of any particular outcome of the die rolls-- for example, so we have 6 rolls, one particular outcome could be 3,3,1,6,5. So that's one possible outcome. What's the probability of this outcome? There's probability 1/6 that this happens, 1/6 that this happens, 1/6 that this happens, and so on. So the probability that the outcome is this is 1/6 to the sixth. What did I use to come up with this answer? I used independence, so I multiplied the probability of the first roll gives me a 2, times the probability that the second roll gives me a 3, and so on. And then I used the assumption that the die is fair, so that the probability of 2 is 1/6, the probably of 3 is 1/6, and so on. So if I were to spell it out, it's the probability that we get the 2 in the first roll, times the probability of 3 in the second roll, times the probability of the 5 in the last roll. So by independence, I can multiply probabilities. And because the die is fair, each one of these numbers is 1/6 to the sixth. And so the same calculation would apply no matter what numbers I would put in here. So all possible outcomes are equally likely. Let's start with this. So since all possible outcomes are equally likely to find an answer to a probability question, if we're dealing with some particular event, so the event is that all rolls give different numbers. That's our event A. And our sample space is some set capital omega. 
We know that the answer is going to be the cardinality of the set A, divided by the cardinality of the set omega. So let's deal with the easy one first. How many elements are there in the sample space? How many possible outcomes are there when you roll a dice 6 times? You have 6 choices for the first roll. You have 6 choices for the second roll and so on. So the overall number of outcomes is going to be 6 to the sixth. So number of elements in the sample space is 6 to the sixth power. And I guess this checks with this. We have 6 to the sixth outcomes, each one has this much probability, so the overall probability is equal to one. Right? So the probability of an individual outcome is one over how many possible outcomes we have, which is this. All right. So how about the numerator? We are interested in outcomes in which the numbers that we get are all different. So what is an outcome in which the numbers are all different? So the die has 6 faces. We roll it 6 times. We're going to get 6 different numbers. This means that we're going to exhaust all the possible numbers, but they can appear in any possible sequence. So an outcome that makes this event happen is a list of the numbers from 1 to 6, but arranged in some arbitrary order. So the possible outcomes that make event A happen are just the permutations of the numbers from 1 to 6. One possible outcome that makes our events to happen-- it would be this. Here we have 6 possible numbers, but any other list of this kind in which none of the numbers is repeated would also do. So number of outcomes that make the event happen is the number of permutations of 6 elements. So it's 6 factorial. And so the final answer is going to be 6 factorial divided by 6 to the sixth. All right, so that's a typical way that's one solves problems of this kind. We know how to count certain things. For example, here we knew how to count permutations, and we used our knowledge to count the elements of the set that we need to deal with. So now let's get to a slightly more difficult problem. We're given once more a set with n elements. We already know how many subsets that set has, but now we would be interested in subsets that have exactly k elements in them. So we start with our big set that has n elements, and we want to construct a subset that has k elements. Out of those n I'm going to choose k and put them in there. In how many ways can I do this? More concrete way of thinking about this problem-- you have n people in some group and you want to form a committee by picking people from that group, and you want to form a committee with k people. Where k is a given number. For example, a 5 person committee. How many 5 person committees are possible if you're starting with 100 people? So that's what we want to count. How many k element subsets are there? We don't yet know the answer, but let's give a name to it. And the name is going to be this particular symbol, which we read as n choose k. Out of n elements, we want to choose k of them. OK. That may be a little tricky. So what we're going to do is to instead figure out a somewhat easier problem, which is going to be-- in how many ways can I pick k out of these people and puts them in a particular order? So how many possible ordered lists can I make that consist of k people? By ordered, I mean that we take those k people and we say this is the first person in the community. That's the second person in the committee. That's the third person in the committee and so on. So in how many ways can we do this? 
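Before continuing with the ordered-list count, here is a brute-force check of the all-different-rolls answer, 6 factorial over 6 to the sixth (an editor's sketch, not from the lecture).

```python
import itertools
import math
from fractions import Fraction

exact = Fraction(math.factorial(6), 6**6)

# Enumerate every one of the 6**6 equally likely outcomes of 6 rolls
# and count those in which all six values are different.
favorable = sum(1 for rolls in itertools.product(range(1, 7), repeat=6)
                if len(set(rolls)) == 6)

assert Fraction(favorable, 6**6) == exact
print(exact, float(exact))  # 5/324, about 0.015
```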
Out of these n, we want to choose just k of them and put them in slots. One after the other. So this is pretty much like the license plate problem we solved just a little earlier. So we have n choices for who we put as the top person in the community. We can pick anyone and have them be the first person. Then I'm going to choose the second person in the committee. I've used up 1 person. So I'm going to have n minus 1 choices here. And now, at this stage I've used up 2 people, so I have n minus 2 choices here. And this keeps going on. Well, what is going to be the last number? Is it's n minus k? Well, not really. I'm starting subtracting numbers after the second one, so by the end I will have subtracted k minus 1. So that's how many choices I will have for the last person. So this is the number of ways-- the product of these numbers there gives me the number of ways that I can create ordered lists consisting of k people out of the n that we started with. Now, you can do a little bit of algebra and check that this expression here is the same as that expression. Why is this? This factorial has all the products from 1 up to n. This factorial has all the products from 1 up to n minus k. So you get cancellations. And what's left is all the products starting from the next number after here, which is this particular number. So the number of possible ways of creating such ordered lists is n factorial divided by n minus k factorial. Now, a different way that I could make an ordered list-- instead of picking the people one at a time, I could first choose my k people who are going to be in the committee, and then put them in order. And tell them out of these k, you are the first, you are the second, you are the third. Starting with this k people, in how many ways can I order them? That's the number of permutations. Starting with a set with k objects, in how many ways can I put them in a specific order? How many specific orders are there? That's basically the question. In how many ways can I permute these k people and arrange them. So the number of ways that you can do this step is k factorial. So in how many ways can I start with a set with n elements, go through this process, and end up with a sorted list with k elements? By the rule that-- when we have stages, the total number of stages is how many choices we had in the first stage, times how many choices we had in the second stage. The number of ways that this process can happen is this times that. This is a different way that that process could happen. And the number of possible of ways is this number. No matter which way we carry out that process, in the end we have the possible ways of arranging k people out of the n that we started with. So the final answer that we get when we count should be either this, or this times that. Both are equally valid ways of counting, so both should give us the same answer. So we get this equality here. So these two expressions corresponds to two different ways of constructing ordered lists of k people starting with n people initially. And now that we have this relation, we can send the k factorial to the denominator. And that tells us what that number, n choose k, is going to be. So this formula-- it's written here in red, because you're going to see it a zillion times until the end of the semester-- they are called the binomial coefficients. And they tell us the number of possible ways that we can create a k element subset, starting with a set that has n elements. 
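The binomial coefficient formula derived here is easy to evaluate directly; the sketch below (an editor's addition, not part of the lecture) compares the n!/(k!(n-k)!) expression with Python's built-in math.comb.

```python
import math

def n_choose_k(n, k):
    # Number of k-element subsets of an n-element set.
    return math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

# Example: 5-person committees chosen out of 100 people.
assert n_choose_k(100, 5) == math.comb(100, 5)
print(n_choose_k(100, 5))
```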
It's always good to do a sanity check to formulas by considering extreme cases. So let's take the case where k is equal to n. What's the right answer in this case? How many n elements subsets are there out of an element set? Well, your subset needs to include every one. You don't have any choices. There's only one choice. It's the set itself. So the answer should be equal to 1. That's the number of n element subsets, starting with a set with n elements. Let's see if the formula gives us the right answer. We have n factorial divided by k, which is n in our case-- n factorial. And then n minus k is 0 factorial. So if our formula is correct, we should have this equality. And what's the way to make that correct? Well, it depends what kind of meaning do we give to this symbol? How do we define zero factorial? I guess in some ways it's arbitrary. We're going to define it in a way that makes this formula right. So the definition that we will be using is that whenever you have 0 factorial, it's going to stand for the number 1. So let's check that this is also correct, at the other extreme case. If we let k equal to 0, what does the formula give us? It gives us, again, n factorial divided by 0 factorial times n factorial. According to our convention, this again is equal to 1. So there is one subset of our set that we started with that has zero elements. Which subset is it? It's the empty set. So the empty set is the single subset of the set that we started with that happens to have exactly zero elements. So the formula checks in this extreme case as well. So we're comfortable using it. Now these factorials and these coefficients are really messy algebraic objects. There's lots of beautiful identities that they satisfy, which you can prove algebraically sometimes by using induction and having cancellations happen all over the place. But it's really messy. Sometimes you can bypass those calculations by being clever and using your understanding of what these coefficients stand for. So here's a typical example. What is the sum of those binomial coefficients? I fix n, and sum over all possible cases. So if you're an algebra genius, you're going to take this expression here, plug it in here, and then start doing algebra furiously. And half an hour later, you may get the right answer. But now let's try to be clever. What does this really do? What does that formula count? We're considering k element subsets. That's this number. And we're considering the number of k element subsets for different choices of k. The first term in this sum counts how many 0-element subsets we have. The next term in this sum counts how many 1-element subsets we have. The next term counts how many 2-element subsets we have. So in the end, what have we counted? We've counted the total number of subsets. We've considered all possible cardinalities. We've counted the number of subsets of size k. We've considered all possible sizes k. The overall count is going to be the total number of subsets. And we know what this is. A couple of slides ago, we discussed that this number is equal to 2 to the n. So, nice, clean and simple answer, which is easy to guess once you give an interpretation to the algebraic expression that you have in front of you. All right. So let's move again to sort of an example in which those binomial coefficients are going to show up. So here's the setting-- n independent coin tosses, and each coin toss has a probability, P, of resulting in heads. So this is our probabilistic experiment. Suppose we do 6 tosses. 
What's the probability that we get this particular sequence of outcomes? Because of independence, we can multiply probabilities. So it's going to be the probability that the first toss results in heads, times the probability that the second toss results in tails, times the probability that the third one results in tails, times probability of heads, times probability of heads, times probability of heads, which is just P to the fourth times (1 minus P) squared. So that's the probability of this particular sequence. How about a different sequence? If I had 4 heads and 2 tails, but in a different order-- let's say if we considered this particular outcome-- would the answer be different? We would still have P, times P, times P, times P, times (1 minus P), times (1 minus P). We would get, again, the same answer. So what you observe from just this example is that, more generally, the probability of obtaining a particular sequence of heads and tails is P to a power equal to the number of heads. So here we had 4 heads. So there's P to the fourth showing up. And then (1 minus P) to the power of the number of tails. So every k-head sequence-- every outcome in which we have exactly k heads-- has the same probability, which is going to be P to the k, times (1 minus P) to the (n minus k). This is the probability of any particular sequence that has exactly k heads. So that's the probability of a particular sequence with k heads. So now let's ask the question, what is the probability that my experiment results in exactly k heads, but in some arbitrary order? So the heads could show up anywhere. So there's a number of different ways that this can happen. What's the overall probability that this event takes place? So the probability of an event taking place is the sum of the probabilities of all the individual ways that the event can occur. So it's the sum of the probabilities of all the outcomes that make the event happen. The different ways that we can obtain k heads are the different sequences that contain exactly k heads. We just figured out that any sequence with exactly k heads has this probability. So to do this summation, we just need to take the common probability of each individual k-head sequence, times how many terms we have in this sum. So what we're left to do now is to figure out how many k-head sequences there are. How many outcomes are there in which we have exactly k heads? OK. So what are the ways that I can describe to you a sequence with k heads? I can take my n slots that correspond to the different tosses. I'm interested in particular sequences that have exactly k heads. So what I need to do is to choose k slots and assign heads to them. So to specify a sequence that has exactly k heads is the same thing as drawing this picture and telling you which are the k slots that happen to have heads. So I need to choose, out of those n slots, k of them, and assign them heads. In how many ways can I choose these k slots? Well, it's the question of starting with a set of n slots and choosing k slots out of the n available. So the number of k-head sequences is the same as the number of k-element subsets of the set of slots that we started with, which are the n slots 1 up to n. We know what that number is. We counted, before, the number of k-element subsets, starting with a set with n elements. And we gave a symbol to that number, which is that thing, n choose k. So this is the final answer that we obtain. So these are the so-called binomial probabilities.
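A minimal sketch of the binomial probability formula (an editor's addition, not from the lecture; the values of n and p below are placeholders for illustration).

```python
import math

def binomial_pmf(k, n, p):
    # P(exactly k heads in n independent tosses with heads probability p)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 6, 0.3  # illustrative values only
probs = [binomial_pmf(k, n, p) for k in range(n + 1)]
print(probs)
```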
And they give us the probabilities for different numbers of heads for a coin with heads probability P that's being tossed a number of times. This formula is correct, of course, for reasonable values of k, meaning it's correct for k equals 0, 1, up to n. If k is bigger than n, what's the probability of k heads? If k is bigger than n, there's no way to obtain k heads, so that probability is, of course, zero. So these probabilities only make sense for the numbers k that are possible, given that we have n tosses. And now a question similar to the one we had in the previous slide. If I write down this summation-- even worse algebra than the one in the previous slide-- what do you think this number will turn out to be? It should be 1, because this is the probability of obtaining k heads. When we do the summation, what we're doing is we're considering the probability of 0 heads, plus the probability of 1 head, plus the probability of 2 heads, plus the probability of n heads. We've exhausted all the possibilities in our experiment. So the overall probability, when you exhaust all possibilities, must be equal to 1. So that's yet another beautiful formula that evaluates into something really simple. And if you tried to prove this identity algebraically, of course, you would have to suffer quite a bit. So now, armed with the binomial probabilities, we can do harder problems. So let's take the same experiment again. We flip a coin independently 10 times. So these 10 tosses are independent. We flip it 10 times. We don't see the result, but somebody comes and tells us, you know, there were exactly 3 heads in the 10 tosses that you had. OK? So a certain event happened. And now you're asked to find the probability of another event, which is that the first 2 tosses were heads. Let's call that event A. OK. So are we in the setting of discrete uniform probability laws? When we toss a coin multiple times, is it the case that all outcomes are equally likely? All sequences are equally likely? That's the case if you have a fair coin-- that all sequences are equally likely. But if your coin is not fair, of course, heads/heads is going to have a different probability than tails/tails. If your coin is biased towards heads, then heads/heads is going to be more likely. So we're not quite in the uniform setting. Our overall sample space, omega, does not have equally likely elements. Do we care about that? Not necessarily. All the action now happens inside the event B that we are told has occurred. So we have our big sample space, omega. Elements of that sample space are not equally likely. We are told that a certain event B occurred. And inside that event B, we're asked to find the conditional probability that A has also occurred. Now here's the lucky thing: inside the event B, all outcomes are equally likely. The outcomes inside B are the sequences of 10 tosses that have exactly 3 heads. Every 3-head sequence has the same probability. So the elements of B are equally likely with each other. Once we condition on the event B having occurred, what happens to the probabilities of the different outcomes inside here? Well, conditional probability laws keep the same proportions as the unconditional ones. The elements of B were equally likely when we started, so they're equally likely once we are told that B has occurred. So to deal with this problem, we just need to transport ourselves to this smaller universe and think about what's happening in that little universe. In that little universe, all elements of B are equally likely.
So to find the probability of some subset of that set, we only need to count the cardinality of B, and count the cardinality of A. So let's do that. Number of outcomes in B-- in how many ways can we get 3 heads out of 10 tosses? That's the number we considered before, and it's 10 choose 3. This is the number of 3-head sequences when you have 10 tosses. Now let's look at the event A. The event A is that the first 2 tosses where heads, but we're living now inside this universe B. Given that B occurred, how many elements does A have in there? In how many ways can A happen inside the B universe. If you're told that the first 2 were heads-- sorry. So out of the outcomes in B that have 3 heads, how many start with heads/heads? Well, if it starts with heads/heads, then the only uncertainty is the location of the third head. So we started with heads/heads, we're going to have three heads, the question is, where is that third head going to be. It has eight possibilities. So slot 1 is heads, slot 2 is heads, the third heads can be anywhere else. So there's 8 possibilities for where the third head is going to be. OK. So what we have counted here is really the cardinality of A intersection B, which is out of the elements in B, how many of them make A happen, divided by the cardinality of B. And that gives us the answer, which is going to be 10 choose 3, divided by 8. And I should probably redraw a little bit of the picture that they have here. The set A is not necessarily contained in B. It could also have stuff outside B. So the event that the first 2 tosses are heads can happen with a total of 3 heads, but it can also happen with a different total number of heads. But once we are transported inside the set B, what we need to count is just this part of A. It's A intersection B and compare it with the total number of elements in the set B. Did I write it the opposite way? Yes. So this is 8 over 10 choose 3. OK. So we're going to close with a more difficult problem now. OK. This business of n choose k has to do with starting with a set and picking a subset of k elements. Another way of thinking of that is that we start with a set with n elements and you choose a subset that has k, which means that there's n minus k that are left. Picking a subset is the same as partitioning our set into two pieces. Now let's generalize this question and start counting partitions in general. Somebody gives you a set that has n elements. Somebody gives you also certain numbers-- n1, n2, n3, let's say, n4, where these numbers add up to n. And you're asked to partition this set into four subsets where each one of the subsets has this particular cardinality. So you're asking to cut it into four pieces, each one having the prescribed cardinality. In how many ways can we do this partitioning? n choose k was the answer when we partitioned in two pieces, what's the answer more generally? For a concrete example of a partition, you have your 52 card deck and you deal, as in bridge, by giving 13 cards to each one of the players. Assuming that the dealing is done fairly and with a well shuffled deck of cards, every particular partition of the 52 cards into four hands, that is four subsets of 13 each, should be equally likely. So we take the 52 cards and we partition them into subsets of 13, 13, 13, and 13. And we assume that all possible partitions, all possible ways of dealing the cards are equally likely. So we are again in a setting where we can use counting, because all the possible outcomes are equally likely. 
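Going back to the 3-heads-in-10-tosses calculation for a moment: since every 3-head sequence is equally likely inside B, the conditional probability is a pure counting ratio, and so it can be checked by enumeration (an editor's sketch, not from the lecture).

```python
import itertools
import math
from fractions import Fraction

sequences = itertools.product("HT", repeat=10)
B = [s for s in sequences if s.count("H") == 3]        # exactly 3 heads
A_and_B = [s for s in B if s[:2] == ("H", "H")]        # first two tosses heads

answer = Fraction(len(A_and_B), len(B))
assert answer == Fraction(8, math.comb(10, 3))
print(answer)  # 8 / 120 = 1/15
```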
So an outcome of the experiment is the hands that each player ends up getting. And when you get the cards in your hands, it doesn't matter in which order that you got them. It only matters what cards you have on you. So it only matters which subset of the cards you got. All right. So what's the cardinality of the sample space in this experiment? So let's do it for the concrete numbers that we have for the problem of partitioning 52 cards. So think of dealing as follows-- you shuffle the deck perfectly, and then you take the top 13 cards and give them to one person. In how many possible hands are there for that person? Out of the 52 cards, I choose 13 at random and give them to the first person. Having done that, what happens next? I'm left with 39 cards. And out of those 39 cards, I pick 13 of them and give them to the second person. Now I'm left with 26 cards. Out of those 26, I choose 13, give them to the third person. And for the last person there isn't really any choice. Out of the 13, I have to give that person all 13. And that number is just equal to 1. So we don't care about it. All right. So next thing you do is to write down the formulas for these numbers. So, for example, here you would have 52 factorial, divided by 13 factorial, times 39 factorial, and you continue. And then there are nice cancellations that happen. This 39 factorial is going to cancel the 39 factorial that comes from there, and so on. After you do the cancellations and all the algebra, you're left with this particular answer, which is the number of possible partitions of 52 cards into four players where each player gets exactly 13 hands. If you were to generalize this formula to the setting that we have here, the more general formula is-- you have n factorial, where n is the number of objects that you are distributing, divided by the product of the factorials of the-- OK, here I'm doing it for the case where we split it into four sets. So that would be the answer when we partition a set into four subsets of prescribed cardinalities. And you can guess how that formula would generalize if you want to split it into five sets or six sets. OK. So far we just figured out the size of the sample space. Now we need to look at our event, which is the event that each player gets an ace, let's call that event A. In how many ways can that event happens? How many possible hands are there in which every player has exactly one ace? So I need to think about the sequential process by which I distribute the cards so that everybody gets exactly one ace, and then try to think in how many ways can that sequential process happen. So one way of making sure that everybody gets exactly one ace is the following-- I take the four aces and I distribute them randomly to the four players, but making sure that each one gets exactly one ace. In how many ways can that happen? I take the ace of spades and I send it to a random person out of the four. So there's 4 choices for this. Then I'm left with 3 aces to distribute. That person already gotten an ace. I take the next ace, and I give it to one of the 3 people remaining. So there's 3 choices for how to do that. And then for the next ace, there's 2 people who have not yet gotten an ace, and they give it randomly to one of them. So these are the possible ways of distributing for the 4 aces, so that each person gets exactly one. It's actually the same as this problem. 
Starting with a set of four things, in how many ways can I partition them into four subsets where the first set has one element, the second has one element, the third one has another element, and so on. So it agrees with that formula by giving us 4 factorial. OK. So there are different ways of distributing the aces. And then there's different ways of distributing the remaining 48 cards. How many ways are there? Well, I have 48 cards that I'm going to distribute to four players by giving 12 cards to each one. It's exactly the same question as the one we had here, except that now it's 48 cards, 12 to each person. And that gives us this particular count. So putting all that together gives us the different ways that we can distribute the cards to the four players so that each one gets exactly one ace. The number of possible ways is going to be this four factorial, coming from here, times this number-- this gives us the number of ways that the event of interest can happen-- and then the denominator is the cardinality of our sample space, which is this number. So this looks like a horrible mess. It turns out that this expression does simplify to something really, really simple. And if you look at the textbook for this problem, you will see an alternative derivation that gives you a short cut to the same numerical answer. All right. So that basically concludes chapter one. From next time we're going to consider introducing random variables and make the subject even more interesting.
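As a footnote to the card-dealing example above, the multinomial count and the each-player-gets-an-ace probability can be evaluated directly; the sketch below is an editor's illustration of that counting argument, not part of the lecture.

```python
import math
from fractions import Fraction

def partitions(n, sizes):
    # n! / (n1! * n2! * ...): ways to split n items into labeled groups
    # of the given sizes.
    assert sum(sizes) == n
    count = math.factorial(n)
    for s in sizes:
        count //= math.factorial(s)
    return count

total = partitions(52, [13, 13, 13, 13])                     # all possible deals
favorable = math.factorial(4) * partitions(48, [12, 12, 12, 12])

p_each_player_one_ace = Fraction(favorable, total)
print(p_each_player_one_ace, float(p_each_player_one_ace))  # about 0.105
```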
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Communication_over_a_Noisy_Channel.txt
Hi. In this problem, we'll be talking about communication across a noisy channel. But before we dive into the problem itself, I wanted to first motivate the context a little bit and talk more about what exactly a communication channel is and what "noise" means. So in our everyday life, we deal with a lot of communication channels, for example, the internet, where we download data and we watch videos online, or even just talking to a friend, where the air could be the communication channel for our voice. But as you probably have experienced, sometimes these channels have noise, which just means that what the sender was trying to send isn't necessarily exactly what the receiver receives. And so in probability, we try to model these communication channels and the noise and try to understand the probability behind it. And so now, let's go into the problem itself. In this problem, we're dealing with a pretty simple communication channel. It's just a binary channel, which means that what we're sending is just one bit at a time. And here, a bit just means either a 0 or a 1-- so essentially, the simplest piece of information that you could send. But sometimes when you send a 0, the receiver actually receives a 1 instead, or vice versa. And that is where noise comes in. So here in this problem, we actually have a probabilistic model of this channel and the noise that hits the channel. What we're trying to send is either a 0 or a 1. And what we know is that on the receiving end, a 0 can either be received when a 0 is sent, or a 1 can be received when a 0 is sent. And when a 1 is sent, we could also have noise that corrupts it, and you get a 0 instead. Or you can have a 1 being successfully transmitted. And the problem actually tells us what the probabilities here are. So we know that if a 0 is sent, then with probability 1 minus epsilon naught, a 0 is received. And with the remaining probability, it actually gets corrupted and turned into a 1. And similarly, if a 1 is sent, then with probability 1 minus epsilon 1, the 1 is correctly transmitted. And with the remaining probability epsilon 1, it's turned into a 0 instead. And the last piece of information is that we know that, with probability p, any given bit that is being sent is a 0. And with probability 1 minus p, we're actually trying to send a 1. So that is the basic setup for the problem. And the first part of the problem asks us to find the probability of a successful transmission when just an arbitrary bit is being sent. So what we can do here is use this tree that we've already drawn and identify the cases, the outcomes, where a bit is actually successfully transmitted. So if a 0 is sent and a 0 is received, then that corresponds to a successful transmission. Similarly, if a 1 is sent and a 1 is received, that also corresponds to a successful transmission. And then we can calculate what these probabilities are, because we just calculate the probabilities along the branches. And so here, implicitly, what we're doing is invoking the multiplication rule. So we can calculate the probabilities of these two individual outcomes, and they're disjoint outcomes, so we can just sum the two probabilities to find the answer. So the probability here is p times 1 minus epsilon naught-- that's the probability of a 0 being successfully transmitted-- plus 1 minus p times 1 minus epsilon 1, which is the probability that a 1 is successfully transmitted.
And so what we've done here is actually just looked at this kind of diagram, this tree to find the answer. What we also could have done is been a little bit more methodical maybe and actually apply the law of total probability, which is really what we're trying to do here. So you can see that this actually corresponds to-- the p corresponds to the probability of 0 being sent. And 1 minus epsilon naught is the probability of success, given that a 0 is sent. And this second term is analogous. It's the probability that a 1 was sent times the probability that you have a success, given that a 1 was sent. And this is just an example of applying the law of total probability, where we partitioned into the two cases of a 0 being sent and a 1 being sent and calculated the probabilities for each of those two cases and added those up. So that's kind of a review of the multiplication rule and law of total probability. So now, let's move on to part B. Part B is asking, what is the probability that a particular sequence of bits, not just a single one, but a sequence of four bits in a row is successfully transmitted? And the sequence that we're looking for is, 1, 0, 1, 1. So this is how I'll denote this event. 1, 0, 1, 1 gets successfully transmitted into 1, 0, 1, 1. Now, instead of dealing with single bits in isolation, we have a sequence of four bits. But we can really just break this out into the four individual bits and look at those one by one. So in order to transmit successfully 1, 0, 1, 1, that whole sequence, we first need to transmit a 1 successfully, then a 0 successfully, then another 1 successfully, and then finally, the last 1 successfully. So really, this is the same as the intersection of four different smaller events, a 1 being successfully transmitted and a 0 being successfully transmitted and two more 1's being successfully transmitted. So why are we able to do this, first of all? We are using an important assumption that we make in the problem that each transmission of an individual bit has the same probabilistic structure so that no matter which bit you're talking about, they all have the same [? error ?] probability, the same probabilities of being either successfully transmitted or having noise corrupt it. So because of that, it doesn't really matter which particular 1 or 0 we're talking about. And now, we'll make one more step, and we'll invoke independence, which is the third topic here. And the other important assumption here we're making is that every single bit is independent from any other bit. So the fact that this one was successfully transmitted has no impact on the probability of the 0 being successfully transmitted or not. And so because of that, we can now break this down into a product of four probabilities. So this becomes the probability of 1 transmitted into a 1 times the probability of 0 transmitted into a 0, 1 to a 1, and 1 to 1. And that simplifies things, because we know what each one of these are. The probability of 1 being successful transmitted into a 1, we know that's just 1 minus epsilon 1. And similarly, probability of 0 being transmitted into a 0 is 1 minus epsilon naught. So our final answer then is just-- well, we have three of these and one of these. So the answer is going to be 1 minus epsilon naught times 1 minus epsilon 1 to the third power. Now, let's move on go on to part C, which adds another wrinkle to the problem. So now, maybe we're not satisfied with the success rate of our current channel. And we want to improve it somehow. 
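Before moving on to part C, here is a small sketch of the two computations so far (an editor's addition; the numerical values of p, epsilon naught, and epsilon 1 are made up for illustration and are not given in the problem).

```python
p, e0, e1 = 0.6, 0.1, 0.2  # hypothetical values, for illustration only

# Part (a): P(a single bit is transmitted successfully), by total probability.
p_success = p * (1 - e0) + (1 - p) * (1 - e1)

# Part (b): P(the sequence 1, 0, 1, 1 is received exactly), by independence.
p_sequence = (1 - e0) * (1 - e1)**3

print(p_success, p_sequence)
```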
And one way of doing this is to add some redundancy. So instead of just sending a single 0 and hoping that it gets successfully transmitted, what we can do instead is send three 0s in a row to represent a single 0 and hope that, because we've added some redundancy, we can somehow improve our error rates. So in particular, what we're going to do is, for a 0-- when we want to send a 0, which I'll put in quotes here-- what we're actually going to send is a sequence of three 0s. And what's going to happen is, this sequence of three 0s-- each one of these bits is going to go through the same channel. So the 0, 0, 0 can stay and get transmitted successfully as a 0, 0, 0. Or maybe the last 0 gets turned into a 1, or the second 0 gets turned into a 1, or we can have any one of these eight possible outcomes on the receiving end. And then similarly, for a 1, when we want to send a 1, what we'll actually send is a sequence of three 1s, three bits. And just like above, this 1, 1, 1, due to the noise in the channel, can get turned into any one of these eight sequences on the receiving end. So what we're going to do now is, instead of sending just a single 0, we'll send three 0s, and instead of sending a 1, we'll send three 1s. But now, the question is, this is what you'll get on the receiving end. How do you interpret it? For 0, 0, 0, maybe intuitively you'll say that's obviously a 0. But what if you get something like 0, 1, 0 or 1, 0, 1, when there are both 0s and 1s in the received message? What are you going to do? So one obvious thing to do is to take a majority rule. So because there are three of them, if there are two or more 0s, we'll say that what was meant to be sent was actually a 0. And if there are two or more 1s, then we'll interpret that as a 1 being sent. So in this case, let's look at the case of 0. The majority rule here would say that, well, if 0, 0, 0 was received, then the majority is 0s. And similarly, in these two cases, 0, 0, 1 or 0, 1, 0, the majority is also 0s. And then finally, in this last case, 1, 0, 0, you get a majority of 0s. So for these four received messages, we'll interpret that as a 0 having been sent. So part C is asking, given this majority rule and this redundancy, what is the probability that a 0 is correctly transmitted? Well, to answer that, we've already identified these as the four outcomes where a 0 would be correctly transmitted. So to find the answer to this question, all we have to do is find the probability that a sequence of 0, 0, 0 gets turned into one of these four sequences. So let's do that. What is the probability that a 0, 0, 0 gets turned into a 0, 0, 0? Well, that means that all three of these 0s had no errors. So we would have the answer being (1 minus epsilon naught) cubed, because all three of these bits had to have been successfully transmitted. Now, let's consider the other ones. For example, what's the probability that a 0, 0, 0 gets turned into a 0, 0, 1? Well, in this case, we need two successful transmissions of 0s, plus one transmission of a 0 that had an error. So that is going to be (1 minus epsilon naught) squared for the two successful transmissions of 0, times epsilon naught for the single one that was wrong. And if you think about it, that was only for this case-- 0, 0, 1. But the cases of 0, 1, 0 and 1, 0, 0 are the same, because for all three of these, you have two successful transmissions of 0, plus one that was corrupted with noise. And so it turns out that all three of those probabilities are going to be the same. So adding up the four possibilities, the probability that a 0 is correctly transmitted is (1 minus epsilon naught) cubed, plus 3 times (1 minus epsilon naught) squared, times epsilon naught.
So this is our final answer for this part. Now, let's move on to part D. Part D is asking now a type of inference problem. And we'll talk more about inference later on in this course. What this part is asking is, suppose you received a 1, 0, 1. That's the sequence of three bits that you received. Given that you received a 1, 0, 1, what's the probability that a 0 was actually the thing that was being sent? So if you look at this, you'll look at it and say, this looks like something where we can apply Bayes' rule. So that's the fourth thing that we're covering in this problem. And if you apply Bayes' rule, what you'll get is, this is equal to the probability of 0 times the probability of 1, 0, 1 being received, given that 0 was what was sent, divided by the probability that 1, 0, 1 is received. So we have this basic structure. And we also know that we can use the law of total probability again on this denominator. So we know that the probability that 1, 0, 1 is received is equal to the probability of 0 being sent times the probability of 1, 0, 1 being received, given that 0 was sent, plus the probability that 1 was sent times the probability that 1, 0, 1 is received, given that 1 was sent. And as you'll notice in applications of Bayes' rule, usually the numerator is then repeated as one of the terms in the denominator, because the denominator is just an application of total probability. So if you put these pieces together, really, what we need is just these four terms. Once we have those four terms, we can just plug them into this equation, and we'll have our answer. So let's figure out what those four terms are. The probability of 0 being sent-- well, we said that earlier. The probability of 0 being sent is just p. And the probability of 1 being sent is 1 minus p. That's just from the model that we're given in the problem. Now, let's figure out this part. What is the probability of a 1, 0, 1 being received, given that 0 was sent? So if a 0 was sent, then we know that what really was sent was 0, 0, 0, that sequence of three bits. And now, what's the probability that 0, 0, 0 got turned into 1, 0, 1? Well, in this case, what we have is one successful transmission of a 0, plus two failed transmissions of a 0. So that one successful transmission of a 0-- that probability is 1 minus epsilon naught. And now, we have two failed transmissions of a 0. So we have to multiply that by epsilon naught squared. And now, for the last piece, what's the probability of receiving the 1, 0, 1, given that a 1 was actually sent? Well, in that case, if a 1 was sent, what was really sent was a sequence of three 1s. And now, we want the probability that a 1, 1, 1 got turned into a 1, 0, 1. In this case, we have two successful transmissions of the 1 with one failed transmission. So the two successful transmissions will give us (1 minus epsilon 1) squared. And then the one failed transmission will give us an extra term of epsilon 1. So just for completeness, let's actually write out what this final answer would be. So the probability of 0 is p. The probability of 1, 0, 1, given 0 is, we calculated that as 1 minus epsilon naught times epsilon naught squared. The same term appears again in the denominator. Plus the other term is, the probability of 1 times the probability of 1, 0, 1, given 1. So that is 1 minus p, times (1 minus epsilon 1) squared, times epsilon 1. So that is our final answer. And it's really just an application of Bayes' rule. So this was a nice problem, because it represents a real world phenomenon that happens.
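The same Bayes' rule computation, written out as a short sketch (an editor's addition; the numbers for p, epsilon naught, and epsilon 1 are again placeholders, not values from the problem).

```python
p, e0, e1 = 0.6, 0.1, 0.2  # hypothetical values for illustration

# P(receive 1, 0, 1 | "0" sent as 0, 0, 0): one good 0 and two corrupted 0s.
lik_given_0 = (1 - e0) * e0**2
# P(receive 1, 0, 1 | "1" sent as 1, 1, 1): two good 1s and one corrupted 1.
lik_given_1 = (1 - e1)**2 * e1

# Bayes' rule, with the law of total probability in the denominator.
posterior_0 = p * lik_given_0 / (p * lik_given_0 + (1 - p) * lik_given_1)
print(posterior_0)
```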
And we can see that you can apply a pretty simple probabilistic model to it and still be able to answer some interesting questions. And there are other extensions that you can ask about also. For example, we've talked about adding redundancy by tripling the number of bits, but tripling the number of bits also reduces the throughput, because instead of sending one bit, you have to send three bits just to convey one. So if there's a cost to that, at what point does the benefit of having a lower error rate outweigh the cost of having to send more things? And so that's a question that you can answer with some more tools in probability. So we hope you enjoyed this problem. And we'll see you again next time.
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
24_Classical_Inference_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOHN TSITSIKLIS: And we're going to continue today with our discussion of classical statistics. We'll start with a quick review of what we discussed last time, and then talk about two topics that cover a lot of statistics that are happening in the real world. So two basic methods. One is the method of linear regression, and the other one is the basic methods and tools for how to do hypothesis testing. OK, so these two are topics that any scientifically literate person should know something about. So we're going to introduce the basic ideas and concepts involved. So in classical statistics we basically have essentially a family of possible models about the world. So the world is the random variable that we observe, and we have a model for it, but actually not just one model, several candidate models. And each candidate model corresponds to a different value of a parameter theta that we do not know. So in contrast to Bayesian statistics, this theta is assumed to be a constant that we do not know. It is not modeled as a random variable, there's no probabilities associated with theta. We only have probabilities about the X's. So in this context what is a reasonable way of choosing a value for the parameter? One general approach is the maximum likelihood approach, which chooses the theta for which this quantity is largest. So what does that mean intuitively? I'm trying to find the value of theta under which the data that I observe are most likely to have occurred. So is the thinking is essentially as follows. Let's say I have to choose between two choices of theta. Under this theta the X that I observed would be very unlikely. Under that theta the X that I observed would have a decent probability of occurring. So I chose the latter as my estimate of theta. It's interesting to do the comparison with the Bayesian approach which we did discuss last time, in the Bayesian approach we also maximize over theta, but we maximize a quantity in which the relation between X's and thetas run the opposite way. Here in the Bayesian world, Theta is a random variable. So it has a distribution. Once we observe the data, it has a posterior distribution, and we find the value of Theta, which is most likely under the posterior distribution. As we discussed last time when you do this maximization now the posterior distribution is given by this expression. The denominator doesn't matter, and if you were to take a prior, which is flat-- that is a constant independent of Theta, then that term would go away. And syntactically, at least, the two approaches look the same. So syntactically, or formally, maximum likelihood estimation is the same as Bayesian estimation in which you assume a prior which is flat, so that all possible values of Theta are equally likely. Philosophically, however, they're very different things. Here I'm picking the most likely value of Theta. Here I'm picking the value of Theta under which the observed data would have been more likely to occur. So maximum likelihood estimation is a general purpose method, so it's applied all over the place in many, many different types of estimation problems. 
There is a special kind of estimation problem in which you may forget about maximum likelihood estimation, and come up with an estimate in a straightforward way. And this is the case where you're trying to estimate the mean of the distribution of X, where X is a random variable. You observe several independent identically distributed random variables X1 up to Xn. All of them have the same distribution as this X. So they have a common mean. We do not know the mean we want to estimate it. What is more natural than just taking the average of the values that we have observed? So you generate lots of X's, take the average of them, and you expect that this is going to be a reasonable estimate of the true mean of that random variable. And indeed we know from the weak law of large numbers that this estimate converges in probability to the true mean of the random variable. The other thing that we talked about last time is that besides giving a point estimate we may want to also give an interval that tells us something about where we might believe theta to lie. And 1-alpha confidence interval is in interval generated based on the data. So it's an interval from this value to that value. These values are written with capital letters because they're random, because they depend on the data that we have seen. And this gives us an interval, and we would like this interval to have the property that theta is inside that interval with high probability. So typically we would take 1-alpha to be a quantity such as 95% for example. In which case we have a 95% confidence interval. As we discussed last time it's important to have the right interpretation of what's 95% means. What it does not mean is the following-- the unknown value has 95% percent probability of being in the interval that we have generated. That's because the unknown value is not a random variable, it's a constant. Once we generate the interval either it's inside or it's outside, but there's no probabilities involved. Rather the probabilities are to be interpreted over the random interval itself. What a statement like this says is that if I have a procedure for generating 95% confidence intervals, then whenever I use that procedure I'm going to get a random interval, and it's going to have 95% probability of capturing the true value of theta. So most of the time when I use this particular procedure for generating confidence intervals the true theta will happen to lie inside that confidence interval with probability 95%. So the randomness in this statement is with respect to my confidence interval, it's not with respect to theta, because theta is not random. How does one construct confidence intervals? There's various ways of going about it, but in the case where we're dealing with the estimation of the mean of a random variable doing this is straightforward using the central limit theorem. Basically we take our estimated mean, that's the sample mean, and we take a symmetric interval to the left and to the right of the sample mean. And we choose the width of that interval by looking at the normal tables. So if this quantity, 1-alpha is 95% percent, we're going to look at the 97.5 percentile of the normal distribution. Find the constant number that corresponds to that value from the normal tables, and construct the confidence intervals according to this formula. So that gives you a pretty mechanical way of going about constructing confidence intervals when you're estimating the sample mean. 
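As a concrete sketch of this mechanical recipe (an editor's addition, not part of the lecture; the data below are simulated), here is a 95% confidence interval for a sample mean, with sigma estimated from the data.

```python
import math
import random

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(200)]  # simulated i.i.d. samples

n = len(data)
sample_mean = sum(data) / n
# Estimate sigma from the data (alternatively, use a known value or an upper bound).
sigma_hat = math.sqrt(sum((x - sample_mean)**2 for x in data) / (n - 1))

z = 1.96  # 97.5th percentile of the standard normal distribution
half_width = z * sigma_hat / math.sqrt(n)
print(sample_mean - half_width, sample_mean + half_width)
```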
So constructing confidence intervals in this way involves an approximation. The approximation is the central limit theorem. We are pretending that the sample mean is a normal random variable. Which is, more or less, right when n is large. That's what the central limit theorem tells us. And sometimes we may need to do some extra approximation work, because quite often we do not know the true value of sigma. So we need to do some work either to estimate sigma from the data. So sigma is, of course, the standard deviation of the X's. We may want to estimate it from the data, or we may have an upper bound on sigma, and we just use that upper bound. So now let's move on to a new topic. A lot of statistics in the real world are of the following flavor. So suppose that X is the SAT score of a student in high school, and Y is the MIT GPA of that same student. So you expect that there is a relation between these two. So you go and collect data for different students, and you record for a typical student this would be their SAT score, that could be their MIT GPA. And you plot all this data on an (X,Y) diagram. Now it's reasonable to believe that there is some systematic relation between the two. So people who had higher SAT scores in high school may have higher GPA in college. Well that may or may not be true. You want to construct a model of this kind, and see to what extent a relation of this type is true. So you might hypothesize that the real world is described by a model of this kind. That there is a linear relation between the SAT score, and the college GPA. So it's a linear relation with some parameters, theta0 and theta1 that we do not know. So we assume a linear relation for the data, and depending on the choices of theta0 and theta1 it could be a different line through those data. Now we would like to find the best model of this kind to explain the data. Of course there's going to be some randomness. So in general it's going to be impossible to find a line that goes through all of the data points. So let's try to find the best line that comes closest to explaining those data. And here's how we go about it. Suppose we try some particular values of theta0 and theta1. These give us a certain line. Given that line, we can make predictions. For a student who had this x, the model that we have would predict that y would be this value. The actual y is something else, and so this quantity is the error that our model would make in predicting the y of that particular student. We would like to choose a line for which the predictions are as good as possible. And what do we mean by as good as possible? As our criteria we're going to take the following. We are going to look at the prediction error that our model makes for each particular student. Take the square of that, and then add them up over all of our data points. So what we're looking at is the sum of this quantity squared, that quantity squared, that quantity squared, and so on. We add all of these squares, and we would like to find the line for which the sum of these squared prediction errors are as small as possible. So that's the procedure. We have our data, the X's and the Y's. And we're going to find theta's the best model of this type, the best possible model, by minimizing this sum of squared errors. So that's a method that one could pull out of the hat and say OK, that's how I'm going to build my model. And it sounds pretty reasonable. And it sounds pretty reasonable even if you don't know anything about probability. 
But does it have some probabilistic justification? It turns out that yes, you can motivate this method with probabilistic considerations under certain assumptions. So let's make a probabilistic model that's going to lead us to these particular way of estimating the parameters. So here's a probabilistic model. I pick a student who had a specific SAT score. And that could be done at random, but also could be done in a systematic way. That is, I pick a student who had an SAT of 600, a student of 610 all the way to 1,400 or 1,600, whatever the right number is. I pick all those students. And I assume that for a student of this kind there's a true model that tells me that their GPA is going to be a random variable, which is something predicted by their SAT score plus some randomness, some random noise. And I model that random noise by independent normal random variables with 0 mean and a certain variance. So this is a specific probabilistic model, and now I can think about doing maximum likelihood estimation for this particular model. So to do maximum likelihood estimation here I need to write down the likelihood of the y's that I have observed. What's the likelihood of the y's that I have observed? Well, a particular w has a likelihood of the form e to the minus w squared over (2 sigma-squared). That's the likelihood of a particular w. The probability, or the likelihood of observing a particular value of y, that's the same as the likelihood that w takes a value of y minus this, minus that. So the likelihood of the y's is of this form. Think of this as just being the w_i-squared. So this is the density -- and if we have multiple data you multiply the likelihoods of the different y's. So you have to write something like this. Since the w's are independent that means that the y's are also independent. The likelihood of a y vector is the product of the likelihoods of the individual y's. The likelihood of every individual y is of this form. Where w is y_i minus these two quantities. So this is the form that the likelihood function is going to take under this particular model. And under the maximum likelihood methodology we want to maximize this quantity with respect to theta0 and theta1. Now to do this maximization you might as well consider the logarithm and maximize the logarithm, which is just the exponent up here. Maximizing this exponent because we have a minus sign is the same as minimizing the exponent without the minus sign. Sigma squared is a constant. So what you end up doing is minimizing this quantity here, which is the same as what we had in our linear regression methods. So in conclusion you might choose to do linear regression in this particular way, just because it looks reasonable or plausible. Or you might interpret what you're doing as maximum likelihood estimation, in which you assume a model of this kind where the noise terms are normal random variables with the same distribution -- independent identically distributed. So linear regression implicitly makes an assumption of this kind. It's doing maximum likelihood estimation as if the world was really described by a model of this form, and with the W's being random variables. So this gives us at least some justification that this particular approach to fitting lines to data is not so arbitrary, but it has a sound footing. OK so then once you accept this formulation as being a reasonable one what's the next step? The next step is to see how to carry out this minimization. This is not a very difficult minimization to do. 
The way it's done is by setting the derivatives of this expression to 0. Now because this is a quadratic function of theta0 and theta1-- when you take the derivatives with respect to theta0 and theta1-- you get linear functions of theta0 and theta1. And you end up solving a system of linear equations in theta0 and theta1. And it turns out that there are very nice and simple formulas for the optimal estimates of the parameters in terms of the data. And the formulas are these ones. I said that these are nice and simple formulas. Let's see why. How can we interpret them? So suppose that the world is described by a model of this kind, where the X's and Y's are random variables, and where W is a noise term that's independent of X. So we're assuming that a linear model is indeed true, but not exactly true. There's always some noise associated with any particular data point that we obtain. So if a model of this kind is true, and the W's have 0 mean, then we have that the expected value of Y would be theta0 plus theta1 times the expected value of X. And because W has 0 mean, there's no extra term. So in particular, theta0 would be equal to the expected value of Y minus theta1 times the expected value of X. So let's use this equation to try to come up with a reasonable estimate of theta0. I do not know the expected value of Y, but I can estimate it. How do I estimate it? I look at the average of all the y's that I have obtained. So I replace this-- I estimate it with the average of the data I have seen. Here, similarly with the X's. I might not know the expected value of the X's, but I have data points for the x's. I look at the average of all my data points, and I come up with an estimate of this expectation. Now I don't know what theta1 is, but my procedure is going to generate an estimate of theta1 called theta1 hat. And once I have this estimate, then a reasonable person would estimate theta0 in this particular way. So that's how my estimate of theta0 is going to be constructed. It's this formula here. We have not yet addressed the harder question, which is how to estimate theta1 in the first place. So to estimate theta0 I assumed that I already had an estimate for theta1. OK, the right formula for the estimate of theta1 happens to be this one. It looks messy, but let's try to interpret it. What I'm going to do is to take this model, and for simplicity let's assume that the random variables have 0 means. And let's see how we might try to estimate theta1. Let's multiply both sides of this equation by X. So we get Y times X equals theta0 times X, plus theta1 times X-squared, plus X times W. And now take expectations of both sides. If I have 0 mean random variables, the expected value of Y times X is just the covariance of X with Y. I have assumed that my random variables have 0 means, so the expectation of this first term is 0. This one is going to be the variance of X, so I have theta1 times the variance of X. And since I'm assuming that my random variables have 0 mean, and I'm also assuming that W is independent of X, this last term also has 0 mean. So under such a probabilistic model this equation is true. If we knew the variance and the covariance, then we would know the value of theta1. But we only have data; we do not necessarily know the variance and the covariance, but we can estimate them. What's a reasonable estimate of the variance? The reasonable estimate of the variance is this quantity here divided by n, and the reasonable estimate of the covariance is that numerator divided by n.
So this is my estimate of the mean. I'm looking at the squared distances from the mean, and I average them over lots and lots of data. This is the most reasonable way of estimating the variance of our distribution. And similarly the expected value of this quantity is the covariance of X with Y, and then we have lots and lots of data points. This quantity here is going to be a very good estimate of the covariance. So basically what this formula does is-- one way of thinking about it-- is that it starts from this relation which is true exactly, but estimates the covariance and the variance on the basis of the data, and then using these estimates to come up with an estimate of theta1. So this gives us a probabilistic interpretation of the formulas that we have for the way that the estimates are constructed. If you're willing to assume that this is the true model of the world, the structure of the true model of the world, except that you do not know means and covariances, and variances. Then this is a natural way of estimating those unknown parameters. All right, so we have a closed-form formula, we can apply it whenever we have data. Now linear regression is a subject on which there are whole courses, and whole books that are given. And the reason for that is that there's a lot more that you can bring into the topic, and many ways that you can elaborate on the simple solution that we got for the case of two parameters and only two random variables. So let me give you a little bit of flavor of what are the topics that come up when you start looking into linear regression in more depth. So in our discussions so far we made the linear model in which we're trying to explain the values of one variable in terms of the values of another variable. We're trying to explain GPAs in terms of SAT scores, or we're trying to predict GPAs in terms of SAT scores. But maybe your GPA is affected by several factors. For example maybe your GPA is affected by your SAT score, also the income of your family, the years of education of your grandmother, and many other factors like that. So you might write down a model in which I believe that GPA has a relation, which is a linear function of all these other variables that I mentioned. So perhaps you have a theory of what determines performance at college, and you want to build a model of that type. How do we go about in this case? Well, again we collect the data points. We look at the i-th student, who has a college GPA. We record their SAT score, their family income, and grandmother's years of education. So this is one data point that is for one particular student. We postulate the model of this form. For the i-th student this would be the mistake that our model makes if we have chosen specific values for those parameters. And then we go and choose the parameters that are going to give us, again, the smallest possible sum of squared errors. So philosophically it's exactly the same as what we were discussing before, except that now we're including multiple explanatory variables in our model instead of a single explanatory variable. So that's the formulation. What do you do next? Well, to do this minimization you're going to take derivatives once you have your data, you have a function of these three parameters. You take the derivative with respect to the parameter, set the derivative equal to 0, you get the system of linear equations. You throw that system of linear equations to the computer, and you get numerical values for the optimal parameters. 
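A short sketch of both recipes, assuming NumPy is available and using invented data: the closed-form estimates for the two-parameter case, and a numerical least-squares solve once there are several explanatory variables. The second explanatory variable x2 here is hypothetical, standing in for something like family income.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

    # Two-parameter case: theta1_hat = (sample covariance) / (sample variance),
    # theta0_hat = y_bar - theta1_hat * x_bar, exactly the formulas discussed above.
    x_bar, y_bar = x.mean(), y.mean()
    theta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    theta0_hat = y_bar - theta1_hat * x_bar

    # Several explanatory variables: stack them (plus a constant column) into a
    # matrix and let the computer solve the resulting linear system numerically.
    x2 = np.array([10.0, 9.0, 7.0, 8.0, 5.0])          # a second, invented explanatory variable
    A = np.column_stack([np.ones_like(x), x, x2])
    theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # minimizes the sum of squared errors
    print(theta0_hat, theta1_hat, theta_hat)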
There are no nice closed-form formulas of the type that we had in the previous slide when you're dealing with multiple variables. Unless you're willing to go into matrix notation. In that case you can again write down closed-form formulas, but they will be a little less intuitive than what we had before. But the moral of the story is that numerically this is a procedure that's very easy. It's a problem, an optimization problem that the computer can solve for you. And it can solve it for you very quickly. Because all that it involves is solving a system of linear equations. Now when you choose your explanatory variables you may have some choices. One person may think that your GPA a has something to do with your SAT score. Some other person may think that your GPA has something to do with the square of your SAT score. And that other person may want to try to build a model of this kind. Now when would you want to do this? ? Suppose that the data that you have looks like this. If the data looks like this then you might be tempted to say well a linear model does not look right, but maybe a quadratic model will give me a better fit for the data. So if you want to fit a quadratic model to the data then what you do is you take X-squared as your explanatory variable instead of X, and you build a model of this kind. There's nothing really different in models of this kind compared to models of that kind. They are still linear models because we have theta's showing up in a linear fashion. What you take as your explanatory variables, whether it's X, whether it's X-squared, or whether it's some other function that you chose. Some general function h of X, doesn't make a difference. So think of you h of X as being your new X. So you can formulate the problem exactly the same way, except that instead of using X's you choose h of X's. So it's basically a question do I want to build a model that explains Y's based on the values of X, or do I want to build a model that explains Y's on the basis of the values of h of X. Which is the right value to use? And with this picture here, we see that it can make a difference. A linear model in X might be a poor fit, but a quadratic model might give us a better fit. So this brings to the topic of how to choose your functions h of X if you're dealing with a real world problem. So in a real world problem you're just given X's and Y's. And you have the freedom of building models of any kind you want. You have the freedom of choosing a function h of X of any type that you want. So this turns out to be a quite difficult and tricky topic. Because you may be tempted to overdo it. For example, I got my 10 data points, and I could say OK, I'm going to choose an h of X. I'm going to choose h of X and actually multiple h's of X to do a multiple linear regression in which I'm going to build a model that's uses a 10th degree polynomial. If I choose to fit my data with a 10th degree polynomial I'm going to fit my data perfectly, but I may obtain a model is does something like this, and goes through all my data points. So I can make my prediction errors extremely small if I use lots of parameters, and if I choose my h functions appropriately. But clearly this would be garbage. If you get those data points, and you say here's my model that explains them. That has a polynomial going up and down, then you're probably doing something wrong. So choosing how complicated those functions, the h's, should be. 
And how many explanatory variables to use is a very delicate and deep topic on which there's deep theory that tells you what you should do, and what you shouldn't do. But the main thing that one should avoid doing is having too many parameters in your model when you have too few data. So if you only have 10 data points, you shouldn't have 10 free parameters. With 10 free parameters you will be able to fit your data perfectly, but you wouldn't be able to really rely on the results that you are seeing. OK, now in practice, when people run linear regressions they do not just give point estimates for the parameters theta. But similar to what we did for the case of estimating the mean of a random variable you might want to give confidence intervals that sort of tell you how much randomness there is when you estimate each one of the particular parameters. There are formulas for building confidence intervals for the estimates of the theta's. We're not going to look at them, it would take too much time. Also you might want to estimate the variance in the noise that you have in your model. That is if you are pretending that your true model is of the kind we were discussing before, namely Y equals theta1 times X plus W, and W has a variance sigma squared. You might want to estimate this, because it tells you something about the model, and this is called standard error. It puts a limit on how good predictions your model can make. Even if you have the correct theta0 and theta1, and somebody tells you X you can make a prediction about Y, but that prediction will not be accurate. Because there's this additional randomness. And if that additional randomness is big, then your predictions will also have a substantial error in them. There's another quantity that gets reported usually. This is part of the computer output that you get when you use a statistical package which is called R-square. And its a measure of the explanatory power of the model that you have built linear regression. Using linear regression. Instead of defining R-square exactly, let me give you a sort of analogous quantity that's involved. After you do your linear regression you can look at the following quantity. You look at the variance of Y, which is something that you can estimate from data. This is how much randomness there is in Y. And compare it with the randomness that you have in Y, but conditioned on X. So this quantity tells me if I knew X how much randomness would there still be in my Y? So if I know X, I have more information, so Y is more constrained. There's less randomness in Y. This is the randomness in Y if I don't know anything about X. So naturally this quantity would be less than 1, and if this quantity is small it would mean that whenever I know X then Y is very well known. Which essentially tells me that knowing x allows me to make very good predictions about Y. Knowing X means that I'm explaining away most of the randomness in Y. So if you read a statistical study that uses linear regression you might encounter statements of the form 60% of a student's GPA is explained by the family income. If you read the statements of this kind it's really refers to quantities of this kind. Out of the total variance in Y, how much variance is left after we build our model? So if only 40% of the variance of Y is left after we build our model, that means that X explains 60% of the variations in Y's. So the idea is that randomness in Y is caused by multiple sources. Our explanatory variable and random noise. 
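Here is a small illustration of these two ideas -- overfitting with too many parameters, and the R-square style measure of explained variance -- as a sketch assuming NumPy is available; the data are simulated from a noisy linear relation, so the degree-1 fit is the sensible one and the high-degree fit only chases the noise.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=10)   # a noisy linear relation

    def fit_and_r2(degree):
        # Least-squares fit of a polynomial, i.e. h(x) = (x, x^2, ..., x^degree).
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        # In-sample "explained variance": 1 - Var(residual) / Var(Y).
        return 1.0 - residuals.var() / y.var()

    print("degree 1 R^2:", fit_and_r2(1))   # a sensible fit; R^2 below 1 because of the noise
    print("degree 9 R^2:", fit_and_r2(9))   # essentially 1.0: 10 parameters for 10 points,
                                            # a perfect but meaningless fit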
And we ask the question what percentage of the total randomness in Y is explained by variations in the X parameter? And how much of the total randomness in Y is attributed just to random effects? So if you have a model that explains most of the variation in Y then you can think that you have a good model that tells you something useful about the real world. Now there's lots of things that can go wrong when you use linear regression, and there's many pitfalls. One pitfall happens when you have this situation that's called heteroskedacisity. So suppose your data are of this kind. So what's happening here? You seem to have a linear model, but when X is small you have a very good model. So this means that W has a small variance when X is here. On the other hand, when X is there you have a lot of randomness. This would be a situation in which the W's are not identically distributed, but the variance of the W's, of the noise, has something to do with the X's. So with different regions of our x-space we have different amounts of noise. What will go wrong in this situation? Since we're trying to minimize sum of squared errors, we're really paying attention to the biggest errors. Which will mean that we are going to pay attention to these data points, because that's where the big errors are going to be. So the linear regression formulas will end up building a model based on these data, which are the most noisy ones. Instead of those data that are nicely stacked in order. Clearly that's not to the right thing to do. So you need to change something, and use the fact that the variance of W changes with the X's, and there are ways of dealing with it. It's something that one needs to be careful about. Another possibility of getting into trouble is if you're using multiple explanatory variables that are very closely related to each other. So for example, suppose that I tried to predict your GPA by looking at your SAT the first time that you took it plus your SAT the second time that you took your SATs. I'm assuming that almost everyone takes the SAT more than once. So suppose that you had a model of this kind. Well, SAT on your first try and SAT on your second try are very likely to be fairly close. And you could think of coming up with estimates in which this is ignored. And you build a model based on this, or an alternative model in which this term is ignored, and you make predictions based on the second SAT. And both models are likely to be essentially as good as the other one, because these two quantities are essentially the same. So in that case, your theta's that you estimate are going to be very sensitive to little details of the data. You change your data, you have your data, and your data tell you that this coefficient is big and that coefficient is small. You change your data just a tiny bit, and your theta's would drastically change. So this is a case in which you have multiple explanatory variables, but they're redundant in the sense that they're very closely related to each other, and perhaps with a linear relation. So one must be careful about the situation, and do special tests to make sure that this doesn't happen. Finally the biggest and most common blunder is that you run your linear regression, you get your linear model, and then you say oh, OK. Y is caused by X according to this particular formula. Well, all that we did was to identify a linear relation between X and Y. This doesn't tell us anything. 
Whether it's Y that causes X, or whether it's X that causes Y, or maybe both X and Y are caused by some other variable that we didn't think about. So building a good linear model that has small errors does not tell us anything about causal relations between the two variables. It only tells us that there's a close association between the two variables. If you know one you can make predictions about the other. But it doesn't tell you anything about the underlying physics, that there's some physical mechanism that introduces the relation between those variables. OK, that's it about linear regression. Let us start the next topic, which is hypothesis testing. And we're going to continue with it next time. So here, instead of trying to estimate continuous parameters, we have two alternative hypotheses about the distribution of the X random variable. So for example our random variable could be either distributed according to this distribution, under H0, or it might be distributed according to this distribution under H1. And we want to make a decision which distribution is the correct one? So we're given those two distributions, and some common terminologies that one of them is the null hypothesis-- sort of the default hypothesis, and we have some alternative hypotheses-- and we want to check whether this one is true, or that one is true. So you obtain a data point, and you want to make a decision. In this picture what would a reasonable person do to make a decision? They would probably choose a certain threshold, Xi, and decide that H1 is true if your data falls in this interval. And decide that H0 is true if you fall on the side. So that would be a reasonable way of approaching the problem. More generally you take the set of all possible X's, and you divide the set of possible X's into two regions. One is the rejection region, in which you decide H1, or you reject H0. And the complement of that region is where you decide H0. So this is the x-space of your data. In this example here, x was one-dimensional. But in general X is going to be a vector, where all the possible data vectors that you can get, they're divided into two types. If it falls in this set you'd make one decision. If it falls in that set, you make the other decision. OK, so how would you characterize the performance of the particular way of making a decision? Suppose I chose my threshold. I may make mistakes of two possible types. Perhaps H0 is true, but my data happens to fall here. In which case I make a mistake, and this would be a false rejection of H0. If my data falls here I reject H0. I decide H1. Whereas H0 was true. The probability of this happening? Let's call it alpha. But there's another kind of error that can be made. Suppose that H1 was true, but by accident my data happens to falls on that side. Then I'm going to make an error again. I'm going to decide H0 even though H1 was true. How likely is this to occur? This would be the area under this curve here. And that's the other type of error than can be made, and beta is the probability of this particular type of error. Both of these are errors. Alpha is the probability of error of one kind. Beta is the probability of an error of the other kind. You would like the probabilities of error to be small. So you would like to make both alpha and beta as small as possible. Unfortunately that's not possible, there's a trade-off. If I go to my threshold it this way, then alpha become smaller, but beta becomes bigger. So there's a trade-off. 
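To make the trade-off concrete, here is a small sketch (assuming SciPy is available) for a made-up pair of hypotheses -- under H0 the observation is normal with mean 0, under H1 normal with mean 2, both with unit variance -- using the rule "decide H1 whenever x exceeds a threshold xi". Sliding xi trades one error probability against the other.

    from scipy.stats import norm

    # Invented example: under H0, X ~ N(0, 1); under H1, X ~ N(2, 1).
    # Decision rule: reject H0 (decide H1) whenever x > xi.
    for xi in [0.5, 1.0, 1.5, 2.0]:
        alpha = 1 - norm.cdf(xi, loc=0, scale=1)   # P(reject H0 | H0 true)
        beta = norm.cdf(xi, loc=2, scale=1)        # P(accept H0 | H1 true)
        print(f"threshold {xi}: alpha = {alpha:.3f}, beta = {beta:.3f}")
    # Moving the threshold to the right shrinks alpha but inflates beta, and vice versa.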
If I make my rejection region smaller one kind of error is less likely, but the other kind of error becomes more likely. So we got this trade-off. So what do we do about it? How do we move systematically? How do we come up with rejection regions? Well, what the theory basically tells you is it tells you how you should create those regions. But it doesn't tell you exactly how. It tells you the general shape of those regions. For example here, the theory who tells us that the right thing to do would be to put the threshold and make decisions one way to the right, one way to the left. But it might not necessarily tell us where to put the threshold. Still, it's useful enough to know that the way to make a good decision would be in terms of a particular threshold. Let me make this more specific. We can take our inspiration from the solution of the hypothesis testing problem that we had in the Bayesian case. In the Bayesian case we just pick the hypothesis which is more likely given the data. The produced posterior probabilities using Bayesian rule, they're written this way. And this term is the same as that term. They cancel out, then let me collect terms here and there. I get an expression here. I think the version you have in your handout is the correct one. The one on the slide was not the correct one, so I'm fixing it here. OK, so this is the form of how you make decisions in the Bayesian case. What you do in the Bayesian case, you calculate this ratio. Let's call it the likelihood ratio. And compare that ratio to a threshold. And the threshold that you should be using in the Bayesian case has something to do with the prior probabilities of the two hypotheses. In the non-Bayesian case we do not have prior probabilities, so we do not know how to set this threshold. But we're going to do is we're going to keep this particular structure anyway, and maybe use some other considerations to pick the threshold. So we're going to use a likelihood ratio test, that's how it's called in which we calculate a quantity of this kind that we call the likelihood, and compare it with a threshold. So what's the interpretation of this likelihood? We ask-- the X's that I have observed, how likely were they to occur if H1 was true? And how likely were they to occur if H0 was true? This ratio could be big if my data are plausible they might occur under H1. But they're very implausible, extremely unlikely to occur under H0. Then my thinking would be well the data that I saw are extremely unlikely to have occurred under H0. So H0 is probably not true. I'm going to go for H1 and choose H1. So when this ratio is big it tells us that the data that we're seeing are better explained if we assume H1 to be true rather than H0 to be true. So I calculate this quantity, compare it with a threshold, and that's how I make my decision. So in this particular picture, for example the way it would go would be the likelihood ratio in this picture goes monotonically with my X. So comparing the likelihood ratio to the threshold would be the same as comparing my x to the threshold, and we've got the question of how to choose the threshold. The way that the threshold is chosen is usually done by fixing one of the two probabilities of error. That is, I say, that I want my error of one particular type to be a given number, so I fix this alpha. And then I try to find where my threshold should be. So that this probability theta, probability out there, is just equal to alpha. 
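Staying with the same made-up Gaussian pair from the sketch above (H0: normal with mean 0, H1: normal with mean 2, unit variances, SciPy assumed), here is the likelihood ratio in code; for equal-variance normals it is increasing in x, which is why comparing the ratio to a threshold amounts to comparing x itself to a threshold.

    import numpy as np
    from scipy.stats import norm

    def likelihood_ratio(x):
        # f(x | H1) / f(x | H0) for the invented Gaussian pair above
        return norm.pdf(x, loc=2, scale=1) / norm.pdf(x, loc=0, scale=1)

    xs = np.linspace(-2, 4, 13)
    print(np.all(np.diff(likelihood_ratio(xs)) > 0))   # True: the ratio is monotone in x
    # So "likelihood ratio > threshold" is equivalent to "x > xi" for a suitable xi.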
And then the other probability of error, beta, will be whatever it turns out to be. So somebody picks alpha ahead of time. Based on the probability of a false rejection based on alpha, I find where my threshold is going to be. I choose my threshold, and that determines subsequently the value of beta. So we're going to continue with this story next time, and we'll stop here.
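And here is that last step, sketched for the same invented Gaussian example (SciPy assumed): fix the false-rejection probability alpha, solve for the threshold xi, and accept whatever beta comes out.

    from scipy.stats import norm

    alpha = 0.05
    xi = norm.ppf(1 - alpha, loc=0, scale=1)   # chosen so that P(X > xi | H0) = alpha
    beta = norm.cdf(xi, loc=2, scale=1)        # the resulting error probability of the other kind
    print(xi, beta)                            # roughly 1.64 and 0.36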
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
14_Poisson_Process_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So last time we started talking about random processes. A random process is a random experiment that evolves over time. And conceptually, it's important to realize that it's a single probabilistic experiment that has many stages. Actually, it has an infinite number of stages. And we discussed the simplest random process there is, the Bernoulli process, which is nothing but the sequence of Bernoulli trials-- an infinite sequence of Bernoulli trials. For example, flipping a coin over and over. Once we understand what's going on with that process, then what we want is to move into a continuous time version of the Bernoulli process. And this is what we will call the Poisson process. And for the Poisson process, we're going to do exactly the same things that we did for the Bernoulli process. That is, talk about the number of arrivals during a given time period, and talk also about the time between consecutive arrivals, and for the distribution of inter-arrival times. So let's start with a quick review of what we discussed last time. First, a note about language. If you think of coin tosses, we then talk about heads and tails. If you think of these as a sequence of trials, you can talk about successes and failures. The language that we will be using will be more the language of arrivals. That is, if in a given slot you have a success, you say that something arrived. If you have a failure, nothing arrived. And that language is a little more convenient and more natural, especially when we talk about continuous time-- to talk about arrivals instead of successes. But in any case, for the Bernoulli process let's keep, for a little bit, the language of successes. Whereas working in discrete time, we have time slots. During each time slot, we have an independent Bernoulli trial. There is probability p of having a success. Different slots are independent of each other. And this probability p is the same for any given time slot. So for this process we will discuss the one random variable of interest, which is the following. If we have n time slots, or n trials, how many arrivals will there be? Or how many successes will there be? Well, this is just given by the binomial PMF. Number of successes in n trials is a random variable that has a binomial PMF, and we know what this is. Then we talked about inter-arrival times. The time until the first arrival happens has a geometric distribution. And we have seen that from some time ago. Now if you start thinking about the time until k arrivals happen, and we denote that by Yk, this is the time until the first arrival happens. And then after the first arrival happens, you have to wait some time until the second arrival happens, and so on. And then the time from the (k -1)th arrival, until arrival number k. The important thing to realize here is that because the process has a memorylessness property, once the first arrival comes, it's as if we're starting from scratch and we will be flipping our coins until the next arrival comes. So the time it will take until the next arrival comes will also be a geometric random variable. And because different slots are independent, whatever happens after the first arrival is independent from whatever happened before. 
So T1 and T2 will be independent random variables. And similarly, all the way up to Tk. So the time until the k-th arrival is a sum of independent geometric random variables, with the same parameter p. And we saw last time that we can find the probability distribution of Yk. The probability that Yk takes a value of t is equal to-- there's this combinatorial factor here, and then you get p to the k, (1-p) to the (t-k), and this formula is true for t equal to k, k+1, and so on. And this distribution has a name. It's called the Pascal PMF. So this is all there is to know about the Bernoulli process. One important comment is to realize what exactly this memorylessness property is saying. So I discussed it a little bit last time. Let me reiterate it. So we have a Bernoulli process, which is a sequence of Bernoulli trials. And these are (0,1) random variables that keep going on forever. So someone is watching this movie of Bernoulli trials B_t. And at some point, they say they think, or something interesting has happened, why don't you come in and start watching? So at some time t, they tell you to come in and start watching. So what you will see once you come in will be this future trials. So actually what you will see is a random process, whose first random variable is going to be the first one that you see, B_(t +1). The second one is going to be this, and so on. So this is the process that's seen by the person who's asked to come in and start watching at that time. And the claim is that this process is itself a Bernoulli process, provided that the person who calls you into the room does not look into the future. The person who calls you into the room decides to call you in only on the basis of what they have seen so far. So for example, who calls you into the room might have a rule that says, as soon as I see a sequence of 3 heads, I ask the other person to come in. So if they use that particular rule, it means that when you're called in, the previous 3 were heads. But this doesn't give you any information about the future. And so the future ones will be just independent Bernoulli trials. If on the other hand, the person who calls you in has seen the movie before and they use a rule, such as, for example, I call you in just before 3 heads show up for the first time. So the person calls you in based on knowledge that these two would be three heads. If they have such foresight-- if they can look into the future-- then X1, X2, X3, they're certain to be three heads, so they do not correspond to random independent Bernoulli trials. So to rephrase this, the process is memoryless. It does not matter what has happened in the past. And that's true even if you are called into the room and start watching at a random time, as long as that random time is determined in a causal way on the basis of what has happened so far. So you are called into the room in a causal manner, just based on what's happened so far. What you're going to see starting from that time will still be a sequence of independent Bernoulli trials. And this is the argument that we used here, essentially, to argue that this T2 is an independent random variable from T1. So a person is watching the movie, sees the first success. And on the basis of what they have seen-- they have just seen the first success-- they ask you to come in. You come in. What you're going to see is a sequence of Bernoulli trials. And you wait this long until the next success comes in. 
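As a quick simulation cross-check of the three facts just reviewed -- binomial counts, a geometric first arrival time, and the time to the k-th arrival being a sum of independent geometrics -- here is a small sketch assuming NumPy; the parameters p = 0.3 and k = 3 are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n_slots, n_runs = 0.3, 200, 20_000

    trials = rng.random((n_runs, n_slots)) < p   # one Bernoulli(p) trial per slot

    # Number of arrivals in the first 20 slots: mean close to 20 * p (binomial).
    print(trials[:, :20].sum(axis=1).mean(), 20 * p)

    # Time of the first arrival: mean close to 1/p (geometric).
    first = trials.argmax(axis=1) + 1
    print(first.mean(), 1 / p)

    # Time of the 3rd arrival = sum of 3 independent geometrics: mean close to 3/p (Pascal).
    third = (trials.cumsum(axis=1) >= 3).argmax(axis=1) + 1
    print(third.mean(), 3 / p)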
What you see is a Bernoulli process, as if the process was just starting right now. And that convinces us that this should be a geometric random variable of the same kind as this one, as independent from what happened before. All right. So this is pretty much all there is to know about the Bernoulli process. Plus the two things that we did at the end of the last lecture where we merge two independent Bernoulli processes, we get a Bernoulli process. If we have a Bernoulli process and we split it by flipping a coin and sending things one way or the other, then we get two separate Bernoulli processes. And we see that all of these carry over to the continuous time. And our task for today is basically to work these continuous time variations. So the Poisson process is a continuous time version of the Bernoulli process. Here's the motivation for considering it a Bernoulli process. So you have that person whose job is to sit outside the door of a bank. And they have this long sheet, and for every one second slot, they mark an X if a person came in, or they mark something else if no one came in during that slot. Now the bank manager is a really scientifically trained person and wants very accurate results. So they tell you, don't use one second slots, use milliseconds slots. So you have all those slots and you keep filling if someone arrived or not during that slot. Well then you come up with an idea. Why use millisecond slots and keep putting crosses or zero's into each slot? It's much simpler if I just record the exact times when people came in. So time is continuous. I don't keep doing something at every time slot. But instead of the time axis, I mark the times at which customers arrive. So there's no real need for slots. The only information that you want is when did we have arrivals of people. And we want to now model a process of this kind happening in continuous time, that has the same flavor, however, as the Bernoulli process. So that's the model we want to develop. OK. So what are the properties that we're going to have? First, we're going to assume that intervals over the same length behave probabilistically in an identical fashion. So what does that mean? Think of an interval of some given length. During the interval of that length, there's going to be a random number of arrivals. And that random number of arrivals is going to have a probability distribution. So that probability distribution-- let's denote it by this notation. We fix t, we fix the duration. So this is fixed. And we look at the different k's. The probability of having 0 arrivals, the probability of 1 arrival, the probability of 2 arrivals, and so on. So this thing is essentially a PMF. So it should have the property that the sum over all k's of this P_(k, tau) should be equal to 1. Now, hidden inside this notation is an assumption of time homogeneity. That is, this probability distribution for the number of arrivals only depends on the length of the interval, but not the exact location of the interval on the time axis. That is, if I take an interval of length tau, and I ask about the number of arrivals in this interval. And I take another interval of length tau, and I ask about the number of arrivals during that interval. Number of arrivals here, and number of arrivals there have the same probability distribution, which is denoted this way. So the statistical behavior of arrivals here is the same as the statistical behavioral of arrivals there. What's the relation with the Bernoulli process? 
It's very much like the assumption in the Bernoulli process that in different slots, we have the same probability of success. Every slot looks probabilistically like any other slot. So similarly here, any interval of length tau looks probabilistically like any other interval of length tau. And the number of arrivals during that interval is a random variable described by these probabilities. Number of arrivals here is a random variable described by these same probabilities. So that's our first assumption. Then what else? In the Bernoulli process we had the assumption that different time slots were independent of each other. Here we do not have time slots, but we can still think in a similar way and impose the following assumption, that disjoint time intervals are statistically independent. What does that mean? The random number of arrivals during this interval, and the random number of arrivals during this interval, and the random number of arrivals during this interval-- so these are three different random variables-- these three random variables are independent of each other. How many arrivals we got here is independent from how many arrivals we got there. So this is similar to saying that different time slots were independent. That's what we did in discrete time. The continuous time analog is this independence assumption. So for example, in particular, the number of arrivals here is independent from the number of arrivals there. So these are two basic assumptions about the process. Now in order to write down a formula, eventually, about this probability distribution-- which is our next objective, we would like to say something specific about this distribution of the number of arrivals-- we need to add a little more structure into the problem. And we're going to make the following assumption. If we look at a time interval of length delta-- and delta now is supposed to be a small number, so a picture like this-- during a very small time interval, there is a probability that we get exactly one arrival, which is lambda times delta. Delta is the length of the interval and lambda is a proportionality factor, which is sort of the intensity of the arrival process. Bigger lambda means that a little interval is more likely to get an arrival. So there's a probability lambda times delta of 1 arrival. The remaining probability goes to 0 arrivals. And when delta is small, the probability of 2 arrivals can be approximated by 0. So this is a description of what happens during a small, tiny slot. Now this is something that's supposed to be true in some limiting sense, when delta is very small. So the exact version of this statement would be that this is an equality, plus order of delta squared terms. So this is an approximate equality. And what the approximation means is that in the limit of small deltas, the dominant terms-- the constant and the first order term-- are given by this. Now when delta is very small, second order terms in delta do not matter. They are small compared to first order terms. So we ignore this. So you can either think in terms of an exact relation, in which the probabilities are given by this plus delta squared terms. Or, if you want to be a little more loose, you just write this as an approximate equality. And the understanding is that the approximation becomes more and more correct as delta goes to 0.
So another version of that statement would be that if you take the limit as delta goes to 0, of p, the probability of having 1 arrival in an interval of length delta, divided by delta, this is equal to lambda. So that would be one version of an exact statement of what we are assuming here. So this lambda, we call it the arrival rate, or the intensity of the process. And clearly, if you double lambda, then a little interval is likely -- you expect to get -- the probability of obtaining an arrival during that interval has doubled. So in some sense we have twice as intense arrival process. If you look at the number of arrivals during delta interval, what is the expected value of that random variable? Well with probability lambda delta we get 1 arrival. And with the remaining probability, we get 0 arrivals. So it's just lambda times delta. So expected number of arrivals during a little interval is lambda times delta. So expected number of arrivals is proportional to lambda, and that's again why we call lambda the arrival rate. If you send delta to the denominator in this equality, it tells you that lambda is the expected number of arrivals per unit time. So the arrival rate is expected number of arrivals per unit time. And again, that justifies why we call lambda the intensity of this process. All right. So where are we now? For the Bernoulli process, the number of arrivals during a given interval of length n had the PMF that we knew it was the binomial PMF. What is the formula for the corresponding PMF for the continuous time process? Somehow we would like to use our assumptions and come up with the formula for this quantity. So this tells us about the distribution of number of arrivals during an interval of some general length. We have made assumptions about the number of arrivals during an interval of small length. An interval of big length is composed of many intervals of small length, so maybe this is the way to go. Take a big interval, and split it into many intervals of small length. So we have here our time axis. And we have an interval of length tau. And I'm going to split it into lots of little intervals of length delta. So how many intervals are we going to have? The number of intervals is going to be the total time, divided by delta. Now what happens during each one of these little intervals? As long as the intervals are small, what you have is that during an interval, you're going to have either 0 or 1 arrival. The probability of more than 1 arrival during a little interval is negligible. So with this picture, you have essentially a Bernoulli process that consists of so many trials. And during each one of those trials, we have a probability of success, which is lambda times delta. Different little intervals here are independent of each other. That's one of our assumptions, that these joint time intervals are independent. So approximately, what we have is a Bernoulli process. We have independence. We have the number of slots of interest. And during each one of the slots we have a certain probability of success. So if we think of this as another good approximation of the Poisson process-- with the approximation becoming more and more accurate as delta goes to 0 -- what we should do would be to take the formula for the PMF of number of arrivals in a Bernoulli process, and then take the limit as delta goes to 0. So in the Bernoulli process, the probability of k arrivals is n choose k, and then you have p to the k. Now in our case, we have here lambda times delta, delta is tau over n. 
Delta is tau over n, so p is lambda times tau divided by n. So here's our p -- Lambda tau over n -- to the power k, and then times one minus this-- this is our one minus p-- to the power n-k. So this is the exact formula for the Bernoulli process. For the Poisson process, what we do is we take that formula and we let delta go to 0. As delta goes to 0, n goes to infinity. So that's the limit that we're taking. On the other hand, this expression lambda times tau-- lambda times tau, what is it going to be? Lambda times tau is equal to n times p. n times p, is that what I want? No, let's see. Lambda tau is np. Yeah. So lambda tau is np. All right. So we have this relation, lambda tau equals np. These two numbers being equal kind of makes sense. np is the expected number of successes you're going to get in the Bernoulli process. Lambda tau-- since lambda is the arrival rate and you have a total time of tau, lambda tau you can think of it as the number of expected arrivals in the Bernoulli process. We're doing a Bernoulli approximation to the Poisson process. We take the formula for the Bernoulli, and now take the limit as n goes to infinity. Now lambda tau over n is equal to p, so it's clear what this term is going to give us. This is just p to the power k. It will actually take a little more work than that. Now I'm not going to do the algebra, but I'm just telling you that one can take the limit in this formula here, as n goes to infinity. And that will give you another formula, the final formula for the Poisson PMF. One thing to notice is that here you have something like 1 minus a constant over n, to the power n. And you may recall from calculus a formula of this kind, that this converges to e to the minus c. If you remember that formula from calculus, then you will expect that here, in the limit, you are going to get something like an e to the minus lambda tau. So indeed, we will get such a term. There is some work that needs to be done to find the limit of this expression, times that expression. The algebra is not hard, it's in the text. Let's not spend more time doing this. But let me just give you the formula of what comes at the end. And the formula that comes at the end is of this form. So what matters here is not so much the specific algebra that you will do to go from this formula to that one. It's kind of straightforward. What's important is the idea that the Poisson process, by definition, can be approximated by a Bernoulli process in which we have a very large number of slots-- n goes to infinity. Whereas we have a very small probability of success during each time slot. So a large number of slots, but tiny probability of success during each slot. And we take the limit as the slots become smaller and smaller. So with this approximation we end up with this particular formula. And this is the so-called Poisson PMF. Now this function P here -- has two arguments. The important thing to realize is that when you think of this as a PMF, you fix t to tau. And for a fixed tau, now this is a PMF. As I said before, the sum over k has to be equal to 1. So for a given tau, these probabilities add up to 1. The formula is moderately messy, but not too messy. One can work with it without too much pain. And what's the mean and variance of this PMF? Well what's the expected number of arrivals? If you think of this Bernoulli analogy, we know that the expected number of arrivals in the Bernoulli process is n times p. 
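A small numerical sketch of this limit (using only the Python standard library; the numbers lambda = 5, tau = 0.5, k = 2 are arbitrary): the binomial probabilities with p = lambda tau / n approach the Poisson value as n grows, and the resulting PMF also reproduces the small-interval property that the probability of one arrival in an interval of length delta is roughly lambda delta.

    from math import comb, exp, factorial

    lam, tau, k = 5.0, 0.5, 2                  # arbitrary illustrative numbers

    def binomial_pmf(n):
        p = lam * tau / n
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    def poisson_pmf(k, t=tau):
        return (lam * t) ** k * exp(-lam * t) / factorial(k)

    for n in [10, 100, 10_000]:
        print(n, binomial_pmf(n))              # approaches the Poisson value below
    print("Poisson:", poisson_pmf(k))

    # Small-interval property: P(exactly one arrival in delta) / delta tends to lambda.
    for delta in [0.1, 0.01, 0.001]:
        print(delta, poisson_pmf(1, t=delta) / delta)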
In the approximation that we're using in this procedure, n times p is the same as lambda tau. And that's why we get lambda tau to be the expected number of arrivals. Here I'm using t instead of tau. The expected number of arrivals is lambda t. So if you double the time, you expect to get twice as many arrivals. If you double the arrival rate, you expect to get twice as many arrivals. How about the formula for the variance? The variance in the Bernoulli process is np times one minus p. What does this go to in the limit? In the limit that we're taking, as delta goes to zero, then p also goes to zero. The probability of success in any given slot goes to zero. So this term becomes insignificant. So this becomes n times p, which is again lambda t, or lambda tau. So the variance, instead of having the more complicated formula of the variance in the Bernoulli process, here gets simplified and it's lambda t. So interestingly, the variance in the Poisson process is exactly the same as the expected value. So you can look at this as just some interesting coincidence. So now we're going to take this formula and see how to use it. First we're going to do a completely trivial, straightforward example. So 15 years ago when that example was made, email was coming at a rate of five messages per hour. I wish that was the case today. And now emails that are coming in, let's say during the day-- the arrival rates of emails are probably different in different times of the day. But if you fix a time slot, let's say 1:00 to 2:00 in the afternoon, there's probably a constant rate. And email arrivals are reasonably well modeled by a Poisson process. Speaking of modeling, it's not just email arrivals. Whenever arrivals happen in a completely random way, without any additional structure, the Poisson process is a good model of these arrivals. So the times at which car accidents will happen, that's a Poisson process. If you have a very, very weak light source that's shooting out photons, just one at a time, the times at which these photons will go out is well modeled again by a Poisson process. So it's completely random. Or if you have a radioactive material where one atom at a time changes at random times. So it's a very slow radioactive decay. The times at which these alpha particles, or whatever, get emitted, again are going to be described by a Poisson process. So if you have arrivals, or emissions, that happen at completely random times, and once in a while you get an arrival or an event, then the Poisson process is a very good model for these events. So back to emails. We get them at a rate of five messages per hour. And we're interested in 30 minutes, which is half an hour. So what we have is that lambda t, total number of arrivals is-- the expected number of arrivals is-- lambda is five, t is one-half, if we talk about hours. So lambda t is 2.5. The probability of no new messages is the probability of zero, in a time interval of length t, which, in our case, is one-half. And then we look back into the formula from the previous slide, and the probability of zero arrivals is lambda t to the power zero, divided by zero factorial, and then an e to the minus lambda t. And you plug in the numbers that we have. Lambda t to the zero power is one. Zero factorial is one. So we're left with e to the minus 2.5. And that number is 0.08. Similarly, you can ask for the probability that you get exactly one message in half an hour.
And that would be-- the probability of one message in one-half an hour-- is going to be lambda t to the first power, divided by 1 factorial, e to the minus lambda t, which-- as we now get the extra lambda t factor-- is going to be 2.5, e to the minus 2.5. And the numerical answer is 0.20. So this is how you use the PMF formula for the Poisson distribution that we had in the previous slide. All right. So this was all about the distribution of the number of arrivals. What else did we do last time? Last time we also talked about the time it takes until the k-th arrival. OK. So let's try to figure out something about this particular distribution. We can derive the distribution of the time of the k-th arrival by using the exact same argument as we did last time. So now the time of the k-th arrival is a continuous random variable. So it has a PDF. Since we are in continuous time, arrivals can happen at any time. So Yk is a continuous random variable. But now let's think of a time interval of length little delta. And use our usual interpretation of PDFs. The PDF of a random variable evaluated at a certain time times delta, this is the probability that the Yk falls in this little interval. So as I've said before, this is the best way of thinking about PDFs. PDFs give you probabilities of little intervals. So now let's try to calculate this probability. For the k-th arrival to happen inside this little interval, we need two things. We need an arrival to happen in this interval, and we need k minus one arrivals to happen during that interval. OK. You'll tell me, but it's possible that we might have the k minus one arrival happen here, and the k-th arrival to happen here. In principle, that's possible. But in the limit, when we take delta very small, the probability of having two arrivals in the same little slot is negligible. So assuming that no two arrivals can happen in the same mini slot, then for the k-th one to happen here, we must have k minus one during this interval. Now because we have assumed that these joint intervals are independent of each other, this breaks down into the probability that we have exactly k minus one arrivals, during the interval from zero to t, times the probability of exactly one arrival during that little interval, which is lambda delta. We do have a formula for this from the previous slide, which is lambda t, to the k minus 1, over k minus one factorial, times e to minus lambda t. And then lambda times delta. Did I miss something? Yeah, OK. All right. And now you cancel this delta with that delta. And that gives us a formula for the PDF of the time until the k-th arrival. This PDF, of course, depends on the number k. The first arrival is going to happen somewhere in this range of time. So this is the PDF that it has. The second arrival, of course, is going to happen later. And the PDF is this. So it's more likely to happen around these times. The third arrival has this PDF, so it's more likely to happen around those times. And if you were to take k equal to 100, you might get a PDF-- it's extremely unlikely that the k-th arrival happens in the beginning, and it might happen somewhere down there, far into the future. So depending on which particular arrival we're talking about, it has a different probability distribution. The time of the 100th arrival, of course, is expected to be a lot larger than the time of the first arrival. Incidentally, the time of the first arrival has a PDF whose form is quite simple. If you let k equal to one here, this term disappears. 
That term becomes a one. You're left with just lambda e to the minus lambda t. And you recognize it, it's the exponential distribution. So the time until the first arrival in a Poisson process is an exponential distribution. What was the time of the first arrival in the Bernoulli process? It was a geometric distribution. Well, not coincidentally, these two look quite a bit like each other. A geometric distribution has this kind of shape. The exponential distribution has that kind of shape. The geometric is just a discrete version of the exponential. In the Bernoulli case, we are in discrete time. We have a PMF for the time of the first arrival, which is geometric. In the Poisson case, what we get is the limit of the geometric as you let those lines become closer and closer, which gives you the exponential distribution. Now the Poisson process shares all the memorylessness properties of the Bernoulli process. And the way one can argue is just in terms of this picture. Since the Poisson process is the limit of Bernoulli processes, whatever qualitative properties you have in the Bernoulli process remain valid for the Poisson process. In particular we have this memorylessness property. You let the Poisson process run for some time, and then you start watching it. Whatever happened in the past has no bearing on the future. Starting from right now, what's going to happen in the future is described again by a Poisson process, in the sense that during every little slot of length delta, there's going to be a probability of lambda delta of having an arrival. And that probability lambda delta is the same-- is always lambda delta-- no matter what happened in the past of the process. And in particular, we could use this argument to say that the time until the k-th arrival is the time that it takes for the first arrival to happen, plus the waiting times in between the later arrivals. OK, let me do it for k equal to two. And then after the first arrival happens, you wait a certain amount of time until the second arrival happens. Now once the first arrival happened, that's in the past. You start watching. From now on you have mini slots of length delta, each one having a probability of success lambda delta. It's as if we started the Poisson process from scratch. So starting from that time, the time until the next arrival is going to be again an exponential distribution, which doesn't care about what happened in the past, how long it took you for the first arrival. So these two random variables are going to be independent and exponential, with the same parameter lambda. So among other things, what we have done here is we have essentially derived the PDF of the sum of k independent exponentials. The time of the k-th arrival is the sum of k inter-arrival times. The inter-arrival times are all independent of each other because of memorylessness. And they all have the same exponential distribution. And by the way, this gives you a way to simulate the Poisson process. If you wanted to simulate it on your computer, one option would be to break time into tiny, tiny slots. And for every tiny slot, use your random number generator to decide whether there was an arrival or not. To get it very accurate, you would have to use tiny, tiny slots. So that would be a lot of computation. The more clever way of simulating the Poisson process is you use your random number generator to generate a sample from an exponential distribution and call that your first arrival time.
Then go back to the random number generator, generate another independent sample, again from the same exponential distribution. That's the time between the first and the second arrival, and you keep going that way. So as a sort of a quick summary, this is the big picture. This table doesn't tell you anything new. But it's good to have it as a reference, and to look at it, and to make sure you understand what all the different boxes are. Basically the Bernoulli process runs in discrete time. The Poisson process runs in continuous time. There's an analogy of arrival rates, p per trial, or intensity per unit time. We did derive, or sketched the derivation for the PMF of the number of arrivals. And the Poisson distribution, which is the distribution that we get, this Pk of t. Pk and t is the limit of the binomial when we take the limit in this particular way, as delta goes to zero, and n goes to infinity. The geometric becomes an exponential in the limit. And the distribution of the time of the k-th arrival-- we had a closed form formula last time for the Bernoulli process. We got the closed form formula this time for the Poisson process. And we actually used exactly the same argument to get these two closed form formulas. All right. So now let's talk about adding or merging Poisson processes. And there's two statements that we can make here. One has to do with adding Poisson random variables, just random variables. There's another statement about adding Poisson processes. And the second is a bigger statement than the first. But this is a warm up. Let's work with the first statement. So the claim is that the sum of independent Poisson random variables is Poisson. OK. So suppose that we have a Poisson process with rate-- just for simplicity-- lambda one. And I take the interval from zero to two. And that take then the interval from two until five. The number of arrivals during this interval-- let's call it n from zero to two-- is going to be a Poisson random variable, with parameter, or with mean, two. The number of arrivals during this interval is n from time two until five. This is again a Poisson random variable with mean equal to three, because the arrival rate is 1 and the duration of the interval is three. These two random variables are independent. They obey the Poisson distribution that we derived before. If you add them, what you get is the number of arrivals during the interval from zero to five. Now what kind of distribution does this random variable have? Well this is the number of arrivals over an interval of a certain length in a Poisson process. Therefore, this is also Poisson with mean five. Because for the Poisson process we know that this number of arrivals is Poisson, this is Poisson, but also the number of overall arrivals is also Poisson. This establishes that the sum of a Poisson plus a Poisson random variable gives us another Poisson random variable. So adding Poisson random variables gives us a Poisson random variable. But now I'm going to make a more general statement that it's not just number of arrivals during a fixed time interval-- it's not just numbers of arrivals for given time intervals-- but rather if you take two different Poisson processes and add them up, the process itself is Poisson in the sense that this process is going to satisfy all the assumptions of a Poisson process. So the story is that you have a red bulb that flashes at random times at the rate of lambda one. It's a Poisson process. 
You have an independent process where a green bulb flashes at random times. And you happen to be color blind, so you just see when something is flashing. So these two are assumed to be independent Poisson processes. What can we say about the process that you observe? So in the processes that you observe, if you take a typical time interval of length little delta, what can happen during that little time interval? The red process may have something flashing. So red flashes. Or the red does not. And for the other bulb, the green bulb, there's two possibilities. The green one flashes. And the other possibility is that the green does not. OK. So there's four possibilities about what can happen during a little slot. The probability that the red one flashes and the green one flashes, what is this probability? It's lambda one delta that the first one flashes, and lambda two delta that the second one does. I'm multiplying probabilities here because I'm making the assumption that the two processes are independent. OK. Now the probability that the red one flashes is lambda one delta. But the green one doesn't is one, minus lambda two delta. Here the probability would be that the red one does not, times the probability that the green one does. And then here we have the probability that none of them flash, which is whatever is left. But it's one minus lambda one delta, times one minus lambda two delta. Now we're thinking about delta as small. So think of the case where delta goes to zero, but in a way that we keep the first order terms. We keep the delta terms, but we throw away the delta squared terms. Delta squared terms are much smaller than the delta terms when delta becomes small. If we do that-- if we only keep the order of delta terms-- this term effectively disappears. This is delta squared. So we make it zero. So the probability of having simultaneously a red and a green flash during a little interval is negligible. What do we get here? Lambda delta times one survives, but this times that doesn't. So we can throw that away. So the approximation that we get is lambda one delta. Similarly here, this goes away. We're left with a lambda two delta. And this is whatever remains, whatever is left. So what do we have? That there is a probability of seeing a flash, either a red or a green, which is lambda one delta, plus lambda two delta. So if we take a little interval of length delta here, it's going to see an arrival with probability approximately lambda one, plus lambda two, delta. So every slot in this merged process has an arrival probability with a rate which is the sum of the rates of these two processes. So this is one part of the definition of the Poisson process. There's a few more things that one would need to verify. Namely, that intervals of the same length have the same probability distribution and that different slots are independent of each other. This can be argued by starting from here because different intervals in this process are independent from each other. Different intervals here are independent from each other. It's not hard to argue that different intervals in the merged process will also be independent of each other. So the conclusion that comes at the end is that this process is a Poisson process, with a total rate which is equal to the sum of the rate of the two processes. And now if I tell you that an arrival happened in the merged process at a certain time, how likely is it that it came from here? How likely is it? We go to this picture. 
Given that an arrival occurred-- which is the event that this or that happened-- what is the probability that it came from the first process, the red one? Well it's the probability of this divided by the probability of this, times that. Given that this event occurred, you want to find the conditional probability of that sub event. So we're asking the question, out of the total probability of these two, what fraction of that probability is assigned here? And this is lambda one delta, after we ignore the other terms. This is lambda two delta. So that fraction is going to be lambda one, over lambda one plus lambda two. What does this tell you? If lambda one and lambda two are equal, given that I saw an arrival here, it's equally likely to be red or green. But if the reds have a much higher arrival rate, when I see an arrival here, it's more likely this number will be large. So it's more likely to have come from the red process. OK so we'll continue with this story and do some applications next time.
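As one last check of that splitting probability, the same kind of simulation works: count what fraction of the merged arrivals came from the red stream. The fraction should approach lambda one over lambda one plus lambda two; the numbers below are again just illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2 = 1.0, 3.0           # illustrative rates; red is the slower process
T = 10_000.0

def arrivals(rate, horizon):
    n_guess = int(rate * horizon * 1.5) + 100
    times = np.cumsum(rng.exponential(1.0 / rate, size=n_guess))
    return times[times <= horizon]

red = arrivals(lam1, T)
green = arrivals(lam2, T)

# Fraction of merged arrivals that belong to the red process.
frac_red = len(red) / (len(red) + len(green))
print(frac_red, lam1 / (lam1 + lam2))    # both close to 0.25
```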
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Absent_Minded_Professor.txt
Hi. In this problem, we have an absent-minded professor who will inadvertently give us some practice with exponential random variables. So the professor has made two appointments with two students and inadvertently made them at the same time. And what we do is we model the duration of these appointments with an exponential random variable. So remember, an exponential random variable is a continuous random variable that takes on non-negative values, and it's parametrized by a rate parameter, lambda. And the exponential random variable is often used to model durations of time-- so time until something happens, so for example, in this case, time until the student leaves or the appointment is over. Or sometimes you will also use it as a model of the time until something fails. And one thing that will be useful is the CDF of this exponential random variable. So the probability that it's less than or equal to some value, little t, is equal to 1 minus e to the minus lambda t. So this is, of course, valid only when t is non-negative. The other useful property is that the expected value of an exponential random variable is just 1 over the parameter lambda. And the last thing that we'll use specifically in this problem is the memoryless property of exponential random variables. And so recall that that just means that if you drop in somewhere in the middle of an exponential random variable's lifetime, the distribution going forward from the point where you dropped in is exactly the same as if it had just started over. So that's why we call it the memoryless property. Basically, the past doesn't really matter, and going forward from whatever point you observe it at, it looks as if it had just started over afresh. And the last thing we'll use, which is a review of a concept from earlier, is total expectation. So let's actually model this problem with two random variables. Let's let T1 be the time that the first student takes in the appointment and T2 be the time that the second student takes. And what we're told in the problem is that they're both exponential with mean 30 minutes. So remember, the mean being 30 minutes means that the lambda is 1 over the mean. And so the lambda in this case would be 1/30. And importantly, we're also told that they are independent. So how long the first person takes is independent of how long the second person takes. So the first student arrives on time and takes some random amount of time, T1. The second student arrives exactly five minutes late. And whenever the second person meets with the professor, that student will then take some random amount of time, T2. What we're asked to do is find the expected time between when the first student arrives-- so we can just call that time 0-- and when the second student leaves. Now you may say, well, we're dealing with expectations, so it's easy. And in this case, it probably is just the expectation of how long the first student takes plus the expectation of how long the second student takes. So it should be about 60 minutes, or exactly 60 minutes. Now, why is that not exactly right? It's because there is a small wrinkle, that the students may not go exactly back to back. So let's actually draw out a time frame of what might actually happen. So here's time 0, when the first student arrives. And the first will go for some amount of time and leave. And now let's consider two scenarios. One scenario is that the first student takes more than five minutes to finish.
Well then the second student will have arrived at 5 minutes and then will already be waiting whenever this first student leaves. So then the second student will immediately pick up and continue. And in that case, we do have two exponentials back to back. But there could be another situation. Suppose that the first student didn't take very long at all and finished within five minutes, in which case the second student hasn't arrived yet. So the professor is idle in between here. And so we actually don't necessarily have two of them going back to back. So there's an empty period in between that we have to account for. So with that in mind, we see that we have two scenarios. And so what does that suggest we use? Well, we can split them up into the two scenarios, calculate expectations within each one, and then use total expectation to find the overall expected length of time. OK, so let's begin with the first scenario. The first scenario is that, let's say, the first student finished within five minutes. So what does that mean in terms of the definitions that we've used? That means T1 is less than or equal to 5. So if the first student took less than five minutes, then what happens? Then we need the total amount of time-- let's give that a name. Let's call that X. So X is the random variable that we're interested in, the time between when the first student comes and the second student leaves. This is the value that we want to find. Well, we know that we're guaranteed that there will be a five-minute interval. So the first student will come, and then the second person will take over. So we're guaranteed that the first five minutes will be the difference between when time starts and when the second student arrives. And then, after that, it's just however long the second student takes, which is just the expected value of T2. And T2 is an exponential random variable with mean 30. So in this case, it's just 35. So if the first student doesn't take very long, then we just get the five minutes, that little buffer, plus however long the second student takes, which, on average, is 30 minutes. Now what is the probability of this happening? The probability of this happening is the probability that the first student takes less than five minutes. And here is where we use the CDF that we wrote out earlier. It's going to be 1 minus e to the minus lambda t. So in this case, t is five and lambda is 1/30. So the probability is 1 minus e to the minus 5/30. All right, now let's consider the second case. The second case is that the first student actually takes longer than five minutes. OK, so what happens in that case? Here's the five-minute mark. The first student's appointment has reached five minutes. The second student has arrived, and the first student is still going. So he goes for some amount of time. And then whenever he finishes, the second student continues. So now the question is, what is the total amount of time in this case? Well, you can think of it using the memoryless property. This is where it comes in. So the first five minutes, we know that they were already taken, because we're considering the second scenario, in which we're given that T1 is greater than 5. And so the question now is, if we know that, how much longer does it take? How much longer past the five-minute mark does the first student take? And by the memoryless property, we know that it's as if the first student started over.
So there was no memory of the first five minutes, and it's as if the first student just arrived also at the five-minute mark and met with the professor. So past the five-minute mark, it's as if you have a new exponential random variable, still with mean 30. And so what we get is that, in this case, you get the guaranteed five minutes, and then you get the memoryless continuation of the first student's appointment. So you get another 30 minutes on average because of the memoryless property. And then whenever the first student finally does finish up, the second student will immediately take over because he has already arrived. It's past the five-minute mark. And then that second student will take, again, on average, 30 more minutes. So what you get is, in this case, the appointment lasts 65 minutes on average. Now what is the probability of this case? The probability of this case is the probability that T1 is greater than 5. And now we know that that is just the complement of this, 1 minus that. So it's just e to the minus 5/30. So now we have both scenarios. We have the probabilities of each scenario, and we have the expectation under each scenario. Now all that remains is to combine them using total expectation. So I really should have written expectation of X given T1 is less than or equal to 5 here. And this is expectation of X given that T1 is greater than 5. So expectation of X overall is the probability that T1 is less than or equal to 5 times the expectation of X given that T1 is less than or equal to 5, plus the probability that T1 is greater than 5 times the expectation of X given that T1 is greater than 5. And we have all four of these pieces here. So it's 35 times 1 minus e to the minus 5/30, plus 65 times e to the minus 5/30. And it turns out that this is approximately equal to 60.394 minutes. All right, so what have we found? We found that the original guess that we had, if we just had two meetings back to back, was that on average it would take 60 minutes. It turns out that, because of the way that things are set up, because of the five-minute late arrival, it actually takes a little longer than 60 minutes on average. And why is that? It's because the five minutes sometimes adds an extra buffer. It would have been shorter in this scenario because, if both students had arrived on time, then the second student would have been able to pick up right here immediately. And so both appointments would have ended sooner. But because the second student didn't arrive until five minutes, there was some empty space that was wasted. And that's where you get the little bit of extra time. So this is a nice problem just to get some more exercise with exponential random variables, and it also nicely illustrates the memoryless property, which was a key point in solving this. And it also is nice because we get to review a useful tool that we've been using all course long, which is to split things into different scenarios, solve the simpler problems, and then combine them, for example using total expectation. So I hope that was helpful, and see you next time.
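Both the closed-form answer and a brute-force check fit in a few lines of Python; the simulation size is arbitrary, and the only assumptions are the ones in the problem (independent exponentials with mean 30, second student 5 minutes late).

```python
import numpy as np

rng = np.random.default_rng(3)
mean = 30.0                     # mean appointment length in minutes
late = 5.0                      # second student arrives 5 minutes late

# Closed form from total expectation:
# E[X] = P(T1 <= 5) * 35 + P(T1 > 5) * 65
p_short = 1 - np.exp(-late / mean)
closed_form = p_short * 35.0 + (1 - p_short) * 65.0
print(closed_form)              # about 60.394

# Monte Carlo check: the second appointment starts at max(T1, 5).
n = 1_000_000
t1 = rng.exponential(mean, size=n)
t2 = rng.exponential(mean, size=n)
x = np.maximum(t1, late) + t2   # time from student 1's arrival to student 2's departure
print(x.mean())                 # should also be close to 60.4
```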
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
2_Conditioning_and_Bayes_Rule.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu JOHN TSISIKLIS: So here's the agenda for today. We're going to do a very quick review. And then we're going to introduce some very important concepts. The idea is that all information is-- Information is always partial. And the question is what do we do to probabilities if we have some partial information about the random experiments. We're going to introduce the important concept of conditional probability. And then we will see three very useful ways in which it is used. And these ways basically correspond to divide and conquer methods for breaking up problems into simpler pieces. And also one more fundamental tool which allows us to use conditional probabilities to do inference, that is, if we get a little bit of information about some phenomenon, what can we infer about the things that we have not seen? So our quick review. In setting up a model of a random experiment, the first thing to do is to come up with a list of all the possible outcomes of the experiment. So that list is what we call the sample space. It's a set. And the elements of the sample space are all the possible outcomes. Those possible outcomes must be distinguishable from each other. They're mutually exclusive. Either one happens or the other happens, but not both. And they are collectively exhaustive, that is no matter what the outcome of the experiment is going to be an element of the sample space. And then we discussed last time that there's also an element of art in how to choose your sample space, depending on how much detail you want to capture. This is usually the easy part. Then the more interesting part is to assign probabilities to our model, that is to make some statements about what we believe to be likely and what we believe to be unlikely. The way we do that is by assigning probabilities to subsets of the sample space. So as we have our sample space here, we may have a subset A. And we assign a number to that subset P(A), which is the probability that this event happens. Or this is the probability that when we do the experiment and we get an outcome it's the probability that the outcome happens to fall inside that event. We have certain rules that probabilities should satisfy. They're non-negative. The probability of the overall sample space is equal to one, which expresses the fact that we're are certain, no matter what, the outcome is going to be an element of the sample space. Well, if we set the top right so that it exhausts all possibilities, this should be the case. And then there's another interesting property of probabilities that says that, if we have two events or two subsets that are disjoint, and we're interested in the probability, that one or the other happens, that is the outcome belongs to A or belongs to B. For disjoint events the total probability of these two, taken together, is just the sum of their individual probabilities. So probabilities behave like masses. The mass of the object consisting of A and B is the sum of the masses of these two objects. Or you can think of probabilities as areas. They have, again, the same property. The area of A together with B is the area of A plus the area B. 
But as we discussed at the end of last lecture, it's useful to have in our hands a more general version of this additivity property, which says the following, if we take a sequence of sets-- A1, A2, A3, A4, and so on. And we put all of those sets together. It's an infinite sequence. And we ask for the probability that the outcome falls somewhere in this infinite union, that is we are asking for the probability that the outcome belongs to one of these sets, and assuming that the sets are disjoint, we can again find the probability for the overall set by adding up the probabilities of the individual sets. So this is a nice and simple property. But it's a little more subtle than you might think. And let's see what's going on by considering the following example. We had an example last time where we take our sample space to be the unit square. And we said let's consider a probability law that says that the probability of a subset is just the area of that subset. So let's consider this probability law. OK. Now the unit square is the set --let me just draw it this way-- the unit square is the union of one element set consisting all of the points. So the unit square is made up by the union of the various points inside the square. So union over all x's and y's. OK? So the square is made up out of all the points that this contains. And now let's do a calculation. One is the probability of our overall sample space, which is the unit square. Now the unit square is the union of these things, which, according to our additivity axiom, is the sum of the probabilities of all of these one element sets. Now what is the probability of a one element set? What is the probability of this one element set? What's the probability that our outcome is exactly that particular point? Well, it's the area of that set, which is zero. So it's just the sum of zeros. And by any reasonable definition the sum of zeros is zero. So we just proved that one is equal to zero. OK. Either probability theory is dead or there is some mistake in the derivation that I did. OK, the mistake is quite subtle and it comes at this step. We're sort of applied the additivity axiom by saying that the unit square is the union of all those sets. Can we really apply our additivity axiom. Here's the catch. The additivity axiom applies to the case where we have a sequence of disjoint events and we take their union. Is this a sequence of sets? Can you make up the whole unit square by taking a sequence of elements inside it and cover the whole unit square? Well if you try, if you start looking at the sequence of one element points, that sequence will never be able to exhaust the whole unit square. So there's a deeper reason behind that. And the reason is that infinite sets are not all of the same size. The integers are an infinite set. And you can arrange the integers in a sequence. But the continuous set like the units square is a bigger set. It's so-called uncountable. It has more elements than any sequence could have. So this union here is not of this kind, where we would have a sequence of events. It's a different kind of union. It's a Union that involves a union of many, many more sets. So the countable additivity axiom does not apply in this case. Because, we're not dealing with a sequence of sets. And so this is the incorrect step. So at some level you might think that this is puzzling and awfully confusing. On the other hand, if you think about areas of the way you're used to them from calculus, there's nothing mysterious about it. 
Every point on the unit square has zero area. When you put all the points together, they make up something that has finite area. So there shouldn't be any mystery behind it. Now, one interesting thing that this discussion tells us, especially the fact that the single elements set has zero area, is the following-- Individual points have zero probability. After you do the experiment and you observe the outcome, it's going to be an individual point. So what happened in that experiment is something that initially you thought had zero probability of occurring. So if you happen to get some particular numbers and you say, "Well, in the beginning, what did I think about those specific numbers? I thought they had zero probability. But yet those particular numbers did occur." So one moral from this is that zero probability does not mean impossible. It just means extremely, extremely unlikely by itself. So zero probability things do happen. In such continuous models, actually zero probability outcomes are everything that happens. And the bumper sticker version of this is to always expect the unexpected. Yes? AUDIENCE: [INAUDIBLE]. JOHN TSISIKLIS: Well, probability is supposed to be a real number. So it's either zero or it's a positive number. So you can think of the probability of things just close to that point and those probabilities are tiny and close to zero. So that's how we're going to interpret probabilities in continuous models. But this is two chapters ahead. Yeah? AUDIENCE: How do we interpret probability of zero? If we can use models that way, then how about probability of one? That it it's extremely likely but not necessarily for certain? JOHN TSISIKLIS: That's also the case. For example, if you ask in this continuous model, if you ask me for the probability that x, y, is different than the zero, zero this is the whole square, except for one point. So the area of this is going to be one. But this event is not entirely certain because the zero, zero outcome is also possible. So again, probability of one means essential certainty. But it still allows the possibility that the outcome might be outside that set. So these are some of the weird things that are happening when you have continuous models. And that's why we start to this class with discrete models, on which would be spending the next couple of weeks. OK. So now once we have set up our probability model and we have a legitimate probability law that has these properties, then the rest is usually simple. Somebody asks you a question of calculating the probability of some event. While you were told something about the probability law, such as for example the probabilities are equal to areas, and then you just need to calculate. In these type of examples somebody would give you a set and you would have to calculate the area of that set. So the rest is just calculation and simple. Alright, so now it's time to start with our main business for today. And the starting point is the following-- You know something about the world. And based on what you know when you set up a probability model and you write down probabilities for the different outcomes. Then something happens, and somebody tells you a little more about the world, gives you some new information. This new information, in general, should change your beliefs about what happened or what may happen. So whenever we're given new information, some partial information about the outcome of the experiment, we should revise our beliefs. 
And conditional probabilities are just the probabilities that apply after the revision of our beliefs, when we're given some information. So lets make this into a numerical example. So inside the sample space, this part of the sample space, let's say has probability 3/6, this part has 2/6, and that part has 1/6. I guess that means that out here we have zero probability. So these were our initial beliefs about the outcome of the experiment. Suppose now that someone comes and tells you that event B occurred. So they don't tell you the full outcome with the experiment. But they just tell you that the outcome is known to lie inside this set B. Well then, you should certainly change your beliefs in some way. And your new beliefs about what is likely to occur and what is not is going to be denoted by this notation. This is the conditional probability that the event A is going to occur, the probability that the outcome is going to fall inside the set A given that we are told and we're sure that the event lies inside the event B Now once you're told that the outcome lies inside the event B, then our old sample space in some ways is irrelevant. We have then you sample space, which is just the set B. We are certain that the outcome is going to be inside B. For example, what is this conditional probability? It should be one. Given that I told you that B occurred, you're certain that B occurred, so this has unit probability. So here we see an instance of revision of our beliefs. Initially, event B had the probability of (2+1)/6 -- that's 1/2. Initially, we thought B had probability 1/2. Once we're told that B occurred, the new probability of B is equal to one. OK. How do we revise the probability that A occurs? So we are going to have the outcome of the experiment. We know that it's inside B. So we will either get something here, and A does not occur. Or something inside here, and A does occur. What's the likelihood that, given that we're inside B, the outcome is inside here? Here's how we're going to think about. This part of this set B, in which A also occurs, in our initial model was twice as likely as that part of B. So outcomes inside here collectively were twice as likely as outcomes out there. So we're going to keep the same proportions and say, that given that we are inside the set B, we still want outcomes inside here to be twice as likely outcomes there. So the proportion of the probabilities should be two versus one. And these probabilities should add up to one because together they make the conditional probability of B. So the conditional probabilities should be 2/3 probability of being here and 1/3 probability of being there. That's how we revise our probabilities. That's a reasonable, intuitively reasonable, way of doing this revision. Let's translate what we did into a definition. The definition says the following, that the conditional probability of A given that B occurred is calculated as follows. We look at the total probability of B. And out of that probability that was inside here, what fraction of that probability is assigned to points for which the event A also occurs? Does it give us the same numbers as we got with this heuristic argument? Well in this example, probability of A intersection B is 2/6, divided by total probability of B, which is 3/6, and so it's 2/3, which agrees with this answer that's we got before. So the former indeed matches what we were trying to do. One little technical detail. 
If the event B has zero probability, then here we have a ratio that doesn't make sense. So in this case, we say that conditional probabilities are not defined. Now you can take this definition and unravel it and write it in this form. The probability of A intersection B is the probability of B times the conditional probability. So this is just a consequence of the definition but it has a nice interpretation. Think of probabilities as frequencies. If I do the experiment over and over, what fraction of the time is it going to be the case that both A and B occur? Well, there's going to be a certain fraction of the time at which B occurs. And out of those times when B occurs, there's going to be a further fraction of the experiments in which A also occurs. So interpret the conditional probability as follows. You only look at those experiments at which B happens to occur. And look at what fraction of those experiments where B already occurred, event A also occurs. And there's a symmetrical version of this equality. There's symmetry between the events B and A. So you also have this relation that goes the other way. OK, so what do we use these conditional probabilities for? First, one comment. Conditional probabilities are just like ordinary probabilities. They're the new probabilities that apply in a new universe where event B is known to have occurred. So we had an original probability model. We are told that B occurs. We revise our model. Our new model should still be a legitimate probability model. So it should satisfy all sorts of properties that ordinary probabilities do satisfy. So for example, if A and B are disjoint events, then we know that the probability of A union B is equal to the probability of A plus probability of B. And now if I tell you that a certain event C occurred, we're placed in a new universe where event C occurred. We have new probabilities for that universe. These are the conditional probabilities. And conditional probabilities also satisfy this kind of property. So this is just our usual additivity axiom but applied in a new model, in which we were told that event C occurred. So conditional probabilities do not taste or smell any different than ordinary probabilities do. Conditional probabilities, given a specific event B, just form a probability law on our sample space. It's a different probability law but it's still a probability law that has all of the desired properties. OK, so where do conditional probabilities come up? They do come up in quizzes and they do come up in silly problems. So let's start with this. We have this example from last time. Two rolls of a four-sided die, all possible pairs of rolls are equally likely, so every element in this square has probability of 1/16. So all elements are equally likely. That's our original model. Then somebody comes and tells us that the minimum of the two rolls is equal to two. What's that event? The minimum equal to two can happen in many ways: if we get two two's, or if we get a two and something larger. And so this is our new event B. The red event is the event B. And now we want to calculate probabilities inside this new universe. For example, you may be interested in questions about the maximum of the two rolls. In the new universe, what's the probability that the maximum is equal to one? The maximum being equal to one is this black event. And given that we're told that B occurred, this black event cannot happen. So this probability is equal to zero.
How about the maximum being equal to two, given that event B? OK, we can use the definition here. It's going to be the probability that the maximum is equal to two and B occurs divided by the probability of B. The probability that the maximum is equal to two. OK, what's the event that the maximum is equal to two? Let's draw it. This is going to be the blue event. The maximum is equal to two if we get any of those blue points. So the intersection of the two events is the intersection of the red event and the blue event. There's only one point in their intersection. So the probability of that intersection happening is 1/16. That's the numerator. How about the denominator? The event B consists of five elements, each one of which had probability of 1/16. So that's 5/16. And so the answer is 1/5. Could we have gotten this answer in a faster way? Yes. Here's how it goes. We're trying to find the conditional probability that we get this point, given that B occurred. B consist of five elements. All of those five elements were equally likely when we started, so they remain equally likely afterwards. Because when we define conditional probabilities, we keep the same proportions inside the set. So the five red elements were equally likely. They remain equally likely in the conditional world. So conditional event B having happened, each one of these five elements has the same probability. So the probability that we actually get this point is going to be 1/5. And so that's the shortcut. More generally, whenever you have a uniform distribution on your initial sample space, when you condition on an event, your new distribution is still going to be uniform, but on the smaller events of that we considered. So we started with a uniform distribution on the big square and we ended up with a uniform distribution just on the red point. Now besides silly problems, however, conditional probabilities show up in real and interesting situations. And this example is going to give you some idea of how that happens. OK. Actually, in this example, instead of starting with a probability model in terms of regular probabilities, I'm actually going to define the model in terms of conditional probabilities. And we'll see how this is done. So here's the story. There may be an airplane flying up in the sky, in a particular sector of the sky that you're watching. Sometimes there is one sometimes there isn't. And from experience you know that when you look up, there's five percent probability that the plane is flying above there and 95% probability that there's no plane up there. So event A is the event that the plane is flying out there. Now you bought this wonderful radar that's looks up. And you're told in the manufacturer's specs that, if there is a plane out there, your radar is going to register something, a blip on the screen with probability 99%. And it will not register anything with probability one percent. So this particular part of the picture is a self-contained probability model of what your radar does in a world where a plane is out there. So I'm telling you that the plane is out there. So we're now dealing with conditional probabilities because I gave you some particular information. Given this information that the plane is out there, that's how your radar is going to behave with probability 99% is going to detect it, with probability one percent is going to miss it. So this piece of the picture is a self-contained probability model. The probabilities add up to one. But it's a piece of a larger model. 
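Coming back for a moment to the die example before continuing with the radar: the shortcut can also be confirmed by brute-force enumeration. The sketch below assumes, as the 1/16 probabilities suggest, a four-sided die with faces 1 through 4.

```python
from itertools import product

# Two rolls of a four-sided die: the 16 equally likely pairs in the square.
outcomes = list(product(range(1, 5), repeat=2))
B = [(x, y) for x, y in outcomes if min(x, y) == 2]   # conditioning event

p_max1_given_B = sum(1 for x, y in B if max(x, y) == 1) / len(B)
p_max2_given_B = sum(1 for x, y in B if max(x, y) == 2) / len(B)
print(len(B), p_max1_given_B, p_max2_given_B)         # 5, 0.0, 0.2 (i.e. 1/5)
```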
Similarly, there's the other possibility. Maybe a plane is not up there and the manufacturer specs tell you something about false alarms. A false alarm is the situation where the plane is not there, but for some reason your radar picked up some noise or whatever and shows a blip on the screen. And suppose that this happens with probability ten percent. Whereas with probability 90% your radar gives the correct answer. So this is sort of a model of what's going to happen with respect to both the plane -- we're given probabilities about this -- and we're given probabilities about how the radar behaves. So here I have indirectly specified the probability law in our model by starting with conditional probabilities as opposed to starting with ordinary probabilities. Can we derive ordinary probabilities starting from the conditional number ones? Yeah, we certainly can. Let's look at this event, A intersection B, which is the event up here, that there is a plane and our radar picks it up. How can we calculate this probability? Well we use the definition of conditional probabilities and this is the probability of A times the conditional probability of B given A. So it's 0.05 times 0.99. And the answer, in case you care-- It's 0.0495. OK. So we can calculate the probabilities of final outcomes, which are the leaves of the tree, by using the probabilities that we have along the branches of the tree. So essentially, what we ended up doing was to multiply the probability of this branch times the probability of that branch. Now, how about the answer to this question. What is the probability that our radar is going to register something? OK, this is an event that can happen in multiple ways. It's the event that consists of this outcome. There is a plane and the radar registers something together with this outcome, there is no plane but the radar still registers something. So to find the probability of this event, we need the individual probabilities of the two outcomes. For the first outcome, we already calculated it. For the second outcome, the probability that this happens is going to be this probability 95% times 0.10, which is the conditional probability for taking this branch, given that there was no plane out there. So we just add the numbers. 0.05 times 0.99 plus 0.95 times 0.1 and the final answer is 0.1445. OK. And now here's the interesting question. Given that your radar recorded something, how likely is it that there is an airplane up there? Your radar registering something -- that can be caused by two things. Either there's a plane there, and your radar did its job. Or there was nothing, but your radar fired a false alarm. What's the probability that this is the case as opposed to that being the case? OK. The intuitive shortcut would be that it should be the probability-- you look at their relative odds of these two elements and you use them to find out how much more likely it is to be there as opposed to being there. But instead of doing this, let's just write down the definition and just use it. It's the probability of A and B happening, divided by the probability of B. This is just our definition of conditional probabilities. Now we have already found the numerator. We have already calculated the denominator. So we take the ratio of these two numbers and we find the final answer -- which is 0.34. OK. There's this slightly curious thing that's happened in this example. Doesn't this number feel a little too low? 
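Before getting to why it feels low, here is the whole radar calculation collected in a few lines, using exactly the numbers from the tree.

```python
# Radar example: prior and measurement model from the lecture.
p_plane = 0.05                  # P(A): a plane is up there
p_detect = 0.99                 # P(B | A): radar registers, given a plane
p_false_alarm = 0.10            # P(B | not A): radar registers, given no plane

p_plane_and_blip = p_plane * p_detect                       # 0.0495
p_blip = p_plane_and_blip + (1 - p_plane) * p_false_alarm   # 0.1445
p_plane_given_blip = p_plane_and_blip / p_blip              # about 0.3426

print(p_plane_and_blip, p_blip, round(p_plane_given_blip, 4))
```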
My radar -- So this is a conditional probability, given that my radar said there is something out there, that there is indeed something there. So it's sort of the probability that our radar gave the correct answer. Now, the specs of our radar we're pretty good. In this situation, it gives you the correct answer 99% of the time. In this situation, it gives you the correct answer 90% of the time. So you would think that your radar there is really reliable. But yet here the radar recorded something, but the chance that the answer that you get out of this is the right one, given that it recorded something, the chance that there is an airplane out there is only 30%. So you cannot really rely on the measurements from your radar, even though the specs of the radar were really good. What's the reason for this? Well, the reason is that false alarms are pretty common. Most of the time there's nothing. And there's a ten percent probability of false alarms. So there's roughly a ten percent probability that in any given experiment, you have a false alarm. And there is about the five percent probability that something out there and your radar gets it. So when your radar records something, it's actually more likely to be a false alarm rather than being an actual airplane. This has probability ten percent roughly. This has probability roughly five percent So conditional probabilities are sometimes counter-intuitive in terms of the answers that they get. And you can make similar stories about doctors interpreting the results of tests. So you tested positive for a certain disease. Does it mean that you have the disease necessarily? Well if that disease has been eradicated from the face of the earth, testing positive doesn't mean that you have the disease, even if the test was designed to be a pretty good one. So unfortunately, doctors do get it wrong also sometimes. And the reasoning that comes in such situations is pretty subtle. Now for the rest of the lecture, what we're going to do is to take this example where we did three things and abstract them. These three trivial calculations that's we just did are three very important, very basic tools that you use to solve more general probability problems. So what's the first one? We find the probability of a composite event, two things happening, by multiplying probabilities and conditional probabilities. More general version of this, look at any situation, maybe involving lots and lots of events. So here's a story that event A may happen or may not happen. Given that A occurred, it's possible that B happens or that B does not happen. Given that B also happens, it's possible that the event C also happens or that event C does not happen. And somebody specifies for you a model by giving you all these conditional probabilities along the way. Notice what we move along the branches as the tree progresses. Any point in the tree corresponds to certain events having happened. And then, given that this has happened, we specify conditional probabilities. Given that this has happened, how likely is it for that C also occurs? Given a model of this kind, how do we find the probability or for this event? The answer is extremely simple. All that you do is move along with the tree and multiply conditional probabilities along the way. So in terms of frequencies, how often do all three things happen, A, B, and C? You first see how often does A occur. Out of the times that A occurs, how often does B occur? 
And out of the times where both A and B have occurred, how often does C occur? And you can just multiply those three frequencies with each other. What is the formal proof of this? Well, the only thing we have in our hands is the definition of conditional probabilities. So let's just use this. And-- OK. Now, the definition of conditional probabilities tells us that the probability of two things is the probability of one of them times a conditional probability. Unfortunately, here we have the probability of three things. What can I do? I can put a parenthesis in here and think of this as the probability of this and that and apply our definition of conditional probabilities here. The probability of two things happening is the probability that the first happens times the conditional probability that the second happens, given A and B, given that the first one happened. So this is just the definition of the conditional probability of an event, given another event. That other event is a composite one, but that's not an issue. It's just an event. And then we use the definition of conditional probabilities once more to break this apart and make it P(A), P(B given A) and then finally, the last term. OK. So this proves the formula that I have up there on the slides. And if you wish to calculate any other probability in this diagram. For example, if you want to calculate this probability, you would still multiply the conditional probabilities along the different branches of the tree. In particular, here in this branch, you would have the conditional probability of C complement, given A intersection B complement, and so on. So you write down probabilities along all those tree branches and just multiply them as you go. So this was the first skill that we are covering. What was the second one? What we did was to calculate the total probability of a certain event B that consisted of-- was made up from different possibilities, which corresponded to different scenarios. So we wanted to calculate the probability of this event B that consisted of those two elements. Let's generalize. So we have our big model. And this sample space is partitioned in a number of sets. In our radar example, we had a partition in two sets. Either a plane is there, or a plane is not there. Since we're trying to generalize, now I'm going to give you a picture for the case of three possibilities or three possible scenarios. So whatever happens in the world, there are three possible scenarios, A1, A2, A3. So think of these as there's nothing in the air, there's an airplane in the air, or there's a flock of geese flying in the air. So there's three possible scenarios. And then there's a certain event B of interest, such as a radar records something or doesn't record something. We specify this model by giving probabilities for the Ai's-- That's the probability of the different scenarios. And somebody also gives us the probabilities that this event B is going to occur, given that the Ai-th scenario has occurred. Think of the Ai's as scenarios. And we want to calculate the overall probability of the event B. What's happening in this example? Perhaps, instead of this picture, it's easier to visualize if I go back to the picture I was using before. We have three possible scenarios, A1, A2, A3. And under each scenario, B may happen or B may not happen. And so on. So here we have A2 intersection B. And here we have A3 intersection B. 
In the previous slide, we found how to calculate the probability of any event of this kind, which is done by multiplying probabilities here and conditional probabilities there. Now we are asked to calculate the total probability of the event B. The event B can happen in three possible ways. It can happen here. It can happen there. And it can happen here. So this is our event B. It consists of three elements. To calculate the total probability of our event B, all we need to do is to add these three probabilities. So B is an event that consists of these three elements. There are three ways that B can happen. Either B happens together with A1, or B happens together with A2, or B happens together with A3. So we need to add the probabilities of these three contingencies. For each one of those contingencies, we can calculate its probability by using the multiplication rule. So the probability of A1 and B happening is this-- It's the probability of A1 and then B happening given that A1 happens. The probability of this contingency is found by taking the probability that A2 happens times the conditional probability of B, given that A2 happened. And similarly for the third one. So this is the general rule that we have here. The rule is written for the case of three scenarios. But obviously, it has a generalization for the case of four or five or more scenarios. It gives you a way of breaking up the calculation of an event that can happen in multiple ways by considering individual probabilities for the different ways that the event can happen. OK. So-- Yes? AUDIENCE: Does this have to change for infinite sample space? JOHN TSISIKLIS: No. This is true whether your sample space is infinite or finite. What I'm using in this argument is that we have a partition into just three scenarios, three events. So it's a partition to a finite number of events. It's also true if it's a partition into an infinite sequence of events. But that's, I think, one of the theoretical problems at the end of the chapter. You probably may not need it for now. OK, going back to the story here. There are three possible scenarios about what could happen in the world that are captured here. Under each scenario, event B may or may not happen. And so these probabilities tell us the likelihoods of the different scenarios. These conditional probabilities tell us how likely is it for B to happen under one scenario, or the other scenario, or the other scenario. The overall probability of B is found by taking some combination of the probabilities of B in the different possible worlds, in the different possible scenarios. Under some scenario, B may be very likely. Under another scenario, it may be very unlikely. We take all of these into account and weigh them according to the likelihood of the scenarios. Now notice that since A1, A2, and A3 form a partition, these three probabilities have what property? Add to what? They add to one. So it's the probability of this branch, plus this branch, plus this branch. So what we have here is a weighted average of the probabilities of B in the different worlds, or in the different scenarios. Special case. Suppose the three scenarios are equally likely. So P of A1 equals 1/3, and the same for P of A2 and P of A3. What are we saying here? In that case of equally likely scenarios, the probability of B is the average of the probabilities of B in the three different worlds, or in the three different scenarios. OK. So finally, the last step.
If we go back again two slides, the last thing that we did was to calculate a conditional probability of this kind, probability of A given B, which is a probability associated essentially with an inference problem. Given that our radar recorded something, how likely is it that the plane was up there? So we're trying to infer whether a plane was up there or not, based on the information that we've got. So let's generalize once more. And we're just going to rewrite what we did in that example, but in terms of general symbols instead of the specific numbers. So once more, the model that we have involves probabilities of the different scenarios. These we call them prior probabilities. They're are our initial beliefs about how likely each scenario is to occur. We also have a model of our measuring device that tells us under that scenario how likely is it that our radar will register something or not. So we're given again these conditional probabilities. We're given the conditional probabilities for these branches. Then we are told that event B occurred. And on the basis of this new information, we want to form some new beliefs about the relative likelihood of the different scenarios. Going back again to our radar example, an airplane was present with probability 5%. Given that the radar recorded something, we're going to change our beliefs. Now, a plane is present with probability 34%. The radar, since we saw something, we are going to revise our beliefs as to whether the plane is out there or is not there. And so what we need to do is to calculate the conditional probabilities of the different scenarios, given the information that we got. So initially, we have these probabilities for the different scenarios. Once we get the information, we update them and we calculate our revised probabilities or conditional probabilities given the observation that we made. OK. So what do we do? We just use the definition of conditional probabilities twice. By definition the conditional probability is the probability of two things happening divided by the probability of the conditioning event. Now, I'm using the definition of conditional probabilities once more, or rather I use the multiplication rule. The probability of two things happening is the probability of the first and the second. So these are things that are given to us. They're the probabilities of the different scenarios. And it's the model of our measuring device, which we assume to be available. And how about the denominator? This is total probability of the event B. But we just found that's it's easy to calculate using the formula in the previous slide. To find the overall probability of event B occurring, we look at the probabilities of B occurring under the different scenario and weigh them according to the probabilities of all the scenarios. So in the end, we have a formula for the conditional probability, A's given B, based on the data of the problem, which were probabilities of the different scenarios and conditional probabilities of B, given the A's. So what this calculation does is, basically, it reverses the order of conditioning. We are given conditional probabilities of these kind, where it's B given A and we produce new conditional probabilities, where things go the other way. So schematically, what's happening here is that we have model of cause and effect and-- So a scenario occurs and that may cause B to happen or may not cause it to happen. So this is a cause/effect model. 
And it's modeled using probabilities, such as probability of B given Ai. And what we want to do is inference where we are told that B occurs, and we want to infer whether Ai also occurred or not. And the appropriate probabilities for that are the conditional probabilities that A occurred, given that B occurred. So we're starting with a causal model of our situation. It models, for a given cause, how likely a certain effect is to be observed. And then we do inference, which answers the question, given that the effect was observed, how likely is it that the world was in this particular situation or state or scenario. So the name of the Bayes rule comes from Thomas Bayes, a British theologian back in the 1700s. It actually-- This calculation addresses a basic problem, a basic philosophical problem, how one can learn from experience or from experimental data in some systematic way. So the British at that time were preoccupied with this type of question. Is there a basic theory about how we can incorporate new knowledge into previous knowledge? And this calculation made an argument that, yes, it is possible to do that in a systematic way. So the philosophical underpinnings of this have a very long history and a lot of discussion around them. But for our purposes, it's just an extremely useful tool. And it's the foundation of almost everything that gets done when you try to do inference based on partial observations. Very well. Till next time.
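As a compact summary of the last two tools from this lecture, the total probability theorem and the Bayes rule fit in one small function. The three-scenario numbers below (nothing in the air, an airplane, a flock of geese) are made up purely for illustration.

```python
def posterior(priors, likelihoods):
    # priors[i] = P(A_i), likelihoods[i] = P(B | A_i).
    # Returns the list of P(A_i | B), using the total probability of B
    # as the normalizing denominator.
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total_prob_B = sum(joint)
    return [j / total_prob_B for j in joint]

# Hypothetical three-scenario radar story: nothing, airplane, flock of geese.
priors = [0.90, 0.05, 0.05]
likelihoods = [0.10, 0.99, 0.50]        # P(radar registers | scenario)
print(posterior(priors, likelihoods))   # revised beliefs after seeing a blip
```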
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
5_Discrete_Random_Variables_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. OK. So let us start. All right. So today we're starting a new unit in this class. We have covered, so far, the basics of probability theory-- the main concepts and tools, as far as just probabilities are concerned. But if that was all that there is in this subject, the subject would not be rich enough. What makes probability theory a lot more interesting and richer is that we can also talk about random variables, which are ways of assigning numerical results to the outcomes of an experiment. So we're going to define what random variables are, and then we're going to describe them using so-called probability mass functions. Basically some numerical values are more likely to occur than other numerical values, and we capture this by assigning probabilities to them the usual way. And we represent these in a compact way using the so-called probability mass functions. We're going to see a couple of examples of random variables, some of which we have already seen but with different terminology. And so far, it's going to be just a couple of definitions and calculations of the type that you already know how to do. But then we're going to introduce the one new, big concept of the day. So up to here it's going to be mostly an exercise in notation and definitions. But then we get to our big concept, which is the concept of the expected value of a random variable, which is some kind of average value of the random variable. And then we're going to also talk, very briefly, about the spread of a random variable around its expectation, which is the concept of the variance of a random variable. OK. So what is a random variable? It's an assignment of a numerical value to every possible outcome of the experiment. So here's the picture. The sample space is this class, and we've got lots of students in here. This is our sample space, omega. I'm interested in the height of a random student. So I'm going to use a real line where I record height, and let's say this is height in inches. And the experiment happens, I pick a random student. And I go and measure the height of that random student, and that gives me a specific number. So what's a good number in inches? Let's say 60. OK. Or I pick another student, and that student has a height of 71 inches, and so on. So this is the experiment. These are the outcomes. These are the numerical values of the random variable that we call height. OK. So mathematically, what are we dealing with here? We're basically dealing with a function from the sample space into the real numbers. That function takes as its argument an outcome of the experiment, that is a typical student, and produces the value of that function, which is the height of that particular student. So we think of an abstract object that we denote by a capital H, which is the random variable called height. And that random variable is essentially this particular function that we talked about here. OK. So there's a distinction that we're making here-- H is height in the abstract. It's the function. These numbers here are particular numerical values that this function takes when you choose one particular outcome of the experiment. Now, when you have a single probability experiment, you can have multiple random variables.
So perhaps, instead of just height, I'm also interested in the weight of a typical student. And so when the experiment happens, I pick that random student-- this is the height of the student. But that student would also have a weight, and I could record it here. And similarly, every student is going to have their own particular weight. So the weight function is a different function from the sample space to the real numbers, and it's a different random variable. So the point I'm making here is that a single probabilistic experiment may involve several interesting random variables. I may be interested in the height of a random student or the weight of the random student. These are different random variables that could be of interest. I can also do other things. Suppose I define an object such as H bar, which is 2.5 times H. What does that correspond to? Well, this is the height in centimeters. Now, H bar is a function of H itself, but if you were to draw the picture, the picture would go this way. 60 gets mapped to 150, 71 gets mapped to, oh, that's too hard for me. OK, gets mapped to something, and so on. So H bar is also a random variable. Why? Once I pick a particular student, that particular outcome determines completely the numerical value of H bar, which is the height of that student but measured in centimeters. What we have here is actually a random variable, which is defined as a function of another random variable. And the point that this example is trying to make is that functions of random variables are also random variables. The experiment happens, the experiment determines a numerical value for this object. And once you have the numerical value for this object, that determines also the numerical value for that object. So given an outcome, the numerical value of this particular object is determined. So H bar is itself a function from the sample space, from outcomes to numerical values. And that makes it a random variable according to the formal definition that we have here. So the formal definition is that the random variable is not random, it's not a variable, it's just a function from the sample space to the real numbers. That's the abstract, right way of thinking about them. Now, random variables can be of different types. They can be discrete or continuous. Suppose that I measure the heights in inches, but I round to the nearest inch. Then the numerical values I'm going to get here would be just integers. So that would make it an integer valued random variable. And this is a discrete random variable. Or maybe I have a scale for measuring height which is infinitely precise and records your height to an infinite number of digits of precision. In that case, your height would be just a general real number. So we would have a random variable that takes values in the entire set of real numbers. Well, I guess not really negative numbers, but the set of non-negative numbers. And that would be a continuous random variable. It takes values in a continuous set. So we will be talking about both discrete and continuous random variables. The first thing we will do will be to devote a few lectures to discrete random variables, because discrete is always easier. And then we're going to repeat everything in the continuous setting. So discrete is easier, and it's the right place to understand all the concepts, even those that may appear to be elementary. And then you will be set to understand what's going on when we go to the continuous case.
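To make the "random variable is a function" picture concrete, here is a toy version in code, with hypothetical students and heights; the factor 2.5 is the rough inches-to-centimeters conversion used above.

```python
# A random variable is just a function from outcomes (students) to numbers.
heights_in_inches = {'student_1': 60, 'student_2': 71, 'student_3': 66}  # hypothetical

def H(student):
    # the random variable "height in inches"
    return heights_in_inches[student]

def H_bar(student):
    # H bar = 2.5 * H: a function of a random variable is itself a random variable
    return 2.5 * H(student)

print(H('student_1'), H_bar('student_1'))   # 60 and 150.0
```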
So in the continuous case, you get all the complications of calculus and some extra math that comes in there. So it's important to have nailed down all the concepts very well in the easy, discrete case so that you don't have conceptual hurdles when you move on to the continuous case. Now, one important remark that may seem trivial but it's actually very important so that you don't get tangled up between different types of concepts-- there's a fundamental distinction between the random variable itself, and the numerical values that it takes. Abstractly speaking, or mathematically speaking, a random variable, X, or H in this example, is a function. OK. Maybe if you like programming, the words "procedure" or "sub-routine" might be better. So what's the sub-routine height? Given a student, I take that student, force them on the scale and measure them. That's the sub-routine that measures heights. It's really a function that takes students as input and produces numbers as output. The sub-routine we denoted by capital H. That's the random variable. But once you plug in a particular student into that sub-routine, you end up getting a particular number. This is the numerical output of that sub-routine or the numerical value of that function. And that numerical value is an element of the real numbers. So the numerical value is a real number, whereas this capital X is a function from omega to the real numbers. So they are very different types of objects. And the way that we keep track of what we're talking about at any given time is by using capital letters for random variables and lower case letters for numbers. OK. So now once we have a random variable at hand, that random variable takes on different numerical values. And we want to describe, or say something about, the relative likelihoods of the different numerical values that the random variable can take. So here's our sample space, and here's the real line. And there's a bunch of outcomes that gave rise to one particular numerical value. There's another numerical value that arises if we have this outcome. There's another numerical value that arises if we have this outcome. So our sample space is here. The real numbers are here. And what we want to do is to ask the question, how likely is that particular numerical value to occur? So what we're essentially asking is, how likely is it that we obtain an outcome that leads to that particular numerical value? We calculate that overall probability of that numerical value and we represent that probability using a bar so that we end up generating a bar graph. So that could be a possible bar graph associated with this picture. The size of this bar is the total probability that our random variable took on this numerical value, which is just the sum of the probabilities of the different outcomes that led to that numerical value. So the thing that we're plotting here, the bar graph-- we give a name to it. It's a function, which we denote by a lowercase p, with a subscript capital X. The capital X indicates which random variable we're talking about. And it's a function of little x, which is the range of values that our random variable is taking. So in mathematical notation, the value of the PMF at some particular number, little x, is the probability that our random variable takes on the numerical value, little x. And if you want to be precise about what this means, it's the overall probability of all outcomes for which the random variable ends up taking that value, little x.
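Here is a small sketch of the bookkeeping just described: for each numerical value little x, you add up the probabilities of all the outcomes that the random variable maps to that value. The outcomes and probabilities below are invented purely to illustrate the mechanics.

```python
from collections import defaultdict

# Probabilities of the individual outcomes (they add up to one).
outcome_prob = {"a": 0.2, "b": 0.3, "c": 0.1, "d": 0.4}
# The random variable X: a function from outcomes to numerical values.
X = {"a": 1, "b": 1, "c": 2, "d": 3}

# p_X(x): add up the probabilities of all outcomes that X maps to the value x.
pmf = defaultdict(float)
for outcome, p in outcome_prob.items():
    pmf[X[outcome]] += p

print(dict(pmf))           # e.g. {1: 0.5, 2: 0.1, 3: 0.4}
print(sum(pmf.values()))   # essentially 1.0: the PMF values add up to one
```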
So this is the overall probability of all omegas that lead to that particular numerical value, x, of interest. So what do we know about PMFs? Since these are probabilities, all these entries in the bar graph have to be non-negative. Also, if you exhaust all the possible values of little x's, you will have exhausted all the possible outcomes here. Because every outcome leads to some particular x. So the sum of these probabilities should be equal to one. This is the second relation here. So this relation tells us that some little x is going to happen. They happen with different probabilities, but when you consider all the possible little x's together, one of those little x's is going to be realized. Probabilities need to add to one. OK. So let's get our first example of a non-trivial bar graph. Consider the experiment where I start with a coin and I start flipping it over and over. And I do this until I obtain heads for the first time. So what are possible outcomes of this experiment? One possible outcome is that I obtain heads at the first toss, and then I stop. In this case, my random variable takes the value 1. Or it's possible that I obtain tails and then heads. How many tosses did it take until heads appeared? This would be X equal to 2. Or more generally, I might obtain tails for k minus 1 times, and then obtain heads at the k-th time, in which case, our random variable takes the value, little k. So that's the experiment. So capital X is a well defined random variable. It's the number of tosses it takes until I see heads for the first time. These are the possible outcomes. These are elements of our sample space. And these are the values of X depending on the outcome. Clearly X is a function of the outcome. You tell me the outcome, I'm going to tell you what X is. So what we want to do now is to calculate the PMF of X. So Px of k is, by definition, the probability that our random variable takes the value k. For the random variable to take the value of k, the first head appears at toss number k. The only way that this event can happen is if we obtain this sequence of events: tails the first k minus 1 times, and heads at the k-th flip. So this event, that the random variable is equal to k, is the same as this event, k minus 1 tails followed by 1 head. What's the probability of that event? We're assuming that the coin tosses are independent. So to find the probability of this event, we need to multiply the probability of tails, times the probability of tails, times the probability of tails. We multiply k minus one times, times the probability of heads, which puts an extra p at the end. And this is the formula for the so-called geometric PMF. And why do we call it geometric? Because if you go and plot the bar graph of this random variable, X, we start at 1 with a certain number, which is p. And then at 2 we get p(1-p). At 3 we're going to get something smaller, it's p times (1-p)-squared. And the bars keep going down at the rate of geometric progression. Each bar is smaller than the previous bar, because each time we get an extra factor of 1-p involved. So the shape of this PMF is the graph of a geometric sequence. For that reason, we say that it's the geometric PMF, and we call X also a geometric random variable. So the number of coin tosses until the first head is a geometric random variable. So this was an example of how to compute the PMF of a random variable. This was an easy example, because this event could be realized in one and only one way.
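For the geometric PMF just derived, a few lines of Python make the shape visible, with each bar shrinking by a factor of 1-p relative to the previous one. The particular value of p below is an arbitrary choice for illustration.

```python
p = 0.3   # probability of heads on each toss (an arbitrary illustrative choice)

def geometric_pmf(k, p):
    """p_X(k) = (1 - p)**(k - 1) * p : tails k - 1 times, then heads on toss k."""
    return (1 - p) ** (k - 1) * p

for k in range(1, 8):
    print(k, round(geometric_pmf(k, p), 4))   # each bar is (1 - p) times the previous one

# Summed over all k, the values add up to one (here we just check the first 200 terms).
print(sum(geometric_pmf(k, p) for k in range(1, 201)))
```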
So to find the probability of this, we just needed to find the probability of this particular outcome. More generally, there's going to be many outcomes that can lead to the same numerical value. And we need to keep track of all of them. For example, in this picture, if I want to find this value of the PMF, I need to add up the probabilities of all the outcomes that lead to that value. So the general procedure is exactly what this picture suggests. To find this probability, you go and identify which outcomes lead to this numerical value, and add their probabilities. So let's do a simple example. I take a tetrahedral die. I toss it twice. And there's lots of random variables that you can associate with the same experiment. So the outcome of the first throw, we can call it F. That's a random variable because it's determined once you tell me what happens in the experiment. The outcome of the second throw is another random variable. The minimum of the two throws is also a random variable. Once I do the experiment, this random variable takes on a specific numerical value. So suppose I do the experiment and I get a 2 and a 3. So this random variable is going to take the numerical value of 2. This is going to take the numerical value of 3. This is going to take the numerical value of 2. And now suppose that I want to calculate the PMF of this random variable. What I will need to do is to calculate Px(0), Px(1), Px(2), Px(3), and so on. Let's not do the entire calculation then, let's just calculate one of the entries of the PMF. So Px(2)-- that's the probability that the minimum of the two throws gives us a 2. And this can happen in many ways. There are five ways that it can happen. Those are all of the outcomes for which the smallest of the two is equal to 2. That's five outcomes assuming that the tetrahedral die is fair and the tosses are independent. Each one of these outcomes has probability of 1/16. There's five of them, so we get an answer, 5/16. Conceptually, this is just the procedure that you use to calculate PMFs, the way that you construct this particular bar graph. You consider all the possible values of your random variable, and for each one of those values you find the probability that the random variable takes on that value by adding the probabilities of all the possible outcomes that lead to that particular numerical value. So let's do another, more interesting one. So let's revisit the coin tossing problem from last time. Let us fix a number n, and we decide to flip a coin n consecutive times. Each time the coin tosses are independent. And each one of the tosses will have a probability, p, of obtaining heads. Let's consider the random variable, which is the total number of heads that have been obtained. Well, that's something that we dealt with last time. We know the probabilities for different numbers of heads, but we're just going to do the same now using today's notation. So let's take, for concreteness, n equal to 4. Px is the PMF of that random variable, X. Px(2) is meant to be, by definition, the probability that the random variable takes the value of 2. So this is the probability that we have exactly two heads in our four tosses. The event of exactly two heads can happen in multiple ways. And here I've written down the different ways that it can happen. It turns out that there's exactly six ways that it can happen. And each one of these ways, luckily enough, has the same probability-- p-squared times (1-p)-squared.
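Both calculations in this passage reduce to listing outcomes and adding probabilities, which is easy to mimic by brute force. This is just an illustrative check of the 5/16 and of the factor of six; the bias p chosen for the coin below is arbitrary.

```python
from fractions import Fraction
from itertools import product
from math import comb

# Tetrahedral die rolled twice: 16 equally likely outcomes (x, y).
outcomes = list(product([1, 2, 3, 4], repeat=2))
p_min_is_2 = Fraction(sum(1 for x, y in outcomes if min(x, y) == 2), len(outcomes))
print(p_min_is_2)                                     # 5/16

# Four independent coin tosses: count the sequences with exactly two heads.
p = Fraction(1, 3)                                    # an arbitrary bias for heads
sequences = list(product("HT", repeat=4))
two_heads = [s for s in sequences if s.count("H") == 2]
print(len(two_heads), comb(4, 2))                     # 6 and 6: the count matches 4 choose 2
print(sum(p ** 2 * (1 - p) ** 2 for _ in two_heads))  # 6 * p^2 * (1-p)^2
```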
So that gives us the value for the PMF evaluated at 2. So here we just counted explicitly that we have six possible ways that this can happen, and this gave rise to this factor of 6. But this factor of 6 turns out to be the same as this 4 choose 2. If you remember definition from last time, 4 choose 2 is 4 factorial divided by 2 factorial, divided by 2 factorial, which is indeed equal to 6. And this is the more general formula that you would be using. In general, if you have n tosses and you're interested in the probability of obtaining k heads, the probability of that event is given by this formula. So that's the formula that we derived last time. Except that last time we didn't use this notation. We just said the probability of k heads is equal to this. Today we introduce the extra notation. And also having that notation, we may be tempted to also plot a bar graph for the Px. In this case, for the coin tossing problem. And if you plot that bar graph as a function of k when n is a fairly large number, what you will end up obtaining is a bar graph that has a shape of something like this. So certain values of k are more likely than others, and the more likely values are somewhere in the middle of the range. And extreme values-- too few heads or too many heads, are unlikely. Now, the miraculous thing is that it turns out that this curve gets a pretty definite shape, like a so-called bell curve, when n is big. This is a very deep and central fact from probability theory that we will get to in a couple of months. For now, it just could be a curious observation. If you go into MATLAB and put this formula in and ask MATLAB to plot it for you, you're going to get an interesting shape of this form. And later on we will have to sort of understand where this is coming from and whether there's a nice, simple formula for the asymptotic form that we get. All right. So, so far I've said essentially nothing new, just a little bit of notation and this little conceptual thing that you have to think of random variables as functions in the sample space. So now it's time to introduce something new. This is the big concept of the day. In some sense it's an easy concept. But it's the most central, most important concept that we have to deal with random variables. It's the concept of the expected value of a random variable. So the expected value is meant to be, let's speak loosely, something like an average, where you interpret probabilities as something like frequencies. So you play a certain game and your rewards are going to be-- use my standard numbers-- your rewards are going to be one dollar with probability 1/6. It's going to be 2 dollars with probability 1/2, and four dollars with probability 1/3. So this is a plot of the PMF of some random variable. If you play that game and you get so many dollars with this probability, and so on, how much do you expect to get on the average if you play the game a zillion times? Well, you can think as follows-- one sixth of the time I'm going to get one dollar. One half of the time that outcome is going to happen and I'm going to get two dollars. And one third of the time the other outcome happens, and I'm going to get four dollars. And you evaluate that number and it turns out to be 2.5. OK. So that's a reasonable way of calculating the average payoff if you think of these probabilities as the frequencies with which you obtain the different payoffs. 
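The 2.5 and the frequency interpretation can both be checked numerically. A small sketch, using the payoffs and probabilities from this example; the number of simulated plays is an arbitrary choice.

```python
import random

pmf = {1: 1/6, 2: 1/2, 4: 1/3}              # payoff -> probability, as in the example

# Expected value: sum over x of x * p_X(x).
print(sum(x * p for x, p in pmf.items()))   # 2.5

# Frequency interpretation: the average payoff over many independent plays.
values, weights = zip(*pmf.items())
plays = random.choices(values, weights=weights, k=100_000)
print(sum(plays) / len(plays))              # close to 2.5
```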
And loosely speaking, it doesn't hurt to think of probabilities as frequencies when you try to make sense of various things. So what did we do here? We took the probabilities of the different outcomes, of the different numerical values, and multiplied them with the corresponding numerical value. Similarly here, we have a probability and the corresponding numerical value and we added up over all x's. So that's what we did. It looks like an interesting quantity to deal with. So we're going to give a name to it, and we're going to call it the expected value of a random variable. So this formula just captures the calculation that we did. How do we interpret the expected value? So the one interpretation is the one that I used in this example. You can think of it as the average that you get over a large number of repetitions of an experiment where you interpret the probabilities as the frequencies with which the different numerical values can happen. There's another interpretation that's a little more visual and that's kind of insightful: if you remember your freshman physics, this kind of formula gives you the center of gravity of an object of this kind. If you take that picture literally and think of this as a mass of one sixth sitting here, and the mass of one half sitting here, and one third sitting there, and you ask me what's the center of gravity of that structure. This is the formula that gives you the center of gravity. Now what's the center of gravity? It's the place where if you put your pen right underneath, that diagram will stay in place and will not fall on one side and will not fall on the other side. So in this thing, by picture, since the 4 is a little more to the right and a little heavier, the center of gravity should be somewhere around here. And that's what the math gave us. It turns out to be two and a half. Once you have this interpretation about centers of gravity, sometimes you can calculate expectations pretty fast. So here's our new random variable. It's the uniform random variable in which each one of the numerical values is equally likely. Here there's a total of n plus 1 possible numerical values. So each one of them has probability 1 over (n + 1). Let's calculate the expected value of this random variable. We can take the formula literally and consider all possible outcomes, or all possible numerical values, and weigh them by their corresponding probability, and do this calculation and obtain an answer. But I gave you the intuition of centers of gravity. Can you use that intuition to guess the answer? What's the center of gravity of a structure of this kind? We have symmetry. So it should be in the middle. And what's the middle? It's the average of the two end points. So without having to do the algebra, you know that the answer is going to be n over 2. So this is a moral that you should keep in mind: whenever you have a PMF which is symmetric around a certain point, that certain point is going to be the expected value associated with this particular PMF. OK. So having defined the expected value, what is there that's left for us to do? Well, we want to investigate how it behaves, what kind of properties does it have, and also how do you calculate expected values of complicated random variables. So the first complication that we're going to start with is the case where we deal with a function of a random variable. OK. So let me redraw this same picture as before. We have omega. This is our sample space. This is the real line.
And we have a random variable that gives rise to various values for X. So the random variable is capital X, and every outcome leads to a particular numerical value x for our random variable X. So capital X is really the function that maps these points into the real line. And then I consider a function of this random variable, call it capital Y, and it's a function of my previous random variable. And this new random variable Y takes numerical values that are completely determined once I know the numerical value of capital X. And perhaps you get a diagram of this kind. So X is a random variable. Once you have an outcome, this determines the value of x. Y is also a random variable. Once you have the outcome, that determines the value of y. Y is completely determined once you know X. We have a formula for how to calculate the expected value of X. Suppose that you're interested in calculating the expected value of Y. How would you go about it? OK. The only thing you have in your hands is the definition, so you could start by just using the definition. And what does this entail? It entails for every particular value of y, collect all the outcomes that leads to that value of y. Find their probability. Do the same here. For that value, collect those outcomes. Find their probability and weight by y. So this formula does the addition over this line. We consider the different outcomes and add things up. There's an alternative way of doing the same accounting where instead of doing the addition over those numbers, we do the addition up here. We consider the different possible values of x, and we think as follows-- for each possible value of x, that value is going to occur with this probability. And if that value has occurred, this is how much I'm getting, the g of x. So I'm considering the probability of this outcome. And in that case, y takes this value. Then I'm considering the probabilities of this outcome. And in that case, g of x takes again that value. Then I consider this particular x, it happens with this much probability, and in that case, g of x takes that value, and similarly here. We end up doing exactly the same arithmetic, it's only a question whether we bundle things together. That is, if we calculate the probability of this, then we're bundling these two cases together. Whereas if we do the addition up here, we do a separate calculation-- this probability times this number, and then this probability times that number. So it's just a simple rearrangement of the way that we do the calculations, but it does make a big difference in practice if you actually want to calculate expectations. So the second procedure that I mentioned, where you do the addition by running over the x-axis corresponds to this formula. Consider all possibilities for x and when that x happens, how much money are you getting? That gives you the average money that you are getting. All right. So I kind of hand waved and argued that it's just a different way of accounting, of course one needs to prove this formula. And fortunately it can be proved. You're going to see that in recitation. Most people, once they're a little comfortable with the concepts of probability, actually believe that this is true by definition. In fact it's not true by definition. It's called the law of the unconscious statistician. It's something that you always do, but it's something that does require justification. All right. 
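Here is a small numerical check that the two ways of doing the accounting agree. The PMF of X and the function g below are arbitrary choices made just for illustration.

```python
from collections import defaultdict

pmf_X = {-1: 0.25, 0: 0.25, 1: 0.25, 2: 0.25}    # an illustrative PMF for X

def g(x):
    return x * x                                  # Y = g(X) = X squared

# Method 1: first work out the PMF of Y, then average over the values of y.
pmf_Y = defaultdict(float)
for x, p in pmf_X.items():
    pmf_Y[g(x)] += p
method_1 = sum(y * p for y, p in pmf_Y.items())

# Method 2 (the expected value rule): sum g(x) * p_X(x) over x; no PMF of Y needed.
method_2 = sum(g(x) * p for x, p in pmf_X.items())

print(method_1, method_2)                         # both 1.5
```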
So this gives us basically a shortcut for calculating expected values of functions of a random variable without having to find the PMF of that function. We can work with the PMF of the original function. All right. So we're going to use this property over and over. Before we start using it, one general word of caution-- the average of a function of a random variable, in general, is not the same as the function of the average. So these two operations of taking averages and taking functions do not commute. What this inequality tells you is that, in general, you can not reason on the average. So we're going to see instances where this property is not true. You're going to see lots of them. Let me just throw it here that it's something that's not true in general, but we will be interested in the exceptions where a relation like this is true. But these will be the exceptions. So in general, expectations are average, something like averages. But the function of an average is not the same as the average of the function. OK. So now let's go to properties of expectations. Suppose that alpha is a real number, and I ask you, what's the expected value of that real number? So for example, if I write down this expression-- expected value of 2. What is this? Well, we defined random variables and we defined expectations of random variables. So for this to make syntactic sense, this thing inside here should be a random variable. Is 2 -- the number 2 --- is it a random variable? In some sense, yes. It's the random variable that takes, always, the value of 2. So suppose that you have some experiment and that experiment always outputs 2 whenever it happens. Then you can say, yes, it's a random experiment but it always gives me 2. The value of the random variable is always 2 no matter what. It's kind of a degenerate random variable that doesn't have any real randomness in it, but it's still useful to think of it as a special case. So it corresponds to a function from the sample space to the real line that takes only one value. No matter what the outcome is, it always gives me a 2. OK. If you have a random variable that always gives you a 2, what is the expected value going to be? The only entry that shows up in this summation is that number 2. The probability of a 2 is equal to 1, and the value of that random variable is equal to 2. So it's the number itself. So the average value in an experiment that always gives you 2's is 2. All right. So that's nice and simple. Now let's go to our experiment where age was your height in inches. And I know your height in inches, but I'm interested in your height measured in centimeters. How is that going to be related to your height in inches? Well, if you take your height in inches and convert it to centimeters, I have another random variable, which is always, no matter what, two and a half times bigger than the random variable I started with. If you take some quantity and always multiplied by two and a half what happens to the average of that quantity? It also gets multiplied by two and a half. So you get a relation like this, which says that the average height of a student measured in centimeters is two and a half times the average height of a student measured in inches. So that makes perfect intuitive sense. If you generalize it, it gives us this relation, that if you have a number, you can pull it outside the expectation and you get the right result. So this is a case where you can reason on the average. 
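A quick sanity check of these two facts in code; the heights and probabilities below are made up, and 2.54 is just the inches-to-centimeters factor again.

```python
pmf_X = {60: 0.2, 66: 0.5, 71: 0.3}         # heights in inches, with made-up probabilities
alpha = 2.54                                 # inches to centimeters

E_X = sum(x * p for x, p in pmf_X.items())
E_alpha_X = sum(alpha * x * p for x, p in pmf_X.items())
print(E_alpha_X, alpha * E_X)                # essentially equal: E[alpha X] = alpha E[X]

# The "constant random variable" that always equals 2 has expected value 2.
print(sum(2 * p for p in pmf_X.values()))    # 2 (up to floating point)
```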
If you take a number, such as height, and multiply it by a certain number, you can reason on the average. I multiply the numbers by two, the averages will go up by two. So this is an exception to this cautionary statement that I had up there. How do we prove that this fact is true? Well, we can use the expected value rule here, which tells us that the expected value of alpha X, this is our g of X, essentially, is going to be the sum over all x's of my function, g of X, times the probability of the x's. In our particular case, g of X is alpha times X. And we have those probabilities. And the alpha goes outside the summation. So we get alpha, sum over x's, x Px of x, which is alpha times the expected value of X. So that's how you prove this relation formally using this rule up here. And the next formula that I have here also gets proved the same way. What does this formula tell you? If I take everybody's height in centimeters-- we already multiplied by alpha-- and the gods give everyone a bonus of ten extra centimeters. What's going to happen to the average height of the class? Well, it will just go up by an extra ten centimeters. So this expectation is going to be giving you the bonus of beta just adds a beta to the average height in centimeters, which we also know to be alpha times the expected value of X, plus beta. So this is a linearity property of expectations. If you take a linear function of a single random variable, the expected value of that linear function is the linear function of the expected value. So this is our big exception to this cautionary note, that we have equal if g is linear. OK. All right. So let's get to the last concept of the day. What kind of functions of random variables may be of interest? One possibility might be the average value of X-squared. Why is it interesting? Well, why not. It's the simplest function that you can think of. So if you want to calculate the expected value of X-squared, you would use this general rule for how you can calculate expected values of functions of random variables. You consider all the possible x's. For each x, you see what's the probability that it occurs. And if that x occurs, you consider and see how big x-squared is. Now, the more interesting quantity, a more interesting expectation that you can calculate has to do not with x-squared, but with the distance of x from the mean and then squared. So let's try to parse what we've got up here. Let's look just at the quantity inside here. What kind of quantity is it? It's a random variable. Why? X is random, the random variable, expected value of X is a number. Subtract a number from a random variable, you get another random variable. Take a random variable and square it, you get another random variable. So the thing inside here is a legitimate random variable. What kind of random variable is it? So suppose that we have our experiment and we have different x's that can happen. And the mean of X in this picture might be somewhere around here. I do the experiment. I obtain some numerical value of x. Let's say I obtain this numerical value. I look at the distance from the mean, which is this length, and I take the square of that. Each time that I do the experiment, I go and record my distance from the mean and square it. So I give more emphasis to big distances. And then I take the average over all possible outcomes, all possible numerical values. So I'm trying to compute the average squared distance from the mean. This corresponds to this formula here. 
So the picture that I drew corresponds to that. For every possible numerical value of x, that numerical value corresponds to a certain distance from the mean squared, and I weight according to how likely is that particular value of x to arise. So this measures the average squared distance from the mean. Now, because of that expected value rule, of course, this thing is the same as that expectation. It's the average value of the random variable, which is the squared distance from the mean. With this probability, the random variable takes on this numerical value, and the squared distance from the mean ends up taking that particular numerical value. OK. So why is the variance interesting? It tells us how far away from the mean we expect to be on the average. Well, actually we're not counting distances from the mean, it's distances squared. So it gives more emphasis to the kind of outliers in here. But it's a measure of how spread out the distribution is. A big variance means that those bars go far to the left and to the right, typically. Whereas a small variance would mean that all those bars are tightly concentrated around the mean value. It's the average squared deviation. Small variance means that we generally have small deviations. Large variances mean that we generally have large deviations. Now as a practical matter, when you want to calculate the variance, there's a handy formula which I'm not proving but you will see it in recitation. It's just two lines of algebra. And it allows us to calculate it in a somewhat simpler way. We need to calculate the expected value of the random variable and the expected value of the square of the random variable, and these two are going to give us the variance. So to summarize what we did up here, the variance, by definition, is given by this formula. It's the expected value of the squared deviation. But we have the equivalent formula, which comes from applying the expected value rule to the function g of X equal to (x minus the expected value of X)-squared. OK. So this is the definition. This comes from the expected value rule. What are some properties of the variance? Of course variances are always non-negative. Why is it always non-negative? Well, you look at the definition and you're just adding up non-negative things. We're adding squared deviations. So when you add non-negative things, you get something non-negative. The next question is, how do things scale if you take a linear function of a random variable? Let's think about the effects of beta. If I take a random variable and add a constant to it, how does this affect the amount of spread that we have? It doesn't affect-- whatever the spread of this thing is, if I add the constant beta, it just moves this diagram here, but the spread doesn't grow or get reduced. The thing is that when I'm adding a constant to a random variable, all the x's that are going to appear are further to the right, but the expected value also moves to the right. And since we're only interested in distances from the mean, these distances do not get affected. x gets increased by something. The mean gets increased by that same something. The difference stays the same. So adding a constant to a random variable doesn't do anything to its variance. But if I multiply a random variable by a constant alpha, what is that going to do to its variance?
Because we have a square here, when I multiply my random variable by a constant, this x gets multiplied by a constant, the mean gets multiplied by a constant, the square gets multiplied by the square of that constant. And because of that reason, we get this square of alpha showing up here. So that's how variances transform under linear transformations. You multiply your random variable by a constant, the variance goes up by the square of that same constant. OK. That's it for today. See you on Wednesday.
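To wrap up, here is a short sketch that recaps the variance facts from this lecture on one small example: the definition, the handy E[X squared] minus (E[X]) squared formula, and how the variance behaves under a linear transformation. The PMF and the constants a and b are arbitrary illustrative choices.

```python
pmf_X = {1: 1/6, 2: 1/2, 4: 1/3}            # an arbitrary illustrative PMF

def expectation(pmf):
    return sum(x * p for x, p in pmf.items())

def variance(pmf):
    mu = expectation(pmf)
    # Definition: the expected squared distance from the mean.
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

E_X = expectation(pmf_X)
E_X2 = sum(x ** 2 * p for x, p in pmf_X.items())
print(variance(pmf_X), E_X2 - E_X ** 2)     # the handy formula gives essentially the same number, 1.25

# Linear transformation Y = a X + b: the shift b drops out, the scale a gets squared.
a, b = 3.0, 10.0
pmf_Y = {a * x + b: p for x, p in pmf_X.items()}
print(variance(pmf_Y), a ** 2 * variance(pmf_X))   # essentially equal
```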
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
1_Probability_Models_and_Axioms.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, so welcome to 6.041/6.431, the class on probability models and the like. I'm John Tsitsiklis. I will be teaching this class, and I'm looking forward to this being an enjoyable and also useful experience. We have a fair amount of staff involved in this course, your recitation instructors and also a bunch of TAs, but I want to single out our head TA, Uzoma, who is the key person in this class. Everything has to go through him. If he doesn't know in which recitation section you are, then simply you do not exist, so keep that in mind. All right. So we want to jump right into the subject, but I'm going to take just a few minutes to talk about a few administrative details and how the course is run. So we're going to have lectures twice a week and I'm going to use old fashioned transparencies. Now, you get copies of these slides with plenty of space for you to keep notes on them. A useful way of making good use of the slides is to use them as a sort of mnemonic summary of what happens in lecture. Not everything that I'm going to say is, of course, on the slides, but by looking them you get the sense of what's happening right now. And it may be a good idea to review them before you go to recitation. So what happens in recitation? In recitation, your recitation instructor is going to maybe review some of the theory and then solve some problems for you. And then you have tutorials where you meet in very small groups together with your TA. And what happens in tutorials is that you actually do the problem solving with the help of your TA and the help of your classmates in your tutorial section. Now probability is a tricky subject. You may be reading the text, listening to lectures, everything makes perfect sense, and so on, but until you actually sit down and try to solve problems, you don't quite appreciate the subtleties and the difficulties that are involved. So problem solving is a key part of this class. And tutorials are extremely useful just for this reason because that's where you actually get the practice of solving problems on your own, as opposed to seeing someone else who's solving them for you. OK but, mechanics, a key part of what's going to happen today is that you will turn in your schedule forms that are at the end of the handout that you have in your hands. Then, the TAs will be working frantically through the night, and they're going to be producing a list of who goes into what section. And when that happens, any person in this class, with probability 90%, is going to be happy with their assignment and, with probability 10%, they're going to be unhappy. Now, unhappy people have an option, though. You can resubmit your form together with your full schedule and constraints, give it back to the head TA, who will then do some further juggling and reassign people, and after that happens, 90% of those unhappy people will become happy. And 10% of them will be less unhappy. OK. So what's the probability that a random person is going to be unhappy at the end of this process? It's 1%. Excellent. Good. Maybe you don't need this class. OK, so 1%. We have about 100 people in this class, so there's going to be about one unhappy person. 
I mean, anywhere you look in life, in any group you look at, there's always one unhappy person, right? So, what can we do about it? All right. Another important part about mechanics is to read carefully the statement that we have about collaboration, academic honesty, and all that. You're encouraged, it's a very good idea to work with other students. You can consult sources that are out there, but when you sit down and write your solutions you have to do that by setting things aside and just write them on your own. You cannot copy something that somebody else has given to you. One reason is that we're not going to like it when it happens, and then another reason is that you're not going to do yourself any favor. Really the only way to do well in this class is to get a lot of practice by solving problems yourselves. So if you don't do that on your own, then when quiz and exam time comes, things are going to be difficult. So, as I mentioned here, we're going to have recitation sections, that some of them are for 6.041 students, some are for 6.431 students, the graduate section of the class. Now undergraduates can sit in the graduate recitation sections. What's going to happen there is that things may be just a little faster and you may be covering a problem that's a little more advanced and is not covered in the undergrad sections. But if you sit in the graduate section, and you're an undergraduate, you're still just responsible for the undergraduate material. That is, you can just do the undergraduate work in the class, but maybe be exposed at the different section. OK. A few words about the style of this class. We want to focus on basic ideas and concepts. There's going to be lots of formulas, but what we try to do in this class is to actually have you understand what those formulas mean. And, in a year from now when almost all of the formulas have been wiped out from your memory, you still have the basic concepts. You can understand them, so when you look things up again, they will still make sense. It's not the plug and chug kind of class where you're given a list of formulas, you're given numbers, and you plug in and you get answers. The really hard part is usually to choose which formulas you're going to use. You need judgment, you need intuition. Lots of probability problems, at least the interesting ones, often have lots of different solutions. Some are extremely long, some are extremely short. The extremely short ones usually involve some kind of deeper understanding of what's going on so that you can pick a shortcut and use it. And hopefully you are going to develop this skill during this class. Now, I could spend a lot of time in this lecture talking about why the subject is important. I'll keep it short because I think it's almost obvious. Anything that happens in life is uncertain. There's uncertainty anywhere, so whatever you try to do, you need to have some way of dealing or thinking about this uncertainty. And the way to do that in a systematic way is by using the models that are given to us by probability theory. So if you're an engineer and you're dealing with a communication system or signal processing, basically you're facing a fight against noise. Noise is random, is uncertain. How do you model it? How do you deal with it? If you're a manager, I guess you're dealing with customer demand, which is, of course, random. Or you're dealing with the stock market, which is definitely random. Or you play the casino, which is, again, random, and so on. 
And the same goes for pretty much any other field that you can think of. But, independent of which field you're coming from, the basic concepts and tools are really all the same. So you may see in bookstores that there are books, probability for scientists, probability for engineers, probability for social scientists, probability for astrologists. Well, what all those books have inside them is exactly the same models, the same equations, the same problems. They just make them somewhat different word problems. The basic concepts are just one and the same, and we'll take this as an excuse for not going too much into specific domain applications. We will have problems and examples that are motivated, in some loose sense, from real world situations. But we're not really trying in this class to develop the skills for domain-specific problems. Rather, we're going to try to stick to general understanding of the subject. OK. So the next slide, of which you do have in your handout, gives you a few more details about the class. Maybe one thing to comment here is that you do need to read the text. And with calculus books, perhaps you can live with a just a two page summary of all of the interesting formulas in calculus, and you can get by just with those formulas. But here, because we want to develop concepts and intuition, actually reading words, as opposed to just browsing through equations, does make a difference. In the beginning, the class is kind of easy. When we deal with discrete probability, that's the material until our first quiz, and some of you may get by without being too systematic about following the material. But it does get substantially harder afterwards. And I would keep restating that you do have to read the text to really understand the material. OK. So now we can start with the real part of the lecture. Let us set the goals for today. So probability, or probability theory, is a framework for dealing with uncertainty, for dealing with situations in which we have some kind of randomness. So what we want to do is, by the end of today's lecture, to give you anything that you need to know how to set up what does it take to set up a probabilistic model. And what are the basic rules of the game for dealing with probabilistic models? So, by the end of this lecture, you will have essentially recovered half of this semester's tuition, right? So we're going to talk about probabilistic models in more detail-- the sample space, which is basically a description of all the things that may happen during a random experiment, and the probability law, which describes our beliefs about which outcomes are more likely to occur compared to other outcomes. Probability laws have to obey certain properties that we call the axioms of probability. So the main part of today's lecture is to describe those axioms, which are the rules of the game, and consider a few really trivial examples. OK, so let's start with our agenda. The first piece in a probabilistic model is a description of the sample space of an experiment. So we do an experiment, and by experiment we just mean that just something happens out there. And that something that happens, it could be flipping a coin, or it could be rolling a dice, or it could be doing something in a card game. So we fix a particular experiment. And we come up with a list of all the possible things that may happen during this experiment. So we write down a list of all the possible outcomes. So here's a list of all the possible outcomes of the experiment. 
I use the word "list," but, if you want to be a little more formal, it's better to think of that list as a set. So we have a set. That set is our sample space. And it's a set whose elements are the possible outcomes of the experiment. So, for example, if you're dealing with flipping a coin, your sample space would be heads, this is one outcome, tails is one outcome. And this set, which has two elements, is the sample space of the experiment. OK. What do we need to think about when we're setting up the sample space? First, the list should be mutually exclusive, collectively exhaustive. What does that mean? Collectively exhaustive means that, no matter what happens in the experiment, you're going to get one of the outcomes inside here. So you have not forgotten any of the possibilities of what may happen in the experiment. Mutually exclusive means that if this happens, then that cannot happen. So at the end of the experiment, you should be able to point out to me just one, exactly one, of these outcomes and say, this is the outcome that happened. OK. So these are sort of basic requirements. There's another requirement which is a little more loose. When you set up your sample space, sometimes you do have some freedom about the details of how you're going to describe it. And the question is, how much detail are you going to include? So let's take this coin flipping experiment and think of the following sample space. One possible outcome is heads, a second possible outcome is tails and it's raining, and the third possible outcome is tails and it's not raining. So this is another possible sample space for the experiment where I flip a coin just once. It's a legitimate one. These three possibilities are mutually exclusive and collectively exhaustive. Which one is the right sample space? Is it this one or that one? Well, if you think that my coin flipping inside this room is completely unrelated to the weather outside, then you're going to stick with this sample space. If, on the other hand, you have some superstitious belief that maybe rain has an effect on my coins, you might work with the sample space of this kind. So you probably wouldn't do that, but it's a legitimate option, strictly speaking. Now this example is a little bit on the frivolous side, but the issue that comes up here is a basic one that shows up anywhere in science and engineering. Whenever you're dealing with a model or with a situation, there are zillions of details in that situation. And when you come up with a model, you choose some of those details that you keep in your model, and some that you say, well, these are irrelevant. Or maybe there are small effects, I can neglect them, and you keep them outside your model. So when you go to the real world, there's definitely an element of art and some judgment that you need to do in order to set up an appropriate sample space. So, an easy example now. So of course, the elementary examples are coins, cards, and dice. So let's deal with dice. But to keep the diagram small, instead of a six-sided die, we're going to think about the die that only has four faces. So you can do that with a tetrahedron, doesn't really matter. Basically, it's a die that when you roll it, you get a result which is one, two, three or four. However, the experiment that I'm going to think about will consist of two rolls of a dice. A crucial point here-- I'm rolling the die twice, but I'm thinking of this as just one experiment, not two different experiments, not a repetition twice of the same experiment. 
So it's one big experiment. During that big experiment various things could happen, such as I'm rolling the die once, and then I'm rolling the die a second time. OK. So what's the sample space for that experiment? Well, the sample space consists of the possible outcomes. One possible outcome is that your first roll resulted in two and the second roll resulted in three. In which case, the outcome that you get is this one, a two followed by three. This is one possible outcome. The way I'm describing things, this outcome is to be distinguished from this outcome here, where a three is followed by two. If you're playing backgammon, it doesn't matter which one of the two happened. But if you're dealing with a probabilistic model where you want to keep track of everything that happens in this composite experiment, there are good reasons for distinguishing between these two outcomes. I mean, when this happens, it's definitely something different from that happening. A two followed by a three is different from a three followed by a two. So this is the correct sample space for this experiment where we roll the die twice. It has a total of 16 elements and it's, of course, a finite set. Sometimes, instead of describing sample spaces in terms of lists, or sets, or diagrams of this kind, it's useful to describe the experiment in some sequential way. Whenever you have an experiment that consists of multiple stages, it might be useful, at least visually, to give a diagram that shows you how those stages evolve. And that's what we do by using a sequential description or a tree-based description by drawing a tree of the possible evolutions during our experiment. So in this tree, I'm thinking of a first stage in which I roll the first die, and there are four possible results: one, two, three and four. And, given what happened, let's say in the first roll, suppose I got a one. Then I'm rolling the second die, and there are four possibilities for what may happen to the second die. And the possible results are one, two, three and four again. So what's the relation between the two diagrams? Well, for example, the outcome two followed by three corresponds to this path on the tree. So this path corresponds to two followed by a three. Any path is associated to a particular outcome, any outcome is associated to a particular path. And, instead of paths, you may want to think in terms of the leaves of this diagram. Same thing, think of each one of the leaves as being one possible outcome. And of course we have 16 outcomes here, we have 16 outcomes here. Maybe you noticed the subtlety that I used in my language. I said I rolled the first die and the result that I get is a two. I didn't use the word "outcome." I want to reserve the word "outcome" to mean the overall outcome at the end of the overall experiment. So "2, 3" is the outcome of the experiment. The experiment consisted of stages. Two was the result in the first stage, three was the result in the second stage. You put all those results together, and you get your outcome. OK, perhaps we are splitting hairs here, but it's useful to keep the concepts right. What's special about this example is that, besides being trivial, it has a sample space which is finite. There's 16 possible total outcomes. Not every experiment has a finite sample space. Here's an experiment in which the sample space is infinite. So you are playing darts and the target is this square. And you're perfect at that game, so you're sure that your darts will always fall inside the square.
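Here is a minimal sketch of this sample space in Python, just to make the flat description and the sequential (tree) description concrete; the specific representation of outcomes as ordered pairs is an illustrative choice.

```python
from itertools import product

faces = [1, 2, 3, 4]

# Flat description: the sample space is the set of ordered pairs (first roll, second roll).
sample_space = list(product(faces, repeat=2))
print(len(sample_space))                                 # 16 outcomes

# (2, 3) and (3, 2) are different outcomes of this single two-stage experiment.
print((2, 3) in sample_space, (3, 2) in sample_space)    # True True

# Sequential (tree) description: every root-to-leaf path is exactly one outcome.
paths = [(first, second) for first in faces for second in faces]
print(paths == sample_space)                             # True: same 16 outcomes, listed stage by stage
```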
So, but where exactly your dart would fall inside that square, that itself is random. We don't know what it's going to be. It's uncertain. So all the possible points inside the square are possible outcomes of the experiment. So a typical outcome of the experiment is going to a pair of numbers, x,y, where x and y are real numbers between zero and one. Now there's infinitely many real numbers, there's infinitely many points in the square, so this is an example in which our sample space is an infinite set. OK, so we're going to revisit this example a little later. So these are two examples of what the sample space might be in simple experiments. Now, the more important order of business is now to look at those possible outcomes and to make some statements about their relative likelihoods. Which outcome is more likely to occur compared to the others? And the way we do this is by assigning probabilities to the outcomes. Well, not exactly. Suppose that all you were to do was to assign probabilities to individual outcomes. If you go back to this example, and you consider one particular outcome-- let's say this point-- what would be the probability that you hit exactly this point to infinite precision? Intuitively, that probability would be zero. So any individual point in this diagram in any reasonable model should have zero probability. So if you just tell me that any individual outcome has zero probability, you're not really telling me much to work with. For that reason, what instead we're going to do is to assign probabilities to subsets of the sample space, as opposed to assigning probabilities to individual outcomes. So here's the picture. We have our sample space, which is omega, and we consider some subset of the sample space. Call it A. And I want to assign a number, a numerical probability, to this particular subset which represents my belief about how likely this set is to occur. OK. What do we mean "to occur?" And I'm introducing here a language that's being used in probability theory. When we talk about subsets of the sample space, we usually call them events, as opposed to subsets. And the reason is because it works nicely with the language that describes what's going on. So the outcome is a point. The outcome is random. The outcome may be inside this set, in which case we say that event A occurred, if we get an outcome inside here. Or the outcome may fall outside the set, in which case we say that event A did not occur. So we're going to assign probabilities to events. And now, how should we do this assignment? Well, probabilities are meant to describe your beliefs about which sets are more likely to occur versus other sets. So there's many ways that you can assign those probabilities. But there are some ground rules for this game. First, we want probabilities to be numbers between zero and one because that's the usual convention. So a probability of zero means we're certain that something is not going to happen. Probability of one means that we're essentially certain that something's going to happen. So we want numbers between zero and one. We also want a few other things. And those few other things are going to be encapsulated in a set of axioms. What "axioms" means in this context, it's the ground rules that any legitimate probabilistic model should obey. You have a choice of what kind of probabilities you use. 
But, no matter what you use, they should obey certain consistency properties because if they obey those properties, then you can go ahead and do useful calculations and do some useful reasoning. So what are these properties? First, probabilities should be non-negative. OK? That's our convention. We want probabilities to be numbers between zero and one. So they should certainly be non-negative. The probability that event A occurs should be a non-negative number. What's the second axiom? The probability of the entire sample space is equal to one. Why does this make sense? Well, the outcome is certain to be an element of the sample space because we set up a sample space, which is collectively exhaustive. No matter what the outcome is, it's going to be an element of the sample space. We're certain that event omega is going to occur. Therefore, we represent this certainty by saying that the probability of omega is equal to one. Pretty straightforward so far. The more interesting axiom is the third rule. Before getting into it, just a quick reminder. If you have two sets, A and B, the intersection of A and B consists of those elements that belong both to A and B. And we denote it this way. When you think probabilistically, the way to think of intersection is by using the word "and." This event, this intersection, is the event that A occurred and B occurred. If I get an outcome inside here, A has occurred and B has occurred at the same time. So you may find the word "and" to be a little more convenient than the word "intersection." And similarly, we have some notation for the union of two events, which we write this way. The union of two sets, or two events, is the collection of all the elements that belong either to the first set, or to the second, or to both. When you talk about events, you can use the word "or." So this is the event that A occurred or B occurred. And this "or" means that it could also be that both of them occurred. OK. So now that we have this notation, what does the third axiom say? The third axiom says that if we have two events, A and B, that have no common elements-- so here's A, here's B, and perhaps this is our big sample space. The two events have no common elements. So the intersection of the two events is the empty set. There's nothing in their intersection. Then, the total probability of A together with B has to be equal to the sum of the individual probabilities. So the probability that A occurs or B occurs is equal to the probability that A occurs plus the probability that B occurs. So think of probability as being cream cheese. You have one pound of cream cheese, the total probability assigned to the entire sample space. And that cream cheese is spread out over this set. The probability of A is how much cream cheese sits on top of A. Probability of B is how much sits on top of B. The probability of A union B is the total amount of cream cheese sitting on top of this and that, which is obviously the sum of how much is sitting here and how much is sitting there. So probabilities behave like cream cheese, or they behave like mass. For example, if you think of some material object, the mass of this set consisting of two pieces is obviously the sum of the two masses. So this property is a very intuitive one. It's a pretty natural one to have. OK. Are these axioms enough for what we want to do? I mentioned a while ago that we want probabilities to be numbers between zero and one. Here's an axiom that tells you that probabilities are non-negative. 
Should we have another axiom that tells us that probabilities are less than or equal to one? It's a desirable property. We would like to have it in our hands. OK, why is it not in that list? Well, the people who are in the axiom making business are mathematicians and mathematicians tend to be pretty laconic. You don't say something if you don't have to say it. And this is the case here. We don't need that extra axiom because we can derive it from the existing axioms. Here's how it goes. One is the probability over the entire sample space. Here we're using the second axiom. Now the sample space consists of A together with the complement of A. OK? When I write the complement of A, I mean the complement of A inside of the set omega. So we have omega, here's A, here's the complement of A, and the overall set is omega. OK. Now, what's the next step? What should I do next? Which axiom should I use? We use axiom three because a set and the complement of that set are disjoint. They don't have any common elements. So axiom three applies and tells me that this is the probability of A plus the probability of A complement. In particular, the probability of A is equal to one minus the probability of A complement, and this is less than or equal to one. Why? Because probabilities are non-negative, by the first axiom. OK. So we got the conclusion that we wanted. Probabilities are always less than or equal to one, and this is a simple consequence of the three axioms that we have. This is a really nice argument because it actually uses each one of those axioms. The argument is simple, but you have to use all of these three properties to get the conclusion that you want. OK. So we can get interesting things out of our axioms. Can we get some more interesting ones? How about the union of three sets? What kind of probability should it have? So here's an event consisting of three pieces. And I want to say something about the probability of A union B union C. What I would like to say is that this probability is equal to the sum of the three individual probabilities. How can I do it? I have an axiom that tells me that I can do it for two events. I don't have an axiom for three events. Well, maybe I can massage things and still be able to use that axiom. And here's the trick. The union of three sets, you can think of it as forming the union of the first two sets and then taking the union with the third set. OK? So taking unions, you can take the unions in any order that you want. So here we have the union of two sets. Now, ABC are disjoint, by assumption or that's how I drew it. So if A, B, and C are disjoint, then A union B is disjoint from C. So here we have the union of two disjoint sets. So by the additivity axiom, the probability of that the union is going to be the probability of the first set plus the probability of the second set. And now I can use the additivity axiom once more to write that this is probability of A plus probability of B plus probability of C. So by using this axiom which was stated for two sets, we can actually derive a similar property for the union of three disjoint sets. And then you can repeat this argument as many times as you want. It's valid for the union of ten disjoint sets, for the union of a hundred disjoint sets, for the union of any finite number of sets. So if A1 up to An are disjoint, then the probability of A1 union An is equal to the sum of the probabilities of the individual sets. OK. Special case of this is when we're dealing with finite sets. 
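For a finite sample space, these manipulations can be checked mechanically. A small sketch with an arbitrary probability assignment, just to illustrate additivity over disjoint events and the complement rule derived above.

```python
from fractions import Fraction

# A finite sample space with an (arbitrary) probability assigned to each outcome.
prob = {"a": Fraction(1, 8), "b": Fraction(3, 8), "c": Fraction(1, 4), "d": Fraction(1, 4)}
omega = set(prob)

def P(event):
    """Probability of an event, i.e. of a subset of the sample space."""
    return sum(prob[outcome] for outcome in event)

A, B, C = {"a"}, {"b"}, {"c", "d"}           # pairwise disjoint events
print(P(A | B | C), P(A) + P(B) + P(C))      # additivity: both equal 1
print(P(A), 1 - P(omega - A))                # P(A) = 1 - P(A complement)
```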
Suppose I have just a finite set of outcomes. I put them together in a set and I'm interested in the probability of that set. So here's our sample space. There's lots of outcomes, but I'm taking a few of these and I form a set out of them. This is a set consisting of, in this picture, three elements. In general, it consists of k elements. Now, a finite set, I can write it as a union of single element sets. So this set here is the union of this one element set, together with this one element set together with that one element set. So the total probability of this set is going to be the sum of the probabilities of the one element sets. Now, probability of a one element set, you need to use the brackets here because probabilities are assigned to sets. But this gets kind of tedious, so here one abuses notation a little bit and we get rid of those brackets and just write probability of this single, individual outcome. In any case, conclusion from this exercise is that the total probability of a finite collection of possible outcomes, the total probability is equal to the sum of the probabilities of individual elements. So these are basically the axioms of probability theory. Or, well, they're almost the axioms. There are some subtleties that are involved here. One subtlety is that this axiom here doesn't quite do the job for everything we would like to do. And we're going to come back to this at the end of the lecture. A second subtlety has to do with weird sets. We said that an event is a subset of the sample space and we assign probabilities to events. Does this mean that we are going to assign probability to every possible subset of the sample space? Ideally, we would wish to do that. Unfortunately, this is not always possible. If you take a sample space, such as the square, the square has nice subsets, those that you can describe by cutting it with lines and so on. But it does have some very ugly subsets, as well, that are impossible to visualize, impossible to imagine, but they do exist. And those very weird sets are such that there's no way to assign probabilities to them in a way that's consistent with the axioms of probability. OK. So this is a very, very fine point that you can immediately forget for the rest of this class. You will only encounter these sets if you end up doing doctoral work on the theoretical aspects of probability theory. So it's just a mathematical subtlety that some very weird sets do not have probabilities assigned to them. But we're not going to encounter these sets and they do not show up in any applications. OK. So now let's revisit our examples. Let's go back to the die example. We have our sample space. Now we need to assign a probability law. There's lots of possible probability laws that you can assign. I'm picking one here, arbitrarily, in which I say that every possible outcome has the same probability of 1/16. OK. Why do I make this model? Well, empirically, if you have well-manufactured dice, they tend to behave that way. We will be coming back to this kind of story later in this class. But I'm not saying that this is the only probability law that there can be. You might have weird dice in which certain outcomes are more likely than others. But to keep things simple, let's take every outcome to have the same probability of 1/16. OK. Now that we have in our hands a sample space and the probability law, we can actually solve any problem there is. We can answer any question that could be posed to us. 
For example, what's the probability that the outcome, which is this pair, is either 1,1 or 1,2. We're talking here about this particular event, 1,1 or 1,2. So it's an event consisting of these two items. According to what we were just discussing, the probability of a finite collection of outcomes is the sum of their individual probabilities. Each one of them has probability of 1/16, so the probability of this is 2/16. How about the probability of the event that x is equal to one. x is the first roll, so that's the probability that the first roll is equal to one. Notice the syntax that's being used here. Probabilities are assigned to subsets, to sets, so we think of this as meaning the set of all outcomes such that x is equal to one. How do you answer this question? You go back to the picture and you try to visualize or identify this event of interest. x is equal to one corresponds to this event here. These are all the outcomes at which x is equal to one. There's four outcomes. Each one has probability 1/16, so the answer is 4/16. OK. How about the probability that x plus y is odd? OK. That will take a little bit more work. But you go to the sample space and you identify all the outcomes at which the sum is an odd number. So that's a place where the sum is odd, these are other places, and I guess that exhausts all the possible outcomes at which we have an odd sum. We count them. How many are there? There's a total of eight of them. Each one has probability 1/16, total probability is 8/16. And harder question. What is the probability that the minimum of the two rolls is equal to 2? This is something that you probably couldn't do in your head without the help of a diagram. But once you have a diagram, things are simple. You ask the question. OK, this is an event, that the minimum of the two rolls is equal to two. This can happen in several ways. What are the several ways that it can happen? Go to the diagram and try to identify them. So the minimum is equal to two if both of them are two's. Or it could be that x is two and y is bigger, or y is two and x is bigger. OK. I guess we rediscover that yellow and blue make green, so we see here that there's a total of five possible outcomes. The probability of this event is 5/16. Simple example, but the procedure that we followed in this example actually applies to any probability model you might ever encounter. You set up your sample space, you make a statement that describes the probability law over that sample space, then somebody asks you questions about various events. You go to your pictures, identify those events, pin them down, and then start kind of counting and calculating the total probability for those outcomes that you're considering. This example is a special case of what is called the discrete uniform law. The model obeys the discrete uniform law if all outcomes are equally likely. It doesn't have to be that way. That's just one example of a probability law. But when things are that way, if all outcomes are equally likely and we have N of them, and you have a set A that has little n elements, then each one of those elements has probability one over capital N since all outcomes are equally likely. And for our probabilities to add up to one, each one must have this much probability, and there's little n elements. That gives you the probability of the event of interest. So problems like the one in the previous slide and more generally of the type described here under discrete uniform law, these problems reduce to just counting. 
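Because every one of the 16 outcomes is equally likely, each of these answers is just a count divided by 16, and that is easy to confirm by brute force. Here is a short sketch that enumerates the sample space of the two rolls and counts; it assumes, as in the example, a four-sided die.

```python
from fractions import Fraction

# All 16 equally likely outcomes of two rolls of a four-sided die.
outcomes = [(x, y) for x in range(1, 5) for y in range(1, 5)]

def prob(event_holds):
    """Discrete uniform law: count the favorable outcomes and divide by the total."""
    favorable = [o for o in outcomes if event_holds(o)]
    return Fraction(len(favorable), len(outcomes))

print(prob(lambda o: o in [(1, 1), (1, 2)]))      # 1/8   (that is, 2/16)
print(prob(lambda o: o[0] == 1))                  # 1/4   (that is, 4/16)
print(prob(lambda o: (o[0] + o[1]) % 2 == 1))     # 1/2   (that is, 8/16)
print(prob(lambda o: min(o) == 2))                # 5/16
```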
How many elements are there in my sample space? How many elements are there inside the event of interest? Counting is generally simple, but for some problems it gets pretty complicated. And in a couple of weeks, we're going to have to spend the whole lecture just on the subject of how to count systematically. Now the procedure we followed in the previous example is the same as the procedure you would follow in continuous probability problems. So, going back to our dart problem, we get the random point inside the square. That's our sample space. We need to assign a probability law. For lack of imagination, I'm taking the probability law to be the area of a subset. So if we have two subsets of the sample space that have equal areas, then I'm postulating that they are equally likely to occur. The probably that they fall here is the same as the probability that they fall there. The model doesn't have to be that way. But if I have sort of complete ignorance of which points are more likely than others, that might be the reasonable model to use. So equal areas mean equal probabilities. If the area is twice as large, the probability is going to be twice as big. So this is our model. We can now answer questions. Let's answer the easy one. What's the probability that the outcome is exactly this point? That of course is zero because a single point has zero area. And since this probability is equal to area, that's zero probability. How about the probability that the sum of the coordinates of the point that we got is less than or equal to 1/2? How do you deal with it? Well, you look at the picture again, at your sample space, and try to describe the event that you're talking about. The sum being less than 1/2 corresponds to getting an outcome that's below this line, where this line is the line where x plus y equals to 1/2. So the intercepts of that line with the axis are 1/2 and 1/2. So you describe the event visually and then you use your probability law. The probability law that we have is that the probability of a set is equal to the area of that set. So all we need to find is the area of this triangle, which is 1/2 times 1/2 times 1/2, half, equals to 1/8. OK. Moral from these two examples is that it's always useful to have a picture and work with a picture to visualize the events that you're talking about. And once you have a probability law in your hands, then it's a matter of calculation to find the probabilities of an event of interest. The calculations we did in these two examples, of course, were very simple. Sometimes calculations may be a lot harder, but it's a different business. It's a business of calculus, for example, or being good in algebra and so on. As far as probability is concerned, it's clear what you will be doing, and then maybe you're faced with a harder algebraic part to actually carry out the calculations. The area of a triangle is easy to compute. If I had put down a very complicated shape, then you might need to solve a hard integration problem to find the area of that shape, but that's stuff that belongs to another class that you have presumably mastered by now. Good, OK. So now let me spend just a couple of minutes to return to a point that I raised before. I was saying that the axiom that we had about additivity might not quite be enough. Let's illustrate what I mean by the following example. Think of the experiment where you keep flipping a coin and you wait until you obtain heads for the first time. What's the sample space of this experiment? 
It might happen the first flip, it might happen in the tenth flip. Heads for the first time might occur in the millionth flip. So the outcome of this experiment is going to be an integer and there's no bound to that integer. You might have to wait very much until that happens. So the natural sample space is the set of all possible integers. Somebody tells you some information about the probability law. The probability that you have to wait for n flips is equal to two to the minus n. Where did this come from? That's a separate story. Where did it come from? Somebody tells this to us, and those probabilities are plotted here as a function of n. And you're asked to find the probability that the outcome is an even number. How do you go about calculating that probability? So the probability of being an even number is the probability of the subset that consists of just the even numbers. So it would be a subset of this kind, that includes two, four, and so on. So any reasonable person would say, well the probability of obtaining an outcome that's either two or four or six and so on is equal to the probability of obtaining a two, plus the probability of obtaining a four, plus the probability of obtaining a six, and so on. These probabilities are given to us. So here I have to do my algebra. I add this geometric series and I get an answer of 1/3. That's what any reasonable person would do. But the person who only knows the axioms that they posted just a little earlier may get stuck. They would get stuck at this point. How do we justify this? We had this property for the union of disjoint sets and the corresponding property that tells us that the total probability of finitely many things, outcomes, is the sum of their individual probabilities. But here we're using it on an infinite collection. The probability of infinitely many points is equal to the sum of the probabilities of each one of these. To justify this step we need to introduce one additional rule, an additional axiom, that tells us that this step is actually legitimate. And this is the countable additivity axiom, which is a little stronger, or quite a bit stronger, than the additivity axiom we had before. It tells us that if we have a sequence of sets that are disjoint and we want to find their total probability, then we are allowed to add their individual probabilities. So the picture might be such as follows. We have a sequence of sets, A1, A2, A3, and so on. I guess in order to fit them inside the sample space, the sets need to get smaller and smaller perhaps. They are disjoint. We have a sequence of such sets. The total probability of falling anywhere inside one of those sets is the sum of their individual probabilities. A key subtlety that's involved here is that we're talking about a sequence of events. By "sequence" we mean that these events can be arranged in order. I can tell you the first event, the second event, the third event, and so on. So if you have such a collection of events that can be ordered as first, second, third, and so on, then you can add their probabilities to find the probability of their union. So this point is actually a little more subtle than you might appreciate at this point, and I'm going to return to it at the beginning of the next lecture. For now, enjoy the first week of classes and have a good weekend. Thank you.
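The countable additivity step in this example can at least be checked numerically: the partial sums of 2 to the minus 2, plus 2 to the minus 4, plus 2 to the minus 6, and so on, do approach 1/3. A quick sketch:

```python
# Probability that the first head appears on an even-numbered flip:
# countable additivity says it is the sum over k of 2**(-2k), which should be 1/3.
partial = 0.0
for k in range(1, 60):
    partial += 2.0 ** (-2 * k)

print(partial)                               # prints a number extremely close to 1/3
print(abs(partial - 1.0 / 3.0) < 1e-12)      # True
```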
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
The_Probability_of_the_Difference_of_Two_Events.txt
Hi. In this problem, we're going to use the set of probability axioms to derive the probability of the difference of two events. Now, before we get started, there's one thing you might notice: the equation we're trying to prove is actually quite complicated. And I don't like it either, so the first thing we're going to do is find a simpler notation for the events that we're interested in. So we start with two events, A and B, and there might be some intersection between the two events. We'll label the set of points or samples in A that are not in B as a set C. So C will be A intersection B complement. Similarly, for all points that are in B but not in A, this area, we'll call it D. And D will be the set A complement intersection B. And finally, for points that are in the intersection of A and B, we'll call it E. So E is A intersection B. And for the rest of our problem, we're going to be using the notation C, D, and E instead of whatever's down below. If we use this notation, we can rewrite our objective as the following. We want to show that the probability of C union D is equal to the probability of the event A plus the probability of B minus twice the probability of E. And that will be our goal for the problem. Now, let's take a minute to review what the axioms are, what the probability axioms are. The first one says non-negativity. We take any event A, then the probability of A must be at least 0. The second, normalization, says the probability of the entire space, the entire sample space omega, must be equal to 1. And finally, the additivity axiom, which will be the axiom that we're going to use for this problem, says that if there are two events, A and B, that are disjoint-- which means they don't have anything in common, and therefore the intersection is the empty set-- then the probability of their union will be equal to the probability of A plus the probability of B. For the rest of the problem, I will refer to this axiom as add. So whenever we invoke this axiom, I'll write "add" on the board. Let's get started. First, we'll invoke the additivity axiom to argue that the probability of C union D is simply the sum of the probability of C plus the probability of D. Why is this true? We can apply this axiom because the set C here and the set D here, they're completely disjoint from each other. And in a similar way, we'll also notice the following. We see that A is equal to the union of the set C and E. And also, C and E are disjoint from each other, because C and E by definition don't share any points. And therefore, we have that the probability of A is equal to the probability of C plus the probability of E. Now, in a similar way, the probability of event B can also be written as the probability of D plus the probability of E, because event B is the union of D and E, and D and E are disjoint from each other. So we again invoke the additivity axiom. Now, this should be enough to prove our final claim. We have the probability of C union D. By the very first line, we see this is simply the probability of C plus the probability of D. Now, I'm going to insert two terms here to make the connection with the second part of the equation more obvious. That is, I will write probability of C plus probability of E, plus probability of D plus probability of E. Now, I've just added two terms here-- probability of E, twice. So to keep the equality valid, we also subtract out two times the probability of E. Hence this equality is valid. So if we look at this equation, we see that there are two parts here that we've already seen before right here.
The very first parenthesis is equal to the probability of A. And the value of the second parenthesis is equal to the probability of B. We just derived these here. And finally, we have the minus 2 probability of E. This line plus this line gives us the final equation. And that will be the answer for the problem.
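The identity can also be sanity-checked numerically on any small finite model. Here is a sketch with a made-up sample space and made-up events A and B; the particular outcomes and probabilities are illustrative assumptions, not part of the original problem.

```python
from fractions import Fraction

# A made-up finite model, just to sanity-check P(C u D) = P(A) + P(B) - 2 P(E).
prob = {1: Fraction(1, 8), 2: Fraction(1, 8), 3: Fraction(1, 4),
        4: Fraction(1, 4), 5: Fraction(1, 4)}

def P(event):
    return sum(prob[o] for o in event)

A = {1, 2, 3}
B = {3, 4}

C = A - B            # points in A but not in B
D = B - A            # points in B but not in A
E = A & B            # points in both A and B

left = P(C | D)
right = P(A) + P(B) - 2 * P(E)
print(left, right, left == right)    # 1/2 1/2 True
```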
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Flipping_a_Coin_a_Random_Number_of_Times.txt
In this problem, we're looking at a two-stage process in which, in the first stage, we roll a fair die which has four faces to obtain a number N, where N belongs to the set 0, 1, 2, and 3 with equal probability. Now, given the result of the die roll, N, we toss a fair coin N times and get K heads from the coin tosses. For instance, if from the first die roll we get N equal to 3, then we'll toss a coin 3 times. Let's say the outcome is heads, heads, and tails. And that will give us K equal to 2. For part A, we're asked to compute the PMF for N, which is the result of the first die roll. Now, since we had assumed the die roll was uniformly distributed in the set 0, 1, 2, and 3, we have that the chance of N being equal to any little n is equal to 1/4 if n is in the set 0, 1, 2, 3, and 0 otherwise. If we were to plot this in a figure, we'll have the following plot. For part B, things are getting a little more complicated. This time, we want to compute the joint PMF between N and K for N equal to little n and K equal to little k. What we'll do first is to use the law of conditional probability to break the joint probability into the product of the probability of K is equal to little k conditional on N is equal to little n, multiplied by the probability that N is equal to little n. Now, the second term right here is simply the PMF of N, which we computed earlier. So this gives us 1/4 times the probability that K is equal to little k conditional on N equal to little n, for all n in the set 0, 1, 2, and 3. Now, clearly, if n is not one of those four values, this event couldn't have happened in the first place, and hence the joint PMF will be equal to 0. We'll now go over all the cases for little n in this expression right here. The first case is the simplest. If we assume that little n is equal to 0, that means the die roll was 0, and hence we're not tossing any coins afterwards. And this implies that we must have K equal to 0, which, mathematically speaking, is equivalent to saying that the probability of K equal to 0 conditional on N equal to 0 is 1, and the probability of K being any other value conditional on N equal to 0 is 0. So we're done with the case that little n is equal to 0. Now, let's say little n is in the set 1, 2, and 3. In this case, we want to notice that, after having observed the value of N, all N coin tosses are conditionally independent of each other. What this means is that the total number of heads in the subsequent coin tosses is equal in distribution to a binomial random variable with parameters n and 1/2. Here n is the number of trials, and 1/2 is there because the coin is fair. And the reason it is a binomial random variable, again, is because the coin tosses are independent conditional on the outcome of the die roll. And now we're done, since we know what the binomial distribution looks like given parameters n and 1/2. And we'll simply substitute, for each case of n, the conditional distribution of K back into the product we had earlier, which in turn will give us the joint PMF. This table summarizes the PMF we were computing earlier, P of N, K at little n and little k. Now, as we saw before, if n is equal to 0, the only possibility is k equal to 0. And this is scaled by the probability of n equal to 0, which is 1/4. For any other value of n, we see that the distribution of k, conditional on n, is the same as a binomial random variable with n trials. And again, every entry here is scaled by 1/4. And this completes part B.
In part C, we're asked for the conditional PMF of K conditioned on the value of N being equal to 2. Now, as we discussed in part B, when N is equal to 2, we're essentially flipping a fair coin twice, and this should give us the same distribution as a binomial random variable with parameters 2 and 1/2. Now, 2 is the number of flips, and 1/2 is the chance of seeing a head in each flip. And that gives us the following distribution. But there's another way to see this. It's to write P of K given N, at little k given 2, using the law of conditional probability, as P of K, N-- the joint PMF-- at k and 2, divided by the probability that N is equal to 2. Now, we know that the probability of N equal to 2 is simply 1/4, so this gives us 4 times the joint PMF of K and N at k and 2. In other words, in order to arrive at the distribution right here, [INAUDIBLE] to go back to the table we had earlier and look at the row where n is equal to 2 and multiply each number by 4. Finally, in part D, we're asked for the conditional distribution of N, written as P of N given K, of N equal to little n conditional on K equal to 2. Again, we'll apply the formula for conditional probability. This is equal to the joint PMF evaluated at n and 2, divided by the probability of K being equal to 2. Since we have computed the entire table of the joint PMF, this shouldn't be too difficult. In particular, for the denominator, the probability that K is equal to 2, we just look at the column right here. So the entries in this column show all the cases where k can be equal to 2. And in fact, we can see that k can be equal to 2 only if n is equal to 2 or 3. Clearly, if you toss the coin fewer than 2 times, there's no chance that we'll get 2 heads. So to get this probability right here, we'll add up the numbers in these two cells. So we get P of N, K at little n and 2, divided by 1/16 plus 3/32. Now, the numerator, again, can be read off from the table right here. In particular, this tells us that there are only two possibilities: either n is equal to 2 or n is equal to 3. When n is equal to 2, this quantity gives us 1/16-- reading off this cell-- divided by 1/16 plus 3/32. And the remaining probability goes to the case where n is equal to 3. So this is 3/32 divided by 1/16 plus 3/32. These simplify to 2/5 and 3/5. And this distribution gives us the following plot. And this completes our problem.
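Since every entry of the joint PMF has the simple form 1/4 times a binomial probability, the table and both conditional PMFs can be reproduced in a few lines. Here is a sketch; the formula is the one derived in part B, and the code is just one way of tabulating it.

```python
from fractions import Fraction
from math import comb

# Joint PMF from part B: P(N = n, K = k) = (1/4) * C(n, k) * (1/2)**n
# for n in {0, 1, 2, 3} and 0 <= k <= n, and 0 otherwise.
def p_joint(n, k):
    if n not in (0, 1, 2, 3) or k < 0 or k > n:
        return Fraction(0)
    return Fraction(1, 4) * comb(n, k) * Fraction(1, 2) ** n

# Part C: P(K = k | N = 2) = p_joint(2, k) / P(N = 2), where P(N = 2) = 1/4.
p_N2 = Fraction(1, 4)
print([str(p_joint(2, k) / p_N2) for k in range(3)])    # ['1/4', '1/2', '1/4']

# Part D: P(N = n | K = 2) = p_joint(n, 2) / P(K = 2).
p_K2 = sum(p_joint(n, 2) for n in range(4))              # 1/16 + 3/32 = 5/32
print([str(p_joint(n, 2) / p_K2) for n in range(4)])     # ['0', '0', '2/5', '3/5']
```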
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
PMF_of_a_Function_of_a_Random_Variable.txt
Hey guys. Welcome back. Today, we're going to be working on a problem that asks you to find the PMF of a function of a random variable. So let's just jump right in. The problem statement gives you the PMF for a random variable called x. So we're told that there's this random variable x that takes on values minus 3, minus 2, minus 1, 1, 2, and 3. And for each of those values, the probability mass lying over that value is given by this formula, x squared over a. Now I didn't write it here to save room, but we're also told that a is a real number that is greater than 0. And we're told that the probability of x taking on any value outside of the set is 0. Now we're asked to do two things in the problem. First is to find the value of the parameter a. And that's sort of a natural question to ask, because if you think about it, the PMF isn't fully specified. And in fact, if you plug in the wrong number for a, you actually won't get a valid PMF. So we'll explore that idea in the first part. And then the second part, you're given a new random variable called z. And z happens to be a function of x. In fact, it's equal to x squared. And then you're asked to compute that PMF. So this problem is a good practice problem. I think, at this point, you guys are sort of newly acquainted with the idea of a PMF, or probability mass function. So this problem will hopefully help you get more familiar with that concept and how to manipulate PMFs. And by the way, just to make sure we're all on the same page, what does a PMF really tell you? So p sub X, where this is a capital X, because the convention in this class is to use capital letters for random variables. So p X of k, this is defined to be the probability that your random variable X takes on a value of k. So essentially, this says-- and this is just some number. So in our particular case, this would be equal to k squared over a. And how you can interpret this is this px guy is sort of like a machine. He takes in some value that your random variable could take on, and then he spits out the amount of probability mass lying over that value. OK. So now that we've done that quick recap, let's get back to the first part of the problem. So we have this formula for px of x, and we need to solve for a. So in order to do that, we're going to use one of our axioms of probability to set up an equation. And then we can solve precisely for a. So namely, we know that every PMF must sum to 1. And so essentially, if you sum this guy over all possible values of x, you should get a 1, and that equation will let us solve for a. So let's do that. Summation over x of px of x. So here, essentially you're only summing over these six values. So this is equal to px of minus 3, plus px of minus 2, plus px of minus 1, et cetera. Oops. px of 2 plus px of 3. OK. And again, like the interpretation as we said, this number here should be interpreted as the amount of probability mass lying over minus 3. And to help you visualize this, actually, before we go further with the computation, let's actually plot this PMF. So the amount of probability mass lying over minus 3, the way we figure that out is we take minus 3 and we plug it into this formula up here. So you get 9/a. Now you can do this for minus 2. You've got 4/a, looking at the formula. For 1, you get 1/a. And of course, this graph, it's the mirror image over 0, because of the symmetry. So hopefully this little visualization helps you understand what I'm talking about. 
And now we can just read these values off of the plot we just made. So we know px minus 3 is equal to px of 3. So we can go ahead and just take 2 times 9/a. Similarly, we get 2 times 4/a, and then plus 2 times 1/a. So now it's just a question of algebra. So simplifying this, you're going to get 18 plus 8 plus 2, divided by a. And this gives you 28/a. And as I argued before, you know that if you sum a PMF over all possible values, you must get 1. So this is equal to 1, which of course implies that a is equal to 28. So what we've shown here is that you actually don't have a choice for what value a can take on. It must take on 28. And in fact, if you plug in any other value than 28 in here, you actually are not going to have a valid PMF, because it's not going to sum to 1. OK. So I'm going to write my answer here, and then erase to give myself more room for part
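Both calculations in this problem are mechanical enough to sketch in a few lines of code: summing x squared over the six allowed values recovers a equal to 28, and, for the second part, the mass of x and of minus x is simply pushed onto the common value x squared. This is only one way to organize the bookkeeping under the PMF given in the problem statement.

```python
from fractions import Fraction

values = [-3, -2, -1, 1, 2, 3]

# Part (a): the masses x**2 / a over the six values must sum to 1, which pins down a.
a = sum(x ** 2 for x in values)
print(a)                                              # 28

p_X = {x: Fraction(x ** 2, a) for x in values}

# Part (b): PMF of Z = X**2, adding the mass of every x that maps to the same z.
p_Z = {}
for x, p in p_X.items():
    z = x ** 2
    p_Z[z] = p_Z.get(z, Fraction(0)) + p

print({z: str(p) for z, p in sorted(p_Z.items())})    # {1: '1/14', 4: '2/7', 9: '9/14'}
```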
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
11_Derived_Distributions_ctd_Covariance.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good morning. So today we're going to continue the subject from last time. So we're going to talk about derived distributions a little more, how to derive the distribution of a function of a random variable. So last time we discussed a couple of examples in which we had a function of a single variable. And we found the distribution of Y, if we're told the distribution of X. So today we're going to do an example where we deal with the function of two random variables. And then we're going to consider the most interesting example of this kind, in which we have a random variable of the form W, which is the sum of two independent, random variables. That's a case that shows up quite often. And so we want to see what exactly happens in this particular case. Just one comment that I should make. The material that we're covering now, chapter four, is sort of conceptually a little more difficult than one we have been doing before. So I would definitely encourage you to read the text before you jump and try to do the problems in your problem sets. OK, so let's start with our example, in which we're given two random variables. They're jointly continuous. And their distribution is pretty simple. They're uniform on the unit square. In particular, each one of the random variables is uniform on the unit interval. And the two random variables are independent. What we're going to find is the distribution of the ratio of the two random variables. How do we go about it? , Well, the same cookbook procedure that we used last time for the case of a single random variable. The cookbook procedure that we used for this case also applies to the case where you have a function of multiple random variables. So what was the cookbook procedure? The first step is to find the cumulative distribution function of the random variable of interest and then take the derivative in order to find the density. So let's find the cumulative. So, by definition, the cumulative is the probability that the random variable is less than or equal to the argument of the cumulative. So if we write this event in terms of the random variable of interest, this is the probability that our random variable is less than or equal to z. So what is that? OK, so the ratio is going to be less than or equal to z, if and only if the pair, (x,y), happens to fall below the line that has a slope z. OK, so we draw a line that has a slope z. The ratio is less than this number, if and only if we get the pair of x and y that falls inside this triangle. So we're talking about the probability of this particular event. Since this line has a slope of z, the height at this point is equal to z. And so we can find the probability of this event. It's just the area of this triangle. And so the area is 1 times z times 1/2. And we get the answer, z/2. Now, is this answer always correct? Now, this answer is going to be correct only if the slope happens to be such that we get a picture of this kind. So when do we get a picture of this kind? When the slope is less than 1. If I consider a different slope, a number, little z -- that happens to be a slope of that kind -- then the picture changes. And in that case, we get a picture of this kind, let's say. 
So this is a line here of slope z, again. And this is the second case, in which our number, little z, is bigger than 1. So how do we proceed? Once more, the cumulative is the probability that the ratio is less than or equal to that number. So it's the probability that we fall below the red line. So we're talking about the event, about this event. So to find the probability of this event, we need to find the area of this red shape. And one way of finding this area is to consider the whole area and subtract the area of this triangle. So let's do it this way. It's going to be 1 minus the area of the triangle. Now, what's the area of the triangle? It's 1/2 times this side, which is 1, times this side. How big is that side? Well, z is the ratio y over x, and at this corner point we have y/x = z and y = 1. This means that x is 1/z. So the coordinate of this point is 1/z, and that's where the factor of 1/z comes from. And we're basically done. I guess if you want to have a complete answer, you should also give the formula for z less than 0. What is the cumulative when z is less than 0, the probability that you get a ratio that's negative? Well, since our random variables are positive, there's no way that you can get a negative ratio. So the cumulative down there is equal to 0. So we can plot the cumulative. And we can take its derivative in order to find the density. So the cumulative that we got starts at 0, when z is negative. Then it starts going up in proportion to z, at a slope of 1/2. So this takes us up to z equal to 1. And then it starts increasing towards 1, according to this function. When you let z go to infinity, the cumulative is going to go to 1. And it has a shape of, more or less, this kind. So now, to get the density, we just take the derivative. And the density is, of course, 0 down here. Up here the derivative is just 1/2. And beyond that point we need to take the derivative of this expression. And the derivative is going to be 1/2 times 1 over z-squared. So it's going to be a shape of this kind. And we're done. So you see that problems involving functions of multiple random variables are no harder than problems that deal with a function of a single random variable. The general procedure is, again, exactly the same. You first find the cumulative, and then you differentiate. The only extra difficulty will be that when you calculate the cumulative, you need to find the probability of an event that involves multiple random variables. And sometimes this could be a little harder to do. By the way, since we dealt with this example, just a couple of questions. What do you think is going to be the expected value of the random variable Z? Let's see, the expected value of the random variable Z is going to be the integral of z times the density. And the density is equal to 1/2 for z going from 0 to 1. And then there's another contribution from 1 to infinity. There the density is 1/(2z-squared). And we get the z, since we're dealing with expectations, and the dz. So what is this integral? Well, if you look here, you're integrating 1/z, all the way to infinity. 1/z has an integral, which is the logarithm of z. And since the logarithm goes to infinity, this means that this integral is also infinite. So the expectation of the random variable Z is actually infinite in this example. There's nothing wrong with this. Lots of random variables have infinite expectations.
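The closed-form answers for this example, z/2 for z between 0 and 1 and 1 minus 1/(2z) for z above 1, are easy to spot-check by simulation. Here is a rough sketch; the sample size is arbitrary, and with finitely many draws the agreement is of course only approximate.

```python
import random

random.seed(0)
n = 200_000

# Draw (X, Y) uniformly on the unit square and form the ratio Z = Y / X.
# Using 1 - random() keeps X strictly positive, so there is no division by zero.
samples = [random.random() / (1.0 - random.random()) for _ in range(n)]

def empirical_cdf(z):
    return sum(1 for s in samples if s <= z) / n

def exact_cdf(z):
    return z / 2 if z <= 1 else 1 - 1 / (2 * z)

for z in (0.5, 1.0, 2.0, 5.0):
    print(z, round(empirical_cdf(z), 3), exact_cdf(z))
# the empirical column should come out close to 0.25, 0.5, 0.75, and 0.9 respectively
```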
If the tail of the density falls kind of slowly, as the argument goes to infinity, then it may well turn out that you get an infinite integral. So that's just how things often are. Nothing strange about it. And now, since we are still in this example, let me ask another question. Would we reason, on the average, would it be true that the expected value of Z -- remember that Z is the ratio Y/X -- could it be that the expected value of Z is this number? Or could it be that it's equal to this number? Or could it be that it's none of the above? OK, so how many people think this is correct? Small number. How many people think this is correct? Slightly bigger, but still a small number. And how many people think this is correct? OK, that's-- this one wins the vote. OK, let's see. This one is not correct, just because there's no reason it should be correct. So, in general, you cannot reason on the average. The expected value of a function is not the same as the same function of the expected values. This is only true if you're dealing with linear functions of random variables. So this is not-- this turns out to not be correct. How about this one? Well, X and Y are independent, by assumption. So 1/X and Y are also independent. Why is this? Independence means that one random variable does not convey any information about the other. So Y doesn't give you any information about X. So Y doesn't give you any information about 1/X. Or to put it differently, if two random variables are independent, functions of each one of those random variables are also independent. If X is independent from Y, then g(X) is independent of h(Y). So this applies to this case. These two random variables are independent. And since they are independent, this means that the expected value of their product is equal to the product of the expected values. So this relation actually is true. And therefore, this is not true. OK. Now, let's move on. We have this general procedure of finding the derived distribution by going through the cumulative. Are there some cases where we can have a shortcut? Turns out that there is a special case or a special structure in which we can get directly from densities to densities using directly just a formula. And in that case, we don't have to go through the cumulative. And this case is also interesting, because it gives us some insight about how one density changes to a different density and what affects the shape of those densities. So the case where things easy is when the transformation from one random variable to the other is a strictly monotonic one. So there's a one-to-one relation between x's and y's. Here we can reason directly in terms of densities by thinking in terms of probabilities of small intervals. So let's look at the small interval on the x-axis, like this one, when X ranges from-- where capital X ranges from a small x to a small x plus delta. So this is a small interval of length delta. Whenever X happens to fall in this interval, the random variable Y is going to fall in a corresponding interval up there. So up there we have a corresponding interval. And these two intervals, the red and the blue interval-- this is the blue interval. And that's the red interval. These two intervals should have the same probability. They're exactly the same event. When X falls here, g(X) happens to fall in there. So we can sort of say that the probability of this little interval is the same as the probability of that little interval. 
And we know that probabilities of little intervals have something to do with densities. So what is the probability of this little interval? It's the density of the random variable X, at this point, times the length of the interval. How about the probability of that interval? It's going to be the density of the random variable Y times the length of that little interval. Now, this interval has length delta. Does that mean that this interval also has length delta? Well, not necessarily. The length of this interval has something to do with the slope of your function g. So slope is dy by dx. Is how much-- the slope tells you how big is the y interval when you take an interval x of a certain length. So the slope is what multiplies the length of this interval to give you the length of that interval. So the length of this interval is delta times the slope of your function. So the length of the interval is delta times the slope of the function, approximately. So the probability of this interval is going to be the density of Y times the length of the interval that we are considering. So this gives us a relation between the density of X, evaluated at this point, to the density of Y, evaluated at that point. The two densities are closely related. If these x's are very likely to occur, then this is big, which means that that density will also be big. If these x's are very likely to occur, then those y's are also very likely to occur. But there's also another factor that comes in. And that's the slope of the function at this particular point. So we have this relation between the two densities. Now, in interpreting this equation, you need to make sure what's the relation between the two variables. I have both little x's and little y's. Well, this formula is true for an (x,y) pair, that they're related according to this particular function. So if I fix an x and consider the corresponding y, then the densities at those x's and corresponding y's will be related by that formula. Now, in the end, you want to come up with a formula that just gives you the density of Y as a function of y. And that means that you need to eliminate x from the picture. So let's see how that would go in an example. So suppose that we're dealing with the function y equal to x cubed, in which case our function, g(x), is the function x cubed. And if x cubed is equal to a little y, If we have a pair of x's and y's that are related this way, then this means that x is going to be the cubic root of y. So this is the formula that takes us back from y's to x's. This is the direct function from x, how to construct y. This is essentially the inverse function that tells us, from a given y what is the corresponding x. Now, if we write this formula, it tells us that the density at the particular x is going to be the density at the corresponding y times the slope of the function at the particular x that we are considering. The slope of the function is 3x squared. Now, we want to end up with a formula for the density of Y. So I'm going to take this factor, send it to the other side. But since I want it to be a function of y, I want to eliminate the x's. And I'm going to eliminate the x's using this correspondence here. So I'm going to get the density of X evaluated at y to the 1/3. And then this factor in the denominator, it's 1/(3y to the power 2/3). So we end up finally with the formula for the density of the random variable Y. 
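As a concrete check of this formula, suppose, purely as an illustrative assumption since the lecture keeps the density of X general, that X is uniform on the interval (0, 1). Then f_X equals 1 there, the formula gives f_Y(y) = 1/(3 y^(2/3)), and this should match the derivative of the exact cumulative P(Y <= y) = P(X <= y^(1/3)) = y^(1/3). A short numerical sketch:

```python
# Illustrative assumption only: X uniform on (0, 1), and Y = X**3.
# Derived density from the formula above: f_Y(y) = 1 / (3 * y**(2/3)) for 0 < y < 1.

def f_Y(y):
    return 1.0 / (3.0 * y ** (2.0 / 3.0))

def cdf_Y(y):
    # Exact CDF in this special case: P(Y <= y) = P(X <= y**(1/3)) = y**(1/3).
    return y ** (1.0 / 3.0)

h = 1e-6
for y in (0.1, 0.3, 0.5, 0.9):
    slope_of_cdf = (cdf_Y(y + h) - cdf_Y(y - h)) / (2 * h)   # numerical derivative
    print(y, round(f_Y(y), 6), round(slope_of_cdf, 6))
# the two computed columns agree, as the change-of-variables formula predicts
```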
And this is the same answer that you would get if you go through this exercise using the cumulative distribution function method. You end up getting the same answer. But here we sort of get it directly. Just to get a little more insight as to why the slope comes in-- suppose that we have a function like this one. So the function is sort of flat, then moves quickly, and then becomes flat again. What should be -- and suppose that X has some kind of reasonable density, some kind of flat density. Suppose that X is a pretty uniform random variable. What's going to happen to the random variable Y? What kind of distribution should it have? What are the typical values of the random variable Y? Either x falls here, and y is a very small number, or-- let's take that number here to be -- let's say 2 -- or x falls in this range, and y takes a value close to 2. And there's a small chance that x's will be somewhere in the middle, in which case y takes intermediate values. So what kind of shape do you expect for the distribution of Y? There's going to be a fair amount of probability that Y takes values close to 0. There's a small probability that Y takes intermediate values. That corresponds to the case where x falls in here. That's not a lot of probability. So the probability that Y takes values between 0 and 2, that's kind of small. But then there's a lot of x's that produces y's that are close to 2. So there's a significant probability that Y would take values that are close to 2. So you-- the density of Y would have a shape of this kind. By looking at this picture, you can tell that it's most likely that either x will fall here or x will fall there. So the g(x) is most likely to be close to 0 or to be close to 2. So since y is most likely to be close to 0 or close to most of the probability of y is here. And there's a small probability of being in between. Notice that the y's that get a lot of probability are those y's associated with flats regions off your g function. When the g function is flat, that gives you big densities for Y. So the density of Y is inversely proportional to the slope of the function. And that's what you get from here. The density of Y is-- send that term to the other side-- is inversely proportional to the slope of the function that you're dealing with. OK, so this formula works nicely for the case where the function is one-to-one. So we can have a unique association between x's and y's and through an inverse function, from y's to x's. It works for the monotonically increasing case. It also works for the monotonically decreasing case. In the monotonically decreasing case, the only change that you need to do is to take the absolute value of the slope, instead of the slope itself. OK, now, here's another example or a special case. Let's talk about the most interesting case that involves a function of two random variables. And this is the case where we have two independent, random variables, and we want to find the distribution of the sum of the two. We're really interested in the continuous case. But as a warm-up, it's useful to look at the discrete case first of discrete random variables. Let's say we want to find the probability that the sum of X and Y is equal to a particular number. And to illustrate this, let's take that number to be equal to 3. What's the probability that the sum of the two random variables is equal to 3? To find the probability that the sum is equal to 3, you consider all possible ways that you can get the sum of 3. 
And the different ways are the points in this picture. And they correspond to a line that goes this way. So the probability that the sum is equal to a certain number is the probability that -- is the sum of the probabilities of all of those points. What is a typical point in this picture? In a typical point, the random variable X takes a certain value. And Y takes the value that's needed so that the sum is equal to W. Any combination of an x with a w minus x, any such combination gives you a sum of w. So the probability that the sum is w is the sum over all possible x's. That's over all these points of the probability that we get a certain x. Let's say x equals 2 times the corresponding probability that random variable Y takes the value 1. And why am I multiplying probabilities here? That's where we use the assumption that the two random variables are independent. So the probability that X takes a certain value and Y takes the complementary value, that probability is the product of two probabilities because of independence. And when we write that into our usual PMF notation, it's a formula of this kind. So this formula is called the convolution formula. It's an operation that takes one PMF and another PMF-- p we're given the PMF's of X and Y -- and produces a new PMF. So think of this formula as giving you a transformation. You take two PMF's, you do something with them, and you obtain a new PMF. This procedure, what this formula does is -- nicely illustrated sort of by mechanically. So let me show you a picture here and illustrate how the mechanics go, in general. So you don't have these slides, but let's just reason through it. So suppose that you are given the PMF of X, and it has this shape. You're given the PMF of Y. It has this shape. And somehow we are going to do this calculation. Now, we need to do this calculation for every value of W, in order to get the PMF of W. Let's start by doing the calculation just for one case. Suppose the W is equal to 0, in which case we need to find the sum of Px(x) and Py(-x). How do you do this calculation graphically? It involves the PMF of X. But it involves the PMF of Y, with the argument reversed. So how do we plot this? Well, in order to reverse the argument, what you need is to take this PMF and flip it. So that's where it's handy to have a pair of scissors with you. So you cut this down. And so now you take the PMF of the random variable Y and just flip it. So what you see here is this function where the argument is being reversed. And then what do we do? We cross-multiply the two plots. Any entry here gets multiplied with the corresponding entry there. And we consider all those products and add them up. In this particular case, the flipped PMF doesn't have any overlap with the PMF of X. So we're going to get an answer that's equal to 0. So for w's equal to 0, the Pw is going to be equal to 0, in this particular plot. Now if we have a different value of w -- oops. If we have a different value of the argument w, then we have here the PMF of Y that's flipped and shifted by an amount of w. So the correct picture of what you do is to take this and displace it by a certain amount of w. So here, how much did I shift it? I shifted it until one falls just below 4. So I have shifted by a total amount of 5. So 0 falls under 5, whereas 0 initially was under 0. So I'm shifting it by 5 units. And I'm now going to cross-multiply and add. Does this give us the correct-- does it do the correct thing? 
Yes, because a typical term will be the probability that this random variable is 3 times the probability that this random variable is 2. That's a particular way that you can get a sum of 5. If you see here, the way that things are aligned, it gives you all the different ways that you can get the sum of 5. You can get the sum of 5 by having 1 + 4, or 2 + 3, or 3 + 2, or 4 + 1. You need to add the probabilities of all those combinations. So you take this times that. That's one product term. Then this times 0, this times that. And so 1-- you cross-- you find all the products of the corresponding terms, and you add them together. So it's a kind of handy mechanical procedure for doing this calculation, especially when the PMF's are given to you in terms of a picture. So the summary of these mechanics are just what we did, is that you put the PMF's on top of each other. You take the PMF of Y. You flip it. And for any particular w that you're interested in, you take this flipped PMF and shift it by an amount of w. Given this particular shift for a particular value of w, you cross-multiply terms and then accumulate them or add them together. What would you expect to happen in the continuous case? Well, the story is familiar. In the continuous case, pretty much, almost always things work out the same way, except that we replace PMF's by PDF's. And we replace sums by integrals. So there shouldn't be any surprise here that you get a formula of this kind. The density of W can be obtained from the density of X and the density of Y by calculating this integral. Essentially, what this integral does is it fits a particular w of interest. We're interested in the probability that the random variable, capital W, takes a value equal to little w or values close to it. So this corresponds to the event, which is this particular line on the two-dimensional space. So we need to find the sort of odd probabilities along that line. But since the setting is continuous, we will not add probabilities. We're going to integrate. And for any typical point in this picture, the probability of obtaining an outcome in this neighborhood is the-- has something to do with the density of that particular x and the density of the particular y that would compliment x, in order to form a sum of w. So this integral that we have here is really an integral over this particular line. OK, so I'm going to skip the formal derivation of this result. There's a couple of derivations in the text. And the one which is outlined here is yet a third derivation. But the easiest way to make sense of this formula is to consider what happens in the discrete case. So for the rest of the lecture we're going to consider a few extra, more miscellaneous topics, a few remarks, and a few more definitions. So let's change-- flip a page and consider the next mini topic. There's not going to be anything deep here, but just something that's worth being familiar with. If you have two independent, normal random variables with certain parameters, the question is, what does the joined PDF look like? So if they're independent, by definition the joint PDF is the product of the individual PDF's. And the PDF's each one of them involves an exponential of something. The product of two exponentials is the exponential of the sum. So you just add the exponents. So this is the formula for the joint PDF. Now, you look at that formula and you ask, what does it look like? OK, you can understand it, a function of two variables by thinking about the contours of this function. 
Look at the points at which the function takes a constant value. Where is it? When is it constant? What's the shape of the set of points where this is a constant? So consider all x's and y's for which this expression here is a constant, that this expression here is a constant. What kind of shape is this? This is an ellipse. And it's an ellipse that's centered at-- it's centered at mu x, mu y. These are the means of the two random variables. If those sigmas were equal, that ellipse would be actually a circle. And you would get contours of this kind. But if, on the other hand, the sigmas are different, you're going to get an ellipse that has contours of this kind. So if my contours are of this kind, that corresponds to what? Sigma x being bigger than sigma y or vice versa. OK, contours of this kind basically tell you that X is more likely to be spread out than Y. So the range of possible x's is bigger. And X out here is as likely as a Y up there. So big X's have roughly the same probability as certain smaller y's. So in a picture of this kind, the variance of X is going to be bigger than the variance of Y. So depending on how these variances compare with each other, that's going to determine the shape of the ellipse. If the variance of Y we're bigger, then your ellipse would be the other way. It would be elongated in the other dimension. Just visualize it a little more. Let me throw at you a particular picture. This is one-- this is a picture of one special case. Here, I think, the variances are equal. That's the kind of shape that you get. It looks like a two-dimensional bell. So remember, for a normal random variables, for a single random variable you get a PDF that's bell shaped. That's just a bell-shaped curve. In the two-dimensional case, we get the joint PDF, which is bell shaped again. And now it looks more like a real bell, the way it would be laid out in ordinary space. And if you look at the contours of this function, the places where the function is equal, the typcial contour would have this shape here. And it would be an ellipse. And in this case, actually, it will be more like a circle. So these would be the different contours for different-- so the contours are places where the joint PDF is a constant. When you change the value of that constant, you get the different contours. And the PDF is, of course, centered around the mean of the two random variables. So in this particular case, since the bell is centered around the (0, 0) vector, this is a plot of a bivariate normal with 0 means. OK, there's-- bivariate normals are also interesting when your bell is oriented differently in space. We talked about ellipses that are this way, ellipses that are this way. You could imagine also bells that you take them, you squash them somehow, so that they become narrow in one dimension and then maybe rotate them. So if you had-- we're not going to go into this subject, but if you had a joint pdf whose contours were like this, what would that correspond to? Would your x's and y's be independent? No. This would indicate that there's a relation between the x's and the y's. That is, when you have bigger x's, you would expect to also get bigger y's. So it would be a case of dependent normals. And we're coming back to this point in a second. Before we get to that point in a second that has to do with the dependencies between the random variables, let's just do another digression. 
If we have our two normals that are independent, as we discussed here, we can go and apply the formula, the convolution formula that we were just discussing. Suppose you want to find the distribution of the sum of these two independent normals. How do you do this? There is a closed-form formula for the density of the sum, which is this one. We do have formulas for the density of X and the density of Y, because both of them are normal, random variables. So you need to calculate this particular integral here. It's an integral with respect to x. And you have to calculate this integral for any given value of w. So this is an exercise in integration, which is not very difficult. And it turns out that after you do everything, you end up with an answer that has this form. And you look at that, and you suddenly recognize, oh, this is normal. And conclusion from this exercise, once it's done, is that the sum of two independent normal random variables is also normal. Now, the mean of W is, of course, going to be equal to the sum of the means of X and Y. In this case, in this formula I took the means to be 0. So the mean of W is also going to be 0. In the more general case, the mean of W is going to be just the sum of the two means. The variance of W is always the sum of the variances of X and Y, since we have independent random variables. So there's no surprise here. The main surprise in this calculation is this fact here, that the sum of independent normal random variables is normal. I had mentioned this fact in a previous lecture. Here what we're doing is to basically outline the argument that justifies this particular fact. It's an exercise in integration, where you realize that when you convolve two normal curves, you also get back a normal one once more. So now, let's return to the comment I was making here, that if you have a contour plot that has, let's say, a shape of this kind, this indicates some kind of dependence between your two random variables. So instead of a contour plot, let me throw in here a scattered diagram. What does this scattered diagram correspond to? Suppose you have a discrete distribution, and each one of the points in this diagram has positive probability. When you look at this diagram, what would you say? I would say that when y is big then x also tends to be larger. So bigger x's are sort of associated with bigger y's in some average, statistical sense. Whereas, if you have a picture of this kind, it tells you in association that the positive y's tend to be associated with negative x's most of the time. Negative y's tend to be associated mostly with positive x's. So here there's a relation that when one variable is large, the other one is also expected to be large. Here there's a relation of the opposite kind. How can we capture this relation between two random variables? The way we capture it is by defining this concept called the covariance, that looks at the relation of was X bigger than usual? That's the question, whether this is positive. And how does this relate to the answer-- to the question, was Y bigger than usual? We're asking-- by calculating this quantity, we're sort of asking the question, is there a systematic relation between having a big X with having a big Y? OK , to understand more precisely what this does, let's suppose that the random variable has 0 means, So that we get rid of this-- get rid of some clutter. So the covariance is defined just as this product. What does this do? 
If positive x's tends to go together with positive y's, and negative x's tend to go together with negative y's, this product will always be positive. And the covariance will end up being positive. In particular, if you sit down with a scattered diagram and you do the calculations, you'll find that the covariance of X and Y in this diagram would be positive, because here, most of the time, X times Y is positive. There's going to be a few negative terms, but there are fewer than the positive ones. So this is a case of a positive covariance. It indicates a positive relation between the two random variables. When one is big, the other also tends to be big. This is the opposite situation. Here, when one variable-- here, most of the action happens in this quadrant and that quadrant, which means that X times Y, most of the time, is negative. You get a few positive contributions, but there are few. When you add things up, the negative terms dominate. And in this case we have covariance of X and Y being negative. So a positive covariance indicates a sort of systematic relation, that there's a positive association between the two random variables. When one is large, the other also tends to be large. Negative covariance is sort of the opposite. When one tends to be large, the other variable tends to be small. OK, so what else is there to say about the covariance? One observation to make is the following. What's the covariance of X with X itself? If you plug in X here, you see that what we have is expected value of X minus expected of X squared. And that's just the definition of the variance of a random variable. So that's one fact to keep in mind. We had a shortcut formula for calculating variances. There's a similar shortcut formula for calculating covariances. In particular, we can calculate covariances in this particular way. That's just the convenient way of doing it whenever you need to calculate it. And finally, covariances are very useful when you want to calculate the variance of a sum of random variables. We know that if two random variables are independent, the variance of the sum is the sum of the variances. When the random variables are dependent, this is no longer true, and we need to supplement the formula a little bit. And there's a typo on the slides that you have in your hands. That term of 2 shouldn't be there. And let's see where that formula comes from. Let's suppose that our random variables are independent of -- not independent -- our random variables have 0 means. And we want to calculate the variance. So the variance is going to be expected value of (X1 plus Xn) squared. What you do is you expand the square. And you get the expected value of the sum of the Xi squared. And then you get all the cross terms. OK. And so now, here, let's assume for simplicity that we have 0 means. The expected value of this is the sum of the expected values of the X squared terms. And that gives us the variance. And then we have all the possible cross terms. And each one of the possible cross terms is the expected value of Xi times Xj. This is just the covariance. So if you can calculate all the variances and the covariances, then you're able to calculate also the variance of a sum of random variables. Now, if two random variables are independent, then you look at this expression. Because of independence, expected value of the product is going to be the product of the expected values. And the expected value of just this term is always equal to 0. You're expected deviation from the mean is just 0. 
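Here is a short numerical check of that variance formula (Python; the mixing matrix is an arbitrary choice whose only purpose is to create dependent random variables): the variance of the sum equals the sum of all entries of the covariance matrix, the diagonal contributing the variances and the off-diagonal entries the covariances.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = 500_000

# Create three dependent random variables by mixing independent standard normals.
z = rng.normal(size=(samples, 3))
mix = np.array([[1.0, 0.0, 0.0],
                [0.7, 1.0, 0.0],
                [0.2, -0.5, 1.0]])
x = z @ mix.T                     # columns are X1, X2, X3

s = x.sum(axis=1)                 # X1 + X2 + X3
cov = np.cov(x, rowvar=False)     # 3 x 3 sample covariance matrix

# var(X1 + X2 + X3) equals the sum of all entries of the covariance matrix.
print(s.var(), cov.sum())         # the two numbers agree up to sampling error
```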
So the covariance will turn out to be 0. So independent random variables lead to 0 covariances, although the opposite fact is not necessarily true. So covariances give you some indication of the relation between two random variables. Something that's not so convenient conceptually about covariances is that it has the wrong units. That's the same comment that we had made regarding variances. And with variances we got out of that issue by considering the standard deviation, which has the correct units. So with the same reasoning, we want to have a concept that captures the relation between two random variables and, in some sense, that doesn't have to do with the units that we're dealing. We want to have a dimensionless quantity. That tells us how strongly two random variables are related to each other. So instead of considering the covariance of just X with Y, we take our random variables and standardize them by dividing them by their individual standard deviations and take the expectation of this. So what we end up doing is the covariance of X and Y, which has units that are the units of X times the units of Y. But divide with a standard deviation, so that we get a quantity that doesn't have units. This quantity, we call it the correlation coefficient. And it's a very useful quantity, a very useful measure of the strength of association between two random variables. It's very informative, because it falls always between -1 and +1. This is an algebraic exercise that you're going to see in recitation. And the way that you interpret it is as follows. If the two random variables are independent, the covariance is going to be 0. The correlation coefficient is going to be 0. So 0 correlation coefficient basically indicates a lack of a systematic relation between the two random variables. On the other hand, when rho is large, either close to 1 or close to -1, this is an indication of a strong association between the two random variables. And the extreme case is when rho takes an extreme value. When rho has a magnitude equal to 1, it's as big as it can be. In that case, the two random variables are very strongly related. How strongly? Well, if you know one random variable, if you know the value of y, you can recover the value of x and conversely. So the case of a complete correlation is the case where one random variable is a linear function of the other random variable. In terms of a scatter plot, this would mean that there's a certain line and that the only possible (x,y) pairs that can happen would lie on that line. So if all the possible (x,y) pairs lie on this line, then you have this relation, and the correlation coefficient is equal to 1. A case where the correlation coefficient is close to 1 would be a scatter plot like this, where the x's and y's are quite strongly aligned with each other, maybe not exactly, but fairly strongly. All right, so you're going to hear a little more about correlation coefficients and covariances in recitation tomorrow.
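A quick illustration of these properties of the correlation coefficient (Python; the example data are arbitrary): rho does not change when you rescale the units, it is close to +1 for a strong positive association and close to -1 for a strong negative one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

x = rng.normal(size=n)
y = 3.0 * x + rng.normal(size=n)   # strongly, but not perfectly, related to x
y_cm = 100.0 * y                   # the same quantity expressed in different units

def correlation(a, b):
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).mean() / (a.std() * b.std())

print(correlation(x, y))      # close to +1 (about 0.95), but not exactly 1
print(correlation(x, y_cm))   # the same number: rho is dimensionless
print(correlation(x, -y))     # close to -1: a strong negative association
```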
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Network_Reliability.txt
Previously, we learned the concept of independent experiments. In this exercise, we'll see how the seemingly simple idea of independence can help us understand the behavior of quite complex systems. In particular, we'll combine the concept of independence with the idea of divide and conquer, where we break a larger system into smaller components, and then use the independence property to glue them back together. Now, let's take a look at the problem. We are given a network of connected components, and each component can be good with probability P or bad otherwise. All components are independent from each other. We say the system is operational if there exists a path connecting point A here to point B that goes through only good components. And we'd like to understand, what is the probability that the system is operational? Which we'll denote by P of A to B. Although the problem might seem a little complicated at the beginning, it turns out only two structures really matter. So let's look at each of them. In the first structure, which we call the serial structure, we have a collection of k components, each one having probability P of being good, connected one next to the other in a serial line. Now, in this structure, in order for there to be a good path from A to B, every single one of the components must be working. So the probability of having a good path from A to B is simply P times P, and so on, repeated k times, which is P raised to the kth power. Note that the reason we can write the probability this way, in terms of this product, is because of the independence property. Now, the second useful structure is the parallel structure. Here again, we have k components, one, two, through k, but this time they're connected in parallel to each other, namely they start from one point here and end at another point here, and this holds for every single component. Now, for the parallel structure to work, namely for there to exist a good path from A to B, it's easy to see that as long as one of these components works, the whole thing will work. So the probability of A to B is the probability that at least one of these components works. Or in other words, the probability of the complement of the event where all components fail. Now, if each component has probability P of being good, then the probability that all k components fail is (1 minus P) raised to the kth power, so the probability of having a good parallel structure is 1 minus (1 minus P) to the k. Again, having this expression means that we have used the property of independence. Now, there's one more observation that will be useful for us. Just like how we define two components to be independent, we can also define two collections of components to be independent from each other. For example, in this diagram, if we call the components between points C and E collection two, and the components between E and B collection three, and if we assume that the components in both collections are completely independent from each other, then it's not hard to see that collections two and three behave independently. And this will be very helpful in getting us the breakdown from complex networks to simpler elements. Now, let's go back to the original problem of calculating the probability of having a good path from point big A to point big B in this diagram. Based on that argument of independent collections, we can first divide the whole network into three collections, as you see here, from A to C, C to E and E to B.
Now, because they're independent and in a serial structure, as seen by the definition of a serial structure here, we see that the probability of A to B can be written as the probability of A to C multiplied by C to E, and finally, E to B. Now, the probability of A to C is simply P, because the collection contains only one element. And similarly, the probability of E to B is not that hard knowing the parallel structure here. We see that collection three has two components in parallel, so this probability will be given by 1 minus (1 minus P) squared. And it remains to calculate just the probability of having a good path from point C to point E. To get a value for P C to E, we notice again that this area can be treated as two components, C1 to E and C2 to E, connected in parallel. And using the parallel law we get that this probability is 1 minus (1 minus P C1 to E) multiplied by (1 minus P C2 to E). Note that I'm using two different labels, C1 and C2, to denote the same node, which is C. This is simply for making it easier to analyze the two branches, which actually start at the same node. Now, P C1 to E is a serial connection of these three elements here with another component. The first three elements are connected in parallel, and we know the probability of that part being successful is 1 minus (1 minus P) cubed, and the last one is P. And finally, P C2 to E: it's just a single component, with probability of success being P. At this point, there are no longer any unknown variables, and we have indeed obtained exact values for all the quantities that we're interested in. So starting from this equation, we can plug in the values for P C2 to E and P C1 to E back here, and then further plug in P C to E back here. That will give us the final solution, which is given by the following somewhat complicated formula. So in summary, in this problem, we learned how to use the independence property among different components to break down an entire, fairly complex network into simple modular components, and use the laws of serial and parallel connections to put the probabilities back together and come up with the overall success probability of finding a path from A to B.
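As a sketch of this divide-and-conquer computation (Python; the network layout is the one described in the exercise, with one component from A to C, a branch of three parallel components followed by one more component from C to E, in parallel with a single lower component, and two parallel components from E to B; the value p = 0.9 is an arbitrary example):

```python
def serial(*probs):
    """Probability that a chain of independent components all work."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def parallel(*probs):
    """Probability that at least one of several independent branches works."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def p_a_to_b(p):
    p_c1_e = serial(parallel(p, p, p), p)   # three parallel components, then one more in series
    p_c2_e = p                              # the single lower component from C to E
    p_c_e = parallel(p_c1_e, p_c2_e)        # the two branches from C to E in parallel
    p_e_b = parallel(p, p)                  # collection three: two components in parallel
    return serial(p, p_c_e, p_e_b)          # A->C, C->E, E->B in series

print(p_a_to_b(0.9))
```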
MIT_6041SC_Probabilistic_Systems_Analysis_and_Applied_Probability_Fall_2013
Mean_Variance_of_the_Exponential.txt
Hi. In this video, we're going to compute some useful quantities for the exponential random variable. So we're given that x is exponential with rate lambda. PDF looks like this, and the formula is here. First question, part a, what's the CDF? So let's go right in. The CDF of x is the probability that X is less than or equal to little x. Let's look at some cases here. What if little x is less than 0? Well, x random variable only takes on these non-negative values. And so the probability that X is less than or equal to some negative number is going to be 0. On the other hand, if x is greater than or equal to 0, we do actually have to integrate here. So to do that, we take the integral from minus infinity to x of fx of t-- the dummy variable here used is t. Notice that again, fx of t is going to be 0 for negative values, so we take the integral here from 0. And now we plug in for fx of t. That's minus lambda t dt. And recall that the integral of u to the a t is 1 over a times e to the a t. So here in this case, we'll get lambda, which is just a constant. And then a here is going to be negative lambda. So we get this, 0 to x. Lambdas cancel and we actually get 1 minus e to the minus lambda x. So do this. And we are done with the CDF. Now for the expectation. We use the standard formula, which is minus infinity to infinity t times fx of t dt. So again, fx of t is going to be 0 for a negative value. So we do the integral from 0. We get 0 to infinity t lambda e to the minus lambda t dt. Now, you can try all you want to get rid of this t. It's not going to go even if you try all kinds of u substitution. But at the end the day, you're going to have to pull out your calculus textbook and find the integration by parts formula, which is-- v du. So the hope is that this integral is going to be easier than the one on the left. Notice that this is the integral of one of the terms here. And this is the derivative of one of the terms. So that may help you decide on how you select u and v. In our case actually, I'm going to use u as-- t for u. Because when you take the derivative, it's going to become 1. And the derivative is what's going to go in that integral. So this is going to be dt for du. And then, dv I'm going to select as whatever's left over. It's lambda e to minus lambda t dt. So v is going to be-- we already did the integral-- minus e to the minus lambda t. And so if we do this, it's going to be negative t times e to the minus lambda t. So that's uv. Minus v, which is negative e to the minus lambda t times dt. That goes from 0 to infinity. This is evaluated from 0 to infinity. Well, what does it mean for this to be evaluated from 0 to infinity? A better and easier way to look at this is to say, well, it's going to go from 0 to x. But then you take the limit as x goes to infinity. So that's going to help us here. And this negative-- these negatives cancel. And we're left with-- let's plug in the bounds. We're left with negative x minus lambda x plus the integral of this is going to be 1 over negative lambda e to the minus lambda t evaluated from 0 to infinity. All right, so now the limit. So for the limit, notice that x increases as x goes to infinity. And this exponential decays. So they're kind of competing for each other. But the exponential is going to win because it decays way faster than x. And so this first term is going to go off-- the limit is going to go to 0. All right. For this, if you evaluate the balance, the infinity makes this 0. And 0, you're going to get 1 over lambda. 
So that's 1 over lambda. All right. And so the expectation is 1 over lambda. OK, so now what's the variance? That's part c, right? So we use the standard formula for variance, which is this. We already figured out the expectation. We just need to figure out the expectation of x squared. Well, we're just going to follow the same set of steps from before. For x squared, it's just going to be t squared, t squared, t squared, x squared. The only thing that's going to change is what we choose for u here, for the u substitution. So it's going to be t squared. So the derivative is going to change to 2t dt. v is going to be exactly the same. And so here in this term, we get negative 2t e to the minus lambda t. But there's a negative sign out here, so the negatives cancel and we're left with a positive sign here. This is going to change. All right. OK. So in order to do this integral, we can use a trick. We can move this-- so there's a 2t here. We move this 2 in here, leave the t inside. And you have to leave the t inside. But multiply by lambda and divide by lambda. Now, look at that integral. 0 to infinity t times lambda e to the minus lambda t dt. Exactly the expectation that we computed. We already did that. That is just 1 over lambda, so it's 2 over lambda times 1 over lambda. Again, the limit as x goes to infinity-- the exponential will beat x squared. No matter what polynomial we put in there, the exponential's going to win. So this is going to be 0 still. This one's going to be 2 over lambda squared. So we're left with 2 over lambda squared for expectation of x squared. And so we have 1 over lambda squared for the variance. OK, so we're done with the variance. Part d. We're given that x1, x2, and x3 are independent and identically distributed. They're exponentials with rate lambda. We're asked for the PDF of z, which is the max of x1, x2, and x2. How do we generally find a PDF? We take the CDF and then take the derivative, right? We first find the CDF, and then take the derivative. So let's do that. So first, let's see. Part d, find the CDF of z, which is going to be the probability that Z is less than or equal to little z, which is going to be equal to the probability that the max of x1, x2, x3 is less than or equal to z. And this is going to have the same sort of structure as before. If z is less than 0, x1, x2, x3 are positive-- non-negative. And so this is the probability that if you get little z less than 0, you're not going to have any probability there. And so if z is greater than or equal to 0 is where it gets interesting. We need to do something special. So the special thing here is to recognize that the probability of the max being less than or equal to z is actually also the probability of each of these random variables individually being less than or equal to z. Why is that true? One way to check whether the events-- these two events are the same is to check the two directions. One direction say, if the max of x1, x2, x3 is less than or equal to z, does that mean x1 is less than or equal to z, x2 is less than or equal to z, and x3 is less than or equal to z? Yes. OK. And then, if x1, x2, and x3 are individually less than or equal to z, then the max is also less than or equal to z. So these two events are equivalent and this is true. By independence we can break this up. And we get-- these are all CDFs of the exponential and they all have this form. So it's just going to be 1 minus e to the minus lambda z cubed. Plug this in here. 
And then, try to take the derivative to get the PDF. Let's see. So it's going to be the same, like this for z less than 0. For z greater than or equal to 0, it's going to be the derivative of this thing. Derivative of this thing is by chain rule, 3 times 1 minus e to the minus lambda z squared. Then the derivative of negative e to the minus lambda z, that's just lambda e to the minus lambda z. There we go. This is the PDF we were looking for. So last problem. We're looking for the PDF of w, which is the min of x1 and x2. So let's try this as a similar approach. Try the same thing, actually. See if it works. So w, w, w, w, min, less than or equal to w. OK. So let's see if this works. Is it true that the min-- if the min of x1 and x2 is less than or equal to w, that each of them is less than or equal to w? No, right? X1 could be less than or equal to w and x2 could be bigger than w. And the min could still be less than or equal to w. So that's definitely not true. So what do we do here? The trick is to flip it and say we want to compute the min of x1 and x2 being greater than w. In that case, let's check if we can do this trick. If the min of x1 and x2 is greater than w, then clearly x1 is bigger than w and x2 is bigger than w. And if x1 and x2 are individually bigger than w, then clearly the min's also bigger than w. So this works. And now we can use independence as before. And for this, this is just 1 minus the CDF here. So it's just going to be e to the minus lambda w for each of them. But that's the same as e to the minus lambda 2w. Or e to the 2 lambda w. So it's going to be-- Notice the similarity between this and this. The only difference is this has a 2 lambda in there. That means that w is an exponential random variable with rate 2 lambda. So then the PDF is going to be an exponential, whatever it is for an exponential. Except with rate 2 lambda. You can also take the derivative of this and find that you get this. OK, so we're done with the problems. We computed some interesting quantities for the exponential random variable in this--
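A short simulation check of these results (Python; the value of lambda is arbitrary, and the minimum is taken over two variables, matching part e of the problem, rather than the three variables used for the maximum):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n = 0.5, 1_000_000

# numpy parametrizes the exponential by its mean, which is 1/lambda.
x1 = rng.exponential(scale=1.0 / lam, size=n)
x2 = rng.exponential(scale=1.0 / lam, size=n)

print(x1.mean(), 1.0 / lam)        # both close to 2.0
print(x1.var(), 1.0 / lam**2)      # both close to 4.0

w = np.minimum(x1, x2)             # the minimum of two independent exponentials
t = 1.0
print((w > t).mean(), np.exp(-2.0 * lam * t))   # P(W > t) = e^(-2*lambda*t)
```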
SLAM_Lectures
SLAM_A_03.txt
so now we had a look at the motor Tis of the robot but what we want to know is where the robot is so we need to know how the robot ticks translate into a movement of the robot so we need a motion model let's have a look at the robot once again this is the robot has seen from above and these are the caterpillar tracks of the robot which may move at different speeds so let us just assume for a moment that these were no tracks but these were just Wheels that are placed here and here and so if we assume that the robot has some Center like here and some axis will go through here whereas the left and right motor will turn those wheels at different speeds so then our motion model is as follows the robot has a left wheel and the robot has a right wheel and now if the robot moves and say the right wheel moves a little bit faster than the left wheel then the robot will make some turn so it will look like that and the robot will turn around Point say this point is here so in the end the wheel will be here and here so the movement of the left wheel this curve segment will have a length of L and the right wheel has length of R and we need to know Alpha that's the angle we want to find out and we also do have a radius and we also know the width of the vehicle so that's the width the distance between the two wheels or tracks of the robot so having a look at that we have a situation for the right and for the left wheel well for the right wheel it looks like that this is our angle Alpha this is r and down here that is R plus the width of the vehicle so from that we can set up the equation right equals Alpha taken in radians time r plus W the width and for the left wheel we have the situation it's a little bit shorter here like that it's Alpha it's the left and that's R so for the left wheel the equation is left equal Alpha * R so if we subtract that from each other then we get Rus L equal Alpha * the width of the vehicle and so the unknown R is dropped from those equations so from that we obtain that Alpha the angle we're looking for equals r - L / by the width and now if we put that into this equation here we also find out that this unknown r that is L / by Alpha so what we can see here is we obtain two equations the first one is from the movement of the right and left wheel and the known width of the vehicle we obtain the angle Alpha and having this angle Alpha we can actually put it in here we know the left length and we know Alpha so we can compute R so what we know so far is Alpha and R and we also have to note that Alpha is not allowed to be zero because then we would have a division by zero and that makes sense because if Rus L is Zer then the robot would actually go straight and this point here would be in Infinity now our robot is somewhere in the real world let's say this is the left wheel this is the right wheel this is the axis so the robot will move in that direction and let's say around a point as we just discussed say this is the point the center around which it moves and will move up here so this will make this curve segment this will make that curve segment it will end up here so this is the moving Direction That's The Heading and we will say the heading angle we will use Theta to describe the heading angle then this Vector here will be cosine Theta sin Theta and we now construct the vector that is rectangular to this heading Direction and this will be sin minus cosine Theta you have to exchange the two terms and you have after exchanging you have to make a minus depending on if you turn 
that right by 90° you have have to make the minus down here if you would turn it left you would make the minus up here so now where is this point say this is the center around which we turn and say this here is the position of the robot P now the distance between here and here we know that that is R plus half of the width of the robot so overall we get the center is at the position here minus the distance times this Vector so after we draw up here where will be this position well we find this position by also constructing this Vector in that direction and adding r+ the half of the width to here in that direction this direction is the same as that direction after it has been turned now we determine the turn the turn is Alpha so this direction here is actually the same as that but now with Alpha added so we Ain the new position of our robot P Prime which is the center plus again R plus half of the width times the S and cosine but this time of theta plus Alpha and also we get the new heading which points in that direction up here and the new heading will be just the old heading plus Alpha and now since we want to keep that within 2 pi we just do a modo division 2 pi right and remember in Python that will be Modo 2 * pi now let's wrap that up so we are given the last position and orientation of our robot which is x y and Theta and we're also given the wheel counts of the left and right wheel and we also know the width of the vehicle which is a calibration parameter and we're looking for the new position so we first compute our angle Alpha the turn angle which is r - L / by the width from that we can compute the radius which is L / Alpha from that we can compute the center which is the old position minus r + half of the width of the robot times this Vector which US is the old heading now we can compute the new heading the new heading is the old heading plus the turn Alpha modal 2 pi and from that we can compute the new position which is the center plus again the radius plus half of the width times the S and minus cosine of the new Theta and this is all for the case if Alpha is not zero or equivalently if R is not equal to l now the second case if R equals L that's actually much much simpler so our robot is here it has this heading and if R equals L it just moves straight so ends up here it is easy to find out the formulas our Theta Prime is just Theta because we didn't change our heading where's our X Prime well that's the old X plus the distance we drove and I'll write L here now since L is equal to R you can also write R times the cosine of theta and Y is the same using the sign so this is the second set of formulas which we do need when the robot goes straight there's two more things you need to know first of all you know in the formulas you need the width of the robot so what is the width it turns out that's actually not so easy to determine because as you see here our robot doesn't have this mathematical wheels that we use to figure out the formulas rather it has this caterpillar tracks we assumed that it's some mathematical Wheels like that what I did is I took a ruler and measured the distance between the center of the two caterpillar tracks using that method I figured out the width is50 mm so and all our computations will be based on millimeters so as you remember we plotted the ticks of the motors but the tick that's what the wheel encoders deliver the tick is not the millimeters that we drove so you need a factor so for every tick that the wheel encoder counts the robot moves a little 
bit. In my case the factor is 0.349, and I figured this out by letting the robot drive straight: after driving for a certain number of ticks, I measured the distance that it actually drove. So it's 0.349, meaning one tick equals 0.349 mm in the real world.
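As a minimal sketch of the motion model just derived (Python; the structure follows the two cases above, and the calibration numbers in the example call are only illustrative values along the lines mentioned in the lecture, not authoritative ones):

```python
from math import sin, cos, pi

def filter_step(old_pose, motor_ticks, ticks_to_mm, robot_width):
    """One step of the motion model. old_pose is (x, y, theta);
    motor_ticks is (left, right) encoder ticks since the last step."""
    x, y, theta = old_pose
    left = motor_ticks[0] * ticks_to_mm
    right = motor_ticks[1] * ticks_to_mm

    if right == left:
        # Straight motion: heading unchanged, move by 'left' along the heading.
        return (x + left * cos(theta), y + left * sin(theta), theta)

    # Curved motion: turn angle alpha and radius of the left wheel's arc.
    alpha = (right - left) / robot_width
    radius = left / alpha
    # Center of the turn, offset from the position by radius + width/2.
    cx = x - (radius + robot_width / 2.0) * sin(theta)
    cy = y + (radius + robot_width / 2.0) * cos(theta)
    theta_new = (theta + alpha) % (2.0 * pi)
    x_new = cx + (radius + robot_width / 2.0) * sin(theta_new)
    y_new = cy - (radius + robot_width / 2.0) * cos(theta_new)
    return (x_new, y_new, theta_new)

# Example call with illustrative calibration values (ticks-to-mm factor, robot width).
print(filter_step((0.0, 0.0, 0.0), (100, 120), 0.349, 150.0))
```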
SLAM_Lectures
SLAM_B_05.txt
and now the next step is to use the correspondences in order to estimate the transformation, and this is so trivial that I implemented it already for you. It's called slam 5b, estimate wall transform. As you see down here in main, here's the loop over all positions, and this is exactly the same as before, and here's the difference: after getting the corresponding point lists, we estimate the transformation using estimate transform, and if that succeeds, then we correct the pose. This estimate transform is exactly the same function that you have implemented before, and correct pose is also a function which you implemented before. Apart from correcting the pose, we also use the estimated transformation to map the points from the scanner to their corrected positions. If you try to run this, make sure that you import the right files here: from your previous solutions you should have the functions estimate transform, apply transform and correct pose, which should be in the solution of slam 4d, and also get subsampled points and get corresponding points on wall, which are just in the previous file. So whatever names you gave to those files, you should put them here so that Python is able to find the imports. Now if you run this, it will produce the estimate wall transform text file. Let's open that, and as you can see here, the shape of our trajectory looks almost correct. All the points that you see here are the scanner's points after the transformation is applied, so we can immediately see that those points didn't move all the way down here as we expected, but still, the matching of the walls keeps the robot on track. It has some trouble down here where it turns very strongly; that's how our estimation tries to turn that back, there's some error here, and for example here we see that our transformed points do not really match the walls. So what happened? As you remember, we have set up the correspondences between the scan points and the wall points using the projection of the points onto the wall. So after one estimation the situation might look like that, because those equations moved the points down here, whereas those equations moved them to the left, and so in the end the final distances were from here to here and from here to here. So the problem is: moving it down makes those distances longer, whereas moving it to the left makes those distances longer. So the solution to this problem is: with point pairs selected according to the closest distance, we compute the best transformation, so this point has moved down here, this point has moved here, and so on; then again we find the closest points on the wall and determine a new transformation. This algorithm is called the iterative closest point, or ICP, because it iterates the two steps of finding point pairs and estimating the best transform. And this is the final program I want you to implement: it's slam 5c, ICP wall transform. It's actually pretty short. Make sure that in the header you put in the names of the correct files so that all those functions can be imported by Python. Then this is the function you'll have to implement, and once implemented, I promise you, it's much, much shorter than those comments here. Now let's have a look at the main function. Not much has changed down here: we compute the pose, get the world points, and now here we call this ICP transform, then we use the transform to correct the pose, and that's all. So let's go back to the function. It gets the points and the number of iterations, and here's already a plan of what you can do here. What makes it a little bit complicated is that we have to determine a transformation, and then we find new point pairs again, and then we again determine a transformation, but overall we have to return a transformation that is the concatenation of all those transformations that we did during all those iterations. So you set up the overall transformation as the identity, then you loop for a maximum number of iterations: transform the original world points given the current overall transformation, then assign the point pairs, determine the transformation, and concatenate the found transformation with the overall transformation that has been found so far. Then in the end you return the overall transformation. So this is the last assignment I had, please program this function now.
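The assignment itself reuses the course's own helpers (estimate transform, apply transform, the correspondence search on the wall), so the following is only a generic, self-contained illustration of the same iterate, match, estimate, concatenate idea: a point-to-point ICP in numpy with a closed-form 2D rigid alignment, not the course's exact code.

```python
import numpy as np

def estimate_rigid_2d(a, b):
    """Least-squares rotation + translation mapping points a onto points b (both (N, 2))."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    a0, b0 = a - ca, b - cb
    s = np.sum(a0[:, 0] * b0[:, 1] - a0[:, 1] * b0[:, 0])
    c = np.sum(a0[:, 0] * b0[:, 0] + a0[:, 1] * b0[:, 1])
    angle = np.arctan2(s, c)
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    trans = cb - rot @ ca
    return rot, trans

def icp_transform(points, reference, iterations=40):
    """Iterative closest point: returns the overall (rot, trans) aligning points to reference."""
    overall_rot, overall_trans = np.eye(2), np.zeros(2)   # start with the identity
    for _ in range(iterations):
        moved = points @ overall_rot.T + overall_trans    # apply the current overall transform
        # Assign each moved point to its closest reference point.
        d = np.linalg.norm(moved[:, None, :] - reference[None, :, :], axis=2)
        pairs = reference[d.argmin(axis=1)]
        rot, trans = estimate_rigid_2d(moved, pairs)      # best transform for these pairs
        # Concatenate: new overall transform = this step's transform after the old one.
        overall_rot = rot @ overall_rot
        overall_trans = rot @ overall_trans + trans
    return overall_rot, overall_trans

# Tiny usage example: a slightly rotated and shifted copy is aligned back onto ref.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
a = -0.05
pts = ref @ np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]]) + np.array([0.2, -0.1])
print(icp_transform(pts, ref))   # approximately a rotation by +0.05 rad and a small shift back
```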
SLAM_Lectures
PP_08.txt
so we will now do a final modification to our AAR algorithm and this is due to the following observation say you want to go from this start to this goal and you're driving on a parking lot for example and so here's another parked car and here's yet another car then as you see and as we expect from our algorithm those cars won't change the trajectory at all because our trajectory is only changed if it actually hits one of those obstacles but in real life it would be more natural that we get around this obstacle at a certain distance and also we try to get around this obstacle at a certain distance so that in general small distances to obstacles are avoided the same is true of course if this is some entrance where you would have to pass through and looks like this then of course you would expect that the trajectory tries to to drive in between those obstacles instead of almost scratching one of these while there is lots of space on the other side so how can we achieve this and we will use a potential function let me explain this to you using a real world example so say you go hiking in the mountains but actually you're not a friend of hiking in the mountains but are somehow forced by your friends or whatsoever and so they tell you we will hike from start to go in this mountain area and so you take a map and look at the contour lines and you see here is one mountain and there's another mountain and so you tell your friends as you know I don't like hiking in the mountains so going from as to G I will just go along this path down in the valley and I will never go above sea level so that's good for me so unfortunately your friends tell you that you didn't look care carefully enough at the contour lines and although s is at C Level and G is at C Level as well there's additional contour lines in the map here like this so say this is the 50 m contour line 100 150 200 250 and 300 so it is impossible to go from s to G when you want to pass between those two mountains and you want to stay at sea level so you tell your friends okay so if I have to stay sea level I'll just go around the two mountains but then they tell you well this would be a huge detour and so you say I would be willing to go a little bit above sea level if this avoids a huge detour and so your friends ask well how much of a detour that's a pretty inexact statement now being an engineer you of course are able to come up with a definition that is exact so you say these are all the levels I'm able to make so I love sea level and here I will never ever go at a height of 250 M or higher so this here is an absolute nogo area and below that I will use the following cost function well moving One Step at sea level costs me unit cost because of the amount of work of going forward for one step and from there on it gets harder and harder for me increasing linear with the height above sea level right so say here the additional cost is something around four so the overall cost would be five so you would say making one step at sea level costs me for unit cost but making one step at 254 m above sea level will cost me five unit costs and so here it is the perfect specification it is of course unclear if your friends will ever take you on this hike but nevertheless it is a perfect specification so which means that here at 255 M and above there is the absolute noo areas for you and since these no-go areas do not block the path from start to go entirely you can take part in the hiking and in fact this might be a possible outcome for the lowest cost path 
and note again this cost function is designed so that when you walk at certain height you incur this cost for every unit step you do so in real life this would mean for example that you suffer when you have to walk in thin air whereas one could think of other cost functions which only add to the cost if your step actually goes uphill but do not add any cost if you walk horizontally or go downhill so in our case every step adds to the cost now how does this fit with our algorithm well as you can imagine these values were chosen deliberately so in our obstacles array we so far used the value zero for free space and 255 for obstacle and with no value in between now the user interface will generate also values in between so for one part the algorithm stays the same when it encounters a value of 255 it will not enter the obstacle but if the value is smaller than 250 5 then it will add a value and the added value will be approximately between 0 and 4 and since we know that the value of obstacles is between 0 and 254 we will use obstacles ided by 64 right so for 256 this would be four so it is slightly below four for 254 now let's see what parts of the code have to be modified and you have to modify only a single line of code right so when you compute the cost which means the new cost and the new total cost you have to add to both of them the value of obstacles divided by 64 and you can of course experiment with other values here which then change the behavior of the algorithm but still make sure that you avoid obstacles all together if the obstacle value is 255 and astonishingly that's the only modification you'll have to do now before you implement this let me show you some of the very cool results you can obtain so let's start here and go to here and you can now also switch off the visited noes to make the result clearer now let's draw an obstacle and as soon as you release the mouse it will now show you the potential function in blue so the red area is still the absolute no-go area whereas the blue area just adds cost to the path now this blue area is computed by using a distance transform so the value decreases linearly with the distance to the red area so now if I start to touch the path with nonzero values of the potential function the path will try to avoid the blue area however if there is a second obstacle coming from above then as soon as this also hits my path you can notice that the path tries to go exactly in between those two obstacles so that is the actual Behavior we intended to have now let's do the parking lot example that we discussed earlier so you go from here to there and there is a Lar park car so it will avoid this and there is another large park car so it will avoid this as well so this is a more real world trajectory then what we would get if there is no potential function and indeed you can switch it off by clicking here you see without the potential function you still get the straight line and with the potential function it will avoid passing close to the obstacles let me show you something else so say again you have this wall coming from below and so it avoids the wall and you also have this wall coming from above and so it goes in between but now as those walls get closer it will suddenly get around the entire structure and this may be unusual at First Sight but remember in the hiking example it was also an option to get around the entire area of mountains just to avoid costs so it's just a tradeoff between this detour and the costs which you accumulate when going 
through these two obstacles and if this gets longer here then it reaches a point where it again uses the opening between the two obstacles now let me make another nice example say you're driving here along the street so this is the side of the street which you can't pass and you're looking for a parking lot and so you see here's a number of parked cars in one row here's the second row there's an empty spot and here is the next through right so you want to go from here to this empty spot and as you drive along here you suddenly see well here's the entrance to the parking lot so you can go over here and so your path plan immediately gives you a good solution if you switch off the potential function you see that this absolutely doesn't make sense but this here looks quite realistic so if there is a huge truck blocking your way here maybe heading even for your parking space then your plan will give you even a way to kind of overtake this guy and be the first in that spot at least if you have a car that manages to go between those two other cars in the end let me show you that you can also draw some kind of mountains and see the effect of this on your your plant path so if the mountains are as dense as that it will get around them if they are less dense it will find path between them and of course it will also find a path in between if there is no other options you can also switch on the visited nodes and and so you will notice how the search cost gets higher so here there are more nodes expanded when the nonzero values of the potential function start to touch the shortest path and now if I have pressure from above you will see the tradeoff very nicely between going through this narrow passage and going around the entire structure and this is actually a real world problem because the potential function may actually block narrow Solutions which are nonetheless good solutions by forcing a detour due to excessive costs so now go ahead and program this single line of code
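A rough sketch of this in Python: the array name obstacles, the 255 convention for hard obstacles and the division by 64 follow the lecture, while the distance-transform helper and the influence radius are assumptions about how such a potential could be generated, not the lecture's actual code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def add_linear_potential(obstacles, influence_radius=20.0):
    """Return a copy of 'obstacles' in which cells near a 255-valued obstacle get
    values that fall off linearly from 254 down to 0 with distance (the blue area)."""
    hard = (obstacles == 255)
    dist = distance_transform_edt(~hard)     # distance of every cell to the nearest hard cell
    potential = np.clip(254.0 * (1.0 - dist / influence_radius), 0, 254)
    out = np.maximum(obstacles, potential.astype(obstacles.dtype))
    out[hard] = 255                          # keep the absolute no-go cells untouched
    return out

# Example: a thin wall in an otherwise free grid.
grid = np.zeros((100, 100), dtype=np.uint8)
grid[40:60, 50] = 255
grid = add_linear_potential(grid)

# Inside the search loop, the single-line change discussed above is that each
# expanded neighbor (ni, nj) adds its potential value to the step cost:
#   new_cost       = cost + 1 + grid[ni, nj] / 64.0
#   new_total_cost = new_cost + distance((ni, nj), goal)
```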
SLAM_Lectures
SLAM_C_03.txt
So let's check it out. It ends at 200, 201 or 199, with a probability of 0.5, 0.25 and 0.25. And the previous 99 result may now, in the second step, move exactly 1 meter with a probability of 0.5, might also overshoot with probability 0.25, or undershoot with probability 0.25 and end up at 198 cm. And the same holds for the original result of 101 cm: it may undershoot, be exact, or overshoot. So the correct answer is of course five; I now have five possibilities. Now, in order to get those probabilities: these are the probabilities of my first experiment, whereas these probabilities here are the probabilities of my second experiment, given that I'm already in such a node, given the result of the first experiment. And so from Bayes' rule you know that P of A and B equals P of B under the condition of A, times P of A, so we just have to multiply the probabilities. Going this entire blue path here gives us a probability of 1/4 times 1/4, which is 1/16; going here then there gives us 1/4 times 1/2, which is 1/8; and going here then there gives us the same as here, which is 1/16. Now, going the first green path here, we will have 1/2 times 1/4, which is 1/8; going down here is 1/2 times 1/2, which is a quarter; going down here is the same as here. And finally for the right branch we get the same as for the left branch, which is 1/16, 1/8 and 1/16. And now, since for example here all those three possible paths lead to the same driven distance of 2 m, we have to add up those probabilities, and the same holds for the other values as well, so we end up with 1/16, 1/4, 3/8, 1/4 and 1/16. And what we applied here is total probability: we were interested in the final outcome P of B, which is the sum of P of B conditioned on A_i, times the probability of A_i, summing up over all i. And so, to answer our previous question, we now get a distribution like that: it is 1/16 and 4/16, then 6/16, 4/16 and 1/16. Now, to visualize the same thing using distributions: we started with a distribution like that, which was the unit pulse, and then we convolved it with a distribution like that, meaning we took that figure, multiplied it by 1.0 and put it here. And now, in the second step, we did once more the same thing: we used that figure, but now, since this value is 0.5, we multiply that figure by 0.5 and put it here, so this is 0.25; then we multiply the same figure by 0.25 and put it one to the left, meaning we put it here; and again take it a third time, times 0.25, and put it here. And then we just summed that up: so that's 1/16; that is 2/16 plus 2/16 from the blue one, meaning 4/16; now this is 4/16 plus 1/16 plus 1/16, which is 6/16; and so on. And so this is our final distribution. So in terms of programming, the only thing to do is: you have one input distribution and a second distribution, and you convolve those two distributions by multiplying this distribution with one value of the other distribution and placing the result at the corresponding position, then doing the same for the next position, and the next position, and so on, for all positions in this distribution, and it doesn't matter which one is which. And here's the code I want you to program. Here's the move function that you previously programmed, and here's the new convolve function that you should program this time: it takes two distributions and it returns the convolution of those two distributions. Down here in the main program, our arena is still the same, we still move three times by a step of 20, but this time we start with a distribution that is just a unit pulse located at position 10, and again we plot it. Now down here in the loop we don't just move the distribution, but we construct our move distribution, which is a triangle distribution with a half width of two, located at M, meaning for a move of M, which is 20 in our case, I set up the move distribution not centered on zero but centered on M. Then a convolution is done between the old position distribution and the move distribution, and the outcome is the new position distribution, and again we plot this. So now please implement the convolve function, and before you start let me just show you how the result will look. First of all, the robot is at position 10, it's a unit pulse here; then it moves for the first time by 20 and we get our triangular distribution; and then we move again and our distribution gets wider, so instead of three values we have 1, 2, 3, 4, 5 values; we move again and we have 1, 2, 3, 4, 5, 6, 7 values in our distribution. So this is the outcome you should expect.
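The course represents a distribution with its own Distribution class (essentially an offset plus a list of values); the sketch below uses a plain (offset, values) tuple instead, so it illustrates the convolution step rather than being a drop-in solution.

```python
def convolve(a, b):
    """Convolution of two distributions, each given as (offset, list_of_values),
    where values[i] is the probability of the position offset + i."""
    a_offset, a_values = a
    b_offset, b_values = b
    result = [0.0] * (len(a_values) + len(b_values) - 1)
    # Multiply all of b by each single value of a and add it in at the shifted position.
    for i, pa in enumerate(a_values):
        for j, pb in enumerate(b_values):
            result[i + j] += pa * pb
    return (a_offset + b_offset, result)

position = (10, [1.0])            # unit pulse: the robot starts exactly at position 10
move = (19, [0.25, 0.5, 0.25])    # triangle of half width 2, centered on a move of 20
for _ in range(3):
    position = convolve(position, move)
print(position)   # offset 67 with 7 values, i.e. a distribution centered on position 70
```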
SLAM_Lectures
SLAM_E_03.txt
now here's the second question remember the calman filter in our last unit so we had this Arena we had our landmarks and we had our uncertainty model by a covariance matrix and then we predicted and corrected and predicted and corrected and so on so now say initially we don't know our Precision so we just say it's a very wide Gauss say this would be one Sigma where the center is in the center of our Arena so the question is would our cman filter deliver correct results using such an initialization so a yes B no due to the large variances that are involved here or C no due to Landmark assignment problems
SLAM_Lectures
SLAM_B_01.txt
welcome to Unit B of our slam lecture and this will be about using sensor data or measurements to improve the robot state so if you remember the last time we did two things first of all we computed the robot's trajectory so we used the motor signals to determine when the robot goes straight and how fast it goes straight and when it turns and so from those motor tis we computed this trajectory in the real world and the second thing we did was we used the laser scan data so these 600 60 values here and we programmed an algorithm that finds the cylinders in the scene and so if you load all that into the viewer the viewer automatically projects this cylinder which is to the right of the robot to here in the real world and the next cylinder to here and so on for all of the six cylinders so in the beginning the extracted cylinders match very well with the landmark cylinders which are depicted as those gray discs but as the robot turns the radius of the curve is too narrow and so after a while if we go on like that we obtain this situation and as you can see here the robot detects three cylinders in the scene and it draws them here however if you look at the matches then it's most probable that those cylinders correspond to those landmarks in the real world and so our plan is now to find out that this point actually corresponds to that one this to this one and this to that one after we know this correspondence then we can shift not only this triangle to its correct position but also since our robot's position is connected to this triangle then by moving this triangle down here and turning it correspondingly we will also find out the true position of our robot so the first thing we'll have to do is for a given robot position and extracted landmarks compute the positions of the landmarks in the real world what you see here is computed by the viewer but since we will need the landmark positions in the real world for our further algorithms we will have to compute that ourselves and I programmed that code already for you so make sure you download the files for the unit B of our slam lecture and then locate and open the file slam for a project landmarks let's just go through the code so as usual I import from Lego robot and then I've made another thing I have put some of the functions from Unit A into a library which is just a normal python file which you'll find in the unit B directory so the filter step compute derivative fine cylinders and compute cartisian coordinates this is all routines we made during our last unit so the actual code is quite simple this function detects the cylinders in the scan and converts them to cartisian coordinates in the scanner's coordinate system and so what is returned here is just a list of tups which contains every cylinder that was found in the scan then there's a second function here WR cylinders and basically this just writes a list of of points to Output file now let's have a look at the main function so in the main function I set all the constants we need so we had a few for the movement step which were the scanner displacement the conversion factor to mm and the robot's width and with a second set of thresholds which we needed for the extraction of the cylinders in the scan and then here the start pose is set and those values were just given by me and I obtained them by measuring the approximate position as given by our overhead camera then I lo the lock file I load the motors I load the scan and then here is the main Loop the main Loop first computes the pose 
from the previous pose using the motor tis from the lock file and our thresholds so we start with this pose here and then in every step of the loop we compute an updated pose by using the motor tis then we use the function above to compute the cylinders in the scanners coordinate system and then we just project them to the world coordinate system it uses a list compreh ention to build a new list from the cartisian cylinders by transforming them using the scanner to World transformation so this transformation takes an XY point and takes the current pose and then transforms this point using that pose to the world coordinate system and it is a convenience function that is located in the legol lock file class and finally we print out all the results we print out our position our updated posst we print out the detected cylinders and we also print print out our world cylinders which we have just computed so the loop consists of only two things it computes the new pose and then it extracts the cylinders and using the pose IT projects them to the world coordinate system so after you run this it will generate a project landmarks. text file so open up the lock file viewer and load this file and now you will see the same scene as before we can travel along the trajectory and for every pose you see the landmarks which are projected from this canner coordinate system to the world coordinate system and this is the very same that we had before the only difference is that those points now were computed by our code and not by the viewer so the next thing we'll have to do is to assign the extracted cylinders to the reference cylinders that is we want to associate the cylinders from our measurements to our map so let's say this is reference cylinder number one 2 3 and so on whereas our measurement extracted the magenta cylinders 1 2 3 and four so we want our function to find the closest reference cylinder to an extracted cylinder and then output the result as a list of tupal so that if cylinder number one gets associated with landmark number two we will generate the Tuple one two and the same for cylinder number two and so on for all the other detected cylinders which are close to a landmark cylinder and as you remember we're also having situations like this so detected cylinder is far away from the reference cylinders and so we don't know if we should assign it the upper or the lower cylinder here and in order to prevent wrong assignments we Define a rejection radius and so if no reference cylinder is inside this rejection radius then we won't assign any reference cylinder to our measured cylinder so and here's your programming assignment it's called slam 4B find cylinder pair and there's only one function you'll have to implement which does exactly what I've just explained so it gets a list of cylinders and a list of reference cylinders and the maximum radius this is the rejection radius and it outputs a list of pairs and every pair contains one Association of a cylinder I with a reference cylinder J the remaining function is essentially the same as before so in the main function the pose is updated the cylinders are extracted and transformed into the world coordinate system and then the first new thing is here the call to your new fine Cinder pairs function which Returns the pairs then just to visualize the result I have made some modification down here where I output every reference cylinder which had a closest extracted cylinder in the scan so after you run this it will produce a fine cylinder pairs. 
text file. Load this into the log file viewer and you will see the following: at position number zero, six landmarks were extracted from the scan and they all had a closest neighbor in the map, and as you move along the trajectory you can see those landmark assignments. Then, as we turn here, you can see that we lose those assignments, and our strategy to always match the closest point does not work correctly. So our hope is that, as we move along the trajectory, if we always match our extracted landmarks to the map and we use that information to correct our position, then we will never lose our true position in the real world. So now please program the landmark assignment.
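A sketch of how this assignment function could look (Python; the exact signature expected by the course file may differ slightly, and the coordinates in the example call are made up only to show the output format):

```python
from math import hypot

def find_cylinder_pairs(cylinders, reference_cylinders, max_radius):
    """For every detected cylinder, find the closest reference cylinder and keep
    the pair (i, j) only if that reference lies inside the rejection radius."""
    cylinder_pairs = []
    for i, (cx, cy) in enumerate(cylinders):
        best_dist, best_j = max_radius, None
        for j, (rx, ry) in enumerate(reference_cylinders):
            dist = hypot(rx - cx, ry - cy)
            if dist < best_dist:
                best_dist, best_j = dist, j
        if best_j is not None:
            cylinder_pairs.append((i, best_j))
    return cylinder_pairs

# Made-up world coordinates (in mm), only to show the output format.
print(find_cylinder_pairs([(1000.0, 500.0), (4000.0, 4000.0)],
                          [(1100.0, 480.0), (2000.0, 2000.0)],
                          max_radius=300.0))
# -> [(0, 0)]   the second detection has no reference within 300 mm and is rejected
```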
SLAM_Lectures
PP_07.txt
And this can be seen as follows. If you start here and you want to go to here, then the algorithm will proceed walking towards the goal. But now say there is an obstacle which looks like that; then the algorithm will still try to walk towards the goal, and indeed the distance to the goal is always decreasing. Now if this obstacle goes on like that, then, while the distance is always decreasing, the algorithm will walk a long way until it reaches the goal, whereas the correct solution would have been to go backwards only once for a little bit, and then the algorithm could have walked directly to the goal, which would have been considerably shorter. Right, so you see, our Dijkstra algorithm uses g, the known distance from the start to n, whereas our greedy algorithm uses h, an estimate of the distance to the goal; Dijkstra expands many nodes whereas the greedy algorithm expands fewer nodes; however, the Dijkstra algorithm is guaranteed to find the optimal solution, whereas the greedy may deliver good results in general but is not guaranteed to find the optimal solution. And now we'll do something interesting: let's just define a new cost function, and we'll just take the g which we had in Dijkstra's algorithm and we'll just add the h which we had in our greedy solution. And very astonishingly, using the sum of those two functions, we'll get the following properties: it will usually expand fewer nodes than the Dijkstra algorithm, and, this I find very astonishing when you see it for the first time, it is guaranteed to find the optimal solution. So, very interesting: you just add a function which gives us the exact distance from the start and another function which is just an estimate of the distance to the goal, and although taking h alone does not give me an optimal solution, the sum of those two functions is guaranteed to provide the correct solution. And this algorithm is pretty famous, and it's called the A* algorithm. So now, at any point in the algorithm, when you want to go from the start to the goal and we have to decide if we should add this node, then we'll have to compute the value of the function f, which is g plus h. So the g function, that is the cost of the path back to the start, and we know that exactly, whereas the h function is an estimate of the remaining distance towards the goal. And I should add the following: the h function must be admissible, which means that the actual cost from n to the goal must be larger than or equal to h, the estimated cost. Right, and therefore the direct line distance is a useful admissible estimate, because the direct line is always shorter than or equal to the real distance between n and G. But you can find such an estimate for the remaining cost to the goal for other applications as well. For example, if this graph spans all the states of a chess game, where the start situation is the initial setup of the figures and the goal situation is that player A wins, then you can probably show a certain board state to an expert chess player, and he just looks at it and will tell you, well yes, that is checkmate in ten moves. And so, by using his expertise, he can give you an estimate of how far it is to reach the goal state that player A wins, and if he underestimated the opponent, of course, it may take longer, but in order for A* to give optimal answers, it shouldn't take shorter. So this is important: h must be admissible. Now let me give you a little bit more insight into this function f equals g plus h. So again, let's have the situation: this is our set of visited nodes and we're about to expand this node, and so we have this cost made up of g, the known cost from the start to node n, and the estimated distance h. Now, we know what g means: g, that is the known cost from the start to n. And we also know what h means: that's the unknown but estimated cost from n to G. And we know that f is the sum of g plus h, but what does it mean? Well, it's the minimum cost of a path from start to goal which goes through n. Why? Well, this is pretty easy to see, because if the path goes through this node, then I know for this part of the path it has cost g, and for the remaining part it has at least cost h, so the sum is the minimum of the cost I will have from S over n to G. Now let's also have a look at how we have to change our implementation. So, as you remember, in our front list we so far had tuples consisting of the distance, which is g, the node itself, and the previous node, and by placing g as the first component of this tuple we achieved that those tuples are in the order of increasing cost when being sorted, and we used that fact when we pop the root element of our heap, because then we know this will be the element with the smallest g. So now we have to order our elements according to f, and in order to make that work with our current implementation, we just put f in front, in the heap. Now, here's the code in which we'll modify Dijkstra's algorithm to obtain the A* algorithm. Let's go down to the code. We now have an additional function, a helper function, which we need to compute the Euclidean distance between two points, and as you'll expect, we need this function to compute the direct line distance to the goal. Now there's only a few modifications to be done in the main part of the algorithm. The first change will be that, since our tuples now should have four elements instead of three, you will now have to add a component here as the first component of this tuple, and what is called here the total cost, that is actually f, so that is the cost from the start to the node plus the estimate from the node to the goal, and when entering the start node this total cost is of course just the distance between the start and the goal node. The next change is here, and I have already done this for you: since the tuples now have four elements, I have to assign the tuple to four variables, now including the total cost, so leave that as is. And the only other change is down here: you will have to provide those four elements which you push on the heap, instead of the three which you had previously. So compute the new cost as usual, but now add the distance from the new position to the goal to this new cost, to obtain the new total cost as well, and make this the first component in your four-tuple, and that's all there is to do. So mainly modify the start node tuple and the code which pushes the neighbors of a node. So go ahead and implement this. And let me give you a teaser of what you will see after you finish the implementation, because now it's really cool to play with the interface. So set the start node here and the end node there, and you'll see it found the path immediately, and you will also see it didn't expand any nodes except for those along the path from the start to the goal. Now, if I start to block this path, it will expand more nodes, but as you see, it is not expanding very many nodes in the wrong direction. So if you remember the example we had earlier with the dead end, then you'll remember that, when the situation looked similar to this one, the Dijkstra algorithm produced a huge number of nodes expanding here, whereas the A* only spent a small number of nodes, just a narrow passage around
this corner here. and in fact if I put the end node rather here, then it will expand only a single line of nodes. here there's also something else which is interesting: if I place the node here and there are no obstacles, a-star will expand all those nodes, however if I place it here it will not expand any unnecessary nodes, also if I place it here or if I place it diagonally. why does it expand many more nodes if I put it here? the problem is that due to our simplistic distance function on the grid, the length of the path from the start to the goal which we find eventually is longer than the direct line distance, and so since our direct line distance underestimates the true distance, this costs us some extra expansions of nodes. however in this situation our simplistic distance measure along the grid is identical to the direct line distance, and so there is no expansion of unnecessary nodes. so overall you see now that the algorithm is much faster and you can use it at interactive speeds on a normal computer, so you can think of complicated structures, in fact even random-like structures, through which the algorithm will find a path
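The modification the lecture describes — pushing four-tuples ordered by f = g + h onto the heap and using the straight-line distance as an admissible h — can be sketched as follows. This is a minimal grid a-star in Python and not the course's exact code: the 4-connected neighborhood, the unit step cost and the helper names are illustrative assumptions.

from heapq import heappush, heappop
from math import sqrt

def distance(p, q):
    # Euclidean (direct line) distance; admissible because it never
    # overestimates the true path length on the grid.
    return sqrt((p[0] - q[0])**2 + (p[1] - q[1])**2)

def astar(start, goal, obstacles, width, height):
    # Each heap entry is (f, g, node, previous): f = g + h keeps the heap
    # ordered by the estimated total cost of a path through that node.
    front = [(distance(start, goal), 0.0, start, None)]
    came_from = {}
    while front:
        f, g, node, prev = heappop(front)
        if node in came_from:
            continue                       # already expanded with a smaller f
        came_from[node] = prev
        if node == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and not obstacles[nxt[0]][nxt[1]]
                    and nxt not in came_from):
                new_g = g + 1.0
                heappush(front, (new_g + distance(nxt, goal), new_g, nxt, node))
    if goal not in came_from:
        return []
    # Reconstruct the path by walking the 'previous' links backwards.
    path, n = [], goal
    while n is not None:
        path.append(n)
        n = came_from[n]
    return list(reversed(path))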
SLAM_Lectures
SLAM_B_04.txt
let's review what we did so far so starting from scan with many many points we took the approach to identify a certain subset of points namely those close to large jumps in the range values and from that computed positions of cylinders assuming that those combinations of a falling and rising edge belong to cylinders in the real world by doing so we assign the meaning to those points so whereas those points here are just XY coordinates those points are basically objects and then later we assigned those cylinders to a map which may mate by hand before starting our experiment and using that assignment we updated or corrected the position of our robot because we used the cylinders those features and not the original raw measurement data this is also called a feature based approach and this is called a feature based map so now the good thing with that approach is that instead of having to deal with 660 measurements which don't carry any meaning we can now deal with a subset of in that case only five meaningful objects so this makes the method efficient and also since these are meaningful objects assigning them is a relatively reliable operation but there's a drawback and we encountered that already so later on when we drive along our trajectory we will only have one single cylinder in our scan and so our assignment strategy doesn't work whereas if you look here there is still many points they don't have a meaning but we probably could match them somehow to a wall and so the major drawback is by concentrating on features or high level objects which are relatively rare in the scene we have less work to do in assigning those objects and the assignment is relatively reliable but since they are so rare occasionally there may be no objects in the scene at all like in this case and then even though we are having lots of measurements we may get lost now let's think about this wall idea once more so assume that when we have built this robot arena we have made a CAD model so we know the dimensions and let's put a wall around the entire array now to comply with modern university safety regulations so think about our robot is here and it's kinds all those values whatsit slider and maybe the here there's a detected poll and here there's the reference poll so I approach so far boss we obtained the correspondence by using this correspondence we estimate the transformation which is needed to align all the cylinders and in this case there's only one cylinder so that doesn't work now think about the ball now these measurements here they're actually lying on the wall but I don't know where because these are just points and not features they don't have a meaning and I don't know their corresponding partners on the wall but now let's just do the following if I just find the closest point on the wall and then I say this is a corresponding pair of points exactly as this was a corresponding pair of points then this correspondence would track my robot in that direction as we see here the correct correspondences would be rather something like that because from that point we see that we would have to shift the entire measurement in that direction so but now if I have lots of correspondences in that way and also lots of correspondences in that way then my hope is that all those will in some tear the robot towards here whereas those will tear it towards here so in sum I helped to get the correct transformation and as in the previous approach we will have two steps first of all find the correspondences and then estimate the 
transform so note that after we have set up such a pair of points this approach is not fundamentally different from our previous feature based approach of setting up a pair of cylinders so now how do we find the correspondences so in general this is not an easy problem because if my lighter tells me that it hit an object here then I would have to scan my scene for objects which may have been hit by the lighter and then assign the closest object but in our case things are much simpler because our arena consists of four axis parallel line segments in the left right and the top and bottom whenever there is a ray hitting a point at why we can do the following because the left wall of our arena is at x equals zero we may just test if this X is smaller than a given threshold meaning this point is inside this band and if this is the case we define this point being the corresponding point and this is very easy because this point has coordinates 0 & 1 the same holds here if X minus right smaller and threshold right is this value here then this partner will be right and Y and so we will get all the pairs for the estimation of our transformation now instead of using every single beam we will do a subsampling so we will only look at every and beam so that our matching algorithm is faster so I want you to implement this and the offline version is slam 5 a fine wall pairs so the only function you'll have to implement is this get corresponding points on wall so this function receives a list of points and it has the dimensions of the arena as default parameters and you can see here the arena is exactly 2 times 2 meters and there's this distance threshold being 150 millimeters or 15 centimeters now the task of this routine is for every point in points check if it is within this band of the left wall the right wall bottom ball or the top wall and if so at the point itself to the left list and its partner on the ball meaning the point with the modified coordinates to the right list and then return both lists so my implemented I checked for the left and right bottom and top ball in that order and when I had a match for the left wall I didn't care about possible other matches now let's have a look at the rest of the program so I told you we will use a sub sampling of the scan and this is done here that's already implemented then let's have a look at me main function is pretty short now we don't have those thresholds for cylinder detection anymore here we have our filtering loop we compute the pose as usual then we get the sub sampled points off the current scan and convert those to world coordinates then there using this matching route you'll have to write to get last and write pears and in the end the output a list which contains both the left and right list so what we write out is for every point that has a closest match on a wall you write out the point and the match now before you start let's have a look at how the results will look like so your code will produce a fine wall pairs text and if you open this you will see the following here's a robot in its start position and at that position looking down there it will scan almost the entire arena so we won't use all the scan points but we have this subsampling so what you see here is a set of subsample points from their current scan notice you down here is the corresponding points which are on the bottom wall and similarly you see here the corresponding points on the Left wall so as we go through the scene you see as we start to curtain there are still 
correspondences here, but our distance threshold will omit point pairs which have a larger distance than 15 centimeters. you can see this nicely here: of all those points, only those correspondences remain. so now please implement this
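A minimal sketch of the wall-correspondence routine described above, assuming a 2000 mm by 2000 mm arena with axis-parallel walls and a 150 mm distance band. The function name follows the spoken description; the parameter names and the use of an absolute-value band test are illustrative assumptions.

def get_corresponding_points_on_wall(points,
                                     arena_left=0.0, arena_right=2000.0,
                                     arena_bottom=0.0, arena_top=2000.0,
                                     eps=150.0):
    left_list, right_list = [], []
    for x, y in points:
        # Test the four walls in a fixed order; the first match wins.
        if abs(x - arena_left) < eps:
            wall_point = (arena_left, y)      # project onto the left wall
        elif abs(x - arena_right) < eps:
            wall_point = (arena_right, y)     # right wall
        elif abs(y - arena_bottom) < eps:
            wall_point = (x, arena_bottom)    # bottom wall
        elif abs(y - arena_top) < eps:
            wall_point = (x, arena_top)       # top wall
        else:
            continue                          # not close to any wall
        left_list.append((x, y))
        right_list.append(wall_point)
    return left_list, right_list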
SLAM_Lectures
SLAM_F_01.txt
now welcome to unit F and this will be about simultaneous localization and mapping which is the topic that gave the slam lecture its name now let's have a look at what we did so far so here's our robot at a given position having a certain orientation and we also know there is an error associated with position and orientation which we expressed as an error ellipse and a dis segment both depicting plus minus one Sigma now our robot moved and the inaccuracies in the movement in used an increase in error for both the position and the orientation and in the filter this was also called the prediction and we noticed if you go on like that we will have an ever increasing error ellipse but fortunately we had some landmarks in the scene with known locations and by measuring the distances to those locations we were able to correct the position of the vehicle and after that correction we also obtained a smaller uncertainty for our position and heading and this here was also called the correction step and now for this correction step it was essential that we have those landmarks which lead to those measurements which in turn give us a better estimate for the position and decrease our uncertainty now for the implementation we did so far it was essential that we knew the coordinates of those landmarks in advance and if these were available we solved the problem using for example the extended common filter approach or the particle filter approach now what happens if you don't know the landmarks in advance which is actually the usual case because it is very hard to get hold of cadastral maps or floor plans and even if you manage to do so you will often notice that they are not very useful for localization because the buildings have been built in a different way or they have been modified later on or there are so many additional items like chairs and tables that are not part of the floor plan but make up a huge portion of what the robot sees so now let's think about the following I put my robot somewhere and since I don't have a map of landmarks I will have to produce that map on my own so since I start with an empty map I can Define the robot's position to be in the origin and the orientation to be along the x-axis and I can pretend the uncertainty in position and heading to be zero now the idea is as follows sitting there the robot sees some landmarks and so as in our case the laser scanner measures both the bearing angle as well as the distance I can use those measurements to Define The Landmark positions in the real world meaning in the world that has its origin in the start position of the robot and which is oriented along the heading of the robot at the start position and now after having defined those landmarks I will do the very same as in the previous case so the robot moves along it increases its uncertainty but then it also measures the bearing angle and distance to the landmarks which just defined and using those measurements I can correct the robot's position and orientation and thus I will also get a smaller error ellipse and a smaller uncertainty in heading now this looks simple instead of taking the landmarks from a map which was optained by some external means I do the following whenever I see a landmark for the first time I determine its position relative to the map which I currently build up and I enter this as a new Landmark which then can be used to subsequently for the localization of my robot at all subsequent positions where the landmark is also in the field of view of the robot's 
lidar. now it is not exactly as easy as that, because when we see the landmark for the first time our measurement of the bearing angle and the distance induces an error in the landmark position as well, and later on, saying that at this position I see this landmark for the first time, I am not only having the angle and distance error but I also have the error in the current position of the robot, which add up to an even larger error in the position of the landmark. however if I move on I will observe this landmark once again, and so having multiple observations the error in the landmark's position should also decrease. and so what becomes clear is I cannot simply put a landmark into my map when I measure it for the first time and then assume that the landmark position is correct. I can only put it into my map with a certain error, and then as I move on and measure that landmark over and over again, update this error, just in the same way as I am updating the position and orientation of my robot. so previously in our extended Kalman filter implementation our system state included the x position, y position and theta, the heading angle of our robot, and now, doing the slam version of the extended Kalman filter, we have the following: we still have the robot's x, y and theta, but we also have the x and y of landmark 1, landmark 2, landmark 3 and landmark 4, so these also become part of our state vector. what you see now immediately is our state vector doesn't have constant size anymore, but for each landmark which we observe and which we didn't observe before, our state vector will grow by two elements. now let me ask you the following question. say our robot starts in 0 0 with heading zero, but I feel uncomfortable with the idea of assigning a zero error to the position and orientation, and so I just give it a large error in position and heading. now the robot will observe some landmarks, and since the position error is large, the error ellipses for those landmarks will be large too, but now as I move I will observe those again and so all those error ellipses will get a bit smaller. and now the question is: let's say I move for a really long time in this area, observing those landmarks over and over again, so my hope is that those error ellipses will get smaller and smaller until they are really really small, so that the landmarks are not uncertain anymore, so that the situation is equivalent to what we had in our earlier extended Kalman filter approach where the landmarks were assumed to be error free. so what do you think: will the error of the landmark positions go down to zero as the number of measurements by my robot goes to infinity
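To make the idea of a growing state vector concrete, here is a small illustrative sketch (not part of the course code) of how the EKF-SLAM mean and covariance could be extended by two entries when a landmark is seen for the first time. The very large initial landmark variance is an assumption that stands in for "position still unknown"; subsequent correction steps would then shrink it.

import numpy as np

def add_landmark(mu, Sigma, landmark_xy, initial_variance=1e10):
    # mu: current state [x, y, theta, l1x, l1y, ...]; Sigma: matching covariance.
    n = len(mu)
    mu_new = np.append(mu, landmark_xy)
    Sigma_new = np.zeros((n + 2, n + 2))
    Sigma_new[:n, :n] = Sigma
    # A huge variance encodes that this landmark's position is still unknown.
    Sigma_new[n, n] = Sigma_new[n + 1, n + 1] = initial_variance
    return mu_new, Sigma_new

mu = np.array([0.0, 0.0, 0.0])      # robot starts in the origin, heading 0
Sigma = np.zeros((3, 3))            # ...with (pretended) zero uncertainty
mu, Sigma = add_landmark(mu, Sigma, (1200.0, 430.0))
print(mu.shape, Sigma.shape)        # (5,) (5, 5): the state grew by two elements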
SLAM_Lectures
SLAM_D_16.txt
so when you run your python script it will generate this common prediction. text now open this in the lock file viewer and you will see our well-known trajectory which is no surprise because we now implemented just the prediction step which is not different from what we did before however in addition to Computing the state we now compute The covariance Matrix and so the 2 * 2 submatrix for the position is visualized here as an arrow ellipse now in our start position this is a circle and this is because we have set the standard deviation to be 100 mm in both axes and we have also set the standard deviation in the heading angle to be 10° which is reflected by this plus - 10° circular Arc depicted here now as we move along you can see that the aror ellipse gets larger and also our uncertainty in heading quickly grows and how fast this grows has to do with our parameters which we called control motion factor and control turn factor and the way the arrow ellipse expand is in accordance with what we expect namely after we drove for a while our uncertainty along the trajectory is smaller than our uncertainty perpendicular to the trajectory because small differences in the left and right control will lead to a curve instead of a straight line and so the robot may be far off the trajectory although the error in the left and right control is relatively small and now if you go around this curve here our error ellipse grows even faster and it changes is the orientation of its main axis and in the end the aror ellipse is really large reflecting the fact that after we drive for a while without having any measurements the position errors will accumulate this also holds for the heading angle which ends up being plusus 135° and so what we need now is additional measurements so now let's have a look at the correction step so as you remember our assumption was that our measurement CT has a functional relationship with the state which can be expressed as a linear equation and from that we develop the equations which involved setting up a common gain and Computing our new mu and sigma as follows so that our mu after incorporating the measurement is obtained by the predicted me plus the common gain times The Innovation whereas our co-variance is the identity Matrix times our predicted co-variance minus the Caron gain time c times the predict co-variance so it is smaller than the predicted co-variance so now in general we do not have a linear equation here so this equation will be rather CT is some function which is nonlinear of XT and so in the extended colon filter we handle this case similar to the nonlinear transition function so our new mu is defined as the predicted mu plus the common gain and this is still exactly the same formula times our measurement minus and here's the difference function h of mwt and so instead of subtracting the linear transformed Mt I now subtract the nonlinear function H applied to my predicted mewt and so what happens to the covariance Matrix well this is the same however here I don't have a matrix C anymore so similar to The Case of the transition Matrix I multiply here by the Jacobian of my measurement function and multiply this with the predicted covariance Matrix so the differences are first of all my measurement equation now uses a possibly nonlinear function and all the occurrence of C have to be replaced by H and so H is the Jacobian of the function H which is all partial derivatives of H with respect to the state now let's have a look at this function which defines the 
relationship between our state and our measurements. now let's have a look at this relationship. so our robot is somewhere in the global x-y plane with a heading of theta, and now unfortunately our laser scanner does not coincide with the robot center, so here we have a displacement d. we will denote that as x_l, for laser, and y_l, and so it's easy to see that (x_l, y_l) equals the position (x, y) of the robot plus the displacement times the unit vector in the direction of the heading. so now assume there's a landmark which is seen by the laser scanner of our robot, and what the laser scanner gives us is an angle alpha relative to the orientation of the laser scanner, which we assume is mounted in such a way that it coincides with the orientation of our robot, and so apart from the angle the laser scanner of course also gives us the distance. let us say this point has the coordinates x_m and y_m, for map. it's easy to see that r is given by the distance between these two points, so it is (x_m - x_l), the position of the laser scanner, squared, plus (y_m - y_l) squared, and we'll have to take the square root, whereas alpha, that is the arc tangent of (y_m - y_l) / (x_m - x_l). now if you have a look here, this gives you actually this angle, and so we'll have to subtract this angle, which is theta. and so these are the equations we were looking for: this is z, our measurement, which is a vector consisting of r and alpha, whereas this is our function h, which depends on the position x, y and theta, and it also depends on the coordinates of our landmarks, however as our map is fixed they become constants in our measurement function h. now we have to compute the Jacobian matrix H, similarly to what we did for G, and now let's first do the partial derivatives of r with respect to the state. I'll write that down again here. so in order to compute the derivative of r with respect to x, we have to take the derivative of the square root, and this is 1 / (2 * the square root) times the derivative inside the square root. so as we need the derivative with respect to x, this is zero, but the derivative of this is nonzero, so it is 2 * (x_m - x_l) times the derivative of this with respect to x. now remember x_l is x + d * the cosine of theta, and so this minus x_l gives us a minus x, the derivative is minus one, so this cancels out, and we will also introduce q as being the term below the square root, so we can write this in a more compact notation as minus (x_m - x_l) / square root of q, and similarly the derivative with respect to y is minus (y_m - y_l) / the square root of q. and so it gets more complicated with the derivative with respect to theta. first we obtain the derivative of the square root, times the partial derivative of this with respect to theta, which is 2 * (x_m - x_l) times the derivative of minus x_l, which is minus, but then here the derivative of the cosine is minus sine, so we'll get a minus minus, so it's d sin theta, and for the other part we will get 2 * (y_m - y_l), now here we have y_l = y + d sin of theta, so the derivative of the sine is the cosine, but since we form the derivative of minus y_l this minus remains. so in a more compact form, this cancels out, and this is d / the square root of q * ((x_m - x_l) sin theta - (y_m - y_l) cos theta), and when you implement this you might use something like delta_x = x_m - x_l and delta_y = y_m - y_l, so you can easily express those equations as minus delta_x / square root of q, minus delta_y / square root of q, and d / square root of q * (delta_x * sin theta - delta_y * cos theta). so that was the first part, now let's do the second part. now we need the partial derivative of alpha with respect to the state, and remember alpha was the arc tangent, so the derivative of
alpha with respect to x is the derivative of this, and remember the derivative of the arc tangent is 1 / (1 + x^2), so this is 1 / (1 + this squared) times the derivative of that. that's the derivative of a quotient, so we'll use the quotient rule, which is minus 1 / the squared denominator times the numerator, where the derivative of the numerator is zero because it is with respect to x, but we still have to use the chain rule for the derivative of minus x_l, and so we get an additional minus one. and so this cancels out, and this multiplied by that is (x_m - x_l)^2 + (y_m - y_l)^2, so this is (y_m - y_l) / q, or delta_y / q, and similarly we get for the derivative with respect to y minus (x_m - x_l) / q, which is minus delta_x / q. and so finally the derivative with respect to theta, again this is more complicated and it's easy to forget this minus one here which results from the minus theta, so this is the same as minus d / q * (delta_x cos theta + delta_y sin theta) - 1. so now in the last two slides we collected all the six expressions we need to compute the derivative of h with respect to the state. so once again we'll have to compute a derivative, this time for the measurement function h, and I prepared slam 7e measurement derivative question for you, and it follows the pattern that you have seen earlier when we did the derivatives for the state transition. so here's the function h: it computes dx, dy and from there the radius and the angle, but there's two specialities. first of all we use the arc tangent 2 function of delta_y and delta_x, and then after subtracting the theta we normalize everything to be in the range minus pi to plus pi. and here's the function you'll have to implement, so you'll have to compute those six derivatives and put them into a 2 * 3 matrix, and down here we do the usual thing: using h, the function, we compute the numerical derivative, then here we call the function that you've just programmed, and again we form the difference, and we output true or false depending on whether your solution is sufficiently close to the numerical solution. so now please program this
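For reference, the six derivatives collected above can be assembled as follows. This is a hedged sketch, not necessarily identical to the course's solution: the function names and the exact way the bearing angle is normalized are assumptions.

from math import sin, cos, sqrt, atan2, pi
import numpy as np

def h(state, landmark, scanner_displacement):
    # Expected measurement (range r, bearing alpha) of a landmark from a state.
    x, y, theta = state
    x_l = x + scanner_displacement * cos(theta)   # laser scanner position
    y_l = y + scanner_displacement * sin(theta)
    dx, dy = landmark[0] - x_l, landmark[1] - y_l
    r = sqrt(dx * dx + dy * dy)
    alpha = (atan2(dy, dx) - theta + pi) % (2 * pi) - pi   # normalize to [-pi, pi)
    return np.array([r, alpha])

def dh_dstate(state, landmark, scanner_displacement):
    # 2x3 Jacobian of h with respect to the state (x, y, theta).
    x, y, theta = state
    d = scanner_displacement
    x_l = x + d * cos(theta)
    y_l = y + d * sin(theta)
    dx, dy = landmark[0] - x_l, landmark[1] - y_l
    q = dx * dx + dy * dy
    sqrtq = sqrt(q)
    dr_dx = -dx / sqrtq
    dr_dy = -dy / sqrtq
    dr_dtheta = d / sqrtq * (dx * sin(theta) - dy * cos(theta))
    dalpha_dx = dy / q
    dalpha_dy = -dx / q
    dalpha_dtheta = -d / q * (dx * cos(theta) + dy * sin(theta)) - 1.0
    return np.array([[dr_dx, dr_dy, dr_dtheta],
                     [dalpha_dx, dalpha_dy, dalpha_dtheta]])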
SLAM_Lectures
PP_10.txt
to see what happens let us compare our previous version of AAR to our new implementation using the kinematic State space of the vehicle so in our previous implementation we had a roster of cells so when we were in one cell we could explore all the other cells and check if one of those neighbors has to be put into our front then the next step for example exploring this note we had a look at that note but there's is wied already this note this note and now these notes so these are the only new notes that are added so we see that these discrete cells limit the amount of states that are added while we proceed in our search and so imagine this algorithm runs for a while starting from our start note towards our goal then as we have seen our worst case search space is that of the dyra algorithm it will look like that all those noes will have been visited so how many nodes are these so if we need D steps from our start to the goal then the number of nodes inside this dis is proportional to the area of this dis which is proportional to D raised to the power of 2 so that's bad but it's not too bad and as we know we can hope for a better shape of the search space so that this number gets smaller although of course in general we will end up with d s so now let's have a look at our new algorithm which operates in the kinematic State space and it justes the following starting from the start Noe it explores all the Alternatives and then it explores all alternatives for all those Alternatives and it looks pretty much the same as this algorithm however what we miss now is this feature that we had earlier namely that once we have this visited a note we can cross it off there's no need to visit this node again if we have visited it already now here all our states they are continuous they consist of XY and he now it doesn't make sense to store this XY heading in a list or another data structure of visited States because if you store that state the chance that we generate this continuous state with exactly the same x y and heading is basically zero so again drawing the larger picture with our kinematic State space we start here where we have only one note and then we generate three notes and then for each of those three notes we generate three notes again so overall this is one note then in the next stage it is already three notes in the next stage it is nine notes and since we never wasit a state that we generated earlier again we will carry on like this so when when our search proceeds for a long way until it reaches the goal we will have generated 1 + 3 + 9 and so on+ 3 raised to the power of D if D is that distance again which is the so-called geometric sum and is approximately 1 and 1/2 * 3 raed to the power of D so this is the problem in our algorithm so here we had D raised to the^ of 2 which is growth quadratic in the length of the path but here we have three raised to the power of D which is exponential in the length of the path and this will kill us any computer so imagine you buy a computer which is three times as fast as your previous computer and what you can do with that is you can solve a path that is just one step longer than your previous path so it grows so fast that we have no chance no no matter how fast our computers are so we need to do something else here so one way to solve this problem is to introduce a discrete space of possible poses so just as we did earlier in our a star case where we had this discrete roster and when you whenever we have marked those notes as being visited we do 
not have to add them later on because this flag tells us that this state is already in the set of optimal notes and now we do something similar here we had only XY but now we have a state space of poses XY and heading so effectively we're having now a 3D roster with the first two dimensions are X and Y at discrete steps and third dimension is the heading angle also at discrete steps so for example we could use unit steps for the raser in X and Y and discrete steps for example in terms of multiples of 10° where of course the maximum is 350° so that the space is subdivided into 36 cells so for example if we start at this continuous state which may be 2.4 1.2 and 10° we will enter this into our discrete R cell 2 one and one and then if we move on from here and we end up in this cell which is say 5.1 11.9 and 89.2 de we will enter this into the discrete Cell 5 11 and this divided by 10° in the Floor of that this is 8 and so by marking those cells as being visited we avoid the case that we later generate from a different position a similar note and this prevents then the rapid growth of our search space on the other hand it has to be noted that since in this case we do not generate this solution our search does probably not return the optimum path anymore however as you'll see it works pretty well so the only thing we'll have to implement for this to work is a function called post index which takes a continuous post and computes a discrete index consisting of an X index y index and a heading angle index and using those indices we have to keep a record of any index that we visit during our search now this brings us to your final programming assignment namely you'll have to modify the pp2a code to arrive at the pp2b car state space AAR solution code which is almost identical to the previous code but now has to include this discrete collection of states which have been visited already so it's almost the same up to here so now here is something I've integrated in addition and you will see that shortly you will be able to switch on and off in the user interface forwards and backwards movements so these are exactly the same curves but all of those three additional states do have negative distances so they drive backwards so here is this new function which takes a pose and computes a discrete version of this pose and for this it uses some roster size in X and Y and another roster size in heading which is set to 10° and so here is the exploration of the search space which is now called AAR because it is indeed an AAR like algorithm which keeps track of the notes that were already visited so all the start is identical to the previous version and it starts to get interesting here here you'll have to do some changes so at the very moment I pop my next pose from my Heap I will have to compute a discrete version of this pose using the above function so that's not too hard to do it's just one line and then I will check if this discrete version of the pose is in the dictionary of generated States already and if so I will skip the rest of the loop next we Mark the visited cells that's identical to the previous solution and then since we did not skip above here we now have to enter our new pose into the dictionary of generated States however you'll have to change this here because we do not want to enter the continuous pose here but rather the discrete version of the pose and this is all changes there is to do in particular there's no change to be done down here here continuous versions of the poses are pushed 
onto the heap, but that's fine because all the logic containing the discrete indices is up here and here. finally, remember when you put together the new code to take this part here as shown, because this part is also different from the previous version of the code. so now please go ahead and program this
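A small sketch of the pose discretization described above, assuming a unit raster in x and y and 10-degree bins in heading; the function and variable names are illustrative, not the assignment's exact signature.

from math import floor, pi

def pose_index(pose, pos_raster=1.0, heading_raster=10.0 * pi / 180.0):
    # pose = (x, y, heading); returns a hashable triple of integer cell indices.
    x, y, heading = pose
    return (int(floor(x / pos_raster)),
            int(floor(y / pos_raster)),
            int(floor((heading % (2 * pi)) / heading_raster)))

# In the search loop, each popped pose is discretized first; if that cell was
# generated already the pose is skipped, which prevents the exponential growth
# of the number of expanded states.
generated_states = set()
pose = (2.4, 1.2, 10.0 * pi / 180.0)
idx = pose_index(pose)                  # -> (2, 1, 1), as in the lecture example
if idx not in generated_states:
    generated_states.add(idx)           # ...then push this pose's successors on the heap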
SLAM_Lectures
SLAM_D_13.txt
now let's apply all this to our robot well first of all we need a motion model but as you remember in Unit A we already set up the motion model for our robot there's a track withd W and there's some left and right movement off the tracks and by that the robot will move on a curve segment where the radius are and after moving it will have changed its orientation by an angle of Alpha and so we obtained the following equations so Alpha was r- L / by the width of the robot and R the radius was L / by Al which only works for R not equal L and so for this case we obtained the new state that is the Old State plus r + half of the width time the S of theta + Alpha minus the S of theta and a similar expression for the second component and for the third component we just add Alpha to The Heading so and as you see this is just a function of our old State and Alpha and R but Alpha and R are ultimately computed from L and R and so you see this is the state and this is the control so we can write this equation in the form X Prime is a function of X and U and just for completeness if R equals L then we have the situation that the robot is just moving on in the direction of its heading so this down here is zero because if L and R are the same then the heading does not change and also this is L but you could as well use R because in this case R equals l so I implemented this function for you so in the downloads for this unit you will find this slam 7A extended col filter class and this starts a new class which is called extended common filter and we will learn shortly why it's called extended and so this is the function G which we just introduced so it gets the state which contains the X Y and Theta and gets the control which is left and right movement and then if those two are different then here is the computation that we just had on the previous slide it computes Alpha the radius R and then computes the three components which are called G1 2 3 here the only specialty being that the theta plus alpha is normalized to a range between minus pi and plus pi so this is obtained by adding Pi then doing a model division by 2 pi and then subtracting Pi again and so for the other case the robot doesn't turn and we use those equations which just move in the direction of the heading and leave the Theta as it is and so here you see a new construct which is called array now since we have to deal with vectors and matrices now we import the numerical python so this contains classes for handling matrices and vectors so in the end we take these three Scala values which have been computed either here or there and construct a new array by putting G1 2 and three into a list and calling this array Constructor which converts this list into an array of floats and in the main function there are some constants for our robot and these should be familiar so this is the conversion factor of motor takes 2 mm that is the drag birth of the robot and this is the scanner displacement which we'll need only for the output and then we start with our measured starting position which is XY that's the upper right corner of our Arena and a certain heading and we put that also into an array we read the lock file and then here this is the main Loop and it is very very simple we read the motor tis from the lock file convert them to an array and multiply the array with tick to millimeters so that left and right are multiplied and then we just call our function above the extended column on filter G the function of State control and using additional 
variables in this case only the robot width and so we get a new state and we just depend all the states to a list and then in the end we output all the states but now remember the states are actually the center of our robot but we used to track the laser scanner which is displaced by this scanner displacement and so right down here where we output the position we modify the position so that we output the position of the laser scanner had instead of the center of the robot and this is just to be compatible with our previous handling of the point that is tracked by our overhead camera now let's run this now after we run this it will produce a file called States from tex. text and now if we open this we will see our familiar curve we may load additional data like the landmarks so we can clearly see the trajectory is smooth but not correct all the Rob reference so this should look familiar because it is exactly the same algorithm that we had in Unit A now let's see how we can combine this with our common filter and I'll write down the equation once again so this was the equation for the transition of the state and due to this transition we obtained those two equations which together are the prediction step of the colum filter and as you see this is linear so the transition of the El state uses a linear multiplication by a matrix and the control is also multiplied by a matrix however in our description of our robot we found out that XT is some function of XT minus one and UT and we saw that this function has variables in the denominator and also has s and cosine function so it's nonlinear and so one way to proceed would be to linearize this function so that we obtain this and then we can use this in a common filtering and this is indeed the standard common filter now we will have a look at what what is called the extended common filter so instead of linearizing this we used the nonlinear function to compute the prediction of our new state so in the same manner as we moved here from our transition equation to the equation that computed the predicted state from the old State using just a replacement of XT minus1 by m t minus1 we now move here from our nonlinear equation that uses XT minus1 and U to the prediction equation that uses m t minus1 and u so we proceed in exactly the same manner and then for the predicted co-variance we do the following now we do not have this a matrix anymore because we replaced that by a nonlinear function and so what we have to put in here is the Jacobian matrix of G so it is all derivatives of G with respect to all variables in the state or you could also write this as all partial derivatives of G with respect to the state and this Matrix is computed at M T minus one and U so this is the prediction step of the extended common filter and to give an example so if our state is x y and Theta then the Matrix G would be the partial derivative of the first component of G with respect to x with respect to Y and with respect to Theta and we would do this for all three components of our function G so this is the chopan Matrix of G in case our three-dimensional state is x y and Theta now let us figure out G so in our case we had G G of X Y Theta Ln R which was given by X Y theta plus these terms and so this is G1 G2 and G3 so we have the partial derivative of G1 with respect to X and that's easy because there's no X in this part behind here it's just here so that's one and then the partial derivative of the first component with respect to Y well in the first component there is 
no Y at all so it's zero and it's more complicated for Theta so there's no Theta in this part so that's just a constant term but there's theeta in here and here so you have to form the derivative of those two terms so the derivative of the sign is the cosine so we have this now the second component we have no X in the second component so that's zero we have a y and for the derivative with respect to Theta we obtain this and so for the third component that is just theta plus Alpha so the derivatives with respect to X and Y are zero and with respect to Theta it is one so these are the nine elements of our Matrix G now there's one problem with those formulas because here there's the expression r + W2 but R is L / Alpha and so if R equals L Alpha is zero and we would have a division by zero when Computing this so we need to think about the case R equals L as well let's have a look at this component here let me rewrite this component now let's think about what happens if Alpha goes to zero now we multiply this out and obtain now if Alpha goes to zero then this goes to cosine of theta minus cosine of theta so this will go to zero multiply it with a constant Factor the entire term will go to zero now here it is a little bit more complicated because this will go to zero but this as we saw is L / by Alpha so this will go to infinity and so we need to have a closer look so this is L * and we're interested in the limit when Alpha goes to zero now we used the rule of lital a French mathematician who found out that we can find this limit by forming the derivative in the numerator and denominator so we have the derivative with respect to Alpha down here that's one and the derivative in the numerator that is minus sin of theta plus Alpha and the derivative with respect to Alpha of this term is zero so what remains is min - L * sin of theta and so similarly we can do the second term and so overall for the case Ral L we obtain that g equal 1 1 1 on the main diagonal then minus l s of theta this is the term which just derived on the last slide and L * the cosine of theta so now we obtain G which is the derivative of G with respect to this state for both cases for R equals L it's this and for R not equal to l see two slides ago so now we can program this so I prepared this code for you if you want to do it offline it is slam 7B State the ative question and this is the code we had previously so in our extended Calon filter this is the G that computes the prediction from an Old State and the control and here this is the function you'll have to implement so this is all derivatives of G with respect to all variables in the state and so as we saw this will result in a 3 * 3 Matrix and in order to show you how to construct this I've put here this array Constructor which constructs the 3 * 3 Matrix just with the elements 1 2 3 in the first row for 4 56 in the second row and 78 9 in the third row and so you're doing so by putting every row into a list and making a list of all rows and then giving that list to the array Constructor so you will need the Theta which is the third component so counting from zero it's index number two of the state and you will need left and right which is the two components of the control and you'll need to distinguish those two cases if R is not equal to l so the robot makes a turn or if R equals L well the robot goes straight and so in order to find out if your solution is correct the main function down here does the numeric derivative of G and Compares this to your analytic derivative 
so here we define a small delta and call the original function g with the state modified in x by this delta, with the state modified in y, and modified in theta, and we divide this by delta, so this gives us the difference quotients, which are, if delta is small enough, close to the differential quotients which you are computing down here. and so in the end we output the difference quotient, which is called the numerical derivative, the differential quotient, which is what you compute, the difference between the two, and then we call a function from numerical python which tests if all the values in these two matrices are similar. and so if you run this you should ideally see something like that: this is the result of the difference quotient, this is your result of the differential quotient, and you see the ones on the main diagonal and here the derivative with respect to theta, and you see the difference is very small, and so the final test tells you it seems to be correct because the values in this matrix are very small. now before you start let me explain briefly this trick with the difference quotient. so for example we are interested in the partial derivative of g1 with respect to x, and as you know from differential calculus this is the limit of (g1 of x + delta, y and theta, minus g1 of x, y and theta) divided by delta. and so what the main function does is that it computes this difference quotient for a very small delta, like 10 to the power of -7, then it prints out this value and also prints out this value which results from your analytic differentiation of g1 with respect to x, and then it prints out those two values and checks if they are approximately the same. so now let's program this
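Collecting the nine partial derivatives derived above, a sketch of the Jacobian of g with respect to the state could look like this. It is written as a standalone function for illustration, whereas the course version is a method of the filter class; the argument names are assumptions.

from math import sin, cos
import numpy as np

def dg_dstate(state, control, w):
    # 3x3 Jacobian of the state transition g with respect to (x, y, theta).
    theta = state[2]
    l, r = control
    if r != l:
        alpha = (r - l) / w
        rad = l / alpha
        # Only the derivatives with respect to theta are non-trivial.
        dg1_dtheta = (rad + w / 2.0) * (cos(theta + alpha) - cos(theta))
        dg2_dtheta = (rad + w / 2.0) * (sin(theta + alpha) - sin(theta))
    else:
        # Straight motion: the limit alpha -> 0 (l'Hopital) gives these terms.
        dg1_dtheta = -l * sin(theta)
        dg2_dtheta =  l * cos(theta)
    return np.array([[1.0, 0.0, dg1_dtheta],
                     [0.0, 1.0, dg2_dtheta],
                     [0.0, 0.0, 1.0]])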
SLAM_Lectures
SLAM_G_06.txt
so now finally as a last modification to our fast slam algorithm we're looking for a way to get rid of those landmarks that appear at some point in time but then are not measured subsequently and still stay in the list of landmarks in our particles now here is an approach to deal with those spurious landmarks so when our robot observes The Landmark for the first time and it initializes a new common filter for the landmark it would also initialize a counter and set this counter to one if our robot moves on and observes this Landmark for the second time we will increment this counter each time we observe this landmark and so this count value is a measure of confidence for the existence of this Landmark now what happens if we move on say here the field of view of the laser scanner is like this so it can't see this Landmark anymore and so we have to decide if it shall decrement that number or if it shall leave this as is now if it would DEC Rec commended the following would happen after this movement the count would be two 1 and zero or even minus one and so we would remove the landmark when its count is below zero Now using this strategy would mean that we forget our landmarks if we don't observe them regularly now this looks good at First Sight however remember the situation in our Arena we started here and going along here we observed some landmarks quite for a number of times and then being here they're clearly out of sight if the robot so after a while we would forget those landmarks however later on we come back and the landmarks would be in our field of view again however at that moment we would have forgotten them so we would have to reinitialize the landmarks and all the earlier measurements which have led to a precise position of the landmarks would be lost so we will adopt the following strategy if our robot sees the landmark for the first time it initializes it and sets our Landmark counter to B now later if the robot does not report that it sees this Landmark although The Landmark is within the visibility range of our scanner we will decrement count however if our robot is somewhere where the landmark is not within the visibility range of our laser scanner we will not decrement the count now we will use a simplified method to decide whether a landmark is within the visibility range of our lighter namely we know the minimum and maximum bearing angle of our scanner and we will consider any existing Landmark which has a bearing angle within that range as being visible by our scanner now this oversimplifies things a little bit for example as you know our Landmark detection works by detecting peaks in the scanner data and so certainly if a landmark is for example at the beginning of the scan measurement like this Landmark here our method would fail to detect The Landmark nevertheless our geometric test would say that it is within the range of the laser scanner and so for our simplified test here we just don't care about this also we don't handle occlusions correctly for example if there's one Landmark here and another landmark here then our lighter would detect something like that which means our function will only return this Landmark Although our Criterion would consider both landmarks to be visible and so we also don't care about this case technically we will handle our Landmark counter as follows so at a certain step of our fast slam processing a certain particle has collect to the number of landmarks so using the bearing angle Criterion we will identify the landmarks that are 
visible and decrement their counters whereas we do not touch the counters of all landmarks which are not visible now if we observe a landmark so we obtain an observation and assign it to certain Landmark using our maximum likelihood assignment we will add plus two so that for those landmarks we will indeed increment our counter by one whereas for the landmarks we do not touch anymore this result result in a decrement by one and for any Landmark which we observed for the first time we will directly set the counter to + one and after initializing the new landmarks we will Loop over all the landmarks of a particle and test if their count is smaller than zero so say If the previous count was two then it is two and a landmark stays if it was Zero the result is zero and that stays two but if this was Zero the result will be minus one and the landmark will be deleted from the list of landmarks of this particle so now let's program this I prepared slam 10f counter for you and this is mostly the same code as in our last exercise with some modifications now let's first have a look at the update and compute weights function in our fast slim class so there are two modifications here now in the loop overall particles we had to call to a function that decrements the landmark counters of the particle so we does the following for each Landmark that should be visible to our laser scanner it decrements the counter and for any Landmark outside the visibility range of our laser scanner it leaves the counter as is then here is our Loop overall measurements and this is not modified from our previous solution but here after Computing all the weights and updating the landmarks we call remove spurious landmarks and this function will remove any Landmark which has a counter smaller than zero from the list of landmarks of that particle now these are the modifications in our filter class now let's have a look at the modifications we'll have to do in the particle class so the first modification in the particle class is in the Constructor We Now set up a list of landmark counters which is initially empty below this there's our usual code and you will have to insert your previous code here and there and all the places are marked as usual with that sign and so here in update particle there's the first modification you'll have to do first insert your previous code but then modify this code as follows as you know after Computing the likelihoods your code has two Alternatives namely initializing a new landmark in which case you'll have to append one to the list of landmark counters or updating an existing landmark in which case you'll have to add two to the corresponding Landmark counter and I just explained why this has to be two namely because we decrement the counters of all visible landmarks in advance so we have to add two in order to end up with a plus one for landmark which is in the visibility range of the scanner and is indeed observed and then you'll also have to implement those two functions the first one decrements the landmark counters for any Landmark that is visible and the second one removes any Landmark which has a negative Landmark counter now after you program this run it and it will produce the file fast slam counter to text now load this and you should obtain a result that looks similar to this one in the beginning the covariances of the landmarks get smaller and then our particles travel along the trajectory but now here we encounter an undesired effect namely this Landmark that has been observed 
multiple times before is now occluded by this landmark, and so its counter is decremented, and this results in the landmark being removed, at least for the particle that we pick and whose landmarks are displayed here. shortly after, the landmark is observed again, so a new landmark is initialized, which you can tell from the large error ellipse which then gets subsequently smaller. however note that our modification has solved our previous problem: even though sometimes spurious landmarks appear, they're usually removed shortly after, so that overall we get a pretty good result. so now please program this
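A sketch of the two counter-maintenance routines described above, assuming the particle keeps its landmark positions, covariances and counters in parallel lists. The simplified visibility test uses only the bearing angle, as in the lecture, and all names are illustrative rather than the course's exact code.

from math import atan2, pi

class Particle(object):
    def __init__(self, pose):
        self.pose = pose                    # (x, y, theta)
        self.landmark_positions = []
        self.landmark_covariances = []
        self.landmark_counters = []

    def decrement_visible_landmark_counters(self, min_angle, max_angle):
        # A landmark counts as visible if its bearing lies in the scanner's
        # angular range; occlusions and edge effects are ignored on purpose.
        x, y, theta = self.pose
        for i, (lx, ly) in enumerate(self.landmark_positions):
            bearing = (atan2(ly - y, lx - x) - theta + pi) % (2 * pi) - pi
            if min_angle <= bearing <= max_angle:
                self.landmark_counters[i] -= 1

    def remove_spurious_landmarks(self):
        # Keep only landmarks whose counter is still non-negative.
        keep = [i for i, c in enumerate(self.landmark_counters) if c >= 0]
        self.landmark_positions   = [self.landmark_positions[i]   for i in keep]
        self.landmark_covariances = [self.landmark_covariances[i] for i in keep]
        self.landmark_counters    = [self.landmark_counters[i]    for i in keep]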
SLAM_Lectures
SLAM_G_04.txt
now let's have a look at the third part the landmark update so now the situation is as follows a robot makes a measurement identifies a cylinder at a certain range and bearing angle and there's an existing cylinder some covariance matrix and after computing the likelihoods we decide that this measurement belongs to that landmark and so we want to update this landmark so if this is our old position of the landmark say mu old and this is the error ellipse belonging to our old covariance matrix then we would now expect that due to our measurement position of the landmark will move here so close to the measurement and as a result of this additional information the error ellipse will get smaller so this will be our new mu and this will be our new covariance matrix now remember each of those landmarks is a separate extended kalman filter and so in order to figure out how we have to update the position of the landmark we'll just have to write down the standard common filter equations so in order to update the state we'll first have to compute the common gain which is the sigma namely the old sigma times the jacobian of the measurement function transposed times this term now this is just copied from our standard equations which we have repeated by now for multiple times but now you understand what this term means because we derived this just a few slides ago this is the measurement noise due to the noise of our measurement device and this is the measurement noise that results from the noise of our landmark propagated through our measurement and so we just computed this and this was ql so this means our k equals sigma l times the transpose of t times the inverse of ql and from that we can compute our new mu which is our old mu plus the common gain times our measurement so this is our actual measurement minus our expected measurement which is computed by the h function given the state and our landmark position and our new covariance is computed by this standard formula so it's the identity matrix minus common gain times h times our old covariance matrix this is all there is to do now for the practical implementation remember that h and ql are computed by a function you've written earlier termed h ql jacobian and measurement covariance for landmark and h function is computed using h expected measurement for landmark and so you know everything you need to know you know h you know ql from this function sigma old stl covariance in the particle c is given to the function as a parameter h is computed using this function k is just computed here and the mu old is also taken from the current particle and everything else here is known as well so i prepared the slam 10d update landmark file for you and it contains as usual the particle class now this time i included the measurement function h and the derivative with respect to a landmark and you will also need the expected measurement for landmark function and the jacobian and measurement covariance function which you programmed earlier so just copy your code here and here the update landmark that is actually the new code that you'll have to program now looking at the previous slides that's not very hard to do and here are some additional hints which you'll hopefully find useful now the main code checks your implementation so it sets up a particle it inserts some landmarks and those light marks are actually the ones which result from our initialized new landmark code in the previous slam 10c programming exercise so the three positions of the landmarks and the 
Now the main code checks your implementation: it sets up a particle and inserts some landmarks, and those landmarks are actually the ones which result from our initialize-new-landmark code in the previous slam 10c programming exercise. So the three positions of the landmarks and the three covariance matrices are just inserted into the particle here.

After we have inserted those landmarks, we print them, and then we update our first landmark, landmark number zero, using this measurement, that is, a range of 1000 and a bearing angle of zero. Please note that in this case we set the correspondence of the measurement to landmark zero manually; later on we'll have to replace this by the code that assigns the measurement to the landmark with the maximum likelihood. So here we measure the first landmark once again at exactly the correct location, and then we measure the second landmark once again with a location that is slightly different from its previous location, namely its x value is larger by 100 millimeters. We don't update the third landmark, and then we print out all landmarks again.

If you run that, you should see this: our first landmark at (1000, 0) with an error ellipse like that, our second landmark at (2000, 0) with an error ellipse of the same size in the x dimension but twice the size in the y dimension, and a third landmark, which actually is here.

Now look at the update. The first landmark is updated with exactly the same coordinate, so it stays in the same place, but the error ellipse gets smaller in x and y. If you look at this, it should look familiar: this is 200 divided by the square root of 2, which happens to be 100 times the square root of 2, and also the second half axis is 261 divided by the square root of 2. We measured this landmark two times, which means that the variance is one half of the variance after a single measurement, which in turn means that the standard deviation is the standard deviation of a single measurement divided by the square root of two. So it gets smaller by a factor of the square root of two.

Now look at the second error ellipse. The very same happens here, so the error ellipse gets smaller by a factor of the square root of two, but in this case the position is different as well, because we did our measurement here at 2100. The resulting position of two measurements with identical accuracies is exactly in between, so this will be 2050, and around it we will have our error ellipse, which is smaller by a factor of the square root of two. We observed this earlier: the variance behaves exactly the same as in the first case, even though here the landmark didn't move but here it moved. So once again we see that the computation of the estimated mean value and the computation of the estimated variance are independent; if we had measured a range of 2000, then the position would stay at 2000, and we would get exactly the same result for the covariance matrix. So now please program this and check your result against this outcome.
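To check the square-root-of-two effect numerically, here is a tiny illustrative snippet (not part of the course files). It fuses two independent 1-D estimates of identical accuracy with the standard inverse-variance formula, using the numbers from the second-landmark example; the value of 200 for the single-measurement standard deviation is taken from the example above.

import math

sigma_single = 200.0          # std. dev. of one measurement in x (mm), from the example
z1, z2 = 2000.0, 2100.0       # first and second measured x position of the landmark

var1 = var2 = sigma_single ** 2
var_fused = 1.0 / (1.0 / var1 + 1.0 / var2)      # half of the single-measurement variance
mu_fused = var_fused * (z1 / var1 + z2 / var2)   # inverse-variance weighted mean

print(mu_fused)               # 2050.0, exactly in between the two measurements
print(math.sqrt(var_fused))   # 141.42..., i.e. 200 / sqrt(2)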
SLAM_Lectures / SLAM_B_06.txt
And this is the outcome of our ICP algorithm: it generates the ICP wall transform text file, so open this, and here is the result. As you see, the trajectory of our robot is much smoother than the previous result, and as we step through the robot's positions, we see now that, by using the ICP, those scan points that were off quite strongly in the last result are now matched to the walls. So interestingly, this solution gives us a globally correct as well as a pretty smooth trajectory, where of course the smoothness is related to the fact that we're now using many more points to determine the transformation, which means that any noise on a single point measurement does not influence the final solution of the transformation as much as in the case of our cylinders, where a maximum of only six points was used to determine the transformation. And if you compare this now to the reference trajectory, we see that our solution is now pretty good: it is globally good and it is also pretty smooth.

So congratulations, if you made it that far, then you can be really proud of yourself, because you've developed a lot of useful algorithms during this unit B. What we covered was using sensor data to improve the robot's pose, so far without having a rigorous error model, and we explored two techniques. The first was feature-based localization, and what we did here was first the assignment of landmarks, which in our case were cylinders, then the direct solution of the similarity transform, where we dived into the mathematics a bit, and finally the correction of the robot's pose using the transform we obtained from matching the landmarks. In the second part we explored featureless localization by matching scan points to the walls of the robot's arena: the first step was to assign the points to the walls of the arena, and in the second step we learned about the ICP to find the optimal transformation in an iterative fashion. So in this unit you not only learned those fundamental localization techniques, but you also implemented two very useful algorithms, namely the direct solution of the similarity transform, which does not require any start values or any iteration, and the iterative closest point algorithm. So this is it for unit B, and I hope to see you in the next unit.
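To summarize the featureless part in code form, here is a minimal sketch of the ICP loop built on the direct transform solution. The four helper functions are placeholders standing in for the routines developed in this unit, not their exact names or signatures in the course files.

def icp(world_points, get_corresponding_points, estimate_transform,
        apply_transform, iterations=40):
    # Repeat: assign each scan point to its closest wall point, estimate the
    # transform from those pairs with the direct (non-iterative) solution,
    # apply it, and let the improved assignments drive the next iteration.
    for _ in range(iterations):
        left, right = get_corresponding_points(world_points)
        trafo = estimate_transform(left, right)
        if trafo is None:         # too few correspondences, stop iterating
            break
        world_points = [apply_transform(trafo, p) for p in world_points]
    return world_points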