MIT_RES6012_Introduction_to_Probability_Spring_2018
L033_Independence_of_Two_Events.txt
In the previous example, we had a model where the result of the first coin toss did not affect the probabilities of what might happen in the second toss. This is a phenomenon that we call independence and which we now proceed to define. Let us start with a first attempt at the definition. We have an event, B, that has a certain probability of occurring. We are then told that event A occurred, but suppose that this knowledge does not affect our beliefs about B in the sense that the conditional probability remains the same as the original unconditional probability. Thus, the occurrence of A provides no new information about B. In such a case, we may say that event B is independent from event A. If this is indeed the case, notice that the probability that both A and B occur is always equal, by the multiplication rule, to the probability of A times the conditional probability of B given A. So this is a relation that's always true. But if we also have this additional condition, then this simplifies to the probability of A times the probability of B. So we can find the probability of both events happening by just multiplying their individual probabilities. It turns out that this relation is a cleaner way of formally defining the notion of independence. So we will say that two events, A and B, are independent if this relation holds. Why do we use this definition rather than the original one? This formal definition has several advantages. First, it is consistent with the earlier definition. If this equality is true, then the conditional probability of event B given A, which is the ratio of this divided by that, will be equal to the probability of B. So if this relation holds, then this relation will also hold, and so this more formal definition is consistent with our earlier intuitive definition. A more important reason is that this formal definition is symmetric with respect to the roles of A and B. So instead of saying that B is independent from A, based on this definition we can now say that events A and B are independent of each other. And in addition, since this definition is symmetric and since it implies this condition, it must also imply the symmetrical relation. Namely, that the conditional probability of A given B is the same as the unconditional probability of A. Finally, on the technical side, conditional probabilities are only defined when the conditioning event has non-zero probability. So this original definition would only make sense in those cases where the probability of the event A would be non-zero. In contrast, this new definition makes sense even when we're dealing with zero probability events. So this definition is indeed more general, and this also makes it more elegant. Let us now build some understanding of what independence really is. Suppose that we have two events, A and B, both of which have positive probability. And furthermore, these two events are disjoint. They do not have any common elements. Are these two events independent? Let us check the definition. The probability that both A and B occur is zero because the two events are disjoint. They cannot happen together. On the other hand, the probability of A times the probability of B is positive, since each one of the two terms is positive. And therefore, these two expressions are different from each other, and therefore this equality that's required by the definition of independence does not hold. The conclusion is that these two events are not independent.
In fact, intuitively, these two events are as dependent as Siamese twins. If you know that A occurred, then you are sure that B did not occur. So the occurrence of A tells you a lot about the occurrence or non-occurrence of B. So we see that being independent is something completely different from being disjoint. Independence is a relation about information. It is important to always keep in mind the intuitive meaning of independence. Two events are independent if the occurrence of one event does not change our beliefs about the other. It does not affect the probability that the other event also occurs. When do we have independence in the real world? The typical case is when the occurrence or non-occurrence of each of the two events A and B is determined by two physically distinct and non-interacting processes. For example, whether my coin results in heads and whether it will be snowing on New Year's Day are two events that should be modeled as independent. But I should also say that there are some cases where independence is less obvious and where it happens through a numerical accident. You can now move on to answer some simple questions where you will have to check for independence using either the mathematical or intuitive definition.
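To make the definition concrete, here is a minimal numerical check in Python. The two-fair-dice model and the particular events are illustrative choices, not from the lecture: the sketch verifies that P(A and B) = P(A)P(B) for an independent pair, and that two disjoint events with positive probability fail the test.

from fractions import Fraction
from itertools import product

# Sample space: ordered results of two fair dice; each of the 36 outcomes has probability 1/36.
omega = list(product(range(1, 7), repeat=2))
p_outcome = Fraction(1, len(omega))

def prob(event):
    # Probability of an event given as a set of outcomes.
    return p_outcome * len(event)

A = {w for w in omega if w[0] == 1}          # first roll is a 1
B = {w for w in omega if w[0] + w[1] == 7}   # the two rolls sum to 7
C = {w for w in omega if w[0] + w[1] == 8}   # the two rolls sum to 8

# Independent pair: P(A and B) equals P(A) * P(B) (both are 1/36).
print(prob(A & B), prob(A) * prob(B))
# Disjoint but not independent: P(B and C) = 0 while P(B) * P(C) = 5/216 > 0.
print(prob(B & C), prob(B) * prob(C))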
MIT_RES6012_Introduction_to_Probability_Spring_2018
L016_More_Properties_of_Probabilities.txt
We will now continue and derive some additional properties of probability laws which are, again, consequences of the axioms that we have introduced. The first property is the following. If we have two sets and one set is smaller than the other-- so we have a picture as follows. We have our sample space. And we have a certain set, A. And then we have a certain set, B, which is even bigger. So the set B is the bigger blue set. So if B is a set which is larger than A, then, naturally, the probability that the outcome falls inside B should be at least as big as the probability that the outcome falls inside A. How do we prove this formally? The set B can be expressed as a union of two pieces. One piece is the set A itself. The second piece is whatever elements of B there are, that do not belong in A. What are these elements? They are elements that belong to B. And they do not belong to A, which means that they belong to the complement of A. So we have expressed the set B as the union of two pieces. Now this piece is A. This piece here is outside A. So these two pieces are disjoint. And so we can apply the additivity axiom, and write that the probability of B is equal to the probability of A plus the probability of the other set. And since probabilities are non-negative, this expression here is at least as large as the probability of A. And this concludes the proof of the property that we wanted to show. Indeed, the probability of A is less than or equal to the probability of B. The next property we will show is the following. It allows us to write the probability of the union of two sets for the case now, where the two sets are not necessarily disjoint. So the picture is as follows. We have our two sets, A and B. These sets are not necessarily disjoint. And we want to say something about the probability of the union of A and B. Now the union of A and B consists of three pieces. One piece is this one here. And that piece consists of those elements of A that do not belong to B. So they belong to B complement. This set has a certain probability, let's call it little a and indicate it on this diagram. So a is the probability of this piece. Another piece is this one here, which is the intersection of A and B. It has a certain probability that we denote by little b. This is the probability of A intersection B. And finally, there's another piece, which is out here. And that piece has a certain probability c. It is the probability of that set. And what is that set? That set is the following. It's that part of B that consists of elements that do not belong in A. So it's B intersection with the complement of A. Now let's express the two sides of this equality here in terms of little a, little b, and little c, and see whether we get the same thing. So the probability of A union B. A union B consists of these three pieces that have probabilities little a, little b, and little c, respectively. And by the additivity axiom, the probability of the union of A and B is the sum of the probabilities of these three pieces. Let's look now at the right hand side of that equation and see whether we get the same thing. The probability of A plus the probability of B, minus the probability of A intersection B is equal to the following. A consists of two pieces that have probabilities little a and little b. The set B consists of two pieces that have probabilities little b and little c. And then we subtract the probability of the intersection, which is b. And we notice that we can cancel here one b with another b. 
And what we are left with is a plus b plus c. So this checks. And indeed we have this equality here. We have verified that it is true. One particular consequence of the equality that we derived is the following. Since this term here is always non-negative, this means that the probability of A union B is always less than or equal to the probability of A plus the probability of B. This inequality here is quite useful whenever we want to argue that a certain probability is smaller than something. And it has a name. It's called the union bound. We finally consider one last consequence of our axioms. Namely, we are going to derive an expression, a way of calculating the probability of the union of three sets, not necessarily disjoint. So we have our sample space. And within the sample space there are three sets-- set A, set B, and set C. We are going to use a set theoretic relation. We are going to express the union of these three sets as the union of three disjoint pieces. What are these disjoint pieces? One piece is the set A itself. The second piece is going to be that part of B which is outside A. So this is the intersection of B with the complement of A. The third piece is going to be whatever is left in order to form the union of the three sets. What is left is that part of C that does not belong to A and that does not belong to B. So that part is C intersection with A complement and B complement. Now this set here, of course, is the same as that set because the intersection of two sets is the same no matter in which order we take the two sets. And similarly, the set that we have here is the same one that appears in that expression. Now we notice that these three pieces, the red, the blue, and the green, are disjoint from each other. So by the additivity axiom, the probability of this union here is going to be the sum of the probabilities of the three pieces. And that's exactly the expression that we have up here.
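As a quick sanity check of these identities, here is a small Python sketch on a single fair-die model (the die and the particular events are illustrative choices, not from the lecture); it confirms the two-set inclusion-exclusion formula and the union bound with exact fractions.

from fractions import Fraction

# One roll of a fair die; every outcome has probability 1/6.
omega = set(range(1, 7))

def prob(event):
    return Fraction(len(event), len(omega))

A = {2, 4, 6}   # the roll is even
B = {4, 5, 6}   # the roll is at least 4

# Inclusion-exclusion: P(A union B) = P(A) + P(B) - P(A intersection B).
assert prob(A | B) == prob(A) + prob(B) - prob(A & B)
# Union bound: P(A union B) <= P(A) + P(B).
assert prob(A | B) <= prob(A) + prob(B)
print(prob(A | B))   # 2/3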
MIT_RES6012_Introduction_to_Probability_Spring_2018
L062_Variance.txt
We have introduced the concept of expected value or mean, which tells us the average value of a random variable. We will now introduce another quantity, the variance, which quantifies the spread of the distribution of a random variable. So consider a random variable with a given PMF, for example like the PMF shown in this diagram. And consider another random variable that happens to have the same mean, but its distribution is more spread out. So both random variables have the same mean, which we denote by mu, and which in this picture would be somewhere around here. However, the second PMF, the blue PMF, has typical outcomes that tend to have a larger distance from the mean. By distance from the mean what we mean is that if the result of the random variable, its numerical value, happens to be, let's say for example, this one, then this quantity here, X minus mu, is the distance from the mean, how far away the outcome of the random variable happens to be from the mean of that random variable. Of course, the distance from the mean is a random quantity. It is a random variable. Its value is determined once we know the outcome of the experiment and the value of the random variable. What can we say about the distance from the mean? Let us calculate its average or expected value. The expected value of the distance from the mean, which is this quantity, using the linearity of expectations, is equal to the expected value of X minus the constant mu. But the expected value of X is by definition equal to mu. And so we obtain zero. So we see that the average value of the distance from the mean is always zero. And so it is uninformative. What we really want is the average absolute value of the distance from the mean, or something with this flavor. Mathematically, it turns out that the average of the squared distance from the mean is a better-behaved mathematical object. And this is the quantity that we will consider. It has a name. It is called the variance. And it is defined as the expected value of the squared distance from the mean. The first thing to note is that the variance is always non-negative. This is because it is the expected value of non-negative quantities. How exactly do we compute the variance? The squared distance from the mean is really a function of the random variable X. So it is a function of the form g of X, where g is a particular function defined this way. So we can use the expected value rule applied to this particular function g. And we obtain the following. So what we have to do is to go over all numerical values of the random variable X. For each one, calculate its squared distance from the mean and weigh that quantity according to the corresponding probability of that particular numerical value. One final comment: the variance is a bit hard to interpret, because it is in the wrong units. If capital X corresponds to meters, then the variance has units of meters squared. A more intuitive quantity is the square root of the variance, which is called the standard deviation. It has the same units as the random variable and captures the width of the distribution. Let us now take a quick look at some of the properties of the variance. We know that expectations have a linearity property. Is this the case for the variance as well? Not quite. Instead we have this relation for the variance of a linear function of a random variable. Let us see why it is true. We use the shorthand notation mu for the expected value of X.
We will proceed one step at a time and first consider what happens to the variance if we add a constant to a random variable. So let Y be X plus some constant b. And let us just define nu to be the expected value of Y, which, using linearity of expectations, is the expected value of X plus b. Let us now calculate the variance. By definition the variance of Y is the expected value of the distance squared of Y from its mean. Now we substitute, because in this case Y is equal to X plus b. Whereas the mean, nu, is mu plus b. And now we notice that this b cancels with that b. And we are left with the expected value of X minus mu squared, which is just the variance of X. So this proves this relation for the case where a is equal to 1. The variance of X plus b is equal to the variance of X. So we see that when we add a constant to a random variable, the variance remains unchanged. Intuitively, adding a constant just moves the entire PMF right or left by some amount, but without changing its shape. And so the spread of this PMF remains unchanged. Let us now see what happens if we multiply a random variable by a constant. Let again nu be the expected value of Y. And so in this case by linearity this is equal to a times the expected value of X. So it is a times mu. We calculate the variance once more using the definition and substituting in the place of Y what Y is in this case-- it's aX-- and subtracting the mean of Y, which is a mu, squared. We take out a factor of a squared. And then we use linearity of expectations to note that this is a squared times the expected value of X minus mu squared, which is a squared times the variance of X. So this establishes this formula for the case where b equals zero. Putting together these two facts, if we multiply a random variable by a, the variance gets multiplied by a squared. And if we add a constant, the variance doesn't change. And this establishes this particular fact. As an example, the variance of, let's say, 3 minus 4X is going to be equal to minus 4 squared times the variance of X, which is 16 times the variance of X. Finally, let me mention an alternative way of computing variances, which is often a bit quicker. We have this useful formula here. We will see later a few examples of how it is used, but for now let me just show why it is true. We have by definition that the variance of X is the expected value of X minus mu squared. Now let us rewrite what is inside the expectation by just expanding this square, which is X squared minus 2 mu X plus mu squared. Using linearity of expectations, this is broken down into expected value of X squared minus the expected value of two times mu X. But mu is a constant. So we can take it outside the expected value. And we're left with minus 2 mu times the expected value of X, plus mu squared. But remember that mu is just the same as the expected value of X. So what we have here is minus twice the expected value of X, squared, plus the expected value of X, squared, and that leaves us with just minus the expected value of X, squared. So we will now move in the next segment into a few examples of variance calculations.
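To see these facts with concrete numbers, here is a short Python sketch (the particular PMF is an arbitrary example, not from the lecture); it computes the variance with the expected value rule, checks the alternative formula E[X^2] - (E[X])^2, and verifies that Y = 3 - 4X has 16 times the variance of X.

from fractions import Fraction as F

# An arbitrary example PMF, written as {value: probability}.
pmf = {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)}

def expectation(pmf, g=lambda x: x):
    # Expected value rule: E[g(X)] = sum over x of g(x) * p_X(x).
    return sum(g(x) * px for x, px in pmf.items())

mu = expectation(pmf)
var = expectation(pmf, lambda x: (x - mu) ** 2)         # definition of the variance
var_alt = expectation(pmf, lambda x: x ** 2) - mu ** 2  # E[X^2] - (E[X])^2

# The lecture's example Y = 3 - 4X.
a, b = -4, 3
pmf_y = {a * x + b: px for x, px in pmf.items()}
mu_y = expectation(pmf_y)
var_y = expectation(pmf_y, lambda y: (y - mu_y) ** 2)

print(var, var_alt)          # the two formulas agree (both 1/2 here)
print(var_y, a ** 2 * var)   # Var(aX + b) = a^2 Var(X) (both 8 here)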
MIT_RES6012_Introduction_to_Probability_Spring_2018
L044_Combinations.txt
Let us now study a very important counting problem, the problem of counting combinations. What is a combination? We start with a set of n elements. And we're also given a non-negative integer, k. And we want to construct or to choose a subset of the original set that has exactly k elements. In different language, we want to pick a combination of k elements of the original set. In how many ways can this be done? Let us introduce some notation. We use this notation here, which we read as "n-choose-k," to denote exactly the quantity that we want to calculate, namely the number of subsets of a given n-element set, where we only count those subsets that have exactly k elements. How are we going to calculate this quantity? Instead of proceeding directly, we're going to consider a somewhat different counting problem which we're going to approach in two different ways, get two different answers, compare those answers, and by comparing them, we're going to get an equation, which is going to give us, in the end, the desired number. The alternative problem that we're going to use is the following. We start, as before, with our given set that consists of n elements. But instead of picking a subset, what we want to do is to construct a list, an ordered sequence, that consists of k distinct elements taken out of the original set. So we think of having k different slots, and we want to fill each one of those slots with one of the elements of the original set. In how many ways can this be done? Well, we want to use the counting principle. So we want to decompose this problem into stages. So what we can do is to choose each one of the k items that go into this list one at a time. We first choose an item that goes to the first position, to the first slot. Having used one of the items in that set, we're left with n minus 1 choices for the item that can go into the second slot. And we continue similarly. When we're ready to fill the last slot, we have already used k minus one of the items, which means that the number of choices that we're going to have at that stage is n minus k plus 1. At this point, it's also useful to simplify that expression a bit. We observe that this is the same as n factorial divided by the factorial of n minus k. Why is this the case? You can verify that this is correct by moving the denominator to the other side. And when you do that, you realize that you have the product of all terms from n down to n minus k plus 1. And then you have the product of n minus k going all the way down to one. And that's exactly the product, which is the same as n factorial. It's a product of all integers from n all the way down to 1. So this was the first method of constructing the list that we wanted. How about a second method? What we can do is to first choose k items out of the original set, and then take those k items and order them in a sequence to obtain an ordered list. So we construct our ordered list in two stages. In the first stage, how many choices do we have? That's the number of subsets with k elements out of the original set. This number, we don't know what it is. That's what we're trying to calculate. But we have a symbol for it. It's n-choose-k. How about the second stage? We have k elements, and we want to arrange them in a sequence. That is, we want to form a permutation of those k elements. This is a problem that we have already studied, and we know that the answer is k factorial.
According to the counting principle, the number of ways that this two-stage construction can be made is equal to the product of the number of ways, number of options that we have in the first stage times the number of options that we have in the second stage. So this is one answer for the number of possible ordered sequences. This is another answer. Of course, both of them are correct. And therefore, they have to be equal. And by using that equality, we can now find a formula for this coefficient n-choose-k simply by taking this k factorial factor and sending it to the denominator of that expression. So by equating this expression with that expression here, we find the final answer, which is that the number of choices, n-choose-k, is equal to this expression here. Now, this expression is valid only for numbers that make sense. So n can be any integer, any non-negative integer. And k, the only k's that make sense, would be k's from 0, 1 up to n. You may be wondering about some of the extreme cases of that formula. What does it mean for n to be 0 or for k equal to 0? So let us consider now some of these extreme cases and make a sanity check about this formula. So this is the formula that we have and that we want to check. The first case to consider is the extreme case of n-choose-n. What does that correspond to? Out of a set with n elements, we want to choose a subset that has n elements. There's not much of a choice here. We just have to take all of the elements of the original set and put them in the subset. So the subset is the same as the set itself. So we only have one choice here. That should be the answer. Let's check it with the formula. The formula gives us n factorial divided by n factorial. And then, since k is equal to n, here we get zero factorial. Is this correct? Well, it becomes correct as long as we adopt the convention that zero factorial is equal to 1. We're going to adopt this convention and keep it throughout this course. Let's look at another extreme case now, the coefficient n choose 0. This time let us start from the formula. The formula tells us that this should be n factorial divided by 0 factorial and divided by n factorial, since the number k is equal to 0. Using the convention that we have, this is equal to 1. So this is, again, equal to 1. Is it the correct answer? How many subsets of a given set are there that have exactly zero elements? Well, there's only one subset that has exactly 0 elements, and this is the empty set. So this explains this particular answer and shows that it is meaningful and that it makes sense. Now, let us use our understanding of those coefficients to solve a somewhat harder problem. Suppose that for some reason, you want to calculate this sum. What is it going to be? One way of approaching this problem is to use the formula for these coefficients, do a lot of algebra. And if you're really patient and careful, eventually you should be able to get the right answer. But this is very painful. Let us think whether there's a clever way, a shortcut, of obtaining this answer. Let us try to think what this sum is all about. This sum includes this term, which is the number of zero-element subsets. This number, which is the number of subsets that have one element. And we keep going all the way to the number of subsets that have exactly n elements. So we're counting zero-element subsets, one-element subsets, all the way up to n. So what we're counting really is the number of all subsets of our given set. 
But this is a number that we know what it is. The number of subsets of a given set with n elements is 2 to the n. So by thinking carefully and interpreting the terms in this sum, we were able to solve this problem very fast, something that would be extremely tedious if we had tried to do it algebraically. For some practice with this idea, why don't you pause at this point and try to solve a problem of a similar nature?
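Here is a tiny Python sketch of these two facts (n = 5 is just an illustrative choice): it computes n-choose-k from the factorial formula and checks that the coefficients sum to 2 to the n, the total number of subsets.

from math import factorial

def n_choose_k(n, k):
    # Number of k-element subsets of an n-element set: n! / (k! (n - k)!).
    return factorial(n) // (factorial(k) * factorial(n - k))

n = 5
coefficients = [n_choose_k(n, k) for k in range(n + 1)]
print(coefficients)                # [1, 5, 10, 10, 5, 1]
print(sum(coefficients), 2 ** n)   # both are 32, the number of all subsets of a 5-element set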
MIT_RES6012_Introduction_to_Probability_Spring_2018
S013_Sequences_and_their_Limits.txt
In this segment, we will discuss what a sequence is and what it means for a sequence to converge. So a sequence is nothing but some collection of elements that are coming out of some set, and that collection of elements is indexed by the natural numbers. We often use the notation, and we say that we have a sequence ai, or sometimes we use the notation that we have a sequence of this kind to emphasize the fact that it's a sequence and not just a single number. And what we mean by this is that we have i, an index that runs over the natural numbers, which is the set of positive integers, and each ai is an element of some set. In many cases, the set is going to be just the real line, in which case we're dealing with a sequence of real numbers. But it is also possible that the set over which our sequence takes values is Euclidean space, n-dimensional space, in which case we're dealing with a sequence of vectors. But it also could be any other kind of set. Now, the definition that I gave you may still be a little vague. You may wonder how a mathematician would define formally a sequence. Formally, what a sequence is, is just a function that, to any natural number, associates an element of S. In particular, if we evaluate the function f at some argument i, this gives us the ith element of the sequence. So that's what a sequence is. Now, about sequences, we typically care whether a sequence converges to some number a, and we often use this notation. But to make it more precise, you also add this notation here. And we read this as saying that as i goes to infinity, the sequence ai converges to a certain number a. A more formal mathematical notation would be the limit as i goes to infinity of ai is equal to a certain number, a. But what exactly does this mean? What does it mean for a sequence to converge? What is the formal definition? It is as follows. Let us plot the sequence as a function of i. So this is the i-axis, and here we plot entries of ai. For a sequence to converge to a certain number a, we need the following to happen. If we draw a small band around that number a, what we want is that the elements of the sequence, as i increases, eventually get inside this band and stay inside that band forever. Now, let us turn this into a more precise statement. What we mean is the following. If I give you some positive number epsilon, and I'm going to use that positive number epsilon to define a band around the number a. So it's this band here. If I give you a positive number epsilon, and therefore, this way, have defined a certain band, there exists a time after which the entries will get inside the band. In this picture, it would be this time. So there exists a time-- let's call that time i0-- so i0 is here such that after that time, what we have is that the element of the sequence is within epsilon of a. So this is the formal definition of convergence of a sequence to a certain number a. The definition may look formidable and difficult to parse, but what it says in plain English is pretty simple. No matter what kind of band I take around my limit a, eventually, the sequence will be inside this band and will stay inside there. Convergence of sequences has some very nice properties that you're probably familiar with. For example, if I tell you that a certain sequence converges to a number a and another sequence converges to a number b, then we will have that ai plus bi, which is a new sequence-- the ith element of the sequence is this sum-- will converge to a plus b.
Or similarly, ai times bi, which is another sequence, converges to a times b. And if, in addition, g is a continuous function, then g of ai will converge to g of a. So for example, if the ais converge to a, then the sequence ai squared is going to converge to a squared.
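The epsilon-band definition can be explored numerically. The Python sketch below (the sequence 1/i and the tolerance 0.01 are illustrative choices) scans a finite range of indices and reports the first index i0 after which every checked entry stays within epsilon of the limit; of course a finite scan only illustrates the definition, it does not prove convergence.

def first_index_inside_band(a, limit, eps, i_max=10**5):
    # Smallest i0 (within the scanned range) such that |a(i) - limit| < eps
    # for every scanned i >= i0; returns None if the entries keep leaving the band.
    i0 = None
    for i in range(1, i_max + 1):
        if abs(a(i) - limit) < eps:
            if i0 is None:
                i0 = i
        else:
            i0 = None   # the sequence left the band, so restart the search
    return i0

# Example: a_i = 1/i converges to 0; with eps = 0.01 the entries enter
# and stay inside the band from i0 = 101 on.
print(first_index_inside_band(lambda i: 1.0 / i, 0.0, 0.01))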
MIT_RES6012_Introduction_to_Probability_Spring_2018
L063_The_Variance_of_the_Bernoulli_The_Uniform.txt
In this segment, we will go through the calculation of the variances of some familiar random variables, starting with the simplest one that we know, which is the Bernoulli random variable. So let X take values 0 or 1, and it takes a value of 1 with probability p. We have already calculated the expected value of X, and we know that it is equal to p. Let us now compute its variance. One way of proceeding is to use the definition and then the expected value rule. So if we now apply the expected value rule, we need the summation over all possible values of X. There are two values-- x equal to 1 or x equal to 0. The contribution when X is equal to 1 is 1 minus the expected value, which is p, all of this squared. And the value of 1 is taken with probability p. There is another contribution to this sum when little x is equal to 0. And that contribution is going to be 0 minus p, all of this squared, times the probability of 0, which is 1 minus p. And now we carry out some algebra. We expand the square here, 1 minus 2p plus p squared. And after we multiply with this factor of p, we obtain p minus 2p squared plus p to the third power. And then from here we have a factor of p squared times 1, p squared times minus p. That gives us a minus p cubed. Then we notice that this term cancels out with that term. And p squared minus 2p squared gives minus p squared, which leaves us with p minus p squared. And we factor this as p times 1 minus p. An alternative calculation uses the formula that we provided a little earlier. Let's see how this will go. We have the following observation. The random variable X squared and the random variable X-- they are one and the same. When X is 0, X squared is also 0. When X is 1, X squared is also 1. So as random variables, these two random variables are equal in the case where X is a Bernoulli. So what we have here is just the expected value of X minus the square of the expected value of X. And this is p minus p squared, which is the same answer as we got before-- p times 1 minus p. And we see that the calculations and the algebra involved using this formula were a little simpler than they were before. Now the form of the variance of the Bernoulli random variable has an interesting dependence on p. It's instructive to plot it as a function of p. So this is a plot of the variance of the Bernoulli as a function of p, as p ranges between 0 and 1. p times 1 minus p is a parabola. And it's a parabola that is 0 when p is either 0 or 1. And it has this particular shape, and the peak of this parabola occurs when p is equal to 1/2, in which case the variance is 1/4. In some sense, the variance is a measure of the amount of uncertainty in a random variable, a measure of the amount of randomness. A coin is most random if it is fair, that is, when p is equal to 1/2. And in this case, the variance confirms this intuition. The variance of a coin flip is biggest if that coin is fair. On the other hand, in the extreme cases where p equals 0-- so the coin always results in tails, or if p is equal to 1 so that the coin always results in heads-- in those cases, we do not have any randomness. And the variance, correspondingly, is equal to 0. Let us now calculate the variance of a uniform random variable. Let us start with a simple case where the range of the uniform random variable starts at 0 and extends up to some n. So there is a total of n plus 1 possible values, each one of them having the same probability-- 1 over n plus 1. We calculate the variance using the alternative formula. And let us start with the first term.
What is it? We use the expected value rule, and we argue that with probability 1 over n plus 1, the random variable X squared takes the value 0 squared, with the same probability, takes the value 1 squared. With the same probability, it takes the value 2 squared, and so on, all of the way up to n squared. And then there's the next term. The expected value of the uniform is the midpoint of the distribution by symmetry. So it's n over 2, and we take the square of that. Now to make progress here, we need to evaluate this sum. Fortunately, this has been done by others. And it turns out to be equal to 1 over 6, times n, times n plus 1, times 2n plus 1. This formula can be proved by induction, but we will just take it for granted. Using this formula, and after a little bit of simple algebra and after we simplify, we obtain a final answer, which is of the form 1 over 12, times n, times n plus 2. How about the variance of a more general uniform random variable? So suppose we have a uniform random variable whose range is from a to b. How is this PMF related to the one that we already studied? First, let us assume that n is chosen so that it is equal to b minus a. So in that case, the difference between the last and the first value of the random variable is the same as the difference between the last and the first possible value in this PMF. So both PMFs have the same number of terms. They have exactly the same shape. The only difference is that the second PMF is shifted away from 0, and it starts at a instead of starting at 0. Now what does shifting a PMF correspond to? It essentially amounts to taking a random variable-- let's say, with this PMF-- and adding a constant to that random variable. So if the original random variable takes the value of 0, the new random variable takes the value of a. If the original takes the value of 1, this new random variable takes the value of a plus 1, and so on. So this shifted PMF is the PMF associated to a random variable equal to the original random variable plus a constant. But we know that adding a constant does not change the variance. Therefore, the variance of this PMF is going to be the same as the variance of the original PMF, as long as we make the correspondence that n is equal to b minus a. So doing this substitution in the formula that we derived earlier, we obtain 1 over 12, times b minus a, times b minus a plus 2.
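As a numerical cross-check (the parameter values p = 1/3, a = 3, b = 9 are arbitrary illustrative choices), this short Python sketch computes the two variances directly from their PMFs and compares them with the closed-form expressions p times (1 minus p) and 1/12 times (b minus a) times (b minus a plus 2).

from fractions import Fraction as F

def variance(pmf):
    # Direct computation from a PMF given as {value: probability}.
    mu = sum(x * px for x, px in pmf.items())
    return sum((x - mu) ** 2 * px for x, px in pmf.items())

# Bernoulli with parameter p: variance p(1 - p).
p = F(1, 3)
bernoulli = {1: p, 0: 1 - p}
print(variance(bernoulli), p * (1 - p))                      # both 2/9

# Uniform on {a, a+1, ..., b}: variance (1/12)(b - a)(b - a + 2).
a, b = 3, 9
uniform = {x: F(1, b - a + 1) for x in range(a, b + 1)}
print(variance(uniform), F(1, 12) * (b - a) * (b - a + 2))   # both 4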
MIT_RES6012_Introduction_to_Probability_Spring_2018
S018_Countable_and_Uncountable_Sets.txt
Probability models often involve infinite sample spaces, that is, infinite sets. But not all sets are of the same kind. Some sets are discrete and we call them countable, and some are continuous and we call them uncountable. But what exactly is the difference between these two types of sets? How can we define it precisely? Well, let us start by first giving a definition of what it means to have a countable set. A set will be called countable if its elements can be put into a 1-to-1 correspondence with the positive integers. This means that we look at the elements of that set, and we take one element-- we call it the first element. We take another element-- we call it the second. Another, we call the third element, and so on. And this way we will eventually exhaust all of the elements of the set, so that each one of those elements corresponds to a particular positive integer, namely the index that appears underneath. More formally, what's happening is that we take elements of that set that are arranged in a sequence. We look at the set, which is the entire range of values of that sequence, and we want that sequence to exhaust the entire set omega. Or in other words, in simpler terms, we want to be able to arrange all of the elements of omega in a sequence. So what are some examples of countable sets? In a trivial sense, the positive integers themselves are countable, because we can arrange them in a sequence. This is almost tautological, by the definition. For a more interesting example, let's look at the set of all integers. Can we arrange them in a sequence? Yes, we can, and we can do it in this manner, where we alternate between positive and negative numbers. And this way, we're going to cover all of the integers, and we have arranged them in a sequence. How about the set of all pairs of positive integers? This is less clear. Let us look at this picture. This is the set of all pairs of positive integers, which we understand to continue indefinitely. Can we arrange this set in a sequence? It turns out that we can. And we can do it by tracing a path of this kind. So you can probably get the sense of how this path is going. And by continuing this way, over and over, we're going to cover the entire set of all pairs of positive integers. So we have managed to arrange them in a sequence. So the set of all such pairs is indeed a countable set. And the same argument can be extended to argue for the set of all triples of positive integers, or the set of all quadruples of positive integers, and so on. This is actually not just a trivial mathematical point that we discuss for some curious reason, but it is because we will often have sample spaces that are of this kind. And it's important to know that they're countable. Now for a more subtle example. Let us look at all rational numbers within the range between 0 and 1. What do we mean by rational numbers? We mean those numbers that can be expressed as a ratio of two integers. It turns out that we can arrange them in a sequence, and we can do it as follows. Let us first look at rational numbers that have a denominator of 2. Then, look at the rational numbers that have a denominator of 3. Then, look at the rational numbers, always within this range of interest, that have a denominator of 4. And then we continue similarly-- rational numbers that have a denominator of 5, and so on. This way, we're going to exhaust all of the rational numbers. Actually, this number here already appeared there. It's the same number.
So we do not need to include this in a sequence, but that's not an issue. Whenever we see a rational number that has already been encountered before, we just delete it. In the end, we end up with a sequence that goes over all of the possible rational numbers. And so we conclude that the set of all rational numbers is itself a countable set. So what kind of set would be uncountable? An uncountable set, by definition, is a set that is not countable. And there are examples of uncountable sets, most prominently, continuous subsets of the real line. Whenever we have an interval, the unit interval, or any other interval that has positive length, that interval is an uncountable set. And the same is true if, instead of an interval, we look at the entire real line, or we look at the two-dimensional plane, or three-dimensional space, and so on. So all the usual sets that we think of as continuous sets turn out to be uncountable. How do we know that they are uncountable? There is actually a brilliant argument that establishes that the unit interval is uncountable. And then the argument is easily extended to other cases, like the reals and the plane. We do not need to know how this argument goes, for the purposes of this course. But just because it is so beautiful, we will actually be presenting it to you.
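To make the "tracing a path" idea concrete, here is a small Python sketch that lists pairs of positive integers by sweeping the diagonals i + j = 2, 3, 4, and so on. This is one concrete path of the kind described; the exact path drawn in the lecture's figure may differ, but any such path works.

def pairs_in_a_sequence(how_many):
    # Arrange pairs of positive integers in a sequence by sweeping successive
    # diagonals: first all (i, j) with i + j = 2, then i + j = 3, and so on.
    out = []
    s = 2
    while len(out) < how_many:
        for i in range(1, s):
            out.append((i, s - i))
            if len(out) == how_many:
                return out
        s += 1
    return out

print(pairs_in_a_sequence(10))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]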
MIT_RES6012_Introduction_to_Probability_Spring_2018
L014_Probability_Axioms.txt
We have so far discussed the first step involved in the construction of a probabilistic model. Namely, the construction of a sample space, which is a description of the possible outcomes of a probabilistic experiment. We now come to the second and much more interesting part. We need to specify which outcomes are more likely to occur and which ones are less likely to occur and so on. And we will do that by assigning probabilities to the different outcomes. However, as we try to do this assignment, we run into some kind of difficulty, which is the following. Remember the previous experiment involving a continuous sample space, which was the unit square and in which we throw a dart at random and record the point that occurred. In this experiment, what do you think is the probability of a particular point? Let's say what is the probability that my dart hits exactly the center of this square. Well, this probability would be essentially 0. The probability of hitting the center exactly, with infinite precision, should be 0. And so it's natural that in such a continuous model any individual point should have a 0 probability. For this reason instead of assigning probabilities to individual points, we will instead assign probabilities to whole sets, that is, to subsets of the sample space. So here we have our sample space, which is some abstract set omega. Here is a subset of the sample space. Call it capital A. We're going to assign a probability to that subset A, which we're going to denote with this notation, which we read as the probability of set A. So probabilities will be assigned to subsets. And this will not cause us difficulties in the continuous case because even though individual points would have 0 probability, if you ask me what are the odds that my dart falls in the upper half, let's say, of this diagram, then that should be a reasonable positive number. So even though individual outcomes may have 0 probabilities, sets of outcomes in general would be expected to have positive probabilities. So coming back, we're going to assign probabilities to the various subsets of the sample space. And here comes a piece of terminology, that a subset of the sample space is called an event. Why is it called an event? Because once we carry out the experiment and we observe the outcome of the experiment, either this outcome is inside the set A and in that case we say that event A has occurred, or the outcome falls outside the set A in which case we say that event A did not occur. Now we want to move on and describe certain rules. The rules of the game in probabilistic models, which are basically the rules that these probabilities should satisfy. They shouldn't be completely arbitrary. First, by convention, probabilities are always given in the range between 0 and 1. Intuitively, 0 probability means that we believe that something practically cannot happen. And probability of 1 means that we're practically certain that an event of interest is going to happen. So we want to specify rules of this kind for probabilities. These rules that any probabilistic model should satisfy are called the axioms of probability theory. And our first axiom is a nonnegativity axiom. Namely, probabilities will always be non-negative numbers. It's a reasonable rule. The second rule is that if the subset that we're looking at is actually not a subset but is the entire sample space omega, the probability of it should always be equal to 1. What does that mean? We know that the outcome is going to be an element of the sample space.
This is the definition of the sample space. So we have absolute certainty that our outcome is going to be in omega. Or in different language we have absolute certainty that event omega is going to occur. And we capture this certainty by saying that the probability of event omega is equal to 1. These two axioms are pretty simple and very intuitive. The more interesting axiom is the next one that says something a little more complicated. Before we discuss that particular axiom, a quick reminder about set theoretic notation. If we have two sets, let's say a set A and another set B, we use this particular notation, which we read as "A intersection B" to refer to the collection of elements that belong to both A and B. So in this picture, the intersection of A and B is this shaded set. We use this notation, which we read as "A union B", to refer to the set of elements that belong to A or to B or to both. So in terms of this picture, the union of the two sets would be this blue set. After this reminder about set theoretic notation, now let us look at the form of the third axiom. What does it say? If we have two sets, two events, two subsets of the sample space, which are disjoint. So here's our sample space. And here are the two sets that are disjoint. In mathematical terms, two sets being disjoint means that their intersection has no elements. So their intersection is the empty set. And we use this symbol here to denote the empty set. So if the intersection of two sets is empty, then the probability that the outcome of the experiment falls in the union of A and B, that is, the probability that the outcome is here or there, is equal to the sum of the probabilities of these two sets. This is called the additivity axiom. So it says that we can add probabilities of different sets when those two sets are disjoint. In some sense we can think of probability as being one pound of some substance which is spread over our sample space and the probability of A is how much of that substance is sitting on top of a set A. So what this axiom is saying is that the total amount of that substance sitting on top of A and B is how much is sitting on top of A plus how much is sitting on top of B. And that is the case whenever the sets A and B are disjoint from each other. The additivity axiom needs to be refined a bit. We will talk about that a little later. Other than this refinement, these three axioms are the only requirements in order to have a legitimate probability model. At this point you may ask, shouldn't there be more requirements? Shouldn't we, for example, say that probabilities cannot be greater than 1? Yes and no. We do not want probabilities to be larger than 1, but we do not need to say it. As we will see in the next segment, such a requirement follows from what we have already said. And the same is true for several other natural properties of probabilities.
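For a finite sample space, the axioms can be spelled out concretely. The Python sketch below (the four-outcome space and the particular probability assignment are made-up illustrations) builds a probability law from the probabilities of individual outcomes and checks nonnegativity, normalization, and additivity for a pair of disjoint events.

from fractions import Fraction

# A made-up finite sample space and a candidate probability assignment on its outcomes.
point_prob = {"a": Fraction(1, 2), "b": Fraction(1, 4),
              "c": Fraction(1, 8), "d": Fraction(1, 8)}
omega = set(point_prob)

def prob(event):
    # In a finite model, the probability of an event is the sum of the
    # probabilities of the outcomes it contains.
    return sum(point_prob[w] for w in event)

# Nonnegativity: every probability is a non-negative number.
assert all(px >= 0 for px in point_prob.values())
# Normalization: the probability of the entire sample space is 1.
assert prob(omega) == 1
# Additivity: for disjoint events A and B, P(A union B) = P(A) + P(B).
A, B = {"a"}, {"b", "c"}
assert A & B == set()
assert prob(A | B) == prob(A) + prob(B)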
MIT_RES6012_Introduction_to_Probability_Spring_2018
S091_Buffons_Needle_Monte_Carlo_Simulation.txt
PROFESSOR: In this segment, we will look at the famous example, which was posed by Comte de Buffon-- a French naturalist-- back in the 18th century. And it marks the beginning of a subject that is known as the subject of geometric probability. The problem is pretty simple. We have the infinite plane, and we draw lines that are parallel to each other. And they're spaced apart d units. So this distance here is d. And the same for all the other lines. We take a needle that has a certain length-- l-- and we throw it at random on the plane. So the needle might fall this way, so that it doesn't cross any line, or it might fall this way, so that it ends up crossing one of the lines. If the needle is long enough, it might actually even end up crossing two of the lines. But we will make the assumption that the length of the needle is less than the distance between the two-- between two adjacent lines, so that we're going to have either this configuration, or that configuration. So in this setting, we're interested in the question of how likely is it that the needle is going to intersect one of the lines if the needle is thrown completely at random? We will answer this question, and we will proceed as follows. First, we need to model the experiment-- the probabilistic experiment-- mathematically. That is, we need to define an appropriate sample space, define some relevant random variables, choose an appropriate probability law, identify the event of interest, and then calculate. Let us see what it takes to describe a typical outcome of the experiment. Suppose that the needle fell this way, so that the nearest line is the one above. And let us mark here the center of the needle. One quantity of interest is this vertical distance between the needle and the nearest line. Let us call this quantity x. We're using here a lowercase x, because we're dealing with a numerical value in one particular outcome of the experiment. But we think of this x as being the realization of a certain random variable that we will denote by capital X. What else does it take to describe the needle? Suppose that the needle had fallen somewhere so that it is at the same vertical distance from the nearest line, but it has an orientation of this kind. This orientation compared to that one should make a difference. Because when it falls that way, it's more likely that it's going to cut the next line as opposed to this case. So the angle that the needle is making with the parallel lines should also be relevant. So let us give a name to that particular angle. So let's extend that line until it crosses one of the lines. And let us give a name to this angle, and call it theta. So if I tell you x and theta, you know how far away the needle is from the nearest line, and at what angle it is. It looks like these are two useful variables to describe the outcome of the experiment, so let us try working with these. So our model is going to involve two random variables defined the way we discussed it just now. What is the range of these random variables? Since we took x to be the distance from the nearest line, and the lines are d units apart, this means that x is going to be somewhere between 0 and d over 2. How about theta? So the needle makes two angles with the parallel lines. It's this angle, and the complementary one. Which one do we take? Well, we use a convention that theta is defined as the acute angle that the direction of the needle is making with the lines, so that theta will vary over a range from 0 to pi over 2.
And our sample space for the experiment will be the set of all pairs of x and theta that satisfy these two conditions. These will be the possible x's and thetas. Having defined the sample space, next we need to define a probability law. At this point, we do not want to make any arbitrary assumptions. We only have the words completely at random to go by. But what do these words mean? We will interpret them to mean that there are no preferred x values, so that all x values are-- in some sense-- equally likely. So we're going to assume that x is a uniform random variable. Since it is uniform, it's going to be a constant over this range. And in order to integrate to 1, that constant will have to be 2 over d. And we understand that the PDF of x is 0 outside that range. Similarly for theta, we do not want to assume that some orientations are more likely than other orientations. So we will again assume a uniform probability distribution. And therefore, that PDF must be equal to 2 over pi for thetas over this particular range. So far, we have specified the marginal PDFs of each one of the two random variables. How about the joint PDF? In order to have a complete model, we need to have a joint PDF in our hands. Here, we're going to make the assumption that x and theta are independent of each other. And in that case, the joint PDF is determined by just taking the product of the marginal PDFs. So the joint PDF is going to be equal to 4 divided by pi times d. By this point, we have completely specified a probabilistic model. We have made some assumptions, which you might even consider arbitrary. But these assumptions are a reasonable attempt at capturing the idea that the needle is thrown completely at random. This completes the subjective part-- the modeling part. The next step is much more streamlined. There's not going to be any choices. We just need to consider the event of interest, express it in terms of the random variables that we have in our hands, and then use the probability model that we have to calculate the probability of this particular event. So let us identify the event of interest. When will the needle intersect the nearest line? This will depend on the following. We can look at the vertical extent of the needle. By vertical extent, I mean the following. Let's see how far the needle goes in the vertical direction, which is the length of this green segment here. In this example, the vertical extent of the needle is less than the distance from the next line. And we do not have an intersection. If the figure was something like this, the vertical extent of the needle would have been that, but x would have been just this little segment. The vertical extent is bigger than x and the needle intersects the line. So we have an intersection if and only if the vertical extent-- which is this vertical green segment-- is larger than the distance x. Or equivalently, if x is less than the vertical extent. So we will have an intersection if x is less than or equal to the vertical extent of the needle. Now, how big is this vertical extent? Let's use some trigonometry here. This angle here is theta, so this angle here is also theta. Here, we have a right triangle and the hypotenuse of this triangle is l over 2. This angle is theta, therefore this vertical segment is equal to l over 2 times sine theta. So this is the geometrical condition that describes the event that the needle intersects the nearest line. And all we need to do now is to calculate the probability of this event.
So here is what we have so far. This is the picture that we had before, but drawn in a somewhat nicer way. This is the joint PDF that we decided upon. And we wish to calculate the probability of this particular event-- that x is less than or equal to l over 2 sine theta. How do we calculate the probability of an event that has to do with two random variables? What we do is we take the joint PDF-- which in our case is 4 over pi d-- and integrate it over the set of x's and theta's for which the PDF is non-zero. So it's only going to be over x's and theta's in those ranges and also, only for those x theta pairs for which the event occurs. So what are these pairs? This event can occur with any choice of theta. So theta is free to vary from 0 up to pi over 2. How about x? For this event to occur, x can be anything that is non-negative as long as it is less than or equal to this number. So the upper limit of this integration is going to be l over 2 times sine theta. And all we need to do now is to evaluate this double integral. Let's start with the inner integral. Because we're just integrating a constant, the inner integral evaluates to the quantity that we're integrating-- the constant that we're integrating-- which is 4 over pi d, times the length of the interval over which we're integrating, which is l over 2 sine theta. And now we need to carry out the outer integral. Let us pull out the constants: this 4 with this 2 gives us a 2. We have 2l over pi d. And then the integral from 0 to pi over 2 of sine theta. Now the integral of sine theta is minus cosine theta. And we need to evaluate this at 0 and pi over 2. This turns out to be equal to 1. So the final result is 2l over pi d. And this is the final answer to the problem that we have been considering. And now, a curious thought. Suppose that you do not know what the number pi is and all you have in your hands is your floor, lines drawn on your floor, and the needle. And you do know the distance between adjacent lines on your floor. And you do know the length of your needle. How can you figure out the number pi? Take your needle, throw it at random a million times, and count the frequency with which the needle ends up crossing the line. If you believe that probabilities can be interpreted as frequencies, the frequency that you observe gives you a good estimate of this probability. So it gives you a good estimate of this particular number. And if you know the length of your needle and the distance between the different lines, you can use the estimate of that number to determine the value of pi. This is a so-called Monte Carlo method, which uses simulation to evaluate experimentally the value, in this case, of the constant pi. Of course, for pi, we have much better ways of calculating it. But there are many applications in engineering and in physics where certain quantities are hard to calculate, but they can be calculated using a trick of this kind by simulation. Here's a typical situation. Consider the unit cube. And for simplicity, I'm only taking a cube in two dimensions. But in general, think of the unit cube in n dimensions, which is an object that has unit volume. Inside that unit cube, there is a complicated subset which is described maybe by some very complicated formulas. And you want to calculate the volume of this complicated subset. The description of the subset is so complicated that using integration, multiple integrals, and calculus is practically impossible. What can you do?
What you can do is to start throwing points at random inside that unit cube. So you throw points. Some fall inside. Some fall outside. You count the frequency with which the points happen to be inside your set. And as long as you're throwing the points uniformly over the cube, then the probability of your complicated set is going to be the volume of that set. You estimate the probability by counting the frequency with which you get points in that set. And so, by using these observed frequencies, you can estimate the volume of a set-- something that might be very difficult to do through other numerical methods. It turns out that these days, physicists and many engineers use methods of this kind quite often and in many important applications.
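Here is a minimal Python sketch of this Monte Carlo idea for the needle problem (the needle length l = 1, the spacing d = 2, the number of throws, and the random seed are all illustrative choices): it samples X and Theta as in the model above, counts crossings, and inverts the formula 2l over pi d to estimate pi.

import random
from math import sin, pi

def estimate_pi_buffon(l=1.0, d=2.0, num_throws=10**6, seed=0):
    # Buffon's needle simulation (requires l <= d): X is uniform on [0, d/2],
    # Theta is uniform on [0, pi/2], and the needle crosses a line exactly
    # when X <= (l/2) * sin(Theta). The crossing probability is 2l / (pi d),
    # so pi is estimated as 2l / (d * observed frequency).
    rng = random.Random(seed)
    crossings = 0
    for _ in range(num_throws):
        x = rng.uniform(0.0, d / 2.0)
        theta = rng.uniform(0.0, pi / 2.0)   # pi is used here only to sample the angle
        if x <= (l / 2.0) * sin(theta):
            crossings += 1
    return 2.0 * l / (d * (crossings / num_throws))

print(estimate_pi_buffon())   # typically prints a value close to 3.14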
MIT_RES6012_Introduction_to_Probability_Spring_2018
S0110_Bonferronis_Inequality.txt
PROFESSOR: In this segment, we will discuss a little bit the union bound and then discuss a counterpart, which is known as the Bonferroni inequality. Let us start with a story. Suppose that we have a number of students in some class. And we have a set of students that are smart, let's call that set a1. So this is the set of smart students. And we have a set of students that are beautiful. And let's call that set a2. So a2 is the set of beautiful students. If I tell you that the set of smart students is small, and the set of beautiful students is small, then you can probably conclude that there are very few students that are either smart or beautiful. What does this have to do with probability? Well, when we say very few are smart, we might mean that if I pick a student at random, there is only a small probability that I pick a smart student, and similarly for beautiful students. Can we make this statement more precise? Indeed we can. We have the union bound that tells us that the probability that I pick a student that is either smart or beautiful is less than or equal to the probability of picking a smart student plus the probability of picking a beautiful student. So, if this probability is small and that probability is also small, then this probability will also be small, which means that there is only a small number of students that are either smart or beautiful. Now let us try to turn the statement around its head. Suppose that most of the students are smart and most of the students are beautiful. So in this case, I'm telling you that these sets a1 and a2 are big. Now, if the set a1 is big, then it means that this set here, the complement of a1, is a small set. And if I tell you that the set a2 is big, then it means that this set here, which is the complement of a2, is also small. So everything outside here is a small set, which means that whatever is left, which is the intersection of a1 and a2, should be a big set. So we should be able to conclude that, in this case, most of the students belong to the intersection. So they're both smart and beautiful. How can we turn this into a mathematical statement? It's the following inequality that we will prove shortly. But what it says is that the probability of the intersection is larger than or equal to something. And if this probability is close to 1, which says that most of the students are smart, and this probability is close to 1, which says that most of the students are beautiful, then this difference here is going to be close to 1 plus 1 minus 1, which is 1. Therefore, the probability of the intersection is going to be larger than or equal to some number that's close to 1. So this one will also be close to 1, which is the conclusion that indeed most students fall in this intersection and they're both smart and beautiful. So what we will do next will be to derive this inequality and actually generalize it. So here is the relation that we wish to establish. We want to show that the probability of a certain event is bigger than something. How do we show that? One way is to show that the probability of the complement of this event, namely this event here, is small. That is, we want to show that this event has small probability. Now what is this event? Here we can use DeMorgan's laws, which tell us that this event is the same as this one. That is, the complement of an intersection is the union of the complements. Since these two sets or events are identical, it means that their probabilities will also be equal.
And next we will use the union bound to write this probability as being less than or equal to the sum of the probabilities of the two events whose union we are taking. Now we're getting close, except that here we have complements all over, whereas up here we do not have any complements. What can we do? Well, the probability of a complement of an event is the same as 1 minus the probability of that event. And we do the same thing for the terms that we have here. This probability here is equal to 1 minus the probability of a1. And this probability here is equal to 1 minus the probability of a2. And now if we take this inequality, cancel this term with that term, and then move terms around, what we have is exactly this relation that we wanted to prove. It turns out that this inequality has a generalization to the case where we take the intersection of n events. And this has, again, the same intuitive content. Suppose that each one of these events a1 up to a(n) is almost certain to occur. That is, it has a probability close to 1. In that case, this term will be close to n. We subtract n minus 1, so this term on the right hand side will be close to 1. Therefore, the probability of the intersection will be larger than or equal to something that's close to 1. So this is big. Essentially what it's saying is that if we have big sets and we take their intersection, then that intersection will also be big in terms of having large probability. How do we prove this relation? Exactly the same way as it was proved for the case of two sets. Namely, instead of looking at this event, we look at the complement of this event. And we use DeMorgan's laws to write this complement as the union of the complements. These two are the same sets or events, so they have the same probability. And then we use the union bound to write this as being less than or equal to the sum of the probabilities of all of those sets. Now this is equal to 1 minus the probability of the intersection. This side here is equal to 1 minus the probability of a1. This is one term. We get n such terms, the last one being 1 minus the probability of a(n). And we still have an inequality going this way. We collect those 1s that we have here. There's n of them, and one here, so we're left with n minus 1 terms that are equal to 1. And this gives rise to this term. We have all the probabilities of the various events that appear with the same sign. This gives rise to this term. And finally, this term here will correspond to that term. Namely, if we start with this inequality and just rearrange a few terms, we obtain this inequality up here. So these Bonferroni inequalities are a nice illustration of how one can combine DeMorgan's laws, set theoretical operations, and the union bound in order to obtain some interesting relations between probabilities.
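As a small numerical sketch of the two-event inequality derived above, the following Python snippet builds a toy sample space with equally likely outcomes (the specific sets and numbers are our own, not from the lecture) and checks that P(A1 and A2) >= P(A1) + P(A2) - 1.

```python
from fractions import Fraction

def prob(event, num_outcomes):
    """Probability of an event (a set of outcomes) when all outcomes are equally likely."""
    return Fraction(len(event), num_outcomes)

# A toy sample space of 100 equally likely students (our own numbers).
students = set(range(100))
smart = set(range(0, 90))          # 90 of the students are "smart"
beautiful = set(range(15, 100))    # 85 of the students are "beautiful"
events = [smart, beautiful]

n = len(events)
intersection = set.intersection(*events)

lhs = prob(intersection, len(students))
rhs = sum(prob(a, len(students)) for a in events) - (n - 1)

print("P(A1 and A2)      =", lhs)    # 75/100, printed as 3/4
print("P(A1) + P(A2) - 1 =", rhs)    # 90/100 + 85/100 - 1, also 3/4
assert lhs >= rhs                     # Bonferroni's inequality holds (with equality here)
```

In this particular example the two sides coincide, because the union of the two events is the whole sample space, so the union bound used in the proof is tight.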
MIT_RES6012_Introduction_to_Probability_Spring_2018
L0510_The_Expected_Value_Rule.txt
In this segment, we discuss the expected value rule for calculating the expected value of a function of a random variable. It corresponds to a nice formula that we will see shortly, but it also involves a much more general idea that we will encounter many times in this course, in somewhat different forms. Here's what it is all about. We start with a certain random variable that has a known PMF. However, we're ultimately interested in another random variable Y, which is defined as a function of the original random variable. We're interested in calculating the expected value of this new random variable, Y. How should we do it? We will illustrate the ideas involved through a simple numerical example. In this example, we have a random variable, X, that takes values 2, 3, 4, or 5, according to some given probabilities. We are also given a function that maps x-values into y-values. And this function, g, then defines a new random variable. So if the outcome of the experiment leads to an X equal to 4, then the random variable, Y, will also take a value equal to 4. How do we calculate the expected value of Y? The only tool that we have available in our hands at this point is the definition of the expected value, which tells us that we should run a summation over the y-axis, consider different values of y one at a time, and for each value of y, multiply that value by its corresponding probability. So in this case, we start with Y equal to 3, which needs to be multiplied by the probability that Y is equal to 3. What is that probability? Well, Y is equal to 3 if and only if X is 2 or 3, which happens with probability 0.1 plus 0.2. Then we continue with the summation by considering the next value of little y. The next possible value is 4. And this gives us a contribution of 4, weighted by the probability of obtaining a 4. The probability that Y is equal to 4 is the probability that X is either equal to 4 or to 5, which happens with probability 0.3 plus 0.4. So this way, we obtain an arithmetic expression which we can evaluate. And it's going to give us the expected value of Y. But here's an alternative way of calculating the expected value. And this corresponds to the following type of thinking. 10% of the time, X is going to be equal to 2. And when that happens, Y takes on a value of 3. So this should give us a contribution to the average value of Y, which is 3 times 0.1. Then, 20% of the time, X is 3 and Y is also 3. So 20% of the time, we also get 3's in Y. Then 30% of the time, X is 4, which results in a Y that's equal to 4. So we obtain a 4 30% of the time. And finally, 40% of the time, X is equal to 5, which results in a Y equal to 4. And we obtain this arithmetic expression. Now you can compare the two arithmetic expressions, the red and the blue one, and you will notice that they're equal, except that the terms are arranged in a slightly different way. Conceptually, however, there's a very big difference. In the first summation, we run over the values of Y one at a time. In the second summation, we run over the different values of X one at a time, and take into account their individual contributions. This second way of calculating the expected value of Y is called the expected value rule. And it corresponds to the following formula. We carry out a summation over the x-axis. For each x-value that we consider, we calculate what is the corresponding y-value, that's g of x, and also weigh this term according to the probability of this particular x.
So for instance, a typical term here would be when x is equal to 2, g of x would be equal to 3. And the corresponding probability, that's the probability of a 2, would be 0.1. The advantage of using the expected value rule instead of the definition of the expectation is that the expected value rule only involves the PMF of the original random variable, so we do not need to do any additional work to find the PMF of the new random variable. Now we argued in favor of the expected value rule by considering this numerical example, and by checking that it gives the right result. But now let us verify. Let us argue more generally that it's going to give us the right answer. So what we're going to do is to take this summation and argue that it's equal to the expected value of Y, which is defined by that summation. So let us start with this. It's a sum over all x's. Let us first fix a particular value of y, and add over all those x's that correspond to that particular y. So we're fixing a particular y. And so we're adding only over those x's that lead to that particular y. And we carry out the summation. So this is the part of this sum associated with one particular choice of y. And it's a sum, really, over this set of x's. But in order to exhaust all x's, we need to consider all possible values of y. And this gives rise to an outer summation over the different y's. So for any fixed y, we add over the associated x's. But we want to consider all the possible y's. Now at this point, we make the following observation. Here, we have a summation over y's. And let's look at the inner summation. The inner summation involves x's, all of which are associated with a specific value of y. Having fixed y, all the terms inside this sum have the property that g of x is equal to y. So g of x is equal to that particular y. And we can make this substitution here. When we look at this summation, we now realize that it's a summation over x's while y is being fixed. And so we can take this term of y and pull it outside the summation. What this leaves us with is a sum over all y's of y, and then a further sum over all x's that lead to that particular y, of the probabilities of those x's. Now what can we say about this inner summation? We have fixed a y. For that particular y, we're adding the probabilities of all the x's that lead to that particular y. Fixing y, consider all the x's that lead to it. This is just the probability of that particular y. But what we have now is just the definition of the expected value of Y. And this concludes the proof that this expression, as given by the expected value rule, gives us the same answer as the original definition of the expected value of Y. Now before closing, a few observations. The expected value rule is really simple to use. For example, if you want to calculate the expected value of the square of a random variable, then you're dealing with a situation where the g function is the square function. And so, the expected value of X-squared will be the sum over x's of x squared weighted according to the probability of a particular x. And finally, one important word of caution: in general, the expected value of the function-- so for example, the expected value of X-squared-- is not going to be the same as taking the expected value of X and squaring it. So this is a word of caution, that in general, you cannot interchange the order with which you apply a function and then calculate the expectation.
There are exceptions, however, in which we happen to have equality here. And this is going to be our next topic.
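Before moving on, here is a short numerical sketch of the expected value rule, using the same PMF and function g as in the example of this segment; it also illustrates the word of caution above, namely that in general the expected value of X squared differs from the square of the expected value of X.

```python
# PMF of X and the function g from the example in this segment.
p_X = {2: 0.1, 3: 0.2, 4: 0.3, 5: 0.4}
g = {2: 3, 3: 3, 4: 4, 5: 4}

# Method 1: first find the PMF of Y = g(X), then apply the definition of E[Y].
p_Y = {}
for x, p in p_X.items():
    p_Y[g[x]] = p_Y.get(g[x], 0.0) + p
expected_Y_definition = sum(y * p for y, p in p_Y.items())

# Method 2: the expected value rule, summing over x directly.
expected_Y_rule = sum(g[x] * p for x, p in p_X.items())

print(expected_Y_definition)   # about 3.7 (3 * 0.3 + 4 * 0.7)
print(expected_Y_rule)         # the same value, about 3.7

# Word of caution: E[X^2] is not the same as (E[X])^2 in general.
E_X = sum(x * p for x, p in p_X.items())             # 4.0
E_X_squared = sum(x**2 * p for x, p in p_X.items())  # 17.0
print(E_X_squared, E_X**2)                           # 17.0 versus 16.0
```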
MIT_RES6012_Introduction_to_Probability_Spring_2018
L094_Memorylessness_of_the_Exponential_PDF.txt
We now revisit the exponential random variable that we introduced earlier and develop some intuition about what it represents. We do this by establishing a memorylessness property, similar to the one that we established earlier in the discrete case for the geometric PMF. Suppose that it is known that light bulbs have a lifetime until they burn out, which is an exponential random variable. You go to a store, and you are given two choices, to buy a new light bulb, or to buy a used light bulb that has been working for some time and has not yet burned out. Which one should you take? We want to approach this question mathematically. So let us denote by capital T the lifetime of the bulb. So time starts at time 0, and then at some random time that we denote by capital T, the light bulb will burn out. And we assume that this random variable is exponential with some given parameter lambda. In one of our earlier calculations, we have shown that the probability that capital T is larger than some value little x falls exponentially with that value x. We are now told that a certain light bulb has already been operating for t time units without failing. So we know that the value of the random variable capital T is larger than little t. We are interested in how much longer the light bulb will be operating, and so we look at capital X, which is the remaining lifetime from the current time until the light bulb burns out. So capital X is this particular random variable here, and it is equal to capital T minus little t. Let us now calculate the probability that the light bulb lasts for another little x time units. That is, that this random variable, capital X, is at least as large as some little x. That is, that the light bulb remains alive until time t plus x. We use the definition of conditional probabilities to write this expression as the probability that capital X is bigger than little x. On the other hand, capital X is T minus t, so we write it this way-- T minus t is bigger than little x, and also that T is bigger than little t, divided by the probability of the conditioning event. Let us just write this event in a cleaner form: capital T being larger than little t plus x and also larger than little t, again divided by the probability of the conditioning event. And now notice that capital T will be greater than little t and also greater than little t plus x, that is, capital T is larger than this number and this number, if and only if it is larger than this second number here. So in other words, the intersection of these two events is just this event here, that capital T is larger than little t plus x. Now, we can use the formula for the probability that capital T is larger than something. We apply this formula, except that instead of little x, we have t plus x. And so here we have e to the minus lambda times (t plus x), divided by the probability that capital T is bigger than t. So we use this formula, but with little t in the place of little x, and we obtain e to the minus lambda t. We have a cancellation, and we're left with e to the minus lambda x, which is the final answer in this calculation. What do we observe here? The probability that the used light bulb will live for another x time units is exactly the same as the corresponding probability that the new light bulb will be alive for another x time units. So new and used light bulbs are described by the same probabilities, and they're probabilistically identical.
Differently said, the used light bulb does not remember, and it is not affected by how long it has been running. And this is the memorylessness property of exponential random variables. Let us now build some additional insights on exponential random variables. We have a formula for the density, the PDF. And from this, we can calculate the probability that T lies in a small interval. For example, for a small delta, this probability here is going to be approximately equal to the density of T evaluated at 0 times delta, which is lambda times e to the 0, which is 1, times delta. What if we are told that the light bulb has been alive for t time units? What is the probability that it burns out during the next delta time units? Since a used but still alive light bulb is probabilistically identical to a new one, this conditional probability is the same as this probability here that a new light bulb burns out in the next delta time units. And so this is also approximately equal to lambda delta. So we see that independently of how long a light bulb has been alive, during the next delta time units it will have a lambda delta probability of failing. One way of thinking about this situation is that the time interval is split into little intervals of length delta. And as long as the light bulb is alive, if it is alive at this point, it will have probability lambda delta of burning out during the next interval of length delta. This is like flipping a coin. Once every delta time steps, there is a probability lambda delta that there is a success in that coin flip, where success corresponds to having the light bulb actually burn out, and the exponential random variable corresponds to the total time elapsed until the first success. In this sense, the exponential random variable is a close analog of the geometric random variable, which was the time until the first success in a discrete time setting. This analogy turns out to be the foundation behind the Poisson process that we will be studying later in this course.
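Here is a small simulation sketch of the memorylessness property (the rate lambda, the times t and x, and the sample size are our own choices): it draws many exponential lifetimes and compares the conditional survival frequency with the unconditional one.

```python
import math
import random

def check_memorylessness(lam=0.5, t=2.0, x=1.0, num_samples=1_000_000):
    """Compare P(T > t + x | T > t) with P(T > x) for T exponential with rate lam."""
    samples = [random.expovariate(lam) for _ in range(num_samples)]

    alive_at_t = [s for s in samples if s > t]                     # bulbs still alive at time t
    cond_freq = sum(s > t + x for s in alive_at_t) / len(alive_at_t)
    uncond_freq = sum(s > x for s in samples) / num_samples

    print("P(T > t + x | T > t) is about", cond_freq)
    print("P(T > x) is about            ", uncond_freq)
    print("exact value e^(-lambda x) =  ", math.exp(-lam * x))

check_memorylessness()   # both frequencies should be close to e^(-0.5), about 0.607
```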
MIT_RES6012_Introduction_to_Probability_Spring_2018
L097_Joint_PDFs.txt
In this segment, we start a discussion of multiple continuous random variables. Here are some objects that we're already familiar with. But exactly as in the discrete case, if we are dealing with two random variables, it is not enough to know their individual PDFs. We also need to model the relation between the two random variables, and this is done through a joint PDF, which is the continuous analog of the joint PMF. We will use this notation to indicate joint PDFs where we use f to indicate that we're dealing with a density. So what remains to be done is to actually define this object and see how we use it. Let us start by recalling that joint PMFs were defined in terms of the probability that the pair of random variables X and Y take certain specific values little x and little y. Regarding joint PDFs, we start by saying that it has to be non-negative. However, a more precise interpretation in terms of probabilities has to wait a little bit. Joint PDFs will be used to calculate probabilities. And this will be done in analogy with the discrete setting. In the discrete setting, the probability that the pair of random variables falls inside a certain set is just the sum of the probabilities of all of the possible pairs inside that particular set. For the continuous case, we introduce an analogous formula. We use the joint density instead of the joint PMF. And instead of having a summation, we now integrate. As in the discrete setting, we have one total unit of probability. The joint PDF tells us how this unit of probability is spread over the entire continuous two-dimensional plane. And we use it, we use the joint PDF, to calculate the probability of a certain set by finding the volume under the joint PDF that lies on top of that set. This is what this integral really represents. We integrate over a particular two-dimensional set, and we take this value that we integrate. And we can think of this as the height of an object that's sitting on top of that set. Now, this relation here, this calculation of probabilities, is not something that we are supposed to prove. This is, rather, the definition of what a joint PDF does. A legitimate joint PDF is any function of two variables, which is non-negative and which integrates to 1. And we will say that two random variables are jointly continuous if there is a legitimate joint PDF that can be used to calculate the associated probabilities through this particular formula. So we have really an indirect definition. Instead of defining the joint PDF as a probability, we actually define it indirectly by saying what it does, how it will be used to calculate probabilities. A picture will be helpful here. Here's a plot of a possible joint PDF. These are the x and y-axes. And the function being plotted is the joint PDF of these two random variables. This joint PDF is higher at some places and lower at others, indicating that certain regions of the x,y plane are more likely than others. The joint PDF determines the probability of a set B by integrating over that set B. Let's say it's this set. Integrating the PDF over that set. Pictorially, what this means is that we look at the volume that sits on top of that set, but below the PDF, below the joint PDF, and so we obtain some three-dimensional object of this kind. And this integral corresponds to actually finding this volume here, the volume that sits on top of the set B but which is below the joint PDF. Let us now develop some additional understanding of joint PDFs. 
As we just discussed, for any given set B, we can integrate the joint PDF over that set. And this will give us the probability of that particular set. Of particular interest is the case where we're dealing with a set which is a rectangle, in which case the situation is a little simpler. So suppose that we have a rectangle where the x-coordinate ranges from a to b and the y-coordinate ranges from some c to some d. Then, the double integral over this particular rectangle can be written in a form where we first integrate with respect to one of the variables, x, as it ranges from a to b. And then, we integrate over all possible values of y as they range from c to d. Of particular interest is the special case where we're dealing with a small rectangle such as this one. A rectangle with sides equal to some delta, where delta is a small number. In that case, the double integral, which is the volume on top of that rectangle, is simpler to evaluate. It is equal to the value of the function that we're integrating at some point in the rectangle-- let's take that corner-- times the area of that little rectangle, which is equal to delta squared. So we have an interpretation of the joint PDF in terms of probabilities of small rectangles. Joint PDFs are not probabilities. But rather, they are probability densities. They tell us the probability per unit area. And one more important comment. For the case of a single continuous random variable, we know that any single point has 0 probability. This is, again, true for the case of two jointly continuous random variables. But more is true. Suppose you take a set B that has 0 area-- for example, a certain curve, and suppose that this curve is the entire set B. Then, the volume under the joint PDF that's sitting on top of that curve is going to be equal to 0. So 0 area sets have 0 probability. And this is one of the characteristic features of jointly continuous random variables. Now, let's think of a particular situation. Suppose that X is a continuous random variable, and let Y be another random variable, which is identically equal to X. Since X is a continuous random variable, Y is also a continuous random variable. However, in this situation, we are certain that the outcome of the experiment is going to fall on the line where x equals y. All the probability lies on top of a line, and a line has 0 area. So we have positive probability on a set of 0 area, which contradicts what we discussed before. Well, this simply means that X and Y are not jointly continuous. Each one of them is continuous, but together they're not jointly continuous. Essentially, joint continuity is something more than requiring each random variable to be continuous by itself. For joint continuity, we want the probability to be really spread over two dimensions. Probability is not allowed to be concentrated on a one-dimensional set. On the other hand, in this example, the probability is concentrated on a one-dimensional set. And we do not have joint continuity.
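To illustrate how a joint PDF assigns probabilities to sets, here is a minimal sketch with a joint PDF of our own choosing (uniform on the unit square) and the event B that X + Y is at most 1. The double integral is approximated by a Riemann sum over small delta-by-delta rectangles, in the spirit of the discussion above.

```python
def joint_pdf(x, y):
    """A simple joint PDF of our own choosing: uniform on the unit square."""
    return 1.0 if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def in_B(x, y):
    """The event B = {(x, y) : x + y <= 1}, whose probability we want."""
    return x + y <= 1.0

# Approximate the double integral of the joint PDF over B by summing
# f(x, y) * delta^2 over a fine grid of small rectangles.
n = 1000
delta = 1.0 / n
prob_B = 0.0
for i in range(n):
    for j in range(n):
        x = (i + 0.5) * delta            # midpoint of a delta-by-delta rectangle
        y = (j + 0.5) * delta
        if in_B(x, y):
            prob_B += joint_pdf(x, y) * delta * delta

print(prob_B)   # close to 0.5, the area of the triangle under the line x + y = 1
```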
MIT_RES6012_Introduction_to_Probability_Spring_2018
L096_Mixed_Random_Variables.txt
We now look at an example similar to the previous one, in which we again have two scenarios, but in which we have both discrete and continuous random variables involved. You have $1 and the opportunity to play in the lottery. With probability 1/2, you do nothing and you're left with the dollar that you started with. With probability 1/2, you decide to play the lottery. And in that case, you get back an amount of money which is random and uniformly distributed between zero and two. Is the random variable, X, discrete? The answer is no, because it takes values on a continuous range. Is the random variable, X, continuous? The answer is no, because the probability that X takes the value of exactly one is equal to 1/2. Even though X takes values in a continuous range, this is not enough to make it a continuous random variable. We defined continuous random variables to be those that can be described by a PDF. And as we have seen, in such a case any individual point should have zero probability. But this is not the case here, and so X is not continuous. We call X a mixed random variable. More generally, we can have a situation where the random variable X with some probability is the same as a particular discrete random variable, and with some other probability it is equal to some other continuous random variable. Such a random variable, X, does not have a PMF because it is not discrete. Also, it does not have a PDF because it is not continuous. How do we describe such a random variable? Well, we can describe it in terms of a cumulative distribution function. CDFs are always well defined for all kinds of random variables. We have two scenarios, and so we can use the Total Probability Theorem and write that the CDF is equal to the probability of the first scenario, which is p, times the probability that the random variable Y is less than or equal to x. This is a conditional model under the first scenario. And with some probability, we have the second scenario. And under that scenario, X will take a value less than or equal to little x if and only if our random variable Z takes a value less than or equal to little x. Or in CDF notation, this is p times the CDF of the random variable Y evaluated at this particular x, plus another weighted term involving the CDF of the random variable Z. We can also define the expected value of X in a way that is consistent with the Total Expectation Theorem, namely define the expected value of X to be the probability of the first scenario, in which case X is discrete, times the expected value of the associated discrete random variable, plus the probability of the second scenario, under which X is continuous, times the expected value of the associated continuous random variable. Going back to our original example, we have two scenarios, the scenarios that we can call A1 and A2. Under the first scenario, we have a uniform PDF, and the corresponding CDF is as follows. It's flat until zero, then it rises linearly. And then it stays flat, and the value here is equal to one. So the slope here is 1/2. So the slope is equal to the corresponding PDF. Under the second scenario, we have a discrete, actually a constant, random variable. And so the CDF is flat at zero until this value, and at that value we have a jump equal to one. We then use the Total Probability Theorem, which tells us that the CDF of the mixed random variable will be 1/2 times the CDF under the first scenario plus 1/2 times the CDF under the second scenario. So we take 1/2 of this plot and 1/2 of that plot and add them up.
What we get is a function that rises now at a slope of 1/4. Then we have a jump, and the size of that jump is going to be equal to 1/2. And then it continues at a slope of 1/4 until it reaches this value. And after that time, it remains flat. So this is a simple illustration that for mixed random variables it is not too hard to obtain the corresponding CDF, even though this random variable does not have a PDF or a PMF of its own.
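Here is a short sketch of the CDF just described, written out in Python; the two scenario CDFs follow the lottery example of this segment, while the sample points at which we evaluate the CDF are our own.

```python
def cdf_constant_one(x):
    """CDF of the 'keep your dollar' scenario: the constant 1 (a jump of size 1 at x = 1)."""
    return 1.0 if x >= 1.0 else 0.0

def cdf_uniform_0_2(x):
    """CDF of the lottery scenario: uniform on [0, 2]."""
    if x < 0.0:
        return 0.0
    if x > 2.0:
        return 1.0
    return x / 2.0

def cdf_mixed(x):
    """CDF of the mixed random variable, via the total probability theorem."""
    return 0.5 * cdf_constant_one(x) + 0.5 * cdf_uniform_0_2(x)

for x in [-0.5, 0.5, 0.999, 1.0, 1.5, 2.5]:
    print(x, cdf_mixed(x))
# Output: 0, 0.125, about 0.25 just before the jump, 0.75 at x = 1 (a jump of 1/2),
# 0.875, and 1 beyond x = 2 -- a slope of 1/4 away from the jump, as described above.

# Expected value, consistent with the total expectation theorem:
print(0.5 * 1.0 + 0.5 * 1.0)   # E[X] = 1/2 * E[constant 1] + 1/2 * E[uniform on [0, 2]] = 1
```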
MIT_RES6012_Introduction_to_Probability_Spring_2018
L032_A_Coin_Tossing_Example.txt
As an introduction to the main topic of this lecture sequence, let us go through a simple example and, on the way, review what we have learned so far. The example that we're going to consider involves three tosses of a biased coin. It's a coin that results in heads with probability p. We're going to make this a little more precise. And the coin is biased in the sense that this number p is not necessarily the same as one half. We represent this particular probabilistic experiment in terms of a tree that shows us the different stages of the experiment. Each particular branch corresponds to a sequence of possible results in the different stages, and the leaves of this tree correspond to the possible outcomes. The branches of this tree are annotated by certain numbers, and these numbers are to be interpreted appropriately as probabilities or conditional probabilities. So for example, this number here is interpreted as the probability of heads in the first toss, an event that we denote as H1. This number here is to be interpreted as a conditional probability. It's the conditional probability of obtaining heads in the second toss given that the first toss resulted in heads. And finally, this number here is to be interpreted as the conditional probability of heads in the third toss, given that the first toss resulted in heads and the second toss also resulted in heads. Let us now continue with some calculations. First, we're going to practice the multiplication rule, which allows us to calculate the probability of a certain outcome. In this case, the outcome of interest is tails followed by heads followed by tails. So we're talking about this particular outcome here. According to the multiplication rule, to find the probability of a particular final outcome, we multiply probabilities and conditional probabilities along the path that leads to this particular outcome. So in this case, it's (1 minus p) times p times (1 minus p). Let us now calculate the probability of a certain event. The event of interest is the event that we obtain exactly one head in the three tosses. This is an event that can happen in multiple ways. Here is one possibility where we have a single head. Here's another possibility. And here's a third one. These are the three possible ways that we can have exactly one head, depending on where exactly that single head appears. Is it in the first toss, in the second, or in the third? To find the total probability of this event, we need to add the probabilities of the different outcomes that correspond to this event. The probability of this outcome is p times (1 minus p) squared, the probability of this outcome is what we calculated. It's, again, p times (1 minus p) squared. And the probability of the third one is also p times (1 minus p) squared. So the answer is 3p times (1 minus p) squared. Notice that each one of the 3 different ways that this event can happen has the same probability. So these 3 outcomes are equally likely. Finally, let us calculate a conditional probability. And this is essentially the Bayes rule. Suppose that we were told that there was exactly one head. So in particular, the blue event has occurred. And we're interested in the probability that the first toss is heads, which corresponds to this event here. These are all the outcomes in which the first toss is equal to heads. So given that the blue event happened, what is the probability that the green event happens? You can guess the answer, that it should be 1/3. Why is that?
Each one of these blue outcomes has the same probability. So when you condition on the blue outcome having happened, the conditional probability of each one of these should be 1/3. So given that the blue outcome happened, there's probability 1/3 that this particular one has happened. And this is the only one that makes the green event happen. But let us see if we can derive this answer in a formal manner. Let's see if we're going to get 1/3. We use the definition of conditional probabilities. The conditional probability is the ratio, first, of the probability that both events happen, divided by the probability of the conditioning event, which is the probability of 1 head. Now, consider the probability of both events happening-- that we have exactly one head and the first toss is heads. This is the intersection of the blue event and the green event, which can happen only in this particular outcome, namely the sequence heads, tails, tails. And it has probability p times (1 minus p) squared. The denominator is something that we have already calculated. It's 3p times (1 minus p) squared. And so the final answer is 1/3, as we had guessed. Let me now make a few comments about this particular example. This particular example is pretty special in the following respect. We have that the probability of H2, heads in the second toss, given that the first one was heads, is equal to p. And the same is true for the conditional probability of heads in the second toss given that the first one was tails. In other words, our beliefs about what may happen in the second toss remain the same. There's a probability, p, of obtaining heads no matter what happened in the first toss. Telling you the result of the first toss doesn't change your beliefs about what may happen, and with what probability, in the second toss. And if you were to calculate the unconditional probability of heads in the second toss, what you would get using the total probability theorem would be the following. It's the probability of heads in the first toss times the probability of heads in the second, given heads in the first, plus the probability of tails in the first toss times the probability of heads in the second toss, given tails in the first. And if you do the algebra, this turns out to be equal to p again. So the unconditional probability of heads in the second toss turns out to be the same as the conditional probabilities. Again, knowing what happened in the first toss doesn't change your beliefs about the second toss, which were associated with this particular probability, p. So what we're going to do next is to generalize this special situation by giving a definition of independence of events, and then discuss various properties and concepts associated with independence.
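The calculations in this example are easy to check by brute force. Here is a small sketch that enumerates all eight leaves of the tree for an arbitrary bias p (the specific value of p below is our own choice) and recovers both the probability of exactly one head and the conditional probability of 1/3.

```python
from itertools import product

def outcome_prob(outcome, p):
    """Probability of a particular sequence of independent tosses ('H' or 'T')."""
    prob = 1.0
    for toss in outcome:
        prob *= p if toss == "H" else (1 - p)
    return prob

p = 0.3                                      # any bias works; this value is our own choice
outcomes = list(product("HT", repeat=3))     # the 8 leaves of the tree

one_head = [o for o in outcomes if o.count("H") == 1]
prob_one_head = sum(outcome_prob(o, p) for o in one_head)

# Intersection of the events: exactly one head AND heads in the first toss.
# Only the sequence heads, tails, tails qualifies.
prob_intersection = outcome_prob(("H", "T", "T"), p)

print(prob_one_head)                      # 3 * p * (1 - p)^2, here 0.441
print(prob_intersection / prob_one_head)  # the conditional probability, 1/3
```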
MIT_RES6012_Introduction_to_Probability_Spring_2018
L045_Binomial_Probabilities.txt
The coefficients n-choose-k that we calculated in the previous segment are known as the binomial coefficients. They are intimately related to certain probabilities associated with coin tossing models, the so-called binomial probabilities. This is going to be our subject. We consider a coin which we toss n times in a row, independently. For each one of the tosses of this coin, we assume that there is a certain probability, p, that the result is heads, which, of course, implies that the probability of obtaining tails in any particular toss is going to be 1 minus p. The question we want to address is the following. We want to calculate the probability that in those n independent coin tosses, we're going to observe exactly k heads. Let us start working our way towards the solution to this problem by looking first at a simple setting and then move on. So let us answer this simple question. What is the probability that we observe this particular sequence? Of course here we take n equal to six, and we wish to calculate this probability. Now, because we have assumed that the coin tosses are independent, we can multiply probabilities. So the probability of this sequence is equal to the probability that the first toss is heads times the probability that the second toss is tails, which is 1 minus p, times the probability that the third toss is tails, which is 1 minus p, times the probability of heads, times the probability of heads, times the probability of heads. And by collecting terms, this is p to the 4th times (1 minus p) to the second power. More generally, if I give you a particular sequence of heads and tails, as in this example, and I ask you, what is the probability that this particular sequence is observed, then by generalizing from this answer or from the derivation of this answer, you see that you're going to get p to the power of the number of heads. And the reason is that each time that there's a head showing up in this sequence, there's a corresponding factor of p in this numerical answer. And then there are factors associated with tails. Each tail contributes a factor of 1 minus p. And so we're going to have here (1 minus p) to a power equal to the number of tails. Now, if I ask you about the probability of a particular sequence and that particular sequence happens to have exactly k heads, what is the probability of that sequence? Well, we already calculated what it is. It is the previous answer, except we use the symbol k instead of just writing out explicitly "number of heads." And the number of tails is the number of tosses minus how many tosses resulted in heads. Now, we're ready to consider the actual problem that we want to solve, which is to calculate the probability of k heads. The event of obtaining k heads can happen in many different ways. Any particular k-head sequence makes that event occur. Any particular k-head sequence has a probability equal to this expression. The overall probability of k heads is going to be the probability of any particular k-head sequence, times the number of k-head sequences that we have. Now, the reason why we can carry out this argument is the fact that any k-head sequence has the same probability. Otherwise, we wouldn't be able to write down an answer which is just the product of two terms. But because every k-head sequence has the same probability, to find the overall probability, we take the probability of each one of them and multiply it with the number of how many of these we have.
So to make further progress, now we need to calculate the number of possible k-head sequences. How many are there? Well, specifying a k-head sequence is the same as the following. You think of having n time slots. These time slots correspond to the different tosses of your coin. And to specify a k-head sequence, you need to tell me which ones of these slots happen to contain a head. You need to tell me k of those slots. So in other words, what you're doing is you're specifying a subset of the set of these n slots, a subset that has k elements. You need to choose k of the slots out of the n and tell me that those k slots have heads. That's the way of specifying a particular k-head sequence. So what's the number of k-head sequences? Well, it's the same as the number of ways that you can choose k slots out of the n slots, which is our binomial coefficient, n-choose-k. Therefore, the answer to our problem is this expression here, times n-choose-k, which is shown up here. At this point, we can pause and consider a simple question to check your understanding of the binomial probabilities.
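Putting the pieces together, the probability of exactly k heads in n independent tosses is n-choose-k times p to the k times (1 minus p) to the n minus k. Here is a minimal sketch of that formula; the particular values of n and p are our own choices.

```python
import math

def binomial_probability(n, k, p):
    """P(exactly k heads in n independent tosses of a coin with P(heads) = p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 6, 0.4
for k in range(n + 1):
    print(k, binomial_probability(n, k, p))

# Sanity check: summing over all possible values of k must give total probability 1.
print(sum(binomial_probability(n, k, p) for k in range(n + 1)))   # 1.0 up to rounding
```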
MIT_RES6012_Introduction_to_Probability_Spring_2018
L087_Cumulative_Distribution_Functions.txt
We have seen that several properties, such as, for example, linearity of expectations, are common for discrete and continuous random variables. For this reason, it would be nice to have a way of talking about the distribution of all kinds of random variables without having to keep making a distinction between the different types-- discrete or continuous. This leads us to describe the distribution of a random variable in a new way, in terms of a so-called cumulative distribution function or CDF for short. A CDF is defined as follows. The CDF is a function of a single argument, which we denote by little x in this case. And it gives us the probability that the random variable takes a value less than or equal to this particular little x. We will always use uppercase Fs to indicate CDFs. And we will always have some subscripts that indicate which random variable we're talking about. The beauty of the CDF is that it just involves a probability-- a concept that is well defined, no matter what kind of random variable we're dealing with. So in particular, if X is a continuous random variable, the probability that X is less than or equal to a certain number-- this is just the integral of the PDF over that range from minus infinity up to that number. As a more concrete example, let us consider a uniform random variable that ranges between a and b, and let us just try to plot the corresponding CDF. The CDF is a function of little x. And the form that it takes depends on what kind of x we're talking about. If little x falls somewhere here to the left of a, and we ask for the probability that our random variable takes values in this interval, then this probability will be 0 because all of the probability of this uniform is between a and b. Therefore, the CDF is going to be 0 for values of x less than or equal to a. How about the case where x lies somewhere between a and b? In that case, the probability that our random variable falls to the left of here-- this is whatever mass there is under the PDF when we consider the integral up to this particular point. So we're looking at the area under the PDF up to this particular point x. This area is of the form the base of the rectangle, which is x minus a, times the height of the rectangle, which is 1 over b minus a. This is a linear function in x that takes the value of 0 when x is equal to a, grows linearly, and when x reaches a value of b, it becomes equal to 1. How about the case where x lies to the right of b? We're talking about the probability that our random variable takes values less than or equal to this particular x. But this includes the entire probability mass of this uniform. We have unit mass on this particular interval, so the probability of falling to the left of here is equal to 1. And this is the shape of the CDF for the case of a uniform random variable. It starts at 0, eventually it rises, and eventually it reaches a value of 1 and stays constant. Coming back to the general case, CDFs are very useful, because once we know the CDF of a random variable, we have enough information to calculate anything we might want to calculate. For example, consider the following calculation. Let us look at the range of numbers from minus infinity to 3 and then up to 4. If we want to calculate the probability that X is less than or equal to 4, we can break it down as the probability that X is less than or equal to 3-- this is one term-- plus the probability that X falls between 3 and 4, which would be this event here. 
So this equality is true because of the additivity property of probabilities. This event is broken down into two possible events. Either X is less than or equal to 3, or X is bigger than 3 but less than or equal to 4. But now we recognize that if we know the CDF of the random variable, then we know this quantity. We also know this quantity, and this allows us to calculate this quantity. So we can calculate the probability of a more general interval. So in general, the CDF contains all available probabilistic information about a random variable. It is just a different way of describing the probability distribution. From the CDF, we can recover any quantity we might wish to know. And for continuous random variables, the CDF actually has enough information for us to be able to recover the PDF. How can we do that? Let's look at this relation here, and let's take derivatives of both sides. On the left, we obtain the derivative of the CDF. And let's evaluate it at a particular point x. What do we get on the right? By basic calculus results, the derivative of an integral, with respect to the upper limit of the integration, is just the integrand itself. So it is the density itself. So this is a very useful formula, which tells us that once we have the CDF, we can calculate the PDF. And conversely, if we have the PDF, we can find the CDF by integrating. Of course, this formula can only be correct at those places where the CDF has a derivative. For example, at this corner here, the derivative of the CDF is not well defined. We would get a different value if we differentiate from the left and a different value if we differentiate from the right, so we cannot apply this formula. But at those places where the CDF is differentiable, we can find the corresponding value of the PDF. For instance, in this diagram, at this point the CDF is differentiable. The derivative is equal to the slope, which is this quantity. And this quantity happens to be exactly the same as the value of the PDF. So indeed, here, we see that the PDF can be found by taking the derivative of the CDF. Now, as we discussed earlier, CDFs are relevant to all types of random variables. So in particular, they are also relevant to discrete random variables. For a discrete random variable, the CDF is, of course, defined the same way, except that we calculate this probability by adding the probabilities of all possible values of the random variable that are less than or equal to the particular little x that we're considering. So we have a summation instead of an integral. Let us look at an example. This is an example of a discrete random variable described by a PMF. And let us try to calculate the corresponding CDF. The probability of falling to the left of this number, for example, is equal to 0. And all the way up to 1, there is 0 probability of getting a value for the random variable less than that. But now, if we let x be equal to 1, then we're talking about the probability that the random variable takes a value less than or equal to 1. And because this includes the value of 1, this probability would be equal to 1/4. This means that once we reach this point, the value of the CDF becomes 1/4. At this point, the CDF makes a jump. At 1, the value of the CDF is equal to 1/4. Just before 1, the value of the CDF was equal to 0. Now what's the probability of falling to the left of, let's say, 2? This probability is again 1/4. There's no change in the probability as we keep moving inside this interval.
So the CDF stays constant, until at some point we reach the value of 3. And at that point, the probability that the random variable takes a value less than or equal to 3 is going to be the probability of a 3 plus the probability of a 1, which becomes 3 over 4. For any other x in this interval, the probability that the random variable takes a value less than or equal to this number will stay at 1/4 plus 1/2, so the CDF stays constant. And at this point, the probability of being less than or equal to 4 becomes 1. And so the CDF jumps once more, to a value of 1. Again, at the places where the CDF makes a jump, which one of the two is the correct value? The correct value is this one. And this is because the CDF is defined by using a less than or equal sign in the probability involved here. So in the case of discrete random variables, the CDF takes the form of a staircase function. It starts at 0. It ends up at 1. It has a jump at those points where the PMF assigns a positive mass. And the size of the jump is exactly equal to the corresponding value of the PMF. Similarly, the value of the PMF here is 1/4, and so the size of the corresponding jump in the CDF will also be equal to 1/4. CDFs have some general properties, and we have seen a hint of those properties in what we have done so far. So the CDF is, by definition, the probability of obtaining a value less than or equal to a certain number little x. It's the probability of this interval. If I were to take a larger interval and go up to some larger number y, this would be the probability of a bigger interval. So that probability would only be bigger. And this translates into the fact that the CDF is a non-decreasing function. If y is larger than or equal to x, as in this picture, then the value of the CDF evaluated at that point y is going to be larger than or equal to the CDF evaluated at that point little x. Another property that the CDF has is that as x goes to infinity, we're talking about the probability essentially of the entire real line. And so the CDF will converge to 1. On the other hand, if x tends to minus infinity, then we're talking about the probability of an interval to the left of a point that's all the way out, further and further out. That probability has to diminish, and eventually converge to 0. So in general, CDFs asymptotically start at 0. They can never go down. They can only go up. And in the limit, as x goes to infinity, the CDF has to approach 1. Actually, in the examples that we saw earlier, it reaches the value of 1 after a certain finite point. But in general, for general random variables, it might only reach the value 1 asymptotically.
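Here is a brief sketch of the two kinds of CDFs discussed in this segment. The staircase uses the discrete example above (jumps of 1/4 at 1, 1/2 at 3, and 1/4 at 4); the uniform endpoints a and b and the differentiation step are our own choices, added to illustrate that the PDF is the derivative of the CDF where that derivative exists.

```python
def cdf_uniform(x, a, b):
    """CDF of a uniform random variable on [a, b]: 0 to the left of a, then linear, then 1."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def cdf_discrete(x, pmf):
    """CDF of a discrete random variable: add the PMF over all values <= x (a staircase)."""
    return sum(p for value, p in pmf.items() if value <= x)

# The discrete example from this segment.
pmf = {1: 0.25, 3: 0.5, 4: 0.25}
for x in [0.5, 1, 2, 3, 3.5, 4, 5]:
    print(x, cdf_discrete(x, pmf))        # 0, 0.25, 0.25, 0.75, 0.75, 1.0, 1.0

# For a continuous random variable, differentiating the CDF recovers the PDF;
# a finite-difference approximation at an interior point gives 1 / (b - a).
a, b, h = 2.0, 6.0, 1e-6
x = 3.0
approx_pdf = (cdf_uniform(x + h, a, b) - cdf_uniform(x - h, a, b)) / (2 * h)
print(approx_pdf)                          # close to 1 / (b - a) = 0.25
```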
MIT_RES6012_Introduction_to_Probability_Spring_2018
L083_Uniform_Piecewise_Constant_PDFs.txt
Let us now give an example of a continuous random variable-- the uniform random variable. It is patterned after the discrete uniform random variable. Similar to the discrete case, we will have a range of possible values. In the discrete case, these values would be the integers between a and b. In the continuous case, any real number between a and b will be possible. In the discrete case, these values were equally likely. In the continuous case, at all points, we have the same height for the probability density function. And as a consequence, if we take two intervals that have the same length, then these two intervals will be assigned the same probability. Intuitively, uniform random variables model the following situation. We know that the numerical value of the random variable will be between a and b. But we know nothing more. We have no reason to believe that certain locations are more likely than others. And in this sense, the uniform random variable models a situation of complete ignorance. By the way, since probabilities must add to 1, the area of this rectangle must be equal to 1. And therefore, the height of this rectangle has to be 1 over b minus a, so that we have a height of 1 over b minus a. We have a length of b minus a. So the product of the two, which is the area, is equal to 1. Finally, here's a more general PDF, which is piecewise constant. One thing to notice is that this, in particular, tells us that PDFs do not have to be continuous functions. They can have discontinuities. Of course, for this to be a legitimate PDF, the total area under the curve, which is the sum of the areas of the rectangles that we have here, must be equal to 1. With a piecewise constant PDF, we can calculate probabilities of events fairly easily. For example, if you wish to find the probability of this particular interval, which is going to be the area under the curve, that area really consists of two pieces. We find the areas of these two rectangles, add them up, and this gives us the total probability of this particular interval. So at this point, our agenda, moving forward, will be twofold. First, we will introduce some interesting continuous random variables. We just started with the presentation of the uniform random variable. And then, we will also go over all of the concepts and results that we have developed for discrete random variables and develop them again for their continuous counterparts.
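Here is a minimal sketch of the interval calculation just described, for a piecewise constant PDF of our own choosing. Each piece is a rectangle given by its left edge, right edge, and height, and the rectangle areas sum to 1.

```python
# Each piece is (left, right, height); the areas are 0.5, 0.3, and 0.2, summing to 1.
pieces = [(0.0, 0.5, 1.0), (0.5, 1.5, 0.3), (1.5, 3.5, 0.1)]

def interval_probability(c, d, pieces):
    """P(c <= X <= d): add up the rectangle areas that fall inside the interval [c, d]."""
    total = 0.0
    for left, right, height in pieces:
        overlap = max(0.0, min(d, right) - max(c, left))
        total += height * overlap
    return total

print(interval_probability(0.25, 1.0, pieces))   # 0.25 * 1.0 + 0.5 * 0.3 = 0.4
print(interval_probability(-1.0, 10.0, pieces))  # the whole range: 1.0
```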
MIT_RES6012_Introduction_to_Probability_Spring_2018
S051_Supplement_Functions.txt
We are defining a random variable as a real valued function on the sample space. So this is a good occasion to make sure that we understand what a function is. To define a function, we start with two sets. One set-- call it A-- is the domain of the function. And we have a second set, which we call B. Then a function is a rule that for any element of A associates an element of B. And we use a notation of this kind to indicate that we are dealing with a function f that maps elements of A into elements of B. Now, two elements of A may be mapped to the same element of B. This is allowed. What is important, however, is that every element of A is mapped to exactly one element of B, not more. But it is also possible that we have some elements of B that do not correspond to any of the elements of A. Now, I said that a function is a rule that assigns points of A to points in B. But what exactly do we mean by a rule? If we want to be more precise, a function would be defined as follows. It would be defined as a set of pairs of values. It would be a set of pairs of the form x, y such that x is always an element of A, y is always an element of B, and also-- most important-- each x in A appears in exactly one pair. So this would be a formal definition of what a function is. It is a collection of ordered pairs of this kind. As a concrete example, let us start with the set consisting of these elements here. And let B be the set of real numbers. And consider the function that corresponds to what we usually call the square. So it's a function that squares its argument. Then this function would be represented by the following collection of pairs. So this is the value of x. And this is the corresponding value of y. Any particular x shows up just once in this collection of pairs. But a certain y-- for example, y equal to 1-- shows up twice, because minus 1 and plus 1 both map to the same element of B. Now, this is a representation in terms of ordered pairs. But we could also think of the function as being described by a table. We could, for instance, put this information here in the form of a table of this kind and say that this table describes the function. For any element x, it tells us what the corresponding element y is. However, when the set A is an infinite set, it is not clear what we might mean by a table-- an infinite table-- whereas this definition in terms of ordered pairs still applies. For example, if you're interested in the function which is, again, the square function from the real numbers to the real numbers, the way you would specify that function abstractly would be as follows. You could write: it's the set of all pairs of this form such that x is a real number. And now such pairs, of course, belong to the two-dimensional plane, because each is a pair of numbers. So this set here can be viewed as a formal definition or a specification of the squaring function. Now, what this set is, is something that we can actually plot. If we go to the two-dimensional plane, the points of this form are exactly the points that belong to the graph of the square function. So this abstract definition, really, all that it says is that a function is the same thing as the plot of that function. But it's important here to make a distinction. The function is the entire plot-- so this set here is the function f-- whereas if I tell you a specific number x, the corresponding value here would be f of x. So here x is a number and f of x is also a number. And those two values, x and f of x, define this particular point on this plot.
But the function itself is the entire plot. Let us also take this occasion to talk a little bit about the notation and the proper way of talking about functions. Here is the most common way that one would describe a function. And this is an appropriate way. We've described the domain. We've described the set on which the function takes values. And I'm telling you for any x in that set what the value of the function is. On the other hand, sometimes people use a more loose language, such as, for example, they would say, the function x squared. What does that mean? Well, what this means is exactly this statement. Now let us consider this function. The function f-- again, from the reals to the reals-- that's defined by f of z equal to z squared. Is this a different function or is it the same function? It's actually the same function, because these two involve the same sets. And they produce their outputs, the values of f, using exactly the same rule. They take an argument and they square that argument. Now, if you were to use informal notation, you would be referring to that second function as the function z squared. And now, if you use informal language, it's less clear that the function x squared and the function z squared are one and the same thing, whereas with this terminology here, it would be pretty clear that we're talking about the same function. Finally, suppose that we have already defined a function. How should we refer to it in general? Should we call it the function f, or should we say the function f of x? Well, when x is a number, f of x is also a number. So f of x is not really a function. The appropriate language is this one. We talk about the function f, although quite often, people will abuse language and they will use this terminology. But it's important to keep in mind what we really mean. The idea is that we need to think of a function as some kind of box, or even a computer program, if you wish, that takes inputs and produces outputs. And there's a distinction between f, which is the box, and the value f of x that the function takes if we feed it with a specific argument.
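Here is a small sketch of this view of a function as a set of ordered pairs, in the spirit of the square-function example above; the particular finite domain is our own choice.

```python
# The square function on a small domain, written literally as a set of pairs (x, f(x)).
square_pairs = {(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)}

# The defining property of a function: each x appears in exactly one pair.
xs = [x for x, _ in square_pairs]
assert len(xs) == len(set(xs)), "not a function: some x appears in more than one pair"

def evaluate(pairs, x):
    """f(x): look up the unique pair whose first component is x."""
    for a, b in pairs:
        if a == x:
            return b
    raise ValueError(f"{x} is not in the domain")

print(evaluate(square_pairs, -2))   # 4; note that -1 and 1 both map to 1, which is allowed
```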
MIT_RES6012_Introduction_to_Probability_Spring_2018
L011_Lecture_Overview.txt
Welcome to the first lecture of this class. You may be used to having a first lecture devoted to general comments and motivating examples. This one will be different. We will dive into the heart of the subject right away. In fact, today we will accomplish a lot. By the end of this lecture, you will know about all of the elements of a probabilistic model. A probabilistic model is a quantitative description of a situation, a phenomenon, or an experiment whose outcome is uncertain. Putting together such a model involves two key steps. First, we need to describe the possible outcomes of the experiment. This is done by specifying a so-called sample space. And then, we specify a probability law, which assigns probabilities to outcomes or to collections of outcomes. The probability law tells us, for example, whether one outcome is much more likely than some other outcome. Probabilities have to satisfy certain basic properties in order to be meaningful. These are the axioms of probability theory. For example probabilities cannot be negative. Interestingly, there will be very few axioms, but they are powerful, and we will see that they have lots of consequences. We will see that they imply many other properties that were not part of the axioms. We will then go through a couple of very simple examples involving models with either discrete or continuous outcomes. As you will be seeing many times in this class, discrete models are conceptually much easier. Continuous models involve some more sophisticated concepts, and we will point out some of the subtle issues that arise. And finally, we will talk a little bit about the big picture, about the role of probability theory, and its relation with the real world.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L022_Conditional_Probabilities.txt
Conditional probabilities are probabilities associated with a revised model that takes into account some additional information about the outcome of a probabilistic experiment. The question is how to carry out this revision of our model. We will give a mathematical definition of conditional probabilities, but first let us motivate this definition by examining a simple concrete example. Consider a probability model with 12 equally likely possible outcomes, and so each one of them has probability equal to 1/12. We will focus on two particular events, event A and B, two subsets of the sample space. Event A has five elements, so its probability is 5/12, and event B has six elements, so it has probability 6/12. Suppose now that someone tells you that event B has occurred, but tells you nothing more about the outcome. How should the model change? First, those outcomes that are outside event B are no longer possible. So we can either eliminate them, as was done in this picture, or we might keep them in the picture but assign them 0 probability, so that they cannot occur. How about the outcomes inside the event B? So we're told that one of these has occurred. Now these 6 outcomes inside the event B were equally likely in the original model, and there is no reason to change their relative probabilities. So they should remain equally likely in revised model as well, so each one of them should have now probability 1/6 since there's 6 of them. And this is our revised model, the conditional probability law. 0 probability to outcomes outside B, and probability 1/6 to each one of the outcomes that is inside the event B. Let us write now this down mathematically. We will use this notation to describe the conditional probability of an event A given that some other event B is known to have occurred. We read this expression as probability of A given B. So what are these conditional probabilities in our example? So in the new model, where these outcomes are equally likely, we know that event A can occur in two different ways. Each one of them has probability 1/6. So the probability of event A is 2/6 which is the same as 1/3. How about event B. Well, B consists of 6 possible outcomes each with probability 1/6. So event B in this revised model should have probability equal to 1. Of course, this is just saying the obvious. Given that we already know that B has occurred, the probability that B occurs in this new model should be equal to 1. How about now, if the sample space does not consist of equally likely outcomes, but instead we're given the probabilities of different pieces of the sample space, as in this example. Notice here that the probabilities are consistent with what was used in the original example. So this part of A that lies outside B has probability 3/12, but in this case I'm not telling you how that probability is made up. I'm not telling you that it consists of 3 equally likely outcomes. So all I'm telling you is that the collective probability in this region is 3/12. The total probability of A is, again, 5/12 as before. The total probability of B is 2 plus 4 equals 6/12, exactly as before. So it's a sort of similar situation as before. How should we revise our probabilities and create-- construct-- conditional probabilities once we are told that event B has occurred? First, this relation should remain true. Once we are told that B has occurred, then B is certain to occur, so it should have conditional probability equal to 1. How about the conditional probability of A given that B has occurred? 
Well, we can reason as follows. In the original model, if we just look inside event B, those outcomes that make event A happen had a collective probability which was 1/3 of the total probability assigned to B. So out of the overall probability assigned to B, 1/3 of that probability corresponds to outcomes in which event A is happening. So therefore, if I tell you that B has occurred, I should assign probability equal to 1/3 that event A is also going to happen. So, given that B happened, the conditional probability of A given B should be equal to 1/3. By now, we should be satisfied that this approach is a reasonable way of constructing conditional probabilities. But now let us translate our reasoning into a formula. So we wish to come up with a formula that gives us the conditional probability of an event given another event. The particular formula that captures our way of thinking, as motivated before, is the following. Out of the total probability assigned to B-- which is this-- we ask the question, which fraction of that probability is assigned to outcomes under which event A also happens? So we are living inside event B, but within that event, we look at those outcomes for which event A also happens. So this is the intersection of A and B. And we ask, out of the total probability of B, what fraction of that probability is allocated to that intersection of A with B? So this formula, this definition, captures our intuition of what we did before to construct conditional probabilities in our particular example. Let us check that the definition indeed does what it's supposed to do. In this example, the probability of the intersection was 2/12 and the total probability of B was 6/12, which gives us 1/3, which is the answer that we had gotten intuitively a little earlier. At this point, let me also make a comment that this definition of conditional probabilities makes sense only if we do not attempt to divide by zero. That is, only if the event B on which we're conditioning has positive probability. If an event B has 0 probability, then conditional probabilities given B will be left undefined. And one final comment. This is a definition. It's not a theorem. What does that mean? It means that there is no question whether this equality is correct or not. It's just a definition. There's no issue of correctness. The earlier argument that we gave was just a motivation of the definition. We tried to figure out what the definition should be if we want to have a certain intuitive and meaningful interpretation of the conditional probabilities. Let us now continue with a simple example.
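As a quick numerical check of the definition, here is a minimal Python sketch (an addition, not part of the lecture) that plugs in the numbers from this example, P(A and B) = 2/12 and P(B) = 6/12.

from fractions import Fraction

def conditional(p_intersection, p_condition):
    # P(A | B) = P(A and B) / P(B); defined only when P(B) > 0.
    assert p_condition > 0
    return p_intersection / p_condition

p_A_and_B = Fraction(2, 12)   # probability of the intersection in the example
p_B = Fraction(6, 12)         # total probability of the conditioning event
print(conditional(p_A_and_B, p_B))   # 1/3, the same answer as before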
MIT_RES6012_Introduction_to_Probability_Spring_2018
L065_Total_Expectation_Theorem.txt
An important reason why conditional probabilities are very useful is that they allow us to divide and conquer. They allow us to split complicated probability models into simpler submodels that we can then analyze one at a time. Let me remind you of the Total Probability Theorem, which has this particular flavor. We divide our sample space into three disjoint events-- A1, A2, and A3. And these events form a partition of the sample space, that is, they exhaust all possibilities. They correspond to three alternative scenarios, one of which is going to occur. And then we may be interested in a certain event B. That event B may occur under either scenario. And the Total Probability Theorem tells us that we can calculate the probability of event B by considering the probability that it occurs under any given scenario and weighing those probabilities according to the probabilities of the different scenarios. Now, let us bring random variables into the picture. Let us fix a particular value-- little x-- and let the event B be the event that the random variable takes on this particular value. Let us now translate the Total Probability Theorem to this situation. First, the picture will look slightly different. Our event B has been replaced by the particular event that we're now considering. Now, what is this probability? The probability that event B occurs, having fixed the particular choice of little x, is the value of the PMF at that particular x. How about this probability here? This is the probability that the random variable, capital X, takes on the value little x-- that's what a PMF is-- but in the conditional universe. So we're dealing with a conditional PMF. And so on with the other terms. So this equation here is just the usual Total Probability Theorem but translated into PMF notation. Now this version of the Total Probability Theorem, of course, is true for all values of little x. This means that we can now multiply both sides of this equation by x and then sum over all possible choices of x. We recognize that here we have the expected value of the random variable X. Now, we do the same thing to the right hand side. We multiply by x. And then we sum over all possible values of x. This is going to be the first term. And then we will have similar terms. Now, what do we have here? This expression is just the conditional expectation of the random variable X under the scenario that event A1 has occurred. So what we have established is this particular formula, which is called the Total Expectation Theorem. It tells us that the expected value of a random variable can be calculated by considering different scenarios: finding the expected value under each of the possible scenarios and weighing them-- weighing the scenarios according to their respective probabilities. The picture is like this. Under each scenario, the random variable X has a certain conditional expectation. We take all these into account. We weigh them according to their corresponding probabilities. And we add them up to find the expected value of X. So we can divide and conquer. We can replace a possibly complicated calculation of an expected value by hopefully simpler calculations under each one of the possible scenarios. Let me illustrate the idea by a simple example. Let us consider this PMF, and let us try to calculate the expected value of the associated random variable. 
One way to divide and conquer is to define an event, A1, which is that our random variable takes values in this set, and another event, A2, which is that the random variable takes values in that set. Let us now apply the Total Expectation Theorem. Let us calculate all the terms that are required. First, we find the probabilities of the different scenarios. The probability of event A1 is 1/9 plus 1/9 plus 1/9, which is 1/3. And the probability of event A2 is 2/9 plus 2/9 plus 2/9, which adds up to 2/3. How about conditional expectations? In a universe where event A1 has occurred, only these three values are possible. They have equal probabilities, so in the conditional model, they will also have equal probabilities. So we will have a uniform distribution over the set {0, 1, 2}. By symmetry, the expected value is going to be in the middle. So this expected value is equal to 1. And by a similar argument, the expected value of X under the second scenario is going to be the midpoint of this range, which is equal to 7. And now we can apply the Total Expectation Theorem and write that the expected value of X is equal to the probability of the first scenario times the expected value under the first scenario plus the probability of the second scenario times the expected value under the second scenario. In this case, by breaking down the problem into these two subcases, the calculations that were required were somewhat simpler than if you were to proceed directly. Of course, this is a rather simple example. But as we go on with this course, we will apply the Total Expectation Theorem in much more interesting and complicated situations.
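Here is a short Python sketch of the divide-and-conquer calculation above (an illustration added to the transcript, not from the lecture). The probabilities 1/9 and 2/9 and the first set {0, 1, 2} are taken from the example; the second set {6, 7, 8} is an assumption chosen so that its midpoint is 7, as stated.

from fractions import Fraction

# PMF consistent with the example's numbers (the support of A2 is assumed).
pmf = {0: Fraction(1, 9), 1: Fraction(1, 9), 2: Fraction(1, 9),
       6: Fraction(2, 9), 7: Fraction(2, 9), 8: Fraction(2, 9)}
A1, A2 = [0, 1, 2], [6, 7, 8]

def scenario(values):
    p = sum(pmf[x] for x in values)                   # P(Ai)
    return p, sum(x * pmf[x] for x in values) / p     # P(Ai), E[X | Ai]

pA1, E1 = scenario(A1)   # 1/3 and 1
pA2, E2 = scenario(A2)   # 2/3 and 7

# Total expectation theorem versus the direct calculation.
print(pA1 * E1 + pA2 * E2)                  # 5
print(sum(x * p for x, p in pmf.items()))   # 5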
MIT_RES6012_Introduction_to_Probability_Spring_2018
L089_Calculation_of_Normal_Probabilities.txt
We have claimed that normal random variables are very important, and therefore we would like to be able to calculate probabilities associated with them. For example, given a normal random variable, what is the probability that it takes a value less than 5? Unfortunately, there are no closed form expressions that can help us with this. In particular, the CDF, the Cumulative Distribution Function of normal random variables, is not given in closed form. But fortunately, we do have tables for the standard normal random variable. These tables, which take the form shown here, give us the following information. If we have a normal random variable, which is a standard normal, they tell us the values of the cumulative distribution function for different values of little y. In terms of a picture, if this is the PDF of a standard normal and I give you a value little y, I'm interested in the corresponding value of the CDF, which is the area under the curve. Well, that value, the area under this curve, is exactly what this table is giving to us. And there's a shorthand notation for referring to the CDF of the standard normal, which is just phi of y. Let us see how we use this table. Suppose we're interested in phi of 0. Which is the probability that our standard normal takes a value less than or equal to 0? Well, by symmetry since the PDF is symmetric around 0, we know that this probability should be 0.5. Let's see what the table tells us. 0 corresponds to this entry, which is indeed 0.5. Let us look up the probability that our standard normal takes a value less than, let's say, 1.16. How do we find this information? 1 is here. And 1.1 is here. 1.1, and then we have a 6 in the next decimal place, which leads us to this entry. And so this value is 0.8770. Similarly, we can calculate the probability that the normal is less than 2.9. How do we look up this information? 2.9 is here. We do not have another decimal digit, so we're looking at this column. And we obtain this value, which is 0.9981. And by looking at this number we can actually tell that a standard normal random variable has extremely low probability of being bigger than 2.9. Now notice that the table specifies phi of y for y being non-negative. What if we wish to calculate the value, for example, of phi of minus 2? In terms of a picture, this is a standard normal. Here is minus 2. And we wish to calculate this probability. There's nothing in the table that gives us this probability directly, but we can argue as follows. The normal PDF is symmetric. So if we look at 2, then this probability here, which is phi of minus 2, is the same as that probability here, of that tail. What is the probability of that tail? It's 1, which is the overall area under the curve, minus the area under the curve when you go up to the value of 2. So this quantity is going to be the same as phi of minus 2. And this one we can now get from the tables. It's 1 minus-- let us see, 2 is here. It's 1 minus 0.9772. The standard normal table gives us probabilities associated with a standard normal random variable. What if we're dealing with a normal random variable that has a mean and a variance that are different from those of the standard normal? What can we do? Well, there's a general trick that you can do to a random variable, which is the following. Let us define a new random variable Y in this fashion. Y measures how far away is X from the mean value. But because we divide by sigma, the standard deviation, it measures this distance in standard deviations. 
So if Y is equal to 3 it means that X is 3 standard deviations away from the mean. In general, Y measures how many deviations away from the mean are you. What properties does this random variable have? The expected value of Y is going to be equal to 0, because we have X and we're subtracting the mean of X. So the expected value of this term is equal to 0. How about the variance of Y? Whenever we multiply a random variable by a constant, the variance gets multiplied by the square of that constant. So we get this expression. But the variance of X is sigma squared. So this is equal to 1. So starting from X, we have obtained a closely related random variable Y that has the property that it has 0 mean and unit variance. If it also happens that X is a normal random variable, then Y is going to be a standard normal random variable. So we have managed to relate X to a standard normal random variable. And perhaps you can rewrite this expression in this form, X equals to mu plus sigma Y where Y is now a standard normal. So, instead of doing calculations having to do with X, we can try to calculate in terms of Y. And for Y we do have available tables. Let us look at an example of how this is done. The way to calculate probabilities associated with general normal random variables is to take the event whose probability we want calculated and express it in terms of standard normal random variables. And then use the standard normal tables. Let us see how this is done in terms of an example. Suppose that X is normal with mean 6 and variance 4, so that the standard deviation sigma is equal to 2. And suppose that we want to calculate the probability that X lies between 2 and 8. Here's how we can proceed. This event is the same as the event that X minus 6 takes a value between 2 minus 6 and 8 minus 6. This event is the same as the original event we were interested in. We can also divide both sides of this inequality by the standard deviation. And the event of interest has now been expressed in this form. But at this point we recognize that this is of the form X minus mu over sigma. So this random variable here is a standard normal random variable. So the probability that X lies between 2 and 8 is the same as the probability that a standard normal random variable, call it Y, falls between these numbers minus 4 divided by 2, that's minus 2. Then Y less than 1. And now we can use the standard normal tables to calculate this probability. We have here 1 and here we have minus 2. And we want to find the probability that our standard normal falls inside this range. This is the probability that it is less than 1. But we need to subtract the probability of that tail so that we're left just with this intermediate area. So this is the probability that Y is less than 1 minus the probability that Y is less than minus 2. And finally, as we discussed earlier, the probability that Y is less than minus 2, this is 1 minus the probability that Y is less than or equal to 2. And now we can go to the normal tables, identify the values that we're interested in, the probability that Y is less than 1, the probability that Y is less than 2, and plug these in. And this gives us the desired probability. Again, the key step is to take the event of interest and by subtracting the mean and dividing by the standard deviation express that same event in an equivalent form, but which now involves a standard normal random variable. And then finally, use the standard normal tables.
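The calculation in this example can be reproduced in a few lines of Python (an added illustration). Instead of a printed table, the sketch below uses the identity Phi(y) = (1 + erf(y / sqrt(2))) / 2 for the standard normal CDF.

from math import erf, sqrt

def Phi(y):
    # CDF of the standard normal random variable.
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

mu, sigma = 6.0, 2.0                 # X is normal with mean 6 and variance 4

# P(2 <= X <= 8) = P(-2 <= Y <= 1) for the standardized Y = (X - mu) / sigma.
lower = (2.0 - mu) / sigma           # -2
upper = (8.0 - mu) / sigma           #  1
print(Phi(upper) - Phi(lower))       # Phi(1) - (1 - Phi(2)), roughly 0.8186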
MIT_RES6012_Introduction_to_Probability_Spring_2018
L061_Lecture_Overview.txt
In the previous lecture we introduced random variables, probability mass functions and expectations. In this lecture we continue with the development of various concepts associated with random variables. There will be three main parts. In the first part we define the variance of a random variable, and calculate it for some of our familiar random variables. Basically, the variance is a quantity that measures the amount of spread, or dispersion, of a probability mass function. In some sense, it quantifies the amount of randomness that is present. Together with the expected value, the variance summarizes crisply some of the qualitative properties of the probability mass function. In the second part we discuss conditioning. Every probabilistic concept or result has a conditional counterpart. And this is true for probability mass functions, expectations and variances. We define these conditional counterparts and then develop the total expectation theorem. This is a powerful tool that extends our familiar total probability theorem and allows us to divide and conquer when we calculate expectations. We then take the opportunity to dive deeper into the properties of geometric random variables, and use a trick based on the total expectation theorem to calculate their mean. In the last part we show how to describe probabilistically the relation between multiple random variables. This is done through a so-called joint probability mass function. We take the occasion to generalize the expected value rule, and establish a further linearity property of expectations. We finally illustrate the power of these tools through the calculation of the expected value of a binomial random variable.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L075_Example.txt
Let us now consider a simple example. Let random variables X and Y be described by a joint PMF which is the one shown in this table. Question-- are X and Y independent? We can try to answer this question by using the definition of independence. But it is actually more instructive to proceed in a somewhat more intuitive way. We look at this table, and we observe that the value of one is possible for X. In particular, the probability that X takes the value of one, this is the marginal probability, this can be found by adding the probabilities of all of the outcomes in this column, which, in this case, is 3 over 20. Suppose now that somebody tells you the value of Y. For example, I tell you that Y happens to be equal to one, in which case you are transported into this universe. In this universe, the conditional probability that X takes a value of one, given that Y takes a value of one, what is it? In this universe, there's zero probability associated with this outcome. So this probability is zero, which is different from 3 over 20. And since these two numbers are different, this means that information from Y changes our beliefs about what's going to happen to X. And so, we do not have independence. So again, intuitively, in the beginning, we thought that X equal to one was possible. But information given by Y, namely that Y is equal to one, tells us that, actually, X equal to one is impossible. Information about Y changed our beliefs about X, so X and Y are dependent. Now, when we first introduced the notion of independence some time ago, we also introduced the notion of conditional independence. And we said that conditional independence is the same as ordinary independence, except that it would be applied to a conditional universe. Something similar can be done for the case of random variables as well. So suppose, for example, that someone tells us that the outcome of the experiment was such that it belongs to this blue set. This is the set where X is less than or equal to 2, and Y is larger than or equal to 3. So we're given this information, and this is now our new conditional model. The question is, within this new conditional model, are random variables X and Y independent? Let's just write down the conditional model, where I'm only showing the four possible outcomes that are allowed in the conditional model. All the others, of course, will have zero probability in the conditional model. So in the conditional model, probabilities will keep the same proportions as in the unconditional model-- and the proportions are 1, 2, 2, 4-- but then they need to be scaled, or normalized, so that they add to 1. And to make them add to 1, we need to divide them all by 9. In this conditional model, this is the joint PMF of the two random variables X and Y. Let us find the marginal PMFs. To find the marginal PMF of X, we add the entries in this column. And we get here 1/3, and here 2/3. And to find the marginal PMF of Y, we add the entries in this row to find 2/3. And we add the entries in that row to find 1/3. So this is the marginal PMF of X. That's the marginal PMF of Y. And now we notice that this entry of the joint PMF is 1/3 times 1/3, the product of the marginals. This entry is the product of 1/3 times 2/3, the product of the marginals, and so on for the remaining entries. So each entry of the joint PMF is equal to the product of the corresponding entries of the marginal PMFs. And this is the definition of independence. So in this conditional blue universe, we do have independence. 
And the way that this was established was to check that the joint PMF factors as a product of marginal PMFs.
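The factoring check in this last step is easy to automate. Below is a small Python sketch (added for illustration); the joint PMF uses the weights 1/9, 2/9, 2/9, 4/9 from the conditional model above, but the value labels attached to X and Y are assumptions, since the full table only appears on the slide.

from fractions import Fraction

# Joint PMF p(x, y); the weights come from the conditional model, the labels are assumed.
joint = {(1, 3): Fraction(1, 9), (1, 4): Fraction(2, 9),
         (2, 3): Fraction(2, 9), (2, 4): Fraction(4, 9)}

xs = {x for x, _ in joint}
ys = {y for _, y in joint}
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in xs}   # marginal of X
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in ys}   # marginal of Y

independent = all(joint.get((x, y), Fraction(0)) == px[x] * py[y]
                  for x in xs for y in ys)
print(independent)   # True: every entry factors as a product of marginals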
MIT_RES6012_Introduction_to_Probability_Spring_2018
S072_The_Variance_of_the_Geometric.txt
In this segment, we will derive the formula for the variance of the geometric PMF. The argument will be very similar to the argument that we used to derive the expected value of the geometric PMF. And it relies on the memorylessness properties of geometric random variables. So let X be a geometric random variable with some parameter p. The way to think about X is like the number of coin flips that it takes until we obtain heads for the first time, where p is the probability of heads at each toss. Recall now the memorylessness property. If I tell you that X is bigger than 1-- which means that the first trial was a failure-- we obtained tails. Given that event, the remaining number of tosses has the same geometric PMF as if we were just starting at this time. So it has the same geometric PMF as the unconditional PMF of X. And this is the property that we exploited in order to find the expected value of X. Now let us take this observation and add one to the random variables involved and turn this statement into the following version. The conditional PMF of X-- which is this random variable plus 1-- is the same as the unconditional PMF of this random variable plus 1. So it's the same statement as before except that we added 1. One consequence of the memorylessness that we have already seen and exploited is that the expected value of X in the conditional universe where the first coin flip was wasted is equal to 1-- that's the wasted coin flip-- plus how long you expect to have to flip the coin until you obtain heads for the first time, starting from the second flip. Since the conditional distribution of X in this universe is the same as the unconditional distribution of this random variable, it means that the corresponding expected value in this universe is going to be equal to the expected value of this random variable, which is 1 plus the expected value of X. And by exactly the same argument, the random variable X squared has the same distribution in the conditional universe as the random variable X plus 1 squared in the unconditional universe. So since X in the conditional universe has the same distribution as X plus 1, it means that X squared in the conditional universe has the same distribution as X plus 1 squared in the unconditional universe. So now let us take those facts and use a divide and conquer method to calculate the expected value of X squared. We will use exactly the same method that we used in order to calculate the expected value. We separate into two scenarios. In one scenario, X is equal to 1. And then we have the expected value of X squared given that X is equal to 1. And then we have another scenario-- the scenario that X is bigger than 1. And then we have the expected value of X squared given that X is bigger than 1. So this is just the total expectation theorem. Now let us calculate terms. The probability that the first toss results in success, that X is equal to 1-- this is p. And if X is equal to 1, then the value of X squared is also equal to 1. And then there is probability 1 minus p that the first trial was not a success. So we get to continue. We have this conditional expectation here. But it is equal to this unconditional expectation up there. And now let us expand the terms in this quadratic and write this as expected value of X squared plus twice the expected value of X plus 1. Now we know what this expected value here is. The expected value of a geometric is just 1/p. And what we're left with is an equation that involves a single unknown. 
Namely, this quantity is the unknown. And we can solve this linear equation for this unknown. We carry out some algebra, which is not so interesting by itself. And after we carry out the algebra, what we obtain is that the expected value of X squared is equal to 2 over p squared minus 1 over p. And then we use the formula that the variance of a random variable is equal to the expected value of the square of that random variable minus the square of the expected value. We already know what that expected value is. We found the expected value of the square. And putting all that together, we obtain a final answer. And this is the expression for the variance of a geometric random variable. It goes without saying that for this calculation to make sense, we need to assume that the parameter that we're dealing with is positive.
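As a sanity check on the final formula, here is a short Python sketch (added for illustration) that truncates the series for E[X] and E[X squared] at an arbitrary large point and compares them with 1/p, 2/p^2 - 1/p, and the variance (1 - p)/p^2.

p = 0.3        # an arbitrary parameter with 0 < p <= 1
N = 2000       # truncation point; the tail of the geometric PMF beyond N is negligible

pmf = [(1 - p) ** (k - 1) * p for k in range(1, N + 1)]   # P(X = k) for k = 1, ..., N
EX  = sum(k * pk for k, pk in enumerate(pmf, start=1))
EX2 = sum(k * k * pk for k, pk in enumerate(pmf, start=1))

print(EX, 1 / p)                        # both close to 3.333...
print(EX2, 2 / p ** 2 - 1 / p)          # both close to 18.888...
print(EX2 - EX ** 2, (1 - p) / p ** 2)  # the variance, both close to 7.777...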
MIT_RES6012_Introduction_to_Probability_Spring_2018
L091_Lecture_Overview.txt
In this lecture, we continue our discussion of continuous random variables. We will start by bringing conditioning into the picture and discussing how the PDF of a continuous random variable changes when we are told that a certain event has occurred. We will take the occasion to develop counterparts of some of the tools that we developed in the discrete case such as the total probability and total expectation theorems. In fact, we will push the analogy even further. In the discrete case, we looked at the geometric PMF in some detail and recognized an important memorylessness property that it possesses. In the continuous case, there is an entirely analogous story that we will follow, this time involving the exponential distribution which has a similar memorylessness property. We will then move to a second theme which is how to describe the joint distribution of multiple random variables. We did this in the discrete case by introducing joint PMFs. In the continuous case, we can do the same using appropriately defined joint PDFs and by replacing sums by integrals. As usual, we will illustrate the various concepts through some simple examples and also take the opportunity to introduce some additional concepts such as mixed random variables and the joint cumulative distribution function.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L0511_Linearity_of_Expectations.txt
We end this lecture sequence with the most important property of expectations, namely linearity. The idea is pretty simple. Suppose that our random variable, X, is the salary of a random person out of some population. So we can think of the expected value of X as the average salary within that population. And now suppose that everyone gets a raise, and Y is the new salary. And generously, the new salary is twice the old salary plus a bonus of $100. What happens to the expected value of the salary, or the average salary? Well, the new average salary, which is the expected value of 2X plus 100, is twice the old average plus 100. So doubling everyone's salary and giving everyone an additional $100, what it does to the average is that it doubles the average and adds 100 to it. This is the linearity property of expectation in one particular example. It's a most intuitive property, but it's worth also deriving it in a formal way. And the derivation proceeds through the expected value rule. We're dealing here with a particular function, g, which is a linear function. So we're dealing with a linear function, ax plus b. And we're dealing with a random variable, Y, which is g applied to an original random variable, X. So the expected value of Y can be calculated according to the expected value rule. It's the sum over all x's of g of x times the probability of that particular x. And we plug in the specific form of the function, g, which is ax plus b. And then we separate the sum into two sums. The first sum, after pulling out a constant of a, takes this form. And the second sum, after pulling out the constant, b, takes this form. Now, the first sum is a times the expected value of X. This is just the definition of the expected value. As for the second sum, we realize that this quantity is equal to 1 because it is the sum of the probabilities of all the different values of X. And this concludes the proof of the linearity of expected values. Notice that for expected values, what we have is that the expected value of Y, which is the expected value of g of X, is the same as g of the expected value of X. The expected value of a linear function is the same linear function applied to the expected value. But this is an exceptional case. This does not happen in general. It's an exceptional function g that makes this happen. This property is true for linear functions. But for non-linear functions, it is generally false.
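As a quick numerical illustration of E[aX + b] = aE[X] + b (an addition to the transcript), the sketch below uses a made-up salary PMF; the specific values and probabilities are assumptions, and only the identity itself matters.

# Hypothetical salary PMF: value -> probability.
pmf = {30000: 0.5, 50000: 0.25, 80000: 0.25}
a, b = 2, 100                              # new salary Y = 2X + 100

EX = sum(x * p for x, p in pmf.items())
EY = sum((a * x + b) * p for x, p in pmf.items())   # expected value rule applied to Y
print(EX)                                  # 47500.0
print(EY, a * EX + b)                      # both 95100.0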
MIT_RES6012_Introduction_to_Probability_Spring_2018
L024_Conditional_Probabilities_Obey_the_Same_Axioms.txt
I now want to emphasize an important point. Conditional probabilities are just the same as ordinary probabilities applied to a different situation. They do not taste or smell or behave any differently than ordinary probabilities. What do I mean by that? I mean that they satisfy the usual probability axioms. For example, ordinary probabilities must also be non-negative. Is this true for conditional probabilities? Of course it is true, because conditional probabilities are defined as a ratio of two probabilities. Probabilities are non-negative. So the ratio will also be non-negative, of course as long as it is well-defined. And here we need to remember that we only talk about conditional probabilities when we condition on an event that itself has positive probability. How about another axiom? What is the probability of the entire sample space, given the event B? Let's check it out. By definition, the conditional probability is the probability of the intersection of the two events involved divided by the probability of the conditioning event. Now, what is the intersection of omega with B? B is a subset of omega. So when we intersect the two sets, we're left just with B itself. So the numerator becomes the probability of B. We're dividing by the probability of B, and so the answer is equal to 1. So indeed, the sample space has unit probability, even under the conditional model. Now, remember that when we condition on an event B, we could still work with the original sample space. However, possible outcomes that do not belong to B are considered impossible, so we might as well think of B itself as being our sample space. If we proceed like that and think now of B as being our new sample space, what is the probability of this new sample space in the conditional model? Let's apply the definition once more. It's the probability of the intersection of the two events involved, B intersection B, divided by the probability of the conditioning event. What is the numerator? The intersection of B with itself is just B, so the numerator is the probability of B. We're dividing by the probability of B. So the answer is, again, 1. Finally, we need to check the additivity axiom. Recall what the additivity axiom says. If we have two events, two subsets of the sample space that are disjoint, then the probability of their union is equal to the sum of their individual probabilities. Is this going to be the case if we now condition on a certain event? What we want to prove is the following statement. If we take two events that are disjoint, they have empty intersection, then the probability of the union is the sum of their individual probabilities, but where now the probabilities that we're employing are the conditional probabilities, given the event B. So let us verify whether this relation, this fact is correct or not. Let us take this quantity and use the definition to write it out. By definition, this conditional probability is the probability of the intersection of the first event of interest, the one that appears on this side of the conditioning, intersection with the event on which we are conditioning. And then we divide by the probability of the conditioning event, B. Now, let's look at this quantity, what is it? We're taking the union of A with C, and then intersect it with B. This union consists of these two pieces. When we intersect with B, what is left is these two pieces here. So A union C intersected with B is the union of two pieces. One piece is A intersection B, this piece here. 
And another piece, which is C intersection B, this is the second piece here. So here we basically used a set theoretic identity. And now we divide by the same [denominator] as before. And now let us continue. Here's an interesting observation. The events A and C are disjoint. The piece of A that also belongs in B, therefore, is disjoint from the piece of C that also belongs to B. Therefore, this set here and that set here are disjoint. Since they are disjoint, the probability of their union has to be equal to the sum of their individual probabilities. So here we're using the additivity axiom on the original probabilities to break this probability up into two pieces. And now we observe that here we have the ratio of an intersection by the probability of B. This is just the conditional probability of A given B using the definition of conditional probabilities. And the second part is the conditional probability of C given B, where, again, we're using the definition of conditional probabilities. So we have indeed checked that this additivity property is true for the case of conditional probabilities when we consider two disjoint events. Now, we could repeat the same derivation and verify that it is also true for the case of a disjoint union, of finitely many events, or even for countably many disjoint events. So we do have finite and countable additivity. We're not proving it, but the argument is exactly the same as for the case of two events. So conditional probabilities do satisfy all of the standard axioms of probability theory. So conditional probabilities are just like ordinary probabilities. This actually has a very important implication. Since conditional probabilities satisfy all of the probability axioms, any formula or theorem that we ever derive for ordinary probabilities will remain true for conditional probabilities as well.
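For readers who like to see this numerically, here is a small Python sketch (added, not from the lecture) that builds an arbitrary finite model with equally likely outcomes and checks the additivity property for one choice of disjoint events A and C; the specific sets are made up.

from fractions import Fraction

outcomes = set(range(12))                 # 12 equally likely outcomes (arbitrary)

def P(event):
    return Fraction(len(event & outcomes), len(outcomes))

def cond(event, given):
    # P(event | given), defined only when P(given) > 0.
    return P(event & given) / P(given)

A, C, B = {0, 1, 2}, {5, 6}, {1, 2, 5, 9, 10}   # A and C are disjoint
assert A & C == set()
print(cond(A | C, B), cond(A, B) + cond(C, B))  # both equal 3/5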
MIT_RES6012_Introduction_to_Probability_Spring_2018
L038_Independence_Versus_Pairwise_Independence.txt
We will now consider an example that illustrates the difference between the notion of independence of a collection of events and the notion of pairwise independence within that collection. The model is simple. We have a fair coin which we flip twice. So at each flip, there is probability 1/2 of obtaining heads. Furthermore, we assume that the two flips are independent of each other. Let H1 be the event that the first coin toss resulted in heads, which corresponds to this event in this diagram. Let H2 be the event that the second toss resulted in heads, which is this event in the diagram-- the two ways that we can have the second toss being heads. Now, we're assuming that the tosses are independent. So the event heads-heads has a probability which is equal to the probability that the first toss resulted in heads-- that's 1/2-- times the probability that the second toss resulted in heads, which is 1/2. So the product is 1/4. We have probability 1/4 for this outcome. Now, the total probability of event H1 is 1/2, which means that the probability of what remains should be 1/4, so that the sum of these two numbers is 1/2. By the same argument, the probability of this outcome, tails-heads , should be 1/4. We have a total of 3/4. So what's left is 1/4. And that's going to be the probability of the outcome tails-tails . Let us now introduce a new event, namely the event that the two tosses had the same result. So this is the event that we obtain either heads heads or tails-tails. Schematically, event C corresponds to this blue region in the diagram. Is this event C independent from the events H1 and H2? Let us first look for pairwise independence. Let's look at the probability that H1 occurs and C occurs as well. So the first toss resulted in heads. And the two tosses had the same result. So this is the same as the probability of obtaining heads followed by heads. And this corresponds to just this outcome that has probability 1/4. How about the product of the probabilities of H1 and of C? Is it the same? Well, the probability of H1 is 1/2. And the probability of C-- what is it? Event C consists of two outcomes. Each one of these outcomes has probability 1/4. So the total is, again, 1/2. And therefore, the product of these probabilities is 1/4. So we notice that the probability of the two events happening is the same as the product of their individual probabilities, and therefore, H1 and C are independent events. By the same argument, H2 and C are going to be independent. It's a symmetrical situation. H1 and H2 are also independent from each other. So we have all of the conditions for pairwise independence. Let us now check whether we have independence. To check for independence, we need to also look into the probability of all three events happening and see whether it is equal to the product of the individual probabilities. So the probability of all three events happening-- this is the probability that H1 occurs and H2 occurs and C occurs. What is this event? Heads in the first toss, heads in the second toss, and the two tosses are the same-- this happens if and only if the outcome is heads followed by heads. And this has probability 1/4. On the other hand, if we calculate the probability of H1 times the probability of H2 times the probability of C, we get 1/2 times 1/2 times 1/2, which is 1/8. These two numbers are different. And therefore, one of the conditions that we had for independence is violated. 
So in this example, H1, H2, and C are pairwise independent, but they're not independent in the sense of an independent collection of events. How are we to understand this intuitively? If I tell you that event H1 occurred and I ask you for the conditional probability of C given that H1 occurred, what is this? Given that the first result is heads, the only way that the two tosses can have the same result is for the second toss to also result in heads. And since H2 and H1 are independent, this is just the probability that we have heads in the second toss. And this number is 1/2. And 1/2 is also the same as the probability of C. That's another way of understanding the independence of H1 and C. Given that the first toss resulted in heads, this does not help you in any way in guessing whether the two tosses will have the same result or not. The first one was heads, but the second one could be either heads or tails with equal probability. So event H1 does not carry any useful information about the occurrence or non-occurrence of event C. On the other hand, if I were to tell you that both events, H1 and H2, happened, what would the conditional probability of C be? If both H1 and H2 occurred, then the results of the two coin tosses were identical, so you know that C also occurred. So this probability is equal to 1. And this number, 1, is different from the unconditional probability of C, which is 1/2. So we have here a situation where knowledge of H1 having occurred does not help you in making a better guess on whether C is going to occur. H1 by itself does not carry any useful information. But the two events together, H1 and H2, do carry useful information about C. Once you know that H1 and H2 occurred, then C is certain to occur. So your original probability for C, which was 1/2, now gets revised to a value of 1. So H1 and H2 carry information relevant to C. And therefore, C is not independent from these two events collectively. And we say that events H1, H2, and C are not independent.
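This example is small enough to check exhaustively in code. The Python sketch below (an added illustration) enumerates the four equally likely outcomes and verifies the three pairwise conditions and the failure of the three-way condition.

from fractions import Fraction
from itertools import product

outcomes = list(product('HT', repeat=2))     # HH, HT, TH, TT, each with probability 1/4

def P(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

H1 = lambda w: w[0] == 'H'      # first toss is heads
H2 = lambda w: w[1] == 'H'      # second toss is heads
C  = lambda w: w[0] == w[1]     # the two tosses give the same result

pairs = [(H1, H2), (H1, C), (H2, C)]
print(all(P(lambda w: A(w) and B(w)) == P(A) * P(B) for A, B in pairs))   # True
print(P(lambda w: H1(w) and H2(w) and C(w)) == P(H1) * P(H2) * P(C))      # False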
MIT_RES6012_Introduction_to_Probability_Spring_2018
L068_Linearity_of_Expectations_The_Mean_of_the_Binomial.txt
Let us now revisit the subject of expectations and develop an important linearity property for the case where we're dealing with multiple random variables. We already have a linearity property. If we have a linear function of a single random variable, then expectations behave in a linear fashion. But now, if we have multiple random variables, we have this additional property. The expected value of the sum of two random variables is equal to the sum of their expectations. Let us go through the derivation of this very important fact because it is a nice exercise in applying the expected value rule and also manipulating PMFs and joint PMFs. We're dealing with the expected value of a function of two random variables. Which function? If we write it this way, we are dealing with the function g, which is just the sum of its two entries. So now we can continue with the application of the expected value rule. And we obtain the sum over all possible x, y pairs. Here, we need to write g of x, y. But in our case, the function we're dealing with is just x plus y. And then we weigh according to the entries of the joint PMF. So this is just an application of the expected value rule to this particular function. Now let us take this sum and break it into two pieces-- one involving only the x-term, and another piece involving only the y-term. Now, if we look at this double summation, look at the inner sum. It's a sum over y's. While we're adding over y's, the value of x remains fixed. So x is a constant, as far as the sum is concerned. So x can be pulled outside this summation. Let us just continue with this term, the first one, and see that a simplification happens. This quantity here is the sum of the probabilities of the different y's that can go together with a particular x. So it is just equal to the probability of that particular x. It's the marginal PMF. If we carry out a similar step for the second term, we will obtain the sum over y's. It's just a symmetrical argument. And at this point we recognize that what we have in front of us is just the expected value of X, this is the first term, plus the expected value of Y. So this completes the derivation of this important linearity property. Of course, we proved the linearity property for the case of the sum of two random variables. But you can proceed in a similar way, or maybe use induction, and easily establish, by following the same kind of argument, that we have a linearity property when we add any finite number of random variables. The expected value of a sum is the sum of the expected values. Just for a little bit of practice, if, for example, we're dealing with this expression, the expected value of that expression would be the expected value of 2X plus the expected value of 3Y minus the expected value of Z. And then, using the linearity property for linear functions of a single random variable, we can pull the constants out of the expectations. And this would be twice the expected value of X plus 3 times the expected value of Y minus the expected value of Z. What we will do next is to use the linearity property of expectations to solve a problem that would otherwise be quite difficult. We will use the linearity property to find the mean of a binomial random variable. Let X be a binomial random variable with parameters n and p. And we can interpret X as the number of successes in n independent trials where each one of the trials has a probability p of resulting in a success. Well, we know the PMF of a binomial. 
And we can use the definition of expectation to obtain this expression. This is just the PMF of the binomial. And therefore, what we have here is the usual definition of the expected value. Now, if you look at this sum, it appears quite formidable. And it would be quite hard to evaluate it. Instead, we're going to use a very useful trick. We will employ what we have called indicator variables. So let's define a random variable Xi, which is one if the ith trial is a success, and zero otherwise. Now if we want to count successes, what we want to count is how many of the Xi's are equal to 1. So if we add the Xi's, this sum will have a contribution of 1 from each one of the successes. So when you add them up, you obtain the total number of successes. So we have expressed a random variable as a sum of much simpler random variables. So at this point, we can now use linearity of expectations to write that the expected value of X will be the expected value of X1 plus all the way to the expected value of Xn. Now what is the expected value of X1? It is a Bernoulli random variable that takes the value 1 with probability p and takes the value of 0 with probability 1 minus p. The expected value of this random variable is p. And similarly, for each one of these terms in the summation. And so the final result is equal to n times p. This answer, of course, also makes intuitive sense. If we have p equal to 1/2, and we toss a coin 100 times, the expected number, or the average number, of heads we expect to see will be 1/2 times 100, which is 50. The higher p is, the more successes we expect to see. And of course, if we double n, we expect to see twice as many successes. So this is an illustration of the power of breaking up problems into simpler pieces that are easier to analyze. And the linearity of expectations is one more tool that we have in our hands for breaking up perhaps complicated random variables into simpler ones and then analyzing them separately.
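The result E[X] = np can also be checked directly against the defining sum over the binomial PMF. Here is a short Python sketch (an added illustration); the parameters n and p are arbitrary.

from math import comb

n, p = 100, 0.3    # arbitrary parameters

# The seemingly formidable sum: the definition of the expectation with the binomial PMF.
EX = sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1))

# It agrees with the answer from the indicator-variable argument.
print(EX, n * p)   # both 30, up to floating-point rounding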
MIT_RES6012_Introduction_to_Probability_Spring_2018
S016_The_Geometric_Series.txt
One particular series that shows up in many applications, examples, or problems is the geometric series. In the geometric series, we are given a certain number, alpha, and we want to sum all the powers of alpha, starting from the 0th power, which is equal to 1, the first power, and so on, and this gives us an infinite series. It's the sum of alpha to the i where i ranges from 0 to infinity. Now, for this series to converge, we need subsequent terms, the different terms in the series, to become smaller and smaller. And for this reason, we're going to make the assumption that the number alpha is less than 1 in magnitude, which implies that consecutive terms go to zero. Let us introduce some notation. Let us denote the infinite sum by s, and we're going to use that notation shortly. One way of evaluating this series is to start from an algebraic identity, namely the following. Let us take 1 minus alpha and multiply it by the terms in the series, but going only up to the term alpha to the n. So it's a finite series. We do this multiplication, we get a bunch of terms, we do the cancellations, and what is left at the end is 1 minus alpha to the power n plus 1. What we do next is we take the limit as n goes to infinity. On the left hand side, we have the term 1 minus alpha, and then the limit of this finite series is by definition the infinite series, which we're denoting by s. On the right hand side, we have the term 1. How about this term? Since alpha is less than 1 in magnitude, this converges to 0 as n goes to infinity, so that term disappears. We can now solve this relation, and we obtain that s is equal to 1 over 1 minus alpha, and this is the formula for the infinite geometric series. There's another way of deriving the same result, which is interesting, so let us go through it as well. The infinite geometric series has one first term and then the remaining terms, which is a sum for i going from 1 to infinity of alpha to the i. Now, we can take a factor of alpha out of this infinite sum and write it as 1 plus alpha times the sum of alpha to the i, but because we took out one factor of alpha, here we're going to have smaller powers. So now the sum starts from 0 and goes up to infinity. Now, this is just 1 plus alpha times s because here, we have the infinite geometric series. Therefore, if we subtract alpha s from both sides of this equality, we get s times 1 minus alpha equal to 1. And now by moving 1 minus alpha to the denominator, we get again the same expression. So this is an alternative way of deriving the same result. However, there's one word of caution. In this step, we subtracted alpha s from both sides of the equation. And doing that is only possible if we take for granted that s is a finite number. So this is taken for granted in order to carry out this derivation. This is to be contrasted with the first derivation, in which we didn't have to make any such assumption. So strictly speaking, for this derivation here to be correct, we need to have some independent way of verifying that s is less than infinity. But other than that, it's an interesting algebraic trick.
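A quick numerical illustration (added to the transcript): the partial sums of the geometric series approach 1 over 1 minus alpha when the magnitude of alpha is less than 1; the value alpha = 0.9 below is arbitrary.

alpha = 0.9                     # any number with |alpha| < 1

for n in (10, 100, 1000):
    partial = sum(alpha ** i for i in range(n + 1))   # sum of alpha^i for i = 0, ..., n
    print(n, partial)

print(1 / (1 - alpha))          # the limit of the partial sums: 10.0 here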
MIT_RES6012_Introduction_to_Probability_Spring_2018
L058_Expectation.txt
Our discussion of random variables so far has involved nothing but standard probability calculations. Other than using the PMF notation, we have done nothing new. It is now time to introduce a truly new concept that plays a central role in probability theory. This is the concept of the expected value or expectation or mean of a random variable. It is a single number that provides some kind of summary of a random variable by telling us what it is on the average. Let us motivate with an example. You play a game of chance over and over, let us say 1,000 times. Each time that you play, you win an amount of money, which is a random variable, and that random variable takes the value 1 with probability 2/10, the value 2 with probability 5/10, and the value 4 with probability 3/10. You can plot the PMF of this random variable. It takes values 1, 2, and 4. And the associated probabilities are 2/10, 5/10, and 3/10. How much do you expect to have at the end of the day? Well, if you interpret probabilities as frequencies, in a thousand plays, you expect this outcome to occur about 200 times, this outcome about 500 times, and this outcome about 300 times. So your average gain is expected to be your total gain, which is 1 times 200, plus 2 times 500, plus 4 times 300. This is your total gain. And to get to the average gain, you divide by 1,000. And the expression that you get can also be written in a simpler form as 1 times 2/10 plus 2 times 5/10 plus 4 times 3/10. So this is what you expect to get, on the average, if you keep playing that game. What have we done? We have calculated a certain quantity which is a sort of average of the random variable of interest. And what we did in this summation here, we took each one of the possible values of the random variable. Each possible value corresponds to one term in the summation. And what we're adding is the numerical value of the random variable times the probability that this particular value is obtained. So when x is equal to 1, we get 1 here and then the probability of 1. When we add the term corresponding to x equals 2, we get little x equal to 2 and next to it the probability that x is equal to 2, and so on. So this is what we call the expected value of the random variable X. This is the formula that defines it, but it's also important to always keep in mind the interpretation of that formula. The expected value of a random variable is to be interpreted as the average that you expect to see in a large number of independent repetitions of the experiment. One small technical caveat: if we're dealing with a random variable that takes values in a discrete but infinite set, this sum here is going to be an infinite sum or an infinite series. And there's always a question whether an infinite series has a well-defined limit or not. In order for it to have a well-defined limit, we will be making the assumption that this infinite series is, as it's called, absolutely convergent, namely that if we replace the x's by their absolute values-- so we're adding positive numbers, or nonnegative numbers-- the sum of those numbers is going to be finite. So this is a technical condition that we need in order to make sure that this expected value is a well-defined and finite quantity. Let us now calculate the expected value of a very simple random variable, the Bernoulli random variable that takes the value 1 with probability p and the value 0 with probability 1 minus p. The expected value consists of two terms. X can take the value 1. 
This happens with probability p. Or it can take the value of zero. This happens with probability 1 minus p. And therefore, the expected value is just equal to p. As a special case, we may consider the situation where X is the indicator random variable of a certain event, A, so that X is equal to 1 if and only if event A occurs. In this case, the probability that X is equal to 1, which is our parameter p, is the same as the probability that event A occurs. And we have this relation. And so with this correspondence, we readily conclude that the expected value of an indicator random variable is equal to the probability of that event. Let us move now to the calculation of the expected value of a uniform random variable. Let us consider, to keep things simple, a random variable which is uniform on the set from 0 to n. It's uniform, so the probabilities of the values that it can take are all equal to each other. It can take one of n plus 1 possible values, and so the probability of each one of the values is 1 over n plus 1. We want to calculate the expected value of this random variable. How do we proceed? We just recall the definition of the expectation. It's a sum where we add over all of the possible values. And for each one of the values, we multiply by its corresponding probability. So we obtain a summation of this form. We can factor out a factor of 1 over n plus 1, and we're left with 0 plus 1 plus all the way up to n. And perhaps you remember the formula for summing those numbers, and it is n times n plus 1 over 2. And after doing the cancellations, we obtain a final answer, which is n over 2. Incidentally, notice that n over 2 is just the midpoint of this picture that we have here in this diagram. This is always the case. Whenever we have a PMF which is symmetric around a certain point, then the expected value will be the center of symmetry. More generally, if you do not have symmetry, the expected value turns out to be the center of gravity of the PMF. If you think of these bars as having weight, where the weight is proportional to their height, the center of gravity is the point at which you should put your finger if you want to balance that diagram so that it doesn't fall in one direction or the other. And we now close this segment by providing one more interpretation of expectations. Suppose that we have a class consisting of n students and that the ith student has a weight which is some number xi. We have a probabilistic experiment where we pick one of the students at random, and each student is equally likely to be picked as any other student. And we're interested in the random variable X, which is the weight of the student that was selected. To keep things simple, we will assume that the xi's are all distinct. And we first find the PMF of this random variable. Any particular value xi that is possible is associated with exactly one student, because we assumed that the xi's are distinct. So this probability would be the probability of selecting the ith student, and that probability is 1 over n. And now we can proceed and calculate the expected value of the random variable X. This random variable X takes values, and the values that it takes are the xi's. A particular xi would be associated with a probability 1 over n, and we're adding over all the i's or over all of the students. And so this is the expected value. What we have here is just the average of the weights of the students in this class.
So the expected value in this particular experiment can be interpreted as the true average over the entire population of the students. Of course, here we're talking about two different kinds of averages. In some sense, we're thinking of expected values as the average in a large number of repetitions of experiments. But here we have another interpretation as the average over a particular population.
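To make the definition concrete, here is a minimal Python sketch of the computation of the expected value as the sum, over all possible values x, of x times the probability of x; it uses the PMF of the game example above and then the uniform PMF on the set from 0 to n:

    # PMF of the game example: X takes the values 1, 2, 4
    # with probabilities 2/10, 5/10, 3/10.
    pmf = {1: 0.2, 2: 0.5, 4: 0.3}

    # Expected value: add, over all possible values x, the product x * p_X(x).
    print(sum(x * p for x, p in pmf.items()))   # about 2.4, the average gain per play

    # Uniform PMF on {0, 1, ..., n}: every value has probability 1/(n + 1),
    # and the expectation comes out to n/2, the center of symmetry.
    n = 10
    uniform = {x: 1 / (n + 1) for x in range(n + 1)}
    print(sum(x * p for x, p in uniform.items()))   # 5.0 (up to rounding), which is n/2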
MIT_RES6012_Introduction_to_Probability_Spring_2018
L098_From_The_Joint_to_the_Marginal.txt
In the discrete case, we saw that we could recover the PMF of X and the PMF of Y from the joint PMF. Indeed, the joint PMF is supposed to contain a complete probabilistic description of the two random variables. It is their probability law, and any quantity of interest can be computed if we know the joint. Things are similar in the continuous setting. You can easily guess the formula through the standard recipe. Replace sums by integrals, and replace PMFs by PDFs. But a proof of this formula is actually instructive. So let us start by first finding the CDF of X. The CDF of X is, by definition, the probability that the random variable X takes a value less than or equal to a certain number little x. And this is the probability of a particular set that we can visualize on the two dimensional plane. If here is the value of little x, then we're talking about the set of all pairs x, y, for which the x component is less than or equal to a certain number. So we need to integrate the joint density over this two-dimensional set. So it will be a double integral of the joint density over this particular two-dimensional set. Now, since we've used the symbol x here to mean something specific, let us use different symbols for the dummy variables that we will use in the integration. And we need to integrate with respect to the two variables, let's say with respect to t and with respect to s. The variable t can be anything. So it ranges from minus infinity to infinity. But the variable s, the first argument, ranges from minus infinity up to this point, which is x. Think of this double integral as an integral with respect to the variable s of this complicated function inside the brackets. Now, to find the density of X, all we need to do is to differentiate the CDF of X. And when we have an integral of this kind and we differentiate with respect to the upper limit of the integration, what we are left with is the integrand. That is this expression here. It is an integral with respect to the second variable. And it's an integral over the entire space, from minus infinity to plus infinity. Here is an example. The simplest kind of joint PDF is a PDF that is constant on a certain set, S, and is 0 outside that set. So the overall probability, one unit of probability, is spread uniformly over that set. Because the total volume under the joint PDF must be equal to 1, the height of the PDF must be equal to 1 over the area of that set. To calculate the probability of a certain set A, we want to ask how much volume is sitting on top of that set. And because in this case, the PDF is constant, we need to take the height of the PDF times the relevant area. What is the relevant area? Well, actually, the PDF is 0 outside the set S. So the relevant area is only this part here, which is the intersection of the two sets, S and A. So the total volume sitting on top of this little set is going to be the base, the area of the base, which is the area of A intersection S, times the height of the PDF at those places. Now, the height of the PDF is 1 over the area of S. So this is the formula for calculating the probability of a certain set, A. Let's now look at a specific example. Suppose that we have a uniform PDF over this particular set, S. This set has an area that is equal to 4. It consists of four unit rectangles arranged next to each other. So the height of the joint PDF in this example is going to be 1/4. It is 1/4 on that set, but of course, it's going to be 0 outside that set.
We can now find the marginal PDF at some particular x. So we can fix a particular value of x, let's say this one. To find the value of the marginal PDF, we need to integrate over y along that particular line. And the integral is going to have a contribution only on that segment. On that segment, the value of the joint PDF is 1/4. And we're integrating over an interval that has a length of one. So the integral is going to be equal to 1/4. But if x is somewhere around here, as we integrate over that line, we integrate the value of 1/4, the value of the PDF, over an interval that has a length equal to 3. And so the result turns out to be 3/4. There's a similar calculation for the marginal PDF of y. For any particular value of little y, to find the marginal PDF, we integrate along this line the joint PDF. The joint PDF is 0 out here. It's nonzero only on that interval. And on that interval, it has a value of 1/4. And the interval has a length of 1, so the integral is going to end up equal to 1/4. But if we were to take a line somewhere here, we integrate the value of 1/4 over an interval of length 2. And so the result would be 1/2. So we have recovered from the joint PDF the marginal PDF of X and also the marginal PDF of Y.
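The mechanics of recovering a marginal PDF can also be checked numerically. Here is a small Python sketch; the L-shaped set used below is a made-up stand-in, not the exact set S from the figure, chosen only so that the marginal takes different values at different points:

    # A hypothetical uniform joint PDF on an L-shaped set of area 3,
    # S = [0,1] x [0,2]  union  (1,2] x [0,1], so its height is 1/3.
    def joint_pdf(x, y):
        in_S = (0 <= x <= 1 and 0 <= y <= 2) or (1 < x <= 2 and 0 <= y <= 1)
        return 1 / 3 if in_S else 0.0

    # Marginal PDF of X at a point: integrate the joint PDF over y,
    # here approximated by a Riemann sum on a fine grid.
    def marginal_x(x, dy=0.001):
        ys = [-1 + k * dy for k in range(4000)]   # grid covering y in [-1, 3)
        return sum(joint_pdf(x, y) for y in ys) * dy

    print(marginal_x(0.5))   # about 2/3: height 1/3 over a y-interval of length 2
    print(marginal_x(1.5))   # about 1/3: height 1/3 over a y-interval of length 1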
MIT_RES6012_Introduction_to_Probability_Spring_2018
L092_Conditioning_A_Continuous_Random_Variable_on_an_Event.txt
In this segment, we pursue two themes. Every concept has a conditional counterpart. We know about PDFs, but if we live in a conditional universe, then we deal with conditional probabilities. And we need to use conditional PDFs. The second theme is that discrete formulas have continuous counterparts in which summations get replaced by integrals, and PMFs by PDFs. So let us recall the definition of a conditional PMF, which is just the same as an ordinary PMF but applied to a conditional universe. In the same spirit, we can start with a PDF, which we can interpret, for example, in terms of probabilities of small intervals. If we move to a conditional model in which event A is known to have occurred, probabilities of small intervals will then be determined by a conditional PDF, which we denote in this manner. Of course, we need to assume throughout that the probability of the conditioning event is positive so that conditional probabilities are well-defined. Let us now push the analogy further. We can use a PMF to calculate probabilities. The probability that X takes [a] value in a certain set is the sum of the probabilities of all the possible values in that set. And a similar formula is true if we're dealing with a conditional model. Now, in the continuous case, we use a PDF to calculate the probability that X takes values in a certain set. And by analogy, we use a conditional PDF to calculate conditional probabilities. We can take this relation here to be the definition of a conditional PDF. So a conditional PDF is a function that allows us to calculate probabilities by integrating this function over the event or set of interest. Of course, probabilities need to sum to 1. This is true in the discrete setting. And by analogy, it should also be true in the continuous setting. This is just an ordinary PDF, except that it applies to a model in which event A is known to have occurred. But it still is a legitimate PDF. It has to be non-negative, of course. But also, it needs to integrate to 1. When we condition on an event and without any further assumption, there's not much we can say about the form of the conditional PDF. However, if we condition on an event of a special kind, that X takes values in a certain set, then we can actually write down a formula. So let us start with a random variable X that has a given PDF, as in this diagram. And suppose that A is a subset of the real line, for example, this subset here. What is the form of the conditional PDF? We start with the interpretation of PDFs and conditional PDFs in terms of probabilities of small intervals. The probability that X lies in a small interval is equal to the value of the PDF somewhere in that interval times the length of the interval. And if we're dealing with conditional probabilities, then we use the corresponding conditional PDF. To find the form of the conditional PDF, we will work in terms of the left-hand side in this equation and try to rewrite it. Let us distinguish two cases. Suppose that little X lies somewhere out here, and we want to evaluate the conditional PDF at that point. So trying to evaluate this expression, we consider a small interval from little x to little x plus delta. And now, let us write the definition of a conditional probability. A conditional probability, by definition, is equal to the probability that both events occur divided by the probability of the conditioning event. Now, because the set A and this little interval are disjoint, these two events cannot occur simultaneously. 
So the numerator here is going to be 0. And this will imply that the conditional PDF is also going to be 0. This, of course, makes sense. Conditioned on the event that X took values in this set, values of X out here cannot occur. And therefore, the conditional density out here should also be 0. So the conditional PDF is 0 outside the set A. And this takes care of one case. Now, the second case to consider is when little x lies somewhere inside here inside the set A. And in that case, our little interval from little x to little x plus delta might have this form. In this case, the intersection of these two events, that X lies in the big set and X lies in the small set, the intersection of these two events is the event that X lies in the small set. So the numerator simplifies just to the probability that the random variable X takes values in the interval from little x to little x plus delta. And then we rewrite the denominator. Now, the numerator is just an ordinary probability that the random variable takes values inside a small interval. And by our interpretation of PDFs, this is approximately equal to the PDF evaluated somewhere in that small interval times delta. At this point, we notice that we have deltas on both sides of this equation. By cancelling this delta with that delta, we finally end up with a relation that the conditional PDF should be equal to this expression that we have here. So to summarize, we have shown a formula for the conditional PDF. The conditional PDF is 0 for those values of X that cannot occur given the information that we are given, namely that X takes values at that interval. But inside this interval, the conditional PDF has a form which is proportional to the unconditional PDF. But it is scaled by a certain constant. So in terms of a picture, we might have something like this. And so this green diagram is the form of the conditional PDF. The particular factor that we have here in the denominator is exactly that factor that is required, the scaling factor that is required so that the total area under the green curve, under the conditional PDF is equal to 1. So we see once more the familiar theme, that conditional probabilities maintain the same relative sizes as the unconditional probabilities. And the same is true for conditional PMFs or PDFs, keeping the same shape as the unconditional ones, except that they are re-scaled so that the total probability under a conditional PDF is equal to 1. We can now continue the same story and revisit everything else that we had done for discrete random variables. For example, we have the expectation of a discrete random variable and the corresponding conditional expectation, which is just the same kind of object, except that we now rely on conditional probabilities. Similarly, we can take the definition of the expectation for the continuous case and define a conditional expectation in the same manner, except that we now rely on the conditional PDF. So this formula here is the definition of the conditional expectation of a continuous random variable given a particular event. We have a similar situation with the expected value rule, which we have already seen for discrete random variables in both of the unconditional and in the conditional setting. We have a similar formula for the continuous case. And at this point, you can guess the form that the formula will take in the continuous conditional setting. 
This is the expected value rule in the conditional setting, and it is proved exactly the same way as for the unconditional continuous setting, except that here in the proof, we need to work with conditional probabilities and conditional PDFs, instead of the unconditional ones. So to summarize, there is nothing really different when we condition on an event in the continuous case compared to the discrete case. We just replace summations with integrations. And we replace PMFs by PDFs.
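As a numerical illustration of these formulas, here is a small Python sketch. The particular choice of an exponential density and of the event A being that X is larger than 1 is just an assumption made for the example:

    from math import exp

    # Hypothetical setup: X has the exponential PDF f_X(x) = e^{-x} for x >= 0,
    # and we condition on the event A = {X > 1}.
    def f_X(x):
        return exp(-x) if x >= 0 else 0.0

    dx = 0.0001
    xs = [k * dx for k in range(400000)]          # grid covering x in [0, 40)

    p_A = sum(f_X(x) for x in xs if x > 1) * dx   # P(A), approximately e^{-1}

    # Conditional PDF: zero outside A, rescaled by 1/P(A) inside A, so that it
    # keeps the same shape as f_X on A but integrates to 1.
    def f_X_given_A(x):
        return f_X(x) / p_A if x > 1 else 0.0

    print(sum(f_X_given_A(x) for x in xs) * dx)       # approximately 1
    print(sum(x * f_X_given_A(x) for x in xs) * dx)   # conditional expectation, approximately 2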
MIT_RES6012_Introduction_to_Probability_Spring_2018
L0110_Interpretations_Uses_of_Probabilities.txt
We end this lecture sequence by stepping back to discuss what probability theory really is and what exactly is the meaning of the word probability. In the most narrow view, probability theory is just a branch of mathematics. We start with some axioms. We consider models that satisfy these axioms, and we establish some consequences, which are the theorems of this theory. You could do all that without ever asking the question of what the word "probability" really means. Yet, one of the theorems of probability theory, that we will see later in this class, is that probabilities can be interpreted as frequencies, very loosely speaking. If I have a fair coin, and I toss it infinitely many times, then the fraction of heads that I will observe will be one half. In this sense, the probability of an event, A, can be interpreted as the frequency with which event A will occur in an infinite number of repetitions of the experiment. But is this all there is? If we're dealing with coin tosses, it makes sense to think of probabilities as frequencies. But consider a statement such as: "the current president of my country will be reelected in the next election with probability 0.7". It's hard to think of this number, 0.7, as a frequency. It does not make sense to think of infinitely many repetitions of the next election. In cases like this, and in many others, it is better to think of probabilities as just some way of describing our beliefs. And if you're a betting person, probabilities can be thought of as some numerical guidance into what kinds of bets you might be willing to make. But now if we think of probabilities as beliefs, you can run into the argument that, well, beliefs are subjective. Isn't probability theory supposed to be an objective part of math and science? Is probability theory just an exercise in subjectivity? Well, not quite. There's more to it. Probability, at the minimum, gives us some rules for thinking systematically about uncertain situations. And if it happens that our probability model, our subjective beliefs, have some relation with the real world, then probability theory can be a very useful tool for making predictions and decisions that apply to the real world. Now, whether your predictions and decisions will be any good will depend on whether you have chosen a good model. Have you chosen a model that provides a good enough representation of the real world? How do you make sure that this is the case? There's a whole field, the field of statistics, whose purpose is to complement probability theory by using data to come up with good models. And so we have the following diagram that summarizes the relation between the real world, statistics, and probability. The real world generates data. The field of statistics and inference uses these data to come up with probabilistic models. Once we have a probabilistic model, we use probability theory and the analysis tools that it provides to us. And the results that we get from this analysis lead to predictions and decisions about the real world.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L013_Sample_Space_Examples.txt
Let us now look at some examples of sample spaces. Sample spaces are sets. And a set can be discrete, finite, infinite, continuous, and so on. Let us start with a simple case in which we have a sample space that is discrete and finite. The particular experiment we will be looking at is the following. We take a very special die, a tetrahedral die. So it's a die that has four faces numbered from 1 up to 4. We roll it once. And then we roll it a second time. We're not dealing here with two probabilistic experiments. We're dealing with a single probabilistic experiment that involves two rolls of the die within that experiment. What is the sample space of that experiment? Well, one possible representation is the following. We take note of the result of the first roll. And then we take note of the result of the second roll. And this gives us a pair of numbers. Each one of the possible pairs of numbers corresponds to one of the little squares in this diagram. For example, if the first roll is 1 and the second is also 1, then this particular outcome has occurred. If the first roll is a 2 and the second is a 3, then this particular outcome occurs. If the first roll is a 3 and then the next one is a 2, then this particular outcome occurs. Notice that these two outcomes are pretty closely related. In both cases, we observe a 2 and we observe a 3. But we distinguish those two outcomes because in those two outcomes, the 2 and the 3 happen in different order. And the order in which they appear may be a detail which is of interest to us. And so we make this distinction in the sample space. So we keep the (3, 2) and the (2, 3) as separate outcomes. Now this is a case of a model in which the probabilistic experiment can be described in phases or stages. We could think about rolling the die once and then going ahead with the second roll. So we have two stages. Whenever we have an experiment with several stages, either real stages or imagined stages, a very useful way of describing its sample space is by providing a sequential description in terms of a tree. So a diagram of this kind, we call it a tree. You can think of this as the root of the tree from which you start. And the endpoints of the tree, we usually call them the leaves. So the experiment starts. We carry out the first phase, which in this case is the first roll. And we see what happens. So maybe we get a 2 in the first roll. And then we take note of what happened in the second roll. And maybe the result was a 3. So we follow this branch here. And we end up at this particular leaf, which is the leaf associated with the outcome 2, 3. Notice that in this tree we once more have a distinction. The outcome 2 followed by a 3 is different from the outcome 3 followed by a 2, which would correspond to this particular place in the diagram. In both cases, we have 16 possible outcomes. 4 times 4 makes 16. And similarly, if you count here, the number of leaves is equal to 16. The previous example involved a sample space that was discrete and finite. There were only 16 possible outcomes. But sample spaces can also be infinite. And they could also be continuous sets. Here's an example of an experiment that involves a continuous sample space. So we have a rectangular target which is the unit square. And you throw a dart on that target. And suppose that you are so skilled that no matter what, when you throw the dart, it always falls inside the target.
Once the dart hits the target, you record the coordinates x and y of the particular point that resulted from your dart throw. And we record x and y with infinite precision. So x and y are real numbers. So in this experiment, the sample space is just the set of x, y pairs that lie between 0 and 1.
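If it helps to see the first sample space written out explicitly, here is a tiny Python sketch that lists the 16 ordered pairs for the two rolls of the tetrahedral die:

    from itertools import product

    # Sample space of two rolls of a four-sided die:
    # ordered pairs (first roll, second roll).
    sample_space = list(product(range(1, 5), repeat=2))

    print(len(sample_space))   # 16 outcomes, since 4 times 4 is 16
    print(sample_space[:4])    # [(1, 1), (1, 2), (1, 3), (1, 4)]
    # (2, 3) and (3, 2) appear as distinct outcomes, because order matters.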
MIT_RES6012_Introduction_to_Probability_Spring_2018
S071_The_InclusionExclusion_Formula.txt
In this segment, we develop the inclusion-exclusion formula, which is a beautiful generalization of a formula that we have seen before. Let us look at this formula and remind ourselves what it says. If we have two sets, A1 and A2, and we're interested in the probability of their union, how can we find it? We take the probability of the first set, we add to it the probability of the second set, but then we realize that by doing so we have double counted this part of the diagram. And so we need to correct for that and we need to subtract the probability of this intersection. And that's how this formula comes about. Can we generalize this thinking, let's say, to the case of three events? Suppose that we have three events, A1, A2, and A3. And we want to calculate the probability of their union. We first start by adding the probabilities of the different sets. But then we realize that, for example, this part of the diagram has been counted twice. It shows up once inside the probability of A1 and once inside the probability of A2. So, for this reason, we need to make a correction and we need to subtract the probability of this intersection. Similarly, subtract the probability of that intersection and of this one. So we subtract the probabilities of these intersections. But, actually, the intersections are not just what I drew here. The intersections also involve this part. So now, let us just focus on this part of the diagram here. A typical element that belongs to all three of the sets will show up once here, once here and once there. But it will also show up in all of these intersections. And so it shows up three times with a plus sign, three times with a minus sign, which means that these elements will not be counted at all. In order to count them, we need to add one more term which is the probability of the three way intersection. So this is the formula for the probability of the union of three events. It has a rationale similar to this formula, and you can convince yourself that it is a correct formula by just looking at the different pieces of this diagram and making sure that each one of them is accounted for properly. But instead of working in terms of such a picture, let us think about a more formal derivation. And the formal derivation will use a beautiful trick. Namely, indicator functions. So here is the formula that we want to establish. And let us remind ourselves what indicator functions are. To any set or event, we can associate an indicator function. Let's say that this is the set Ai. We're going to associate an indicator function, call it Xi, which is equal to 1 when the outcome is inside this set, and it's going to be 0 when the outcome is outside. What is the indicator function of the complement? The indicator function of the complement is 1 minus the indicator of the event. Why is this? If the outcome is in the complement, then Xi is equal to 0, and this expression is equal to 1. On the other hand, if the outcome is inside Ai, then the indicator function will be equal to 1 and this quantity is going to be equal to 0. If we have the intersection of two events, Ai and Aj, what is their indicator function? It is Xi times Xj. This expression is equal to 1, if and only if, Xi is equal to 1 and Xj is equal to 1, which happens, if and only if, the outcome is inside Ai and also inside Aj. Now, what about the indicator of the intersection of the complements? Well, it's an intersection.
So the associated indicator function is going to be the product of the indicator function of the first set, which is 1 minus Xi times the indicator function of the second set, which is 1 minus Xj. And finally, what is the indicator function of this event? Here we remember De Morgan's Laws. De Morgan's Laws tell us that the complement of this set-- the complement of a union-- is the intersection of the complements. So this event here is the complement of that event. And, therefore, the associated indicator function is going to be 1 minus this expression. And if we were dealing with more than two sets-- and here we had, for example, three way intersections-- you would get the product of three terms. And if we had a three way union, we would get a similar expression, except that here we would have, again, a product of three terms instead of two. So now, let us put to use what we have done so far. We are interested in the probability that the outcome falls in the union of three sets. Now, an important fact to remember is that the probability of an event is the same as the expected value of the indicator of that event. This is because the indicator is equal to 1, if and only if, the outcome happens to be inside that set. And so the contribution that we get to the expectation is 1 times the probability that the indicator is 1, which is just this probability. Now, the indicator of a three way union is going to be, by what we just discussed, 1 minus a product of this kind, but now with three terms. Let us now calculate this expectation by expanding the product involved. We have this first term, then, when we multiply those three terms together, we're going to get a bunch of contributions. One contribution with a minus sign is 1 times 1 times 1. Another contribution would be minus minus-- that's a plus-- X1 times 1 times 1. And similarly, we get a contribution of X2 and X3. And then we have a contribution such as X1 times X2 times 1. And if you look at the minus signs-- there are three minuses involved-- so, overall, it's going to be a minus. Minus X1 times X2. And then there is going to be similar terms, such as X1 X3 and X2 X3. And, finally, there's going to be a term X1 times X2 times X3. There's a total of four minus signs involved, so everything shows up in the end with a plus sign. So the probability of this event is equal to the expectation of this random variable here. We notice that the ones cancel out. The expected value of X1 for an indicator variable is just the probability of that event. And we get this term. The expected value of X2 and X3 give us these terms. The expected value of X1 times X2. This is the indicator random variable of the intersection. So the expected value of this term is just the probability of the intersection of A1 and A2. And, similarly, these terms here give rise to those two terms here. Finally, X1 times X2 times X3 is the indicator variable for the event A1 intersection A2 intersection A3. Therefore, the expected value of this term, is equal to this probability. And, therefore, we have established exactly the formula that we wanted to establish. Now this derivation that we carried out here, there's nothing special about the case of three. We could have the union of many more events, we would just have here the product of more terms, and we would need to carry out the multiplication and we would get cross terms of all types involving just one of the indicator variables, or products of two indicator variables, or products of three indicator variables, and so on. 
And after you carry out this exercise and keep track of the various terms, you end up with this general version of what is called the inclusion-exclusion formula. So the probability of a union is the sum of the probabilities, but then you subtract all possible probabilities of two way intersections. Then you add probabilities of three way intersections, then you subtract probabilities of four way intersections, and you keep going this way, alternating signs, until you get to the last term, which is the probability of the intersection of all the events involved. And this exponent here of n minus 1 is the exponent that you need so that the last term has the correct sign. So, for example, if n is equal to 3, the exponent would be 2, so this would be a plus sign, which is consistent with what we got here. So this is a formula that is quite useful when you want to calculate probabilities of unions of events. But also, this derivation using indicator functions is quite beautiful.
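Here is a small Python sketch that checks the inclusion-exclusion formula on a toy discrete model; the particular sample space and events are made up for the example:

    from itertools import combinations

    # A made-up finite model: outcomes 0..9, all equally likely, and three events.
    omega = set(range(10))
    events = [{0, 1, 2, 3, 4}, {3, 4, 5, 6}, {0, 4, 6, 7, 8}]

    def prob(event):
        return len(event) / len(omega)

    # Direct computation of the probability of the union.
    lhs = prob(set.union(*events))

    # Inclusion-exclusion: alternating sums over all k-way intersections.
    rhs = sum((-1) ** (k - 1) * prob(set.intersection(*subset))
              for k in range(1, len(events) + 1)
              for subset in combinations(events, k))

    print(lhs, rhs)   # both come out to 0.9 (up to rounding) for this choice of events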
MIT_RES6012_Introduction_to_Probability_Spring_2018
L099_Continuous_Analogs_of_Various_Properties.txt
In this segment we will go very fast through a few definitions and facts that remain true in the continuous case. Everything is completely analogous to the discrete case. And there are absolutely no surprises here. So, for example, we have defined joint PMFs for the case of more than two discrete random variables. And we have a bunch of facts about them. In a similar manor, we can define joint PDFs for more than two random variables. And if you have understood the material so far, you can guess how such a joint PDF will be used. For example, you can calculate the probability of a three dimensional set by integrating the joint PDF over that three dimensional set. And there are analogs off all of the other formulas that we have here where we follow the usual recipe. Sums become integrals, and PMFs are replaced by PDFs. Finally, when you deal with a random variable, which is defined as a function of jointly continuous random variables, we can use an expected value rule that takes the same form as in the discrete case. And using the expected value rule, we can establish, once more, the usual linearity properties of expectations. So absolutely no surprises here. The derivations are either completely straightforward. Or they follow exactly the same line of argument as in the discrete case, with just minor changes in notation.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L056_Binomial_Random_Variables.txt
The next random variable that we will discuss is the binomial random variable. It is one that is already familiar to us in most respects. It is associated with the experiment of taking a coin and tossing it n times independently. And at each toss, there is a probability, p, of obtaining heads. So the experiment is completely specified in terms of two parameters-- n, the number of tosses, and p, the probability of heads at each one of the tosses. We can represent this experiment by the usual sequential tree diagram. And the leaves of the tree are the possible outcomes of the experiment. So these are the elements of the sample space. And a typical outcome is a particular sequence of heads and tails that has length n. In this diagram here, we took n to be equal to 3. We can now define a random variable associated with this experiment. Our random variable that we denote by capital X is the number of heads that are observed. So for example, if the outcome happens to be this one-- tails, heads, heads-- we have 2 heads that are observed. And the numerical value of our random variable is equal to 2. In general, this random variable, a binomial random variable, can be used to model any kind of situation in which we have a fixed number of independent trials and identical trials, and each trial can result in success or failure, and we have a probability of success equal to some given number, p. The number of successes obtained in these trials is, of course, random and it is modeled by a binomial random variable. We can now proceed and calculate the PMF of this random variable. Instead of calculating the whole PMF, let us look at just one typical entry of the PMF. Let's look at this entry, which, by definition, is the probability that our random variable takes the value of 2. Now, the random variable taking the numerical value of 2, this is an event that can happen in three possible ways that we can identify in the sample space. We can have 2 heads followed by a tail. We can have heads, tails, heads. Or we can have tails, heads, heads. The probability of this outcome is p times p times (1 minus p). So it's p squared times (1 minus p). And the other two outcomes also have the same probability, so the overall probability is 3 times this. Which can also be written this way, 3 is the same as 3-choose-2. It's the number of ways that you can choose 2 heads, where they will be placed in a sequence of 3 slots or 3 trials. More generally, we have the familiar binomial formula. So this is a formula that you have already seen. It's the probability of obtaining k successes in a sequence of n independent trials. The only thing that is new is that instead of using the traditional probability notation, now we're using PMF notation. To get a feel for the binomial PMF, it's instructive to look at some plots. So suppose that we toss the coin three times and that the coin tosses are fair, so that the probability of heads is equal to 1/2. Then we see that 1 head or 2 heads are equally likely, and they are more likely than the outcome of 0 or 3 heads. Now, if we change the number of tosses and toss the coin 10 times, then we see that the most likely result is to have 5 heads. And then as we move away from 5 in either direction, the probability of that particular result becomes smaller and smaller. Now, if we toss the coin many times, let's say 100 times, the coin is still fair, then we see that the number of heads that we're going to get is most likely to be somewhere in this range between, let's say, 35 and 65. 
These are values of the random variable that have some noticeable or high probabilities. But anything below 30 or anything above 70 is extremely unlikely to occur. We can generate similar plots for unfair coins. So suppose now that our coin is biased and the probability of heads is quite low, equal to 0.2. In that case, the most likely result is that we're going to see 0 heads. And then, there's smaller and smaller probability of obtaining more heads. On the other hand, if we toss the coin 10 times, we expect to see a few heads, not a very large number, but some number of heads between, let's say, 0 and 4. Finally, if we toss the coin 100 times and we take the coin to be an extremely unfair one, what do we expect to see? If we think of probabilities as frequencies, we expect to see heads roughly 10% of the time. So, given that n is 100, we expect to see about 10 heads. But when we say about 10 heads, we do not mean exactly 10 heads. About 10 heads, in this instance, as this plot tells us, is any number more or less in the range from 0 to 20. But anything above 20 is extremely unlikely.
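A quick way to reproduce the numbers behind these plots is to evaluate the binomial PMF directly. Here is a small Python sketch:

    from math import comb

    # Binomial PMF: p_X(k) = C(n, k) * p**k * (1 - p)**(n - k).
    def binomial_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # Three fair tosses: 1 head and 2 heads are equally likely and most likely.
    print([binomial_pmf(k, 3, 0.5) for k in range(4)])   # [0.125, 0.375, 0.375, 0.125]

    # 100 tosses of a very unfair coin (p = 0.1): essentially all of the
    # probability sits on counts in the range from roughly 0 to 20.
    print(sum(binomial_pmf(k, 100, 0.1) for k in range(21)))   # very close to 1 (about 0.999)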
MIT_RES6012_Introduction_to_Probability_Spring_2018
L077_Independence_Variances_the_Binomial_Variance.txt
Let us now revisit the variance and see what happens in the case of independence. Variances have some general properties that we have already seen. However, since we often add random variables, we would like to be able to say something about the variance of the sum of two random variables. Unfortunately, the situation is not so simple, and in general, the variance of the sum is not the same as the sum of the variances. We will see an example shortly. On the other hand, when X and Y are independent, the variance of the sum is equal to the sum of the variances, and this is a very useful fact. Let us go through the derivation of this property. But to keep things simple, let us assume, just for the sake of the derivation, that the two random variables have 0 mean. So in that case, the variance of the sum is just the expected value of the square of the sum. And we can expand the quadratic and write this as the expectation of X squared plus 2 X Y plus Y squared. Then we use linearity of expectations to write this as the expected value of X squared plus twice the expected value of X times Y and then plus the expected value of Y squared. Now, the first term is just the variance of X because we have assumed that we have 0 mean. The last term is similarly the variance of Y. How about the middle term? Because of independence, the expected value of the product is the same as the product of the expected values, and the expected values are 0 in our case. So this term, because of independence, is going to be equal to 0. In particular, what we have is that the expected value of XY equals the expected value of X times the expected value of Y, which is equal to 0. And so we have verified that indeed the variance of the sum is equal to the sum of the variances. Let us now look at some examples. Suppose that X is the same random variable as Y. Clearly, this is a case where independence fails to hold. If I tell you the value of X, then you know the value of Y. So in this case, the variance of the sum is the same as the variance of twice X. Since X is the same as Y, X plus Y is 2 times X. And then using this property for the variance, what happens when we multiply by a constant? This is going to be 4 times the variance of X. In another example, suppose that X is the negative of Y. In that case, X plus Y is identically equal to 0. So we're dealing with a random variable that takes a constant value. In particular, it is always equal to its mean, and so the difference from the mean is always equal to 0, and so the variance will also evaluate to 0. So we see that the variance of the sum can take quite different values depending on the sort of interrelation that we have between the two random variables. So these two examples indicate that knowing the variance of each one of the random variables is not enough to say much about the variance of the sum. The answer will generally depend on how the two random variables are related to each other and what kind of dependencies they have. As a last example, suppose now that X and Y are independent. X is independent from Y, and therefore X is also independent from minus 3Y. Therefore, this variance is equal to the sum of the variances of X and of minus 3Y. And using the facts that we already know, this is going to be equal to the variance of X plus 9 times the variance of Y. As an illustration of the usefulness of the property of the variance that we have just established, we will now use it to calculate the variance of a binomial random variable.
Remember that a binomial with parameters n and p corresponds to the number of successes in n independent trials. We use indicator variables. This is the same trick that we used to calculate the expected value of the binomial. So the random variable X sub i is equal to 1 if the i-th trial is a success and is a 0 otherwise. And as we did before, we note that X, the total number of successes, is the sum of those indicator variables. Each success makes one of those variables equal to 1, so by adding those indicator variables, we're just counting the number of successes. The key point to note is that the assumption of independence that we're making is essentially the assumption that these random variables Xi are independent of each other. So we're dealing with a situation where we have a sum of independent random variables, and according to what we have shown, the variance of X is going to be the sum of the variances of the Xi's. Now, the Xi's all have the same distribution so all these variances will be the same. It suffices to consider one of them. Now, X1 is a Bernoulli random variable with parameter p. We know what its variance is-- it is p times 1 minus p. And therefore, this is the formula for the variance of a binomial random variable.
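Here is a small Python sketch that checks this variance formula by simulation, building the binomial as a sum of independent Bernoulli indicator variables; the particular values of n and p are just example choices:

    import random

    n, p = 100, 0.3

    # One binomial sample, built as a sum of n independent Bernoulli indicators.
    def binomial_sample():
        return sum(1 if random.random() < p else 0 for _ in range(n))

    samples = [binomial_sample() for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)

    print(variance)          # close to the theoretical value below
    print(n * p * (1 - p))   # 21.0, that is, n times p times (1 - p)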
MIT_RES6012_Introduction_to_Probability_Spring_2018
L012_Sample_Space.txt
Putting together a probabilistic model-- that is, a model of a random phenomenon or a random experiment-- involves two steps. First step, we describe the possible outcomes of the phenomenon or experiment of interest. Second step, we describe our beliefs about the likelihood of the different possible outcomes by specifying a probability law. Here, we start by just talking about the first step, namely, the description of the possible outcomes of the experiment. So we carry out an experiment. For example, we flip a coin. Or maybe we flip five coins simultaneously. Or maybe we roll a die. Whatever that experiment is, it has a number of possible outcomes, and we start by making a list of the possible outcomes-- or, a better word, instead of the word "list", is to use the word "set", which has a more formal mathematical meaning. So we create a set that we usually denote by capital omega. That set is called the sample space and is the set of all possible outcomes of our experiment. The elements of that set should have certain properties. Namely, the elements should be mutually exclusive and collectively exhaustive. What does that mean? Mutually exclusive means that, if at the end of the experiment, I tell you that this outcome happened, then it should not be possible that this outcome also happened. At the end of the experiment, there can only be one of the outcomes that has happened. Being collectively exhaustive means something else-- that, together, all of these elements of the set exhaust all the possibilities. So no matter what, at the end, you will be able to point to one of the outcomes and say, that's the one that occurred. To summarize-- this set should be such that, at the end of the experiment, you should be always able to point to one, and exactly one, of the possible outcomes and say that this is the outcome that occurred. Physically different outcomes should be distinguished in the sample space and correspond to distinct points. But when we say physically different outcomes, what do we mean? We really mean different in all relevant aspects but perhaps not different in irrelevant aspects. Let's make more precise what I mean by that by looking at a very simple, and maybe silly, example, which is the following. Suppose that you flip a coin and you see whether it resulted in heads or tails. So you have a perfectly legitimate sample space for this experiment which consists of just two points-- heads and tails. Together these two outcomes exhaust all possibilities. And the two outcomes are mutually exclusive. So this is a very legitimate sample space for this experiment. Now suppose that while you were flipping the coin, you also looked outside the window to check the weather. And then you could say that my sample space is really, heads, and it's raining. Another possible outcome is heads and no rain. Another possible outcome is tails, and it's raining, and, finally, another possible outcome is tails and no rain. This set, consisting of four elements, is also a perfectly legitimate sample space for the experiment of flipping a coin. The elements of this sample space are mutually exclusive and collectively exhaustive. Exactly one of these outcomes is going to be true, or will have materialized, at the end of the experiment. So which sample space is the correct one? This sample space, the second one, involves some irrelevant details. 
So the preferred sample space for describing the flipping of a coin, the preferred sample space is the simpler one, the first one, which is sort of at the right granularity, given what we're interested in. But ultimately, the question of which one is the right sample space depends on what kind of questions you want to answer. For example, if you have a theory that the weather affects the behavior of coins, then, in order to play with that theory, or maybe check it out, and so on, then, in such a case, you might want to work with the second sample space. This is a common feature in all of science. Whenever you put together a model, you need to decide how detailed you want your model to be. And the right level of detail is the one that captures those aspects that are relevant and of interest to you.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L0310_The_Kings_Sibling.txt
Let us now conclude with a fun problem, which is also a little bit of a puzzle. We are told that the king comes from a family of two children. What is the probability that his sibling is female? Well, the problem is too loosely stated, so we need to start by making some assumptions. First, let's assume that we're dealing with an anachronistic kingdom where boys have precedence. In other words, if the royal family has two children, one of which is a boy and one is a girl, it is always the boy who becomes king, even if the girl was born first. Let us also assume that when a child is born, it has 50% probability of being a boy and 50% probability of being a girl. And in addition, let's assume that different children are independent as far as their gender is concerned. Given these assumptions, perhaps we can argue as follows. The king's sibling is a child which is independent from the king. Its gender is independent from the king's gender, so it's going to be a girl with probability 1/2. And so this is one possible answer to this problem. Is this a correct answer? Well, let's see. We have to make a more precise model, so let's go ahead with it. We have two children, so there are four possible outcomes-- boy, boy; boy, girl; girl, boy; and girl, girl. Each one of these outcomes has probability 1/4 according to our assumptions. For example, the probability of a boy followed by a boy is 1/2 times 1/2, where we're also using independence. So each one of these four outcomes has the same probability, 1/4. Now, we know that there is a king, so there must be at least one boy. Given this information, one of the outcomes becomes impossible, namely the outcome girl, girl. And we're restricted to a smaller universe with only three possible outcomes. Our new universe is this green universe, which includes all outcomes that have at least one boy, so that we can get a king. We should, therefore, use the conditional probabilities that are appropriate to this new universe. Since these three outcomes inside the green set have equal unconditional probabilities, they should also have equal conditional probabilities. So each one of these three outcomes should have a conditional probability equal to 1/3. In two of these three outcomes the sibling is a girl, and therefore the conditional probability that the sibling is a girl, given that there is a king and hence at least one boy, is going to be 2/3. So 2/3 is actually the official answer to this problem, and the earlier answer of 1/2 is incorrect. Are we satisfied with this answer? Maybe yes, maybe no. Actually, some more assumptions are needed in order to say that 2/3 is the correct answer. Let me state what these assumptions are. We assume that the royal family decided to have exactly two children. So the number two that we have here is not random. It was something that was predetermined. Once they decided to have the two children, they had them. At least one turned out to be a boy and that boy became a king. Under this situation, indeed, the probability that the sibling of the king is female is 2/3. But these assumptions that I just stated are not the only possible ones. Let's consider some alternative assumptions. For example, suppose that the royal family operated as follows. They decided to have children until they got one boy. What does this tell us? Well, since they had two children, this tells us something-- that the first child was a girl. So in this case, the probability that the king's sibling is a girl is equal to 1.
The only reason why they had two children was because the first was a girl and then the second was a boy. Suppose that the royal family made some different choices. They decided to have children until they would get two boys, just to be sure that the line of succession was secured. In this case, if we are told that there are only two children, this means that there were exactly two boys, because if one of the two children was a girl, the royal family would have continued. So in this particular case, the probability that the sibling is a girl is equal to zero. And you can think of other scenarios, as well, that might give you different answers. So 2/3 is the official answer, as long as we make the precise assumptions that the number of children, the number two, was predetermined before anything else happened. The general moral from this story is that when we deal with situations that are described in words somewhat vaguely, we must be very careful to state whatever assumptions are being made. And that needs to be done before we are able to fix a particular probabilistic model. This process of modeling will always be something of an art in which judgment calls will have to be made.
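The official answer can also be checked by simulation, under the same assumption that the family decided in advance to have exactly two children. Here is a small Python sketch:

    import random

    # Families with exactly two children, each child independently a boy (B) or girl (G).
    # Keep only the families with at least one boy, so that there is a king,
    # and count how often the king's sibling is a girl.
    trials, families_with_king, sibling_is_girl = 1_000_000, 0, 0
    for _ in range(trials):
        children = [random.choice("BG") for _ in range(2)]
        if "B" in children:            # at least one boy: a king exists
            families_with_king += 1
            if "G" in children:        # then the king's sibling is the girl
                sibling_is_girl += 1

    print(sibling_is_girl / families_with_king)   # about 2/3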
MIT_RES6012_Introduction_to_Probability_Spring_2018
L048_Each_Person_Gets_An_Ace.txt
We will now apply our multinomial formula for counting the number of partitions to solve the following probability problem. We have a standard 52-card deck, which we deal to four persons. Each person gets 13 cards as, for example, in bridge. What is the probability that each person gets exactly one ace? Well, before we start, as always we will need a probability model. We deal the cards fairly, and this is going to be our model. But we still need to interpret our statement. To give this interpretation, let us first think of the outcomes of the experiment. What are the possible outcomes? An outcome of this experiment is a partition of the 52 cards into the four persons so that each person gets exactly 13 cards. Our statement about dealing the cards fairly will be an assumption that all partitions are equally likely. So since all partitions, all outcomes of the experiment, are equally likely, this means that we can solve a probability question by counting. We need to count the number of elements of our sample space, the number of possible outcomes, and then count the number of outcomes that make the event of interest occur. Let us start with the number of elements of the sample space. This is the problem that we just dealt with a little while ago-- the number of outcomes, the number of partitions of 52 items into four persons, where we give 13 cards to person one, 13 cards to person two, 13 cards to person three, and 13 cards to person four. The number of possible ways of doing this is equal to this multinomial coefficient. So now let us count the number of outcomes that belong to the event of interest, namely the outcomes where each person gets an ace. We think of the process of constructing such an outcome as a multi-stage process. And we count the number of choices that we have at each stage. The process is as follows. We first distribute the four aces. We take the ace of spades and give it to one person. In how many ways can we do it? We can do it in four ways. Then we take the next ace. The next ace must be given to a different person. And so at that stage, we have three different choices about who to give that ace to. Then we consider the next ace. At this point, two persons already have aces. So we have two available choices for who can get the next ace. And finally for the last ace, we do not have any choice. We give it to the only remaining person who doesn't yet have an ace. Having distributed the four aces, then we need to somehow distribute the remaining 48 cards to the four people. But we can do that in any way we want. So all we need to do is to just partition the 48 cards into four subsets of given cardinalities. And this can be done in a number of ways, which is the number of such partitions. We have already found what that number is. And it is this particular multinomial coefficient. So the number of ways that we can distribute the cards so that each person gets an ace, according to the counting principle, is going to be the number of ways that we can distribute the aces times the number of ways that we can distribute the remaining cards. The product of these two numbers gives us the count, gives us the cardinality, of the event of interest. We also have the cardinality of the sample space. So the desired probability can be found by dividing these two numbers. And the final answer takes this form. Let us now look at the same problem but in a different way. Probability problems can often be solved in multiple ways. And some can be faster than others.
So we want to look for a smarter solution that perhaps will get us in a faster way to the desired answer. We will use the following trick. We will think about a very specific way of dealing the cards which is the following. We take the 52 cards, the card deck, and stack it so that the four aces are at the top. So they are first. And then we deal those cards to the players as follows. We think of each player having 13 slots of his own. And the cards will be placed randomly into the different slots. So we can do this one card at a time, starting from the top. We take the first ace and send it to a random location. Then we will take the second ace, send it to a random location, and so on. What we want to calculate is the probability that the four aces will end up in locations or in slots that are associated with different persons. So let us calculate this probability. The first ace can go anywhere. It doesn't matter. For the second ace, it has 51 slots to choose from. It's 51 because we started with 52, but one slot has already been taken by that particular ace. So for the ace of hearts, we have 51 slots that it can go to. And out of those 51, we have 39 of them that belong to people who do not yet have an ace. So this is the probability that the ace of hearts gets placed into a slot that belongs to a person who is different than the person who got the first ace. Now let us consider this ace. What is the probability that this ace will get into a slot which belongs to either this person or that person? It has 26 slots in which this desired event is going to happen. And it's 26 out of the 50 available slots. Finally, let us consider this ace. So having placed that ace and assuming that it got to a different person, what is the probability now that this ace is going to go to this person who doesn't yet have one? The probability of this happening is the number of slots associated with that person, which is equal to 13 divided by the number of slots that this card can choose from. And the number of slots is 52 minus the 3 slots that have already been taken, so it's 49. And so this is the answer to our problem. This expression looks very different from the expression that we derived a little earlier. But you can do the algebra, the arithmetic, simplify the answer, and you will verify that indeed it's exactly the same answer. And in case you're curious, the numerical value turns out to be 0.105. So there's about 10% [chance] that when you deal the cards in bridge, each one of the players is going to end up having exactly one ace. So this was a faster way of getting to the answer to our problem, compared to the previous one. But it raises a legitimate question. Is the way that we dealt the cards by putting the aces on top and then dealing them, is that way a fair way of dealing the cards? Is it true that with this way of dealing the cards all partitions are equally likely? It turns out that this is indeed the case. But it does require a bit of thinking. Maybe you can see it intuitively that this is the case. But if not, then it is something that one can prove. It can be proved formally as follows. One first needs to check that all permutations, that is all possible allocations of cards into slots, are equally likely. And because of this, one can then argue that any possible partition into subsets of [13] is also equally likely. So this is an equivalent way of dealing the cards to the one that we considered earlier, which was that every partition is equally likely. 
Therefore, we did indeed solve the same problem, and so this is a legitimate alternative way of getting to the answer. And of course, it's reassuring to check that this numerical expression agrees with the numerical expression we had derived earlier.
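The two expressions can also be checked numerically. Here is a minimal sketch in Python (not part of the lecture) that evaluates both the counting answer and the sequential answer and confirms that they agree, at roughly 0.105.

```python
from math import factorial

def multinomial(n, *groups):
    # n! / (n1! * n2! * ...), the number of partitions into groups of given sizes
    assert sum(groups) == n
    result = factorial(n)
    for g in groups:
        result //= factorial(g)
    return result

# First method: (ways to place the four aces) times (partitions of the other
# 48 cards into groups of 12), divided by (all partitions of the 52 cards).
p_counting = (4 * 3 * 2 * 1) * multinomial(48, 12, 12, 12, 12) \
             / multinomial(52, 13, 13, 13, 13)

# Second method: place the aces one at a time into the 52 slots.
p_sequential = (39 / 51) * (26 / 50) * (13 / 49)

print(p_counting, p_sequential)   # both approximately 0.105
```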
MIT_RES6012_Introduction_to_Probability_Spring_2018
S012_De_Morgans_Laws.txt
We will now discuss De Morgan's laws, which are some very useful relations between sets and their complements. One of De Morgan's laws takes this form. If we take the intersection of two sets and then take the complement of this intersection, what we obtain is the union of the complements of the two sets. Pictorially, here is the situation. We have our universal set. Inside that set, we have a set, S, which is this one. And we have another set, T, which is this one. Let us look at this side. The complement of S is this part of the diagram. The complement of T is this part of the diagram. What is left? What is left is just this region here, which is the intersection of S with T. So anything that does not belong here belongs to one of the two complements. This means that the complement of the intersection is everything out there, which is exactly the union of the two complements. If you're not convinced by this pictorial proof, let us go through an argument that is a little more formal. What does it take for an element to belong to the first set? In order to belong to that set, x belongs to the complement of S intersection T. This is the same as saying that x does not belong to the intersection of S with T. What does that mean? Since it is not in the intersection, this is the same as saying that x does not belong to S or x does not belong to T. But this is the same as saying that x belongs to the complement of S or x belongs to the complement of T. And this is equivalent to saying that x belongs to the union of the complement of S with the complement of T. So this establishes this first De Morgan's law. There's another De Morgan's law, which is obtained from this one by a syntactic substitution. We're going to play the following trick. Wherever we see an S, we're going to replace it by S complement. And wherever we see an S complement, we will replace it with an S. And similarly, whenever we see a T, we'll replace it by T complement. And when we see a T complement, we will replace it by T. So doing this syntactic substitution, what we obtain is that the complement of S complement intersection with T complement-- everything gets complemented-- is the same as S union T. Now, let us take complements of both sides. The complement of a complement is the set itself. So on one side we obtain S complement intersection with T complement. And now, we take the complement of the other side, which is the complement of S union T. And this is the second De Morgan's law. It tells us that the complement of a union is the same as the intersection of the complements. We derived it from the first De Morgan's law by a syntactic substitution. If you're not convinced, it would be useful for you to go through an argument of this kind to show that something is an element of this set if and only if it is an element of that set as well. Finally, it turns out that De Morgan's laws are valid when we take unions or intersections of more than two sets. There is a more general form. And the general form is as follows, in analogy with this one. If we have a collection of sets, Sn, perhaps an infinite collection, and we take the intersection of those sets and then the complement, what that is is the union of the complements. So this is analogous to this law. And this law extends to this one: if we have the union of certain sets and we take the complement of the union, what we obtain is the intersection of the complements. We will have many occasions to use De Morgan's laws. They're actually very useful. They allow us, in general, to go back and forth between unions and intersections.
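For readers who like to experiment, here is a small Python check (an illustration, not part of the lecture) of both laws on a concrete finite universe.

```python
# Verify De Morgan's laws on a small universe using Python's built-in sets.
universe = set(range(10))
S = {1, 2, 3, 4}
T = {3, 4, 5, 6}

def complement(A):
    return universe - A

# (S ∩ T)^c == S^c ∪ T^c
assert complement(S & T) == complement(S) | complement(T)
# (S ∪ T)^c == S^c ∩ T^c
assert complement(S | T) == complement(S) & complement(T)
print("De Morgan's laws hold on this example")
```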
MIT_RES6012_Introduction_to_Probability_Spring_2018
L073_Conditional_Expectation_the_Total_Expectation_Theorem.txt
We will now talk about conditional expectations of one random variable given another. As we will see, there will be nothing new here, except for older results but given in new notation. Any PMF has an associated expectation. And so conditional PMFs also have associated expectations, which we call conditional expectations. We have already seen them for the case where we condition on an event, A. The case where we condition on random variables is exactly the same. We let the event, A, be the event that Y takes on a specific value. And then we calculate the expectation using the relevant conditional probabilities, those that are given by the conditional PMF. So the conditional expectation of X given that Y takes on a certain value is defined as the usual expectation, except that we use the conditional probabilities that apply given that Y takes on a specific value little y. Recall now the expected value rule for ordinary expectations. And also the Expected Value Rule for conditional expectations given an event, something that we have already seen. Now, in PMF notation, the expected value rule takes a similar form. The event, A is replaced by the specific event that Y takes on a specific value. And in that case, the conditional PMF given the event A is just the conditional PMF given that random variable Y takes on a specific value, little y. For the case where we condition on events, we also developed a version of the total probability theorem and the total expectation theorem. We can do the same when we condition on random variables. So suppose that the sample space has been partitioned into n, disjoint scenarios. The total probability theorem tells us that the probability of the event that random variable X takes on a value little x, can be found by taking the probabilities of this event under each one of the possible scenarios. And then weighing those probabilities according to the probabilities of the different scenarios. Now, suppose that we are dealing with a random variable that takes values in a set consisting of n elements. And let us consider scenarios Ai, the i-th scenario is the event that the random variable Y takes on the i-th possible value. We can apply the total probability theorem to this situation. We can find the probability that the random variable X takes on a certain value, little x, by considering the probability of this event happening under each possible scenario, where a scenario is that Y took on a specific value, and then weigh those probabilities according to the probabilities of the different scenarios. The story with the total expectation theorem is similar. We know that an expectation can be found by taking the conditional expectations under each one of the scenarios and weighing them according to the probabilities of the different scenarios. Again, let the event that Y takes on a specific value be a different scenario. And with this correspondence we obtain the following version of the total expectation theorem. We have a sum of different terms. And each term in the sum is the probability of a given scenario times the expected value of X under this particular scenario. At this point, I have to add a comment of a more mathematical flavor. We have been talking about a partition of the sample space into finitely many scenarios. But if Y takes on values in a discrete but infinite set, for example, if Y can take on any integer value, the argument that we have given is not quite complete. 
Fortunately, the total probability theorem and the total expectation theorem, they both remain true, even for the case where Y ranges over an infinite set as long as the random variable X has a well-defined expectation. For the total probability theorem, the proof for the general case can be carried out without a lot of difficulty, just using the countable additivity axiom. However, for the total expectation theorem, it takes some harder mathematical work. And this is beyond our scope. But we will just take this fact for granted, that the total expectation theorem carries over to the case where we're adding over an infinite sequence of possible values of Y. In the rest of the course we will often use the total expectation theorem, including in cases where Y ranges over an infinite discrete set. In fact, we will see that this theorem is an extremely useful tool that can be used to divide and conquer complicated models.
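As a concrete illustration, here is a sketch (using a small, made-up joint PMF, not one from the lecture) that computes the conditional expectation E[X | Y = y] from a joint PMF and verifies the total expectation theorem numerically.

```python
# Total expectation theorem: E[X] = sum over y of P(Y = y) * E[X | Y = y].
joint = {  # hypothetical joint PMF p_{X,Y}(x, y)
    (1, 0): 0.10, (2, 0): 0.30,
    (1, 1): 0.40, (2, 1): 0.20,
}

p_Y = {}
for (x, y), p in joint.items():
    p_Y[y] = p_Y.get(y, 0.0) + p          # marginal PMF of Y

def cond_expectation(y):
    # E[X | Y = y] = sum over x of x * p_{X,Y}(x, y) / p_Y(y)
    return sum(x * p for (x, yy), p in joint.items() if yy == y) / p_Y[y]

E_X_direct = sum(x * p for (x, y), p in joint.items())
E_X_total = sum(p_Y[y] * cond_expectation(y) for y in p_Y)
print(E_X_direct, E_X_total)              # identical, as the theorem asserts
```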
MIT_RES6012_Introduction_to_Probability_Spring_2018
S010_Mathematical_Background_Overview.txt
In this sequence of segments, we review some mathematical background that will be useful at various places in this course. Most of what is covered, with the exception of the last segment, is material that you may have seen before. But this could still be an opportunity to refresh some of these concepts. I should say that this is intended to be just a refresher. Our coverage is not going to be complete in any sense. What we will talk about is sets, various definitions related to sets, and some basic properties, including De Morgan's laws. We will talk about what a sequence is and what it means for a sequence to converge to something. We will talk about infinite series. And as an example, we will look at the geometric series. Then we will talk about some subtleties that arise when you have sums of terms that are indexed with multiple indices. And finally, probably the most sophisticated part, will be a discussion of countable versus uncountable sets. Countable sets are like the integers. Uncountable sets are like the real line. And they're fundamentally different. And this fundamental difference reflects itself into fundamentally different probabilistic models-- models that involve discrete experiments and outcomes versus models that involve continuous outcomes.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L026_The_Multiplication_Rule.txt
As promised, we will now start developing generalizations of the different calculations that we carried out in the context of the radar example. The first kind of calculation that we carried out goes under the name of the multiplication rule. And it goes as follows. Our starting point is the definition of conditional probabilities. The conditional probability of A given another event, B, is the probability that both events have occurred divided by the probability of the conditioning event. We now take the denominator term and send it to the other side of this equality to obtain this relation, which we can interpret as follows. The probability that two events occur is equal to the probability that a first event occurs, event B in this case, times the conditional probability that the second event, event A, occurs, given that event B has occurred. Now, out of the two events, A and B, we're of course free to choose which one we call the first event and which one we call the second event. So the probability of the two events happening is also equal to an expression of this form, the probability that A occurs times the conditional probability that B occurs, given that A has occurred. We used this formula in the context of a tree diagram. And we used it to calculate the probability of a leaf of this tree by multiplying the probability of taking this branch, the probability that A occurs, times the conditional probability of taking this branch, the probability that event B also occurs given that event A has occurred. How do we generalize this calculation? Consider a situation in which the experiment has an additional third stage that has to do with another event, C, that may or may not occur. For example, if we have arrived here, A and B have both occurred. And then C also occurs, then we reach this particular leaf of the tree. Or there could be other scenarios. For example, it could be the case that A did not occur. Then event B occurred, and finally, event C did not occur, in which case we end up at this particular leaf. What is the probability of this scenario happening? Let us try to do a calculation similar to the one that we used for the case of two events. However, we need to deal here with three events. What should we do? Well, we look at the intersection of these three events and think of it as the intersection of a composite event, A complement intersection B, then intersected with the event C complement. Clearly, you can form the intersection of three events by first taking the intersection of two of them and then intersecting with a third. After we group things this way, we're dealing with the probability of two events happening, this composite event and this ordinary event. And the probability of two events happening is equal to the probability that the first event happens, and then the probability that the second event happens, given that the first one has happened. Can we simplify this even further? Yes. The first term is the probability of two events happening. So it can be simplified further as the probability that A complement occurs times the conditional probability that B occurs, given that A complement has occurred. And then we carry over the last term exactly the way it is. 
The conclusion is that we can calculate the probability of this leaf by multiplying the probability of the first branch times the conditional probability of the second branch, given that the first branch was taken, and then finally multiply with the probability of the third branch, which is the probability that C complement occurs, given that A complement and B have already occurred. In other words, we can calculate the probability of a leaf by just multiplying the probabilities of the different branches involved and where we use conditional probabilities for the intermediate branches. At this point, you can use your imagination to see that such a formula should also be valid for the case of more than three events. The probability that a bunch of events all occur should be the probability of the first event times a number of factors, each corresponding to a branch in a tree of this kind. In particular, the probability that events A1, A2, up to An all occur is going to be the probability that the first event occurs times a product of conditional probabilities that the i-th event occurs, given that all of the previous events have already occurred. And we obtain a term of this kind for every event, Ai, after the first one, so this product ranges from 2 up to n. And this is the most general version of the multiplication rule and allows you to calculate the probability of several events happening by multiplying probabilities and conditional probabilities.
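To make the leaf calculation concrete, here is a minimal sketch with hypothetical branch probabilities (the numbers are not from the lecture): the probability of one leaf of the tree is the product of the branch probabilities, with conditional probabilities used for every branch after the first.

```python
# P(A^c ∩ B ∩ C^c) = P(A^c) * P(B | A^c) * P(C^c | A^c ∩ B)
p_A_complement = 0.95               # P(A^c), hypothetical
p_B_given_Ac = 0.30                 # P(B | A^c), hypothetical
p_Cc_given_Ac_and_B = 0.60          # P(C^c | A^c ∩ B), hypothetical

p_leaf = p_A_complement * p_B_given_Ac * p_Cc_given_Ac_and_B
print(p_leaf)                       # probability of reaching this particular leaf
```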
MIT_RES6012_Introduction_to_Probability_Spring_2018
L027_Total_Probability_Theorem.txt
Let us now revisit the second calculation that we carried out in the context of our earlier example. In that example, we calculated the total probability of an event that can occur under different scenarios. And it involves the powerful idea of divide and conquer where we break up complex situations into simpler pieces. Here is what is involved. We have our sample space. And our sample space is partitioned into a number of subsets or events. In this picture we take that number to be 3, so we'll have it partitioned into three possible scenarios. It is a partition, which means that these events cover the entire sample space, and they're disjoint from each other. For each one of the scenarios we're given their probabilities. If you prefer, you can also draw this situation in terms of a tree. There are three different scenarios that can happen. We're interested in a particular event, B. That event B can happen in three different ways. It can happen under scenario one, under scenario two, or under scenario three. And this corresponds to these particular sub-events. So for example, this is the event where scenario A1 happens. And then event B happens as well. In terms of a tree diagram, the picture becomes as follows. If scenario A1 materializes, event B may occur or event B might not occur. Finally, we are given conditional probabilities that event B will materialize under each one of the different possible scenarios. Under those circumstances, can we calculate the probability of event B? Of course we can. And here's how we do it. First we realize that event B consists of a number of disjoint pieces. One piece is when event B occurs together with event A1. Another piece is when event B occurs together with A2. Another piece is when event B occurs together with A3. These three sets are disjoint from each other, as we see in this picture. And together they form the event B. Therefore, the probability of B is going to be, by the additivity axiom of probabilities, equal to the sum of the probabilities of these sub-events. Furthermore, for each one of these sub-events we can use the multiplication rule and write their probabilities as follows. The probability that B and A1 both occur is the probability that scenario one materializes times the conditional probability that B occurs given that A1 occurred. And then we're going to have a similar term under the second scenario and a similar term under the third scenario. So putting everything together, we have arrived at a formula of this form. The total probability of event B is the sum of the probabilities of the different ways that B may occur, that is, B occurring under the different scenarios. And those particular probabilities are the product of the probability of the scenario times the conditional probability of B given that scenario. Now, note that the sum of the probabilities of the different scenarios is of course equal to 1. And this is because the scenarios form a partition of our sample space. So if we look at this formula here, we realize that it is a weighted average of the conditional probabilities of event B, where the probabilities of the individual scenarios are the weights. In words, the probability that an event occurs is a weighted average of the probability that it has under each possible scenario, where the weights are the probabilities of the different scenarios. One final comment-- our derivation was for the case of three events.
But you can certainly see that the same derivation would go through if we had any finite number of events. But even more, if we had a partition of our sample space into an infinite sequence of events, the same derivation would still go through, except that in this place in the derivation, instead of using the ordinary additivity axiom we would have to use the countable additivity axiom. But other than that, all the steps would be the same. And we would end up with the same formula, except that now this would be an infinite sum over the infinite set of scenarios.
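A short numerical sketch (with hypothetical scenario probabilities, not numbers from the lecture) makes the weighted-average interpretation concrete.

```python
# Total probability theorem: P(B) = sum over i of P(Ai) * P(B | Ai).
p_scenario = [0.5, 0.3, 0.2]        # P(A1), P(A2), P(A3) -- a partition, hypothetical
p_B_given = [0.10, 0.40, 0.70]      # P(B | Ai), hypothetical

p_B = sum(p * q for p, q in zip(p_scenario, p_B_given))
print(p_B)                          # 0.5*0.1 + 0.3*0.4 + 0.2*0.7 = 0.31
```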
MIT_RES6012_Introduction_to_Probability_Spring_2018
L021_Lecture_Overview.txt
Suppose I look at the registry of residents of my town and pick a person at random. What is the probability that this person is under 18 years of age? The answer is about 25%. Suppose now that I tell you that this person is married. Will you give the same answer? Of course not. The probability of being less than 18 years old is now much smaller. What happened here? We started with some initial probabilities that reflect what we know or believe about the world. But we then acquired some additional knowledge, some new evidence-- for example, about this person's family situation. This new knowledge should cause our beliefs to change, and the original probabilities must be replaced with new probabilities that take into account the new information. These revised probabilities are what we call conditional probabilities. And this is the subject of this lecture. We will start with a formal definition of conditional probabilities together with the motivation behind this particular definition. We will then proceed to develop three tools that rely on conditional probabilities, including the Bayes rule, which provides a systematic way for incorporating new evidence into a probability model. The three tools that we introduce in this lecture involve very simple and elementary mathematical formulas, yet they encapsulate some very powerful ideas. It is not an exaggeration to say that much of this class will revolve around the repeated application of variations of these three tools to increasingly complicated situations. In particular, the Bayes rule is the foundation for the field of inference. It is a guide on how to process data and make inferences about unobserved quantities or phenomena. As such, it is a tool that is used all the time, all over science and engineering.
MIT_RES6012_Introduction_to_Probability_Spring_2018
S019_Proof_That_a_Set_of_Real_Numbers_is_Uncountable.txt
For those of you who are curious, we will go through an argument that establishes that the set of real numbers is an uncountable set. It's a famous argument known as Cantor's diagonalization argument. Actually, instead of looking at the set of all real numbers, we will first look at the set of all numbers, x, that belong to the open unit interval-- so numbers between 0 and 1-- and such that their decimal expansion involves only threes and fours. Now, the choice of three and four is somewhat arbitrary. It doesn't matter. What really matters is that we do not have long strings of nines. So suppose that this set was countable. If the set was countable, then that set could be written as equal to a set of this form, x1, x2, x3 and so on, where each one of these is a real number inside that set. Now, suppose that this is the case. Let us take those numbers and write them down in decimal notation. For example, one number could be this one, and it continues forever. Since we're talking about real numbers, their decimal expansion will go on forever. Suppose that the second number is of this kind, and it has its own decimal expansion. Suppose that the third number is, again, with some decimal expansion and so on. So we have assumed that our set is countable and therefore, the set is equal to that sequence. So this sequence exhausts all the numbers in that set. Can it do that? Let's construct a new number in the following fashion. The new number looks at this digit and does something different. Looks at this digit, the second digit of the second number, and does something different. Looks at the third digit of the third number and does something different. And we continue this way. This number that we have constructed here is different from the first number. They differ in the first digit. It's different from the second number. They differ in the second digit. It's different from the third number because it's different in the third digit and so on. So this is a number, and this number is different from xi for all i. So we have an element of this set which does not belong to this sequence. Therefore, it cannot be true that this set is equal to the set formed by that sequence. And so this is a contradiction to the initial assumption that this set could be written in this form, and this contradiction establishes that since this is not possible, that the set that we have here is an uncountable set. Now, this set is a subset of the set of real numbers. Since this one is uncountable, it is not hard to show that the set of real numbers, which is a bigger set, will also be uncountable. And so this is this particular famous argument. We will not need it or make any arguments of this type in this class, but it's so beautiful that it's worth for everyone to see it once in their lifetime.
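As a finite illustration of the diagonal construction only (it is not, of course, a substitute for the proof), the following sketch takes any list of decimal expansions made of 3s and 4s and produces a number that differs from the i-th one in its i-th digit, so it cannot appear on the list.

```python
# The diagonal trick on a small, hypothetical list of expansions of 3s and 4s.
listed = [
    "3434343434",
    "3333333333",
    "4443444344",
    "3434444333",
]

# Flip the i-th digit of the i-th number: 3 becomes 4, 4 becomes 3.
diagonal = "".join("4" if row[i] == "3" else "3" for i, row in enumerate(listed))
print(diagonal)
assert all(diagonal[i] != row[i] for i, row in enumerate(listed))
```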
MIT_RES6012_Introduction_to_Probability_Spring_2018
L041_Lecture_Overview.txt
A basketball coach has 20 players available. Out of them, he needs to choose five for the starting lineup, and seven who would be sitting on the bench. In how many ways can the coach choose these 5 plus 7 players? It is certainly a huge number, but what exactly is it? In this lecture, we will learn how to answer questions of this kind. More abstractly, we will develop methods for counting the number of elements of a given set which is described in some implicit way. Now, why do we care? The reason is that in many models, the calculation of probabilities reduces to counting. Counting the number of elements of various sets. Suppose that we have a probability model in which the sample space, Omega, is finite, and consists of n equally likely elements. So each element has probability 1/n. Suppose now that we're interested in the probability of a certain set, A, which has k elements. Since each one of the elements of A has probability 1/n, and since A has k distinct elements, then by the additivity axiom, the probability of A is equal to k times 1 over n. Therefore to find the probability of A, all we have to do is to count the number of elements of Omega and the number of elements of A, and so determine the numbers k and n. Of course, if a set is described explicitly through a list of its elements, then counting is trivial. But when a set is given through some abstract description, as in our basketball team example, counting can be a challenge. In this lecture, we will start with a powerful tool, the basic counting principle, which allows us to break a counting problem into a sequence of simpler counting problems. We will then count permutations, subsets, combinations, and partitions. We will see shortly what all of these terms mean. In the process we will solve a number of example problems, and we will also derive the formula for the binomial probabilities, the probabilities that describe the number of heads in a sequence of independent coin tosses. So, let us get started.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L059_Elementary_Properties_of_Expectation.txt
We now note some elementary properties of expectations. These will be some properties that are extremely natural and intuitive, but even so, they are worth recording. The first property is the following. If you have a random variable which is non-negative, then its expected value is also non-negative. What does it mean that the random variable is non-negative? What it means is that for all possible outcomes of the experiment, no matter what the outcome is, the associated numerical value of the random variable is a non-negative number. What's the implication of this? When we calculate an expectation we're adding over all the possible numerical values of the random variable. All the possible numerical values of the random variable under this assumption are non-negative. Probabilities are also non-negative. So we have a sum of non-negative entries and therefore, the expected value is also going to be non-negative. The next property is a generalization of this. Consider now a random variable that has the property that no matter what the outcome of the experiment is, the value of this random variable lies in the range between two constants, a and b. In this case, we argue as follows. The expected value, by definition, is a sum over all possible values of the random variable of certain terms. Now, the possible numerical values of the random variable are all of them at least as large as a, so this gives us an inequality of this type. Then, we pull a factor of a outside of the summation. And finally, we recall that the sum of a PMF over all possible values of little x is equal to 1. Why is that the case? Well, these are the probabilities for the different numerical values of the random variable. The sum of the probabilities of all the possible numerical values has to be equal to 1, because that exhausts all the possibilities. So we obtain a times 1, which is a. So, what we have proved is that the expected value is at least as large as a. You can use a symmetrical argument where the inequalities will go the opposite way and where a's will be replaced by b's, to prove the second inequality, as well. The last fact we want to take note of is the following. If we have a constant and we take its expected value, we obtain the same constant. What does that mean? We have only been talking about expected values of random variables. What does it mean to take the expected value of a constant? Well, as we discussed earlier, we can think of a constant as being a random variable of a very special type. A random variable whose PMF takes this form. This random variable can take only a single value and the probability of that single value is equal to 1. This means that in the formula for the expected value there's going to be only one term in this summation, and that term is going to be c times the probability that our random variable takes the value c. Now, that probability is equal to 1, and we're left with c. So this equality makes sense, of course, as long as you understand that a constant can also be viewed as a random variable of a very degenerate type. Now, intuitively, of course, it's certainly clear what this is saying. That if a certain quantity is always equal to c, then on the average, it will also be equal to c.
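A small numerical check of the three properties (the PMF values below are hypothetical, not from the lecture): non-negativity, the bounds a <= E[X] <= b, and E[c] = c.

```python
# A random variable taking values in the range [2, 7], with a hypothetical PMF.
pmf = {2: 0.3, 3: 0.5, 7: 0.2}

E_X = sum(x * p for x, p in pmf.items())
print(E_X)                            # 3.5, which indeed lies between 2 and 7
assert E_X >= 0 and 2 <= E_X <= 7

# A "constant" random variable equal to 5: its PMF puts probability 1 on 5.
constant_pmf = {5: 1.0}
print(sum(x * p for x, p in constant_pmf.items()))   # 5, that is, E[5] = 5
```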
MIT_RES6012_Introduction_to_Probability_Spring_2018
S073_Independence_of_Random_Variables_Versus_Independence_of_Events.txt
By now, we have defined the notion of independence of events and also the notion of independence of random variables. The two definitions look fairly similar, but the details are not exactly the same, because the two definitions refer to different situations. For two events, we know what it means for them to be independent. The probability of their intersection is the product of their individual probabilities. Now, to make a relation with random variables, we introduce the so-called indicator random variables. So for example, the random variable X is defined to be equal to 1 if event A occurs and to be equal to 0 if event A does not occur. And there is a similar definition for random variable Y. In particular, the probability that random variable X takes the value of 1, this is the probability that event A occurs. It turns out that the independence of the two events, A and B, is equivalent to the independence of the two indicator random variables. And there is a similar statement, which is true more generally. That is, n events are independent if and only if the associated n indicator random variables are independent. This is a useful statement, because it allows us, sometimes, instead of manipulating events, to manipulate random variables, and vice versa. And depending on the context, one may be easier than the other. Now, the intuitive content is that events A and B are independent if the occurrence of event A does not change your beliefs about B. And in terms of random variables, one random variable taking a certain value, which indicates whether event A has occurred or not, does not give you any information about the other random variable, which would tell you whether event B has occurred or not. It is instructive now to go through the derivation of this fact, at least for the case of two events, because it gives us perhaps some additional understanding about the precise content of the definitions we have introduced. So let us suppose that random variables X and Y are independent. What does that mean? Independence means that the joint PMF of the two random variables, X and Y, factors as a product of the corresponding marginal PMFs. And this factorization must be true no matter what arguments we use inside the joint PMF. And the combination of X and Y in this instance has a total of four possible values. These are the combinations of zeroes and ones that we can form. And for this reason, we have a total of four equations. These four equalities are what is required for X and Y to be independent. So suppose that this is true, that the random variables are independent. Let us take this first relation and write it in probability notation. The random variable X taking the value of 1, that's the same as event A occurring. And random variable Y taking the value of 1, that's the same as event B occurring. So the joint PMF evaluated at 1, 1 is the probability that events A and B both occur. On the other side of the equation, we have the probability that X is equal to 1, which is the probability that A occurs, and similarly, the probability that B occurs. But if this is true, then by definition, A and B are independent events. So we have verified one direction of this statement. If the random variables are independent, then events A and B are independent. Now, we would like to verify the reverse statement. So suppose that events A and B are independent. In that case, this relation is true. And as we just argued, this relation is the same as this relation but just written in different notation.
So we have shown that if A and B are independent, this relation will be true. But how about the remaining three relations? We have more work to do. Here's how we can proceed. If A and B are independent, we have shown some time ago that events A and B complement will also be independent. Intuitively, A doesn't tell you anything about B occurring or not. So A does not tell you anything about whether B complement will occur or not. Now, these two events being independent, by the definition of independence, we have that the probability of A intersection with B complement is the product of the probabilities of A and of B complement. And then we realize that this equality, if written in PMF notation, corresponds exactly to this equation here. Event A corresponds to X taking the value of 1, and event B complement corresponds to the event that Y takes the value of 0. By a similar argument, B and A complement will be independent. And we translate that into probability notation. And then we translate this equality into PMF notation. And we get this relation. Finally, using the same property that we used to do the first step here, we have that A complement and B complement are also independent. And by following the same line of reasoning, this implies the fourth relation as well. So we have verified that if events A and B are independent, then we can argue that all of these four equations will be true. And therefore, random variables X and Y will also be independent.
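Here is a sketch of this direction of the argument in code (the probabilities are hypothetical): starting only from P(A), P(B), and the independence condition P(A and B) = P(A)P(B), build the full joint PMF of the two indicator variables and check that it factors at all four (x, y) pairs.

```python
p_A, p_B = 0.3, 0.6                   # hypothetical probabilities of A and B
p_AB = p_A * p_B                      # independence of the events A and B

joint = {
    (1, 1): p_AB,
    (1, 0): p_A - p_AB,               # P(A ∩ B^c) = P(A) - P(A ∩ B)
    (0, 1): p_B - p_AB,               # P(A^c ∩ B) = P(B) - P(A ∩ B)
    (0, 0): 1 - p_A - p_B + p_AB,     # P(A^c ∩ B^c), the remaining probability
}
p_X = {1: p_A, 0: 1 - p_A}            # marginal PMF of the indicator of A
p_Y = {1: p_B, 0: 1 - p_B}            # marginal PMF of the indicator of B

for (x, y), p in joint.items():
    # independence of the indicators: p_{X,Y}(x, y) = p_X(x) * p_Y(y)
    assert abs(p - p_X[x] * p_Y[y]) < 1e-12
print("all four factorization equations hold")
```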
MIT_RES6012_Introduction_to_Probability_Spring_2018
L036_Independence_Versus_Conditional_Independence.txt
We have already seen an example in which we have two events that are independent but become dependent in a conditional model. So independence and conditional independence are not the same. We will now see another example in which a similar situation arises. The example is as follows. We have two possible coins, coin A and coin B. This is the model of the world given that coin A has been chosen. So this is a conditional model given that we have in our hands coin A. In this conditional model, the probability of heads is 0.9. And, moreover, the probability of heads is 0.9 in the second toss no matter what happened in the first toss and so on as we continue. So given a particular coin, we assume that we have independent tosses. This is another way of saying that we're assuming conditional independence. Within this conditional model, coin flips are independent. And the same assumption is made in the other possible conditional universe. This is a universe in which we're dealing with coin B. Once more, we have conditionally independent tosses. And this time, the probability of heads at each toss is 0.1. Suppose now that we choose one of the two coins. Each coin is chosen with the same probability, 0.5. So we're equally likely to obtain this coin-- and then start flipping it over and over-- or that coin-- and start flipping it over and over. The question we will try to answer is whether the coin tosses are independent. And by this, we mean a question that refers to the overall model. In this general model, where you do not know ahead of time which coin it is going to be, are the different coin tosses independent? We can approach this question by trying to compare conditional and unconditional probabilities. That's what independence is about. Independence is about certain conditional probabilities being the same as the unconditional probabilities. So this comparison here is essentially the question of whether the 11th coin toss is dependent on or independent of what happened in the first 10 coin tosses. Let us calculate these probabilities. For this one, we use the total probability theorem. There's a certain probability that we have coin A, and then we have the probability of heads in the 11th toss given that it was coin A. There's also a certain probability that it's coin B and then a conditional probability that we obtain heads given that it was coin B. We use the numbers that are given in this example. We have 0.5 probability of obtaining a particular coin, 0.9 probability of heads for coin A, 0.5 probability that it's coin B, and 0.1 probability of heads if it is indeed coin B. We do the arithmetic, and we find that the answer is 0.5, which makes perfect sense. We have coins with different biases, but the average bias is 0.5. If we do not know which coin it's going to be, the average bias is going to be 0.5. So the probability of heads in any particular toss is 0.5 if we do not know which coin it is. Suppose now that someone told you that the first 10 tosses were heads. Will this affect your beliefs about what's going to happen in the 11th toss? We can calculate this quantity using the definition of conditional probabilities, or the Bayes rule, but let us instead think intuitively. If it is coin B, the event of 10 heads in a row is extremely unlikely. So if I see 10 heads in a row, then I should conclude that it is almost certain that I'm dealing with coin A. So the information that I'm given tells me that I'm extremely likely to be dealing with coin A.
So we might as well condition on this equivalent information that it is coin A that I'm dealing with. But if it is coin A, then the probability of heads is going to be equal to 0.9. So the conditional probability is quite different from the unconditional probability. And therefore, information on the first 10 tosses affects my beliefs about what's going to happen in the [11th] toss. And therefore, we do not have independence between the different tosses.
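The intuitive argument can be backed up numerically. The following sketch (not part of the lecture) computes the two quantities being compared: the unconditional probability of heads on the 11th toss, and the conditional probability given 10 heads in a row, obtained with the Bayes rule.

```python
p_coin = {"A": 0.5, "B": 0.5}          # prior probability of each coin
p_heads = {"A": 0.9, "B": 0.1}         # probability of heads given the coin

# P(H11) = sum over coins of P(coin) * P(heads | coin)
p_h11 = sum(p_coin[c] * p_heads[c] for c in p_coin)

# P(coin | 10 heads) is proportional to P(coin) * P(heads | coin)^10
posterior_unnorm = {c: p_coin[c] * p_heads[c] ** 10 for c in p_coin}
norm = sum(posterior_unnorm.values())
posterior = {c: w / norm for c, w in posterior_unnorm.items()}

# P(H11 | 10 heads) = sum over coins of P(coin | 10 heads) * P(heads | coin)
p_h11_given_10h = sum(posterior[c] * p_heads[c] for c in p_coin)
print(p_h11, p_h11_given_10h)          # 0.5 versus roughly 0.9
```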
MIT_RES6012_Introduction_to_Probability_Spring_2018
L017_A_Discrete_Example.txt
Let us now move from the abstract to the concrete. Recall the example that we discussed earlier where we have two rolls of a tetrahedral die. So there are 16 possible outcomes illustrated in this diagram. To continue, now we need to specify a probability law, some kind of probability assignment. To keep things simple, we're going to make the assumption that the 16 possible outcomes are all equally likely. And each outcome has a probability of 1 over 16. Given this assumption, we will now proceed to calculate certain probabilities. Let us look first at the probability that X, which stands for the result of the first roll, is equal to 1. The way to calculate this probability is to identify what exactly that event is in our picture of the sample space, and then calculate. The event that X is equal to 1 can happen in four different ways that correspond to these four particular outcomes. Each one of these outcomes has a probability of 1 over 16. The probability of this event is the sum of the probabilities of the outcomes that it contains. So it is 4 times 1 over 16, equal to one fourth. Let now Z stand for the smaller of the two numbers that came up in our two rolls. So for example, if X is 2 and Y is equal to 3, then Z is equal to 2, which is the smaller of the two. Let us try to calculate the probability that the smaller of the two outcomes is equal to 4. Now for the smaller of the two outcomes to be equal to 4, we must have that both X and Y are equal to 4. So this outcome here is the only way that this particular event can happen. Since there's only one outcome that makes the event happen, the probability of this event is the probability of that outcome and is equal to 1 over 16. For another example, let's calculate the probability that the minimum is equal to 2. What does it mean that the minimum is equal to 2? It means that one of the dice resulted in a 2, and the other die resulted in a number that's 2 or larger. So we could have both equal to 2. We could have X equal to 2, but Y larger. Or we could have Y equal to 2 and X something larger. This green event, this green set, is the set of all outcomes for which the minimum of the two rolls is equal to 2. There's a total of five such outcomes. Each one of them has probability 1 over 16. And we have discussed that for finite sets, the probability of a finite set is the sum of the probabilities of the elements of that set. So we have five elements here, each one with probability 1 over 16. And this is the answer to this problem. This particular example that we saw here is a special case of what is called a discrete uniform law. In a discrete uniform law, we have a sample space which is finite. And it has n elements. And we assume that these n elements are equally likely. Now since the probability of omega, the probability of the entire sample space, is equal to 1, this means that each one of these elements must have probability 1 over n. That's the only way that the sum of the probabilities of the different outcomes would be equal to 1 as required by the normalization axiom. Consider now some subset of the sample space, an event A that has exactly k elements. What is the probability of the set A? It's the sum of the probabilities of its elements. There are k elements. And each one of them has a probability of 1 over n. And this way we can find the probability of the set A.
So when we have a discrete uniform probability law, we can calculate probabilities by simply counting the number of elements of omega, which is n, finding the number n, and counting the number of elements of the set A. That's the reason why counting will turn out to be an important skill. And there will be a whole lecture devoted to this particular topic.
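Because the law is uniform, all three probabilities above can be checked by brute-force enumeration. Here is a minimal sketch (not part of the lecture) that does exactly that.

```python
# Two rolls of a fair four-sided die: 16 equally likely outcome pairs.
outcomes = [(x, y) for x in range(1, 5) for y in range(1, 5)]

p_X_is_1 = sum(1 for x, y in outcomes if x == 1) / len(outcomes)
p_min_is_4 = sum(1 for x, y in outcomes if min(x, y) == 4) / len(outcomes)
p_min_is_2 = sum(1 for x, y in outcomes if min(x, y) == 2) / len(outcomes)

print(p_X_is_1, p_min_is_4, p_min_is_2)   # 0.25, 1/16, 5/16
```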
MIT_RES6012_Introduction_to_Probability_Spring_2018
L076_Independence_Expectations.txt
When we have independence, does anything interesting happen to expectations? We know that, in general, the expected value of a function of random variables is not the same as applying the function to the expected values. And we also know that there are some exceptions where we do get equality. This is the case where we are dealing with linear functions of one or more random variables. Note that this last property is always true and does not require any independence assumptions. When we have independence, there is one additional property that turns out to be true. The expected value of the product of two independent random variables is the product of their expected values. Let us verify this relation. We are dealing here with the expected value of a function of random variables, where the function is defined to be the product function. So to calculate this expected value, you can use the expected value rule. And we are going to get the sum over all x, the sum over all y, of g of x and y, but in this case, g of x and y is x times y. And then we weigh all those values according to the probabilities as given by the joint PMF. Now, using independence, this sum can be changed into the following form-- the joint PMF is the product of the marginal PMFs. And now when we look at the inner sum over all values of y, we can take outside the summation those terms that do not depend on y, and so this term and that term. And this is going to yield a summation over x of x times the marginal PMF of X, and then the summation over all y of y times the marginal PMF of Y. But now we recognize that here we have just the expected value of Y. And then we will be left with another expression, which is the expected value of X. And this completes the argument. Now, consider a function of X and another function of Y, where X and Y are independent. Intuitively, the value of X does not give you any new information about Y, so the value of g of X does not give you any new information about h of Y. So on the basis of this intuitive argument, the functions g of X and h of Y are also independent of each other. Therefore, we can apply the fact that we have already proved, but with g of X in the place of X and h of Y in the place of Y. And this gives us this more general fact that the expected value of the product of two functions of independent random variables is equal to the product of the expectations of these functions. We could also prove this property directly without relying on the intuitive argument. We could just follow the same steps as in this derivation. Wherever there is an X, we would write g of X, and wherever there is a Y, we would write h of Y. And the same algebra would go through, and we would end up with the expected value of g of X times the expected value of h of Y.
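To make this concrete, here is a small sketch (with a hypothetical pair of marginal PMFs, not ones from the lecture) that builds the joint PMF of two independent random variables as the product of the marginals and confirms that E[XY] = E[X] E[Y].

```python
p_X = {0: 0.4, 1: 0.6}                # hypothetical marginal PMF of X
p_Y = {2: 0.5, 5: 0.5}                # hypothetical marginal PMF of Y

E_X = sum(x * p for x, p in p_X.items())
E_Y = sum(y * p for y, p in p_Y.items())

# Expected value rule applied to g(x, y) = x * y, with p_{X,Y} = p_X * p_Y
E_XY = sum(x * y * p_X[x] * p_Y[y] for x in p_X for y in p_Y)
print(E_XY, E_X * E_Y)                # both equal 0.6 * 3.5 = 2.1
```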
MIT_RES6012_Introduction_to_Probability_Spring_2018
L078_The_Hat_Problem.txt
We will now study a problem which is quite difficult to approach in a direct brute force manner but becomes tractable once we break it down into simpler pieces using several of the tricks that we have learned so far. And this problem will also be a good opportunity for reviewing some of the tricks and techniques that we have developed. The problem is the following. There are n people. And let's say for the purpose of illustration that we have 3 people, persons 1, 2, and 3. And each person has a hat. They throw their hats inside a box. And then each person picks a hat at random out of that box. So here are the three hats. And one possible outcome of this experiment is that person 1 ends up with hat number 2, person 2 ends up with hat number 1, person 3 ends up with hat number 3. We could indicate the hats that each person got by noting here the numbers associated with each person, the hat numbers. And notice that this sequence of numbers, which is a description of the outcome of the experiment, is just a permutation of the numbers 1, 2, 3 of the hats. So we permute the hat numbers so that we can place them next to the person that got each one of the hats. In particular, we have n factorial possible outcomes. This is the number of possible permutations. What does it mean to pick hats at random? One interpretation is that every permutation is equally likely. And since we have n factorial permutations, each permutation would have a probability of 1 over n factorial. But there's another way of describing our model, which is the following. Person 1 gets a hat at random out of the three available. Then person 2 gets a hat at random out of the remaining hats. Then person 3 gets the remaining hat. Each time that there is a choice, each one of the available hats is equally likely to be picked as any other hat. Let us calculate the probability, let's say, that this particular permutation materializes. The probability that person 1 gets hat number 2 is 1/3. Then we're left with two hats. Person 2 has 2 hats to choose from. The probability that they pick this particular hat is going to be 1/2. And finally, person 3 has only 1 hat available, so it will be picked with probability 1. So the probability of this particular permutation is one over 3 factorial. But you can repeat this argument and consider any other permutation, and you will always be getting the same answer. Any particular permutation has the same probability, one over 3 factorial. The same argument goes through for the case of general n, n people and n hats. And we will find that any permutation will have the same probability, 1/n factorial. Therefore, the process of picking one hat at a time is probabilistically identical to a model in which we simply state that all permutations are equally likely. Now that we have described our model and our process and the associated probabilities, let us consider the question we want to answer. Let X be the number of people who get their own hat back. For example, for the outcome that we have drawn here, the only person who gets their own hat back is person 3. And so in this case X happens to take the value of 1. What we want to do is to calculate the expected value of the random variable X. The problem is difficult because if you try to calculate the PMF of the random variable X and then use the definition of the expectation to calculate this sum, you will run into big difficulties. Calculating this quantity, the PMF of X, is difficult.
And it is difficult because there is no simple expression that describes it. So we need to do something more intelligent, find some other way of approaching the problem. The trick that we will use is to employ indicator variables. Let Xi be equal to 1 if person i selects their own hat and 0 otherwise. So then, each one of the Xi's is 1 whenever a person has selected their own hat. And by adding all the 1's that we may get, we obtain the total number of people who have selected their own hats. This makes things easier, because now to calculate the expected value of X it's sufficient to calculate the expected value of each one of those terms and add the expected values, which we're allowed to do because of linearity. So let's look at the typical term here. What is the expected value of Xi? If you consider the first description of our model-- all permutations are equally likely-- this description is symmetric with respect to all of the persons. So the expected value of Xi should be the same as the expected value of X1. Now, to calculate the expected value of X1, we will consider the sequential description of the process in which 1 is the first person to pick a hat. Now, since X1 is a Bernoulli random variable that takes values 0 or 1, the expected value of X1 is just the probability that X1 is equal to 1. And if person 1 is the first one to choose a hat, that person has probability 1/n of obtaining the correct hat. So each one of these random variables has an expected value of 1/n. The expected value of X by linearity is going to be the sum of the expected values. There are n of them. Each expected value is 1/n. And so the final answer is 1. This is the expected value of the random variable X. Let us now move and try to calculate a more difficult quantity, namely, the variance of X. How shall we proceed? Things would be easiest if the random variables Xi were independent. Because in that case, the variance of X would be the sum of the variances of the Xi's. But are the Xi's independent? Let us consider a special case. Suppose that we only have two persons and that I tell you that the first person got their own hat back. In that case, the second person must have also gotten their own hat back. If, on the other hand, person 1 did not get their own hat back, then person 2 will not get their own hat back either. Because in this scenario, person 1 gets hat 2, and that means that person 2 gets hat 1. So we see that knowing the value of the random variable X1 tells us a lot about the value of the random variable X2. And that means that the random variables X1 and X2 are dependent. More generally, if I were to tell you that the first n minus 1 people got their own hats back, then the last remaining person will have his or her own hat available to be picked. That's going to be the only available hat. And then person n will also get their hat back. So we see that the information about some of the Xi's gives us information about the remaining one, Xn. And again, this means that the random variables are dependent. Since we do not have independence, we cannot find the variance by just adding the variances of the different random variables. Instead, we need to do a bit more work. In general, whenever we need to calculate variances, it is usually simpler to carry out the calculation using this alternative form for the variance. So let us start towards a calculation of the expected value of X squared. Now the random variable X squared, by simple algebra, is this expression times itself.
And by expanding the product we get all sorts of terms. Some of these terms will be of the type X1 times X1 or X2 times X2. These will be terms of this form, and there are n of them. And then we get cross terms, such as X1 times X2, X1 times X3, X2 times X1, and so on. How many terms do we have here? Well, if we have n terms multiplying n other terms we have a total of n squared terms. n are already here, so the remaining terms, which are the cross terms, will be n squared minus n. Or, in a simpler form, it's n times n minus 1. So now how are we going to calculate the expected value of X squared? Well, we will use linearity of expectations. So we need to calculate the expected value of Xi squared, and we also need to calculate the expected value of Xi Xj when i is different from j. Let us start with Xi squared. First, if we use the symmetric description of our model-- all permutations are equally likely-- then all persons play the same role. There's symmetry in the problem. So Xi squared has the same distribution as X1 squared. Then, X1 is a 0-1 random variable, a Bernoulli random variable. So X1 squared will always take the same numerical value as the random variable X1. This is a very special case which happens only because the random variable takes values in {0, 1}. And 0 squared is the same as 0. 1 squared is the same as 1. This expected value is something that we have already calculated, and it is 1/n. Let us now move to the calculation of the expectation of a typical term inside the sum. So let i be different from j, and look at the expected value of Xi Xj. Once more, because of the symmetry of the probabilistic model, it doesn't matter which i and j we are considering. So we might as well consider the product of X1 with X2. Now, X1 and X2 take values 0 and 1. And the product of the two also takes values 0 and 1. So this is a Bernoulli random variable, and so the expectation of that random variable is just the probability that this random variable is equal to 1. But for the product to be equal to 1, the only way that this can happen is if both of these random variables happen to be equal to 1. Let us now turn to the sequential description of the model. The probability that the first person gets their own hat back and the second person gets their own hat back is the probability that the first one gets their own hat back, multiplied by the conditional probability that the second person gets their own hat back, given that the first person got their own hat back. What are these probabilities? The probability that a person gets their own hat back is 1/n. Given that person 1 got their own hat back, person 2 is faced with a situation where there are n minus 1 available hats. And one of those is that person's hat. So the probability that person 2 will also pick his or her own hat is 1 over n minus 1. Now we are in a position to calculate the expected value of X squared. The expected value of X squared consists of the sum of n expected values, each one of which is equal to 1/n, plus n times n minus 1 expected values-- because we have that many cross terms-- each one of which, by this calculation, is 1/n times 1 over n minus 1. And we see that we get cancellations here. And we obtain 1 plus 1, which is equal to 2. On the other hand, we have this term that we need to subtract. We found previously that the expected value of X is equal to 1, so its square is also equal to 1, and we need to subtract 1. And the final answer to our problem is that the variance of X is also equal to 1.
So what we saw in this problem is that we can deal with quite complicated models, but by breaking them down into more manageable pieces, first break down the random variable X as a sum of different random variables, then taking the square of this and break it down into a number of different terms, and then by considering one term at a time, we can often end up with the solutions or the answers to problems that would have been otherwise quite difficult.
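A quick Monte Carlo check (a sketch, not part of the lecture) agrees with the result that both the mean and the variance of the number of matches are equal to 1, for any n.

```python
import random

def matches(n):
    hats = list(range(n))
    random.shuffle(hats)                       # a uniformly random permutation
    return sum(1 for i, h in enumerate(hats) if i == h)

n, trials = 10, 200_000
samples = [matches(n) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, var)                               # both close to 1
```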
MIT_RES6012_Introduction_to_Probability_Spring_2018
L042_The_Counting_Principle.txt
In this segment we introduce a simple but powerful tool, the basic counting principle, which we will be using over and over to deal with counting problems. Let me describe the idea through a simple example. You wake up in the morning and you find that you have in your closet 4 shirts, 3 ties, and 2 jackets. In how many different ways can you get dressed today? To answer this question, let us think of the process of getting dressed as consisting of three steps, three stages. You first choose a shirt, let's say this one, and you have 4 choices of shirts. But each shirt can be used together with 1 of the 3 available ties to make 3 different shirt-tie combinations. But since we had 4 choices for the shirt, this means that we have 4 times 3, equals 12, shirt-tie combinations. Finally, you choose a jacket. Each shirt-tie combination can go together with either jacket, and so the fact that you have 2 jackets available doubles the number of options that you have, leading to 24 different options overall. So 24 is the answer to this simple problem. And how did the number 24 come about? Well, 24 is the same as the number of options you had in the first stage times the number of options you had in the second stage times the number of options you had in the third stage. Let us generalize. Suppose we want to construct some kind of object, and we're going to construct it through a sequential process, through a sequence of r different stages. In the example that we just considered, the number of stages was equal to 3. At each one of the stages, you have a number of options that are available. So in our example, at the first stage we had 4 options, at the second stage we had 3 options, and at the last stage we had 2 options. What is important is that when you reach stage i, no matter what you chose, no matter what you did at the previous stages, the number of options that you will have available at stage i is going to be that fixed number, n-sub-i. So what is the answer? How many different objects can you construct this way? Well, just generalizing from what we did in our specific example, the answer is the product of the number of choices or options that you had at each stage. This is the counting principle. It's a very simple idea, but it is powerful. It will allow us to solve fairly complicated counting problems. However, before we go into more complicated problems, let us first deal with a few relatively easy examples. In our first example, let us consider license plates that consist of 2 letters followed by 3 digits. The question is, how many different license plates are there? We think of the process of constructing a license plate as a sequential process. At the first stage we choose a letter, and we have 26 choices for the first letter. Then we need to choose the second letter, and we have 26 choices for that one. Then we choose the first digit. We have 10 choices for it. We choose the second digit, for which we have 10 choices. And finally, we choose the last digit, for which we also have 10 choices. So if you multiply these numbers, you can find the number of different license plates that you can make with 2 letters followed by 3 digits. Now let us change the problem a little bit and require that no letter and no digit can be used more than once. So, let us think of a process by which we could construct license plates of this kind. In the first stage, we choose the first letter that goes to the license plate, and we have 26 choices. 
Now, let us go into a second stage where we choose the second letter. Because we used 1 letter in the first stage, this means that there are only 25 available letters that can be used. We only have 25 choices at the second stage. Now, let us start dealing with the digits. We choose the first digit, and we have 10 choices for it. However, when we go and choose the next digit, we will only have 9 choices, because 1 of the digits has already been used. At this point, 2 digits have been used, which means that at the last stage we have only 8 digits to choose from. So by multiplying these numbers, we can find the answer to this question, the number of license plates when repetition is prohibited. Let us now consider a different example. Suppose that we start with a set that consists of n elements. What we want to do is to take these n elements and order them. A terminology that's often used here is that we want to form a permutation of these n elements. One way of visualizing permutations is to say that we're going to take these elements of the set, which are unordered, and we're going to place them in a sequence of slots. So we create n slots. And we want to put each one of these elements into one of these slots. How do we go about it? We think of putting the elements into slots, one slot at a time. We first consider the first slot. We pick one of the elements and put it there. How many choices do we have at this stage? We have n choices, because we can pick any of the available elements and place it in that slot. Next, we pick another element and put it inside the second slot. How many choices do we have at this step? Well, we have already used one of the available elements, which means that there are n minus 1 elements to choose from at the next stage. At this point, we have used 2 of the elements. There are n minus 2 left. We pick one of them and put it in the third slot, and we have n minus 2 choices at this point. We continue this way. We keep going on. At some point we have placed n minus 1 of the elements into slots. There's only one element left, and that element, necessarily, will get into the last slot. There are no choices to be made at this point. So the overall number of ways that we can carry out this process and put the elements into the n slots is, by the counting principle, the product of the number of choices that we had at each one of the stages. So it's the product of the numbers n, n minus 1, n minus 2, all the way down to 1. And this product we denote, as a shorthand, this way, which we read as n factorial. n factorial is the product of all integers from 1 all the way up to n. And in particular, the number of permutations of n elements is equal to n factorial. Let us now consider another example. We start again with a general set, which consists of n elements. And we're interested in constructing a subset of that set. In how many different ways can we do that? How many different subsets are there? Let us think of a sequential process through which we can choose the subset. The sequential process proceeds by considering each one of the elements of our set, one at a time. We first consider the first element, and here we have 2 choices. Do we put it inside the subset or not? So 2 choices for the first element. Then we consider the second element. Again, we have 2 choices. Do we put it in the subset or not? We continue this way until we consider all the elements. There are n of them.
And the overall number of choices that we have is the product of 2 times 2 times 2, n times, which is 2 to the power n. At this point, we can also do a sanity check to make sure that our answer is correct. Let us consider the simple and special case where n is equal to 1, which means we're starting with this set with 1 element, and we want to find the number of subsets that it has. According to the answer that we derived, this should have 2 to the first, that is 2 subsets. Which ones are they? One subset of this set is the set itself and the other subset is the empty set. So we do have, indeed, 2 subsets out of that set, which agrees with the answer that we found. Notice that when we count subsets of a given set, we count both the set itself, the whole set, and we also count the empty set. All of these are subsets of our set. At this point, we can now pause and you can try to answer some simple questions of the same kind as the ones that we just practiced.
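Each of the counts in this segment can be verified with a few lines of code. Here is a minimal sketch in Python (the variable names and the brute-force checks are mine, not part of the lecture) that reproduces the 24 outfits, the two license-plate counts, the n factorial permutations, and the 2 to the n subsets.

```python
from itertools import permutations, product
from math import factorial

# Getting dressed: 4 shirts, 3 ties, 2 jackets.
print(4 * 3 * 2)             # 24 outfits

# License plates: 2 letters followed by 3 digits.
print(26 * 26 * 10**3)       # 676000

# License plates with no repeated letter and no repeated digit.
print(26 * 25 * 10 * 9 * 8)  # 468000

# Permutations of n elements: check n! by brute-force enumeration for a small n.
n = 4
assert len(list(permutations(range(n)))) == factorial(n)

# Subsets of an n-element set: each element is either in or out,
# so the in/out patterns are exactly the length-n binary strings.
assert len(list(product([0, 1], repeat=n))) == 2 ** n
```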
MIT_RES6012_Introduction_to_Probability_Spring_2018
L074_Independence_of_Random_Variables.txt
We now come to a very important concept, the concept of independence of random variables. We are already familiar with the notion of independence of two events. We have the mathematical definition, and the interpretation is that conditional probabilities are the same as unconditional ones. Intuitively, when you are told that B occurred, this does not change your beliefs about A, and so the conditional probability of A is the same as the unconditional probability. We have a similar definition of independence of a random variable and an event A. The mathematical definition is that event A and the event that X takes on a specific value, that these two events are independent in the ordinary sense. So the probability of both of these events happening is the product of their individual probabilities. But we require this to be true for all values of little x. Intuitively, if I tell you that A occurred, this is not going to change the distribution of the random variable x. This is one interpretation of what independence means in this context. And this has to be true for all values of little x, that is, when [the] event occurs, the probabilities of any particular little x [are] going to be the same as the original unconditional probabilities. We also have a symmetrical interpretation. If I tell you the value of X, then the conditional probability of event A is not going to change. It's going to be the same as the unconditional probability. And again, this is going to be the case for all values of X. So, no matter what they tell you about X, your beliefs about A are not going to change. We can now move and define the notion of independence of two random variables. The mathematical definition is that the event that X takes on a value little x and the event that Y takes on a value little y, these two events are independent, and this is true for all possible values of little x and little y. In PMF notation, this relation here can be written in this form. And basically, the joint PMF factors out as a product of the marginal PMFs of the two random variables. Again, this relation has to be true for all possible little x and little y. What does independence mean? When I tell you the value of y, and no matter what value I tell you, your beliefs about X will not change. So that the conditional PMF of X given Y is going to be the same as the unconditional PMF of X. And this has to be true for any values of the arguments of these PMFs. There is also a symmetric interpretation, which is that the conditional PMF of Y given X is going to be the same as the unconditional PMF of Y. We have the symmetric interpretation because, as we can see from this definition, X and Y have symmetric roles. Finally, we can define the notion of independence of multiple random variables by a similar relation. Here, the definition is for the case of three random variables, but you can imagine how the definition for any finite number of random variables will go. Namely, the joint PMF of all the random variables can be expressed as the product of the corresponding marginal PMFs. What is the intuitive interpretation of independence here? It means that information about some of the random variables will not change your beliefs, the probabilities, about the remaining random variables. Any conditional probabilities and any conditional PMFs will be the same as the unconditional ones. 
In the real world, independence models situations where each of the random variables is generated in a decoupled manner, in a separate probabilistic experiment. And these probabilistic experiments do not interact with each other and have no common sources of uncertainty.
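To make the factorization condition concrete, here is a small sketch in Python. The helper function is_independent is my own construction, not something from the lecture; it takes a joint PMF stored as a dictionary, computes the two marginals, and checks whether the joint PMF equals the product of the marginals at every pair of values.

```python
from itertools import product

def is_independent(joint, tol=1e-12):
    # joint maps pairs (x, y) to probabilities p_{X,Y}(x, y)
    xs = sorted({x for x, _ in joint})
    ys = sorted({y for _, y in joint})
    p_x = {x: sum(joint.get((x, y), 0.0) for y in ys) for x in xs}
    p_y = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}
    return all(abs(joint.get((x, y), 0.0) - p_x[x] * p_y[y]) <= tol
               for x, y in product(xs, ys))

# A joint PMF built as a product of marginals: independent by construction.
indep = {(x, y): px * py
         for x, px in [(0, 0.3), (1, 0.7)]
         for y, py in [(0, 0.5), (1, 0.5)]}
print(is_independent(indep))   # True

# A joint PMF where Y always equals X: knowing Y changes our beliefs about X.
dep = {(0, 0): 0.5, (1, 1): 0.5}
print(is_independent(dep))     # False
```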
MIT_RES6012_Introduction_to_Probability_Spring_2018
S017_About_the_Order_of_Summation_in_Series_with_Multiple_Indices.txt
We now continue our discussion of infinite series. Sometimes we have to deal with series where the terms being added are indexed by multiple indices, as in this example here. We're given numbers, aij, and i ranges over all the positive integers. j also ranges over all the positive integers. So what does this sum represent? We can think of it as follows. We have here a two-dimensional grid that corresponds to all the pairs (i,j). And in essence, each one of those points corresponds to one of the terms that we want to add. So we can sum the different terms in some arbitrary order. Let's say we start from here. Take that term, add this term, then add this term here, then add this term, then the next term, next term, and so on. And we can keep going that way, adding the different terms according to some sequence. So essentially, what we're doing here is we're taking this two-dimensional grid and arranging the terms associated with that grid, in some particular linear order. And we're summing those terms in sequence. As long as this sum converges to something as we keep adding more and more terms, then this double series will be well defined. Notice, however, we can add those terms in many different orders. And in principle, those different orders might give us different kinds of results. On the other hand, as long as the sum of the absolute values of all the terms turns out to be finite, then the particular order in which we're adding the different terms will turn out that it doesn't matter. There's another way that we can add the terms together, and this is the following. Let us consider fixing a particular choice of i, and adding all of the terms that are associated with this particular choice of i, as j ranges from 1 to infinity. So what we're doing is we're taking the summation from j equal to 1 to infinity, while keeping the value of i fixed. We do this for every possible i. So for every possible i, we're going to get a particular number. And then we take the numbers that we obtain for the different choices if i, so i ranges from 1 to infinity. And we add all those terms together. So this is one particular order, one particular way of doing the infinite summation. Now, why start with the summation over j's while keeping i fixed? There's no reason for that. We could also carry out the summation by fixing a particular choice of j and summing over all i's. So now it is i that ranges from 1 to infinity. And we look at this infinite sum. This is the infinite sum of those terms. We obtain one such infinite sum for every choice of j. And then we take that sum that we obtain for any particular choice of j, and add over the different possible values of j. So j goes from 1 to infinity. This is a different way of carrying out the summation. And these are going to give us the same result, and the same result that we would also obtain if we were to add the terms in this particular order, as long as the double series is well-defined, in the following sense. If we have a guarantee that the sum of the absolute values of those numbers is finite, no matter which way we add them, then it turns out that we can use any particular order to add the terms in the series. We're going to get the same result. And we can also carry out the double summation by doing-- by adding over one index at a time. A word of caution-- this condition is not always satisfied. And in those cases, strange things can happen. 
Suppose that the sequences we're dealing with, the aij's, take those particular values indicated in this picture. And all the remaining terms, the aij's associated with the other dots, are all 0's. So all these terms out there will be 0's. If we carry out the summation by fixing a j and adding over all i's, what we get here is 0, and a 0, and a 0, and a 0. That's because in each row we have a 1 and a minus 1, which cancel out and give us 0's. So if we carry out the summation in this manner, we get a sum of 0's, which is 0. But if we carry out the summation in this order, fix an i, and then add over all j's, the first term that we get here is going to be 1, because in this column, this is the only non-zero number. And then in the remaining columns, as we add the terms, we're going to get 0's, and 0's, and so on. And so if we carry out the summation in this way, we obtain a 1. So this is an example that shows you that the order of summation actually may matter. In this example, the sum of the absolute values of all of the terms that are involved is infinity, because we have infinitely many plus or minus 1's, so this condition here is not satisfied in this example. Let us now consider the case where we want to add the terms of a double sequence, but over a limited range of indices as in this example, where we have coefficients aij, which we want to add, but only for those i's and j's for which j is less than or equal to i. Graphically, this means that we only want to consider the pairs shown in this picture. So these points here correspond to i,j pairs for which i is equal to j. Terms on the right, or points to the right, correspond to i,j pairs for which i is at least as large as j. We can carry out this summation in two ways. One way is the following. We fix a value of i, and we consider all of the corresponding terms, that correspond to different choices of j. But we only go up to the point where i is equal to j. This is the largest term. So what are we doing here? We're taking the coefficients aij, and we are adding over all j's, starting from 1, which corresponds to this term. And j goes up to the point where it becomes equal to i. We do this for every value of i. And so we get a number for the sum of each one of the columns, and then we add those numbers together. So we're adding over all i's, and i ranges from 1 up to infinity. This is one way of carrying out the summation. Alternatively, we could fix a value of j, and consider doing the summation over all choices of i. So this corresponds to the sum over all choices of i, from where? The smallest term, the first term, happens when i is equal to the value of j. And then we have larger choices of i, numbers for which i is bigger than the corresponding value of j. And so i ranges from j all the way to infinity. And this is the sum over one of the rows in this diagram. We do this for every j. We get a result, and then we need to add all those results together. So we're summing for all j's from 1 up to infinity. So these are two different ways that we can evaluate this series associated with a double sequence. We can either add over all j's first and then over i's, or we can sum over all i's first, and then over all j's. The two ways of approaching this problem, this summation, should give us the same answer. And this is going to be, again, subject to the usual qualification. 
As long as the sum of the absolute values of the terms that we're trying to add is less than infinity-- if this condition is true, then the two ways of carrying out the summation are equal, and they're just two different alternatives.
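The cautionary example above can be reproduced numerically. The specific formula for the aij's below is my own choice, made to match the description of the picture: a 1 on the diagonal, a minus 1 just below it, and 0 everywhere else. Because each fixed row or column contains only two nonzero entries, the inner sums can be computed exactly, and the two orders of summation give different totals.

```python
def a(i, j):
    # +1 on the diagonal, -1 just below it, 0 elsewhere (indices start at 1)
    if i == j:
        return 1
    if i == j + 1:
        return -1
    return 0

N = 1000  # how many inner sums to accumulate in the outer summation

# Fix j and sum over all i first: each inner sum is 1 + (-1) = 0.
fix_j_first = sum(sum(a(i, j) for i in (j, j + 1)) for j in range(1, N + 1))

# Fix i and sum over all j first: the i = 1 inner sum is 1, all others cancel.
fix_i_first = sum(sum(a(i, j) for j in (i - 1, i) if j >= 1)
                  for i in range(1, N + 1))

print(fix_j_first)  # 0
print(fix_i_first)  # 1
```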
MIT_RES6012_Introduction_to_Probability_Spring_2018
S011_Sets.txt
In this segment, we will talk about sets. I'm pretty sure that most of what I will say is material that you have seen before. Nevertheless, it is useful to do a review of some of the concepts, the definitions, and also of the notation that we will be using. So what is a set? A set is just a collection of distinct elements. So we have some elements, and we put them together. And this collection, we call it the set S. More formally, how do we specify a set? We could specify a set by listing its elements, and putting them inside braces. So this is a set that consists of four elements, the letters a, b, c, d. Another set could be the set of all real numbers. Notice a distinction here-- the first set is a finite set. It has a finite number of elements, whereas the second set is infinite. And in general, sets are of these two kinds. Either they're finite, or they're infinite. A piece of notation now. We use this notation to indicate that a certain object x is an element of a set S. We read that as x belongs to S. If x is not an element of S, then we use this notation to indicate it, and we read it as x does not belong to S. Now, one way of specifying sets is as follows. We start with a bigger set-- for example, the set of real numbers-- and we consider all of those x's that belong to that big set that have a certain property. For example, that the cosine of this number is, let's say, bigger than 1/2. This is a way of specifying a set. We start with a big set, but we then restrict to those elements of that set that satisfy a particular property. One set of particular interest is the following. Sometimes, in a given context, we want to fix a collection of all possible objects that we might ever want to consider, and that collection will be a set. We denote it usually by omega, and we call it the universal set. So having fixed a universal set, we will only consider smaller sets that lie inside that big universal set. And once we have a universal set, we can talk about the collection of all objects, or elements, that belong to our universal set, but do not belong to the set S. So that would be everything outside the set S. Everything outside the set S, we denote it this way, and we call it the complement of the set S. And it is defined formally as follows-- an element x belongs to the complement of S if x is an element of our universal set, and also x does not belong to S. Notice that if we take the complement of the complement-- that is, anything that does not belong to the green set-- we get back the red set. So what this is saying is that the complement of the complement of a set is the set itself. Another set of particular interest is the so-called empty set. The empty set is a set that contains no elements. In particular, if we take the complement of the universal set-- well, since the universal set contains everything, there is nothing in its complement, so its complement is going to be the empty set. Finally, one more piece of notation. Suppose that we have two sets, and one set is bigger than the other. So S is the small set here, and T is the bigger set. We denote this relation by writing this expression, which we read as follows-- S is a subset of the set T. And what that means is that if x is an element of S, then such an x must also be an element of T. Note that when S is a subset of T, there is also the possibility that S is equal to T.
One word of caution here-- the notation that we're using here is the same as what in some textbooks is written this way-- that is, S is a subset of T, but can also be equal to T. We do not use this notation, but that's how we understand it. That is, we allow for the possibility that the subset is equal to the larger set. Now when we have two sets, we can talk about their union and their intersection. Let's say that this is set S, and this is set T. The union of the two sets consists of all elements that belong to one set or the other, or to both. The union is denoted this way, and the formal definition is that some element belongs to the union if and only if this element belongs to one of the sets, or it belongs to the other one of the sets. We can also form the intersection of two sets, which we denote this way, and which stands for the collection of elements that belong to both of the sets. So formally, an element belongs to the intersection of two sets if and only if that element belongs to both of them. So x must be an element of S, and it must also be an element of T. By the way, we can also define unions and intersections of more than two sets, even of infinitely many sets. So suppose that we have an infinite collection of sets. Let's denote them by Sn. So n ranges over, let's say, all of the positive integers. So pictorially, you might think of having one set, another set, a third set, a fourth set, and so on, and we have an infinite collection of such sets. Given this infinite collection, we can still define their union to be the set of all elements that belong to at least one of those sets Sn that we started with. That is, an element is going to belong to that union if and only if this element belongs to at least one of the sets that we started with. We can also define the intersection of an infinite collection of sets. We say that an element x belongs to the intersection of all these sets if and only if x belongs to Sn for all n. So if x belongs to each one of those Sn's, then we say that x belongs to their intersection. Set operations satisfy certain basic properties. One of these we already discussed. This property, for example, is pretty clear. The union of a set with another set is the same no matter in which order we consider the two sets. If you take the union of three sets, you can do it by forming, first, the union of these two sets, and then the union with this one; or, do it in any alternative order. Both expressions are equal. Because of this, we do not really need the parentheses, and we often write just this expression here, which is the same as this one. And the same would be true for intersections. That is, the intersection of three sets is the same no matter how you put parentheses around the different sets. Now if you take the union of a set with the universal set, you cannot get anything bigger than the universal set, so you just get the universal set. On the other hand, if you take the intersection of a set with the universal set, what is left is just the set itself. Perhaps the most complicated properties on this list are this one and this one, which express a sort of distributive property of intersections and unions. And I will let you convince yourselves that these are true. The way that you verify them is by proceeding logically. If x is an element of this, then x must be an element of S, and it must also be an element of either T or U.
Therefore, it's going to belong either to this set-- it belongs to S, and it also belongs to T-- or it's going to be an element of that set-- it belongs to S, and it belongs to U. So this argument shows that this set here is a subset of that set. Anything that belongs here belongs there. Then you need to reverse the argument to convince yourself that anything that belongs here belongs also to the first set, and therefore, the two sets are equal. Here, I'm using the following fact-- that if S is a subset of T, and T is a subset of S, this implies that the two sets are equal. And then you can use a similar argument to convince yourselves about this equality, as well. So this is it about basic properties of sets. We will be using some of these properties all of the time without making any special comment about them.
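Since these identities map directly onto Python's built-in set type, here is a small sketch that spot-checks them on one concrete example. The particular universal set and the subsets S, T, U are my own illustration, not sets used in the lecture.

```python
omega = set(range(10))          # a small universal set for the check
S, T, U = {0, 1, 2, 3}, {2, 3, 4, 5}, {3, 5, 7}

def complement(A):
    return omega - A

# The complement of the complement gives back the original set.
assert complement(complement(S)) == S

# Union and intersection with the universal set.
assert S | omega == omega
assert S & omega == S

# The distributive properties discussed at the end of the segment.
assert S & (T | U) == (S & T) | (S & U)
assert S | (T & U) == (S | T) & (S | U)

print("all identities hold for this example")
```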
MIT_RES6012_Introduction_to_Probability_Spring_2018
L018_A_Continuous_Example.txt
We will now go through a probability calculation for the case where we have a continuous sample space. We revisit our earlier example in which we were throwing a dart into a square target, the square target being the unit square. And we were guaranteed that our dart would fall somewhere inside this set. So our sample space is the unit square itself. We have a description of the sample space, but we do not yet have a probability law. We need to specify one. The choice of a probability law could be arbitrary. It's up to us to choose how to model a certain situation. And to keep things simple, we're going to assume that our probability law is a uniform one, which means that the probability of any particular subset of the sample space is going to be the area of that subset. So if we have some subset lying somewhere here and we ask what is the probability that we fall into that subset? The probability is exactly the area of that particular subset. Once more, this is an arbitrary choice of a probability law. There's nothing in our assumptions so far that would force us to make this particular choice. And we just use it for the purposes of this example. So now let us calculate some probabilities. Let us look at this event. This is the event that the sum of the two numbers that we get in our experiment is less than or equal to 1/2. It is always useful to work in terms of a picture and to depict that event in a picture of the sample space. So in terms of that sample space, the points that make this event to be true are just a triangle that lies below the line, where this is the line, that's x plus y equals 1/2. Anything below that line, these are the outcomes that make this event happen. So we're trying to find the probability of this red event. We have assumed that probability is equal to area. Therefore, the probability we're trying to calculate is the area of a triangle. And the area of a triangle is 1/2 times the base of the triangle, which is 1/2 in our case, times the height of the triangle, which is again 1/2 in our case. And the end result is 1/8. Let us now calculate another probability. Now, this is an event that consists of only a single element. We take the point 0.5, 0.3, which sits somewhere here. The event of interest is a set, but that set consists of a single point. So we're asking for the probability that our dart falls exactly on top of that point. What is it? Well, it is the area of a set that consists of a single point. What is the area of a single point? It is 0. And similarly for any other single point inside that sample space that we might have considered, the answer is going to be 0. Let us now abstract from this example, as well as the previous one, and note the following. Probability calculations involve a sequence of four steps. Starting with a word description of a problem, of a probabilistic experiment, we first write down the sample space. Then we specify a probability law. Let me emphasize again here that this step has some arbitrariness in it. You can choose any probability law you like, although for your results to be useful it would be good if your probability law captures the real-world phenomenon you're trying to model. Typically you're interested in calculating the probability of some event. That event may be described in some loose manner, so you need to describe it mathematically. And if possible, it's always good to describe it in terms of a picture. Pictures are immensely useful when going through this process. 
And finally, the last step is to go ahead and calculate the probability of the event of interest. Now, a probability law in principle specifies the probability of every event, and there's nothing else to do. But quite often the probability law will be given in some implicit manner, for example, by specifying the probabilities of only some of the events. In that case, you may have to do some additional work to find the probability of the particular event that you care about. This last step sometimes will be easy. Sometimes it may be complicated. But in either case, by following this four-step procedure and by being systematic you will always be able to come up with a single correct answer.
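As a sanity check on the four-step procedure, the answer 1/8 can also be approximated by simulation. The sketch below is not part of the lecture; it simply throws many uniform random points into the unit square and records how often they land in the triangle where x plus y is less than or equal to 1/2.

```python
import random

random.seed(0)
n = 10**6

# Each trial draws a point (x, y) uniformly in the unit square.
hits = sum(1 for _ in range(n)
           if random.random() + random.random() <= 0.5)
print(hits / n)   # close to 1/8 = 0.125

# By contrast, the event of hitting one exact point, such as (0.5, 0.3),
# has zero area and hence probability 0; a simulation will essentially
# never produce it.
```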
MIT_RES6012_Introduction_to_Probability_Spring_2018
L025_A_Radar_Example_and_Three_Basic_Tools.txt
Let us now examine what conditional probabilities are good for. We have already discussed that they are used to revise a model when we get new information, but there is another way in which they arise. We can use conditional probabilities to build a multi-stage model of a probabilistic experiment. We will illustrate this through an example involving the detection of an object up in the sky by a radar. We will keep our example very simple. On the other hand, it turns out to have all the basic elements of a real-world model. So, we are looking up in the sky, and either there's an airplane flying up there or not. Let us call Event A the event that an airplane is indeed flying up there, and we have two possibilities. Either Event A occurs, or the complement of A occurs, in which case nothing is flying up there. At this point, we can also assign some probabilities to these two possibilities. Let us say that through prior experience, perhaps, or some other knowledge, we know that the probability that something is indeed flying up there is 5% and with probability 95% nothing is flying. Now, we also have a radar that looks up there, and there are two things that can happen. Either something registers on the radar screen or nothing registers. Of course, if it's a good radar, probably Event B will tend to go together with Event A. But it's also possible that the radar will make some mistakes. And so we have various possibilities. If there's a plane up there, it's possible that the radar will detect it, in which case Event B will also happen. But it's also conceivable that the radar will not detect it, in which case we have a so-called miss. And so a plane is flying up there, but the radar missed it, did not detect it. Another possibility is that nothing is flying up there, but the radar does detect something, and this is a situation that's called a false alarm. Finally, there's the possibility that nothing is flying up there, and the radar did not see anything either. Now, let us focus on this particular situation. Suppose that Event A has occurred. So we are living inside this particular universe. In this universe, there are two possibilities, and we can assign probabilities to these two possibilities. So let's say that if something is flying up there, our radar will find it with probability 99%, but will also miss it with probability 1%. What's the meaning of this number, 99%? Well, this is a probability that applies to a situation where an airplane is up there. So it is really a conditional probability. It's the conditional probability that we will detect something, the radar will detect the plane, given that the plane is already flying up there. And similarly, this 1% can be thought of as the conditional probability that the complement of B occurs, so the radar doesn't see anything, given that there is a plane up in the sky. We can assign similar probabilities under the other scenario. If there is no plane, there is a probability that there will be a false alarm, and there is a probability that the radar will not see anything. Those four numbers here are, in essence, the specs of our radar. They describe how the radar behaves in a world in which an airplane has been placed in the sky, and some other numbers that describe how the radar behaves in a world where nothing is flying up in the sky. So we have described various probabilistic properties of our model, but is it a complete model? Can we calculate anything that we might wish to calculate? Let us look at this question. 
Can we calculate the probability that both A and B occur? It's this particular scenario here. How can we calculate it? Well, let us remember the definition of conditional probabilities. The conditional probability of an event given another event is the probability of their intersection divided by the probability of the conditioning event. But this doesn't quite help us, because to use this relation to find the numerator, we would need the value of the probability of A given B, which we do not have. We have the value of the probability of B given A. What can we do? Well, we notice that we can use this definition of conditional probabilities, but use it in the reverse direction, interchanging the roles of A and B. If we interchange the roles of A and B, our definition leads to the following expression. The conditional probability of B given A is the probability that both events occur divided by the probability, again, of the conditioning event. Therefore, the probability that A and B occur is equal to the probability that A occurs times the conditional probability that B occurs given that A occurred. And in our example, this is 0.05 times the conditional probability that B occurs, which is 0.99. So we can calculate the probability of this particular event by multiplying probabilities and conditional probabilities along the path in this tree diagram that leads us here. And we can do the same for any other leaf in this diagram. So for example, the probability that this happens is going to be the probability of this event times the conditional probability of B complement given that A complement has occurred. How about a different question? What is the probability, the total probability, that the radar sees something? Let us try to identify this event. The radar can see something under two scenarios. There's the scenario where there is a plane up in the sky and the radar sees it. And there's another scenario where nothing is up in the sky, but the radar thinks that it sees something. So these two possibilities together make up the event B. And so to calculate the probability of B, we need to add the probabilities of these two events. For the first event, we already calculated it. It's 0.05 times 0.99. For the second possibility, we need to do a similar calculation. The probability that this occurs is equal to 0.95 times the conditional probability of B occurring under the scenario where A complement has occurred, and this is 0.1. If we add those two numbers together, the answer turns out to be 0.1445. Finally, a last question, which is perhaps the most interesting one. Suppose that the radar registered something. What is the probability that there is an airplane out there? How do we do this calculation? Well, we can start from the definition of the conditional probability of A given B, and note that we already have in our hands both the numerator and the denominator. So the numerator is this number, 0.05 times 0.99, and the denominator is 0.1445, and we can use our calculators to see that the answer is approximately 0.34. So there is a 34% probability that an airplane is there given that the radar has seen or thinks that it sees something. So the numerical value of this answer is somewhat interesting because it's pretty small. This is so even though we have a very good radar that tells us the right thing 99% of the time under one scenario and 90% of the time under the other. Despite that, given that the radar has seen something, this is not really convincing or compelling evidence that there is an airplane up there.
The probability that there's an airplane up there is only 34% in a situation where the radar thinks that it has seen something. So in the next few segments, we are going to revisit these three calculations and see how they can generalize. In fact, a large part of what is to happen in the remainder of this class will be elaboration on these three ideas. They are three types of calculations that will show up over and over, of course, in more complicated forms, but the basic ideas are essentially captured in this simple example.
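The three calculations in this example, the multiplication rule, the total probability calculation, and the final Bayes-type calculation, are easy to reproduce with the numbers given in the lecture. Here is a short sketch.

```python
p_A = 0.05              # prior probability that a plane is up there
p_B_given_A = 0.99      # radar detects the plane when it is there
p_B_given_notA = 0.10   # false alarm probability when nothing is there

# Multiplication rule: P(A and B) = P(A) * P(B | A)
p_A_and_B = p_A * p_B_given_A
print(round(p_A_and_B, 4))          # 0.0495

# Total probability: P(B) = P(A) P(B|A) + P(A complement) P(B|A complement)
p_B = p_A * p_B_given_A + (1 - p_A) * p_B_given_notA
print(round(p_B, 4))                # 0.1445

# Reverse conditioning: P(A | B) = P(A and B) / P(B)
print(round(p_A_and_B / p_B, 4))    # about 0.3426, roughly 34%
```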
MIT_RES6012_Introduction_to_Probability_Spring_2018
L085_Mean_Variance_of_the_Uniform.txt
As an example of a mean-variance calculation, we will now consider the continuous uniform random variable which we have introduced a little earlier. This is the continuous analog of the discrete uniform, for which we have already seen formulas for the corresponding mean and variance. So let us now calculate the mean or expected value for the continuous case. The mean is defined as an integral that ranges over the entire real line. On the other hand, we recognize that the density is equal to 0 outside the interval from a to b, and therefore, there is going to be no contribution to the integral from those x's outside that interval. This means that we can integrate just over the interval from a to b. And inside that interval, the value of the density is 1 over b minus a. We can carry out this integration and find an answer equal to a plus b over 2, which, interestingly, also happens to be the same as in the discrete case. In fact, we could find this answer without having to run this integration. We could just recognize that this PDF is symmetric around the midpoint of the interval, and the midpoint is a plus b over 2. We now continue with what is involved in the calculation of the expected value of the square of the random variable. Using the expected value rule, this is the integral of x squared times the density, but because of the same argument as before, we only need to integrate from a to b. We can evaluate this integral, and the answer turns out to be 1 over (b minus a) times (b cube over 3 minus a cube over 3). The reason why these cubic terms appear is that the integral of the x square function is x cube divided by 3. Now that we have this quantity available, we're ready to calculate the variance using this alternative formula, which, as we have often discussed, usually provides us a quicker way to carry out the calculation. We take this term, insert it here. We take the square of this term, insert it here. Carry out some algebra, and eventually we find an answer which is equal to b minus a squared over 12. And this is the formula for the variance of a uniform random variable. We can take the square root of this expression to find the standard deviation, and the standard deviation is going to be b minus a divided by the square root of 12. A few observations. First, the formula looks quite similar to the formula for the variance that we had in the discrete case, except that in the discrete case, we have this extra additive factor of 2. More interestingly, and perhaps more important, is that the standard deviation is proportional to the width of this uniform. The wider it is, the larger the standard deviation will be. And this conforms to our intuition that the standard deviation captures the width of a particular distribution. And the variance, of course, becomes larger when the width is larger. And as far as the variance is concerned, it increases with the square of the length of the interval over which we have our distribution.
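The two formulas can be checked numerically. The sketch below, with an arbitrary choice of a and b of my own, draws many samples from a uniform distribution and compares the empirical mean and variance to (a + b) / 2 and (b - a) squared over 12.

```python
import random

random.seed(0)
a, b = 2.0, 10.0
n = 10**6

samples = [random.uniform(a, b) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

print(mean, (a + b) / 2)         # both close to 6.0
print(var, (b - a) ** 2 / 12)    # both close to 64/12, about 5.33
```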
MIT_RES6012_Introduction_to_Probability_Spring_2018
L031_Lecture_Overview.txt
In this lecture, we introduce and develop the concept of independence between events. The general idea is the following. If I tell you that a certain event A has occurred, this will generally change the probability of some other event B. Probabilities will have to be replaced by conditional probabilities. But if the conditional probability turns out to be the same as the unconditional probability, then the occurrence of event A does not carry any useful information on whether event B will occur. In such a case, we say that events A and B are independent. We will develop some intuition about the meaning of independence of two events and introduce an extension, the concept of conditional independence. We will then proceed to define the independence of a collection of more than two events. If, for any two of the events in the collection we have independence between them, we will say that we have pairwise independence. But we will see that independence of the entire collection is something different. It involves additional conditions. Finally, we will close with an application in reliability analysis and with a nice puzzle that will serve as a word of caution about putting together probabilistic models.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L081_Lecture_Overview.txt
In this lecture, we start our discussion of continuous random variables. We will focus on the case of a single continuous random variable, and we'll describe its distribution using a so-called probability density function, an object that will replace the PMFs from the discrete case. We will then proceed to define the expectation and the variance of a continuous random variable, and we'll see that their basic properties remain unchanged. There will be one new concept-- the cumulative distribution function, which will allow us to describe, in a unified manner, both discrete and continuous random variables, even so-called mixed random variables that have both a discrete and a continuous component. In the course of this lecture, we will also introduce some of the most common continuous random variables-- uniform, exponential, and normal. We will pay special attention to the normal distribution and the ways that we can calculate the associated probabilities.
MIT_RES6012_Introduction_to_Probability_Spring_2018
L015_Simple_Properties_of_Probabilities.txt
The probability axioms are the basic rules of probability theory. And they are surprisingly few. But they imply many interesting properties that we will now explore. First, we will see that what you might think of as missing axioms are actually implied by the axioms already in place. For example, we have an axiom that probabilities are non-negative. We will show that probabilities are also less than or equal to 1. We have another axiom that says that the probability of the entire sample space is 1. We will show a counterpart: that the probability of the empty set is equal to 0. This makes perfect sense. The empty set has no elements, so it is impossible. There is 0 probability that the outcome of the experiment would lie in the empty set. We also have another intuitive property. The event that something happens and the event that it does not happen together exhaust all possibilities. And these two probabilities should add to 1. For instance, if the probability of heads is 0.6, then the probability of tails should be 0.4. Finally, we can generalize the additivity axiom, which was originally given for the case of two disjoint events, to the case where we're dealing with the union of several disjoint events. By disjoint here we mean that the intersection of any two of these events is the empty set. We will prove this for the case of three events and then the argument generalizes to the case where we're taking the union of k disjoint events, where k is any finite number. So the intuition of this result is the same as for the case of two events. But we will derive it formally and we will also use it to come up with a way of calculating the probability of a finite set by simply adding the probabilities of its individual elements. All of these statements that we just presented are intuitive. And you do not really need to be convinced about their validity. Nevertheless, it is instructive to see how these statements follow from the axioms that we have put in place. So we will now present the arguments based only on the three axioms that we have available. And in order to be able to refer to these axioms, let us give them some names; call them axioms A, B, and C. We start as follows. Let us look at the sample space and a subset of that sample space. Call it A. And consider the complement of that subset. The complement is the set of all elements that do not belong to the set A. So a set together with its complement makes up everything, which is the entire sample space. On the other hand, if an element belongs to a set A, it does not belong to its complement. So the intersection of a set with its complement is the empty set. Now we argue as follows. We have that the probability of the entire sample space is equal to 1. This is true by our second axiom. Now the sample space, as we just discussed, can be written as the union of an event and the complement of that event. This is just a set theoretic relation. And next, since a set and its complement are disjoint, we can apply the additivity axiom and write this probability as the sum of the probability of event A with the probability of the complement of A. This is one of the relations that we had claimed and which we have now established. Based on this relation, we can also write that the probability of an event A is equal to 1 minus the probability of the complement of that event. And because, by the non-negativity axiom, this quantity here is non-negative, 1 minus something non-negative is less than or equal to 1.
We're using here the non-negativity axiom. And we have established another property, namely that probabilities are always less than or equal to 1. Finally, let us note that 1 is the probability, always, of a set plus the probability of a complement of that set. And let us use this property for the case where the set of interest is the entire sample space. Now, the probability of the entire sample space is itself equal to 1. And what is the complement of the entire sample space? The complement of the entire sample space consists of all elements that do not belong to the sample space. But since the sample space is supposed to contain all possible elements, its complement is just the empty set. And from this relation we get the implication that the probability of the empty set is equal to 0. This establishes yet one more of the properties that we had just claimed a little earlier. We finally come to the proof of the generalization of our additivity axiom from the case of two disjoint events to the case of three disjoint events. So we have our sample space. And within that sample space we have three events, three subsets. And these subsets are disjoint in the sense that any two of those subsets have no elements in common. And we're interested in the probability of the union of A, B, and C. How do we make progress? We have an additivity axiom in our hands, which applies to the case of the union of two disjoint sets. Here we have three of them. But we can do the following trick. We can think of the union of A, B, and C as consisting of the union of this blue set with that green set. Formally, what we're doing is that we're expressing the union of these three sets as follows. We form one set by taking the union of A with B. And we have the other set C. And the overall union can be thought of as the union of these two sets. Now since the three sets are disjoint, this implies that the blue set is disjoint from the green set and so we can use the additivity axiom here to write this probability as the probability of A union B plus the probability of C. And now we can use the additivity axiom once more since the sets A and B are disjoint to write the first term as probability of A plus probability of B. We carry over the last term and we have the relation that we wanted to prove. This is the proof for the case of three events. You should be able to follow this line of proof to write an argument for the case of four events and so on. And you might want to continue by induction. And eventually you should be able to prove that if the sets A1 up to Ak are disjoint then the probability of the union of those sets is going to be equal to the sum of their individual probabilities. So this is the generalization to the case where we're dealing with the union of finitely many disjoint events. A very useful application of this comes in the case where we want to calculate the probability of a finite set. So here we have a sample space. And within that sample space we have some particular elements S1, S2, up to Sk, k of them. And these elements together form a finite set. What can we say about the probability of this finite set? The idea is to take this finite set that consists of k elements and think of it as the union of several little sets that contain one element each. So set theoretically what we're doing is that we're taking this set with k elements and we write it as the union of a set that contains just S1, a set that contains just the second element S2, and so on, up to the k-th element. 
We're assuming, of course, that these elements are all different from each other. So in that case, these sets, these single element sets, are all disjoint. So using the additivity property for a union of k disjoint sets, we can write this as the sum of the probabilities of the different single element sets. At this point, it is usual to start abusing, or rather, simplifying notation a little bit. Probabilities are assigned to sets. So here we're talking about the probability of a set that contains a single element. But intuitively, we can also talk about just the probability of that particular element and use this simpler notation. So when using the simpler notation, we will be talking about the probabilities of individual elements. Although in terms of formal mathematics, what we really mean is the probability of the event that consists only of the particular element S1, and so on.
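As a concrete instance of this last property, here is a short sketch. The fair six-sided die model is my own illustration, not one used in the lecture; the point is simply that the probability of any finite event can be computed by adding the probabilities of its individual elements.

```python
from fractions import Fraction

# A fair six-sided die: probability 1/6 for each individual outcome.
p = {outcome: Fraction(1, 6) for outcome in range(1, 7)}

def prob(event):
    # probability of a finite event, obtained by summing element probabilities
    return sum(p[s] for s in event)

even = {2, 4, 6}
print(prob(even))                    # 1/2
print(prob(set(range(1, 7))))        # 1, the whole sample space
print(prob(even) + prob({1, 3, 5}))  # P(A) + P(A complement) = 1
print(prob(set()))                   # 0, the empty set
```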
MIT_RES6012_Introduction_to_Probability_Spring_2018
L019_Countable_Additivity.txt
We have seen so far an example of a probability law on a discrete and finite sample space as well as an example with an infinite and continuous sample space. Let us now look at an example involving a discrete but infinite sample space. We carry out an experiment whose outcome is an arbitrary positive integer. As an example of such an experiment, suppose that we keep tossing a coin and the outcome is the number of tosses until we observe heads for the first time. The first heads might appear in the first toss or the second or the third, and so on. So in this example, any positive integer is possible. And so our sample space is infinite. Let us now specify a probability law. A probability law should determine the probability of every event, of every subset of the sample space. That is, the probability of every set of positive integers. But instead I will just tell you the probability of events that contain a single element. I'm going to tell you that there is probability 1 over 2 to the n that the outcome is equal to n. Is this good enough? Is this information enough to determine the probability of any subset? Before we look into that question, let us first do a quick sanity check to see whether these numbers that we are given look like legitimate probabilities. Do they add to 1? Let's do a quick check. So the sum over all the possible values of n of the probabilities that we're given, which is an infinite sum starting from 1, all the way up to infinity, of 1 over 2 to the n, is equal to the following. First, we take out a factor of 1/2 from all of these terms, which reduces the exponent from n to n minus 1. This is the same as running the sum from n equals 0 to infinity of 1/2 to the n. And now we have the usual infinite geometric series and we have a formula for this. The geometric series has a value of 1 over 1 minus the number whose power we're taking, which is 1/2. And after we do the arithmetic, this turns out to be equal to 1. So indeed, it appears that we have the basic elements of what it would take to have a legitimate probability law. But now let us look into how we might calculate the probability of some general event. For example, the probability that the outcome is even. We proceed as follows. The probability that the outcome is even, this is the probability of an infinite set that consists of all the even integers. We can write this set as the union of lots of little sets that contain a single element each. So it's the set containing the number 2, the set containing the number 4, the set containing the number 6, and so on. At this point we notice that we're talking about the probability of a union of sets and these sets are disjoint because they contain different elements. So we can use an additivity property and say that this is the probability of obtaining a 2, plus the probability of obtaining a 4, plus the probability of obtaining a 6, and so on. If you're curious about doing this calculation and actually obtaining a numerical answer, you would proceed as follows. You notice that this is 1 over 2 to the second power plus 1 over 2 to the fourth power plus 1 over 2 to the sixth power and so on. Now you factor out a factor of 1/4 and what you're left with is 1 plus 1 over 2 to the second power, which is 1/4, plus 1 over 2 to the fourth power, which is the same as 1/4 to the second power, and so on. And now we have 1/4 times the infinite sum of a geometric series, which gives us 1 over 1 minus 1/4. And after you do the algebra you obtain a numerical answer, which is equal to 1/3.
But leaving the details of the calculation aside, the more important question I want to address is the following. Is this calculation correct? We seem to have used an additivity property at this point. But the additivity properties that we have in our hands at this point only talk about disjoint unions of finitely many subsets. Our initial axiom talked about a disjoint union of two subsets and then later on we established a similar property for a disjoint union of finitely many subsets. But here we're talking about the union of infinitely many subsets. So this step here is not really allowed by what we have in our hands. On the other hand, we would like our theory to allow this kind of calculation. The way out of this dilemma is to introduce an additional axiom that will indeed allow this kind of calculation. The axiom that we introduce is the following. If we have an infinite sequence of disjoint events, as, for example, in this picture. We have our sample space. We have a first event, A1. We have a second event, A2. The third event, A3. And so we keep continuing and we have an infinite sequence of such events. Then the probability of the union of these events, of these infinitely many events, is the sum of their individual probabilities. The key word here is the word sequence. Namely, these events, these sets that we're dealing with, can be arranged so that we can talk about the first event, A1, the second event, A2, the third one, A3, and so on. To appreciate the issue that arises here and to see why the word sequence is so important, let us consider the following calculation. Our sample space is the unit square. And we consider a model where the probability of a set is its area, as in the examples that we considered earlier. Let us now look at the probability of the overall sample space. Our sample space is the unit square and the unit square can be thought of as the union of various sets that consist of single points. So it's the union of subsets with one element each. And it's a union taken over all the points in the unit square. Then we think about additivity. We observe that these subsets are disjoint. If we're considering different points, then we get disjoint single element sets. And then an additivity property would tell us that the probability of this union is the sum of the probabilities of the different single element subsets. Now, as we discussed before, single element subsets have 0 probability. So we have a sum of lots of 0s and the sum of 0s should be equal to 0. On the other hand, by the probability axioms, the probability of the entire sample space should be equal to 1. And so we have established that 1 is equal to 0. This looks like a paradox. Is it? The catch is that there is nothing in the axioms we have introduced so far or the properties we have established that would justify this step. So this step here is questionable. You might argue that the unit square is the union of disjoint single element sets, which is the kind of situation that appears in our additivity axioms. But the additivity axiom only applies when we have a sequence of events. And this is not what we have here. This is not a union of a sequence of single element sets. In fact, there is no way that the elements of the unit square can be arranged in a sequence. The unit square is said to be an uncountable set. This is a deep and fundamental mathematical fact. What it essentially says is that there are two kinds of infinite sets. Discrete ones, or, in formal terminology, countable ones.
These are sets whose elements can be arranged in a sequence, like the integers. And then there are uncountable sets, such as the unit square or the real line, whose elements cannot be arranged in a sequence. If you're curious, you can find the proof of this important fact in the supplementary materials that we are providing. After all this discussion, you may now have legitimate suspicions about the models we have been looking at. Is area a legitimate probability law? Does it even satisfy countable additivity? This question takes us into deep waters and has to do with a deep subfield of mathematics called Measure Theory. Fortunately, it turns out that all is well. Area is a legitimate probability law. It does indeed satisfy the countable additivity axiom, as long as we only deal with nice subsets of the unit square. Fortunately, the subsets that arise in whatever we do in this course will be "nice". Subsets that are not nice are quite pathological, and we will not encounter them. At this stage we are not in a position to say anything more that would be meaningful about these issues, because they're quite complicated and mathematically deep. We can only say that there are some serious mathematical subtleties. But fortunately, they can all be overcome in a rigorous manner. And for the rest of this class, you can just forget about these subtle issues.
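The two sums in this segment are easy to check numerically with truncated series; here is a brief sketch (the truncation level N is an arbitrary choice of mine, large enough that the neglected tail is negligible).

```python
N = 60  # truncation level

# The singleton probabilities 1/2^n should add up to 1.
print(sum(1 / 2 ** n for n in range(1, N + 1)))      # very close to 1

# The probability that the outcome is even: 1/4 + 1/16 + 1/64 + ... = 1/3.
print(sum(1 / 2 ** n for n in range(2, N + 1, 2)))   # very close to 1/3
print(1 / 3)
```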
MIT_RES6012_Introduction_to_Probability_Spring_2018
S014_When_Does_a_Sequence_Converge.txt
So we looked at the formal definition of what it means for a sequence to converge, but as a practical matter, how can we tell whether a given sequence converges or not? There are two criteria that are the most commonly used for that purpose, and it's useful to be aware of them. The first one deals with the case where we have a sequence of numbers that keep increasing, or at least, they do not go down. In that case, those numbers may go up forever without any bound, so if you look at any particular value, there's going to be a time at which the sequence has exceeded that value. In that case, we say that the sequence converges to infinity. But if this is not the case, if it does not converge to infinity, which means that the entries of the sequence are bounded-- they do not grow arbitrarily large-- then, in that case, it is guaranteed that the sequence will converge to a certain number. This is not something that we will attempt to prove, but it is a useful fact to know. Another way of establishing convergence is to derive some bound on the distance of our sequence from the number that we suspect to be the limit. If that distance becomes smaller and smaller-- if we can manage to bound that distance by a certain number and that number goes down to 0-- then it is guaranteed that the sequence, ai, converges to a. And there's a variation of this argument, which is the so-called sandwich argument, and it goes as follows. If we have a certain sequence that converges to some number, a, and we have another sequence that converges to that same number, a, and our sequence is somewhere in-between, then our sequence must also converge to that particular number, a. So these are the usual ways of quickly saying something about the convergence of a given sequence, and we will often be using these types of arguments in this class, but without making a big fuss about them, or without even referring to these facts in an explicit manner.
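As a quick numerical illustration of the sandwich argument (the specific sequence below is an invented example, not one from the lecture), consider ai = 1 + sin(i)/i, which is trapped between 1 - 1/i and 1 + 1/i; since both bounds converge to 1, so does ai.

```python
import math

# Sandwich argument, numerically: a_i = 1 + sin(i)/i is trapped between
# l_i = 1 - 1/i and u_i = 1 + 1/i, and both bounds converge to 1,
# so a_i must converge to 1 as well.
for i in [1, 10, 100, 1000, 10000]:
    lower = 1 - 1.0 / i
    a_i = 1 + math.sin(i) / i
    upper = 1 + 1.0 / i
    print(i, lower, a_i, upper)
```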
MIT_RES6012_Introduction_to_Probability_Spring_2018
L047_Partitions.txt
We now come to our last major class of counting problems. We will count the number of ways that a given set can be partitioned into pieces of given sizes. We start with a set that consists of n different elements. And we have r persons. We want to give n1 items to the first person, give n2 items to the second person, and so on. And finally, we want to give n-sub-r items to the rth person. These numbers, n1, n2, up to nr are given to us, how many items each person should get. And these numbers must add to n so that every item in the original set is given to some person. We want to count the number of ways that this can be done. This is the number of ways that we can partition a given set into subsets of prescribed sizes. Let's use c to denote the number of ways this can be done. We want to calculate this number c. Instead of calculating it directly, we're going to use the same trick that we employed when we counted combinations and derived the binomial coefficient. That is, we're going to consider a much simpler counting problem: the problem of ordering n items, taking the n items in our original set and putting them in an ordered list. Of course, we know in how many ways this can be done. Ordering n items can be done in n factorial ways. This is the count of the number of permutations of n items. But now let us think of a different way of ordering the n items, an indirect way. It proceeds according to the following stages. We start with the n items. And we first distribute them to the different persons. Having done that, then we ask person one to take their items, order them, and put them in the first n1 slots of our list. Then person two takes their items and puts them into the next n2 slots in our list. We continue this way. And finally, the last person takes the items that they possess and puts them in the last n-sub-r slots in this list. In how many ways can this process be carried out? We have c choices on how to partition the given set into subsets. Then person one has n1 factorial choices on how to order the n1 items that that person possesses. Person two has n2 factorial choices for how to order the n2 items that they possess, and so on until the last person, who has nr factorial choices for ordering their elements. This multi-stage process results in an ordered list of the n items. This is the number of ways this multi-stage process can be carried out. On the other hand, we know that the number of possible orderings of the items is n factorial. So we have this equality. We can solve this for c. And we find the answer, that the number of ways that the n items can be partitioned into subsets of the given sizes is n factorial divided by the product of the factorials of the different ni's. This particular expression is called the multinomial coefficient, and it generalizes the binomial coefficient. The binomial coefficient was referring to the case where we essentially split our set into one subset with k elements, and then the second subset gets the remaining elements. So in the special case where r is equal to 2, n1 is equal to k, and n2 is equal to n minus k, this corresponds to a partition of the set into two subsets, or, what is the same, just selecting the first subset and putting everything else in the second subset. And you can check that in this particular case, the expression for the multinomial coefficient agrees with the expression that we had derived for the binomial coefficient.
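The resulting count is easy to compute. Here is a minimal Python sketch (an added illustration; the function name is my own) that evaluates n! divided by the product of the ni factorials and checks the binomial special case.

```python
from math import factorial, comb

def partitions_count(counts):
    """Number of ways to split n = sum(counts) distinct items into groups
    of the given sizes: n! / (n1! * n2! * ... * nr!)."""
    n = sum(counts)
    result = factorial(n)
    for n_i in counts:
        result //= factorial(n_i)   # division is exact at every step
    return result

# Special case r = 2 reduces to the binomial coefficient "n choose k".
print(partitions_count([3, 7]))     # 120
print(comb(10, 3))                  # 120, agrees
print(partitions_count([2, 2, 3]))  # partition of 7 items into sizes 2, 2, 3: 210
```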
MIT_RES6012_Introduction_to_Probability_Spring_2018
L035_Conditional_Independence.txt
Conditional probabilities are like ordinary probabilities, except that they apply to a new situation where some additional information is available. For this reason, any concept relevant to probability models has a counterpart that applies to conditional probability models. In this spirit, we can define a notion of conditional independence, which is nothing but the notion of independence applied to a conditional model. Let us be more specific. Suppose that we have a probability model and two events, A and B. We are then told that event C occurred, and we construct a conditional model. Conditional independence is defined as ordinary independence but with respect to the conditional probabilities. To be more precise, remember that independence is defined in terms of this relation, that the probability of two events happening is the product of the probabilities that one of them is happening times the probability that the other one is happening. This is the definition of independence in the original unconditional model. Now, in the conditional model we just use the same relation, but with conditional probabilities instead of ordinary probabilities. So this is the definition of conditional independence. We may now ask, is there a relation between independence and conditional independence? Does one imply the other? Let us look at an example. Suppose that we have two events and these two events are independent. We then condition on another event, C. And suppose that the picture is like the one shown here. Are A and B conditionally independent? Well, in the new universe where C has happened, events A and B have no intersection. As we discussed earlier, this means that events A and B are extremely dependent. Within C, if A occurs, this tells us that B did not occur. The conclusion from this example is that independence does not imply conditional independence. So in this particular example, we saw that the answer here is no. Given C, A and B are not independent.
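To make this phenomenon concrete, here is a small Python sketch with an invented but standard example (not the one drawn in the lecture): two fair coin tosses, A = "first toss is heads", B = "second toss is heads", and C = "exactly one head". A and B are independent, but given C they become disjoint and hence dependent.

```python
from itertools import product
from fractions import Fraction

# Sample space: two independent fair coin tosses, each outcome has probability 1/4.
outcomes = list(product("HT", repeat=2))
prob = {w: Fraction(1, 4) for w in outcomes}

def P(event):
    return sum(prob[w] for w in outcomes if event(w))

def A(w): return w[0] == "H"                      # first toss is heads
def B(w): return w[1] == "H"                      # second toss is heads
def C(w): return (w[0] == "H") != (w[1] == "H")   # exactly one head

# Unconditional independence: P(A and B) equals P(A) * P(B).
print(P(lambda w: A(w) and B(w)), P(A) * P(B))    # 1/4 and 1/4

# In the conditional model given C, the product relation fails.
P_A_given_C = P(lambda w: A(w) and C(w)) / P(C)
P_B_given_C = P(lambda w: B(w) and C(w)) / P(C)
P_AB_given_C = P(lambda w: A(w) and B(w) and C(w)) / P(C)
print(P_AB_given_C, P_A_given_C * P_B_given_C)    # 0 versus 1/4
```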
MIT_RES6012_Introduction_to_Probability_Spring_2018
L064_Conditional_PMFs_Expectations_Given_an_Event.txt
We now move to a new topic-- conditioning. Every probabilistic concept or probabilistic fact has a conditional counterpart. As we have seen before, we can start with a probabilistic model and some initial probabilities. But then if we are told that a certain event has occurred, we can revise our model and consider conditional probabilities that take into account the available information. But as a consequence, the probabilities associated with any given random variable will also have to be revised. So a PMF will have to be changed to a conditional PMF. Let us see what is involved. Consider a random variable X with some given PMF, whose values, of course, sum to 1, as must be true for any valid PMF. We are then told that a certain event, A, has occurred. In that case, the event that X is equal to-- little x-- will now have a conditional probability of this form. We will use this notation here to denote the conditional probability that the random variable takes the value little x. Notice that the subscripts are used to indicate what we're talking about. In this case, we are talking about the random variable X in a model where event A is known to have occurred. Of course, for this conditional probability to be well defined, we will have to assume that the probability of A is positive. This conditional PMF is like an ordinary PMF, except that it applies to a new or revised conditional model. As such, its entries must also sum to 1. Now the random variable X has a certain mean, expected value, which is defined the usual way. In the conditional model, the random variable X will also have a mean. It is called the conditional mean or the conditional expectation. And it is defined the same way as in the original case, except that now the calculation involves the conditional probabilities, or the conditional PMF. Finally, as we discussed some time ago, a conditional probability model is just another probability model, except that it applies to a new situation. So any fact about probability models-- any theorem that we derive-- must remain true in the conditional model as well. As an example, the expected value rule will have to remain true in the conditional model, except that, of course, in the conditional model, we will have to use the conditional probabilities instead of the original probabilities. So to summarize, conditional models and conditional PMFs are just like ordinary models and ordinary PMFs, except that probabilities are replaced throughout by conditional probabilities. Let us now look at an example. Consider a random variable which, in this case, is uniform and takes values from 1 up to 4. So each one of the possible values has probability 1 over 4. For this random variable, we can calculate the expected value, which by symmetry is going to be the midpoint. So it is equal to 2 and 1/2. We can also calculate the variance. And here we can apply the formula that we have derived earlier-- 1/12 times b minus a times b minus a plus 2. And in this case, it's 1 over 12 times-- b minus a is 4 minus 1, which is 3. And the next term is 5. And after we simplify, this is 5 over 4. Suppose that now somebody tells us that event A has occurred, where event A is that the random variable X takes values in the range 2, 3, 4. What happens now? What is the conditional PMF? In the conditional model, we are told that the value of 1 did not occur, so this probability is going to be 0. The other three values are still possible. What are their conditional probabilities?
Well, these three values had equal probabilities in the original model, so they should have equal probabilities in the conditional model as well. And in order for probabilities to sum to 1, of course, these probabilities will have to be 1/3. So this is the conditional PMF of our random variable, given this new or additional information about the outcome. The expected value of the random variable X in the conditional universe-- that is, the conditional expectation-- is just the ordinary expectation but applied to the conditional model. In this conditional model, by symmetry, the expected value will have to be 3, the midpoint of the distribution. And we can also calculate the conditional variance. This is a notation that we have not yet defined, but at this point, it should be self-explanatory. It is just the variance of X but calculated in the conditional model using conditional probabilities. We can calculate this variance using once more the formula for the variance of a uniform distribution, but we can also do it directly. So the variance is the expected value of the squared distance from the mean. So with probability 1/3, our random variable will take a value of 4, which is one unit apart from the mean, or more explicitly, we have this term. With probability 1/3, our random variable takes a value of 3. And with probability 1/3, our random variable takes the value of 2. This term is 0. This is 1 times 1/3. From here we get another 1 times 1/3. So adding up, we obtain that the variance is 2/3. Notice that the variance in the conditional model is smaller than the variance that we had in the original model. And this makes sense. In the conditional model, there is less uncertainty than there used to be in the original model. And this translates into a reduction in the variance. To conclude, there is nothing really different when we deal with conditional PMFs, conditional expectations, and conditional variances. They are just like the ordinary PMFs, expectations, and variances, except that we have to use the conditional probabilities throughout instead of the original probabilities.
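Here is a short Python sketch (an added illustration) that reproduces the numbers in this example: the conditional PMF, the conditional mean of 3, and the conditional variance of 2/3.

```python
from fractions import Fraction

# Uniform PMF on {1, 2, 3, 4}, then condition on the event A = {X in {2, 3, 4}}.
pmf = {x: Fraction(1, 4) for x in [1, 2, 3, 4]}
A = {2, 3, 4}

p_A = sum(p for x, p in pmf.items() if x in A)
cond_pmf = {x: (p / p_A if x in A else Fraction(0)) for x, p in pmf.items()}

cond_mean = sum(x * p for x, p in cond_pmf.items())
cond_var = sum((x - cond_mean) ** 2 * p for x, p in cond_pmf.items())

print(cond_pmf)   # value 1 gets probability 0; values 2, 3, 4 each get 1/3
print(cond_mean)  # 3
print(cond_var)   # 2/3
```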
MIT_RES6012_Introduction_to_Probability_Spring_2018
L053_Probability_Mass_Functions.txt
A random variable can take different numerical values depending on the outcome of the experiment. Some outcomes are more likely than others, and similarly some of the possible numerical values of a random variable will be more likely than others. We restrict ourselves to discrete random variables, and we will describe these relative likelihoods in terms of the so-called probability mass function, or PMF for short, which gives the probability of the different possible numerical values. The PMF is also sometimes called the probability law or the probability distribution of a discrete random variable. Let me illustrate the idea in terms of a simple example. We have a probabilistic experiment with four possible outcomes. We also have a probability law on the sample space. And to keep things simple, we assume that all four outcomes in our sample space are equally likely. We then introduce a random variable that associates a number with each possible outcome as shown in this diagram. The random variable, X, can take one of three possible values-- namely 3, 4, or 5. Let us focus on one of those numbers-- let's say the number 5. So let us focus on x being equal to 5. We can think of the event that X is equal to 5. Which event is this? This is the event that the outcome of the experiment led to the random variable taking a value of 5. So it is this particular event which consists of two elements, namely a and b. More formally, the event that we're talking about is the set of all outcomes for which the value, the numerical value of our random variable, which is a function of the outcome, that numerical value happens to be equal to 5. And in this example it is a set consisting of two elements. It's a subset of the sample space. So it is an event. And it has a probability. And that probability we will be denoting with this notation. And in our case this probability is equal to 1/2. Because we have two outcomes, each one has probability 1/4. The probability of this event is equal to 1/2. More generally, we will be using this notation to denote the probability of the event that the random variable, X , takes on a particular value, x. This is just a piece of notation, not a new concept. We're dealing with a probability, and we indicate it using this particular notation. More formally, the probability that we're dealing with is the probability, the total probability, of all outcomes for which the numerical value of our random variable is this particular number, x. A few things to notice. We use a subscript, X, to indicate which random variable we're talking about. This will be useful if we have several random variables involved. For example, if we have another random variable on the same sample space, Y, then it would have its own probability mass function which would be denoted with this particular notation here. The argument of the PMF, which is x, ranges over the possible values of the random variable, X. So in this sense, here we're really dealing with a function. A function that we could denote just by p with a subscript x. This is a function as opposed to the specific values of this function. And we can produce plots of this function. In this particular example that we're dealing with, the interesting values of x are 3, 4, and 5. And the associated probabilities are the value of 5 is obtained with probability 1/2, the value of 4-- this is the event that the outcome is c, which has probability 1/4. 
And the value of 3 is also obtained with probability 1/4 because the value of 3 is obtained when the outcome is d, and that outcome has probability 1/4. So the probability mass function is a function of an argument x. And for any x, it specifies the probability that the random variable takes on this particular value. A few more things to notice. The probability mass function is always non-negative, since we're talking about probabilities and probabilities are always non-negative. In addition, since the total probability of all outcomes is equal to 1, the probabilities of the different possible values of the random variable should also add to 1. So when you add over all possible values of x, the sum of the associated probabilities should be equal to 1. In terms of our picture, the event that x is equal to 3, which is this subset of the sample space, the event that x is equal to 4, which is this subset of the sample space, and the event that x is equal to 5, which is this subset of the sample space. These three events-- the red, green, and blue-- they are disjoint, and together they cover the entire sample space. So their probabilities should add to 1. And the probabilities of these events are the probabilities of the different values of the random variable, X. So the probabilities of these different values should also add to 1. Let us now go through a simple example to illustrate the general method for calculating the PMF of a discrete random variable. We will revisit our familiar example involving two rolls of the tetrahedral die. And let X be the result of the first roll, Y be the result of the second roll. And notice that we're using uppercase letters. And this is because X and Y are random variables. In order to do any probability calculations, we also need the probability law. So to keep things simple, let us assume that every possible outcome, there's 16 of them, has the same probability which is therefore 1 over 16 for each one of the outcomes. We will concentrate on a particular random variable defined to be the sum of the random variables, X and Y. So if X and Y both happen to be 1, then Z will take the value of 2. If X is 2 and Y is 1 our random variable will take the value of 3. And similarly if we have this outcome, in those outcomes here, the random variable takes the value of 4. And we can continue this way by marking, for each particular outcome, the corresponding value of the random variable of interest. What we want to do now is to calculate the PMF of this random variable. What does it mean to calculate the PMF? We need to find this value for all choices of z, that is for all possible values in the range of our random variable. The way we're going to do it is to consider each possible value of z, one at a time, and for any particular value find out what are the outcomes-- the elements of the sample space-- for which our random variable takes on the specific value, and add the probabilities of those outcomes. So to illustrate this process, let us calculate the value of the PMF for z equal to 2. This is by definition the probability that our random variable takes the value of 2. And this is an event that can only happen here. It corresponds to only one element of the sample space, which has probability 1 over 16. We can continue the same way for other values of z. So for example, the value of PMF at z equal to 3, this is the probability that our random variable takes the value of 3. 
This is an event that can happen in two ways-- it corresponds to two outcomes-- and so it has probability 2 over 16. Continuing similarly, the probability that our random variable takes the value of 4 is equal to 3 over 16. And we can continue this way and calculate the remaining entries of our PMF. After you are done, you end up with a table-- or rather a graph-- a plot that has this form. And these are the values of the different probabilities that we have computed. And you can continue with the other values. It's a reasonable guess that this was going to be 4 over 16, this is going to be 3 over 16, 2 over 16, and 1 over 16. So we have completely determined the PMF of our random variable. We have given the form of the answers. And it's always convenient to also provide a plot with the answers that we have.
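The whole calculation can be automated. Here is a minimal Python sketch (an added illustration) that enumerates the 16 equally likely outcomes and tabulates the PMF of Z = X + Y.

```python
from fractions import Fraction
from collections import defaultdict

# Two rolls of a fair four-sided die; all 16 outcomes are equally likely.
pmf_Z = defaultdict(Fraction)
for x in range(1, 5):
    for y in range(1, 5):
        pmf_Z[x + y] += Fraction(1, 16)

for z in sorted(pmf_Z):
    print(z, pmf_Z[z])   # 1/16, 2/16, 3/16, 4/16, 3/16, 2/16, 1/16 for z = 2..8

assert sum(pmf_Z.values()) == 1   # PMF values must add to 1
```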
MIT_RES6012_Introduction_to_Probability_Spring_2018
L023_A_Die_Roll_Example.txt
This is a simple example where we want to just apply the formula for conditional probabilities and see what we get. The example involves a four-sided die, if you can imagine such an object, which we roll twice, and we record the first roll, and the second roll. So there are 16 possible outcomes. We assume to keep things simple, that each one of those 16 possible outcomes, each one of them has the same probability, so each outcome has the probability 1/16. Let us consider now a particular event B on which we're going to condition. This is the event under which the smaller of the two die rolls is equal to 2, which means that one of the dice must have resulted in two, and the other die has resulted in something which is 2 or larger. So this can happen in multiple ways. And here are the different ways that it can happen. So at 2, 2, or 2, 3, or 2, 4; then a 3, 2 and a 4, 2. All of these are outcomes in which one of the dice has a value equal to 2, and the other die is at least as large. So we condition on this event. This results in a conditional model where each one of those five outcomes are equally likely since they used to be equally likely in the original model. Now let's look at this quantity. The maximum of the two die rolls-- that is, the largest of the results. And let us try to calculate the following quantity-- the conditional probability that the maximum is equal to 1 given that the minimum is equal to 2. So this is the conditional probability of this particular outcome. Well, this particular outcome cannot happen. If I tell you that the smaller number is 2, then the larger number cannot be equal to 1, so this outcome is impossible, and therefore this conditional probability is equal to 0. Let's do something a little more interesting. Let us now look at the conditional probability that the maximum is equal to 3 given the information that event B has occurred. It's best to draw a picture and see what that event corresponds to. M is equal to 3-- the maximum is equal to 3-- if one of the dice resulted in a 3, and the other die resulted in something that's 3 or less. So this event here corresponds to the blue region in this diagram. Now let us try to calculate the conditional probability by just following the definition. The conditional probability of one event given another is the probability that both of them-- both of the two events-- occur, divided by the probability of the conditioning event. That is, out of the total probability in the conditioning event, we ask, what fraction of that probability is assigned to outcomes in which the event of interest is also happening? So what is this event? The maximum is equal to 3, which is the blue event. And simultaneously, the red event is happening. These two events intersect only in two places. This is the intersection of the two events. And the probability of that intersection is 2 out of 16, since there's 16 outcomes and that event happens only with two particular outcomes. So this gives us 2/16 in the numerator. How about the denominator? Event B consists of a total of five possible outcomes. Each one has probability 1/16, so this is 5/16, so the final answer is 2/5. We could have gotten that same answer in a simple and perhaps more intuitive way. In the original model, all outcomes were equally likely. Therefore, in the conditional model, the five outcomes that belong to B should also be equally likely. Out of those five, there's two of them that make the event of interest to occur. 
So given that we live in B, there's two ways out of five that the event of interest will materialize. So the event of interest has conditional probability [equal to] 2/5.
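Here is a small Python sketch (an added illustration) that reproduces both routes to the answer: the definition of conditional probability, and direct counting within B.

```python
from fractions import Fraction

# Two rolls of a fair four-sided die; each of the 16 outcomes has probability 1/16.
outcomes = [(x, y) for x in range(1, 5) for y in range(1, 5)]

B = [w for w in outcomes if min(w) == 2]    # conditioning event: smaller roll is 2
A = [w for w in outcomes if max(w) == 3]    # event of interest: larger roll is 3

p_B = Fraction(len(B), 16)
p_A_and_B = Fraction(len([w for w in B if w in A]), 16)

print(p_A_and_B / p_B)                                   # 2/5, by the definition
print(Fraction(len([w for w in B if w in A]), len(B)))   # 2/5, by counting within B
```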
MIT_RES6012_Introduction_to_Probability_Spring_2018
L086_Exponential_Random_Variables.txt
We now introduce a new type of random variable, the exponential random variable. It has a probability density function that is determined by a single parameter lambda, which is a positive number. And the form of the PDF is as shown here. Note that the PDF is equal to 0 when x is negative, which means that negative values of X will not occur. They have zero probability. And so our random variable is a non-negative random variable. The shape of the PDF is as shown in this diagram. It's 0 for negative values, and then for positive values, it starts off at a value equal to lambda. This is because if you plug in x equal to 0 in this expression, you get lambda times e to the 0, which leaves you just with lambda. So it starts off with lambda, and then it decays at the rate of lambda. Notice that when lambda is small, the initial value of the PDF is small. But then the decay rate is also small, so that the PDF extends over a large range of x's. On the other hand, when lambda is large, then the PDF starts large, so there's a fair amount of probability in the vicinity of 0. But then it decays pretty fast, so there's much less probability for larger values of x. Another observation to make is that the shape of this exponential PDF is quite similar to the shape of the geometric PMF that we have seen before, the only difference being that there we had a discrete distribution, whereas here we have a continuous analog of that distribution. Let's now carry out a calculation. Let us fix some positive number a, and let us calculate the probability that our random variable takes a value larger than or equal to a. So what we're trying to do is to calculate the probability that the random variable falls inside this interval from a to infinity. Whenever we have a PDF, we can calculate the probability of falling inside an interval by integrating over that interval the value of the PDF. Therefore, we have to calculate this particular integral. And at this point, we can recall a fact from calculus, namely that the integral of the function e to the ax is 1 over a times e to the ax. We can use this fact by making the correspondence between a and minus lambda. And using this correspondence, we can now continue the calculation of our integral. We have a factor of lambda. And then a factor of 1 over a, where a stands for minus lambda. So we get minus 1 over lambda. And then the same exponential function, e to the minus lambda x. And because the range of integration is from a to infinity, we need to evaluate the integral at a and infinity and take the difference. Now, this lambda cancels that lambda. We're left with a minus sign. And from the upper limit, we get e to the minus lambda times infinity. And then from the second term, we have a minus sign that cancels with that minus sign and gives us a plus term, plus e to the minus lambda a. Now, e to the minus infinity is 0. And so we're left just with the last term. And the answer is e to the minus lambda a. So this gives us the tail probability for an exponential random variable. It tells us that the probability of falling higher than a certain number falls off exponentially with that certain number. An interesting additional observation-- if we set a equal to 0 in this calculation, we obtain the integral of the PDF over the entire range of x's. And in that case, this probability becomes e to the minus lambda 0, which is equal to 1. So we have indeed verified that the integral of this PDF is equal to 1, as it should be.
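As a sanity check on the tail formula, here is a short Python sketch (an added illustration; lambda = 0.5 and a = 2 are arbitrary choices) that approximates the integral of the PDF from a onward and compares it with e to the minus lambda a.

```python
import math

# Numerical check of the tail formula P(X >= a) = exp(-lambda * a)
# by summing the PDF lambda * exp(-lambda * x) over a fine grid from a
# out to a point far in the tail.
lam, a = 0.5, 2.0
steps, cutoff = 200_000, 60.0
dx = (cutoff - a) / steps

riemann_sum = sum(lam * math.exp(-lam * (a + (k + 0.5) * dx)) * dx
                  for k in range(steps))

print(riemann_sum)          # approximately 0.3679
print(math.exp(-lam * a))   # exact tail value exp(-1) = 0.36787...
```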
Now, let's move to the calculation of the expected value of this random variable. We can use the definition. Since the PDF is non-zero only for positive values of x, we only need to integrate from 0 to infinity. We integrate x times the PDF. And this is an integral that you may have encountered at some point before. It is evaluated by using integration by parts. And the final answer turns out to be 1 over lambda. Regarding the calculation of the expected value of the square of the random variable, we need to write down a similar integral, except that now we will have here x squared. This is just another integration by parts, only a little more tedious. And the answer turns out to be 2 over lambda squared. Finally, to calculate the variance, we use the handy formula that we have. And the expected value of X squared is this term. The expected value of X is this term. When we square it, it becomes similar to this term, but we have here a 2. There we have a 1. And so we're left with just 1 over lambda squared. And this is the variance of the exponential random variable. Notice that when lambda is small, the PDF, as we discussed before, falls off very slowly, which means that large x's are also quite possible. And so the average of this random variable will be on the higher side. The PDF extends over a large range, and that translates into having a large mean. And because when that happens, the PDF actually spreads, the variance also increases. And this is reflected in this formula for the variance. The exponential random variable is, in many ways, similar to the geometric. For example, the expression for the mean, which is 1 over lambda, can be contrasted with the expression for the mean of the geometric, which is 1 over p. And the relationship between these two distributions, the discrete and the continuous analog, is a theme that we will revisit several times. At this point, let me just say that the exponential random variable is used to model many important and real world phenomena. Generally, it models the time that we have to wait until something happens. In the discrete case, the geometric random variable models the time until we see a success for the first time. In the continuous case, an exponential can be used to model the time until a customer arrives, the time until a light bulb burns out, the time until a machine breaks down, the time until you receive an email, or maybe the time until a meteorite falls on your house.
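Here is a short sampling check (an added illustration; lambda = 2 is an arbitrary choice) of the formulas for the mean and variance, using Python's built-in exponential sampler.

```python
import random
import statistics

# Sampling check of the mean (1/lambda) and variance (1/lambda**2)
# of an exponential random variable, here with lambda = 2.
lam = 2.0
samples = [random.expovariate(lam) for _ in range(200_000)]

print(statistics.mean(samples))      # close to 1 / lam = 0.5
print(statistics.variance(samples))  # close to 1 / lam**2 = 0.25
```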
MIT_RES6012_Introduction_to_Probability_Spring_2018
L055_Uniform_Random_Variables.txt
In this segment and the next two, we will introduce a few useful random variables that show up in many applications-- discrete uniform random variables, binomial random variables, and geometric random variables. So let's start with the discrete uniform. A discrete uniform random variable is one that has a PMF of this form. It takes values in a certain range, and each one of the values in that range has the same probability. To be more precise, a discrete uniform is completely determined by two parameters that are two integers, a and b, which are the beginning and the end of the range of that random variable. We're thinking of an experiment where we're going to pick an integer at random among the values that are between a and b with the end points a and b included. And all of these values are equally likely. To be more formal, our sample space is the set of integers from a until b. And the number of points that we have in our sample space is b minus a plus 1. What is the random variable that we're talking about? If this is our sample space, the outcome of the experiment is already a number. And the numerical value of the random variable is just the number that we happen to pick in that range. So in this context, there isn't really a distinction between the outcome of the experiment and the numerical value of the random variable. They are one and the same. Now since each one of the values is equally likely, and since we have so many possible values, this means that the probability of any particular value is going to be 1 over b minus a plus 1. This is the choice for the probability that would make all the probabilities in the PMF sum to one. What does this random variable model in the real world? It models a case where we have a range of possible values, and we have complete ignorance, no reason to believe that one value is more likely than the other. As an example, suppose that you look at your digital clock, and you look at the time. And the time that it tells you is 11:52 and 26 seconds. And suppose that you just look at the seconds. The seconds reading is something that takes values in the set from 0 to 59. So there are 60 possible values. And if you just choose to look at your clock at a completely random time, there's no reason to expect that one reading would be more likely than the other. All readings should be equally likely, and each one of them should have a probability of 1 over 60. One final comment-- let us look at the special case where the beginning and the endpoint of the range of possible values are the same, which means that our random variable can only take one value, namely that particular number a. In that case, the random variable that we're dealing with is really a constant. It doesn't have any randomness. It is a deterministic random variable that takes a particular value of a with probability equal to 1. It is not random in the common sense of the word, but mathematically we can still consider it a random variable that just happens to be the same no matter what the outcome of the experiment is.
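Here is a minimal Python sketch (an added illustration; the function name is my own) of the discrete uniform PMF, including the clock-seconds example and the degenerate case a = b.

```python
from fractions import Fraction

def discrete_uniform_pmf(a, b):
    """PMF of a discrete uniform random variable on the integers a, a+1, ..., b."""
    n = b - a + 1
    return {k: Fraction(1, n) for k in range(a, b + 1)}

# The "seconds" reading of a clock glanced at a random time: uniform on 0..59.
pmf = discrete_uniform_pmf(0, 59)
print(pmf[17])               # 1/60
print(sum(pmf.values()))     # 1

# Degenerate case a == b: a constant, the single value 5 gets probability 1.
print(discrete_uniform_pmf(5, 5))
```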
MIT_RES6012_Introduction_to_Probability_Spring_2018
L046_A_Coin_Tossing_Example.txt
Let us now put to use our understanding of the coin-tossing model and the associated binomial probabilities. We will solve the following problem. We have a coin, which is tossed 10 times. And we're told that exactly three out of the 10 tosses resulted in heads. Given this information, we would like to calculate the probability that the first two tosses were heads. This is a question of calculating a conditional probability of one event given another. The conditional probability of event A, namely that the first two tosses were heads, given that another event B has occurred, namely that we had exactly three heads out of the 10 tosses. However, before we can start working towards the solution to this problem, we need to specify a probability model that we will be working with. We need to be explicit about our assumptions. To this effect, let us introduce the following assumptions. We will assume that the different coin tosses are independent. In addition, we will assume that each coin toss has a fixed probability, p, the same for each toss, that the particular toss results in heads. These are the exact same assumptions that we made earlier when we derived the binomial probabilities. And in particular, we have the following formula that if we have n tosses, the probability that we obtain exactly k heads is given by this expression. So now, we have a model in place and also the tools that we can use to analyze this particular model. Let us start working towards a solution. Actually, we will develop two different solutions and compare them at the end. The first approach, which is the safest one, is the following. Since we want to calculate a conditional probability, let us just start with the definition of conditional probabilities. The conditional probability of an event given another event is the probability that both events happen, divided by the probability of the conditioning event. Now, let us specialize to the particular example that we're trying to solve. So in the numerator, we're talking about the probability that event A happens and event B happens. What does that mean? This means that event A happens-- that is, the first two tosses resulted in heads, which I'm going to denote symbolically this way. But in addition to that, event B happens. And event B requires that there is a total of three heads, which means that we had one more head in the remaining tosses. So we have one head in tosses 3 all the way to 10. As for the denominator, let's keep it the way it is for now. So let's continue with the numerator. We're talking about the probability of two events happening, that the first two tosses were heads and that in tosses 3 up to 10, we had exactly one head. Here comes the independence assumption. Because the different tosses are independent, whatever happens in the first two tosses is independent from whatever happened in tosses 3 up to 10. So the probability of these two events happening is the product of their individual probabilities. So we first have the probability that the first two tosses were heads, which is p squared. And we need to multiply it with the probability that there was exactly one head in the tosses numbered from 3 up to 10. These are eight tosses. The probability of one head in eight tosses is given by the binomial formula, with k equal to 1 and n equal to 8. So this expression, this part, becomes 8 choose 1, p to the first power times 1 minus p to the seventh power. So this is the numerator. The denominator is easier to find. 
This is the probability that we had three heads in 10 tosses. So we just use this formula. The probability of three heads is given by: 10 tosses choose three, p to the third, times 1 minus p to the seventh power. And here we notice that terms in the numerator and denominator cancel out, and we obtain 8 choose 1 divided by 10 choose 3. And to simplify things just a little more, what is 8 choose 1? This is the number of ways that we can choose one item out of eight items. And this is just 8. And let's leave the denominator the way it is. So this is the answer to the question that we had. And now let us work towards developing a second approach towards this particular answer. In our second approach, we start first by looking at the sample space and understanding what conditioning is all about. In our model, we have a sample space. As usual we can denote it by omega. And the sample space contains a bunch of possible outcomes. A typical outcome is going to be a sequence of length 10. It's a sequence of heads or tails. And it's a sequence that has length 10. We want to calculate conditional probabilities. And this places us in a conditional universe. We have the conditioning event B, which is some set. And conditional probabilities are probabilities defined inside this set B and define the probabilities, the conditional probabilities of the different outcomes. What are the elements of the set B? A typical element of the set B is a sequence, which is, again of length 10, but has exactly three heads. So these are the three-head sequences. Now, since we're conditioning on event B, we can just work with conditional probabilities. So let us find the conditional probability law. Recall that any three-head sequence has the same probability of occurring in the original unconditional probability model, namely as we discussed earlier, any particular three-head sequence has a probability equal to this expression. So three-head sequences are all equally likely. This means that the unconditional probabilities of all the elements of B are the same. When we construct conditional probabilities given an event B, what happens is that the ratio or the relative proportions of the probabilities remain the same. So conditional probabilities are proportional to unconditional probabilities. These elements of B were equally likely in the original model. Therefore, they remain equally likely in the conditional model as well. What this means is that the conditional probability law on the set B is uniform. Given that B occurred, all the possible outcomes now have the same probability. Since we have a uniform probability law, this means that we can now answer probability questions by just counting. We're interested in the probability of a certain event, A, given that B occurred. Now, given that B occurred, this part of A cannot happen. So we're interested in the probability of outcomes that belong in this shaded region, those outcomes that belong within the set B. To find the probability of this shaded region occurring, we just need to count how many outcomes belong to the shaded region and divide them by the number of outcomes that belong to the set B. That is, we work inside this conditional universe. All of the elements in this conditional universe are equally likely. And therefore, we calculate probabilities by counting. So the desired probability is going to be the number of elements in the shaded region, which is the intersection of A with B, divided by the number of elements that belong to the set B. 
How many elements are there in the intersection of A and B? These are the outcomes or sequences of length 10, in which the first two tosses were heads-- no choice here. And there is one more head. That additional head can appear in one out of eight possible places. So there's eight possible sequences that have the desired property. How many elements are there in the set B? How many three-head sequences are there? Well, the number of three-head sequences is the same as the number of ways that we can choose three elements out of a set of cardinality 10. And this is 10 choose 3, as we also discussed earlier. So this is the same answer as we derived before with our first approach. So both approaches, of course, give the same solution. This second approach is a little easier, because we never had to involve any p's in our calculation. We go to the answer directly. The reason that this approach worked was that the conditional universe, the event B, had a uniform probability law on it.
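Both approaches are easy to check by brute force. Here is a short Python sketch (an added illustration) that lists every length-10 sequence with exactly three heads and measures what fraction of them start with two heads.

```python
from itertools import product
from math import comb
from fractions import Fraction

# Counting approach, done by brute force: among all length-10 sequences with
# exactly 3 heads, what fraction start with two heads?
three_head_seqs = [s for s in product("HT", repeat=10) if s.count("H") == 3]
favorable = [s for s in three_head_seqs if s[0] == "H" and s[1] == "H"]

print(Fraction(len(favorable), len(three_head_seqs)))   # 8/120 = 1/15
print(Fraction(8, comb(10, 3)))                          # same: 8 / (10 choose 3)
```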
MIT_RES6012_Introduction_to_Probability_Spring_2018
L049_Multinomial_Probabilities.txt
In this segment, we will discuss the multinomial model and the multinomial probabilities, which are a nice generalization of the binomial probabilities. The setting is as follows. We are dealing with balls and the balls come in different colors. There are r possible different colors. We pick a ball at random, and when we do that, there is a certain probability Pi that the ball that we picked has the ith color. Now, we repeat this process n times independently. Each time we get a ball that has a random color. And we're interested in the following kind of question. Somebody fixes for us certain numbers-- n1, n2, up to nr that add up to n, and asks us, what is the probability that when you carry out the experiment, you get exactly n1 balls of the first color, exactly n2 balls of the second color, and so on? So the numbers n1, n2, up to nr are fixed given numbers. For a particular choice of those numbers, we want to calculate this probability. Now of course, this is a more general model. It doesn't necessarily deal with balls of different colors. For example, we might have an experiment that gives us random numbers, where the numbers range from 1 up to r, and each time we get a random number; with probability Pi, we get a number which is equal to i. So we could use this to model die rolls, for example. And there's actually a special case of this problem, which should be familiar. Suppose that we have only two colors, and instead of thinking of colors, let us think of the two possibilities as being heads or tails. And we can make the following analogy. Somebody gives us numbers n1 and n2 that add up to n. And we're interested in the probability that we get n1 of the first color and n2 of the second color. Well, we could think of this as a setting in which we are asking for the probability that we obtain k heads and n minus k tails. So the question of what is the probability that we obtain k heads and n minus k tails is of the same kind as what is the probability that we get n1 of the first color and n2 of the second color. Now, if heads have a probability p of occurring, and tails has a probability of 1 minus p of occurring, then we would have the following analogy. The probability of obtaining the first color, which corresponds to heads, would be equal to p. The probability of obtaining the second color, which corresponds to tails, would be 1 minus p. Now, the probability of obtaining k heads in those n independent trials-- we know what it is. By the binomial probabilities, it is n choose k times p to the k times 1 minus p to the power n minus k. Now we can translate this answer to the multinomial case where we're dealing with colors, and we do these substitutions. So n choose k is n factorial divided by k factorial times n minus k factorial. In this case, k is the same as n1, so we get an n1 factorial. And n minus k corresponds to n2, so here we get an n2 factorial. And then p corresponds to p1, and 1 minus p corresponds to p2. So we get here p1 to the power k, which is the same as n1, times p2 to the power n minus k, which, again by the analogy, is n2. So this is the form of the multinomial probabilities for the special case where we're dealing with two colors. Let us now look at the general case. Let us start with an example, to be concrete. Suppose that the number of colors is equal to 3, and that we're going to pick n equal to 7 balls. We carry out the experiment, and we might obtain an outcome which would be a sequence of this type.
So the first ball had the color 1, the second ball had the first color, the third ball had the third color, the fourth ball had the first color, and so on. And suppose that this was the outcome. One way of summarizing what happened in this outcome would be to say that we had four 1s, we had two 2s, and we had one 3. We could say that this is the type of the outcome. It's of type 4, 2, 1-- that is, we obtained four of the first color, two of the second color, and one of the third color. This is one possible outcome. What is the probability of obtaining this particular outcome? The probability of obtaining this particular outcome is, using independence, the probability that we obtain color 1 in the first trial, color 1 in the second trial, color 3 in the third trial, color 1 in the fourth trial, color 2 in the next trial, color 2 in the next trial, color 1 in the last trial. And we put all the factors together, and we notice that this is p1 to the fourth p2 to the second times p3. It's not a coincidence that the exponents that we have up here are exactly the count that we had when we specified the type of this particular outcome. Generalizing from this example, we realize that the probability of obtaining a particular sequence of a certain type, that probability is of this form. For each color, we have the probability of that color raised to the power of how many times that particular color appears in a sequence. So any particular sequence of this type has this probability. What we're interested in is to find the total probability of obtaining some sequence of this type. How can we find this probability? Well, we will take the probability of each sequence of this type-- which is this much, and it's the same for any particular sequence-- and multiply with the number of sequences of this type. So how many sequences are there of a certain type? Let us look back at our example. We had seven trials. So let us number here the different trials. And when I tell you that a particular sequence was obtained, that's the same as telling you that in this set of trials, we had the first color. In this set of trials, the fifth and sixth trial, we had the second color. And in this trial, the third trial, we had the third color. This is an alternative way of telling you what sequence we obtained. I tell you at which trials we had the first color, at which trials we had the second, at which trials we had the third. But What do we have here? Here we have a partition of the set of numbers from 1 up to 7 into three subsets. And the cardinalities of those subsets are the numbers that appear here in the type of the sequence. The conclusion is that a sequence of certain type is equivalent, or can be alternatively specified, by giving you a partition over this set of tosses, which is the set from 1 up to n, how many trials we've had, a partition into subsets of certain sizes. So this allows us now to count the number of sequences of a certain type. It's exactly the same as the number of partitions, and we know what this is. And putting everything together, the probability of obtaining a sequence of a certain type is equal to the count of how many sequences do we have of the certain type, which is the same as the number of partitions of a certain type, times the probability of any particular sequence of that type that we're interested in. So this is a formula that generalizes the one that we saw before for the case where we have only two colors, and which corresponded to the coin tossing setting. 
And it is a useful model, because you can think of many situations in which you have repeated trials, and at each trial, you obtain one out of a finite set of possible results. There are different possible results. You repeat those trials independently. And you may be interested in the question of how many results of the first kind, of the second kind, and so on there will be.
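Here is a minimal Python sketch (an added illustration; the function name and the particular probabilities 1/2, 1/3, 1/6 are my own choices) that evaluates the multinomial formula for the type (4, 2, 1) with seven picks, and confirms it by brute force over all color sequences.

```python
from math import factorial
from itertools import product
from fractions import Fraction

def multinomial_probability(counts, probs):
    """P(n1 of color 1, ..., nr of color r) in n = sum(counts) independent picks."""
    n = sum(counts)
    coef = factorial(n)
    for n_i in counts:
        coef //= factorial(n_i)          # number of sequences of this type
    p = Fraction(1)
    for n_i, p_i in zip(counts, probs):
        p *= p_i ** n_i                  # probability of any one such sequence
    return coef * p

# Three colors with probabilities 1/2, 1/3, 1/6; seven picks; type (4, 2, 1).
probs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
print(multinomial_probability([4, 2, 1], probs))   # 35/288

# Brute-force check: sum the probabilities of all color sequences of this type.
total = Fraction(0)
for seq in product(range(3), repeat=7):
    if [seq.count(i) for i in range(3)] == [4, 2, 1]:
        p_seq = Fraction(1)
        for c in seq:
            p_seq *= probs[c]
        total += p_seq
print(total)   # matches the formula above
```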
MIT_RES6012_Introduction_to_Probability_Spring_2018
L034_Independence_of_Event_Complements.txt
Let us now discuss an interesting fact about independence that should enhance our understanding. Suppose that events A and B are independent. Intuitively, if I tell you that A occurred, this does not change your beliefs as to the likelihood that B will occur. But in that case, this should not change your beliefs as to the likelihood that B will not occur. So A should be independent of B complement. In other words, the occurrence of A tells you nothing about B, and therefore tells you nothing about B complement either. This was an intuitive argument that if A and B are independent, then A and B complement are also independent. But let us now verify this intuition through a formal proof. The formal proof goes as follows. We have the two events, A and B. And event A can be broken down into two pieces. One piece is the intersection of A with B. So that's the first piece. And the second piece is the part of A which is outside B. And that piece is A intersection with the complement of B. So these are the two pieces that together comprise event A. Now, these two pieces are disjoint from each other. And therefore, by the additivity axiom, the probability of A is equal to the probability of A intersection B plus the probability of A intersection with B complement. Using independence, the first term becomes probability of A times probability of B. And we leave the second term as is. Now let us move this term to the other side. And we obtain that the probability of A intersection with B complement is the probability of A minus the probability of A times the probability of B. We factor out the term probability of A, and we are left with 1 minus probability of B. And then we recognize that 1 minus the probability of B is the same as the probability of B complement. So we proved that the probability of A and B complement occurring together is the product of their individual probabilities. And that's exactly the definition of A being independent from B complement. And this concludes the formal proof.
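Here is a small numeric check (an added illustration with an invented example): a fair six-sided die with A = "even outcome" and B = "outcome at most 4", which are independent, and for which A and B complement then satisfy the product rule as well.

```python
from fractions import Fraction

# Fair six-sided die. A = even outcome, B = outcome at most 4.
# P(A) = 1/2, P(B) = 2/3, P(A and B) = P({2, 4}) = 1/3 = P(A) * P(B).
omega = range(1, 7)

def P(event):
    return Fraction(sum(1 for w in omega if event(w)), 6)

def A(w): return w % 2 == 0
def B(w): return w <= 4
def B_complement(w): return not B(w)

print(P(lambda w: A(w) and B(w)), P(A) * P(B))                        # 1/3, 1/3
print(P(lambda w: A(w) and B_complement(w)), P(A) * P(B_complement))  # 1/6, 1/6
```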
MIT_RES6012_Introduction_to_Probability_Spring_2018
L082_Probability_Density_Functions.txt
In this segment, we introduce the concept of continuous random variables and their characterization in terms of probability density functions, or PDFs for short. Let us first go back to discrete random variables. A discrete random variable takes values in a discrete set. There is a total of one unit of probability assigned to the possible values. And the PMF tells us exactly how much of this probability is assigned to each value. So we can think of the bars in the PMF as point masses with positive weight that sit on top of each possible numerical value. And we can calculate the probability that the random variable falls inside an interval by adding all the masses that sit on top of that interval. So for example, if we're looking at the interval from a to b, the probability of this interval is equal to the sum of the probabilities of these three masses that fall inside this interval. On the other hand, a continuous random variable will be taking values over a continuous range-- for example, the real line or an interval on the real line. In this case, we still have one total unit of probability mass that is assigned to the possible values of the random variable, except that this unit of mass is spread all over the real line. But it is not spread in a uniform manner. Some parts of the real line have more mass per unit length. Some have less. How much mass exactly is sitting on top of each part of the real line is described by the probability density function, this function plotted here, which we denote with this notation. The letter f will always indicate that we are dealing with a PDF. And the subscript will indicate which random variable we're talking about. We use the probability density function to calculate the probability that X lies in a certain interval-- let's say the interval from a to b. And we calculate it by finding the area under the PDF that sits on top of that interval. So this area here, the shaded area, is the probability that X takes values in this interval. Think of probability as snowfall. There is one pound of snow that has fallen on top of the real line. The PDF tells us the height of the snow accumulated over a particular point. We then find the weight of the overall amount of snow sitting on top of an interval by calculating the area under this curve. Of course, mathematically, area under the curve is just an integral. So the probability that X takes values in this interval is the integral of the PDF over this particular interval. What properties should the PDF have? By analogy with the discrete case, a PDF must be non-negative, because we do not want to get negative probabilities. In the discrete case, the sum of the PMF entries has to be equal to 1. In the continuous case, X is certain to lie in the interval between minus infinity and plus infinity. So letting a be minus infinity and b plus infinity, we should get a probability of 1. So the total area under the PDF, when we integrate over the entire real line, should be equal to 1. These two conditions are all that we need in order to have a legitimate PDF. We can now give a formal definition of what a continuous random variable is. A continuous random variable is a random variable whose probabilities can be described by a PDF according to a formula of this type. An important point-- the fact that a random variable takes values on a continuous set is not enough to make it what we call a continuous random variable.
For a continuous random variable, we're asking for a bit more-- that it can be described by a PDF, that a formula of this type is valid. Now, once we have the probabilities of intervals as given by a PDF, we can use additivity to calculate the probabilities of more complicated sets. For example, if you're interested in the probability that X lies between 1 and 3 or that X lies between 4 and 5-- so this is the probability that X falls in a region that consists of two disjoint intervals. We find the probability of the union of these two intervals, by additivity, by adding the probabilities of the two intervals, since these intervals are disjoint. And then we can use the PDF to calculate the probabilities of each one of these intervals according to this formula. At this point, you may be wondering what happened to the sample space in all this discussion. Well, there is still an underlying sample space lurking in the background. And different outcomes in the sample space result in different numerical values for the random variable of interest. And when we talk about the probability that X takes values between some numbers a and b, what we really mean is the probability of those outcomes for which the resulting value of X lies inside this particular interval. So that's what probability means. On the other hand, once we have a PDF in our hands, we can completely forget about the underlying sample space. And we can carry out any calculations we may be interested in by just working with the PDF. So as we move on in this course, the sample space will be moved offstage. There will be less and less mention of it. And we will be working just directly with PDFs or with PMFs if we are dealing with discrete random variables. Let us now build a little bit on our understanding of what PDFs really are by looking at probabilities of small intervals. Let us look at an interval that starts at some a and goes up to some number a plus delta. So here, delta is a positive number. But we're interested in the case where delta is very small. Let us look at the probability that X falls in this interval. The probability that X lies inside this interval is the area of this region. On the other hand, as long as f does not change too much over this little interval, which will be the case if we have a continuous density f, then we can approximate the area of this region by the area of a rectangle where we keep the height constant. The area of this rectangle is equal to the height, which is the value of the PDF at the point a, times the base of the rectangle, which is equal to delta. So this gives us an interpretation of PDFs in terms of probabilities of small intervals. If we take this factor of delta and send it to the other side in this approximate equality, we see that the value of the PDF can be interpreted as probability per unit length. So PDFs are not probabilities. They are densities. Their units are probability per unit length. Now, if the probability per unit length is finite and the length delta is sent to 0, we will get 0 probability. More formally, if we look at this integral and we let b be the same as a, then we obtain the probability that X is equal to a. And on that side, we get an integral over a 0 length interval. And that integral is going to be 0. So we obtain that the probability that X takes a value equal to a specific, particular point-- that probability is going to be equal to 0. So for a continuous random variable, any particular point has 0 probability.
Yet somehow, collectively, the infinitely many points in an interval together will have positive probability. Is this a puzzle? Not really. That's exactly what happens, also, with the ordinary notion of length. Single points have 0 length, yet by putting together lots of points, we can create a set that has a positive length. And here is a final consequence of the fact that individual points have 0 probability. Using the additivity axiom, the probability that our random variable takes values inside the interval from a to b is equal to the probability that our random variable takes a value of a plus the probability that our random variable takes a value of b plus the probability that our random variable is strictly between a and b. According to our discussion, this term is equal to 0. And this term is equal to 0. And so we conclude that the probability of a closed interval is the same as the probability of an open interval. When calculating probabilities, it does not matter whether we include the endpoints or not.
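A quick way to see this contrast is to put a continuous and a discrete distribution side by side. The sketch below uses scipy.stats with an illustrative exponential and binomial choice (assumptions for demonstration only): for the continuous variable, setting b equal to a gives zero probability and endpoints do not matter, while for the discrete variable a single point carries positive mass.

from scipy import stats

X = stats.expon(scale=0.5)   # continuous: exponential with rate 2 (scale = 1/rate), illustrative
Y = stats.binom(10, 0.5)     # discrete: binomial(10, 0.5), for contrast

# Continuous case: letting b = a in P(a <= X <= b) = F(b) - F(a) gives 0,
# so any single point has zero probability ...
print(X.cdf(2.0) - X.cdf(2.0))   # P(X = 2) = 0
# ... and interval probabilities are the same with or without the endpoints:
print(X.cdf(3.0) - X.cdf(1.0))   # P(1 <= X <= 3) = P(1 < X < 3), about 0.133

# Discrete case: individual points carry positive probability,
# so including or excluding an endpoint does change the answer.
print(Y.pmf(2))                  # P(Y = 2), about 0.044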
US_History
Womens_Suffrage_Crash_Course_US_History_31.txt
Episode 31: Feminism and Suffrage Hi, I’m John Green, this is Crash Course U.S. history and today we’re going to talk about women in the progressive era. My God, that is a fantastic hat. Wait, votes for women?? So between Teddy Roosevelt, and Woodrow Wilson, and all those doughboys headed off to war, women in this period have sort of been footnoted shockingly.. Mr. Green, Mr. Green. I’d NEVER make a woman a footnote. She’d be the center of my world, my raison d’etre, my joie de vivre. Oh, Me from the Past. I’m reminded of why you got a C+ in French 3. Let me submit to you, Me from the Past, that your weird worship of women is a kind of misogyny because you’re imagining women as these beautiful, fragile things that you can possess. It turns out that women are not things. They are people in precisely the same way that you are a person and in the progressive era, they demanded to be seen as full citizens of the United States. In short, women don’t exist to be your joie de vivre. They get to have their own joie de vivre. intro So, it’s tempting to limit ourselves to discussion of women getting the right to vote with the passage of the 19th amendment, but if we focus too much on the constitutional history, we’re gonna miss a lot. Some historians refer to the thirty years between 1890 and 1920 as the “women’s era” because it was in that time that women started to have greater economic and political opportunities. Women were also aided by legal changes, like getting the right to own property, control their wages and make contracts and wills. By 1900 almost 5 million women worked for wages, mainly in domestic service or light manufacturing, like the garment industry. Women in America were always vital contributors to the economy as producers and consumers and they always worked, whether for wages or taking care of children and the home. And as someone who has recently returned from paternity leave, let me tell you, that ain’t no joke. And American women were also active as reformers since, like, America became a thing. And those reform movements brought women into state and national politics before the dawn of the progressive era. Unfortunately, their greatest achievement, Prohibition, was also our greatest national shame. Oh, yeah, alright, okay. It’s actually not in our top 5 national shames. But, probably women’s greatest influence indeed came through membership AND leadership in the Women’s Christian Temperance Union. The WCTU was founded in 1874 and by 1890 it had 150,000 members, making it the largest female organization in the United States. Under the leadership of Frances Willard, the WCTU embraced a broad reform agenda. Like it included pushing for the right for women to vote. The feeling was that the best way to stop people from drinking was to pass local laws that made it harder to drink, and to do that it would be very helpful if women could vote. Because American men were a bunch of alcoholic scoundrels who darn well weren’t going to vote to get rid of beer hoses. In 1895 Willard boldly declared, “A wider freedom is coming to the women of America. Too long has it been held that woman has no right to enter these movements (…) Politics is the place for woman.” But the role of women in politics did greatly expand during the Progressive era. 
As in prior decades, many reformers were middle and upper class women, but the growing economy and the expansion of what might be called the upper-middle class meant that there were more educational opportunities and this growing group of college-educated women leaned in and became the leaders of new movements. Sorry, there was no way I was gonna get through this without one “lean in.” I love that book. So as we’ve talked about before, the 1890s saw the dawning of the American mass consumer society and many of the new products made in the second wave of industrialization were aimed at women, especially “labor-saving” devices like washing machines. If you’ve ever had an infant, you might notice that they poop and barf on everything all the time. Like, I recently called the pediatrician and I was like, “My 14-day-old daughter poops fifteen times a day.” And he was like, “If anything, that seems low.” So the washing machine is a real game-changer. And many women realized that being the primary consumers who did the shopping for the home gave them powerful leverage to bring about change. Chief among these was Florence Kelley, a college-educated woman who after participating in a number of progressive reform causes came to head the National Consumers League. The League sponsored boycotts and shaped consumption patterns encouraging consumers to buy products that were made without child or what we now would call sweatshop labor. Which at the time was often just known as “labor.” And there was also a subtle shift in gender roles as more and more women worked outside the home. African American women continued to work primarily as domestic servants or in agriculture, and immigrant women mostly did low-paying factory labor, but for native-born white women there were new opportunities, especially in office work. And this points to how technology created opportunities for women. Like, almost all the telephone operators in the U.S. were women. By 1920 office workers and telephone operators made up 25% of the female workforce, while domestic servants were only 15%. A union leader named Abraham Bisno remarked that working gave immigrant women a sense of independence: “They acquired the right to personality, something alien to the highly patriarchal family structures of the old country.” Of course this also meant that young women were often in conflict with their parents, as a job brought more freedom, money, and perhaps, if they were lucky, a room of one’s own. Oh, it’s time for the Mystery Document? Please let it be Virginia Woolf, please let it be Virginia Woolf. The rules here are simple. I guess the author of the Mystery Document. I’m either right or I get shocked. Alright, let’s see what we’ve got. “The spirit of personal independence in the women of today is sure proof that a change has come … the radical change in the economic position of women is advancing upon us… The growing individualization of democratic life brings inevitable changes to our daughters as well as to our sons … One of its most noticeable features is the demand in women not only for their own money, but for their own work for the sake of personal expression. Few girls today fail to manifest some signs of the desire for individual expression …” Well, that’s not Virginia Woolf. Stan, I’m going to be honest, I do not know the answer to this one. However, it has been Woodrow Wilson for the last two weeks. You wouldn’t do that again to me, or would you? I’m gonna guess Woodrow Wilson. Final answer. DANG IT. 
Charlotte Perkins Gilman in the book Women and Economics? What? Aaaaaah! The idea that having a job is valuable just for the independence that it brings and as a form of “individual expression” was pretty radical, as most women, and especially most men, were not comfortable with the idea that being a housewife was similar to being a servant to one’s husband and children. But of course that changes when staying at home becomes one of many choices rather than your only available option. And then came birth control. Huzzah! Women who needed to work wanted a way to limit the number of pregnancies. Being pregnant and having a baby can make it difficult to hold down a job and also babies are diaper-using, stuff-breaking, consumptive machines. They basically eat money. And we love them. But birth control advocates like Margaret Sanger and Emma Goldman also argued that women should be able to enjoy sex without having children. To which men said, “Women can enjoy sex?” Believe it or not, that was seen as a pretty radical idea and it lead to changes in sexual behavior including more overall skoodilypooping. Goldman was arrested more than 40 times for sharing these dangerous ideas about female sexuality and birth control and she was eventually deported. Sanger, who worked to educate working class women about birth control, was sentenced to prison in 1916 for opening a clinic in Brooklyn that distributed contraceptive devices to poor immigrant women. The fight over birth control is important for at least three reasons. First, it put women into the forefront of debates about free speech in America. I mean, some of the most ardent advocates of birth control were also associated with the IWW and the Socialist Party. Secondly, birth control is also a public health issue and many women during the progressive era entered public life to bring about changes related to public health, leading the crusade against tuberculosis, the so-called White Plague, and other diseases. Thirdly, it cut across class lines. Having or not having children is an issue for all women, regardless of whether they went to college, and the birth control movement brought upper, middle, and lower class women together in ways that other social movements never did. Another group of Progressive women took up the role of addressing the problems of the poor and spearheaded the Settlement House movement. The key figure here was Jane Addams. My God, there are still Adamses in American history? Oh, she spells it Addams-family-Addams, not like founding-fathers-Adams. Anyway, she started Hull House in Chicago in 1889. Settlement houses became the incubators of the new field of social work, a field in which women played a huge part. And Addams became one of America’s most important spokespeople for progressive ideas. And yet in many places, while all of this was happening, women could not technically vote. But their increasing involvement in social movements at the turn of the 20th century led them to electoral politics. It’s true that women were voting before the passage of the 19th amendment in 1920. Voting is a state issue, and in many western states, women were granted the right to vote in the late 19th century. States could also grant women the right to run for office, which explains how the first Congresswoman, Jeannette Rankin, could vote against America’s entry into World War I in 1917. That said, the passage and ratification of the 19th Amendment is a big deal in American history. It’s also a recent deal. 
Like, when my grandmothers were born, women could not vote in much of the United States. The amendment says that states cannot deny people the right to vote because they are women, which isn’t as interesting as the political organization and activity that led to its passage. Alright, let’s go to the Thought Bubble. The suffrage movement was extremely fragmented. There was a first wave of suffrage, exemplified by the women at Seneca Falls, and this metamorphosed into the National American Women’s Suffrage Association, or NAWSA. Most of the leadership of NAWSA was made up of middle to upper class women, often involved in other progressive causes, who unfortunately sometimes represented the darker side of the suffrage movement. Because these upper class progressives frequently used nativist arguments to make their claims for the right to vote. They argued that if the vote could be granted to ignorant immigrants, some of whom could barely speak English, then it should also be granted to native born women. This isn’t to say that the elitist arguments won the day, but they should be acknowledged. By the early 20th century a new generation of college-educated activists had arrived on the scene. And many of these women were more radical than early suffrage supporters. They organized the National Women’s Party and, under the leadership of Alice Paul, pushed for the vote using aggressive tactics that many of the early generation of women’s rights advocates found unseemly. Paul had been studying in Britain between 1907 and 1910 where she saw the more militant women’s rights activists at work. She adopted their tactics that included protests leading to imprisonment and loud denunciations of the patriarchy that would make tumblr proud. And during World War I she compared Wilson to the Kaiser and Paul and her followers chained themselves to the White House fence. The activists then started a hunger strike during their 7-month prison sentence and had to be force-fed. Woodrow Wilson had half-heartedly endorsed women’s suffrage in 1916, but the war split the movement further. Most suffrage organizations believed that wartime service would help women earn respect and equal rights. But other activists, like many Progressives, opposed the war and regarded it as a potential threat to social reform. But, in the end, the war did sort of end up helping the cause. Patriotic support of the war by women, especially their service working in wartime industries, convinced many that it was just wrong to deny them the right to vote. And the mistreatment of Alice Paul and other women in prison for their cause created outrage that further pushed the Wilson administration to support enfranchising women. Thanks, Thought Bubble. So, women’s long fight to gain the right to vote ended with the ratification of the 19th Amendment in 1920. But, in some ways, the final granting of the franchise was a bit anti-climactic. For one thing, it was overshadowed by the 18th Amendment, Prohibition, which affected both women and men in large numbers. Also Gatsbys. You could say a lot of bad things about Prohibition, and I have, but the crusade against alcohol did galvanize and politicize many women, and organizations such as the WCTU and the Anti-Saloon League introduced yet more to political activism. But, while the passage of the 19th amendment was a huge victory, Alice Paul and the National Women’s Party were unable to muster the same support for an Equal Rights Amendment. 
Paul believed that women needed equal access to education and employment opportunities. And here they came into contact with other women’s groups, especially the League of Women Voters and the Women’s Trade Union League, which opposed the ERA fearing that equal rights would mean an unraveling of hard-won benefits like mother’s pensions and laws limiting women’s hours of labor. So, the ERA failed, and then another proposed amendment that would have given Congress the power to limit child labor won ratification in only 6 states. So in many ways the period between 1890 and 1920, which roughly corresponds to the Progressive Era, was the high tide of women’s rights and political activism. It culminated in the ratification of the 19th amendment, but the right to vote didn’t lead to significant legislation that actually improved the lives of women, at least not for a while. Nor were there immediate changes in the roles that women were expected to play in the social order as wives and mothers. Still, women were able to increase their autonomy and freedom in the burgeoning consumer marketplace. But it’s important to note that like other oppressed populations in American history, women weren’t given these rights, they had to fight for the rights that were said to be inalienable. And we are all better off for their fight and for their victory. Women’s liberation is to be sure a complicated phrase and it will take a new turn in the Roaring 20s, which we’ll talk about next week. I’ll see you then. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The associate producer is Danica Johnson. The show is written by my high school history teacher, Raoul Meyer, Rosianna Rojas, and myself. And our graphics team is Thought Café. Every week there’s a new caption to the Libertage. You can suggest captions in comments where you can also ask questions about today’s video that will be answered by our team of historians. Thanks for watching Crash Course and as we say in my hometown, don’t forget to be awesome. I’m gonna go this way, Stan, just kiiidding! Suffrage -
US_History
Thomas_Jefferson_His_Democracy_Crash_Course_US_History_10.txt
Hi I'm John Green, this is Crash Course US History, and today we're going to discuss Thomas Jefferson. We're going to learn about how America became a thriving nation of small, independent farmers, eschewing manufacturing and world trade, and becoming the richest and most powerful nation in the world in the 19th century, all thanks to the vision of Thomas Jefferson, the greatest and most intellectually consistent Founding Father, who founded the University of Virginia and grew 20 varieties of peas at Monticello. Me From the Past, get to your desk! In a stunning turn of events, Me From the Past is an idiot and Jefferson is more complicated than that. [Theme Music] So in 1800, Thomas Jefferson, pictured here – this is the third time that we've featured Thomas Jefferson on the chalkboard, so we had to go a little Warhol on it. Right, so, Jefferson, the Republican, ran against John Adams, the Federalist. 1800 was the first election where both parties ran candidates and actually campaigned. And, surprisingly, the Federalist's elitist strategy of "vote for Adams because he's better than you" did not work. Now, both parties realize that it was important to coordinate their electoral strategy to make sure that the vice presidential candidate got at least one fewer electoral votes than the presidential candidate. But then the Republican elector who was supposed to throw his vote away forgot to, so there ended up being a tie between Thomas Jefferson and Aaron Burr. As per the Constitution, the election went to the House of Representatives, where it took 36 ballots and the intervention of Alexander Hamilton before Jefferson was finally named president. Incidentally, Burr and Hamilton really disliked each other, and not in like, the passive-aggressive way that politicians dislike each other these days, but in the four-years-later-they-would-have-a-duel-and-Burr-killed-Hamilton kind of way. A duel which occurred, wait for it, in New Jersey. But anyway, shortly after the election of 1800, the 12th Amendment was passed, making the Electoral College simpler, but not as simple as, say, you know, one person's vote counting as one vote. Anyway, complain about the Electoral College all you want, but without it, we would never have had President Rutherford B. Hayes, and just look at that beard! So Jefferson became president and his election showed that Americans wanted a more democratic politics, where common people were more free to express their opinions. The Federalists were never really a threat again in presidential politics. And, arguably, the best thing that John Adams ever did was transfer power in an orderly and honorable way to his rival, Jefferson. Jefferson's campaign slogan was "Jefferson and Liberty," but the liberty in question was severely limited. Only a fraction of white men were allowed to vote and, of course, there was no liberty for the slaves. There's a lot of contentious debate on the subject of Jefferson and slavery, but here's my two cents, which I should not be allowed to contribute because we should only round to the nearest nickel, which, by the way, features Thomas Jefferson. So Thomas Jefferson was a racist and he wrote about black people's inherent inferiority to whites and Native Americans, and the fact that he fathered children with one of his slaves doesn't change that. George Washington freed his slaves upon his death. Well, sort of. 
They were supposed to be freed upon his wife's death, but living in a house full of people who were waiting for you to die made Martha want to free them while she was still alive. But with few exceptions, Jefferson didn't free his slaves upon his death and throughout his life, he used the sale of slaves to finance his lavish lifestyle. And this leads to two big philosophical questions when it comes to history. First, if Jefferson clearly did not think that black people were the intellectual or moral equals of whites and was perfectly comfortable keeping them in bondage, then what does the most important phrase of the Declaration of Independence actually mean? And the second question is even broader: Does it matter if a person of tremendous historical importance had terrible aspects to their character? Does being a bad person diminish your accomplishments? I don't have a great answer for those questions, but I will tell you that no one remembers Richard Nixon for starting the EPA. But this is very important to understand: Slaves were aware of the concept of liberty and they wanted it. So in addition to an election, 1800 also saw one of the first large-scale slave uprisings. Gabriel's Rebellion was organized by a Richmond, Virginia blacksmith who hoped to seize the capital, kill some of its inhabitants, and hold the rest hostage until his demands for abolition were met. But the plot was discovered before they could carry it out and Gabriel, along with 25 other slaves, was hanged. But after the rebellion, Virginians, if they didn't know it already, were very aware that slaves wanted and expected liberty. And the response was predictable: Virginia made its laws concerning slaves much harsher. It became illegal for slaves to meet in groups on Sundays unless supervised by whites, and it became much more difficult for whites to legally free their slaves. Oh, it's time for the Mystery Document? The rules here are simple: Identify the author, no shock; fail to identify the author, shock. "The love of freedom, sir, is an inborn sentiment, which the god of nature has planted deep in the heart: long may it be kept under by the arbitrary institutions of society; but, at the first favorable moment, it springs forth, and flourishes with a vigour that defies all check. This celestial spark, which fires the breast of the savage, which glows in that of the philosopher, is not extinguished in the bosom of the slave. It may be buried in the embers; but it still lives; and the breath of knowledge kindles it to flame. Thus we find, sir, there have never been slaves in any country who have not seized the first favorable opportunity to revolt." I mean, from the bit at the beginning about the love of freedom, it seems like it could be Jefferson, but the rest does not seem like Jefferson. Probably wasn't a slave, since they were denied access to education precisely because the "breath of knowledge" is so dangerous to the institution of slavery. Oh, this is looking pretty bleak for me, Stan. Mmmmmm. John Jay? [buzzing noise] Dang it! Who was it? George Tucker?! Who the John C. Calhoun is George Tucker?! Is there a person watching this who knew that it was George Tucker? Fine! [electricity noise] Gah! Apparently George Tucker was a member of the General Assembly of Virginia and the Mystery Document was a description of Gabriel's Rebellion that suggested a solution to the inherent problem of rebellious slaves.
He argued that we should set up a colony for them in Indian territory in Georgia, which, of course, also wouldn't have worked because we were soon to steal that territory. But back to Jefferson. His idea was to make the government smaller, lower taxes, shrink the military, and make it possible for America to become a bucolic, agrarian empire of liberty, rather than an English-style, industrial, mercantile nightmare landscape. So how did he do? Well, really well at first. Jefferson got rid of all the taxes, except for the tariff, especially the Whiskey Tax. And then, when he woke up with a terrible, cheap whiskey-induced hangover, he paid off part of the national debt. He shrunk the army and the navy, and basically made sure that America wouldn't become a centralized, English-style state for at least the next 60 years. Low taxes and small government sounds great, but no navy? That would be tough, especially when we needed ships and marines to fight the Barbary pirates "on the shores of Tripoli," who kept capturing our ships in the Mediterranean and enslaving their crews. This is yet another example of how foreign affairs keeps getting in the way of domestic priorities; in this case, the domestic priority of not wanting to spend money on a navy. Also, vitally, Jefferson's presidency really marks the last time in history when a Republican president didn't want to spend money on the military. Don't get me wrong: Democrats can do it too. I'm looking at you, LBJ. As much as he wanted to get rid of any trace of the Federalists, Jefferson found himself thwarted by that eminently conservative and undemocratic institution: the Supreme Court. Jefferson appointed Republicans to most government positions, but he couldn't do anything about the Supreme Court because they serve for life. And since the country was only like, 12 years old, they were all still pretty fresh. Most important among them was Chief Justice John Marshall, who happened to be a Federalist. Marshall was Chief Justice basically forever and is, without question, the most important figure in the history of the Supreme Court. He wrote a number of key opinions, but none was more important than the 1803 decision in Marbury v. Madison. Marbury v. Madison is so important because in that decision, the Supreme Court gave itself the power of judicial review, which allows it to uphold or invalidate federal laws. The Court then extended this power to state laws in Fletcher v. Peck and eventually even to executive actions. Like, we think of the main job of the Supreme Court being to declare laws unconstitutional, but that power isn't anywhere in the Constitution itself. Marbury v. Madison gave the Court that power and without it, the Supreme Court would probably be a footnote in American history. So unlike Marshall, Jefferson and the Republicans were big proponents of strict construction: the idea that the Constitution should be read as literally as possible as a way of limiting the power of the federal government. The problem is, there might be things the government wants to do that the Constitution didn't account for, like, for instance, buying a large tract of land from Napoleon, who, as we remember from Crash Course World History, complicates everything. Let's go to the Thought Bubble. So yeah, Jefferson basically doubled the size of the US in what came to be known as the Louisiana Purchase. Napoleon was eager to sell it, because the rebellion in Haiti had soured him on the whole idea of colonies, and also because he needed money.
Jefferson wanted to purchase New Orleans because western farmers were shipping their products through the city and when he approached France about this, Napoleon was like, "Hey! How bout I sell you, this!" Jefferson couldn't turn down that deal, so he bought the whole kit and caboodle for $15 million, which is worth about $250 million today. To put that into perspective, a new aircraft carrier costs about $4.5 billion, so he got a good deal. What's the problem with this? Well, nothing if you believe in a powerful government that can do stuff that's not in the Constitution. But if you're a strict Constructionist like Jefferson, you have to reconcile this obviously beneficial act with there being no mention in the Constitution of the president being able to purchase land in order to expand the size of the US. So laying scruple aside, Jefferson bought Louisiana and then sent Lewis and Clark to explore it, which they did, even going beyond the boundaries of the Purchase all the way to the Pacific. And this was so cool that it almost makes us forget that it was kind of unconstitutional and a huge power grab for the president. So the question is: why did he do it? Jefferson's desire to increase the size of the country prompted Federalists to complain that, "We are to give money, of which we have too little, for land, of which we already have too much." By doubling the size of the country, Jefferson could ensure that there would be enough land for every white man to have his own small farm. And this, in turn, would ensure that Americans would remain independent and virtuous. Because only a small farmer who doesn't have to depend on the market for food or shelter or anything really (well, except slaves), can be truly independent, and thus capable of participating in a nation of free men. Thanks, Thought Bubble. And this desire to create a nation of independent farmers producing only primary products helps explains Jefferson's other incredibly controversial policy: the embargo. Jefferson imposed the embargo in order to punish Britain for its practice of impressing American sailors, as well as its blockade of France, with whom Britain was once again, or possibly just still, at war. So basically, Jefferson wanted free trade among nations and his solution was to get Congress to forbid all American ships from sailing to foreign ports. The theory was that the British were so dependent on American primary products, like wood and cotton, that if we cut off trade with them, the British would stop impressing American sailors and end their blockade. What's the connection between free trade and Jefferson's agrarian ideal? Well, the idea was that America would trade its primary products for Europe's manufactured goods, so that the US wouldn't have to develop any manufacturing capacity of its own. Alas, or perhaps fortunately, this did not work. For one thing, Britain and France were too busy fighting each other even to notice America's embargo. So they just continued blockading and impressing. Also, the embargo devastated the American economy. I mean, exports dropped by 80%. Furthermore, not being able to import European manufactured goods only served to spur American manufacturing. I mean, Jefferson might have wanted Americans to be a bunch of self-sufficient farmers, but Americans wanted European manufactured stuff, like teapots and clocks and microwaves. Well, then how did they cook stuff, Stan? And if they couldn't get that stuff from Britain, they would just make it themselves. 
So in terms of Jefferson's agrarian ideal, the embargo was a massive failure. And lastly, the embargo limited the power of the federal government about as much as crystal meth limits cavities. I mean, imposing the embargo was a colossal use of federal power and it was also an imposition on people's liberties. The problem the embargo was supposed to solve didn't go away and, as we'll discuss next week, it eventually led to the US's first declared war. For now, I want to leave you with this: Thomas Jefferson is revered and reviled in almost equal measure in American history. The Declaration of Independence, which he mainly drafted, is a signal achievement delineating some heroic ideas for the founding of the United States, but also embedding some of its crucial shortcomings. And Jefferson's presidency is like that too. He claimed to champion small government, but he enlarged federal power more than Washington or Adams ever did. He imagined an agrarian republic, but his policies led to increased manufacturing. He wanted to foster freedom, but he owned slaves and took land from the Indians. In the end, Jefferson's life and policies encapsulate the best and the worst of us, which is why his presidency is still worth studying closely. I'll see you next week. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The associate producer is Danica Johnson. The show is written by my high school history teacher, Raoul Meyer, and myself. And our graphics team is Thought Cafe. If you have questions about today's video, please ask them in comments, where they'll be answered by our team of historians and we're also accepting submissions for the Libertage Captions. Thank you for watching Crash Course and, as we say in my hometown: don't forget to be awesome. Oh! That was a fake out, it's going this way!
US_History
Where_US_Politics_Came_From_Crash_Course_US_History_9.txt
Hi, I'm John Green, and this is Crash Course U.S. History, and now that we have a Constitution, it’s actually United States history. Today we’re going to look at the birth of America’s pastime. No, Stan, not baseball. Not football. Not eating. I mean politics, which in America has been adversarial since its very beginnings, despite what the founders wanted. [Theme Music] We looked at the first big conflict in American politics last week: Constitution or Articles of Confederation? I hope that I convinced you we made the correct choice, but regardless, we made it; the constitution passed. But immediately following the passage of the constitution a pretty fundamental conflict came up: what kind of a country should we be? Mr Green, Mr Green! The US is supposed to be the policeman of the world and keep the people in the green parts of Not-America from hurting themselves. Oh, Me From The Past, we don’t get into that stuff until 1823. [Patriotic Rock Music] So, one vision of America was put forward by Alexander Hamilton, who’d served in the war as Washington’s top aide and would go on to be his first Secretary of the Treasury and probably would have been President himself, had he not been born in the British West Indies. Hamilton had a strong personality, and as you can see, the beautiful wavy hair of a Caribbean god, and he had very definite ideas about what he wanted the future of America to look like: First, Hamilton wanted the country to be mercantile, which means that he believed that we should be deeply involved in world trade. Second, he wanted the U.S. to be a manufacturing powerhouse. We wouldn’t just buy and sell stuff; we would make it too. He even invested in a plan to make Patterson, New Jersey, a manufacturing hub, which of course ultimately failed, because New Jersey. But to make a manufacturing giant, he needed a strong government that could build infrastructure and protect patents. But you already knew that he was in favor of a strong government because, of course, he wrote so many of the Federalist Papers. Hamilton also envisioned an America that was governed primarily by the elite. His party, which came to be known as the Federalist Party, would be the one of “the rich, the able and the well-born.” I mean, just think if the federalist party had survived, we might have had a bunch of like, Bushes and Kennedys as president. Hamilton wanted America to be firmly affiliated with Great Britain. Which isn’t surprising, given his passion for elitism and trade. But there was an opposing view of what America should look like, and it is most associated with Thomas Jefferson. Let’s go to the Thought Bubble. Jefferson wanted an America that was predominantly agrarian, with most people being small scale subsistence level farmers. Maybe they would produce a little surplus for local markets, but certainly not for international consumers. There would be no international trade. And he didn’t want manufacturing either. This small scale local economy could best be served by a small scale, local government. It’s not a surprise to find that Jefferson’s sympathies lay with the anti-federalists, even though he benefited from the new constitution a little bit, since he eventually got to be president and everything. Unlike the elitist Hamilton, Jefferson was an avowed democrat, which meant that he distrusted concentrated power and privilege and believed that the masses could basically govern themselves. 
To him, government and concentrated economic power were greater threats to liberty than a tyrannical majority. Jefferson was a big fan of the French, and not only because he spent a fair amount of time in Paris as our ambassador there. He also liked the French because they fought with us in the war of independence against the British. And because, after 1789, he liked the way the French treated their aristocrats – that is, brutally. In general, Jefferson and his partisans who called themselves Republicans (although some current textbooks call them Democratic-Republicans just to make things incredibly confusing) preferred France just as the Hamiltonians preferred Britain. And this was a bit of a problem since France and England were pretty much constantly at war between 1740 and 1815. Thanks, Thought Bubble. So linked to these imagined Americas were the questions of how democratic we should be and how much free speech we should have. Jefferson and the Republicans wanted more democracy and more free speech, well, sort of I mean, during Washington’s presidency, Democratic-Republican Societies sprang up, the first opposition political parties. And in 1794, the Democratic-Republican society of Pennsylvania published an address which made the point that, “Freedom of thought, and a free communication of opinions by speech or through the medium of the press, are the safeguards of our Liberties.” The Federalists on the other hand saw too much free speech and democracy as a threat. And from this it sounds like the Republicans were “better democrats”, but it’s a lot more complicated than that. I mean, for one thing many Republicans, including Thomas Jefferson, were slaveholders, and slavery is kind of the opposite of Democracy. And for another, many were supporters of the French Revolution, and supporting the French Revolution after 1793 is pretty problematic. Because as you’ll remember from Crash Course World History, Robespierre was guillotining everyone, up until the point where he himself was guillotined. OK, so in the first real American presidential election there weren’t any political parties. There wasn’t even a campaign. The election was uncontested and George Washington won. He didn’t even have to run for office; he stood for it. Washington’s presidency is important for a number of precedents that he set, including the notion that a president should only serve two terms and the idea that even if he was a general the president should wear civilian clothing. But he wasn’t the real policy brains. Hamilton was. Washington probably wouldn’t have called himself a Federalist, but he backed Hamilton’s plan for a stronger nation. And to that end, Hamilton began the great American tradition of having a 5 point plan: Point 1: Establish the nation’s credit-worthiness. Hamilton realized that if the new nation wanted to be taken seriously it had to pay off its debts, most of which had come during the war. And to do this Hamilton proposed that the U.S. government assume the debts that the states had amassed. Point 2: Create a national debt. That’s something you don’t hear politicians say these days. Hamilton wanted to create new interest bearing bonds, hoping to give the rich people a stake in our nation’s success. Point 3: Create a Bank of the United States. This bank would be private and it would turn a profit for its shareholders but it would hold public funds and issue notes that would circulate as currency. 
And the bank would definitely be needed to house all the money that was expected to be raised from: Point 4: A Whiskey tax. Then, as now, Americans liked to drink. And one sure way to raise money was to set an excise tax on whiskey, which might reduce drinking on the margins or cause people to switch to beer. But what it would definitely do is hurt small farmers, who found the most profitable use of their grain was to distill it into sweet, sweet whiskey. So the Whiskey Tax really upset small farmers, as we will see in a moment. Point 5: Encourage domestic industrial manufacturing by imposing a tariff. For those of you who think that the U.S. was founded on free trade principles, think again. Now you will remember that the Republicans wanted an agrarian republic with freer trade, so they disliked pretty much all of Hamilton's plan. They also argued that none of this was in the Constitution, and they were right. This position of expecting government to be limited by the text of the constitution came to be known as strict construction. But the Republicans lacked a five point plan of their own, so their only hope of success was to shave Hamilton's five point plan down to four points, which is what they did. In 1790, many of the Republicans, who were Southerners like Jefferson, struck a bargain. They agreed to points 1-4 of Hamilton's plan in exchange for a permanent capital on the Potomac (in the South as opposed to the first two temporary capitals of the US in New York and Philadelphia). So the Hamiltonian economy won out. For a while. Probably the most immediately controversial aspect of Hamilton's program was the whiskey tax, and not just because people loved to drink. But also because farmers loved to turn their rye into whiskey, into profits. In 1794, western Pennsylvania farmers even took up arms to protest the tax, and that clearly could not stand. Washington actually led (at least for part of the way) a force of 13,000 men to put down this Whiskey Rebellion, becoming the only sitting president to lead troops in the field, and America continued to tax booze, as it does to this day. On the subject of foreign affairs, there was much more agreement. Just kidding. Hamilton wanted the U.S. to have close ties to Britain for commercial reasons, but Britain was perpetually at war with France, with whom the U.S. technically had a perpetual alliance. You know, because they helped us with the American Revolution, they gave us the Statue of Liberty, and Marion Cotillard, etc. And the French Revolution made things even more complicated, because Republicans liked it but Federalists, being somewhat conservative and elitist, were afraid of it. This was especially true when French emissary Citizen Genet showed up in 1793 and started hiring American ships to attack British ones. Britain in response began impressing American sailors, which sounds fun, but it isn't. It doesn't mean the British sailors wowed Americans with their awesome mermaid tattoos, it means they kidnapped them and forced them to serve in the British navy. Washington dispatched Secretary of State John Jay to deal with the impressment issue and he negotiated the boringly named Jay Treaty, which improved trade relations between the U.S. and Britain and said absolutely nothing about impressment or American shipping rights. For the rest of his term, Washington just tried to ignore the problem, thereby inaugurating another presidential tradition: kicking big foreign policy problems down the line for future presidents.
By the end of his presidency, George Washington was somewhat disillusioned by politics. His famous call for unity said that "with slight shades of difference, you Americans have the same religion, manners, habits and political principles." Washington warned against the "baneful effects of the spirit of party generally," saying that "it agitates the community with ill-founded jealousies and false alarms; kindles the animosity of one part against another; foments occasionally riot and insurrection. It opens the door to foreign influence and corruption." Still, by the time the diminutive John Adams took over as the second president, Americans had already divided themselves into two groups: elitist Federalists and Republicans who stood for freedom and equality and... Oh, it's time for the Mystery Document? The rules here are simple: I guess the author of the mystery document. If I am right, I do not get shocked; if I am wrong, I do get shocked. All right, let's give it a go. "Yes, ye lordly, ye haughty sex, our souls are by nature equal to yours; the same breath of God animates, enlivens, and invigorates us; Were we to grant that animal strength proved any thing, taking into consideration the accustomed impartiality of nature, we should be induced to imagine, that she had invested the female mind with superior strength as an equivalent for the bodily powers of man. But waiving this however palpable advantage, for equality only we wish to contend." So the author of the Mystery Document is a badass woman. So we have here an argument, and a bit of a snarky one, for equality between men and women. All right, I can do this. Elizabeth Cady Stanton is too young, also, probably not funny enough. Ahhh, bleergh. Stan, my official guess is Sarah Grimké. [buzzing sound] [groaning] DANG IT! AHHHH! JUDITH SARGENT MURRAY? [groan] Well, you know, as part of the patriarchy I probably deserve this anyway. So Judith Sargent Murray reminds us that once unleashed, ideas like liberty and equality spread to places where neither the male Federalists nor the male Republicans wanted them to go. But back to Adams: his election in 1796 exposed a big flaw in our electoral system. Because the vice presidency went to whoever had the second highest total number of electoral votes, and that person happened to be Thomas Jefferson, we ended up with a situation where the president and the vice president were on opposite sides of the political spectrum, which was not good. So they changed the constitution, but not until after the next election which featured another screw up. We are awesome at this. Side note: The electoral college system would continue to misrepresent the will of the American voters, most notably in 1876, 1888, and 2000, but also in every election. Domestically, Adams continued Hamilton's policies, but Adams' presidency is best known for foreign problems, especially the way Adams' administration totally overreacted to problems with France. Because we were trying to maintain good commercial relationships with England, and England was perpetually at war with France, France ended up in a "quasi-war" with the United States despite our eternal alliance. They disrupted our shipping; we felt nervous about their increasingly violent revolution. And then, after 3 French emissaries tried to extort a bribe from the U.S.
government as part of negotiations – the so called “X, Y, Z affair" because we didn’t want to give the names of these bribe-seeking French scoundrels – the American public turned against France, somewhat hysterically, as it will. Taking advantage of the hysteria, Adams pushed through the Alien and Sedition Acts. The Alien Act lengthened the period of time it took to become a citizen, and the Sedition Act made it a crime to criticize the government. Among the more famous people prosecuted under the Sedition Act was Matthew Lyon, a Congressman from Vermont who was jailed for saying that John Adams was maybe not the best president ever. And while in jail, Lyon won reelection to Congress, which might indicate just how popular this law wasn’t. It was so unpopular that Virginia and Kentucky’s legislatures passed resolutions against it, claiming that it violated Americans’ liberties and that state legislatures had the power to overturn or nullify any federal law that they found to violate the constitution. This whole business of nullification and states rights? It will return. The Alien and Sedition acts were allowed to lapse under Jefferson, and they didn’t lead to widespread arrests of everyone who called Adams a tyrant or expressed admiration for the French Revolution. And even though they weren’t popular, they didn’t doom the Federalist party either, even though no Federalist was elected president after Adams. But the Alien and Sedition Acts and the response to them from Virginia and Kentucky are important, because they show us how unsettled American politics were in the first decade of this country's existence. Even something as basic as freedom of speech was up for grabs as America tried to figure out what kind of country it was going to be. That’s important to think about when studying American history, but it’s also important to think about when looking at new democracies. You might think that Thomas Jefferson winning the presidential election of 1800 settled all this stuff, but it wasn’t so simple. It never is, really. Thanks for watching. I’ll see you next week. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The associate producer is Danica Johnson. The show is written by my high school history teacher Raoul Meyer and myself. And our graphics team is Thought Café. If you have questions about today’s video, particularly if they’re actual questions, and not passive-aggressive attempts to impose your ideology upon other people, leave them in comments, where they will be answered by our team of historians. We’re also accepting your submissions for Libertage captions. Thanks for watching Crash Course, and as we say in my hometown, Don’t Forget To Be Awesome.
US_History
World_War_II_Part_1_Crash_Course_US_History_35.txt
Hi, I'm John Green, this is Crash Course U.S. history, and today we're going to talk about a topic so huge to history buffs that we can only discuss a tiny, little fraction of it. I am of course referring to paratroopering. No, World War II. World War II is the only historical event that has, like, its own cable channel. Well, I should say it used to have its own cable channel. These days the History Channel is of course devoted primarily to lumberjacks and oh my gosh is that guy really going to shoot an alligator. Who knew how nostalgic we could be for documentaries about Joseph Stalin. Mr. Green, Mr. Green. Finally we get to the good stuff: like Patton, and Rommel, and Churchill, and Eisenhower, Stalingrad, Gomer Pyle! Oh I'm sorry to disappoint you, Me From the Past, but while Patton and Eisenhower were Americans, Rommel was a German (or General Monty Montgomery's dog). Regardless, they were both from the green parts of not-America. Also, no Americans fought at the battle of Stalingrad, although we did talk about that in Crash Course World History. And Gomer Pyle was a television character played by Jim Nabors. I believe that you mean to refer to the journalist Ernie Pyle. [Intro] So here at Crash Course we like to focus on causes and effects of wars rather than strategy and tactics, but given the importance that World War II has in the American imagination, we're gonna discuss those a bit too today. We're going to defy Maria von Trapp and start before the very beginning, because America's ideas about foreign policy were shaped by two things: The Great Depression and World War I. After the American experience of World War I, it's not surprising that Americans were just a smidge gun shy about involvement in foreign affairs. Seriously, Stan? A gun pun? Now? No. Now America actually came out of World War I stronger than ever but man did a lot of people die for not much change. I mean I guess the Treaty of Versailles sort of re-made Europe, but it didn't make it better. And the League of Nations was a flop and generally there was a lot of disappointed idealism. The period of time between 1920 and the U.S. entry into World War II has been called an age of isolationism, although that isn't 100% accurate. I mean, for one thing the U.S. sponsored a series of arms reduction negotiations that resulted in the Washington treaties limiting the number of battleships that a country could possess. But of course those negotiations led to a fat lot of nothing because the idea of a nation limiting its battleships was a bigger joke even than the League of Nations, which I will remind you, we invented and then did not join. Another way that the U.S. was less than isolationist was our pursuance of the Good Neighbor Policy with Latin America. So called because we were not a good neighbor. Our idea was to be less intrusive in Latin American politics, and we did remove troops from the Dominican Republic and Haiti, which was something but "good neighbor" is a bit of an exaggeration. I mean we continued to support repressive dictators like Somoza in Nicaragua and Batista in Cuba. You know, we'd never really been great neighbors. However, we were isolationist in the sense that the United States was much less involved in world trade, largely because of the Depression, which, you know, meant that there wasn't much world trade, but also because of tariff policies.
But there was also something isolationist about the formal actions of Congress, like after Europe and Asia began to become belligerent in the 1930s with Japan’s invasion of China, and Italy’s invasion of Ethiopia, and the rise of fascist dictators in Spain and, of course Germany, Congress responded by passing a series of Neutrality Acts which banned the sale of arms to belligerents. Even if they were really nice, tea-drinking belligerents who we were pals with. And that points to another reason why people tend to regard this as a time of isolationist sentiment, our old friend Eurocentrism. We were generally neutral in terms of foreign intervention when it came to Europe. Popular groups, like America First with celebrity members from Charles Lindbergh to E. E. Cummings cautioned against involvement in foreign affairs. But they mostly meant European affairs. The U.S. didn’t officially get involved in the war until two years after Hitler invaded Poland but America was deeply involved in the European war before we actually sent troops. FDR really wanted to help the Allies, especially the Brits, who after the French surrender in 1940 were the only ones actually fighting the Nazis until 1941, when there were a whole lot of Russians also fighting them. Even Congress recognized that the Nazis were a threat, and in 1940 it agreed to allow Cash and Carry arms sales to Great Britain. By the way, “Cash and Carry” is the name of a liquor store near Stan’s house, but anyway the sale of arms were “cash” sales meaning that they were not paid for with loans or IOUs and the carry part meant that the British would carry their own arms over, you know, to Britain. It’s the difference between buying a pizza at a grocery store and getting it delivery, except, you know, it’s not like that at all and I just want pizza. Then, in September 1940 Congress created the nation’s first peacetime draft, taking the next step toward involvement. And that was a huge deal because, you know, you don’t muster an army with no desire to eventually use it. By 1941, in spite of all our neutrality, FDR had pretty clearly sided with the Allies. America became the “arsenal of democracy” with the Lend Lease Act authorizing military aid to countries that promised to pay it back somehow after the war. We promise, we’ll figure it out. So, the U.S. essentially gave billions of dollars worth of arms and war material to Britain and, after the Nazis invaded in June of 1941, to the USSR as well. And the U.S. also froze Japanese assets here and basically ended all trade between America and Japan. But of course the event that pushed us fully into the war happened on December 7, 1941 when Japanese pilots attacked the American naval base at Pearl Harbor in Hawaii. 187 aircraft were destroyed, 18 naval vessels were damaged or destroyed, and more than 2000 American servicemen were killed. FDR asked Congress for a declaration of war, which they granted voting 477 to 1. And the day after that, Germany declared war on the United States and World War II officially became a world war. We almost always start the American story of World War II in Europe because, you know, Hitler, so I’m going to start in the Pacific, where until 1944 there were actually more American personnel deployed than in Europe. Things didn’t start well in the Pacific. Let’s go to the ThoughtBubble. Perhaps worse than Pearl Harbor was the surrender of 78,000 American and Filipino troops at Bataan. 
This was the largest surrender by American troops in history and it resulted in thousands dying on the Bataan Death March to prisoner of war camps where thousands more would die. But in May of 1942 we protected Australia from the Japanese fleet by winning the Battle of the Coral Sea, and then in June we won a huge victory at Midway island, midway between Hawaii and Japan I guess, and probably named by historians. The U.S. strategy in the Pacific has been called Island Hopping and it involved taking Japanese controlled islands one at a time to be used as bases for bombers that could then be used against Japan itself. It was a slow process and the fighting over these jungle-y South Pacific islands was fierce and extraordinarily costly. The battle at Guadalcanal went from August 1942 to February 1943 and they didn’t freeze like in Stalingrad, but conditions weren’t much better. And now let’s switch to the European theater. We call this the European war because we were fighting against Europeans and it ended in Europe, but the first U.S. troops to fight against Nazis actually did so in North Africa, so it’s kind of a misnomer. American weaponry was pretty poor but after our initial invasion in North Africa in November 1942 we got into it, and by 1943 we and the British defeated Rommel in the desert and we were ready to invade Europe, which should have made Stalin happy because up to this point Russians had been doing the bulk of the dying in the war. But Stalin wasn’t happy, first because he was a mean and nasty person and those kinds of people are rarely happy, and secondly, because rather than invading France and striking at Germany more directly, the Allies invaded Sicily and Italy where we fought for most of 1943 and much of 1944 until finally, on June 6th we joined some Brits and Canadians in invading Normandy on D-Day. And that was the beginning of the end for the Nazis. Thanks, ThoughtBubble. Oh it’s time for the Mystery Document already? Alright. The rules here are simple. I read the Mystery Document and usually I get it wrong and I get shocked. “They seemed terribly pathetic to me. They weren’t warriors. They were American boys who by mere chance of fate had wound up with guns in their hands, sneaking up a death-laden street in a strange and shattered city in a faraway country in a driving rain. They were afraid, but it was beyond their power to quit. They had no choice. They were good boys. I talked with them all afternoon as we sneaked slowly forward along the mysterious and rubbled streets, and I know they were good boys. And even though they weren’t warriors born to the kill, they won their battles. That was the point.” Man, that is some good writing, Stan. By famous war correspondent Ernie Pyle. Pewwww. That was me being a warrior. Pew, pew. I can’t even make finger guns. That’s-that’s how much of a not-warrior I am. I’m a worrier. I knew it was Ernie Pyle for two reasons. First, he’s talking about cities so it’s the European theatre. Secondly, he’s the best European theatre American writer in World War II by far. So while Americans did liberate Paris and were part of the final assault on Germany, and also liberated a number of concentration camps, Russians did most of the fighting in Europe, losing at least 20 million people, and in the end it was the Russians who captured Berlin. 
Although the Nazis never really had a chance to win the war after they started fighting the Russians and the Americans entered into it, it didn’t actually end until May 8th or 9th, 1945 (depending on when you got the news) And the war in the Pacific continued until August. Japan surrendered unconditionally after the United States dropped an atomic bomb on Hiroshima on August 6th and on Nagasaki on August 9th. We don’t celebrate the end of World War II in the United States, and I guess this is because we would have to decide whether to celebrate the end of the war in Europe or in Japan. Or maybe it’s just because it’s difficult to celebrate the use of atomic weapons. Atomic bombs were developed through the Manhattan Project, so called because the bombs were partly invented in Chicago and then built and tested in New Mexico. Trickery. That was the sort of covert thing the U.S. used to do really well before we developed the Internet. Although we weren’t that good at it since the Soviets did steal our technology and build a nuclear bomb like three years later. The two atomic bombs that were eventually dropped were the most destructive weapons the world had ever seen. The one dropped on Hiroshima killed 70,000 people instantly and by the end of 1945 another 70,000 had died from radiation poisoning. The bomb that was dropped on Nagasaki also killed 70,000 people. In fact, the death toll from those two bombs was greater than the number of American fatalities in the entire Pacific War. And that leads to one of the most hotly debated questions in recent history: was the use of atomic bombs justified or ethical? Those arguing against their use often point out that the Truman administration had good evidence that Japan would surrender if they were allowed to keep their emperor on the throne. And some also point out that the primary targets were not military, although there were 40,000 troops stationed in Hiroshima. Others argue that the real reason the United States dropped the bombs was to threaten the USSR, and prevent them from taking more territory in the east. And then there’s the argument that using such a destructive weapon was morally reprehensible because it was so destructive as to be qualitatively different from other weapons. For a couple centuries, our weapons had had the theoretical capability of eliminating all humans, but never before had it been so easy. But others reply that dropping the bombs helped save American lives. Some of Truman’s advisers worried that invasion of Japan would result in 250,000 American deaths and at least that many Japanese deaths. And that’s important to note because if there was one thing truly, horribly innovative about World War II, it was bombing. Sure there was radar and jets, but they weren’t nearly as significant as aerial bombardment, and by the time the a-bombs dropped, the idea of precision bombing only military targets wasn’t an option, in part because bombing was incredibly risky to planes and pilots. And by 1945, it was an acceptable and widespread strategy to target civilians as part of a total war. In World War II perhaps 40% of the estimated 50 million people killed were civilians. Compare that with World War I, where it was only 10%. We should be horrified that 140,000 people were killed in Hiroshima, but we should be horrified by all the civilian attacks in World War II. 25,000 people died in Dresden, more than 100,000 died in the firebombing of Tokyo in March of 1945. 
Thinking about Truman’s decision to drop the atomic bombs is important because it forces us to consider our understanding of history. Part of why we say that using atomic bombs was worse than conventional bombing was because we know what came after – the Cold War, the threat of nuclear annihilation. From the present, the dawn of atomic warfare is indeed terrifying. But people living at the time were living amid a different kind of terror and they couldn’t have known that there would be a nuclear arms race that threatened all of humanity. The Japanese didn’t look like they were going to give up and people on both sides were dying every day, so before we pass judgment, let’s try to put ourselves in the shoes of both the soldiers who were fighting, who didn’t have to fight on mainland Japan, and the civilians who were killed by the bombs. There’s no answer to be found there, but the opportunity of studying history is the opportunity to experience empathy. Now of course we’re never going to know what it’s like to be someone else, to have your life saved or taken by decisions made by the Allied command. Studying history and making genuine attempts at empathy helps us to grapple with the complexity of the world, not as we wish it were, but as we find it. Thanks for watching. I’ll see you next week. Crash Course is made through the combined efforts of all of these people and it exists because of you and your support through Subbable.com. Subbable is a voluntary subscription service that allows you to support Crash Course directly so that we can keep this show for free, forever, for everyone. You can check out our Subbable by clicking right there or there’s also a link in the video info. There are lots of great perks, but the greatest perk of all is knowing that you are making this show possible. Thank you so much. Thanks for watching. And as we say in my hometown, don’t forget to be awesome.
US_History
War_Expansion_Crash_Course_US_History_17.txt
Hi, I’m John Green, this is Crash Course U.S. history and today we’re going to discuss how the United States came to acquire two of its largest states, Texas and…there is another one. Mr. Green! Mr. Green! I believe the answer you’re looking for is Alaska. Oh me from the past, as you can clearly tell from the globe, Alaskan statehood never happened. No I am referring of course to California. Stan, are we using your computer today? Oh. Stan! We’ve talked about westward expansion a few times here on Crash Course, but it’s usually about, like, Kentucky or Ohio. This time we’re going really west, I mean, not like Hawaii west, but sea to shining sea west. [Theme Music] So you might remember that journalist John O’Sullivan coined the phrase Manifest Destiny to describe America’s god given right to take over all the land between the Atlantic and Pacific Oceans, regardless of who happened to be living there. Sorry Native Americans, Mexicans, French fur trappers, beavers, bison, prairie dogs, passenger pigeons. I’m not going to go so far as to give God credit for America’s internal imperialism, but I will say that our expansion had a lot to do with economics, especially when you consider Jefferson’s ideas about the empire of liberty. Stan, did I just say liberty? That means technically I also have to talk about slavery, but we’re gonna kick the slavery can down the road until later in the show. Just like American politicians did in the 19th century. By 1860 nearly 300,000 people had made the trip that has been immortalized by the classic educational video game “Oregon Trail.” Which, by the way, is inaccurate in the sense that a family of 6, even a very hungry one, cannot eat a buffalo. But is extremely accurate in that a lot of people died of dysentery and cholera. Frickin disease. So, Oregon at the time was jointly controlled by the U.S. and Britain. Northern Mexico at the time included what are now Texas, Arizona, Utah, Nevada, New Mexico, and California. But New Mexico and California were the only two with, like, big settlements. About 30,000 Mexicans lived in New Mexico, and about 3,500 in California, and in both places they were outnumbered by Native Americans. OK, let’s go to the Thought Bubble. When Mexico became independent, there were only about 2,000 Tejanos there. So to encourage economic development, Mexico’s government granted a huge tract of land to Moses Austin. Austin’s son Stephen made a tidy profit selling off smaller parcels of that land until there were 7,000 Americans there. This made Mexico nervous so, backpedalling furiously, Mexico annulled the land contracts and banned further emigration into Texas. Even though slavery was already abolished in Mexico, up to now they had allowed Americans to bring slaves. Austin, joined by some Tejano elites, demanded greater autonomy and the right to use slave labor. Thinking the better of it, Mexican General Antonio Lopez de Santa Ana decided to assert control over the restive territory with an army, turning the elite’s demands for autonomy into a full-scale revolt for independence. On March 13, 1836, Santa Ana defeated the American defenders of the Alamo, killing 187 (or 188, sources differ) Americans including Davy Crockett. The Texas rebels would “remember the Alamo” and come back to defeat Santa Ana at the battle of San Jacinto. And Mexico was forced to recognize Texas’s independence. So Texas became the Lone Star Republic and quickly decided that it would be much better to be a less lonely star and join the United States.
So, in 1837, Texas’ Congress called for union but all they heard back was, “not so fast, Texas.” Why? Because Texas wanted to be a slave state, and adding another slave state would disrupt the balance in the Senate, so Jackson and Van Buren did what good politicians always do: they ignored Texas. And then after Martin Van Buren wrote a letter denouncing any plan to annex Texas on the grounds that it would probably provoke a war, Democratic convention southerners threw their support behind slaveholding Andrew Jackson pal, James K. Polk. Polk just managed to get a presidential victory over perennial almost-president Henry Clay, and seeing the writing on the wall, Congress annexed Texas in March of 1845, days before Polk took office. Congress then forged an agreement with Britain to divide Oregon at the 49th parallel, which restored the slave state/free state balance in the Senate. Thanks, Thought Bubble. Hey, Stan, can I get the foreshadowing filter? I wonder if we’re going to be able to keep that slave state/free state balance...forever. The land-hungry James K. Polk had another goal as president: acquire California from Mexico. He tried to purchase it from Mexico, but they were like, “No,” which is Spanish for “No.” So Polk decided to do things the hard way. He sent troops under future president Zachary Taylor into this disputed border region. As expected, by which I mean intended, fighting broke out between American and Mexican forces. Polk, in calling for a declaration of war, claimed that the Mexicans had “shed blood upon American soil.” Although the soil in question was arguably not American, unless you think of America as being, you know, all of this. A majority of Americans supported this war. Although to be fair, a majority of Americans will support almost any war. I’m sorry, but it is true. At least at first. It was the first war fought by American troops primarily on foreign soil, as most of the fighting was done in Mexico. Among the dissenters was a Massachusetts Transcendentalist who is probably better known than the war itself. Henry David Thoreau was in fact thrown in jail for refusing to pay taxes in protest of the war, and wrote “On Civil Disobedience” in his defence, which many American high-schoolers are assigned to read and expected not to understand, lest they take the message to heart and stop doing assignments like reading “On Civil Disobedience.” Another critic was concerned about the increase in executive power that Polk seemed to show, saying: “Allow the president to invade a neighboring country whenever he shall deem it necessary to repel an invasion and you allow him to make war at pleasure” That critic was none other than noted peacenik Abraham Lincoln, who would go on to do more to expand executive power than any president in the 19th century except maybe Andrew Jackson. Right so Santa Ana’s army was defeated in February 1847 but Mexico refused to give up. So Winfield Scott, who had the unfortunate nickname “old fuss and feathers,” captured Mexico City itself in September. A final peace treaty, the Treaty of Guadalupe Hidalgo was signed in 1848, under which Mexico confirmed the annexation of Texas and further ceded California as well as several other places that would later become states, but we couldn’t fit on the map. In return, the U.S. paid Mexico $15 million and agreed to a no-backsies deal in re: Texas, thereby freeing Mexico from the shackles of Amarillo. I’m sorry Amarillians. No I’m not. I am. I am. I’m not. I am. This is great, Stan. 
The people of Amarillo hate me, also the people of New Jersey, Alaska is in the green-parts-of-not-America. We don’t even have Arizona and New Mexico on the chalkboard. Pretty soon I will have alienated everyone. Anyway, thanks to the land from Mexico, our dream of expanding from the Atlantic to the Pacific was finally complete. And as always happens when dreams come true, trouble started. After the treaty of Guadalupe Hidalgo, between 75,000 and 100,000 Spanish-speaking Mexicans and 150,000 Native Americans were under the jurisdiction of the United States. Despite the fact that the treaty granted Spanish descended Mexican “male citizens” legal and property rights, the Mexicans were still seen as inferior to Anglo-Saxons whose manifest destiny it was, of course, to overspread the continent. And the fact that these Mexicans were Catholic didn’t help either, especially because in the eastern part of the United States, there was a rising tide of anti-Catholic, anti-immigrant sentiment known as nativism. And there was a new political party, The American Party, dedicated entirely to such sentiment. They were referred to as the “Know-nothings” because when you asked them about their politics they would answer that they didn’t know anything. And indeed, they didn’t. This was not an expert branding strategy, although they did manage to win an unexpected number of local offices in a state heralded for its ignorance … Massachusetts. You thought I was going to say New Jersey, but I’m trying to make nice with the New Jersey people because they take it pretty personally. Meanwhile, in California, there weren’t enough white, English speaking American residents to apply for statehood – Until gold was discovered in 1848, leading of course to San Francisco’s NFL team, the San Francisco 48ers. By 1852, the non-Indian population in California had risen from 15,000 to 200,000 and it was 360,000 on the eve of the Civil War. Now not all of those migrants – mainly young men seeking their fortunes – were white. Nearly 25,000 Chinese people migrated to California, most as contract workers working for mining and railroad companies. And there were women, too, who ran restaurants, and worked as cooks, and laundresses, and prostitutes. But the ratio of men to women in California in 1860 was 3:1. Aw shmerg. It’s time for the Mystery Document? The rules here are simple. I read the Mystery Document and I’m either shocked by electricity or by the fact that I got it right. “We would beg to remind you that when your nation was a wilderness, and the nation from which you sprung barbarous, we exercised most of the arts and virtues of civilized life; that we are possessed of a language and a literature, and that men skilled in science and the arts are numerous among us; that the productions of our manufactories, our sail, and workshops, form no small share of commerce of the world; and that for centuries, colleges, schools, charitable institutions, asylums, and hospitals have been as common as in your own land. And we beg to remark, that so far as the history of our race in California goes, it stamps with the test of truth that we are not the degraded race you would make us.” So it’s someone who said that “we” had a great civilization when “you” were a wilderness. Plus they called us “barbarous,” so it’s either ancient Rome or China. I’m gonna lean toward China. That only gets me halfway there. Now I have to think of the name of the person. And I don’t know any famous people from mid-19th century China who lived in the U.S. 
People say I can’t sing. [buzzing] Norman Asing? Who the hell is Norman Asing? This doesn't- AAHHHH!! So these days California is known for its groovy, laid back, “oh your back hurts?" here’s some pot” attitude, but that was not the case in the 19th century. The California constitution of 1850 limited civil participation to whites – no Asians, no Black people or Native Americans could vote or testify in court. Indians were kicked off their land if it had any mineral value, and thousands of their orphaned children were sold as slaves. And all of this led to the Indian population of California dropping from 150,000 to about 30,000 between 1848 and 1860. So it wasn’t at all clear whether California was the kind of place to be admitted to the U.S. as a free state or as a slave state. The Missouri Compromise was of no help here because half of California is below the 36 30 line, and half is above it. So, a new “Free Soil” party formed in 1848 calling for the limiting of slavery’s expansion in the west so that it could be open for white people to live and work. I just want to be clear that most of the people who were for limiting slavery were not, like, un-racist. So, they nominated the admirably-whiskered Martin Van Buren for the presidency, and Van Buren and Democratic nominee Lewis Cass then split the northern vote, allowing the aforementioned Zachary Taylor, to win. So in 1850, when California finally did ask to be admitted into the Union, it was as a free state. Southerners freaked out because they saw it as the beginning of the end of slavery, but then, to the rescue came Henry Clay, for his last hurrah. He said, “We can kick this problem down the road once more” and brokered a four-part plan that became known rather anticlimactically as the Compromise of 1850. Historians, can you name nothing?! The four points were: 1. California would be admitted as a free state 2. The slave trade, but not slavery, would be outlawed in Washington D.C. 3. A new, super harsh fugitive slave law would be enacted, and 4. Popular sovereignty The idea was that in the remaining territories taken from Mexico, the local white inhabitants could decide for themselves whether the state would be slave or free when it applied to be part of the U.S. Ah, the Compromise of 1850. A great reminder that nothing protects the rights of minorities like the tyranny of the majority. There was a huge debate over the bill in which noted asshat John C. Calhoun was so sick that he had to have his pro-slavery, anti-compromise remarks read by a colleague. On the other side, New York’s Senator William Seward, an abolitionist, also argued against compromise, based on slavery being, you know, wrong. But, eventually the compromise did pass, thus averting a greater crisis for ten whole years. Ralph Waldo Emerson predicted that if the United States acquired part of Mexico, it would be like swallowing arsenic. And indeed, arsenic can be a slow-acting poison. Now I don’t think Ralph Waldo Emerson was a good enough writer to have thought that far ahead, but he was right. Some people say that manifest destiny made the Civil War inevitable. But, as we’ll see next week, what really made the Civil War inevitable was slavery. But, we see in the story of manifest destiny the underlying problem: the United States didn’t govern according to its own ideals. It didn’t extend liberties to Native Americans or Mexican Americans or immigrant populations or slaves. Thanks for watching, and we’ll see you next week, when things will get much worse. 
Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The show is written by my high school history teacher Raoul Meyer and myself. Our associate producer is Danica Johnson. And our graphics team is Thought Café. If you’d like to contribute to the libertage, you can suggest captions. You can also ask questions in comments where they will be answered by our team of historians. Thank you for watching Crash Course and as we say in my hometown, don’t forget to be awesome.
US_History
The_Clinton_Years_or_the_1990s_Crash_Course_US_History_45.txt
Hi, I’m John Green, this is CrashCourse U.S. history, and today we have finally reached the Clinton years. Bill Clinton and I are really quite similar, actually. We were both brought up in the South. We both come from broken families … well, no, not actually. Also, I did not attend any Ivy League University. Yeah, I’m actually nothing like Bill Clinton. Well, except for the southern thing, and also both of us are married to women who are smarter than we are. Mr. Green, Mr. Green? But he was president. Whatever, I’m still young Me From the Past! Clinton wasn’t even governor of Arkansas until he was like (looks at computer)....oh, crap, he was 32, I’m finished! INTRO So Clinton’s presidency was focused on Domestic Policy and a sex scandal – in fact his campaign war room famously featured a sign that read “It’s the Economy, stupid.” His domestic legacy is pretty complex, though, so we’re going to start with his foreign policy. The Clinton years didn’t feature as many major foreign policy successes as Bush 41, but Clinton did have his moments. Like his administration achieved a partial success with the 1993 Oslo Accords when Israel recognized the legitimacy of the Palestine Liberation Organization. However, that eventually resulted in the PLO becoming progressively less powerful and as you may have noticed, it didn’t ultimately achieve peace in the Middle East. Clinton was more successful in Yugoslavia where he pushed NATO to actually do something for once in this case bombing, sending troops, kinda something. Now there had been widespread ethnic cleansing of Bosnian Muslims before the NATO intervention but the fighting ended with the Dayton Accords. And then there’s the Rwandan genocide, which the Clinton administration did absolutely nothing to prevent and where 800,000 people died in less than a month. The Rwandan genocide is probably the international community's greatest failure in the 2nd half of the 20th century and while certainly Clinton was among many people who were complicit in that including like, me, yeah… you know... so far it’s not such a great foreign policy record. Terrorism also became a bigger issue during Clinton’s presidency. The World Trade Center was bombed for the 1st time, the U.S.S. Cole was attacked. But the most destructive terrorist act during Clinton’s presidency was of course committed by Americans - Timothy McVeigh and Terry Nichols who blew up the Murrah Federal building in Oklahoma City. Which brings us to an awkward transition from domestic terrorism to domestic policy. So Bill Clinton was from Hope Arkansas and he ran as a centrist democrat who wanted to do things differently. He wasn’t going to be inside the Washington beltway. And he wasn’t going to be some old-fashioned liberal who was all about raising taxes, funneling billions of dollars to Snuffleupagus. That centrism made him very electable but his first few domestic agenda items faltered, like he tried to end the ban on gay people entering the military but opposition led him to compromise with the famous Don’t Ask Don’t Tell policy. Essentially you were allowed to be homosexual, if you were in the military, you just weren’t allowed to acknowledge it. And then there was the 1993 Health Care initiative led by Clinton’s wife, Hillary, which was also a failure. By the 90’s the United States was the last industrialized nation not to have universal health care and while Hillary Clinton’s plan would have resulted in Americans having universal health care it was too complicated to sell to us.
Also, it faced very powerful opposition from like drug companies, and insurers, and medical device makers… lots of people. But at least it had a working website. What’s that, Stan? There was no web? What did they use, like a mobile app or something? There was no apps? I thought we were in modern history! So on the heels of these failed policy initiatives in 1994 Democrats were swept out of Congress and Republicans took control of both the Senate and the House. The new speaker of the House, whose real name was Newt Gingrich, and who would later run for president despite being named Newt Gingrich issued something called the Contract with America. It promised to cut government, cut taxes, cut regulation, overhaul welfare and end affirmative action -- and this led to a Government shutdown in 1995 over an inability to reach a budget agreement between the Congress and the president. Which in turn made all these new Congressional Republicans very unpopular with the American people as a whole and played into Clinton’s political strategy of “triangulation.” His strategy was to campaign against radical republicans while co-opting some of their ideas. The most obvious example was his declaration in January 1996 that “The era of big government is over”. Spoiler alert: It wasn’t. There has been no president since WWII who decreased the size of the government. And that will change when never because all of the things that actually cost the government a lot of money like Social Security and Medicare are very popular and both of those programs benefit old people who vote disproportionately because they have nothing to do since Murder She Wrote was cancelled. However, Clinton did actually shrink parts of the government with policies like the Telecommunications Act of 1996, which deregulated broadcasting. But Clinton’s signature economic policy was Welfare Reform – aka the Personal Responsibility and Work Opportunity Act – of 1996. This law replaced the Aid to Families with Dependent Children program, which had given money directly to poor mothers. But with Clinton’s welfare reform states received block grants that came with strings attached including work requirements and time limits for total benefits. Welfare rolls plummeted and many economists see this as the rare bipartisan victory in the 1990’s but it’s still controversial and many liberal people felt like Bill Clinton had betrayed them. But Clinton still remained popular through much of his presidency largely because it really is the economy stupid - and the economy got better. In fact by the time Clinton left office unemployment was below 4% which hadn’t happened since the 1960s. That meant there should have been inflation but somehow there wasn’t, possibly because of increased global competition that kept wages down and also energy prices that were remarkably low as worldwide oil production increased. Microchips made it possible to develop loads of new products, like personal computers and DVD players, and video games, and cell phones, and Crash Course. And computers completely transformed the American workplace. I mean until the 90’s people would go to work, and they would sit in their offices at their desks, and they would… I don’t know what did because they didn’t have computers! How did anything get done before computers, I mean how were books written, how was the Godfather edited, how was this globe made, I mean did some individual’s human hand sculpt it from clay? 
So no wonder the economy got better we had stumbled on the biggest innovation since like wheels. And during the Clinton administration we didn’t just have computers we had computers that began to connect to each other. I’m referring of course to the Internet which might have remained like a military communications network if computer scientists and entrepreneurs hadn’t worked out how to use it to sell things. This was the beginning of the e-commerce boom, which would be followed by an e-commerce bust, but then another e-commerce boom, which would eventually give us websites where you can buy Crash Course DVD’s, like DFTBA.com, and also lesser known e-commerce sites like Ebay and Amazon. Oh, it’s time for the mystery document? The rules here are simple. I read the mystery document, I either get the author correct, or I get shocked. Okay here we go. “The information highway will extend the electronic marketplace and make it the ultimate go-between, the universal middleman. Often the only humans involved in a transaction will be the actual buyer and the seller. All the goods for sale in the world will be available for you to examine, compare, and often customize. When you want to buy something you’ll be able to tell your computer to find if for you at the best price offered by any acceptable source or ask your computer to “haggle” with the computers of various sellers. Information about vendors and their products and services will be available to any computer connected to the highway. Servers distributed worldwide will accept bids, resolve offers into completed transactions, control authentication and security, and handle all other aspects of the marketplace, including the transfer of funds. This will carry us into a new world of low-friction, low-overhead capitalism, in which market information will be plentiful and transaction costs low. It will be a shopper’s heaven.” Stan, that sounds like something that Amazon founder Jeff Bezos would say. No? Dangit, Bill Gates. Let me tell you how much I enjoy this, none. Oh, the information super highway it made all of this possible including my shock pen. Ahhh! Now one of the lessons of history is that good news for someone is almost always bad news for someone else and that was certainly the case with the longest period of economic expansion in American history. Increased use of Information Technology facilitated the globalization of manufacturing and the pressure to manufacture cheaply pushed wages down and encouraged companies to locate factories in countries with lower environmental regulations and also lower wages. That’s great for companies, it’s good for prices, arguably good for workers in the developing world, not so great for the environment or for American workers. The deregulation of finance also contributed to global growth. Capital could flow more easily anywhere in the world but this also meant that it could flow out easily, making financial crises more likely and more widespread. The growth of free flowing capital in the 1990’s created a world in which the crash of 2008 was more or less inevitable. But before that we had the crash of 2000. As money flowed into the stock market, bubbles developed. And in some ways this was more problematic than it used to be because a much greater percentage of Americans had become investors in stocks - an actual majority of them by the year 2000. And many of these investors were buying into these hot new dot-com stocks, in fact the tech-heavy NASDAQ exchange soared in 1998 and 1999. 
And then it lost 80% of its value in 2000 when the bubble burst. It turns out that the Pets.com business model of selling you dog food at a loss is not a sustainable business model. Although to be fair Amazon has been selling stuff at a loss now for 20 years and they’re still at it. So… you know… maybe I’m wrong. So during this period real wages grew but the gains were very unequal like when you adjust for inflation, wages of nonsupervisory workers remained below what they were in the 1970s. And for the poor it was even worse. Our old friend Eric Foner reports that “Average after-tax income of the poorest 1/5 of Americans fell 12 percent, and that of the middle 1/5 decreased by 3 percent.”[1] Meanwhile, the income of the top fifth increased 38%. Now of course this trend towards inequality and the majority of jobs being created in low wage, insecure, service industries would continue into the 21st century. But the economic and political pictures that we’ve sought to paint only tell half of the story of the 1990s, because it was also a decade characterized by what has been called the Culture Wars. A big part of this was immigration, which rose enormously after immigration reform in 1965. Between 1965 and 2000 the US saw almost 24 million immigrants arrive, compared with 27 million during the peak immigration period between 1880 and 1924. Fully half of new immigrants came from Latin America and the Caribbean, 35% came from Asia, only 10% came from Europe and most of them were from the former USSR and the Balkans. As had always been the case, most immigrants were attracted by labor opportunities, but now more were highly educated. In fact, 40% had college educations. Let’s go to the thoughtbubble. Latinos were the largest immigrant group by far, with Mexicans making up the largest contingent and by 2007 Latinos would replace African Americans as the second largest ethnic group. Latinos suffered disproportionate poverty, and, despite significant economic gains during the 1990s, African Americans still found their economic opportunities limited. According to Eric Foner, “In 2007, the total assets of the median white family […] stood at $87,000. For black families, the figure was $5,400.”[2] Diversity also increased in other ways like single parent families became more accepted which was essential as 50% of marriages ended in divorce. Out-of-wedlock births declined, primarily because teenagers were practicing safer sex. And teens and adults were cohabiting before or instead of marriage. Eventually the Mom, Dad, and 2.4 kids standard American household became only one of a number of accepted options for families. Gay and trans people became increasingly visible in the national consciousness as a result of the GLBT rights movement and it becoming safer for people to come out of the closet. On the other hand, the AIDS epidemic, which disproportionately affected the GLBT community was disastrous. By 2000 400,000 Americans had died of AIDS. Then there’s the depressing rise in imprisonment. Politicians competed with each other to see who could be tougher on crime and as the War on Drugs continued, many state legislatures passed “three strikes” laws meaning that people who were convicted of three felonies would go to prison for life. The number of Americans in prison skyrocketed. By 2008 it was 2.3 million, ONE QUARTER of the total number of inmates on planet Earth. Thanks, thoughtbubble. 
Although I have to say I thought this was going to be a happy one, I mean the economy is growing, things are getting better for people in the GLBTQ community, and then boom, boom, boom, it’s all terrible! I don’t want to underplay the many benefits of our increased prosperity and diversity but all of this multiculturalism and change made for a very tense political atmosphere. To some people it seemed like the open free-wheeling liberalism of the 60’s had run amuck, and those people really started to hate the Clintons. But among Bill Clinton’s many flaws – fecklessness, cigar smoking – his biggest was his inability to stop cheating on his wife. Clinton had dodged accusations of extramarital skoodilypooping while running for the presidential nomination which contributed to his unfortunate “Slick Willie” moniker. But while he was president, Clinton’s former employee Paula Jones sued him for sexual harassment that had occurred, allegedly occurred, I guess it probably occurred, allegedly occurred while he was governor of Arkansas. While gathering evidence for that lawsuit, investigators discovered that the president had carried on a sexual relationship with a young intern named Monica Lewinsky. The President denied having “sexual relations” with Monica Lewinsky which was a lie unless you are President Bill Clinton and have a very narrow definition of “sexual relations.” That lie to a justice department official was the basis for articles of impeachment for perjury and obstruction of justice. And so it was that the president of the United States was impeached for saying that he didn’t have sex with a woman that he did have sex with, unless of course you define sex very narrowly, and it all depends on what your definition of is is, and etc. In early 1999, Clinton was acquitted of these charges in a congressional vote that went right down party lines and he served out the remainder of his term but he was significantly weakened. Also, he served out the remainder of his presidency sleeping on the couch. So the 90s were a really pivotal decade to the world we live in right now, a globalized, multicultural, instagram-filtered world. But as we became more globally connected political divisions grew within the United States. And this became especially problematic because with the growth of the Internet it was easier than ever to only hear voices that you already know you agree with. To live inside of an echo chamber where your news doesn’t necessarily resemble your neighbor’s news. In some ways Bill Clinton directed these changes but in most ways they directed him. But that’s what I find so fascinating about history, even the fancy people who get their heads on the chalkboard, even they are subject to historical forces. Thanks for watching. I’ll see you next week. Crash Course is made with all the help of these nice people and it’s possible because of your support through Subbable.com, a voluntary subscription service that allows you to pay monthly for Crash Course whatever you want so that we can keep it free for everyone forever. You’ll find lots of cool perks at Subbable like a chance to sponsor videos, and signed posters and stuff, so please check it out. You can click here on my face or there is a link in the video info below. Thank you so much for watching Crash Course, thanks for making it possible, and as we say in my hometown, “Don’t forget to be awesome.” ________________ [1] Foner, Give Me Liberty!, ebook version, p. 1141. [2] Foner, Give Me Liberty!, ebook version, p. 1150.
US_History
Civil_Rights_and_the_1950s_Crash_Course_US_History_39.txt
Episode 39: Consensus and Protest: Civil Rights Hi, I’m John Green, this is Crash Course U.S. history and today we’re going to look at one of the most important periods of American social history, the 1950s. Why is it so important? Well, first because it saw the advent of the greatest invention in human history: Television. Mr. Green, Mr. Green! I like TV! By the way, you’re from the future. How does the X-Files end? Are there aliens or no aliens? No spoilers, Me From The Past, you’re going to have to go to college and watch the X-Files get terrible just like I did. No it’s mostly important because of the Civil Rights Movement. We’re going to talk about some of the heroic figures like Martin Luther King and Rosa Parks, but much of the real story is about the thousands of people you’ve never heard of who fought to make America more inclusive. But before we look at the various changes that the Civil Rights Movement was pushing for, we should spend a little time looking at the society that they were trying to change. The 1950s has been called a period of consensus, and I suppose it was, at least for the white males who wrote about it and who all agreed that the 1950s were fantastic for white males. Consensus culture was caused first, by the Cold War – people were hesitant to criticize the United States for fear of being branded a communist, and, second, by affluence – increasing prosperity meant that more people didn’t have as much to be critical of. And this widespread affluence was something new in the United States. Between 1946 and 1960 Americans experienced a period of economic expansion that saw standards of living rise and gross national product more than double. And unlike many previous American economic expansions, much of the growing prosperity in the fifties was shared by ordinary working people who saw their wages rise. To quote our old friend Eric Foner, “By 1960, an estimated 60 percent of Americans enjoyed what the government defined as a middle-class standard of living.”[1] And this meant that increasing numbers of Americans had access to things like television, and air conditioning, and dishwashers and air travel. That doesn’t really seem like a bonus. Anyway, despite the fact that they were being stuffed into tiny metal cylinders and hurtled through the air, most Americans were happy because they had, like, indoor plumbing and electricity. intro The 1950s was the era of suburbanization. The number of homes in the United States doubled during the decade, which had the pleasant side effect of creating lots of construction jobs. The classic example of suburbanization was Levittown in New York, where 10,000 almost identical homes were built and became home to 40,000 people almost overnight. And living further from the city meant that more Americans needed cars, which was good news for Detroit where cars were being churned out with the expectation that Americans would replace them every two years. By 1960, 80% of Americans owned at least one car and 14% had two or more. And car culture changed the way that Americans lived and shopped. I mean it gave us shopping malls, and drive thru restaurants, and the backseat makeout session. I mean, high school me didn’t get the backseat makeout session. But, other people did! I did get the Burger King drive thru though. And lots of it. Our whole picture of the American standard of living, with its abundance of consumer goods and plentiful services, was established in the 1950s.
And so, for so many people this era was something of a “golden age” especially when we look back on it today with nostalgia. But there were critics, even at the time. So when we say the 1950s were an era of consensus, one of the things we’re saying is there wasn’t much room for debate about what it meant to be an American. Most people agreed on the American values: individualism, respect for private property, and belief in equal opportunity. The key problem was that we believed in equal opportunity, but didn’t actually provide it. But some people were concerned that the cookie cutter vision of the good life and the celebration of the middle class lifestyle was displacing other conceptions of citizenship. Like the sociologist C. Wright Mills described a combination of military, corporate, and political leaders as a power elite whose control over government and the economy was such as to make democracy an afterthought. In The Lonely Crowd sociologist David Riesman criticized Americans for being conformist and lacking the rich inner life necessary to be truly independent. And John Kenneth Galbraith questioned an Affluent Society that would pay for new cars and new missiles but not for new schools. And we can’t mention the 1950s without discussing teenagers since this was the decade that gave us Rock and Roll, and rock stars like Bill Haley and the Comets, Buddy Holly and the Crickets, and Elvis Presley and his hips. Another gift of the 1950s was literature, much of which appeals especially to teenagers. Like, the Beats presented a rather drug-fueled and not always coherent criticism of the bourgeois 1950’s morals. They rejected materialism, and suburban ennui and things like regular jobs while celebrating impulsivity, and recklessness, experimentation and freedom. And also heroin. So you might have noticed something about all those critics of the 1950s that I just mentioned: they were all white dudes. Now, we’re gonna be talking about women in the 1950s and 1960s next week because their liberation movement began a bit later, but what most people call the Civil Rights Movement really did begin in the 1950s. While the 1950s were something of a golden age for many blue and white collar workers, it was hardly a period of expanding opportunities for African Americans. Rigid segregation was the rule throughout the country, especially in housing, but also in jobs and in employment. In the South, public accommodations were segregated by law, while in the north it was usually happening by custom or de facto segregation. To give just one example, the new suburban neighborhoods that sprang up in the 1950s were almost completely white and this remained true for decades. According to Eric Foner, “As late as the 1990s, nearly 90 percent of suburban whites lived in communities with non-white populations less than 1 percent.” And it wasn’t just housing. In the 1950s half of black families lived in poverty. When they were able to get union jobs, black workers had less seniority than their white counterparts so their employment was less stable. And their educational opportunities were severely limited by sub-standard segregated schools. Now you might think the Civil Rights Movement began with Rosa Parks and the Montgomery Bus Boycott or else Brown v. Board of Education, but it really started during WW2 with efforts like those of A. Philip Randolph and the soldiers taking part in the Double-V crusade. But even before that, black Americans had been fighting for civil rights.
It’s just that in the 1950s, they started to win. So, desegregating schools was a key goal of the Civil Rights movement. And it started in California in 1946. In the case of Mendez v. Westminster the California Supreme Court ruled that Orange County, of all places, had to desegregate their schools. They’d been discriminating against Latinos. And then, California’s governor, Earl Warren, signed an order that repealed all school segregation in the state. That same Earl Warren, by the way, was Chief Justice when the landmark case Brown v. Board of Education came before the Supreme Court in 1954. The NAACP Legal Defense Fund under the leadership of Thurgood Marshall had been pursuing a legal strategy of trying to make states live up to the ruling in Plessy v. Ferguson that required all public facilities to be separate but equal. They started by bringing lawsuits against professional schools like law schools, because it was really obvious that the three classrooms and no library that Texas set up for its African American law students were not equal to the actual University of Texas’s law school. But the Brown case was about public schools for children. It was actually a combination of 5 cases from 4 states, of which Brown happened to be alphabetically the first. The Board of Education in question incidentally was in Topeka Kansas, not one of the states of the old Confederacy, but nonetheless a city that did restrict schooling by race. Oh, it’s time for the Mystery Document? The rules here are simple. I read the Mystery Document. If I’m wrong, I get shocked. "Segregation of white and colored children in public schools has a detrimental effect upon the colored children. The impact is greater when it has the sanction of the law, for the policy of separating the races is usually interpreted as denoting the inferiority of the negro group. A sense of inferiority affects the motivation of a child to learn. Segregation with the sanction of law, therefore, has a tendency to [retard] the educational and mental development of negro children and to deprive them of some of the benefits they would receive in a racial[ly] integrated school system. [Footnote 10]"[2] Stan, the last two weeks you have given me two extraordinary gifts and I am thankful. It is Earl Warren from Brown v. Board of Education. Huzzah! Justice Warren is actually quoting from sociological research there that shows that segregation itself is psychologically damaging to black children because they recognize that being separated out is a badge of inferiority. Alright, let’s go to the Thought Bubble. The Brown decision was a watershed but it didn’t lead to massive immediate desegregation of the nation’s public schools. In fact, it spawned what came to be known as “Massive Resistance” in the South. The resistance got so massive, in fact, that a number of counties, rather than integrate their schools, closed them. Prince Edward County in Virginia, for instance, closed its schools in 1959 and didn’t re-open them again until 1964. Except they didn’t really close them because many states appropriated funds to pay for white students to attend “private” academies. Some states got so into the resistance that they began to fly the Confederate Battle flag over their state capitol buildings. Yes, I’m looking at you Alabama and South Carolina. On December 1, 1955, Rosa Parks refused to move to the back of a bus in Montgomery, Alabama and got arrested, kicking off the Montgomery Bus Boycott that lasted almost a year.
A lot of people think that Parks was simply an average African American working woman who was tired and fed up with segregation, but the truth is more complicated. Parks had been active in politics since the 1930s and had protested the notorious Scottsboro Boys case. She had served as secretary for the NAACP and she had begun her quest to register to vote in Alabama in 1943. She failed a literacy test three times before becoming one of the very few black people registered to vote in the state. And in 1954 she attended a training session for political activists and met other civil rights radicals. So Rosa Parks was an active participant in the fight for black civil rights long before she sat on that bus. The Bus Boycott also thrust into prominence a young pastor from Atlanta, the 26 year old Martin Luther King Jr. He helped to organize the boycott from his Baptist church, which reminds us that black churches played a pivotal role in the Civil Rights Movement. That boycott would go on to last for 381 days and in the end, the city of Montgomery relented. Thanks, Thought Bubble. So that was, of course, only the beginning for Martin Luther King, who achieved his greatest triumphs in the 1960s. After Montgomery, he was instrumental in forming the Southern Christian Leadership Conference, a coalition of black civil rights and church leaders who pushed for integration. And they needed to fight hard, especially in the face of Massive Resistance and an Eisenhower administration that was lukewarm at best about civil rights. But I suppose Eisenhower did stick up for civil rights when forced to, as when Arkansas Governor Orval Faubus used the National Guard to prevent the integration of Little Rock’s Central High School by 9 black students in 1957. Eisenhower was like, “You know, as the guy who invaded Normandy, I don’t think that’s the best use for the National Guard.” So, Eisenhower sent the 101st Airborne Division (not the entirety of it, but some of it) to Little Rock, Arkansas, to walk kids to school. Which they did for a year. After that, Faubus closed the schools, but at least the federal government showed that it wouldn’t allow states to ignore court orders about the Constitution. In your face, John C. Calhoun. Despite the court decision and the dispatching of Federal troops, by the end of the 1950s fewer than two percent of black students attended integrated schools in the South. So, the modern movement for Civil Rights had begun, but it was clear that there was still a lot of work to do. But the emergence of the Civil Rights Movement shows us that the picture of consensus in the 1950s is not quite as clear-cut as its proponents would have us believe. Yes, there was widespread affluence, particularly among white people, and criticism of the government and America generally was stifled by the fear of appearing to sympathize with Communism. But there was also widespread systemic inequality and poverty in the decade that shows just how far away we were from living the ideal of equal opportunity. That we have made real progress, and we have, is a credit to the voices of protest. Next week we’ll see how women, Latinos, and gay people added their voices to the protests and look at what they were and were not able to change in the 1960s. Thanks for watching. I’ll see you then. Crash Course is made with the help of all of these nice people and it’s possible because of your support through Subbable.com. 
Subbable is a voluntary subscription service that allows you to subscribe to Crash Course at the price of your choosing, including zero dollars a month. But hopefully more than that. There are also great perks you can get, like signed posters. So if you like and value Crash Course, help us keep it free for everyone forever by subscribing now at Subbable. You can click on my face. Now, my face moved, but you can still click on it. Thanks again for watching Crash Course and as we say in my hometown, don’t forget to be awesome. ________________ [1] Foner, Give Me Liberty!, ebook version, p. 992. [2] http://supreme.justia.com/cases/federal/us/347/483/case.html
US_History
Reconstruction_and_1876_Crash_Course_US_History_22.txt
Episode 21: Reconstruction Hi, I’m John Green, this is Crash Course U.S. History and huzzah! The Civil War is over! The slaves are free! Huzzah! That one hit me in the head? It’s very dangerous, Crash Course. So when you say, “Don’t aim at a person,” that includes myself? The roller coaster only goes up from here, my friends. Huzzah! Mr. Green, Mr. Green, what about the epic failure of Reconstruction? Oh, right. Stupid Reconstruction always ruining everything intro So after the Civil War ended, the United States had to reintegrate both a formerly slave population and a formerly rebellious population back into the country, which is a challenge that we might’ve met, except Abraham Lincoln was assassinated and we were left with Andrew “I am the Third Worst President Ever” Johnson. I’m sorry, Abe, but you don’t get to be in the show anymore. So, Lincoln’s whole post-war idea was to facilitate reunion and reconciliation, and Andrew Johnson’s guiding Reconstruction principle was that the South never had a right to secede in the first place. Also, because he was himself a Southerner, he resented all the elites in the South who had snubbed him, AND he was also a racist who didn’t think that blacks should have any role in Reconstruction. TRIFECTA! So between 1865 and 1867, the so-called period of Presidential Reconstruction, Johnson appointed provisional governors and ordered them to call state conventions to establish new all-white governments. And in their 100% whiteness and oppression of former slaves, those new governments looked suspiciously like the old confederate governments they had replaced. And what was changing for the former slaves? Well, in some ways, a lot. Like, Fiske and Howard universities were established, as well as many primary and secondary schools, thanks in part to The Freedman’s Bureau, which only lasted until 1870, but had the power to divide up confiscated and abandoned confederate land for former slaves. And this was very important because to most slaves, land ownership was the key to freedom, and many felt like they’d been promised land by the Union Army. Like, General Sherman’s Field Order 15, promised to distribute land in 40 acre plots to former slaves. But that didn’t happen, either through the Freedman’s Bureau or anywhere else. Instead, President Johnson ordered all land returned to its former owners. So the South remained largely agricultural with the same people owning the same land, and in the end, we ended up with sharecropping. Let’s go to the Thought Bubble. The system of sharecropping replaced slavery in many places throughout the South. Landowners would provide housing to the sharecroppers--no, Thought Bubble, not quite that nice. There ya go--also tools and seed, and then the sharecroppers received, get this, a share of their crop--usually between a third and a half, with the price for that harvest often set by the landowner. Freed blacks got to control their work, and plantation owners got a steady workforce that couldn’t easily leave, because they had little opportunity to save money and make the big capital investments in, like, land or tools. By the late 1860s, poor white farmers were sharecropping as well--in fact, by the Great Depression, most sharecroppers were white. And while sharecropping certainly wasn’t slavery, it did result in a quasi-serfdom that tied workers to land they didn’t own--more or less the opposite of Jefferson’s ideal of the small, independent farmer. 
So, the Republicans in Congress weren’t happy that this reconstructed south looked so much like the pre-Civil War south, so they took the lead in reconstruction after 1867. Radical Republicans felt the war had been fought for equal rights and wanted to see the powers of the national government expanded. Few were as radical as Thaddeus “Tommy Lee Jones” Stephens who wanted to take away land from the Southern planters and give it to the former slaves, but rank-and-file Republicans were radical enough to pass the Civil Rights Bill, which defined persons born in the United States as citizens and established nationwide equality before the law regardless of race. Andrew Johnson immediately vetoed the law, claiming that trying to protect the rights of African Americans amounted to discrimination against white people, which so infuriated Republicans that Congress did something it had never done before in all of American history. They overrode the Presidential veto with a 2/3rds majority and the Civil Rights Act became law. So then Congress really had its dander up and decided to amend the Constitution with the 14th amendment, which defines citizenship, guarantees equal protection, and extends the rights in the Bill of Rights to all the states (sort of). The amendment had almost no Democratic support, but it also didn’t need any, because there were almost no Democrats in Congress on account of how Congress had refused to seat the representatives from the “new” all-white governments that Johnson supported. And that’s how we got the 14th amendment, arguably the most important in the whole Constitution. Thanks, Thought Bubble. Oh, straight to the mystery document today? Alright. The rules here are simple. I guess the author of the Mystery Document and try not to get shocked. Alright let’s see what we’ve got today. Sec. 1. Be it ordained by the police jury of the parish of St. Landry, That no negro shall be allowed to pass within the limits of said parish without special permit in writing from his employer. Sec. 4. . . . Every negro is required to be in the regular service of some white person, or former owner, who shall be held responsible for the conduct of said negro.. Sec. 6. . . . No negro shall be permitted to preach, exhort, or otherwise declaim to congregations of colored people, without a special permission in writing from the president of the police jury. . . . Gee, Stan, I wonder if the President of the Police Jury was white. I actually know this one. It is a Black Code, which was basically legal codes where they just replaced the word “slave” with the word “negro.” And this code shows just how unwilling white governments were to ensure the rights of new, free citizens. I would celebrate not getting shocked, but now I am depressed. So, okay, in 1867, again over Johnson’s veto, Congress passed the Reconstruction Act, which divided the south into 5 military districts and required each state to create a new government, one that included participation of black men. Those new governments had to ratify the 14th amendment if they wanted to get back into the union. Radical Reconstruction had begun. So, in 1868, Andrew Johnson was about as electable in the U.S. as Jefferson Davis, and sure enough he didn’t win. Instead, the 1868 election was won by Republican and former Union general Ulysses S. Grant. 
But Grant’s margin of victory was small enough that Republicans were like, “Man, we would sure win more elections if black people could vote.” Which is something you hear Republicans say all the time these days. So Congressional Republicans pushed the 15th Amendment, which prohibited states from denying men the right to vote based on race, but not based on gender or literacy or whether your grandfather could vote. So states ended up with a lot of leeway when it came to denying the franchise to African Americans, which of course they did. So here we have the federal government dictating who can vote, and who is and isn’t a citizen of a state, and establishing equality under the law--even local laws. And this is a really big deal in American history, because the national government became, rather than a threat to individual liberty, “the custodian of freedom,” as Radical Republican Charles Sumner put it. So but with this legal protection, former slaves began to exercise their rights. They participated in the political process by direct action, such as staging sit-ins to integrate street-cars, by voting in elections, and by holding office. Most African Americans were Republicans at the time, and because they could vote and were a large part of the population, the Republican party came to dominate politics in the South, just like today, except totally different. Now, Southern mythology about the age of radical Reconstruction is exemplified by Gone with the Wind, which of course tells the story of northern Republican dominance and corruption by southern Republicans. Fortune seeking northern carpetbaggers, seen here, as well as southern turncoat scalawags dominated politics and all of the African American elected leaders were either corrupt or puppets or both. Yeah, well, like the rest of Gone with the Wind, that’s a bit of an oversimplification. There were about 2,000 African Americans who held office during Reconstruction, and the vast majority of them were not corrupt. Consider for example the not-corrupt and amazingly-named Pinckney B.S. Pinchback, who from 1872 to 1873 served very briefly in Louisiana as America’s first black governor. And went on to be a senator and a member of the House of Representatives. By the way, America’s second African American governor, Douglas Wilder of Virginia was elected in 1989. Having African American officeholders was a huge step forward in term of ensuring the rights of African Americans because it meant that there would be black juries and less discrimination in state and local governments when it came to providing basic services. But in the end, Republican governments failed in the South. There were important achievements, especially a school system that, while segregated, did attempt to educate both black and white children. And even more importantly, they created a functioning government where both white and African American citizens could participate. According to one white South Carolina lawyer, “We have gone through one of the most remarkable changes in our relations to each other that has been known, perhaps, in the history of the world.” That’s a little hyperbolic, but we are America after all. (libertage) It’s true that corruption was widespread, but it was in the North, too. I mean, we’re talking about governments. And that’s not why Reconstruction really ended: It ended because 1. things like schools and road repair cost money, which meant taxes, which made Republican governments very unpopular because Americans hate taxes, and 2. 
White southerners could not accept African Americans exercising basic civil rights, holding office or voting. And for many, the best way to return things to the way they were before reconstruction was through violence. Especially after 1867, much of the violence directed toward African Americans in the South was politically motivated. The Ku Klux Klan was founded in 1866 and it quickly became a terrorist organization, targeting Republicans, both black and white, beating and murdering men and women in order to intimidate them and keep them from voting. The worst act of violence was probably the massacre at Colfax, Louisiana where hundreds of former slaves were murdered. And between intimidation and emerging discriminatory voting laws, fewer black men voted, which allowed white Democrats to take control of state governments in the south, and returned white Democratic congressional delegations to Washington. These white southern politicians called themselves “Redeemers” because they claimed to have redeemed the south from northern republican corruption and black rule. Now, it’s likely that the South would have fallen back into Democratic hands eventually, but the process was aided by Northern Republicans losing interest in Reconstruction. In 1873, the U.S. fell into yet another not-quite-Great economic depression and northerners lost the stomach to fight for the rights of black people in the south, which in addition to being hard was expensive. So by 1876 the supporters of reconstruction were in full retreat and the Democrats were resurgent, especially in the south. And this set up one of the most contentious elections in American history. The Democrats nominated New York Governor (and NYU Law School graduate) Samuel Tilden. The Republicans chose Ohio governor (and Kenyon College alumnus) Rutherford B. Hayes. One man who’d gone to Crash Course writer Raoul Meyer’s law school. And another who’d gone to my college, Kenyon. Now, if the election had been based on facial hair, as elections should be, there would’ve been no controversy, but sadly we have an electoral college here in the United States, and in 1876 there were disputed electoral votes in South Carolina, Louisiana, and, of course, Florida. Now you might remember that in these situations, there is a constitutional provision that says Congress should decide the winner, but Congress, shockingly, proved unable to accomplish something. So they appointed a 15 man Electoral Commission--a Super-Committee, if you will. And there were 8 Republicans on that committee and 7 Democrats, so you will never guess who won. Kenyon College’s own Rutherford B. Hayes. Go Lords and Ladies! And yes, that is our mascot. Shut up. Anyway in order to get the Presidency and win the support of the supercommittee, Hayes’ people agreed to cede control of the South to the Democrats and to stop meddling in Southern affairs and also to build a transcontinental railroad through Texas. This is called the Bargain of 1877 because historians are so good at naming things and it basically killed Reconstruction. Without any more federal troops in Southern states and with control of Southern legislatures firmly in the hands of white democrats the states were free to go back to restricting the freedom of black people, which they did. Legislatures passed Jim Crow laws that limited African American’s access to public accommodations and legal protections. 
States passed laws that took away black people’s right to vote and social and economic mobility among African Americans in the south declined precipitously. However, for a brief moment, the United States was more democratic than it had ever been before. And an entire segment of the population that had no impact on politics before was now allowed to participate. And for the freedmen who lived through it, that was a monumental change, and it would echo down to the Civil Rights movement in the 1950s and 1960s, sometimes called the second reconstruction. But we’re gonna end this episode on a downer, as we are wont to do here at Crash Course US History because I want to point out a lesser-known legacy of Reconstruction. The Reconstruction amendments and laws that were passed granted former slaves political freedom and rights, especially the vote, and that was critical. But to give them what they really wanted and needed, plots of land that would make them economically independent, would have required confiscation, and that violation of property rights was too much for all but the most radical Republicans. And that question of what it really means to be “free” in a system of free market capitalism has proven very complicated indeed. I’ll see you next week. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The associate producer is Danica Johnson. The show is written by my high school history teacher, Raoul Meyer, and myself. And our graphics team is Thought Café. Every week there’s a new caption for the libertage. You can suggest those in comments where you can also ask questions about today’s video that will be answered by our team of historians. Thank you for watching Crash Course. Don’t forget to subscribe. And as we say in my hometown, don’t forget to be awesome.
US_History
Obamanation_Crash_Course_US_History_47.txt
Hi, I’m John Green, this is CrashCourse U.S. history and we’ve finally done it, we’ve reached the end of history! Ahh. Stan says that history never ends but whatever, we’ve reached the part that Me From the Present is in. So we aren’t going to cover the astonishing results of the 2016 presidential election. What we are going to do is try to talk about historical events that are also current events. Future John Green here to tell you that in a stunning turn of events the 2020 presidential election will be won by - Harry Styles. I know that he’s English and under 35 but we’re going to change the constitution to make it possible. Because… that’s how much we love Harry Styles in 2020. Intro So when we last left George W Bush, his approval rating was dropping to the lowest number in President of the United States approval rating history. And the U.S. was facing what turned out to be the 2nd worst economic crisis in the past 150 years. A crisis that remains unnamed because we’re kind of still in it. But I’d like to propose a couple names: the Wall Street Wamboozle, the Financial Fartstorm. However, knowing historians they will inevitably call it like The Major Recession of 2008 - 2012. Booooo! So what caused the Financial Fartstorm that began in late 2008? Well, it was a mixture of public and private activities that tilted towards short-term economic thinking, speculation and irresponsible spending. First, there were the Federal Reserve’s policies of keeping interest rates freakishly low in response to the 2001-2002 recession. Now, this worked, but the recession ended and interest rates stayed low. And this, combined with unscrupulous mortgage lenders, encouraged people to buy houses that they could not afford. In the early 2000s, many millions of Americans, including certain Crash Course US History hosts, bought real estate assuming that its value would increase rapidly and forever, so that when you were unable to make payments, you’d just sell it, pay off the mortgage, and make a tidy profit. It turns out this was essentially a pyramid scheme and, my friends, I was not at the top of the pyramid. So back then you could buy a house with a so-called NINJA loan, which sadly did not involve mutant ninja turtles or pizza. NINJA stands for No Income, No Job, and No Assets. Traditionally, people in this situation can’t borrow hundreds of thousands of dollars, but in the early 2000s, these loans were given the benign-sounding designation “subprime.” So it’s important to understand that it wasn’t just like big Wall Street banks financing huge deals with debt, regular people were doing it - like me. All this created a classic housing bubble, which was doomed to burst. Also, with the interest on government Treasury Bills effectively zero, investors had to look elsewhere for better returns, which led to the idea of issuing securities – these bond-like instruments that were backed by mortgages. The thinking was that the interest people paid on their mortgages would supply the underlying value of the security, the way that like tax revenues are the source of value of a government bond. Now of course there’d be a minority of people who’d fail to pay off their securitized mortgages, but most people would pay because, you know, they’d want to keep their houses. But it turns out that if you haven’t paid any money to own your house, you don’t feel all that invested in it. 
Now there are even more reasons why these securities were terrible ideas, but the important thing is that when the mortgages turned bad, these securities became toxic assets. Basically, the people who held them suddenly didn’t know what they were worth if anything and banks overreacted to this uncertainty as banks like to do by not lending out any money. And that’s called a credit freeze, which is very bad. So that’s how a housing bubble turned into a full-fledged financial crisis. Alright Let’s go to the ThoughtBubble. When banks stop lending, business can’t function. So the stock market collapsed, with the Dow Jones Industrial Average dropping from above 14,000 to around 8,000 which wiped out about $7trillion of shareholder wealth. And a majority of Americans had money invested in the stock market, much of it in the form of retirement funds. With it being harder to borrow money, Americans finally cut back on their spending, which resulted in many businesses failing and by the end of 2008, 2.5 million jobs had been lost, many of which were in manufacturing and construction. And because those were both male-dominated fields, it led to another change, by mid-2009 more women than men held paying jobs for the first time in American history. In the last three months of 2008 and the first three months of 2009 our GDP dropped 6%. And World Trade cratered and that led to unemployment and misery worldwide. The event that triggered the chaos was the failure of the investment bank Lehman Brothers in September, just 2 months before the presidential election. The Bush Administration tried to stop the damage by getting Congress to pass the Troubled Assets Relief Program, or TARP, which was basically a $700 billion bailout for banks like Citigroup and Bank of America, insurance companies like AIG and mortgage insurers Fannie Mae and Freddie Mac. Regular individuals also received tarps, but they had to buy them and they weren’t as cool. Anyway, these bailouts were probably necessary to stop a complete failure of the financial system but they were very unpopular. Most of the banks that received a rescue from the taxpayers didn’t help the homeowners facing foreclosure, and despite receiving millions of federal dollars, AIG continued to pay huge bonuses to its top executives. Thanks, Thoughtbubble. So, the end of the Bush years looked a lot like the end of the Hoover years. After a decade of Americans spending more than they had, government taking a back seat to business interests and deregulation of industries going hand in hand with increasing corruption, Barack Obama was faced with America’s biggest economic challenge since the Great Depression. Oh, by the way, we got a new president, Barack Obama, who 50 years before his election couldn’t have sat in the front of a bus in Alabama. So I know all the green parts of not-America are mad at us for causing the great financial meltdown and whatever, and fair enough, but we do make some progress now and again Barack Obama was young, he was relatively new on the national scene, and represented change. He appealed to young people and minorities, and he harnessed the power of social media to communicate with supporters, and get out the vote, and also raise TONS of money. Also, he was on the cover of US Weekly. You didn’t see Martin Van Buren on the cover of US Weekly. What’s that? It didn’t exist? Of course it existed! 
At the time, Obama’s election seemed a political watershed and not just because he was the first African American president. He appeared to break Republicans’ solid hold on the south, he won Virginia, North Carolina, and Florida, and his supporters represented a coalition of African Americans, Hispanics, white liberals and, especially, young people. Oh, it’s time for the mystery document? I hope it’s from that US Weekly profile. The rules here are simple - he said for the final time. I’ll tell you one thing that’s going to change here at Crash Course: no more shock pens. I guess the author of the Mystery Document; if I’m wrong, I get shocked. Alright, let’s see what we got here today. “For everywhere we look, there is work to be done. The state of our economy calls for action, bold and swift. And we will act, not only to create new jobs, but to lay a new foundation for growth. We will build the roads and bridges, the electric grids and digital lines that feed our commerce and bind us together. We’ll restore science to its rightful place, and wield technology’s wonders to raise health care’s quality and lower its cost. We will harness the sun and the winds and the soil to fuel our cars and run our factories. And we will transform our schools and colleges and universities to meet the demands of a new age. All this we can do. All this we will do.” Speaking of things that we will do, I will get this right: it’s Barack Obama’s first inaugural! Ba-bam! No more shocks. The getting shocked part of my life has come to an end. Hopefully in Crash Course Literature when I get things right I’ll get a puppy and when I get things wrong I’ll get a rainbow! Stan says that my only reward is not being punished. So Obama promised to change the culture of Washington. He would end partisan squabbling…. sorry I couldn’t even get through that sentence. To be fair, he did end the squabbling, it became full blown yelling. He also wanted a foreign policy based on diplomacy, he wanted to reduce inequality and increase access to health care, he wanted to curb “greed and irresponsibility” that had helped bring on the economic crisis, and he wanted to end the Bush tax cuts. He also wanted to end the wars in Iraq and Afghanistan and, as critics mocked, reverse global warming. That’s a tall order. So how has he done? Not bad. Well, some would say not great either. For instance he launched diplomatic outreach to the Muslim world, but a lot of this was more rhetoric than action, as in his verbal support for the revolution that overthrew Hosni Mubarak in Egypt. And he did keep some of his campaign promises, for instance he signed into law the Lilly Ledbetter Fair Pay Act, which made it easier for women to sue when they had been systemically underpaid and he also reversed an earlier executive order that limited women’s reproductive rights. And speaking of women he appointed two of them to the Supreme Court, Elena Kagan and Sonia Sotomayor, the court’s first Hispanic member. He also followed through on his promise to end the war in Iraq, although to be fair the Bush administration had really set him up for success there. And he increased the number of U.S. troops in Afghanistan as part of a longer term plan to end the war there, which has sort of worked? He also authorized a successful military operation that killed Osama bin Laden on May 2, 2011. 
I was in Amsterdam at the time, and the Dutch media came to my house to ask me how Americans felt about this and I said, “Good!” On the other hand Obama has been criticized internationally for backing off his promise to close the Guantanamo Bay detention camp, and he has largely followed the Bush administration's policies with the war on terror. But the Obama administration has deployed far more unmanned drones to kill suspected militants around the world. Despite provoking outrage on the left and the right, Americans generally appear to support the use of drones and extra-legal assassination of accused terrorists. Obama also kept in place Bush’s executive power and in fact expanded it in some ways with NSA’s PRISM program. What about that financial mess he inherited? Well, Obama was fortunate to have a Democratic Congress for his first term in office, so he could push through a lot of legislation. This included a sweeping stimulus package with nearly $800 billion in new spending, most of it on infrastructure, that was signed into law on February 17, 2009, just 28 days into Obama’s presidency. In the end, the Recovery Act cost $787 billion - more than the government had spent on a package of programs ever. More than the Great Society. More than the New Deal. Did it work? Well, it depends on who you ask. Among 9 large studies, 6 found that the stimulus did have a positive effect on growth and employment, 3 found that it had little or no effect, and economists are equally divided. The stimulus is estimated to have saved about 3 million jobs, but it also increased the deficit quite a bit. So liberal economists see America’s current 7% unemployment rate as evidence that the stimulus’ Keynesian policies should have gone further, while conservatives say that the stimulus exploded the federal deficit and debt. Regardless, the recovery for the past few years has been steady, but quite slow. Lastly, let’s turn to Obama’s signature policy proposal, the Affordable Care Act, better known as Obamacare. The Affordable Care Act is arguably the most significant piece of social legislation since Medicare. And it seeks to move the United States into the ranks of countries with universal health care. A list that includes every industrialized nation on Earth. We’re number one among countries that don’t have universal health care. So Obamacare aims to reduce the number of Americans without health insurance by making it easier and less expensive for the uninsured to buy it privately. It’s not a government insurance plan, and the government will subsidize those who can’t afford insurance. That’s going to be expensive, but fortunately our health care system is so astonishingly inefficient that there are lots of places to save money and the Congressional Budget Office at least thinks it’s going to be a wash. But controversially, the act’s insurance mandate means that if you don’t have insurance from your employer you MUST buy it or else you have to pay a penalty. In 2012 the core of the law was upheld by the Supreme Court when they ruled that this was a constitutional use of the government’s taxing power. As for Obama’s success at ending partisan politics, not one Congressional Republican voted for Obamacare, and many used it to campaign against Democrats in the 2010 mid-term elections. 
And while the success of the Affordable Care Act won’t be able to be determined for many years, there was a huge backlash against both Obama and his policies, on Facebook, and Twitter, and also the mysterious world of the non-Internet. One of these responses was The Tea Party, a reference to the Boston Tea Party and an acronym for Taxed Enough Already. For the record, I just want to say that the vast majority of Americans’ taxes are lower now than they have been at any point in the last one hundred years. But the Tea Party is also very concerned that deficits are out of control and that rising government spending is going to ruin America. Bolstered by 80 or so new Tea Party congresspeople, the Republicans took control of the House in 2010 and John--it’s pronounced “Boehner”--became the Speaker of the House. All these Tea Party freshmen took their mandate to cut taxes and reduce spending very seriously, and that made it difficult for Boehner to compromise with the Obama administration. Over in the Senate, Democrats held a slim majority, but because of the filibuster they needed 60 votes to do anything, which made them look very dysfunctional. In fact, the 112th Congress was one of the least productive in American history. Obama was re-elected president in 2012, the Republicans continued to control the House, the Democrats continued to have a slim majority in the Senate, and now America is facing something of a political crisis. Unwillingness to compromise precipitated a series of mini-fiscal crises over things like the budget and raising the debt ceiling. Things that Congress used to be able to hash out back when their business was governing, not ideological rigidity. Meanwhile, the economy has slowly added jobs and looks halfway decent at the moment, mostly because Europe looks so bad. Yay? That qualified, questioning yay is about the last word I have to say on American history. The particular brands of ideological certainty that we see today may seem new, but if you look at American history you realize that this has been going on for a long time. The Tea Party is right that the founding fathers would be astonished by the size of the American government and the extent to which it’s involved in the lives of Americans. And progressives are right that people around the world have benefited from government investment in healthcare and infrastructure and transportation. We have to ask ourselves again, “What does freedom really mean?” Can you be free when you live in poverty or when you’re one injury away from bankruptcy? Can you be free when the government can go to a secret court to read your text messages? We know that you can’t be free if you’re dead, so is it the government’s job to protect you not only by having a standing army but also by making you wear your seat belt? Those are ultimately ideological questions, but we have to grapple with them in a real, practical way. And the great story of American governance is compromise. But that has also often been the tragedy of American governance, as when the Constitutional Convention compromised over whether African American people were people. So if you’ve learned anything this year, I hope it’s been that the American story that we find ourselves in now isn’t entirely novel. And I think we have much to learn from those who came before us, both from their successes and their many, many failures. Thank you so much for watching Crash Course US History. Next we will be discussing literature. Your first reading assignment: The Odyssey. 
It’s a great book, I promise, you’re gonna like it. Thanks again for watching. I’ll see ya then. Crash Course US History is made with the help of all of these nice people and it exists because of your continuing support through Subbable.com. There is a link right there that you can click to voluntarily subscribe and keep this show, you know, rolling. Thank you, so much, to everyone that has watched and supported this show over the last two years. I’m wearing the same shirt that I wore on the 1st episode of Crash Course World History to celebrate two successful years of teaching history! This has been one of the great professional joys of my life and I’m so grateful to everyone that has helped make the show and everyone who has watched it. You can find a full list of your reading for Crash Course Literature in the doobly-doo. Thank you again for watching, and as we say in my hometown, “Don’t forget to be awesome
US_History
The_War_of_1812_Crash_Course_US_History_11.txt
Hi, I'm John Green, this is Crash Course US History and today we're going to talk about what America's best at: War. [Patriotic Rock Music] Uh, Mr. Green, the United States has actually only declared war 5 times in the last 230 years. Oh, Me From the Past, you sniveling literalist. Well today we're going to talk about America's first declared war, The War of 1812, so called because historians are terrible at naming things. I mean, they could have called it The Revolutionary War: Part Deux, or The Canadian Cataclysm, or The War to Facilitate Future Wars. But no, they just named it after the year it started. [Theme Music] I know this disappoints the military historians among you, but as usual, we're gonna spend more time talking about the causes and effects of the war than the actual, like, killing parts, because ultimately it's the ambiguity of The War of 1812 that makes it so interesting. The reason most often given for The War of 1812 was the British impressment of American sailors, whereby American sailors would be kidnapped and basically forced into British servitude. This disrupted American shipping and also seems like a reasonably obvious violation of American sovereignty, but it's a little more complicated than that. First of all, there were many thousands of British sailors working aboard American ships so many of the sailors that the British captured were in fact British. Which gets to the larger point that citizenship at the time was a pretty slippery concept especially on the high seas. Like, papers were often forged, and many sailors identified supposed American-ness through tattoos of like eagles and flags. And there were several reasons why a British sailor might like to become, or pretend to be, an American, including that the Brits at the time were fighting Napoleon in what historians, in their infinite creativity, called The Napoleonic Wars. And on that topic, Britain's impressment policy allowed them both to disrupt American shipping to France and get new British sailors to strengthen their war effort, which was annoying to the Americans on a couple levels especially the French-loving Republicans, which is a phrase that you don't hear very often anymore. Another reason often given for the war was America's crazy conspiratorial Anglophobia. There was even a widespread rumor that British agents were buying up Connecticut sheep in order to sabotage the textile industry, lest you worry that America's fascination with conspiracy theories is new. So those pushing for war were known as war hawks, and the most famous among them was Kentucky's Henry Clay. They took the impressment of sailors as an affront to American national honor, but they also complained that Britain's actions were an affront to free trade, by which they meant America's ability to trade with Europeans other than Great Britain. And to be fair, the British were trying to regulate American trade. They even passed the Orders in Council, which required American ships to dock in Britain and pay tax before trading with other European nations. Britain, we were an independent nation! You can't do that kind of stuff! We have a special relationship but it's not that special! But the problem with saying that this caused the war was that the Orders had been in effect for five years before the war started. AND they were rescinded in 1812 before the US declared war. Although, admittedly, we didn't know about it because it didn't reach us until after we declared. There was no Twitter. Another reason for the war was Canada. 
That's right, Canada, Americans wanted you, and who can blame them, with your excellent healthcare and your hockey and your first-rate national anthem. Stan, this is fun but enough with the #1812problems. According to the Virginia Congressman John Randolph, "Agrarian cupidity not maritime rights urges the war. We have heard but one word: Canada, Canada, Canada." I'm not here to criticize you, John Randolph, but that's actually three words. Now some historians disagree with this but the relentless pursuit of new land certainly fits in with the Jeffersonian model of an agrarian republic. And there's another factor that figured into America's decision to go to war: expansion into territory controlled by Native Americans. Oh, it's time for the mystery document? The rules here are simple: I try to guess the author of the mystery document. Usually I'm wrong and I get shocked. All right, let’s see what we got here. "You want, by your distinctions of Indian tribes, in allotting to each a particular tract of land, to make them to war with each other. You never see an Indian come and endeavor to make the white people do so." It's Tecumseh. Drop the mic... [dinging noise] is something that I would do except that the mic is actually attached to my shirt so there's n– there's no drama in this. Clearly a Native American criticism of white people, and I happen to know that that particular one comes from Tecumseh and I don't get shocked today. So it shouldn't come as a surprise that Americans were continuing to push westward into territories where Indians were living. I mean, this was a big reason for the Louisiana Purchase after all. By the beginning of the war, more than 400,000 settlers had moved into territories west of the original 13 colonies and they outnumbered American Indians by a significant margin. Some native groups responded with a measure of assimilation. Cherokees like John Ross wanted to become more "civilized", that is, more white and farmer-y. And some of them did even adopt such civilized practices as written languages and slavery, the most civilized practice of all. [sigh] People are always like, "Why aren't you more celebratory of American History?" Well, why isn't there more to celebrate? But other Indians wanted to resist. The best known of these were the aforementioned Tecumseh and his brother Tensk – Stan, can you just put it on the screen? [Tenskwatawa on screen] Yes. Let's just enjoy looking at that. Right, that's just for all you visual learners. So he was also known as The Prophet, because of his religious teachings, and also because of the pronunciation issues. The Prophet encouraged Indians, especially those living in and around the settlement of Prophetstown, to abandon the ways of the whites, primarily in the form of alcohol and manufactured consumer goods. So stop drinking alcohol and eating refined sugars – this guy sounds like my doctor! Tecumseh was more militant; attempting to revive Neolin's idea of pan-Indianism and actively resisting white settlement. As he put it, "Sell a country, why not sell the air, the great sea, as well as the Earth? Did not the Great Spirit make them all for the use of his children?" The Americans responded to this reasonable criticism in the traditional manner – with guns. William Henry Harrison destroyed the native settlement at Prophetstown in what would become known as the Battle of Tippecanoe. 
He would later ride that fame all the way to the presidency in 1840 and then – SPOILER ALERT: He would give the longest inaugural address ever, catch a cold, and die 40 days later. Let that be a lesson to you, American politicians: Long speeches? Fatal! So I just painted a pretty negative picture of the American treatment of the Indians – because it was awful. But I haven't mentioned how this relates to the War of 1812. The Americans were receiving reports that the British were encouraging Tecumseh, which they probably were. And the important thing to remember here is that the War of 1812, like the Seven Years War and the American Revolution, was also a war against Indians. And as in those other two wars, the Indians were the biggest losers. And not in the cool way of biggest loser where, like, Trainer Bob helps you lose weight, but in the really sad way, where your entire civilization gets John C. Calhouned. So the War of 1812 was the first time that the United States declared war on anybody. It was also the smallest margin of a declaration of war vote: 79 to 49 in the House and 19 to 13 in the Senate. Northern States, which relied on trade a lot, didn't want to go to war, while southern and western states, which were more agrarian and wanted expansion to get land for farming and slavery, did. The closeness of the vote reflects a profound ambivalence about the war. As Henry Adams wrote, "Many Nations have gone to war in pure gaiety of the heart, but perhaps the United States were the first to force themselves into a war they dreaded, in the hope that the war itself might create the spirit they lacked." Don't worry, Henry Adams! In the future we're gonna get pretty gaiety of heart-ish about war. Anyway, as an actual war, the War of 1812 was something of a farce. Let's go to the Thought Bubble. The US army numbered 10-12 thousand and its officers were sunk into either sloth, ignorance, or habits of intemperate drinking. The US navy had 17 ships. Great Britain had 1,000. Also, America had very little money, Britain collected 40 times more tax revenue than the US. But Britain was busy fighting Napoleon, which is why they didn't really start kicking America's butt until 1814, after Napoleon was defeated. Napoleon's defeat also ended the practice of impressment, since Britain didn't need so many sailors anymore. Initially, much of the war consisted of America's attempts to take Canada, which any map will show you went smashingly. Americans were confident the Canadians would rush to join the US. When marching from Detroit, General William Hull informed the Canadians that, "You will be emancipated from tyranny and oppression and restored to the dignified station of free men." And the Canadians were like, "Yeah, we're okay actually." And so the British in Canada, with their Indian allies, went ahead and captured Detroit and then forced Hull's surrender. America's lack of success in Canada was primarily attributable to terrible strategy. They might have succeeded if they'd taken Montreal, but they didn't want to march through northern New York because it was full of Federalists who were opposed to the war. Instead they concentrated on the West, that is, the area around Detroit, where fighting went back and forth. The British found much more success, even seizing Washington DC and burning the White House. 
In the course of the battle, British admiral George Cockburn, overseeing the destruction of a newspaper printing house, told the forces that took the city, "Be sure that all the Cs are destroyed, so that the rascals cannot any longer abuse my name." It's hard out there for a Cockburn. Thanks, Thought Bubble. Given these problems it's amazing there were any American successes, but there were. The battleship USS Constitution broke the myth of British naval invincibility when cannonballs bounced off it and earned it the nickname "Old Ironsides". Oliver Hazard Perry defeated a British fleet, in, of all places, Lake Erie. At the Battle of the Thames, William Henry Harrison defeated Tecumseh. And the Battle of Horseshoe Bend showed one of the reasons why Indians were defeated when Andrew Jackson played one group of Creeks against another group of Creeks and Cherokees. 800 Indians were killed in that battle. And speaking of Jackson, the most notable American victory of the war was the Battle of New Orleans, which catapulted him to prominence. He lost only 71 men while inflicting 2,036 British casualties. Of course, the most memorable thing about the battle is that it took place two weeks after the peace treaty ending the war had been signed. But hey, that's not Jackson's fault. Again – no Twitter, #1815problems. The Treaty of Ghent, which ended the war, proved just how necessary the war had been: not at all. No territory changed hands, although when negotiations started in August 1814, the British asked for Northern Maine, demilitarization of the Great Lakes, and some territory in the Northwest to create an independent nation for the Indians. But none of that happened, not because the U.S. was in a particularly good negotiating position, but it would have been awkward for Great Britain to carve out pieces of the US, and then tell Russia and Prussia that they couldn't take pieces of Europe for themselves to celebrate their victory in the Napoleonic Wars. There were no provisions in the treaty about impressment or free trade and basically the treaty returned everything to the status quo. So, neither the US nor Great Britain actually won. But the Indians suffered significant casualties and gave up even more territory, definitely lost. So with a treaty like that, the war must have had a negligible impact on American history, right? Except, no; the War of 1812 confirmed that the US would exist. Britain would never invade America again (until 1961). I mean the US were good customers, and Great Britain was happy to let them trade, as long as that trade wasn't helping a French dictator. The War launched Andrew Jackson's career and solidified the settlement and conquest of land east of the Mississippi River. And our lack of success in Canada reinforced Canadian nationalism while also ensuring that instead of becoming one great nation, we would forever be Canada's pants. The war also spelled the end of the Federalist Party, which tried in 1815 with the Hartford Convention to change the constitution. In retrospect, the Hartford Convention proposals actually look pretty reasonable. They wanted to eliminate the clause wherein black people were counted as 3/5 of a human, and require a 2/3 congressional majority to declare war. But because they had their convention right before Jackson's victory at New Orleans, they only came off looking unpatriotic and out of touch, as the elite so often do. 
It's hard to argue that the Americans really won The War of 1812, but we felt like we won, and nothing unleashes national pride like war winning. The nationalistic fervor that emerged in the early 19th century, was, like most things, good news for some and bad news for others. But what’s important to remember, regardless of whether you're an American, is that after 1812, the United States saw itself not just as an independent nation but as a big player on the world stage. For better and for worse, that's a gig we've held onto. And no matter how you feel about America's international interventions, you need to remember, it didn't begin in Afghanistan or even Europe; it started with freakin' Canada. Thanks for watching, I'll see you next week. Crash Course is produced and directed by Stan Muller. The script supervisor is Meredith Danko. Our show is written by my high school history teacher Raoul Meyer and myself. Our Associate Producer is Danica Johnson. And our graphics team is Thought Cafe. If you have questions about today's video, you can ask them in comments, where they will be answered by our team of historians. We also accept suggestions for Libertage captions. Thanks for watching Crash Course and as we say in my hometown: Don't Forget To Be Awesome. Goodbye! Don't forget to subscribe!
US_History
The_1960s_in_America_Crash_Course_US_History_40.txt
Hi, I’m John Green, this is Crash Course US History and today we’re gonna talk about the 1960s. Mr. Green, Mr. Green. Great. The decade made famous by the narcissists who lived through it. Hey, Me From the Past, finally you and I agree about something wholeheartedly. But while I don’t wish to indulge the baby-boomers’ fantasies about their centrality to world history, the sixties were an important time. I mean, there was the Cold War, Vietnam, a rising tide of conservatism (despite Woodstock), racism. There were the Kennedy’s and Camelot, John, Paul, George, and to a lesser extent, Ringo. And of course, there was also Martin Luther King Jr. intro So, the 1960s saw people organizing and actively working for change both in the social order and in government. This included the student movement, the women’s movement, movements for gay rights, and a push by the courts to expand rights in general. But, by the end of the 1960s, the anti-war movement seemed to have overshadowed all the rest. So as you’ll no doubt remember from last week, the civil rights movement began in the 1950s if not before, but many of its key moments happened in the sixties. And this really began with sit-ins that took place in Greensboro North Carolina. Black university students walked into Woolworths and waited at the lunch counters to be served, or, more likely, arrested. After 5 months of that, those students eventually got Woolworths to serve black customers. Then, in 1961 leaders from the Congress On Racial Equality launched Freedom Rides to integrate interstate buses. Volunteers rode the buses into the Deep South where they faced violence including beatings and a bombing in Anniston AL. But despite that, those freedom rides also proved successful and eventually the ICC desegregated interstate buses. In fact, by the end of the 60s over 70,000 people had taken part in demonstrations, from sit-ins, to teach-ins, to marches. But they weren’t all successful. Martin Luther King’s year-long protests in Albany, GA didn’t end discrimination in the city. And it took JFK ordering federal troops to escort James Meredith to class for him to attend the University of Mississippi. The University of Mississippi: America’s fallback college. Sorry, I’m from Alabama. So, the Civil Rights movement reached its greatest national prominence in 1963 when Martin Luther King came to my hometown of Birmingham, Alabama, where there had been more than 50 racially-motivated bombings since WWII. Television brought the reality of the Jim Crow South into people’s homes as images of Bull Connor’s police dogs and water cannons being turned on peaceful marchers, many of them children, horrified viewers and eventually led Kennedy to endorse the movement’s goals. Probably should mention that John F. Kennedy was president of the United States at the time, having been elected in 1960. He was assassinated in 1963 leading to Lyndon Johnson. Alright, politics over. Anyway, in response to these peaceful protests, Birmingham jailed Martin Luther King where he wrote one of the great letters in American history (doesn’t have a great name): Letter from Birmingham Jail. 1963 also saw the March on Washington, the largest public demonstration in American history up to that time where King gave his famous speech, “I have a Dream.” King and the other organizers called for a civil rights bill and help for the poor, demanding public works, a higher minimum wage, and an end to discrimination in employment. 
Which eventually, in one of the great bright spots in American history, did sort of happen with the Civil Rights Act. So, one reason American history teachers focus on the Civil Rights Movement so much is that it successfully brought actual legislative change. After being elected president, John F. Kennedy was initially cool to civil rights, but to be fair, the Cold War occupied a lot of his time, what with the Cuban Missile Crisis and the Bay of Pigs and whatnot. But the demonstrations of 1963 pushed John F. Kennedy to support civil rights more actively. According to our dear friend, the historian Eric Foner, “Kennedy realized that the United States simply could not declare itself the champion of freedom throughout the world while maintaining a system of racial inequality at home.”[1] So that June he appeared on TV and called on Congress to pass a law that would ban discrimination in all public accommodations. And then he was assassinated. Thanks, Lee Harvey Oswald. Or possibly someone else. But probably Lee Harvey Oswald. So then, Lyndon Johnson became president and he pushed Congress to pass the Civil Rights Act of 1964. The law prohibited discrimination in employment, schools, hospitals, and privately owned public places like restaurants, and hotels and theaters, and it also banned discrimination on the basis of sex. The Civil Rights Act was a major moment in American legislative history, but it hardly made the United States a haven of equality. So, Civil Rights leaders continued to push for the enfranchisement of African Americans. After Freedom Summer workers registered people in Mississippi to vote, King launched a march for voting rights in Selma, Alabama in January, 1965. And television swayed public opinion in favor of the demonstrators. Thank you, TV, for your one and only gift to humanity. Just kidding. Battlestar Galactica. So, in 1965 Congress passed the Voting Rights Act, which gave the federal government the power to oversee voting in places where discrimination was practiced. In 1965, Congress also passed the Hart-Cellar Act, which got rid of national origin quotas and allowed Asian immigrants to immigrate to the United States. Unfortunately the law also introduced quotas on immigrants from the Western Hemisphere. Lyndon Johnson’s domestic initiatives from 1965 through 1967 are known as the Great Society, and it’s possible that if he hadn’t been responsible for America escalating the war in Vietnam, he might have been remembered, at least by liberals, as one of America’s greatest presidents. Because the Great Society expanded a lot of the promises of the New Deal, especially in the creation of health insurance programs, like Medicare for the elderly and Medicaid for the poor. He also went to War on Poverty. Never go to war with a noun. You will always lose. Johnson treated poverty as a social problem, rather than an economic one. So instead of focusing on jobs or guaranteed income, his initiatives stressed things like training. That unfortunately failed to take into account shifts in the economy away from high wage union manufacturing jobs toward more lower-wage service jobs. [2] Here’s what Eric Foner had to say about Johnson’s domestic accomplishments: “By the 1990s […] the historic gap between whites and blacks in education, income, and access to skilled employment narrowed considerably. 
But with deindustrialization and urban decay affecting numerous families and most suburbs still being off limits to non-white people, the median wealth of white households remained ten times greater than that of African Americans, and nearly a quarter of all black children lived in poverty.” While Congress was busy enacting Johnson’s Great Society programs, the movement for African American freedom was changing. Let’s go to the ThoughtBubble. Persistent poverty and continued discrimination in the workplace, housing, education, and criminal justice system might explain the shift away from integration and toward black power, a celebration of African American culture and criticism of whites’ oppression. 1964 saw the beginnings of riots in city ghettoes, for instance, mostly in Northern cities. The worst riots were in 1965 in Watts, in southern California. These left 35 people dead, 900 injured, and $30 million in damage. Newark and Detroit also saw devastating riots in 1967. In 1968 the Kerner Report blamed the cause of the rioting on segregation, poverty, and white racism. Then there’s Malcolm X, who many white people regarded as an advocate for violence, but who also called for self-reliance. It’s tempting to see leadership shifting from King to X as the civil rights movement became more militant, but Malcolm X was active in the early 1960s and he was killed in 1965, three years before Martin Luther King was assassinated and before all the major shifts in emphasis towards black power. Older Civil Rights groups like CORE abandoned integration as a goal after 1965 and started to call for black power. The rhetoric of Black Power could be strident, but its message of black empowerment was deeply resonant for many. Oakland’s Black Panther Party did carry guns in self-defense but they also offered a lot of neighborhood services. But the Black Power turned many white people away from the struggle for African American freedom, and by the end of the 1960s, many Americans’ attention had shifted to anti-war movement. Thanks, ThoughtBubble. So it was Vietnam that really galvanized students even though many didn’t have to go to Vietnam because they had student deferments. They just really, really didn’t want their friends to go. The anti-war movement and the civil rights movement inspired other groups to seek an end to oppression. Like, Latinos organized to celebrate their heritage and end discrimination. Latino activism was like black power, but much more explicitly linked to labor justice, especially the strike efforts led by Cesar Chavez and the United Farm Workers. The American Indian Movement, founded in 1968, took over Alcatraz to symbolize the land that had been taken from Native Americans. And they won greater tribal control over education, economic development, and they also filed suits for restitution. And in June of 1969, after police raided a gay bar, called the Stonewall Inn, members of the gay community began a series of demonstrations in New York City, which touched off the modern gay liberation movement. Oh, it’s time for the Mystery Document? The rules here are pretty simple. I read the Mystery Document, guess the author, I’m either right or I get shocked. Alright, what have we got here. 
If the Bill of Rights contains no guarantee that a citizen shall be secure against lethal poisons distributed either by private individuals or by public officials [I already know it!], it is surely only because our forefathers, despite their considerable wisdom and foresight, could conceive of no such problem. Rachel Carson! Silent Spring. YES. I am on such a roll. Silent Spring was a massively important book because it was the first time that anyone really described all of the astonishingly poisonous things we were putting into the air and the ground and the water. Fortunately, that’s all been straightened out now and everything that we do and make as human beings is now sustainable. What’s that? Oh god. The environmental movement gained huge bipartisan support and it resulted in important legislation during the Nixon era, including the Clean Air and Water Acts, and the Endangered Species Act. And yes, I said that environmental legislation was passed during the Nixon administration. But perhaps the most significant freedom movement in terms of number of people involved and long-lasting effects was the American Feminist movement. This is usually said to have begun with the publication of Betty Friedan’s book The Feminine Mystique, which set out to describe “the problem that has no name.” Turns out the name is “misogyny.” [3] Friedan described a constricting social and economic system that affected mostly middle class women, but it resonated with the educated classes and led to the foundation of the National Organization of Women in 1966. Participation in student and civil rights movements led many women to identify themselves as members of a group that was systematically discriminated against. And by “systemic,” I mean that in 1963, 5.8% of doctors were women and 3.7% of lawyers were women and fewer than 10% of doctoral degrees went to women. They are more than half of the population. While Congress responded with the Equal Pay Act in 1963, younger women sought greater power and autonomy in addition to legislation. Crucially, 60s-era feminists opened America to the idea that the “personal is political,” especially when it came to equal pay, childcare, and abortion. Weirdly, the branch of government that provided most support to the expansion of personal freedom in the 1960s was the most conservative one, the Supreme Court. The Warren Court handed down so many decisions expanding civil rights that the era has sometimes been called a rights revolution. The Warren court expanded the protections of free speech and assembly under the First Amendment and freedom of the press in the New York Times v. Sullivan decision. It struck down a law banning interracial marriage in the most appropriately named case ever, Loving v. Virginia. And although this would become a lightning rod for many conservatives, Supreme Court decisions greatly expanded the protections of people accused of crimes. Gideon v. Wainwright secured the right to attorney, Mapp v. Ohio established the exclusionary rule under the Fourth Amendment, and Miranda v. Arizona provided fodder for Channing Tatum in his great movie, 21 Jump Street, insuring that he would always have to say to every perp, “You have the right to remain silent.” But you can’t silence my heart, Channing Tatum. It beats only for thee. But, the most innovative and controversial decisions actually established a new right where none had existed in the constitution. Griswold v. Connecticut, dealt with contraception, and Roe v. 
Wade, guaranteed a woman’s right to an abortion (at least in the first trimester). And those two decisions formed the basis of a new right, the right to privacy. Protests, the counter culture, and the liberation movements continued well into the early 1970s, losing steam with the end of the Vietnam war and America’s economy plunging into the toilet. For many, though, the year 1968 sums up the decade. 1968 began with the Tet Offensive in Vietnam, which stirred up the anti-war protests. Then racial violence erupted after the assassination of Martin Luther King Jr. on April 4, 1968. Then, anti-war demonstrators as well as some counter culture types arrived in large numbers at the 1968 Democratic convention in Chicago where they were set upon by police and beaten in what was later described as a “police riot.” 1968 also saw the Prague Spring uprising in Czechoslovakia crushed by the Soviets. And student demonstrators were killed by the police in Mexico City where the Olympics were held and Parisian students took to the streets in widespread protests against, you know, France. All this unrest scared a lot of people who ended up voting for Richard Nixon and his promises to return to law and order. Ultimately, like any decade or arbitrary historical “age,” the 60s defies easy categorization. Yes, there were hippies and liberation movements, but there were also reactions to those movements. On this one, I’m just gonna leave it up to Eric Foner to summarize the decade’s legacy: “[The 1960s] made possible the entrance of numerous members of racial minorities into the mainstream of American life, while leaving unsolved the problem of urban poverty. It set in motion a transformation of the status of women. It changed what Americans expected from government – from clean air and water to medical coverage in old age. And at the same time, it undermined confidence in national leaders. Relations between young and old, men and women, and white and non-white, along with every institution in society, changed as a result.” But there’s one last thing I want to emphasize. All of this wasn’t really the result of, like, a radical revolution. It was the result of a process that had been going on for decades. I mean, arguably a process that had been going on for hundreds of years. Thanks for watching, I’ll see you next week. Crash Course is made with the help of all these nice people and it’s possible because of generous support from the Bluth Family Frozen Banana Stand. Just kidding. We don’t have corporate sponsors. We have you. Subbable.com is a voluntary subscription platform (by the way, you can just click on my face) that allows people who care about stuff, like you hopefully care about Crash Course, to support it directly on a monthly basis. I’m over here now, but you should still click on my face. So Subbable has lots of great Crash Course perks, you can get signed posters and all kinds of things, and most importantly, you can help us keep this show free, for ever, for everyone. Thank you again for watching, and as we say in my hometown, there’s always money in the banana stand.
US_History
Crash_Course_US_History_Preview.txt
Hi, and welcome to Crash Course U.S. History, a course that approximately follows the AP U.S. History curriculum as it appeared in 2013. And even though we call it U.S. history, we do begin before any Europeans show up in North America. From the 15th century, we'll take you through every beat of the country's history until we reach Barack Obama's presidency. This will include all the well-known events: we'll learn about the Revolutionary War, the Civil War, women's suffrage, the world wars, civil rights, and so on. But we'll introduce you to some lesser-known events and people as well. We'll do that via primary sources that do a deep dive into how people were reacting to the time period they were living through. We'll also consider who was given the privilege to have their reactions recorded, because when citizens are marginalized based on class, gender, or race, that history isn't just history; it touches every part of the present. So we'll learn it all together and try to make sense of the United States' complicated history. Oh, and if John gets the mystery document wrong, he'll be shocked. Literally. [Music]
US_History
The_Rise_of_Conservatism_Crash_Course_US_History_41.txt
Episode 41: Rise of Conservatism Hi, I’m John Green, this is CrashCourse U.S. history and today we’re going to--Nixon?--we’re going to talk about the rise of conservatism. So Alabama, where I went to high school, is a pretty conservative state and reliably sends Republicans to Washington. Like, both of its Senators, Jeff Sessions and Richard Shelby, are Republicans. But did you know that Richard Shelby used to be a Democrat, just like basically all of Alabama’s Senators since reconstruction? And this shift from Democrat to Republican throughout the South is the result of the rise in conservative politics in the 1960s and 1970s that we are going to talk about today. And along the way, we get to put Richard Nixon’s head in a jar. Stan just informed me that we don’t actually get to put Richard Nixon’s head in a jar. It’s just a Futurama joke. And now I’m sad. So, you’ll remember from our last episode that we learned that not everyone in the 1960s was a psychedelic rock-listening, war-protesting hippie. In fact, there was a strong undercurrent of conservative thinking that ran throughout the 1960s, even among young people. And one aspect of this was the rise of free market ideology and libertarianism. Like, since the 1950s, a majority of Americans had broadly agreed that “free enterprise” was a good thing and should be encouraged both in the U.S. and abroad. Mr. Green, Mr. Green, and also in deep space where no man has gone before? No, MFTP. You’re thinking of the Starship Enterprise, not free enterprise. And anyway, Me From The Past, have you ever seen a more aggressively communist television program than “The Neutral Zone” from Star Trek: The Next Generation’s first season? I don’t think so. intro Alright so, in the 1950s a growing number of libertarians argued that unregulated capitalism and individual autonomy were the essence of American freedom. And although they were staunchly anti-communist, their real target was the regulatory state that had been created by the New Deal. You know, social security, and not being allowed to, you know, choose how many pigs you kill, etc. Other conservatives weren’t libertarians at all but moral conservatives who were okay with the rules that enforced traditional notions of family and morality. Even if that seemed like, you know, an oppressive government. For them virtue was the essence of America. But both of these strands of conservatism were very hostile toward communism and also to the idea of “big government.” And it’s worth noting that since World War I, the size and scope of the federal government had increased dramatically. And hostility toward the idea of “big government” remains the signal feature of contemporary conservatism. Although very few people actually argue for shrinking the government. Because, you know, that would be very unpopular. People like Medicare. But it was faith in the free market that infused the ideology of the most vocal young conservatives in the 1960s. They didn’t receive nearly as much press as their liberal counterparts but these young conservatives played a pivotal role in reshaping the Republican Party, especially in the election of 1964. The 1964 presidential election was important in American history precisely because it was so incredibly uncompetitive. I mean, Lyndon Johnson was carrying the torch of a wildly popular American president who had been assassinated a few months before. He was never going to lose. And indeed he didn’t. The republican candidate, Arizona senator Barry Goldwater, was demolished by LBJ. 
But the mere fact of Goldwater’s nomination was a huge conservative victory. I mean, he beat out liberal Republican New York Governor Nelson Rockefeller. And yes, there were liberal Republicans. Goldwater demanded a harder line in the Cold War, even suggesting that nuclear war might be an option in the fight against communism. And he lambasted the New Deal liberal welfare state for destroying American initiative and individual liberty. I mean, why bother working when you could just enjoy life on the dole? I mean, unemployment insurance allowed anyone in America to become a hundredaire. But it was his stance on the Cold War that doomed his candidacy. In his acceptance speech, Goldwater famously declared, “Extremism in the defense of liberty is no vice.” Which made it really easy for Johnson to paint Goldwater as an extremist. In the famous “Daisy” advertisement, Johnson’s supporters countered Goldwater’s campaign slogan of “in your heart, you know he’s right” with “but in your guts you know he’s nuts.” So in the end, Goldwater received a paltry 27 million votes to Johnson’s 43 million, and Democrats racked up huge majorities in both houses of Congress. This hides, however, the significance of the election. Five of the six states that Goldwater carried were in the Deep South, which had been reliably democratic, known as the “Solid South,” in fact. Now, it’s too simple to say that race alone led to the shift from Democratic to the Republican party in the South because Goldwater didn’t really talk much about race. But the Democrats, especially under LBJ, became the party associated with defending civil rights and ending segregation, and that definitely played a role in white southerners’ abandoning the Democrats, as was demonstrated even more clearly in the 1968 election. The election of 1968 was a real cluster-Calhoun, I mean, there were riots and there was also the nomination of Hubert Humphrey, who was very unpopular with the anti-war movement, and also was named Hubert Humphrey, and that’s just what happened with the Democrats. But, lost in that picture was the Republican nominee, Richard Milhous Nixon, who was one of the few candidates in American history to come back and win the presidency after losing in a previous election. How’d he do it? Well, it probably wasn’t his charm, but it might have been his patience. Nixon was famous for his ability to sit and wait in poker games. It made him very successful during his tour of duty in the South Pacific. In fact, he earned the nickname “Old Iron Butt.” Plus, he was anti-communist, but didn’t talk a lot about nuking people. And the clincher was probably that he was from California, which by the late 1960s was becoming the most populous state in the nation. Nixon won the election, campaigning as the candidate of the “silent majority” of Americans who weren’t anti-war protesters, and who didn’t admire free love or the communal ideals of hippies. And who were alarmed at the rights that the Supreme Court seemed to be expanding, especially for criminals. This silent majority felt that the rights revolution had gone too far. I mean, they were concerned about the breakdown in traditional values and in law and order. Stop me if any of this sounds familiar. Nixon also promised to be tough on crime, which was coded language to whites in the south that he wouldn’t support civil rights protests. 
The equation of crime with African Americans has a long and sordid history in the United States, and Nixon played it up following a “Southern strategy” to further draw white Democrats who favored segregation into the Republican ranks. Now, Nixon only won 43% of the vote, but if you’ve paid attention to American history, you know that you ain’t gotta win a majority to be the president. He was denied that majority primarily by Alabama Governor George Wallace, who was running on a pro-segregation ticket and won 13% of the vote. So 56% of American voters chose candidates who were either explicitly or quietly against civil rights. Conservatives who voted for Nixon hoping he would roll back the New Deal were disappointed. I mean, in some ways the Nixon domestic agenda was just a continuation of LBJ’s Great Society. This was partly because Congress was still in the hands of Democrats, but also Nixon didn’t push for conservative programs and he didn’t veto new initiatives. Because they were popular. And he liked to be popular. So in fact, a number of big government “liberal” programs began under Nixon. I mean, the environmental movement achieved success with the enactment of the Clean Air Act, and the Clean Water Act, and the Endangered Species Act. The Occupational Safety and Health Administration and the National Transportation Safety Board were created to make new regulations that would protect worker safety and make cars safer. That’s not government getting out of our lives, that’s government getting into our cars. Now, Nixon did abolish the Office of Economic Opportunity, but he also indexed social security benefits to inflation and he proposed the Family Assistance Plan that would guarantee a minimum income for all Americans. And, the Nixon years saw some of the most aggressive affirmative action in American history. LBJ had begun the process by requiring recipients of federal contracts to have specific numbers of minority employees and timetables for increasing those numbers. But Nixon expanded this with the Philadelphia Plan, which required federal construction projects to have minority employees. He ended up attacking this plan after realizing that it was wildly unpopular with trade unions, which had very few black members, but he had proposed it. And when Nixon had the opportunity to nominate a new Chief Justice to the Supreme Court after Earl Warren retired in 1969, his choice, Warren Burger, was supposed to be a supporter of small government and conservative ideals, but, just like Nixon, he proved a disappointment in that regard. Like, in Swann v. Charlotte-Mecklenburg Board of Education, the court upheld a lower court ruling that required busing of students to achieve integration in Charlotte’s schools. And then the Burger court made it easier for minorities to sue for employment discrimination, especially with its ruling in Regents of the University of California v. Bakke. This upheld affirmative action as a valid governmental interest, although it did strike down the use of strict quotas in university admissions. Now, many conservatives didn’t like these affirmative action decisions, but one case above all others had a profound effect on American politics: Roe v. Wade. Roe v. Wade established a woman’s right to have an abortion in the first trimester of a pregnancy as well as a more limited right as the pregnancy progressed. And that decision galvanized first Catholics and then Evangelical Protestants.
And that ties in nicely with another strand in American conservatism that developed in the 1960s and 1970s. Let’s go to the ThoughtBubble. Many Americans felt that traditional family values were deteriorating and looked to conservative republican candidates to stop that slide. They were particularly alarmed by the continuing success of the sexual revolution, as symbolized by Roe v. Wade and the increasing availability of birth control. Statistics tend to back up the claims that traditional family values were in decline in the 1970s. Like, the number of divorces soared to over one million in 1975 exceeding the number of first time marriages. The birthrate declined with women bearing 1.7 children during their lifetimes by 1976, less than half the figure in 1957. Now, of course, many people would argue that the decline of these traditional values allowed more freedom for women and for a lot of terrible marriages to end, but that’s neither here nor there. Some conservatives also complained about the passage in 1972 of Title IX, which banned gender discrimination in higher education, but many more expressed concern about the increasing number of women in the workforce. Like, by 1980 40% of women with young children had been in the workforce, up from 20% in 1960. The backlash against increased opportunity for women is most obviously seen in the defeat of the Equal Rights Amendment in 1974, although it passed Congress easily in 1972. Opponents of the ERA, which rather innocuously declared that equality of rights under the law could not be abridged on account of sex, argued that the ERA would let men off the hook for providing for their wives and children, and that working women would lead to the further breakdown of the family. Again, all the ERA stated was that women and men would have equal rights under the laws of the United States. But, anyway, some anti-ERA supporters, like Phyllis Schlafly claimed that free enterprise was the greatest liberator of women because the purchase of new labor saving devices would offer them genuine freedom in their traditional roles of wife and mother. Essentially, the vacuum cleaner shall make you free. And those arguments were persuasive to enough people that the ERA was not ratified in the required ¾ of the United States. Thanks, ThoughtBubble. Sorry if I let my personal feelings get in the way on that one. Anyway, Nixon didn’t have much to do with the continuing sexual revolution; it would have continued without him because, you know, skoodilypooping is popular. But, he was successfully reelected in 1972, partly because his opponent was the democratic Barry Goldwater, George McGovern. McGovern only carried one state and it wasn’t even his home state. It was Massachusetts. Of course. But even though they couldn’t possibly lose, Nixon’s campaign decided to cheat. In June of 1972, people from Nixon’s campaign broke into McGovern’s campaign office, possibly to plant bugs. No, Stan, not those kinds of bugs. Yes. Those. Now, we don’t know if Nixon actually knew about the activities of the former employees of the amazingly acronym-ed CREEP, that is the Committee for the Reelection of the President. But this break in at the Watergate hotel eventually led to Nixon being the first and so far only American president to resign. What we do know is this: Nixon was really paranoid about his opponents, even the ones who appealed to 12% of American voters, especially after Daniel Ellsberg leaked the Pentagon Papers to the New York Times in 1971. 
So, he drew up an enemies list and created a special investigative unit called the plumbers whose job was to fix toilets. No, it was to stop leaks. That makes more sense. I’m sorry, Stan, it’s just by then the toilets in the White House were over 100 years old, I figured they might need some fixing, but apparently no. Leaking. Nixon also taped all of the conversations in the Oval Office and these tapes caused a minor constitutional crisis. So, during the congressional investigation of Watergate, it became known that these tapes existed, so the special prosecutor demanded copies. Nixon refused, claiming executive privilege, and the case went all the way to the Supreme Court, which ruled in U.S. v. Nixon that he had to turn them over. And this is important because it means that the president is not above the law. So, what ultimately doomed Nixon was not the break in itself, but the revelations that he covered it up by authorizing hush money payments to keep the burglars silent and also instructing the FBI not to investigate the crime. In August of 1974, the House Judiciary Committee recommended that articles of impeachment be drawn up against Nixon for conspiracy and obstruction of justice. But the real crime, ultimately, was abuse of power, and there’s really no question about whether he was guilty of that. So, Nixon resigned. Aw man, I was thinking I was going to get away without a Mystery Document today. The rules here are simple. I guess the author of the Mystery Document, and lately I’m never wrong. Alright. Today I am an inquisitor. I believe hyperbole would not be fictional and would not overstate the solemnness that I feel right now. My faith in the Constitution is whole, it is complete, it is total. I am not going to sit here and be an idle spectator to the diminution, the subversion, the destruction of the Constitution.” Aw. I’m going to get shocked today. Is it Sam Ervin? Aw dang it! Gah! Apparently it was African American congresswoman from Texas, Barbara Jordan. Stan, that is much too hard. I think you were getting tired of me not being shocked, Stan, because it’s pretty strange to end an episode on conservatism with a quote from Barbara Jordan, whose election to Congress has to be seen as a huge victory for liberalism. But I guess it is symbolic of the very things that many conservatives found unsettling in the 1970s, including political and economic success for African Americans and women, and the legislation that helped the marginalized. I know that sounds very judgmental, but on the other hand, the federal government had become a huge part of every American’s life, maybe too huge. And certainly conservatives weren’t wrong when they said that the founding fathers of the U.S. would hardly recognize the nation that we had become by the 1970s. In fact, Watergate was followed by a Senate investigation by the Church Committee, which revealed that Nixon was hardly the first president to abuse his power. The government had spied on Americans throughout the Cold War and tried to disrupt the Civil Rights movement. And the Church Commission, Watergate, the Pentagon Papers, Vietnam all of these things revealed a government that truly was out of control and this undermined a fundamental liberal belief that government is a good institution that is supposed to solve problems and promote freedom. 
And for many Conservatives these scandals sent a clear signal that government couldn’t promote freedom and couldn’t solve problems and that the liberal government of the New Deal and the Great Society had to be stopped. Thanks for watching, I’ll see you next week. Woah! Crash Course is made with the help of all of these nice people and it exists because of...your support on Subbable.com. Subbable is a voluntary subscription service that allows you to support stuff you like monthly for the price of your choosing, so if you value Crash Course U.S. History and you want this kind of stuff to continue to exist so we can make educational content free, forever, for everyone, please check out Subbable. And I am slowly spinning, I’m slowly spinning, I’m slowly spinning. Thank you again for your support. I’m coming back around. I can do this. And as we say in my hometown, don’t forget to be awesome.
US_History
The_Seven_Years_War_and_the_Great_Awakening_Crash_Course_US_History_5.txt
Hi, I'm John Green. This is Crash Course U.S. History. And today we're going to discuss the events that led to the events that led to the American revolution. So, we'll begin with the Seven Years War which, as Crash Course World History fans will remember, Winston Churchill referred to as the "First World War". The Americans called it "The French and Indian War", the Prussians called it "The Third Silesian War", the Swedes called it the Pomeranian War. For today, we're just going to call it the Seven Years War on account of how it lasted for 9 years. [Theme Music] So, here at Crash Course, we take a broad view of history. And rather than talking about the minute details of wars, we try to focus on the important stuff: Causes, effects, any time Vladimir Putin might show up, and teacup kittens. And as far as causes go, the Seven Years War was really, like most wars, about economics. Mr. Green! Mr. Green! Is this economics class? Because I don’t remember signing up for it. Yeah, this is economics class, Me From the Past. It's economics and religion and psychology and anthropology and astronomy and physics and ecology and literature. That’s the great thing about history. You can’t put the past into little boxes that you study for 50 minutes a day until the bell rings. You can’t separate what happened from what people wanted and believed and valued. Right, so, mercantilism was the key economic theory of the British Empire in the 18th century. Because while Adam Smith and David Ricardo were talking up free trade and economic liberalism, by 1750 no one was really listening. Mercantilism was basically the idea that the government should regulate the economy in order to increase national power. This meant encouraging local production through tariffs and monopolies and also trying to ensure a favorable balance of trade. And colonies were an awesome way to create this favorable trade balance because they both produced raw materials and bought back finished goods made from those raw materials. But for it to work, you always need more and more land so you can have more raw materials and more colonists to buy finished goods. By the way, it’s important to understand the centrality of slavery in this colonial economy. I mean, the most important colonial trade goods were tobacco and sugar. And both of those crops relied heavily on slave labor. And slaves themselves were a key trade good in the so-called triangular trade between Europe, Africa, and the colonies. As one historian put it: “The growth and prosperity of the emerging society of free Colonial British America were achieved as a result of slave labor.” So, Britain’s greatest rival in the 18th century was France. Like, on paper, the Spanish had a more significant empire in North America. And they had certainly been there longer. But their empire was really sparsely populated. In fact, by 1800, Los Angeles, the most populous town in Spanish California, had a population of 300 and only 17 freeways. The French colonies were considerably more populous, but even so, by 1750, there were only about 65,000 French colonists, most of them in the St. Lawrence River Valley, thereabouts. I don’t know, maybe it was somewhere over here. This isn’t a terribly detailed map, and also, I’m not looking at it. But the French were moving into the Mississippi and Ohio River Valleys and forming alliances with American Indians there to try to dominate the fur and deer-skin trades. And that proved problematic.
So, wars usually have really complicated causes, and it’s very rare that we can refer to one thing as making them inevitable. Fortunately, the Seven Years War is the exception to that rule. Stan, I think I just used the word "exception", which means it’s time for a "Mongol-tage." I guess they heard there was an exception in town. So, in 1749, the Governor of Virginia awarded a huge land grant to something called the Ohio Company, which was basically a real estate development firm designed to benefit the Governor of Virginia's friends. The Native Americans and their French supporters thought this was bad form, because they thought they had rights to the land. So, the Ohio Company asked the French to recognize their land claims, and the French were, like, “Non.” Let’s go to the Thought Bubble. The actual fighting began when the British, or more precisely British colonists led by a 21-year-old militia colonel named George Washington – Yes, that George Washington – tried to eject the French from the forts they were constructing in Western Pennsylvania. The first attempt in 1754 was a disaster. Washington built and then abandoned the ironically named Fort Necessity with the loss of one-third of his men. It was followed by the equally unsuccessful attack on Fort Duquesne, now located in downtown Pittsburgh, where the French and Indians pounded the British, killing two-thirds of General Braddock’s forces, and also General Braddock. Things didn't go much better for the British for the next two years, although they did take control of part of Nova Scotia and kick out more than 11,000 French Acadians, many of whom died in what is called "The Expulsion." Some of those who didn't ended up in Louisiana, and became Cajuns. But anyway, the tide began to turn for the British in 1759 when they captured French Forts Duquesne – finally – Ticonderoga and Louisbourg. The biggest victory of all came in September, when the British trounced the French at the Plains of Abraham near Quebec. Montreal surrendered the next year. The rest of the battles aren’t that important, unless you were fighting in them. And I’m sure you can count on the French and Indian War aficionados to fill in the gaps in comments. But suffice it to say, the British were victorious in North America, the Caribbean, Europe, and as far away as India. The war continued officially for three more years and ended with the Treaty of Paris in 1763. Thanks, Thought Bubble. So, the most obvious result of the war was territorial changes, particularly in the green areas of not-America. And even though Britain won the war, they arguably got the short end of the territorial stick. Under the terms of the Peace of Paris, Britain got Canada from France and Florida from Spain. In return, France got Guadeloupe and Martinique, Caribbean sugar islands that were much more valuable, at least monetarily, than Canada. Sorry, Canada, but if you want to be valuable, grow some sugar. And not sugar beets, either, Canada. Real sugar. And Spain got Cuba, with its awesome sugar trade, and the Philippines with its proximity to China, which were much more valuable than Florida. I mean, at the time Florida did not even have Disney World; instead, it had yellow fever. But the real losers of the war were not the British or the Spanish or the French, but the Native Americans. The shuffling of territories meant the French were out of the Mississippi and Ohio River Valleys. And the American Indians were stuck with the British who kind of sucked.
And as the British moved west, Native Americans felt compelled to fight back. Oh, it’s time for the Mystery Document? The rules here are simple: if I'm wrong about the author of the Mystery Document, I will be shocked by electricity. If I am right, I will be shocked by my knowledge of 18th century primary sources. OK, here we go: "We humbly conceive that it is contrary to the maxims of good policy and extremely dangerous to our frontiers, to suffer any Indians, of what tribe soever, to live within the inhabited parts of this province while we are engaged in an Indian war, as experience has taught us that they are all perfidious, and their claim to freedom and independence puts it in their power to act as spies, to entertain and give intelligence to our enemies, and to furnish them with provisions and warlike stores. To this fatal intercourse between our pretended friends and open enemies, we must ascribe the greatest of the ravages and murders that have been committed in the course of this and the last Indian war. We, therefore, pray that this grievance be taken under consideration and remedied..." Enough! Usually you either know it or you don’t. And I don’t. The author is clearly not an Indian. The first-person plural makes me think the author is probably not an individual, which makes it harder. Certainly, we’re getting a taste of tension between colonists and Native Americans on the frontier. But who is writing about this tension, I have absolutely no idea. Stan, you get to shock me. Who is it? [Buzzing Noise] Are you serious? I told you, it has to be an individual person! Fine. [Buzzing Noise] Gah! So, after the end of the Seven Years War, American Indians organized an armed revolt. In 1763, Indians, particularly from the Ottawa and the Delaware tribes, launched what has come to be known as Pontiac’s Rebellion. Now, of course, the rebellion ultimately failed to dislodge the British, but the Native Americans did manage to besiege Detroit and kill hundreds of settlers. And that convinced the British that if they wanted to avoid future conflicts, they should slow down the colonists' settlements in the territories. So, the British Parliament issued the Proclamation Line of 1763 which forbade settlement west of the Appalachian Mountains and reserved that territory for Indians. Now, that sounds like a sensible policy until you remember that the British colonists had just finished fighting a war in order to get the right to move into that very territory. So, the settlers duly ignored the Proclamation Line and got down to settling. The other big outcome of the Seven Years War was that it set up the American Revolution. I mean, you've just seen colonists ignoring the British Parliament. We’ll talk more about that next week, but around the end of the Seven Years War, new ideas like republicanism were taking root in the colonies. Republicanism initially meant supporting a government without a king, but in the colonies it came to mean something broader. Now, they didn’t believe that everyone was equal. Republicans believed that only property-owning citizens possessed "virtue" which was defined in the 18th century not as being, like, morally good but as a willingness to subordinate one’s personal interests to the public good. This type of republicanism harkened back to a Roman ideal. Only, you know, without Caesar stabbing and togas. Stan, I wish you wouldn’t. And a second type of political philosophy grew out of ideas that in the 18th Century were called "liberalism."
For classical liberals, the main task of government was to protect citizens’ natural rights, which were defined by John Locke as life, liberty, and property. For liberals like Locke, governments were the result of a social contract, whereby individuals would give up some of their liberty in exchange for a government protecting their natural rights. So, republicanism and liberalism were undermining traditional political authority. And so was the "Great Awakening", in which Americans awakened from being very religious to being super religious. The Great Awakening took place in the early decades of the 18th century, and it was a revitalization of religious feeling, energized by revival meetings and the introduction of new denominations. In the early part of the 17th century, most of the English colonists were Anglicans, unless you count the Catholics running Maryland. But by the time of the Great Awakening, there were also Presbyterians and Baptists and Methodists, oh my! Even the Old Line Congregationalist churches were challenged by so-called New Light ministers who placed less of an emphasis on predestination and more on an individual’s experience of salvation or being born again. So religion became much more emotional in the colonies, especially after the arrival of the Englishman George Whitefield who went on a preaching tour from 1739 to 1741. The main thrust of his sermons was that humans need only repent to avoid the horrors of damnation and be saved. And he believed that salvation was within each individual. It’s worth noting that this rise in religious fervor was not confined to America or even to Christianity. Like, for instance, Wahhabism, the Islamic reform movement that's still closely associated with Saudi Arabia, began in the Middle East around the same time. So one of the keys in the American Revolution was a breakdown in respect for authority. And this was fueled partly by economics, partly by political philosophies that undermined faith in governance from afar, and partly by religious revivals that criticized not only church hierarchies, but also other aspects of colonial society. I mean, if people were going so far as to criticize their religious leaders and established religious norms, is it any wonder that they would criticize the acts of a Parliament working an ocean away? We’ll find out next week. Thanks for watching. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The show is written by my high school history teacher Raoul Meyer and myself. Our associate producer is Danica Johnson. And our graphics team is Thought Bubble. If you have questions about today’s video, you can ask them in comments, where they will be answered by our team of historians. Thanks for watching Crash Course. And as we say in my hometown: Don’t forget to be awesome!
US_History
Gilded_Age_Politics_Crash_Course_US_History_26.txt
Hi, I’m John Green, this is Crash Course: US History, and today we’re going to continue our look at the Gilded Age by focusing on political science. Mr. Green, Mr. Green, so it’s another history class where we don’t actually talk about history? Oh, Me From the Past, your insistence on trying to place academic exploration into little boxes creates a little box that you yourself will live in for the rest of your life if you don’t put your interdisciplinary party hat on. So the Gilded Age takes its name from a book by Mark Twain and Charles Dudley Warner that was called The Gilded Age: A Tale of Today. It was published in 1873 and it was not that successful, but while The Gilded Age conjures up visions of fancy parties and ostentatious displays of wealth, the book itself was about politics, and it gives a very negative appraisal of the state of American democracy at the time. Which shouldn’t come as a huge surprise coming from Twain, whose comments about Congress included, “Suppose you were an idiot. And suppose you were a member of Congress. But I repeat myself.” And also, “It could probably be shown by facts and figures that there is no distinctly Native American criminal class except Congress.” So when faced with the significant changes taking place in the American economy after the Civil War, America’s political system both nationally and locally dealt with these problems in the best way possible: by becoming incredibly corrupt. intro Stan says I have to take off my party hat. Rrrr rrrr rrrrr.... So House Speaker Tip O’Neill once famously said that all politics is local and although that’s not actually true, I am going to start with local politics today, specifically with one of America’s greatest inventions, the urban political machine. So a political machine is basically an organization that works to win elections so that it can exercise power. The most famous political machine was New York City’s Tammany Hall, which dominated Democratic party politics in the late 19th century, survived until the 20th, and is keenly associated with corruption. Oh, it’s already time for the Mystery Document? This is highly unorthodox, Stan. Well, the rules here are simple. I guess the author of the Mystery Document. I’m usually wrong and I get shocked with the shock pen. Alright, let’s see what we’ve got here. “My party’s in power in the city, and it’s going to undertake a lot of public improvements. Well, I’m tipped off, say, that they’re going to lay out a new park at a certain place and I buy up all the land I can in the neighborhood. Then the board of this or that makes its plan public, and there is a rush to get my land, which nobody cared particular for before. Ain’t it perfectly honest to charge a good price and make a profit on my investment and foresight. Of course it is. That’s honest graft.” Stan, I know this one. It’s about machine politics. It’s from New York. It doesn’t say it’s from New York, but it is because it is George Plunkitt. Yes! How do you like them apples? Oh, you wanna know the name of the book? It’s “Plunkitt of Tammany Hall.” Stan, transition me back to the desk with a Libertage, please. Plunkitt became famous for writing a book describing the way that New York City’s government actually worked, but he was a small fish compared with the most famous shark-like machine politician of the day, William “Boss” Tweed, seen here with a head made of money. 
“Boss” Tweed basically ran New York in the late 1860s and early 1870s, and his greatest feat of swindling helps explain how the machine system worked. It revolved around the then-new County Courthouse that now houses the New York City Department of Education. Building the courthouse was initially estimated to cost around $250,000, but ended up costing $13 million by the time it was finished in 1871. Included in that cost was a bill of $180,000 for three tables and forty chairs, $1.5 million for lighting fixtures, and $41,000 for brooms and cleaning supplies. A plasterer received $500,000 for his initial job and then $1 million to repair his shoddy work. The standard kickback in these situations was that Tammany Hall received two dollars for every one dollar received by the contractor. That may seem like a bad deal for contractors, but remember: That plasterer still got to keep half a million dollars, which is worth about $9 million in today’s money. Now of course that makes it sound like political machines were pure evil, especially if you were a taxpayer footing the bill for that courthouse. But machines also provided valuable services to immigrants and other poor people in cities. As Plunkitt explained, Tammany could help families in need: “I don’t ask whether they are Republicans or Democrats, and I don’t refer them to the Charity Organization Society, which would investigate their case in a month or two and decide they were worthy of help about the time they are dead from starvation. I just get quarters for them, buy clothes for them if their clothes were burned up, and fix them up until they get things running again.” In return for this help, Tammany expected votes so that they could stay in power. Staying in power meant control of city jobs as well as city contracts. Plunkitt claimed to know “every big employer in the district – and in the whole city, for that matter --- and they ain’t in the habit of saying no to me when I ask them for a job.” But with all the corruption, sometimes even that wasn’t enough. Fortunately Tammany politicians could always fall back on fraud. Tammany found bearded men to vote, then took them to the barber to shave off the beard, but left the moustache, so that they could vote a second time. And then, they would shave off the ‘stache so they could vote for a third. And then of course, there was always violence and intimidation. By the end of the century a Tammany regular lamented the good old days when, “It was wonderful to see my men slug the opposition to preserve the sanctity of the ballot.” But, corruption wasn’t limited to big cities like New York and Chicago. Some of the biggest boondoggles involved the United States Congress and the executive branch under president Ulysses Grant. The first big scandal, dubbed the “King of Frauds” by the New York Sun, involved Credit Mobilier, the construction company that did most of the road building for the Union Pacific Railroad. This two pronged accusation involved, first: overcharging the public for construction costs and siphoning off profits to Credit Mobilier, and second: bribery of Congressmen. Now, this second charge was, of course, much juicier and also more partisan because only Republican congressmen, including the Speaker of the House, were implicated in it. Eventually Massachusetts Congressman Oakes Ames was found guilty of giving bribes, but no one was ever found guilty of receiving those bribes. As you can imagine, that did wonders for the reputation of Congress. 
The second major scandal involved the so-called Whiskey Ring, which was a group of distillers in St. Louis who decided that they didn’t like paying excise taxes on their product, perhaps a slightly more noble cause than that of the 2009 Bling Ring, who just wanted to dress like Paris Hilton. John McDonald, a Grant administration official, helped distillers reduce their taxes by intentionally undercounting the number of kegs of booze. But then in 1875, the tax evasion grew out of control. And McDonald eventually confessed and was convicted, thereby tainting the presidency with corruption just as Credit Mobilier had tainted Congress. That leaves the Supreme Court untainted, but don’t worry, the Dred Scott decision is worth at least, like, eighty years of tainting. So with all this distrust in government, after Grant served two terms, presidential elections featured a series of one-termers: Hayes, Garfield (whose term was filled out by Chester Arthur after Garfield was assassinated), Cleveland, Benjamin Harrison, and then Cleveland again. McKinley, who was elected twice, but then he was assassinated. As for their parties, Gilded Age Republicans favored high tariffs, low government spending, paying off national debt and reducing the amount of paper money – or greenbacks – in circulation. Democrats opposed the tariffs and were often linked to New York bankers and financiers. In short, both parties were pro-business, but they were pro-different-businesses. Despite that and the widespread corruption, some national reform legislation actually did get passed in the Gilded Age. The Civil Service Act of 1883 – prompted by Garfield’s assassination by a disgruntled office seeker – created a merit system for 10% of federal employees, who were chosen by competitive examination rather than political favoritism. But, this had an unintended effect. It made American politicians much more dependent on donations from big business rather than small donations from grateful political appointees, but, you know, nice idea. And then in 1890 the Sherman Anti-Trust act forbade combinations and practices that restrained trade, but again it was almost impossible to enforce this against the monopolies like U.S. Steel. More often it was used against labor unions, which were seen to restrain trade in their radical lobbying for, like, health insurance and hard hats. But all in all the national Congress was pretty dysfunctional at the end of the 19th century, stop me if that sounds familiar. So state governments expanded their responsibility for public health and welfare. Cities invested in public works, like transportation, and gas, and later, electricity, and the movement to provide public education continued. Some northern states even passed laws limiting the workday to 8 hours. “What is this, France?” is what courts would often say when striking those laws down. Reform legislation was less developed in the South, but they were busy rolling back reconstruction and creating laws that limited the civil rights of African Americans, known as Jim Crow Laws. In the west, farmers became politically motivated over the issue of freight rates. Wait, are we talking about railroads? Let’s go to the ThoughtBubble. In the 1870s, farmers formed the Grange movement to put pressure on state governments to establish fair railroad rates and warehouse charges. Railroads in particular tended to be pretty monopolistic: They owned the track going through town, after all, so it was hard for farmers to negotiate fair shipping prices. 
The Grange Movement eventually became the Farmer’s Alliance movement, which also pushed for economic cooperation to raise prices, but was split into Northern and Southern wings that could never really get it together. The biggest idea to come out of the Farmers Alliance was the subtreasury plan. Under this plan, farmers would store grain in government warehouses and get low-rate government loans to buy seed and equipment, using the stored grain as collateral. This would allow farmers to bypass the banks who increasingly came to be seen, along with the railroads, as the source of all the farmers’ troubles. Eventually these politically motivated farmers and their supporters grew into a political party, the People’s Party or Populists. In 1892 they held a convention in Omaha and put forth a remarkably reform minded plan, particularly given that this was put forth in Omaha, which included: The Sub-Treasury Plan, (which didn’t exactly happen, although the deal farmers ended up with was probably better for them) Government Ownership of Railroads (which sort of happened, if you count Amtrak) Graduated Income Tax (which did happen, after the passage of the 16th amendment) Government Control of the Currency (which happened with the creation of the Federal Reserve System) Recognition of the Rights of Laborers to Form Unions (which happened both at the state and federal level) and Free Coinage of Silver to produce more money, which we’ll get to in a second The People’s Party attempted to appeal to a broad coalition of “producing classes” especially miners and industrial workers, and it was particularly successful with those groups in Colorado and Idaho. As the preamble to the party platform put it: “Corruption dominates the ballot box, the Legislatures, the congress and touches even the ermine of the bench … From the same prolific womb of governmental injustice we breed the two great classes – tramps and millionaires.” Thanks, Thought Bubble. So, some western states were so Populist, they even granted women the right to vote in the 1890s, which added tremendously to the Populist’s electoral power. But most American voters stuck with the two main parties. Industrial workers never really joined in large numbers because the Populist calls for free coinage of silver would lead to inflation, especially in food prices, and that would hurt urban laborers. But if it hadn’t been for that threat of silver inflation, we might have three major political parties in the U.S. today. Or at least two different ones. Stupid inflation, always ruining everything. Populist leaders also struggled to unify because racism. Some Populist leaders, like Tom Watson, argued that black and white poor farmers were in the same boat, but Southern populists were not inclined to take up the fight against segregation, and even Watson himself later began spouting anti-Semitic rhetoric. But, in the halcyon Populist days of 1892, their presidential candidate, James Weaver, gained 1 million votes as a third party candidate. He carried 5 western states and got 22 electoral votes, which is better than Mondale did. But the best known Populist candidate was actually the Democratic nominee for president in 1896, William Jennings Bryan. Bryan, who once spoke of America as being crucified on a cross of gold, firmly supported free coinage of silver in the hopes that increasing the amount of money in circulation would raise prices for farmers and make it easier for people to pay off their debts. 
William Jennings Bryan is probably better known for the anti-evolution stance he took in the famous Scopes “Monkey Trial,” where he was up against none other than Clarence Darrow. But he did almost become president. So, the Populists were really wary of Bryan as a Democrat, because they feared that their ideas would be reduced to simply “free silver,” but they voted for him anyway. But Bryan still lost the 1896 election to William McKinley in what has become known as the first modern political campaign, because the business classes gave McKinley’s campaign an unprecedented $10 million. Which these days will buy you nine ads in Iowa. But back then, it won you an entire presidential election. He won the electoral college in a landslide 271-176. Bryan’s defeat in 1896 effectively put an end to the Populist Party. The corruption in government, both federal and local, continued, and new journalists called Muckrakers began exposing it in the press. Even though they were defeated at the polls, Populist ideas, especially direct election of senators and a progressive income tax, quickly became mainstream. Now, these days we don’t necessarily associate those ideas with Populists, which suggests that maybe they were right to worry about hitching their wagon to Bryan’s star. But in the end, would you rather have your name survive or see your ideas enacted? But of course many of the problems that the Populists were concerned with persisted, as did the scourge of Jim Crow. We’ll discuss those next week when we look at the Progressive Era. Thanks for watching. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The associate producer is Danica Johnson. The show is written by my high school history teacher, Raoul Meyer, Rosianna Rojas, and myself. And our graphics team is Thought Café. Okay, I’ll make the transition, but I think you’ll want to keep filming this. Every week there’s a new caption for the Libertage. If you’d like to suggest one in comments, you can do so where you can also ask questions about today’s video that will be answered by our team of historians. Thank you for watching Crash Course and as we say in my hometown, don’t forget to be awesome.
US_History
The_Reagan_Revolution_Crash_Course_US_History_43.txt
Hi, I'm John Green, this is Crash Course U.S. history, and today we're going to talk about the guy who arguably did the most to shape the world that I live in. No, Stan, not Carrot Top. No, not Cumberbatch, although he did do the most to shape the Tumblr that I live in. I'm talking about The Great Communicator: Ronald Reagan. Reagan is a fascinating president because he was, in lots of ways, straightforward. His presidency was called the Reagan Revolution but it's a bit odd that he gets so much credit for changing America because he was one of the least hands-on of all presidents and as you know here at Crash Course we don't really indulge in great man history. So we're going to talk about Reagan but we're also going to talk about the forces that predated his presidency that led to the so-called Reagan Revolution. Mr. Green? Mr. Green? I remember some of this stuff. It's like almost interesting. I'm glad to be almost interesting, Me From the Past. Someday maybe you'll be almost interesting. Intro The Reagan era began, unsurprisingly, with his election to the Presidency in 1980. Now, anyone could have beaten Jimmy Carter, but Reagan succeeded largely by pulling together many strands of conservatism. Reagan emphasized his belief in "states' rights" and he condemned "welfare cheats." He also condemned busing and affirmative action. And he won the support of religious conservatives, including the newly formed Moral Majority, by standing for family values, even though in fact he was the first U.S. president to have been divorced. Also, he once acted with a monkey. And there's nothing "family values" about that. Stan just informed me that Ronald Reagan did not in fact act with a monkey. He acted with a chimp. I apologize to all the primate rights people out there. Good lord! Now Reagan also appealed to the so-called white backlash, working class white people who resented the advances that African Americans had made during the 1960s and the 1970s. And economic conservatives liked his anti-union, low taxes, free market positions, and anti-government crusaders and libertarians liked his assertion that government was not the solution to problems, but was itself the problem. Then there were the Cold War hawks who liked his militant anti-Soviet rhetoric and his desire to spend more on the military. Now that's a big coalition but it turned out to be just barely a majority coalition. Still Reagan won in 1980. He even carried the traditionally Democratic states of Illinois and New York proving that Jimmy Carter truly was profoundly unelectable. A lot of Reagan's policy ideas weren't all that popular at the time, but he truly was a great communicator. I mean Reagan was a former actor and he knew how to talk to people without them feeling condescended to. Reagan's most famous campaign advertisement proclaimed that it was "morning in America" again, and that relentless optimism (I mean at least if you're a morning person) was a welcome contrast to Jimmy Carter being like "you should wear sweaters inside to save fuel." Sorry, Jimmy, this is America! Ronald Reagan used the word "freedom" more than any other president in American history, but it's interesting to think about what he meant by the word "freedom." Because as we've seen in American history, freedom has meant lots of things to lots of people. Is freedom, freedom from government tyranny? Or is freedom government protection from hunger and homelessness and military attacks? Do governments ultimately restrict freedom or provide it?
Now there's no question that the federal government that Ronald Reagan inherited would have been absolutely foreign to the people who founded this country. I mean Social Security, Federal Income Taxes, the National Endowment for the Arts. But some people would argue that the America of 1980 was much more free for more Americans than, say, the America of 1790 when, after all, slavery was legal. And in fact in the early 19th century many slave owners said that the government was taking away their freedom to own slaves. Ultimately, the question of how we should imagine freedom and how we should allow for it is at the center of American history. And a big part of Ronald Reagan's vision of freedom was economic freedom, which he laid out in his Economic Bill of Rights. It would curtail union power, reduce federal regulation of industry and the environment, and most of all lower taxes. All these ideas were a big part of the Reagan Revolution. But as we know much of what he proposed had been brewing for years during the rise of conservatism. So what aspects of this Economic Bill of Rights actually ended up happening? Well, his main accomplishment was lowering taxes: in 1981 Reagan persuaded Congress to lower the top tax rate from 70% to 50%. In 1986, Congress went even further with the Tax Reform Act that lowered the top income tax rate to 28%. Oh, it's time for the mystery document! The rules here are simple... I read the mystery document, I either get the author of it correct or I get shocked. Alright here we go. Can I just take a preliminary guess and say that it's going to be Reagan? "I will not accept the excuse that the Federal Government has grown so big and powerful that it is beyond the control of any President, any administration or Congress. We are going to put an end to the notion that the American taxpayer exists to fund the Federal Government. The Federal Government exists to serve the American people and to be accountable to the American people. On January 20, we are going to re-establish that truth. Also on that date we are going to initiate action to get substantial relief for our taxpaying citizens and to put people back to work. [...] We will simply apply to our government the common sense that we use in our daily lives." It is Reagan! Stan is telling me that I'm not going to get the check mark unless I guess the correct speech? Well he talked about January 20th, so obviously it's not his inaugural address. It's either the acceptance speech he gave at the convention or like the speech that he gave after he was elected. But I don't think.... convention? Yes! So the idea that lowering taxes is the best way to spur economic growth is called supply-side economics, trickle-down economics or, if you're George HW Bush running against Reagan in the 1980 primaries, voodoo economics. Sadly, this does not involve zombies or putting pins in dolls. Instead, it's about high interest rates to combat inflation coupled with cutting taxes, especially for the wealthy. Those rich people then spend more and invest more in private enterprise which creates new jobs. Also, the thinking goes that lower taxes will encourage people to work harder since they will be able to keep more of their money. Did this work? Eh. Now we're getting into the part of history where it depends on your political perspective. Initially, the high interest rates definitely provoked a recession in 1981 and 1982. Which was not ideal. But, inflation did drop from 13.5% in 1981 to 3.5% in 1988 and after 1982 the economy began expanding.
And the rest of the Reagan era saw consistent increases in gross domestic product; however, not everyone benefited from that expansion. While the stock market boomed, wages didn't rise very much. And in fact, they haven't risen since. Now one of the central ideas of supply-side economics is that you have lower tax rates and you also cut government spending. Because, you know, the government has less money. Which, yeah, it did not happen. The government is always good at cutting taxes but never good at cutting spending. The Reagan era did see cuts to some programs, but the really expensive items: Social Security, Medicare, Medicaid, they remained largely intact. And instead of cutting the overall amount of spending, it actually went up considerably because of the defense spending binge that saw the national debt balloon to 2.7 trillion dollars. But Reagan totally did deliver on his anti-union rhetoric. In August 1981, when the unionized air traffic controllers went out on strike, violating federal law in the process, Reagan fired more than 11 thousand controllers who refused to return to work. So as I mentioned before, the 80's were a pretty great decade for Wall Street generally, which is why Oliver Stone made a movie about it that immortalized the line "Greed is Good." In the 1980s it became easier to make money buying and merging companies than actually, like, running them profitably. But fortunately we later dealt with that problem... ugh. We never fix the problems, we only fix the things that are fine. One of the reasons that American history is so important to me is that I want us as a country to, like, summon the courage to deal seriously with our problems. Sometimes I think that we're just so cowardly, like we're the cowardliest country on Earth... alright, the French. Right, but like the merger of RJ Reynolds Tobacco, maker of Winston cigarettes, and Nabisco, which gave us Oreos, not only created a cancer and heart disease dream team, it also generated nearly $1 billion for the lawyers and bankers who put the deal together. But if you were like most of us in the 80's watching Dallas and Dynasty, working at your regular job, inexplicably having a carpeted bathroom, then you probably didn't share in that abundance. The 80's saw rising economic inequality, although not nearly as dramatic as we see today. By the mid 1990s the richest 1% controlled 40% of the nation's wealth, double the share from 20 years before. Meanwhile the income of middle class families stagnated and that of the poorest 20% began to decline. And one often overlooked aspect of deregulation was the closing of hospitals for the mentally ill. Now, some of these institutions were hellish, but rapid closure of all of these facilities without replacement services meant that many patients were left to live on the street. Homelessness increased dramatically. Now of course Reagan is considered the darling of conservatives today, but by current standards he was something of a moderate. I mean yes, he cut taxes, and he cut funding for programs that helped the poor like food stamps and school lunches. But during his second term he worked effectively with the Democratic Congress. There's no bipartisanship today. Also, he left the big New Deal and Great Society programs largely intact. I mean he was too old to believe in cutting Medicare. He was like "all of my friends are on this." And the 80s also didn't see the fulfillment of the desires of the Christian Right.
I mean divorce rates went up, abortion continued to be legal, women didn't leave the workforce. In fact, Reagan appointed the first woman to the Supreme Court. Are you kidding? We didn't have a woman on the Supreme Court until the 1980s? This is the craziest country ever. Even affirmative action persisted, and Nancy Reagan's urging of Americans to "Just Say No" to drugs didn't convince anybody. And then we have Ronald Reagan's reputation as the man who ended the Cold War. The thinking here goes that Reagan spent so much money on defense that the Soviet Union bankrupted itself trying to compete. And there may be a case to be made there, but we don't want to remove agency from the people who protested the oppression of life behind the Iron Curtain. So while you can argue that the Reagan administration helped create good conditions for the change that happened, the people who made the change made it. Alright. Let's go to the ThoughtBubble. In his first term Ronald Reagan took a really hard line against the Soviet Union. He called it an Evil Empire and even once joked that the U.S. would "begin bombing in 5 minutes." That was ill-advised. Reagan also sponsored the largest military buildup in U.S. history, including the MX missile. The highlight was his proposed Strategic Defense Initiative, aka Star Wars: space-based missiles and lasers for shooting Soviet missiles out of the sky. This was a fantastic idea, except that it would have violated the 1972 Anti-Ballistic Missile Treaty, and anyway it was technologically impossible to build. The force was not strong with this idea. Reagan also pressured NATO to put missiles in Western Europe, and the war games that NATO staged in 1983 were so realistic that the Soviets almost scrambled their planes and launched ICBMs. Now if that had resulted in nuclear war, we would have a very different story on our hands, but it didn't. And Reagan's aggressive nuclear posturing had a couple of positive results. First, it boosted the worldwide anti-nuclear weapons movement, called the FREEZE movement. Second, it turned Reagan into the most successful nuclear abolitionist in the atomic age. There's nothing like a reasonably close brush with nuclear apocalypse to tone down your rhetoric a little. In his second term Reagan was much more conciliatory towards the Soviets and worked to reduce the number of warheads. In his first term, according to the historian Victor Sebestyen, "[Reagan] spent nearly as much on defense as Presidents Nixon, Ford, and Carter combined and much more than both the cost of the Korean and Vietnam wars,"[1] but in his second, Reagan toned down both the spending and his rhetoric, declaring, "Our constant and urgent purpose must be a lasting reduction of tensions between us."[2] Thanks, Thought Bubble. So, Reagan was able to negotiate the first reduction in nuclear weapons with the new Soviet Premier Mikhail Gorbachev in 1986. In fact, the two leaders might have tried to get rid of nuclear weapons altogether, but Reagan's unwillingness to give up his Star Wars initiative made that impossible. That was a big deal, but the rest of Reagan's foreign policy was somewhat less triumphant. For instance, he sent Marines to Lebanon as part of a peacekeeping mission, but then withdrew forces after 241 of them were killed by a car bomb. And Middle Eastern policy played a key role in the biggest controversy of Reagan's presidency: the Iran-Contra Scandal. This was truly one of the craziest schemes ever hatched by an American presidential administration.
Which is really saying something. The Contras were rebels seeking to overthrow the socialist Sandinista government of Nicaragua. Because they were anti-communists and the Cold War was in full swing, the Reagan administration wanted to support them. But Congress passed a law saying that they couldn't. So two administration officials, John Poindexter and Oliver North, got creative. They hatched a plan to sell arms to the Iranian government, still technically our enemies, and then funneled some of the profits from these illegal arms sales to the Contras. And Congress would never have to know about it. Except that they found out. Congressional hearings followed, and we learned a lot about Ronald Reagan's penchant for delegating the details of his policy to underlings. In this case, that served him well, as he could plausibly claim that he knew nothing about the clandestine activities of these two rogue employees. And this gets to the big point of the Reagan era. I'm not sure that it was really about Reagan. In fact, I'm not sure that any great-man history is really about the great men who supposedly spearheaded it. Whether or not you think America is better off for the rise of conservatism we've seen since LBJ's Great Society, it wasn't really, and it still really isn't, about individuals. It's about us collectively deciding what we mean when we talk about freedom and equality. Thanks for watching. I'll see you next week. Crash Course is made with the help of all these nice people, who work on this show partly because they care about it and partly because, you know, money. If you want to help us in our mission to keep Crash Course free for everyone forever, please consider subscribing over at Subbable, a voluntary subscription platform that allows you to pay whatever you want monthly to make Crash Course exist. Thanks for watching Crash Course, and as they say in my hometown, "It's morning in America." What should I say - "Don't Forget To Be Awesome"?
US_History
World_War_II_Part_2_The_Homefront_Crash_Course_US_History_36.txt
Episode 36: World War II (2) – the war at home Hi, I’m John Green, this is Crash Course U.S. History and today we’re going to discuss how World War II played out at home and also the meaning of the war. Mr. Green, Mr. Green, so is this going to be, like, one of the boring philosophical ones, then? Oh, Me From the Past, I remember when you were idealistic. I remember a time when all you cared about was the deep inner meaning of … mostly girls. But, you’ve changed, Me from the Past, and not in a good way. intro So anyway World War II brought about tremendous changes in the United States, in many ways shaping how Americans would come to see themselves and how they would want to be seen by the rest of the world. Some of these ideological changes were a continuation of the New Deal, others were direct results of the war, but one thing we can say is that by the end of the war, the country was very different. For starters, World War II strengthened the federal government of the United States. This always happens when a country goes to war, but World War II brought about even more governmental intervention and control than we had seen in World War I. It was like the New Deal on steroids. Like federal agencies, like the War Production Board, War Manpower Commission and Office of Price Administration took unprecedented control of the economy. There was massive rationing of food and supplies, entire industries were completely taken over by the government. The federal government fixed wages, rents, prices, and especially production quotas. Like, if you’re looking to buy a 1942 model Ford, or Chrysler, good luck because there weren’t any. The government told those car makers not to create new models that year. So basically FDR was president for life and controlled all the industries. I mean, how did this Communist end up on the dime? Well the answer is that while it might have sucked not to have a 1942 Ford, most people were just happy to be working after the Great Depression. Unemployment dropped from 14% in 1940 to 2% in 1943. Of course 13 million Americans were serving in the military in some capacity, so that helped employment. But in general the war kicked the American economy into overdrive. Like, by 1944 American factories were producing an airplane every five minutes and a ship every day. U.S. Gross National Product went from $91 billion to $214 billion during the war. Why did this happen? Well that’s controversial, but primarily because of federal spending. Government expenditures during the war were twice the amount they had been in the previous 150 years. Combined. Although a lot of this was financed with debt, much of the war was paid for with taxes. Like, the federal government began the practice of withholding taxes from paychecks, for instance, a practice I first became familiar with when working at Steak N Shake discovering that instead of being paid I don’t know, like, $100 a week, I was being paid -$30 a week because I had to declare my tips. Because my dad made me. Before World War II only 4 million Americans even paid federal income taxes; but after the war 40 million did. Also big business got even bigger during the war because of government contracts. Cost-plus contracts guaranteed that companies would make a profit, and the lion’s share of contracts went to the biggest businesses. So, by the war’s end the 200 biggest American corporations controlled half of all of America’s corporate assets. 
And all this government spending also spurred development, like defense spending basically created the West Coast as an industrial center. Seattle became a shipping and aircraft-manufacturing hub. And California got 10% of all federal spending. And Los Angeles became the second largest manufacturing center in the country, meaning that it was not in fact built by Hollywood, it was built by World War II. All of this was pretty bad for the South, by the way, because most of this industrialization happened in cities and the South only had two cities with more than a half a million people. And organized labor continued to grow as well, with union membership soaring from around 9 million in 1940 to almost 15 million in 1945. Besides union-friendly New Deal policies, the government forced employers to recognize unions in order to prevent labor strife and keep the factories humming so that war production would not decrease. And, from a human history standpoint, one of the biggest changes is that many of the workers in those factories were women. You’ve probably seen this picture of Rosie the Riveter and while there wasn’t actually a riveter named Rosie, or maybe there was but, she’s an amalgam. But by 1944 women made up 1/3 of the civilian labor force in addition to the 350,000 who were serving in the military. And the type of women who were working changed as well. Married women in their 30s outnumbered single women in the workforce. But the government and employers both saw this phenomenon as temporary, so when the war was over most women workers, especially those in high paying industrial jobs, were let go. This was especially hard on working class women who needed to work to survive and had to return to lower paid work as domestics or in food services, or, god forbid, as teachers. Oh, it’s time for the Mystery Document? The rules here are simple. We use primary sources for learning as this is a serious show about history and then if I guess the author wrong, I get shocked. Okay, what do we got today? Let’s take a look. Certainly this is no time for any of us to stop thinking about the social and economic problems which are the root cause of the social revolution which is today a supreme factor in the world. For there is nothing mysterious about the foundations of a healthy and strong democracy. The basic things expected by our people of their political and economic systems are simple. They are: Equality of opportunity for youth and for others. Jobs for those who can work. Security for those who need it. The ending of special privilege for the few. The preservation of civil liberties for all. I mean, that’s some pretty hardcore New Deal stuff right there. And, uh, the biggest New Deal-er of all was FDR, BUT I remember last time when I guessed FDR and it was actually Eleanor Roosevelt. So. You wouldn’t do Eleanor Roosevelt twice. Or would you? Hm. No it sounds more like a speech. FDR. YES! So, I mentioned at the beginning of this video that World War II was an ideological war, and nothing better encapsulates that idea than FDR’s “Four Freedoms,” which were: freedom of speech, freedom of worship, freedom from want, and freedom from fear. During the war the National Resources Planning Board offered a plan for a peacetime economy based on full employment, an expanded welfare state and a higher standard of living for all. 
In 1944 FDR even called for a new Economic Bill of Rights that would expand governmental power in order to create full employment, and guarantee an adequate income, medical care, education, and housing to all Americans. As FDR put it: “True individual freedom cannot exist without economic security and independence.” But that didn’t happen, largely because Southern Democrats in the House and Senate didn’t want it to, because it would have meant a larger role for unions and also extending greater equality to African Americans, and they weren’t about to let that happen. I mean, their jobs were literally dependent upon African Americans not being able to vote. But, Congress did pass the GI Bill of Rights – officially the Servicemen’s Readjustment Act – to attempt to prevent widespread unemployment for returning soldiers. It worked amazingly well, and by 1946 more than one million former soldiers were enrolled in college and almost 4 million got assistance with mortgages, spurring a post-war housing boom. Levittown, and all the towns since that look like it, came after the war. So, we talked about FDR’s Four Freedoms, but big business added a fifth freedom – free enterprise. Advertisers helped on this front, trying to make the war about consumption, telling Americans that they were fighting to “hasten the day when you … can once more walk into any store in the land and buy anything you want,” according to an ad for Royal Typewriters. And FDR’s vision of extending freedom wasn’t limited to the United States. Like, Henry Luce, the publisher of Time Magazine, published a book called The American Century claiming that the war had thrust upon the U.S. the opportunity to share with all people its “magnificent industrial products” (that’s a quote) and American ideas like “love of freedom” and “free economic enterprise.” Now, of course, there wasn’t complete agreement on this liberal, government-led vision of freedom. Like, Friedrich Hayek in 1944 published The Road to Serfdom, claiming that government planning posed a threat to individual liberty. And even though he claimed not to be a conservative because conservatives liked social hierarchy, Hayek’s equating New Deal planning with Fascism and socialism became a foundation for later American conservatives. The struggle against Nazism also helped reshape the way that Americans thought of themselves. Like, because the Nazis were racists, Americanism would mean diversity, and tolerance, and equality for all people. The federal government supported this version of America. FDR claimed that to be an American was “a matter of mind and heart,” not “a matter of race or ancestry.”[1] Of course, it wasn’t a matter of race and ancestry; we’d already killed 95% of the indigenous population. This was also, not coincidentally, the period when American intellectuals began publishing books debunking the supposed “scientific” basis of racism. Now this didn’t mean that Americans suddenly embraced equality for all people. Anti-Semitism still existed and contributed to the government’s not doing more to help the Jews who perished in the Holocaust. In fact, only 21,000 Jewish people were allowed to come to the U.S. during the course of the war. And white people’s fear of minority groups contributed to race riots in Detroit and the Zoot Suit Riot against Mexicans in Los Angeles in 1943. Not just a song by the Cherry Poppin’ Daddies, also a tragic moment in American history.
The war years saw a dramatic increase in immigration from Mexico under the Bracero program (which lasted until 1964). And about 500,000 Mexican American men and women served in the armed forces during the war. As did 25,000 American Indians, although Indian reservations, being largely rural, didn’t really share in the wartime prosperity. Asian Americans are probably the most glaring example of the failure to be adequately pluralistic. Although things did improve for Chinese Americans because America couldn’t keep restricting the immigration of its ally in the war, Japanese Americans suffered horrible racism and one of the worst violations of civil liberties in America’s history. Executive Order 9066 in February 1942 expelled all persons of Japanese descent from the West Coast. 70% of Japanese Americans lived in California, and as a result of this order more than 110,000 people, almost 2/3 of whom were American citizens, were sent to internment camps where they lived in makeshift barracks under the eyes and searchlights of guards. A man named Fred Korematsu appealed his conviction for failing to show up for internment all the way to the Supreme Court, where he lost in yet another horrendous court decision. Okay, let’s go to the Thought Bubble. The group that experienced the greatest change during World War II was probably African Americans. They still served in segregated regiments in the armed forces, but more than 1 million of them answered the call to fight. And just as important, continuing the Great Migration that had begun in the 1920s, 700,000 African Americans left the South, moving to northern and especially western cities where they could find jobs, even though these mass migrations often led to tensions between blacks and whites, and sometimes these tensions exploded into violence. World War II also saw the beginning of the Civil Rights Movement. Angered by discrimination in defense employment, black labor leader A. Philip Randolph threatened a march on Washington demanding access to defense jobs, an end to segregation, and a federal anti-lynching law. He didn’t get all those things, but he did get Executive Order 8802, which banned discrimination in defense hiring and created the Fair Employment Practices Commission. The FEPC couldn’t enforce anti-discrimination, but as a compliance agency it helped African American workers obtain jobs in arms factories and shipyards. By 1944 more than a million black people were working in manufacturing, and 300,000 of them were women. The rhetoric of fighting a war for freedom against a racist dictatorship wasn’t lost on African Americans, and many saw themselves as engaged in the double-V campaign: victory over the Axis powers abroad and over racism in the United States. The war saw ending segregation and black equality become cornerstones of American liberalism, along with full employment and the expansion of civil liberties. Eventually even the army and navy began to integrate, although the full end to discrimination in the military would have to wait until well after the war. Thanks, Thought Bubble. So if America was isolationist before the war – and I’ve argued that it actually wasn’t really – after the war it certainly wasn’t. FDR took a very active role in planning for a more peaceful and prosperous post-war world. And conferences at Teheran, Yalta, and Potsdam clarified war aims, and established the idea that Germany would be divided and Nazis tried for war crimes.
These conferences also laid the foundation for the Cold War by allowing Soviet influence in Eastern Europe, especially Poland, so that wasn’t such a good thing. But the 1944 Bretton Woods conference, in beautiful, freedom-loving New Hampshire, established America’s economic dominance, as the dollar – which again would be backed by gold – replaced the pound as the main currency in international transactions. It also created the World Bank, to help rebuild Europe and to help developing countries, and the IMF, to stabilize currencies. How well that’s worked is debatable, but this isn’t: the United States became the financial leader of a global capitalist order. The United States also took a leading role in establishing the United Nations at the Dumbarton Oaks conference in 1944. Why do we not have a UN commission on improving the names of historical events? And then America adopted the UN charter, which was endorsed by the Senate because apparently we had learned our lesson after the League of Nations debacle. The goal of the UN was to ensure peace, and the United States’ position as one of the five permanent members of the Security Council signaled that it intended to take an active and leading role in international affairs. And we had to, because by the end of the war only the United States and the USSR were powerful enough to have any influence. So, World War II ended the Depression and transformed America’s economy. It cemented the new definition of liberalism established by the New Deal, and opened up opportunities for diverse groups of Americans. It also transformed definitions of freedom both at home and abroad. I mean, even before the U.S. entered the war, it issued the Atlantic Charter along with Britain, affirming the freedom of all people to choose their own government and declaring that the defeat of Nazi Germany would help to bring about a world of “improved labor standards, economic advancement, and social security.” At home and abroad World War II became a war that was about freedom, but was also about what Gunnar Myrdal called the American Creed – a belief in equality, justice, equal opportunity, and freedom. I want to be clear that we have done a terrible job of living up to the American Creed, but the story of American history is in many ways the story of ideas pulling policy, not the other way around. American history is an economic and political and social history, but it is also a story about the power of ideas. And World War II helped clarify those ideas for America and for the world. Thanks for watching. I’ll see you next week. Crash Course U.S. History is made by all of these nice people and it exists because of you and your support through Subbable. Here at Crash Course, we like making educational content that’s free for everyone forever. And we can do that because many people subscribe on a monthly basis through Subbable.com to Crash Course. There are lots of cool perks, like signed posters over at Subbable, but the biggest perk is that you get to keep watching this show. So thank you for making it possible, thanks for watching, and as we say in my hometown, don’t forget to be awesome...boom. Oh god. It was worse than I expected. ________________ [1] Quoted in Eric Foner, Give Me Liberty!, p. 927.
US_History
The_Progressive_Era_Crash_Course_US_History_27.txt
Episode 27: Progressive Era Hi, I’m John Green, this is CrashCourse U.S. history, and today we’re gonna talk about Progressives. No Stan Progressives. Yes. You know, like these guys who used to want to bomb the means of production, but also less radical Progressives. Mr. Green, Mr. Green. Are we talking about, like, tumblr progressive where it’s half discussions of misogyny and half high-contrast images of pizza? Because if so, I can get behind that. Me from the past, your anachronism is showing. Your Internet was green letters on a black screen. But no, The Progressive Era was not like tumblr, however I will argue that it did indirectly make tumblr and therefore JLaw gifsets possible, so that’s something. So some of the solutions that progressives came up with to deal with issues of inequality and injustice don’t seem terribly progressive today, and also it kinda overlapped with the gilded age, and progressive implies, like, progress, presumably progress toward freedom and justice, which is hard to argue about an era that involved one of the great restrictions on freedom in American history, prohibition. So maybe we shouldn’t call it the Progressive Era at all. I g--Stan, whatever, roll the intro. Intro So, if the Gilded Age was the period when American industrial capitalism came into its own, and people like Mark Twain began to criticize its associated problems, then the Progressive era was the age in which people actually tried to solve those problems through individual and group action. As the economy changed, Progressives also had to respond to a rapidly changing political system. The population of the U.S. was growing and its economic power was becoming ever more concentrated. And sometimes, Progressives responded to this by opening up political participation and sometimes by trying to restrict the vote. The thing is, broad participatory democracy doesn’t always result in effective government--he said, sounding like the Chinese national Communist Party. And that tension between wanting to have government for, of, and by the people and wanting to have government that’s, like, good at governing kind of defined the Progressive era. And also our era. But progressives were most concerned with the social problems that revolved around industrial capitalist society. And most of these problems weren’t new by 1900, but some of the responses were. Companies and, later, corporations had a problem that had been around at least since the 1880s: they needed to keep costs down and profits high in a competitive market. And one of the best ways to do this was to keep wages low, hours long, and conditions appalling: your basic house-elf situation. Just kidding, house elves didn’t get wages. Also, by the end of the 19th century, people started to feel like these large, monopolistic industrial combinations, the so-called trusts, were exerting too much power over people’s lives. The 1890s saw federal attempts to deal with these trusts, such as the Sherman Anti-Trust Act, but overall, the Federal Government wasn’t where most progressive changes were made. For instance, there was muckraking, a form of journalism in which reporters would find some muck and rake it. Mass circulation magazines realized they could make money by publishing exposés of industrial and political abuse, so they did. Oh, it’s time for the Mystery Document? I bet it involves muck. The rules here are simple. I guess the author of the Mystery Document. I’m either correct or I get shocked. 
“Let a man so much as scrape his finger pushing a truck in the pickle-rooms, and all the joints in his fingers might be eaten by the acid, one by one. Of the butchers and floormen, the beef-boners and trimmers, and all those who used knives, you could scarcely find a person who had the use of his thumb; time and time again the base of it had been slashed, till it was a mere lump of flesh against which the man pressed the knife to hold it. ... They would have no nails – they had worn them off pulling hides.” Wow. Well now I am hyper-aware of and grateful for my thumbs. They are just in excellent shape. I am so glad, Stan, that I am not a beef-boner at one of the meat-packing factories written about in The Jungle by Upton Sinclair. No shock for me! Oh Stan, I can only imagine how long and hard you’ve worked to get the phrase “beef-boner” into this show. And you finally did it. Congratulations. By the way, just a little bit of trivia: The Jungle was the first book I ever read that made me vomit. So that’s a review. I don’t know if it’s positive, but there you go. Anyway, at the time, readers of The Jungle were more outraged by descriptions of rotten meat than by the treatment of meatpacking workers: The Jungle led to the Pure Food and Drug Act and the Meat Inspection Act of 1906. That’s pretty cool for Upton Sinclair, although my books have also led to some federal legislation, such as the HAOPT, which officially declared Hazel and Augustus the nation’s OTP. So, to be fair, writers had been describing the harshness of industrial capitalism for decades, so muckraking wasn’t really that new, but the use of photography for documentation was. Lewis Hine, for instance, photographed child laborers in factories and mines, bringing Americans face to face with the more than 2 million children under the age of 15 working for wages. And Hine’s photos helped bring about laws that limited child labor. But even more important than the writing and photographs and magazines when it came to improving conditions for workers was Twitter … what’s that? There was no Twitter? Still? What is this, 1812? Alright, so apparently still without Twitter, workers had to organize into unions to get corporations to reduce hours and raise their pay. Also some employers started to realize on their own that one way to mitigate some of the problems of industrialization was to pay workers better. Like, in 1914, Henry Ford paid his workers an average of $5 per day, unheard of at the time. Whereas today I pay Stan and Danica 3x that and still they whine. Ford’s reasoning was that better-paid workers would be better able to afford the Model Ts that they were making. And indeed, Ford’s annual output rose from 34,000 cars to 730,000 between 1910 and 1916, and the price of a Model T dropped from $700 to $316. Still, Henry Ford definitely forgot to be awesome sometimes; he was anti-Semitic, he used spies in his factories, and he named his child Edsel. Also, like most employers at the turn of the century, he was virulently anti-union. So, while the AFL was organizing the most privileged industrial workers, another union grew up to advocate for rights for a larger swath of the workforce, especially the immigrants who dominated unskilled labor: the Industrial Workers of the World. They were also known as the Wobblies, and they were founded in 1905 to advocate for “every wage-worker, no matter what his religion, fatherland or trade,” and not, as the name Wobblies suggests, just those fans of wibbly-wobbly-timey-wimey.
The Wobblies were radical socialists; ultimately they wanted to see capitalism and the state disappear in revolution. Now, most progressives didn’t go that far, but some, following the ideas of Henry George, worried that economic progress could produce a dangerous unequal distribution of wealth that could only be cured by … taxes. But, more Progressives were influenced by Simon W. Patten who prophesied that industrialization would bring about a new civilization where everyone would benefit from the abundance and all the leisure time that all these new labor-saving devices could bring. This optimism was partly spurred by the birth of a mass consumption society. I mean, Americans by 1915 could purchase all kinds of new-fangled devices, like washing machines, or vacuum cleaners, automobiles, record players. It’s worth underscoring that all this happened in a couple generations: I mean, in 1850, almost everyone listened to music and washed their clothes in nearly the same way that people did 10,000 years ago. And then BOOM. And for many progressives, this consumer culture, to quote our old friend Eric Foner, “became the foundation for a new understanding of freedom as access to the cornucopia of goods made available by modern capitalism.” And this idea was encouraged by new advertising that connected goods with freedom, using “liberty” as a brand name or affixing the Statue of Liberty to a product. By the way, Crash Course is made exclusively in the United States of America, the greatest nation on earth ever. (Libertage.) That’s a lie, of course, but you’re allowed to lie in advertising. But in spite of this optimism, most progressives were concerned that industrial capitalism, with its exploitation of labor and concentration of wealth, was limiting, rather than increasing freedom, but depending on how you defined “freedom,” of course. Industrialization created what they referred to as “the labor problem” as mechanization diminished opportunities for skilled workers and the supervised routine of the factory floor destroyed autonomy. The scientific workplace management advocated by efficiency expert Frederick W. Taylor required rigid rules and supervision in order to heighten worker productivity. So if you’ve ever had a job with a defined number of bathroom breaks, that’s why. Also “Taylorism” found its way into classrooms; and anyone who’s had to sit in rows for 45 minute periods punctuated by factory-style bells knows that this atmosphere is not particularly conducive to a sense of freedom. Now this is a little bit confusing because while responding to worker exploitation was part of the Progressive movement, so was Taylorism itself because it was an application of research, observation, and expertise in response to the vexing problem of how to increase productivity. And this use of scientific experts is another hallmark of the Progressive era, one that usually found its expression in politics. American Progressives, like their counterparts in the Green Sections of Not-America, sought government solutions to social problems. Germany, which is somewhere over here, pioneered “social legislation” with its minimum wage, unemployment insurance and old age pension laws, but the idea that government action could address the problems and insecurities that characterized the modern industrial world, also became prominent in the United States. And the notion that an activist government could enhance rather than threaten people’s freedom was something new in America. 
Now, Progressives pushing for social legislation tended to have more success at the state and local level, especially in cities, which established public control over gas and water and raised taxes to pay for transportation and public schools. Whereas federally the biggest success was, like, Prohibition, which, you know, not that successful. But anyway, if all that local collectivist investment sounds like Socialism, it kind of is. I mean, by 1912 the Socialist Party had 150,000 members and had elected scores of local officials like Milwaukee mayor Emil Seidel. Some urban progressives even pushed to get rid of traditional democratic forms altogether. A number of cities were run by commissions of experts or city managers, who would be chosen on the basis of some demonstrated expertise or credential rather than their ability to hand out turkeys at Christmas or find jobs for your nephew’s sister’s cousin. Progressive editor Walter Lippmann argued for applying modern scientific expertise to solve social problems in his 1914 book Drift and Mastery, writing that scientifically trained experts “could be trusted more fully than ordinary citizens to solve America’s deep social problems.” This tension between government by experts and increased popular democratic participation is one of the major contradictions of the Progressive era. The 17th amendment allowed for senators to be elected directly by the people rather than by state legislatures, and many states adopted primaries to nominate candidates, again taking power away from political parties and putting it in the hands of voters. And some states, particularly western ones like California, adopted aspects of even more direct democracy: the initiative, which allowed voters to put issues on the ballot, and the referendum, which allowed them to vote on laws directly. And lest you think that more democracy is always good, I present you with California. But many Progressives wanted actual policy made by experts who knew what was best for the people, not the people themselves. And despite primaries and direct elections of senators, it’s hard to argue that the Progressive Era was a good moment for democratic participation, since many Progressives were only in favor of voting insofar as it was done by white, middle-class, Protestant voters. Alright. Let’s go to the Thought Bubble. Progressives limited immigrants’ participation in the political process through literacy tests and laws requiring people to register to vote. Voter registration was supposedly intended to limit fraud and the power of political machines. Stop me if any of this sounds familiar, but it actually just suppressed voting generally. Voter turnout gradually declined from 80% of male Americans in the 1890s to the point where today only about 50% of eligible Americans vote in presidential elections. But an even bigger blow to democracy during the Progressive era came with the Jim Crow laws passed by legislatures in southern states, which legally segregated the South. First, there was the deliberate disenfranchisement of African Americans. The 15th amendment made it illegal to deny the right to vote based on race, color or previous condition of servitude but said nothing about the ability to read, so many Southern states instituted literacy requirements. Other states added poll taxes, requiring people to pay to vote, which effectively disenfranchised large numbers of African American people, who were disproportionately poor.
The Supreme Court didn’t help: In 1896, it made one of its most famous bad decisions, Plessy v. Ferguson, ruling that segregation in public accommodations, in Homer Plessy’s case a railroad car, did not violate the 14th amendment’s Equal Protection clause. As long as black railroad cars were equal to white ones, it was A-OK to have duplicate sets of everything. Now, creating two equal-quality sets of everything would get really expensive, so Southern states didn’t actually do it. Black schools, public restrooms, public transportation opportunities--the list goes on and on--would definitely be separate, and definitely not equal. Thanks, ThoughtBubble. Now, of course, as we’ve seen, Progressive ideas inspired a variety of responses, both for Taylorism and against it, both for government by experts and for direct democracy. Similarly, in the Progressive era, just as the Jim Crow laws were being passed, there were many attempts to improve the lives of African Americans. The towering figure in this movement to “uplift” black southerners was Booker T. Washington, a former slave who became the head of the Tuskegee Institute in Alabama, a center for vocational education. And Washington urged southern black people to emphasize skills that could make them successful in the contemporary economy. The idea was that they would earn the respect of white people by demonstrating their usefulness, and everyone would come to respect each other through the recognition of mutual dependence while continuing to live in separate social spheres. But Washington’s accommodationist stance was not shared by all African Americans. W.E.B. Du Bois advocated for full civil and political rights for black people and helped to found the NAACP, which urged African Americans to fight for their rights through “persistent, manly agitation.” So I wanted to talk about the Progressive Era today not only because it shows up on a lot of tests, but because Progressives tried to tackle many of the issues that we face today, particularly concerning immigration and economic justice, and they used some of the same methods that we use today: organization, journalistic exposure, and political activism. Now, we may use tumblr or tea party forums, but the same concerns motivate us to work together. And just as today, many of their efforts were not successful because of the inherent difficulty in trying to mobilize very different interests in a pluralistic nation. In some ways their platforms would have been better suited to an America that was less diverse and complex. But it was that very diversity and complexity that gave rise and still gives rise to the urge toward progress in the first place. Thanks for watching. I’ll see you next week. Crash Course is produced and directed by Stan Muller. Our script supervisor is Meredith Danko. The associate producer is Danica Johnson. The show is written by my high school history teacher, Raoul Meyer, Rosianna Rojas, and myself. And our graphics team is Thought Café. Every week there’s a new caption for the libertage. You can suggest captions in comments where you can also ask questions about today’s video that will be answered by our team of historians. Thanks for watching Crash Course. If you like it, and if you’re watching the credits you probably do, make sure you’re subscribed. And as we say in my hometown, don’t forget to be awesome... That was more dramatic than it sounded.
US_History
The_New_Deal_Crash_Course_US_History_34.txt
Episode 34 – The New Deal Hi, I’m John Green, this is CrashCourse U.S. history, and today we’re going to get a little bit controversial, as we discuss the FDR administration’s response to the Great Depression: the New Deal. That’s the National Recovery Administration, by the way, not the National Rifle Association or the No Rodents Allowed Club, which I’m a card-carrying member of. Did the New Deal end the Depression (spoiler alert: mehhh)? More controversially, did it destroy American freedom or expand the definition of liberty? In the end, was it a good thing? Mr. Green, Mr. Green. Yes. Ohh, Me from the Past, you are not qualified to make that statement. What? I was just trying to be, like, provocative and controversial. Isn’t that what gets views? You have the worst ideas about how to make people like you. But anyway, not EVERYTHING about the New Deal was controversial. This is CrashCourse, not TMZ. intro The New Deal redefined the role of the federal government for most Americans and it led to a re-alignment of the constituents in the Democratic Party, the so-called New Deal coalition. (Good job with the naming there, historians.) And regardless of whether you think the New Deal meant more freedom for more people or was a plot by red shirt wearing Communists, the New Deal is extremely important in American history. Wait a second. I’m wearing a red shirt. What are you trying to say about me, Stan? As the owner of the means of production, I demand that you dock the wages of the writer who made that joke. So after his mediocre response to the Great Depression, Herbert Hoover did not have any chance of winning the presidential election of 1932, but he also ran like he didn’t actually want the job. Plus, his opponent was Franklin Delano Roosevelt, who was as close to a born politician as the United States has ever seen, except for Kid President. The phrase New Deal came from FDR’s campaign, and when he was running FDR suggested that it was the government’s responsibility to guarantee every man a right to make a comfortable living, but he didn’t say HOW he meant to accomplish this. Like, it wasn’t gonna come from government spending, since FDR was calling for a balanced budget and criticizing Hoover for spending so much. Maybe it would somehow magically happen if we made alcohol legal again and one thing FDR did call for was an end to Prohibition, which was a campaign promise he kept. After three years of Great Depression, many Americans seriously needed a drink, and the government sought tax revenue, so no more Prohibition. FDR won 57% of the vote and the Democrats took control of Congress for the first time in a decade. While FDR gets most of the credit, he didn’t actually create the New Deal or put it into effect. It was passed by Congress. So WTFDR was the New Deal? Basically, it was a set of government programs intended to fix the depression and prevent future depressions. There are a couple of ways historians conceptualize it. One is to categorize the programs by their function. This is where we see the New Deal described as three R’s. The relief programs gave help, usually money, to poor people in need. Recovery programs were intended to fix the economy in the short run and put people back to work. And lastly, the Run DMC program was designed to increase the sales of Adidas shoes. No, alas, it was reform programs that were designed to regulate the economy in the future to prevent future depression. 
But some of the programs, like Social Security, don’t fit easily into one category, and there are some blurred lines between recovery and reform. Like, how do you categorize the bank holiday and the Emergency Banking Act of March 1933, for example? FDR’s order to close the banks temporarily also created the FDIC, which insures individual deposits against future banking disasters. By the way, we still have all that stuff, but was it recovery, because it helped the short-term economy by making more stable banks, or was it reform because federal deposit insurance prevents bank runs? A second way to think about the New Deal is to divide it into phases, which historians with their A-number-one naming creativity call the First and Second New Deal. This more chronological approach indicates that there has to be some kind of cause and effect thing going on because otherwise why would there be a second New Deal if the first one worked so perfectly? The First New Deal comprises Roosevelt’s programs before 1935, many of which were passed in the first hundred days of his presidency. It turns out that when it comes to getting our notoriously gridlocked Congress to pass legislation, nothing motivates like crisis and fear. Stan, can I get the foreshadowing filter? We may see this again. So, in a brief break from its trademark obstructionism, Congress passed laws establishing the Civilian Conservation Corps, which paid young people to build national parks, the Agricultural Adjustment Act, the Glass-Steagall Act, which barred commercial banks from buying and selling stocks, and the National Industrial Recovery Act. Which established the National Recovery Administration, which has lightning bolts in its claws. The NRA was designed to be government planners and business leaders working together to coordinate industry standards for production, prices, and working conditions. But that whole public-private cooperation idea wasn’t much immediate help to many of the starving unemployed, so the Hundred Days reluctantly included the Federal Emergency Relief Administration, to give welfare payments to people who were desperate. Alright. Let’s go to the ThoughtBubble. Roosevelt worried about people becoming dependent on relief handouts, and preferred programs that created temporary jobs. One section of the NIRA created the Public Works Administration, which appropriated $3.3 billion to build stuff like the Triborough Bridge. So much for a balanced budget. The Civil Works Administration, launched in November 1933, eventually employed 4 million people building bridges, schools, and airports. Government intervention reached its highest point, however, in the Tennessee Valley Authority. This program built a series of dams in the Tennessee River Valley to control floods, prevent deforestation, and provide cheap electric power to people in rural counties in seven southern states. But, despite all that sweet sweet electricity, the TVA was really controversial because it put the government in direct competition with private companies. Other than the NIRA, few acts were as contentious as the Agricultural Adjustment Act. The AAA basically gave the government the power to try to raise farm prices by setting production quotas and paying farmers to plant less food. This seemed ridiculous to the hungry Americans who watched as 6 million pigs were slaughtered and not made into bacon. Wait, Stan, 6 million pigs? But…bacon is good for me...
Only property-owning farmers actually saw the benefits of the AAA, so most African American farmers who were tenants or sharecroppers continued to suffer. And the suffering was especially acute in Oklahoma, Texas, Kansas, and Colorado, where drought created the Dust Bowl. All this direct government intervention in the economy was too much for the Supreme Court. In 1936 the court struck down the AAA in U.S. v. Butler. Earlier, in the Schechter Poultry case (AKA the sick chicken case - finally a Supreme Court case with an interesting name), the court invalidated the NIRA because its regulations “delegated legislative powers to the president and attempted to regulate local businesses that did not engage in interstate commerce.”[1] Thanks, ThoughtBubble. So with the Supreme Court invalidating acts left and right, it looked like the New Deal was about to unravel. FDR responded by proposing a law that would allow him to appoint new Supreme Court justices if sitting justices reached the age of 70 and failed to retire. Now, this was totally constitutional – you can go ahead and look at the Constitution, if Nicolas Cage hasn’t already swiped it – but it seemed like such a blatant power grab that Roosevelt’s plan to “pack the court” brought on a huge backlash. Stop everything. I’ve just been informed that Nicolas Cage stole the Declaration of Independence, not the Constitution. I want to apologize to Nic Cage himself and also everyone involved in the National Treasure franchise, which is truly a national treasure. Anyway, in the end, the Supreme Court began upholding the New Deal laws, starting a new era of Supreme Court jurisprudence in which the government regulation of the economy was allowed under a very broad reading of the commerce clause. Because really, isn’t all commerce interstate commerce? I mean if I go to Jimmy John’s, don’t I exit the state of hungry and enter the state of satisfied? Thus began the Second New Deal, shifting focus away from recovery and towards economic security. Two laws stand out for their far-reaching effects here: the National Labor Relations Act, also called the Wagner Act, and the Social Security Act. The Wagner Act guaranteed workers the right to unionize and it created a National Labor Relations Board to hear disputes over unfair labor practices. In 1934 alone there were more than 2,000 strikes, including one that involved 400,000 textile workers. Oh, it’s time for the Mystery Document? Man, I wish there were a union to prevent me from getting electrocuted. The rules here are simple. I guess the author of the Mystery Document. And I’m usually wrong and get shocked. “Refusing to allow people to be paid less than a living wage preserves to us our own market. There is absolutely no use in producing anything if you gradually reduce the number of people able to buy even the cheapest products. The only way to preserve our markets is an adequate wage.” Uh, I mean you usually don’t make it this easy, but I’m going to guess that it’s Franklin Delano Roosevelt. Dang it! Eleanor Roosevelt? Eleanor. Of course it was Eleanor. Gah! The most important union during the 1930s was the Congress of Industrial Organizations, which set out to unionize entire industries, like steel and automobile manufacturing. In 1936 the United Auto Workers launched a new tactic called the sit-down strike. Workers at the Fisher Body Plant in Flint, Michigan, simply stopped working, sat down, and occupied the plant. Eventually GM agreed to negotiate, and the UAW won.
Union membership rose to 9 million people as “CIO unions helped to stabilize a chaotic employment situation and offered members a sense of dignity and freedom.”[2] That quote, by the way, is from our old buddy Eric Foner. God, I love you, Foner. And unions played an important role in shaping the ideology of the Second New Deal because they insisted that the economic downturn had been caused by underconsumption, and that the best way to combat the Depression was to raise workers’ wages so that they could buy lots of stuff. The thinking went that if people experienced less economic insecurity, they would spend more of their money, so there were widespread calls for public housing and universal health insurance. And that brings us to the crowning achievement of the Second New Deal, and/or the crowning achievement of its Communist plot, the Social Security Act of 1935. Social Security included unemployment insurance, aid to the disabled, aid to poor families with children, and, of course, retirement benefits. It was, and is, funded through payroll taxes rather than general tax revenue, and while state and local governments retained a lot of discretion over how benefits would be distributed, Social Security still represented a transformation in the relationship between the federal government and American citizens. Like, before the New Deal, most Americans didn’t expect the government to help them in times of economic distress. After the New Deal the question was no longer if the government should intervene, but how it should. For a while, the U.S. government under FDR embraced Keynesian economics, the idea that the government should spend money even if it means going into deficits in order to prop up demand. And this meant that the state was much more present in people’s lives. I mean for some people that meant relief or Social Security checks. For others, it meant a job with the most successful government employment program, the Works Progress Administration. The WPA didn’t just build post offices, it paid painters to make them beautiful with murals, it paid actors and writers to put together plays, and ultimately employed more than 3 million Americans each year until it ended in 1943. It also, by the way, paid for lots of photographers to take amazing photographs, which we can show you for free because they are owned by the government, so I’m just going to keep talking about how great they are. Oh, look at that one, that’s a winner. Okay. Equally transformative, if less visually stimulating, was the change that the New Deal brought to American politics. The popularity of FDR and his programs brought together urban progressives who would have been Republicans two decades earlier with unionized workers - often immigrants - left-wing intellectuals, urban Catholics, and Jews. FDR also gained the support of middle-class homeowners, and he brought African Americans into the Democratic Party. Who was left to be a Republican, Stan? I guess there weren’t many, which is why FDR kept getting re-elected until, you know, he died. But, fascinatingly, one of the biggest and politically most important blocs in the New Deal Coalition was white southerners, many of whom were extremely racist. Democrats had dominated in the South since the end of Reconstruction, you know, since the other party was the party of Lincoln. And all those Southern Democrats who had been in Congress for so long became important legislative leaders.
In fact, without them, FDR never could have passed the New Deal laws, but Southerners expected whites to dominate the government and the economy, and they insisted on local administration of many New Deal programs. And that ensured that the AAA and the NLRA would exclude sharecroppers, and tenant farmers, and domestic servants, all of whom were disproportionately African American. So, did the New Deal end the Depression? No. I mean, by 1940 over 15% of the American workforce remained unemployed. But, then again, when FDR took office in 1933, the unemployment rate was at 25%. Maybe the best evidence that government spending was working is that when FDR reduced government subsidies to farms and the WPA in 1937, unemployment immediately jumped back up to almost 20%. And many economic historians believe that it’s inaccurate to say that government spending failed to end the Depression because in the end, at least according to a lot of economists, what brought the Depression to an end was a massive government spending program called World War II. So, given that, is the New Deal really that important? Yes. Because first, it changed the shape of the American Democratic Party. African Americans and union workers became reliable Democratic votes. And secondly, it changed our way of thinking. Like, liberalism in the 19th century meant limited government and free-market economics. Roosevelt used the term to refer to a large, active state that saw liberty as “greater security for the average man.” And that idea that liberty is more closely linked to security than it is to, like, freedom from government intervention is still really important in the way we think about liberty today. No matter where they fall on the contemporary political spectrum, politicians are constantly talking about keeping Americans safe. Also our tendency to associate the New Deal with FDR himself points to what Arthur Schlesinger called the “imperial presidency.” That is, we tend to associate all government policy with the president. Like, after Jackson’s and Lincoln’s presidencies, Congress reasserted itself as the most important branch of the government. But that didn’t happen after FDR. But above all that, the New Deal changed the expectations that Americans had of their government. Now, when things go sour, we expect the government to do something. We’ll give our last words today to Eric Foner, who never Foner-s it in: the New Deal “made the government an institution directly experienced in Americans’ daily lives and directly concerned with their welfare.”[3] Thanks for watching. I’ll see you next week. Crash Course is made with the help of all of these nice people. And it is possible because of your support at subbable.com. Here at Crash Course we want to make educational videos for free, for everyone, forever. And that’s possible thanks to your subscription at subbable.com. You can make a monthly subscription and the price is up to you. It can even be zero dollars, although more is better. Thanks so much for watching Crash Course and as we say in my hometown, don’t forget to be awesome. ________________ [1] Foner, Give Me Liberty!, ebook version, p. 870. [2] Foner, Give Me Liberty!, ebook version, p. 873. [3] Foner, Give Me Liberty!, ebook version, p. 898.