MIT_508J_Biological_Chemistry_II_Spring_2016
2_Protein_Synthesis_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT Open Courseware continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT Open Courseware at ocw.mit.edu. ELIZABETH NOLAN: What we'll do today is have an overview looking at the ribosome structure, and also an overview of translation to get everyone on the same page for the discussions we'll start next week on the elongation cycle of translation. So I'll post, within lecture notes, reading as it applies to a given module and information about the problem sets, so you have that here. So before we get into some more molecular level details about ribosome structure, it's important to appreciate how we've gotten to where we are now in terms of our understanding, and so where we'll start is back in some early studies of electron microscopy. And this is in the '50s, and this researcher, Palade, obtained images, that looked like this, of rat pancreas tissue. And what was seen in these images were a lot of dark spheres. You can see them throughout, and they were called the particles of Palade. And one thing the scientists questioned is whether these black spheres or dots were something real or an artifact from his methods, so the perennial and arduous question of artifact versus reality. So this is something that we all question everyday when doing our experiments as well. And so he was quite a thorough scientist and experimentalist, and he repeated these experiments using different procedures to fix the tissues. And he observed these types of features in many different types of samples, and what was determined later on is that these black spheres are actually ribosomes. So one of the things we're going to look at today is how did we get from an image like this, just seeing some black dots, to the crystal structures we have today and the atomic resolution and understanding. And so he received the Nobel Prize back in 1974 for this contribution. So just to keep in mind the hypothesis of translation, which is easy for us to take for granted these days. Goes back into the '60s, so there were studies during the '60s that resulted in the discovery that the 50S subunit of E. coli ribosomes catalyzes peptide bond formation. And it was discovered that the anticodon of the tRNA interacts with the 30S subunit, and that was important for translation. So this decoding problem-- effectively, how do we get from mRNA to protein-- was also articulated in the early '60s, and this was a puzzle for basically four decades. If we think about this from the standpoint of structure analysis and crystal structure-- so we'll look at images and data from crystal structures of the ribosome subunits today. If you take a look, what's important to appreciate here is that there was huge amounts of effort over many, many years to get where we are now. So in 1980, first crystals of the ribosome were obtained, but these crystals weren't of suitable quality for analysis. If we look at 20 years later, in 2000, the first crystal structure of the 50S subunit was reported, and since then, there's been a flurry of activity. So in 2001, first crystal structure of the 30 subunit-- 30S subunit with mRNA bound, and in this time, too, single molecule spectroscopy was well on its way, and so there were studies beginning of ribosome dynamics. And later, 2011, we have a crystal structure of a eukaryotic 60S subunit here. 
And so we're going to focus our discussions on the prokaryotic ribosome. There are similarities and differences between prokaryotic ribosomes and eukaryotic ribosomes, just to keep in mind if you've heard about eukaryotic ribosomes in other classes. So also to note, in 2009, the Nobel Prize in Chemistry was awarded for structural studies of the ribosome to these three researchers here. And their contributions are shown ranging from basically the first low-quality crystals of the 50S ribosomal subunit-- to understand how important that was-- to the first crystal structures. And something Professor Stubbe and I like to remind everyone and keep in mind is, with these types of problems and areas, there are often many contributors, and they can't all be recognized by this prize because it's limited to three individuals at maximum. And so other folks like Harry Noller, Peter Moore, and Joachim Frank made really seminal contributions to our understanding of this macromolecular machine. So what are the questions we're going to address in this module? And then we'll go over some of the basics in ribosome structure. So first, what do we learn from structural studies of the ribosome, and really, what does ribosome structure at an atomic level tell us about its function? How does the ribosome recognize, bind, and decode mRNA? How are amino acids recognized and delivered, and how is the correct amino acid delivered? The genetic message needs to be read, and it needs to be read properly. And what happens if a wrong amino acid is delivered? So that's a possibility. How does the ribosome cope? So this brings up the notion of fidelity. How is fidelity of translation maintained? And we'll address that next week. How is translation initiated? How does the ribosome catalyze peptide bond formation? So we're interested in that mechanism within the context of this course. How does the polypeptide leave the ribosome, and what happens to that polypeptide after it exits? So that will be a transition for us into module 2 on protein folding. How is translation terminated, and what happens to the ribosome after? So a given polypeptide chain is made. What happens after that? And where we'll close this module is thinking about how our understanding of the ribosome, from all of these basic and fundamental studies, allows for the development of new technologies. And we'll specifically think about how it's possible to use the ribosome to incorporate unnatural amino acids into proteins. So where we're going to move forward today is really structure-- focusing on ribosome structure and a general overview of translation, basically to have everyone here up to speed for the discussions to come next week. So first of all, we'll do an overview of key players in translation, a brief look at the cycle, and then we'll go into structural studies of the ribosome. And within this set of lecture notes are several tables that have lists of the players and the detailed overall cycle that I encourage you to use, just as a reference throughout this module for keeping everything straight. So first, of course, we have the ribosome. So the ribosome, as we all know, reads the genetic code via the mRNA, and it catalyzes peptide bond formation. So in addition to the ribosome, we have the mRNA. So this mRNA delivers the genetic code to the ribosome, and it provides a template-- [AUDIENCE MEMBER SNEEZES] Bless you-- for protein synthesis. So effectively, we can think about this process as a template-driven polymerization. 
So somehow, the amino acids need to get to the ribosome, and so we need the help of the tRNAs. So these transfer RNAs deliver the amino acid monomers to the ribosome, and they transfer the amino acids during synthesis of the polypeptide. So in addition to the ribosome, the mRNA, and the tRNAs, the ribosome needs some help, so we have translation factors. And there are translation factors that are involved in each step of the translation cycle. So these are proteins that are required at specific points during the translation process. And so in terms of translation factors, we can break the process of translation into three or four steps-- I prefer three-- which are initiation, elongation, and termination. Some review articles and papers will divide this into four steps, because within termination, you can think about peptide release and then ribosome recycling. But regardless of that detail, at each of these stages, there are translation factors that help. So we have initiation factors that help with the process of initiation, and in prokaryotes, we have initiation factors 1, 2, and 3, so three translation factors there. During elongation, so the process of making the peptide bond, there are elongation factors. EF for Elongation Factor, IF for Initiation Factor. We have EF-Tu, EF-G, and others, and we'll spend quite a bit of time thinking about EF-Tu and EF-G over the course of the next week and in recitation and in problem sets, thinking about how these factors are really facilitating the elongation process here. And during termination, there are release factors, so we have release factors 1, 2, and 3. These are involved in release of the polypeptide that's been synthesized from the ribosome, and there are other players as well that I'll list here, including ribosome recycling factor, so the subunits get recycled, as we'll see. And we can also include a protein called trigger factor here, which is involved in folding of nascent polypeptide chains, or the polypeptide chain as it's coming off the ribosome. And then just to summarize in terms of three stages of translation as I'll present them to you within this course, we have, one, the initiation process; two, elongation; and three, termination. And where we'll be focusing the lectures next week, and really, this whole module, is here on elongation. And I'll just note that the elongation cycle is highly conserved. Termination and initiation vary quite a bit between prokaryotes and eukaryotes in terms of the processes and involved players. So where are we going? Just as a brief overview of the cycle, and we'll come back to this later within today's lecture, or if not, on Monday. So we start with initiation, and we're going to have to ask ourselves, how is it that this 70S prokaryotic ribosome or initiation complex is assembled? And so there's a special tRNA involved, the initiator tRNA that we see binds here, and we'll talk more about these E, P, and A-sites in a moment. So we see the ribosome is assembled, the mRNA is bound, and there's an initiator tRNA bound. In order for the elongation cycle to be entered, an amino acid needs to be delivered, and that's delivered by an aminoacyl tRNA. That's in a ternary complex, so three components. We have the tRNA, the elongation factor Tu, and GTP. So this complex here somehow delivers an aminoacyl tRNA to the ribosome, and we're going to look at this process in detail next week. 
So this will be one of our case studies thinking about experiments and how experiments have supported a specific kinetic model here. So here, we have a complex where the tRNA is ready to occupy the A-site. What happens here-- we see that there's a GTP hydrolysis event. We'll talk about more as we go forward. Peptide transfer reaction-- so we have formation of a peptide bond, and then this elongation factor G comes in to facilitate the elongation cycle. And then this cycle will continue until some point that signals to stop synthesis, so a stop codon will enter the A-site. And there's a termination process, ribosome recycling, and you can imagine this whole cycle happening again. So how do we get to this cartoon to some more detailed understanding? That's where we're going. So come back to this cartoon at various stages throughout the course. So first, we'll do a cartoon overview of the prokaryotic 70S ribosome, and then we're going to look at some of the data from crystallography studies here. So as I think we all know the ribosome is comprised of RNA and proteins, and by mass, it's about 66% RNA and about 34% protein. And it's comprised of two subunits, and those are indicated in the cartoon by different colors. So in prokaryotes, we have the 50S, which is the large subunit. This is made up of 23S ribosomal RNA, a piece of 5S ribosomal RNA, and proteins. In terms of size this is huge, so it's approximately 1.5 megadaltons. And what we find within this subunit is the catalytic center, or peptidyl transferase center. This is sometimes abbreviated as PTC. And what we also find in the 50S subunit are three sites for tRNA binding. And so the other subunit in prokaryotes is the 30S. This is a small subunit. It's comprised of 16S rRNA and proteins, and it's also quite large. Just smaller than the 50S, so on the order of 0.8 megadaltons. And in terms of function, what we have in the 30S is the decoding center, so for decoding the mRNA, and the site of mRNA binding. So if we draw this in cartoon form-- and this is something I really encourage you all to do when thinking about the experiments and the problem sets because that's going to help you understand the experimental design and what actually happened. Here, what we have on top is the 50S subunit. On bottom, the 30S subunit. We have the mRNA, and note the directionality, so 5 prime end, 5 prime end of the ribose, 3 prime end here. And then within this 50S, we can think about these sites for tRNA binding ordered as such, so E, P, and A here, so this is the catalytic center or peptidyl transferase center here. So overall, this assembled ribosome is on the order of 2.3 megadaltons and is about 200 Angstroms in diameter. So just in terms of these names, 50S, 30S-- this is overall the 70S assembled ribosome. What do these numbers-- where do they come from? What does this 50S, 30S, 70S mean? So what is the S? AUDIENCE: It's [INAUDIBLE]. It has to do with the sedimentation [INAUDIBLE].. ELIZABETH NOLAN: Yeah, it has to do with the sedimentation. So there's a type of experiment called analytical ultra centrifugation, and effectively, you can use this to ask about the sedimentation of a biomolecule. So effectively, what is the rate at which a biomolecule moves in response to the centrifugal force in a centrifuge there? And so you can use optics to monitor the sedimentation and use mathematics to fit those data to come up with an S value. So typically, the larger the S value, the larger the size. 
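For reference, the quantity being fit in that kind of experiment is the sedimentation coefficient; the relation below is a standard textbook one rather than something given in the lecture:

$$ s = \frac{m\,(1 - \bar{v}\rho)}{f}, \qquad 1\ \mathrm{S} = 10^{-13}\ \mathrm{s} $$

where m is the particle mass, v-bar its partial specific volume, rho the solvent density, and f the frictional coefficient, which depends on the particle's shape as well as its size.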
It's not always directly proportional to the mass because things like shape play a role as well, but effectively, we see 50S, and that subunit is larger than the 30S. And note, when they come together, it's not additive. It's 70S there, if you're to look at the assembled ribosome in one of these experiments. So that's where those values come from here. So if we take a look from my cartoon depiction to actual image from cryoelectron microscopy-- so this is just rotated basically 90 degrees. What do we see? We have the 50S here, the catalytic center. We have the 30S. Here's the mRNA, and what we're seeing is that, in this particular structure, there's some tRNAs bound, and they've indicated also a ribosomal protein here. Just as a sense of complexity-- so in E. coli, the 50S subunit has over 30 proteins associated with it. That's a lot-- 30 different proteins. And the 30S has 21 ribosomal proteins associated with it, so we need to think about the proteins in addition to the RNA. So let's take a look at an image from the crystal structure reported in 2000, of the 50S ribosome from a particular prokaryote shown here. So this is what's described as the crown view, and in this particular depiction, what we're seeing is that the ribosomal RNA of the 50S is in gray or white, and the ribosomal proteins that are bound are in gold. So taking a look at this, what do we see? We can ask ourselves some questions from this structure. So the first question I'll ask is about the RNA. What does this RNA look like? So do we see any obvious domains? If anyone has some experience looking at structures. I don't see any. What I see is a compact mass of RNA here. There's not obvious domains or regions that are somehow different here. To me, in this structure, it looks like one big glob of RNA. But then the question is, is that truly the case, or is there an organization we're just not seeing at this level? The next question we can ask is where are the proteins? So if we look at the proteins and how they're arranged on this compact mass of RNA, what do we see? Where are they? AUDIENCE: The edges? ELIZABETH NOLAN: On the edges, yeah. There's many on the edges, like L1 here, this one on the outside here, over here. So it looks like these proteins, at least in this view, are mostly on the outside. Is there anything unusual or potentially unusual we can see in addition about these proteins? Maybe looking at this one here or here. What's going on? AUDIENCE: I can't see very well, but I think that there's not just [INAUDIBLE]. ELIZABETH NOLAN: It looks like there's some unfolded parts? AUDIENCE: Yeah. ELIZABETH NOLAN: Right, so look here. So it looks like there's some unfolded regions to these proteins. And why is that? And where are these unfolded regions going? So what we can do is look at the RNA separately and look at the protein separately now and see what we learn from these analyses. So effectively, if we consider the 23S rRNA, despite that structure we saw before that looked like a compact mass of RNA, it's structured, and it consists of six domains. And these domains have quite complicated shapes, and they fit together. And here is just a schematic diagram of this structure. So if we take a look, we can see that there's domain 1, domain 2, 3, 4, 5, and 6. And on the left here, it's indicated where, in that crown view we just looked at, right here, these domains are located. So there is organization, even though in that structure, it looks like one compact mass of RNA. 
So let's think about these proteins a bit more. And in addition to the crown view and the observations we had from this particular face of the ribosome, where it looks like many proteins are on the outside, and there's some unfolded regions, what happens if we look elsewhere? So here, we have rotation, so 180 degrees from here, effectively looking, we can say, on the backside. And here, we can look at the view from the bottom of this subunit. So what do these images suggest? Do they support what we were thinking from this one view here, that proteins are mostly on the outside? Yeah, I see some shaking heads "yes." It looks like the surface of this 50S is covered, effectively, by a protein lattice here. So what might a role be for these proteins, an important role? AUDIENCE: Structural? ELIZABETH NOLAN: Yeah, so some structural role. So these proteins can help with stabilizing this 3D structure of the RNA. And they have other functions as well, and some of those will come up as we discuss this elongation cycle. But one function is certainly structural. If we just think about the distribution of the proteins along the surface of this 50S, it looks more or less uniform. There aren't patches where there's no protein or patches where there's a lot of protein. They're pretty much evenly distributed here. So as it turns out, most of the segments of the 23S do interact with protein, and if we look at these proteins more closely, we're going to follow up on the observation that it looks like they have some unfolded regions. So what we're looking at here are just a selection of the 50S proteins in the absence of the RNA. So these structures have been taken out of that total structure. In terms of nomenclature, L means large and S means small, in terms of thinking about ribosomal proteins. And so what's found in the 50S is that we can categorize 17 of the proteins as globular or folded and 13 of the proteins as cases where there are extensions that are non-globular or have no clear structure. And that's color coded in these examples, where we have folded regions in green and then unfolded regions in red. So why is this, and where are these red extensions going? So what's seen is that these non-globular extensions work their way into the interior of the ribosome, so we can think about them kind of like tentacles, for instance, going into the interior. So how might they interact with the RNA? So I'll give you a hint. In these regions in red, there are quite a number of arginine and lysine residues compared to other regions. So what properties of arginine or lysine would be important? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Positive charge. Right, we have positively charged amino acids. What about pKa values? So who votes for arginine having a higher pKa than lysine? The opposite? So that's a point for review. Lysine is around 10.5, arginine around 12.5, so arginine's higher here. So if we have a bunch of positively charged residues in these extensions, how are they going to interact with the rRNA? What are the molecular features there that are important? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Pardon? AUDIENCE: Phosphates? ELIZABETH NOLAN: Yeah, the phosphate backbones. So we have the negatively charged phosphates, positively charged amino acids-- effectively formation of salt bridges here. AUDIENCE: So I know there's no structure for a lot of these, well, non-globular regions. Does it mean that they're more disordered, or do they still have relatively similar B factor compared to the rest of the globular region? 
It's just that they don't fall under [INAUDIBLE]-- ELIZABETH NOLAN: I don't know what the B factors are for the different regions of these proteins, and for the case of discussion here, I would have it fall under a lack of secondary structure. And keep in mind, the ribosome is quite dynamic, and in isolation, whether all the proteins are there and in their native state or not is just something else to keep in mind. But these are certainly lacking a fold and going into the interior and forming salt bridges here. Here's just an example of the 50S with tRNAs bound. So we have the 50S. We see tRNA in the E-site, the P-site, and the A-site. And so what are these three sites? Effectively, their names indicate what they bind or what they do in terms of these letters. The A-site binds aminoacyl tRNAs with the exception of initiator tRNA, which cannot bind to the A-site. The P-site binds the initiator tRNA during the initiation process of translation, and then it also binds peptidyl tRNAs, so effectively the tRNA that has the growing peptide chain attached. And then the E-site binds the deacylated tRNA, and this is called the E-site because it's the exit site. And eventually, this tRNA that has lost its amino acid needs to get kicked out of the ribosome. So one more point-- just going back about these proteins to highlight. We stated that these proteins are mostly on the exterior, and there's just these extensions that go in. One thing I didn't explicitly say is that this peptidyl transferase center is devoid of protein. So in this catalytic center that's responsible for peptide bond formation, there's no protein. So based on all of the structural evidence, the nearest protein is 18 Angstroms away. That's quite far when thinking about making a peptide bond in a catalytic center. And also we'll learn that magnesium ions are important for ribosome assembly. I'll just point out that the closest magnesium ion is 8 Angstroms away. So if there's no protein in this catalytic center that's responsible for formation of peptide bonds in this growing polypeptide chain, what does that tell us right off the bat about the ribosome and catalysis? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Pardon? AUDIENCE: You have many functional components of [INAUDIBLE] ribozymes. ELIZABETH NOLAN: Yeah, so the ribo-- the ribosome is a ribozyme, yes. So there are many functional components, but in terms of peptide bond formation, it's the RNA that's catalyzing that reaction. So it's a ribozyme, or an RNA-based catalyst. And so this is something many of us may take for granted right now, but it was a big surprise to see this here. And to the best of my knowledge, the ribosome is the only natural ribozyme that has a polymerase activity. So many of these natural ribozymes are involved in RNA maturation here, so for those of you interested in evolution and hypotheses about RNA world, this observation that there's no protein in the catalytic center of the ribosome supports an RNA world hypothesis, the idea that RNA, which stores genetic information and can perform chemical catalysis, predates DNA and proteins. One thing I'll just, though, point out is that, prior to this structural study, roughly two years before, there was some experimental work done just looking at isolated 50S rRNA with no proteins. And it was found that isolated 50S rRNA could catalyze peptide bond formation, and that, specifically, domain 5 was important for that reaction here. So if you're curious about that, I can point you in the direction of a paper. 
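Going back to the arginine and lysine pKa values from a few minutes ago, a quick worked example shows why those protein extensions can be treated as fully positively charged when they contact the rRNA phosphate backbone. The pH of about 7.4 is an assumed physiological value, not something stated in the lecture:

$$ f_{\text{protonated}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}} $$

For lysine (pKa about 10.5) this gives roughly 1/(1 + 10^{-3.1}), or about 99.9 percent protonated; for arginine (pKa about 12.5) it is about 99.999 percent. So essentially every one of these side chains carries a positive charge available for a salt bridge to a phosphate.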
One last observation about the 50S subunit involves a peptide exit tunnel. And so somehow, the growing polypeptide chain needs to get out of this macromolecular machine, and in order for that to happen, there's an exit tunnel in the 50S subunit. So here, if we go back to that cryo-EM image, what's shown in this particular depiction is a polypeptide chain emerging from the 50S here. If we look at this view, a top or bottom view, what we see is that there's a hole here, and that hole is this exit tunnel. This is just another view of the same thing rotated, and a macrolide is a type of antibiotic that can bind in the region and is thought to block exit of the polypeptide. So there's some features about this exit tunnel that are interesting and that we need to consider. First of all, it's long, so approximately 100 Angstroms. And the diameter is relatively small, so the diameter is on the order of 15 Angstroms. So what we need to think about, from the perspective of this diameter, is what can fit, and so this week in recitation, you're looking at using PyMOL and ubiquitin as an example. If you just ask yourself, would ubiquitin, folded ubiquitin, fit in this exit tunnel based on its size? And so where does protein folding occur? We think about this as primarily and predominantly happening after the polypeptide comes out of the ribosome because there just isn't room in this exit tunnel for some folded structure to exist here. Also, the exit tunnel not shown in these images is lined with hydrophobic residues, just as another feature. So it's narrow, and it cannot accommodate folded proteins. So briefly on the 30S, similar to the 50S as said before, this 30S is comprised of RNA and proteins. It has the sites of mRNA binding and decoding. Here's just a structural overview of the 30S with different regions named, and similar to what we saw for the 23S rRNA of the 50S subunit, the 16S rRNA also has structure. And I just show you the domain organization here, so we see that there are four domains, and they're color coded in green, yellow, blue, and red here. And so another point just to make in passing about 16S-- 16S rRNA is highly conserved amongst species, so sequencing the 16S is commonly done in studies of, say, the microbiome to figure out something about the distribution of different types of prokaryotic organisms there for that. So why spend so much time on the individual subunits? What we find is that the structures are very similar when the ribosome is assembled. So we can think of the 30S and the 50S as coming together to give the 70S, and these subunits basically look the same as they do in isolation. And that's depicted here, in just another example. So if we're looking at this structure based on the cartoon and our discussions, you should be able to identify the different components. So here, what do we have? AUDIENCE: 50S. ELIZABETH NOLAN: Yeah, and here? AUDIENCE: 30S. ELIZABETH NOLAN: 30S, right. What's this? AUDIENCE: [INAUDIBLE] AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Yeah, we have a tRNA bound here. Here, a protein. So bring yourself back to this cartoon and its simplicity as we work through problems next week. So another point to make, just to think about, is how is it that these subunits actually come together, and what mediates that interaction there? And so these subunits basically come into contact at about 12 positions, and magnesium ions are really important for mediating the interaction between the 30S and the 50S. 
So there's bound magnesium ions that mediate interactions between these subunits here. And so in week four recitation, we're going to think about how to purify ribosomes. And if you're interested in purifying ribosomes, how do you get an assembled 70S prokaryotic ribosome? And based on the need for magnesium ions here, we'll see how that's an important variable in these procedures. So we'll close just with some overview points about the translation process as a whole. So during translation, mRNA is read from the five prime to the three prime end. Polypeptides are synthesized from the N terminus to the C terminus, so there's directionality. As I said earlier, translation factors are required at each stage-- initiation, elongation, and termination. Something that I haven't highlighted yet is the importance of GTP. So in that initial overview of the cycle, we saw that there were some instances of GTP hydrolysis by certain translation factors, and in this translation process, GTP hydrolysis provides a means to convert chemical energy into mechanical energy. And so we're going to think a lot about how GTP hydrolysis plays a role next week. And although we're going to look at many structures, keep in mind that conformational changes are essential for catalysis by the ribosome, and that this is a very dynamic system here. So just some additional facts-- so ribosomes will synthesize six to 20 peptide bonds per second. The error rate is less than 1 in 1,000, which brings up fidelity again. How does the ribosome maintain this? And the rate accelerations are on the order of 10 to the 7-fold, so less than many enzymes, but quite good. And in all living organisms, these ribosomes carry out protein synthesis, so all ribosomes contain two subunits that reversibly associate during the translation cycle. Protein synthesis occurs through the binding of the aminoacyl tRNAs to the 70S ribosome in an order dictated by the mRNA. And next week, we're going to dissect how this actually occurs, and we think this will be quite new for all of you, even if you've learned about the ribosome in other courses. These tRNAs move sequentially through these three ribosome binding sites, as we saw before here. So we can return to our overview cycle here, that we saw before. And so we'll briefly address how initiation occurs. So how is this 70S ribosome assembled? We'll have a detailed case study of EF-Tu and then look through this elongation cycle in more detail. In terms of the players and the outcomes-- so this is a reference slide for all of you, where the stages are listed, that all of the players are listed, so some more detail than what's up here, and then the outcome. So what you can see from this overview, and go back and study it outside of lecture, is that, in each case, we see GTP, which means GTP hydrolysis occurs at each step. In initiation, our outcome is assembly of the 70S with mRNA bound and with an initiator tRNA in the P-site. The outcome of elongation is synthesis of this nascent, or new, polypeptide chain, and termination is the hydrolytic release of the peptide, release of the tRNAs and mRNAs and dissociation of the 70S and, ultimately, recycling. So there's many factors that need to be taken into account and dealt with it at every stage here. This is just another reference table. It has some additional players, like EF-Ts, and this is a nucleotide exchange factor for EF-Tu. So EF-Tu is a GTP-ase that we'll hear more about in lecture next week, and in recitation next week. 
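To put the fidelity numbers just quoted in perspective, here is a short worked example, assuming (as an idealization) that errors at different codons are independent:

$$ P(\text{error-free protein}) = (1 - \varepsilon)^{n} $$

With an error rate epsilon of 1 in 1,000 and a 300-residue protein, that is (0.999)^{300}, roughly 0.74, so about three out of four chains come out with no missense errors; at 1 in 10,000 it rises to about 0.97. That is the scale of fidelity the proofreading mechanisms discussed next week have to deliver while forming 6 to 20 peptide bonds per second.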
Briefly, some topics for review-- if you need to review the genetic code, please do. We're not going to spend much time on it here. But in brief, I think we all know this genetic code is based on codons. They're read sequentially from a fixed starting point, and the code, which is a triplet code, is degenerate and non-overlapping. So why do we have a triplet code? We have four bases. AUDIENCE: We need enough combinations to [INAUDIBLE]. ELIZABETH NOLAN: Exactly, there needs to be enough combinations for all the amino acids. So we have 20 proteinogenic amino acids, and what else do we have? We have selenocysteine. We have pyrrolysine. So a triplet code with four bases covers everything we need here. We have start codons and stop codons we have to keep in mind, listed here. And as a reminder, in translation, the amino acids are delivered by the aminoacyl tRNAs. So the mRNA does not recognize these amino acids directly. We need the tRNAs that allow this reading to occur. Throughout this course, we're going to refer-- well, throughout this section with the ribosome, we'll be referring to nucleotides, et cetera, by the letter abbreviations. There are structures, chemical structures, associated with these abbreviations, and it's important to know those and be thinking about those as you work problems. So just as review, we have the DNA bases, C, G, A, and T. In RNA, we have uracil instead of thymine. The purines, A and G, have two rings, and the pyrimidines, one ring. For nomenclature, nucleoside versus nucleotide-- so the nucleoside is a base plus a sugar, so there's this glycosidic bond here between the base and the carbon here of the ribose. And then the nucleotide is this nucleoside with one or more phosphate groups attached at the five prime carbon. So we go one prime, two prime, three prime, four prime, five prime for the numbering of the ribose. And keep in mind, from 5.07 or 7.05-- I think this should be known, but these phosphates, we have alpha, beta, and gamma phosphates. And depending on whether your ATP or some other nucleotide is being hydrolyzed to, say, an AMP or an ADP, you're going to have attack at different positions. So if you need to review, visit your basic biochemistry textbook for these details. Also just to keep in mind the Watson-Crick base pairing: G and C have three hydrogen bonds here; A and T, two hydrogen bonds. And after spring break, Professor Stubbe will be presenting a module on nucleotide metabolism, where we'll be thinking about these things in some more detail. So where we'll begin on Monday is briefly looking at an overview of initiation, and then we're going to begin to ask how did these amino acids get attached to tRNAs, and how did those aminoacyl tRNAs get to the A-site of the ribosome. So we'll see you then.
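As a quick numerical check of the triplet-code argument reviewed above, here is a minimal Python sketch; the count of 22 amino acids (20 proteinogenic plus selenocysteine and pyrrolysine) follows the lecture, and everything else is just enumeration:

```python
from itertools import product

BASES = "UCAG"

# How many distinct codons do you get for each word length?
for n in (1, 2, 3):
    codons = ["".join(c) for c in product(BASES, repeat=n)]
    print(f"length {n}: {len(codons)} possible codons")
# length 1: 4, length 2: 16, length 3: 64

# 20 proteinogenic amino acids plus selenocysteine and pyrrolysine,
# as mentioned in the lecture; start and stop signals also need codons.
amino_acids_to_encode = 20 + 2
print("amino acids to encode:", amino_acids_to_encode)
# A doublet code (16) falls short; a triplet code (64) is more than enough,
# which is why the genetic code ends up degenerate.
```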
MIT_508J_Biological_Chemistry_II_Spring_2016
24_Cholesterol_Homeostasis_4.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: So anyhow, this is where we ended the lecture last time. We're finishing up. We were talking about two kinds of regulation for cholesterol sensing. One uses the transcription factor sterol responsive element binding protein: when cholesterol levels-- or sterol levels-- are low, we can use that to upregulate the amount of sterol by turning on the genes for the biosynthetic pathway. And so that's what we were talking about at the end, and this week in recitation as well. And also you can turn on the gene for the LDL receptor, which then allows you to take more cholesterol in from the diet. And so what I wanted to do is-- in this week's problem set 7, you were focused on looking at the SCAP protein and how do you know what the SCAP protein is doing. And what you were seeing in the data you were given, which was taken from one paper. There are many papers trying to study these proteins to understand what's going on with these complex membrane proteins. Where does the sterol bind? How does the sterol bind? What is it that causes this chemistry to happen where this complex migrates from the endoplasmic reticulum to the Golgi, allows cleavage chemistry to happen, and ultimately releases a little piece of the protein, which binds to the sterol responsive element to actually turn on a bunch of genes that we just talked about? So I just wanted to say one or two words about the players. And you've all thought about the players by now. We're going to come back and look at some of them in a few minutes, but the key player in the scheme I just showed you was SCAP and that was the focus of what you guys had to do in your problem set. And what you notice again is a sterol sensing domain. And there is also-- I point out we'll come back to that at the end of today's lecture-- there's a sterol sensing domain in HMG-CoA reductase, which we will see is involved in post-transcriptional regulation. So we're still looking at transcriptional regulation now. And the question is-- and then you have a bunch of other transmembrane helices, single transmembrane helices, probably helical within the membrane. And the question is, how does this guy work to allow the model we showed on the previous slide? So that defines all those terms. Hopefully, you're now familiar with all those terms. It's written down in a place we can go read about it again. But this is the cartoon that you were dealing with in the problem set. And so the key question is-- most of these things are defined. Whether you have a transmembrane helix is defined by some kind of sequence gazing, and then you have to do experiments to test whether the model is correct. And we don't have any pictures of the SCAP protein at all at this stage. So the kinds of experiments that you were looking at in your problem set are the kinds of experiments that people are still doing to try to figure out how all this information is coordinated to allow the chemistry to happen, or that migration from the ER to the Golgi. And we talked about last time-- we talked about a zip code. So if we look at SCAP, so we have eight transmembrane helices. And the key to the way this works is that there's a little zip code. 
And you've seen a zip code before transiently when we were looking at a zip code on the LDL receptor, which targeted AP2 to then bring in the clathrin coats to make the clathrin-coated vesicles. I think what you'll see over the course of the rest of the semester, there are lots of times three or four amino acid sequences that are the key that allow some kind of confirmational change to occur, which can trigger off a sequence of events that people have found by doing a lot of studies on the system. So the zip code here, and that's what you were focused on in your problem set, again. And I don't expect you to remember any of this, except to sort of know that these little zip codes play a role quite frequently in biological transformations of these complex systems. And so here is the little zip code. And we've been talking about so far what happens under low sterol levels, where we want to make cholesterol or get more in from the diet. And under these conditions, if we want to make more cholesterol, we have to turn on the biosynthetic machinery, HMG-CoA reductase, or turn on LDL receptor that allows you to take things into the cell. And so this is proposed to be a key player in loop 6. And this loop 6, which is pretty big-- and you might ask yourself the question, where did they get this loop? So that's something that you have to design experiments to figure out what is cytoplasmic, what faces the lumen, how big are these loops? All of that plays a role in thinking about how this works from a molecular perspective. So loop 6 plays a key role, as does, you can see from your problem set, loop 1. So what does loop 6 do in the model? The cartoon is shown over here. Under these conditions where you have low sterol, is below whatever the membrane concentration is that you looked at for the recitation this week, we're down 3% or less or something like that, what happens is this little zip code is exposed. And it in some way recruits proteins that are involved in another complex process that we aren't going to talk about where you can bud off little vesicles where proteins of interest and also lipids can be moved from one membrane to another, the ER in this case to the Golgi. And so this interaction is a GTPase here. There are a couple of additional proteins that have been identified. We're not going to talk about the details. But this is the key to allow movement into the Golgi, which then you have the defined proteases that we've talked about before which allow cutting and allow the little piece with the helix loop helix at the N-terminus to become soluble and then move to the cytosol. So we want to ask the question now and spend a little bit of time, what happens with high sterol concentrations? So everything we've looked at has been-- this is at low sterol concentrations. And under these conditions, this zip code targets SCAP and SRE-BP to the Golgi. So now we want to go to the second set of conditions. And again, in this week's recitation, we're focused on high sterol, low sterol concentrations. High sterol, what is it that allows this to prevent movement into the Golgi so you can get this processing so you can turn on HMG reductase and LDL receptor biosynthesis. And so the proposal has been-- so in some way this is connected to sterol, so we're at 5% or 6% sterol in the ER membrane. That's what we're going to discuss today, for those of you who haven't had recitation. How do you know what the turn on versus the turn off is for sterol levels? Where does it bind? How does it bind? 
All of that becomes pretty interesting. And in high sterol we now introduce yet another player. So in addition to SCAP we now have to pay attention to INSIG. So that's the other player. We'll see that INSIG is this protein that's the linchpin for all the regulatory mechanisms. So if we go back here, what do we see about INSIG? It's small. It's much smaller than SCAP. SCAP is this huge protein, 1,200 amino acids. Here's 200 amino acids, all transmembrane. Recently, actually, there was a structure not of the human system, but of a bacterial system. It's not found in the vast majority of bacteria, but they found one and so they've gotten some-- they have proposed some model for how INSIG could be in the membrane. Now, one of the things that I find confusing to think about the molecular basis for what's going on-- which we don't know anything. You're looking at a cartoon level-- is we have so many transmembrane helices, but is this thing a monomer? Is it a dimer? Is it a tetramer? Is it a hexamer? And how do you look at that? Because when you solubilize it, you have to put it in detergent, et cetera. And is what you see in the test tube, how do you relate it back to what's going on inside the cell? And I think we really don't know. Most of these things-- both SCAP, which is thought to be a tetramer, and now in this new paper they're claiming it's a trimer of dimers. Just add to the complexity of trying to figure out how all these things interact. So that's the issue with doing these kinds of experiments. We don't have very good experiments. We need people to invent new ways of trying to ferret out how these things interact within the membrane. So here INSIG is going to be a key player and SCAP is also a key player. And so somehow, in the presence of sterol, so we're at high sterol, in the presence of INSIG, we need to get rid of the zip code. That's the bottom line. So the model is in the presence of sterol, we remove-- we don't really remove it, but we hide the zip code. And when you hide the zip code, so this is shown-- I'm not going to draw this out on the board because I think they draw out better than I can do it, and we really don't know what that's going on anyhow. But again, this comes from a region where it's accessible to another region where these proteins can no longer bind to do the transport of these proteins into the Golgi. So that means you never get processing of SRE-BP to become active. Does everybody understand that? So one of the questions is, if I asked you to design an experiment, hopefully you're now starting to be able to think about designing experiments. Is there any kind of an experiment you might think you could do to-- a simple experiment that you might be able to do or try to do that would allow you to show that you you'd undergone a conformational change in a loop 6. Anybody think of something? So loop 6 this is big, huge-- it's proposed to be a big, huge piece of polypeptide and it's proposed to undergo-- this is a cartoon, but a tremendous conformational change. And so what you need is some kind of a simple probe that might tell you that it's undergone a conformational change. And what might some kind of a probe like that be? How could you design something like that? Yeah? AUDIENCE: A FRET experiment? JOANNE STUBBE: OK, so that would be one thing you could do. You couldn't just do a FRET experiment to do it. What would you have to do? AUDIENCE: Incorporate fluorophores. JOANNE STUBBE: Yeah, so you'd have to incorporate fluorophores. 
So the issue with FRET is not only now do you have one problem, so you can put it one place, but now you've got to figure out where to put the second FRET. So that would be a lot of experiments, but now you can do mutagenesis, so you could probably do an experiment like that. Is there anything else you can think about? Yeah? AUDIENCE: So I don't know how much is known about the conformational rigidity of loop 6 and also I'm not sure-- I've ever seen this in a membrane protein, but you can maybe look at proton exchange, like [INAUDIBLE] backbone exchange. JOANNE STUBBE: So that's a sophisticated experiment and I would say there probably wouldn't be my first choice. But the idea that it's accessible, is there any kind of an enzyme that you might be able to use that could-- again, you'd have to be lucky, but you could look at the sequence and think about this. Is there any kind of an enzyme you might be able to use that could sense a change in the conformation? And if the model is right, it's on the cytoplasmic face. And so the answer is people use trypsin. So if you go back and look at the sequence, there's an arginine. And actually, I wouldn't have expected you to see this, although I think it is mentioned in one of the papers where it becomes much more accessible in one state than the other, so you get proteolytic clipping. But those are the kinds of things-- what other kind of experiment could you do? So you think you're undergoing a conformational change, what kinds of probes did Liz talk about that might allow you to see some kind of change in conformation? So we have fluorescence probes, which we haven't really talked about yet. Can you think of what other kinds of probes? She spent a whole recitation on it. AUDIENCE: [INAUDIBLE]. JOANNE STUBBE: The what? AUDIENCE: As far as binding [INAUDIBLE],, you would do it in cross-linking. JOANNE STUBBE: Yeah, so some kind of cross-linking. You might get information out of that. To do that again, you've got to put in the cysteine. So here you might have issues because you have all these cysteines down there. So the question is, could you do that? Cysteine is the most easy-- that's the easiest side chain to modify. On the other hand, these cysteine's really playing a functional role, there's no way you could modify it. But those are the kinds of things. Hopefully you're starting to get a battery of tools that you're learning about with different systems that you could further probe this, but you can just see that this is, I think, a really challenging problem. So the idea is it does undergo a conformational change and you can no longer see the zip code. It disappears. And it resides, and so the model, then, becomes here that the sterol binds to the sterol sensitive binding domain and recruits INSIG. Is that true? Could INSIG bind first and that the two of them together? And in the paper that you were looking at in recitation this week, the concentrations of INSIG were elevated and they got a result that you may or may not have been able to predict what the outcome would be. So again, the order of binding and interaction, I think, really still remains something that needs to be studied. So we just form a complex in this case. So we form a complex. So in the membrane-- we have our ER membrane. We have INSIG. I'm not going to draw the-- we have INSIG. It overlaps somehow with SCAP. And then SCAP somehow binds the sterol, so you need to have a sterol here. And this is a sterol-sensitive domain. And then up here you have a loop and the zip code hides. 
So that's the model. And so that's as sophisticated as we can get at this stage, which is, for a chemist, not particularly sophisticated. But that's still the model. The model has been in the literature for a long time and we don't know that much about it. Here is just another cartoon from another paper where, again, the little gold thing here is the INSIG. This is the SCAP. Here is the sterol responsive binding element. And here they don't show the hiding of the little zip code. So this is the model that people have put forward for how low cholesterol levels allow you to turn on the genes required to make more cholesterol or to take it from the diet. So that's a transcription model. So in the last lecture now what I want to focus on is post-transcriptional regulation. So this is lecture 5. We're talking about post-transcriptional. And what you're going to see, we're going to focus on again is our common player in both of these regulatory mechanisms. INSIG played a key role in keeping SRE-BP in the ER membrane. We're going to say INSIG plays a key role also in destroying-- so at high cholesterol concentrations, you don't want to make any more HMG-CoA reductase. And if you have a lot of it in the membrane, you want to get rid of it. So INSIG and HMG-R are going to interact with each other based on-- so this is a high sterol. And what's going to happen is, in a sophisticated way, HMG-R, which makes cholesterol, is targeted for degradation. So this is the model. We're going to come back to this model over and over again. Sorry. My handwriting is getting worse and worse. So we have INSIG. We have HMG-R. We want to get rid of it because we don't want to make anymore cholesterol. That's the bottom line. So INSIG is a player in both of these mechanisms. So what I want to do now in lecture 5-- that's what this is here. What I want to do in lecture 5 is really talk about how you do degradation in eukaryotic cells in general. And then what I'm going to do is come back and ask the question, how is HMG-R, HMG-CoA reductase targeted for degradation. So that's the overview of where we're going. It's pretty simple and we'll see that mechanism of the proteosome, it's much more complicated, but there are many similarities between this chamber of doom and the one you saw with ClpX and ClpP where you've spent a lot of time discussing what we know about it from a more chemical and biochemical perspective. And what I'm going to show you at the very end is not only does HMG-CoA reductase get targeted, but now in the last couple of years, they found that all of the proteins involved in cholesterol homeostasis get targeted by different mechanisms to get degraded. So protein degradation inside the cell is extremely complicated. So what I'm going to do is give you an outline of a generic picture of how it gets degraded. And the caveat is this is a very active area of research and I think you'll see why it's so complicated. And then the next problem set, problem set 8, there's been a model in the literature that this is the equipment that targets this protein for degradation. And I'm going to give you a bunch of data that says that may not be correct. So this is what you're dealing with. Every time you pick up a journal, there's another model and perhaps there are five or six different ways-- not that many-- three or four different ways that you can mediate the degradation. And we're in the process of trying to unravel this. So this is where we're going. 
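Since the transcriptional and post-transcriptional arms just summarized can be hard to keep straight, here is a minimal sketch that simply restates the lecture's cartoon as pseudocode. The 5 percent ER-membrane cholesterol threshold is only an illustrative number taken from the lecture's rough 3 percent versus 5 to 6 percent figures, and the function name and outputs are hypothetical, not an established model:

```python
def sterol_response(er_cholesterol_pct: float, threshold: float = 5.0) -> dict:
    """Cartoon-level summary of the SCAP/INSIG/SREBP switch described above.

    er_cholesterol_pct: cholesterol as a percent of ER membrane lipid; the
    ~5% switch point is an illustrative value based on the lecture.
    """
    if er_cholesterol_pct < threshold:
        # Low sterol: SCAP's cytoplasmic zip code (loop 6) is exposed, the
        # SCAP-SREBP complex moves from the ER to the Golgi, SREBP is cleaved,
        # and the soluble N-terminal piece turns on SRE-controlled genes.
        return {
            "zip_code_exposed": True,
            "SCAP_SREBP_to_Golgi": True,
            "SREBP_cleaved": True,
            "transcription_on": ["HMG-CoA reductase", "LDL receptor"],
            "HMGR_degraded": False,
        }
    # High sterol: sterol binding plus INSIG hides the zip code, the complex
    # stays in the ER, and INSIG also helps target HMG-CoA reductase for
    # degradation (the post-transcriptional arm introduced above).
    return {
        "zip_code_exposed": False,
        "SCAP_SREBP_to_Golgi": False,
        "SREBP_cleaved": False,
        "transcription_on": [],
        "HMGR_degraded": True,
    }

print(sterol_response(2.5))
print(sterol_response(6.0))
```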
And so what I want to do is start out by looking at-- and then if we have time at the end, I'll come back to both recitations. But I probably won't have time at the end because I want to move on to the next modules. But you'll see that the recitations really are pretty much linked to what we're talking about in class. So it's unfortunate that they weren't timed a little better, but that's the way life is when you're trying to balance all of these acts. So this is the overview. I'm not going to draw this on the board, but I'm going to walk through it step by step. So this is a cartoon overview. You can see it's pretty old. We've learned some stuff, but there's a lot of stuff that remains unknown. So let's just work through the cartoon and then we'll walk through who the players are, what the model is and then in the end, we're going to return to HMG-CoA reductase. So we have a protein and we need to target it for degradation. How does anything know that this is targeted for degradation? The protein's the same in the beginning and in the end, so how do we know why this protein has a different kind of half-life than some other protein? We haven't discussed that, but we asked the same question in bacterial systems and I'm going to spend not very much time on it. But the N-terminus of the protein can be modified in many ways. This is called the N-end rule, totally mind-boggling. I might give you a few examples of this on a problem set. But you can add on amino acids or take off amino acids. It changes the lifetime of the protein from minutes to hours. So this is like-- when this first came out, I said, there's no way that can be true. So we're talking about a few amino acids, just like we're talking about these zip codes over here. It's true. And the way the rules work or have evolved, they're different in all organisms. They're in all organisms, but they're all distinct. So another way that I think is key, and we're still trying to figure this out, is that many proteins are post-translationally modified by phosphorylation or hydroxylation or whatever. I think that's also a key thing that's going to target them for degradation. So we have a protein. Somehow it's going to get targeted for degradation. What does that? So it turns out we're going to be introduced to a molecule you saw in your first recitation, ubiquitin, small little protein, like a rock, 76 amino acids. What is it doing? It's like the ssrA tag that you saw before, except more complicated. And then we're going to be introduced to three proteins-- E1, E2, and E3, an activating enzyme, a conjugating enzyme, and a ligase. And I'll sort of define for you what the functions of these proteins are. And you'll see that they require energy. Maybe not surprising, the Nobel Prize was given for the work on discovery of how this little system works a number of years ago, where a major player in that was a mechanistic enzymologist named Ernie Rose, who nobody ever heard of. I remember when the Nobel Prize came out, the chemists were saying, who the hell is this guy? Well, so that's because, again, they don't care about how enzymes work. But what's amazing is this guy is one of the most brilliant people. He's dead now, but he's one of the most brilliant people I've ever met. And he was [INAUDIBLE]-- he did thousands of things that were really creative and important, but this is the one, because he was hooked in with the guys that were doing the biology, that allowed him to elucidate what was going on. 
So we're going to take this little ubiquitin and somehow this equipment is going to attach the ubiquitin onto the protein that's targeted for degradation. And we'll see that you have to have multiple ubiquitins attached to get degraded. That being said, we now know that almost all proteins can be ubiquitinated. We know ubiquitin has something like 20 homologues, look-alikes, and they all do different biology. So this is another example of post-translational modification. We're only going to focus on targeting for degradation. That's what I'm going to show you, but the ubiquitinome is quite complicated. So once it gets the ubiquitins attached, what do you see here? You see the proteasome. This is the chamber of doom. We'll come back and look at that, just like ClpX and ClpP. So you have ClpP here, the chamber of doom. And then you have little pieces on the top and the bottom of that, which would be sort of like ClpX, a hexameric ATPase; we'll see it's much more complicated in human cells. And so what do you have to do? You have to unfold the protein. You have to thread it into the chamber of doom. You have to break it down into pieces and you spit out the pieces. This process requires ATP like you studied in the bacterial proteasome. And then there are actually many different proteasomes in human cells and I'm just going to talk about the generic proteasome. So that's the cartoon overview. So I want to say a few things about-- so let's start by looking at the proteasome. And again, this is the human proteasome. And if you look at these big machines, you've already learned one way you characterize them is by their sedimentation in some kind of a centrifugal field. And so these things migrate. Like, a 26S particle only has a sedimentation value of 26. So it's huge, and it's 2.5 megadaltons. So this is a huge machine just like the ones you've been studying in the first part of the course. So it turns out this can be divided into two parts as you've already seen and you can see over there. You have the 20S, which is the core proteasome. And then you have a 19S lid. Actually, you can have multiple lids and in these lids there can be 20 proteins, 15 to 20 proteins. So the lid contains 15 to 20 proteins. We'll come back and look at this a little bit. And so this is going to be in-- and among these things are the AAA+ ATPases, which are actually quite distinct from what you have seen in the bacterial proteasome. So here again is going to be the 20S core. Here are the proteins, so this 20S core. Here are the proteins involved in the lid. Some are tightly bound, some are not tightly bound. Remember, we had a hexameric ATPase, so RP-- I can't remember the acronyms-- RPT, and there are six different ATPases, not one, six. But they form a hexameric structure. And then you have a lot of additional proteins that we're going to come back and look at, but one you might expect would be something that could recognize ubiquitin, just like you had something that recognized the ssrA tag. It turns out ubiquitin is recycled inside the cell. So the equipment that allows you to cut off the ubiquitin so it can be used again, deubiquitinating enzymes, is also located in the lid. And you can also imagine there could be many kinds of adapter proteins because we're going to be able to degrade many, many, many proteins under different sets of conditions. So this changes in composition, as opposed to the chamber of doom, the 20S proteasome. 
So let's look again at the core particle, so the core protease. Let's abbreviate it CP. And what do we know about this? What we know is the following-- it forms four heptameric rings. And the rings-- so each one of these is a 7-mer. And it turns out we have two kinds. They're actually pretty similar to each other, just like the proteasome from bacteria. But we have alpha, we have beta, we have beta, and we have alpha. And we call them-- we put i's next to them because, again, they're not the same. So they're all different. So they're all structurally similar, but they're all different. So i can be 1 through 7. So what do we know from studies that people have done? The key thing is alpha. So these alphas at the top and the bottom are inactive in terms of the chemistry of peptide bond hydrolysis. So all of the chemistry is done by the beta heptamers-- these are in the center. These are active. So the activity is here, and it's flanked by two heptameric rings that are inactive. And so what do we know about beta? So even though we have beta i, where this is 1 through 7, it turns out that four out of the seven betas are inactive. So again, you saw the complexity with Saunders' talk on single molecule stuff on ClpP, right? So four of the seven betas are inactive. So that might not be so different. But I think every proteasome, even though the architecture is sort of similar, has evolved slightly different strategies to deal with the same problem. But what's interesting here is-- it doesn't matter which one is which, but the three betas that are active all have different specificities of peptide bond hydrolysis. So beta 1 has D,E specificity. Hopefully you all know what that means. That means simply, for example, if you had an aspartate, and this is where the peptide bond cleavage occurs, it recognizes aspartate in the P1 binding site. So this one recognizes aspartate or a glutamate, and beta 2-- or I might have the numbers mixed up-- recognizes lysine and arginine. What does that look like? We've seen this now a hundred times. That should remind you of trypsin. So we have yet another lysine-arginine-dependent protease, and these are all over the place in the body. So it's not just this one little site. It is, maybe, in the proteasome. But if you look at blood coagulation, there are something like 15 lysine-dependent proteases, and they've got to all be controlled, otherwise we would clot all the time. Yeah? AUDIENCE: When you say four of the seven [INAUDIBLE] are inactive. JOANNE STUBBE: Yeah. AUDIENCE: Do you mean that-- JOANNE STUBBE: They can't catalyze any peptide bond hydrolysis. AUDIENCE: But is it, like, in some of the other proteases we saw where it changes? Or is it, for a given or a specific molecule, a specific protease, always the same four units that are inactive? JOANNE STUBBE: It's always the same four that are inactive, but where they're located-- how you call 1, 2, 3, 4, and how they assemble-- is an interesting question that people are actually studying in thermophilic bacteria. But you can imagine if they had to be-- I don't think they have to be predisposed. That's why I'm saying the numbers don't make that much difference. AUDIENCE: [INAUDIBLE]. JOANNE STUBBE: So it doesn't have to be beta 1, beta 2, beta 3, beta 4, beta 5, and you always see the same. I don't think that's true, but I don't really know. So you have a different specificity there. And the third one, which they call-- I don't remember what they call it.
They call it beta 4 in the paper, so maybe they know more about this than I do, but you have hydrophobics and you have aromatics. Where have you seen that kind of a protease before? Yeah, kind of with chymotrypsin. So you're seeing this-- these are the common proteases you find all over the place. I mean, we use them as tools all the time as biochemists, these three. There are many variations on this theme. So anyhow, what you have then is basically heptameric units where the activity is here. These are inactive. And somehow you have to get the protein that's going to be degraded, just like you did with the ClpP protein, into the chamber of doom. So what do we know about the mechanism of cleavage? So I'm not going to go through the mechanism in detail, but I'm going to say a few things. The mechanism of peptide bond cleavage is distinct. What did you see in ClpP? You had a serine-type protease that involved covalent catalysis. Here, what you have in the human system is a threonine. So that's sort of unusual. There have been a number of threonine proteases discovered since this one was found a while back. So this is what you have in the proteasome. So it turns out that this threonine is at the N-terminus. So that becomes important in terms of its chemistry. So this is the N-terminus. And the two things that you could picture that might be involved in catalysis, based on what you've learned about the bacterial protease: you have a serine there, you have a threonine here. They both have OHs that could be involved in covalent catalysis. So this OH is thought to be involved in covalent catalysis. And remember, what do you have in the case of the ClpP protein? What else is required besides a serine? You need some kind of general acid, some kind of general base catalyst in all these proteases. In the case of serine proteases, it's usually histidine. It's not a histidine in this case. It is the N-terminal amino group of the protein that's proposed to play that role. Now, if you look at the pKas of amino groups at the N-terminus versus lysine, they're always lower. So this amino group, the N-terminal amino group, is thought to be the general base catalyst and the general acid catalyst in the mechanism. So that's the proposal. And I must say, I don't think we know a whole heck of a lot. I haven't read the literature [INAUDIBLE]. There are not that many people working on the mechanism at this stage. So this is the proposed mechanism, so just put it in quotes, "proposed." And it's completely sort of analogous to what you went through in the first part of the semester. So the amino group is deprotonated. It's got to be deprotonated to function as a general base catalyst, proposed to deprotonate the hydroxyl group of the threonine, which is then activated for nucleophilic attack somehow. Do you form an oxyanion hole? Whenever you see brackets, that means we don't see it and it's a proposed intermediate. From chemical studies, we know tetrahedral intermediates exist. In proteases, no one has ever seen a tetrahedral intermediate. So all these things you see in all these mechanisms are a figment of people's imaginations based on really sort of a thorough understanding of the chemistry of what's going on. So whenever you see brackets, that means there's no direct evidence, or it's a reasonable mechanism based on the chemistry.
And then what we need to do is break down this tetrahedral intermediate. And we want to cleave the amide bond, which is the goal of this proteasome. And to do that, we can now use this amino group, which we initially used as a general base catalyst, as the general acid catalyst. So that's where the general acid catalysis comes in, and you see this over and over again in biology. You have one group, and it can function as an acid and a base catalyst. And it gives you-- what does it give you? It gives you an acyl-enzyme. So you've seen that before. And now you just do the reverse of this reaction, where this functions as a general base catalyst to activate water, forming a tetrahedral intermediate, which then collapses to release the product and regenerate your starting enzyme. So that's a working hypothesis, and I don't really want to spend any more time on that. So that's the chemistry of the core particle. And really sort of what we want to do now is focus on the real chemistry that's going to go on, that's going to allow us to mediate degradation of proteins of interest as a regulatory mechanism. So the second thing that I want to talk about, the second player, is ubiquitin, which you have all seen. And this is the key tag that targets proteins for degradation-- or at least it's a major tag. But I think the tagging is much more complicated, as I tried to indicate in the first slide. We just went over it. You need something else to target your protein for degradation, because you need to target the ubiquitination in the first place. So something has got to be special about protein A and protein D that controls lifetime inside the cell. And there are a lot of people working on that. So the tag targets-- let's call it a protein of interest. So this would be HMG-CoA reductase, for degradation. So this is a key player, and we're going to spend a little bit of time talking about that. But furthermore, one ubiquitin is not enough. You need to have polyubiquitins. So what we're going to see is-- and I think this is still the rule, it keeps changing-- but you need polyubiquitins where n is greater than or equal to 4 for this targeting to work, so one's not enough. So now we're faced with the problem, how do we stick this ubiquitin onto the protein that's going to get degraded? So that's what we're focusing on. So let's look a little bit at the structure of ubiquitin. So here's the structure of ubiquitin, and ubiquitin is 76 amino acids. It's compact-- you've already looked at the structure of this guy-- a compact little protein that has a C-terminal glycine. Let me write this out. So this is a C-terminal glycine, which is a key player. So in some of the pictures that you're going to see that I draw on the board, this is going to be a key player, so I might write G76 or something like that-- glycine 76 is in all of these things. And it's going to be making the linkages we're going to be looking at. So you just need to remember that. And if you look at where it is, if you go back and you look at the structure, it's on a flexible loop at the end. So the other thing that you need to know about ubiquitin when we look at the structure is that it has seven lysines. And the lysines, because this is so small, are located on the surface. For targeting proteins for degradation with ubiquitin, we're really going to be focusing on lysine 48. But all of the lysines can get modified. That's just the complexity of it. So we're going to focus on lysine 48. So the key thing is what happens.
What I'm going to show you is what the structure is. So we have a protein of interest that's targeted for degradation, and we're going to talk about how it's targeted. It has a lysine on its surface, which is somehow then going to be attached to glycine 76 on the C-terminus of ubiquitin. So this is going to be part of ubiquitin. So let me just write it-- I'm not going to write this out anymore, but this is a C-terminus. So what's unusual about this bond? I mean, do you normally see that bond in proteins? No. Normally, you see it with alpha-amino groups; now you're seeing it with the epsilon-amino group of lysine. So this is an isopeptide bond, and everything in ubiquitin chemistry is through isopeptide linkages. So this, again, is a lysine here. And we will see that all the proteins targeted have lysines on the surface that can do covalent chemistry. And we'll see how with ubiquitin. So this is an isopeptide. Again, this is the epsilon-amino group, not the alpha-amino group. And this is the C-terminal flexible chain of ubiquitin. And then, ultimately, what we need to get to when we're looking at the biosynthetic pathway is how do we attach all of these things? So the question is, what are the linkages between the ubiquitins? They're going to be isopeptide linkages, and they're isopeptide linkages between a lysine on one ubiquitin and the C-terminal glycine of another ubiquitin. So what you have here, again, is an isopeptide. And I think once you get this down in your head as to what's going on, the chemistry is going to be really straightforward. So again, what you have in the case of ubiquitin, you have lysine. Remember, I told you we were going to focus on lysine 48, so lysine 48. Again, it is surface exposed, and it forms a covalent linkage with glycine 76 at the C-terminus. So again, we have this isopeptide linkage. That you're going to see over and over and over again. Does everybody get that? I don't think it's so hard to see, but it's just different from what you've seen before. So now I want to do-- I told you at the very beginning, how do we do all of that? Well, the time is over. I'm sorry. The time is over already. It just goes by so fast. I can't stand this. I didn't even get to the exciting part. So next time we'll have to come back, and we will talk about E1, E2, E3. The chemistry is straightforward, and it's analogous to chemistry you've already seen before. And then we're going to briefly look at how this relates to HMG-CoA reductase degradation-- I'll show you what the factors are, but it's still pretty much a black box in my opinion.
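Since the lecture stops right at the isopeptide bookkeeping, here is a minimal sketch, not from the lecture, that just encodes the connectivity described above: each ubiquitin's C-terminal Gly76 is joined through an isopeptide bond either to a surface lysine on the substrate or to Lys48 of the previous ubiquitin, and the working rule is that a chain of n >= 4 marks the protein for degradation. The class and attribute names are made up purely for illustration.

```python
# Illustrative sketch only: encodes the Lys48/Gly76 isopeptide connectivity
# described above. The names are hypothetical, not from a real library.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Ubiquitin:
    donor: str = "Gly76"     # C-terminal glycine that forms the isopeptide bond
    acceptor: str = "Lys48"  # surface lysine used to extend the chain

@dataclass
class Substrate:
    name: str
    attachment_lysine: str                 # a surface lysine on the target protein
    chain: List[Ubiquitin] = field(default_factory=list)

    def add_ubiquitin(self) -> None:
        # Gly76 of the incoming ubiquitin bonds to the substrate lysine if the
        # chain is empty, otherwise to Lys48 of the last ubiquitin in the chain.
        self.chain.append(Ubiquitin())

    def targeted_for_degradation(self) -> bool:
        return len(self.chain) >= 4        # the n >= 4 rule quoted above

protein = Substrate("hypothetical substrate", attachment_lysine="surface Lys")
for _ in range(4):
    protein.add_ubiquitin()
print(protein.targeted_for_degradation())  # True once four ubiquitins are attached
```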
MIT_508J_Biological_Chemistry_II_Spring_2016
34_Reactive_Oxygen_Species_4_Nucleotide_Metabolism_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: What I want to do today is finish up module 7 on reactive oxygen species and then move on into the last module, which we are obviously not going to get completely through. We're going to be focused mostly on purines and maybe some pyrimidines. And I'll give you a big overview of what I think the things are you need to think about in nucleotide and deoxynucleotide metabolism as a starting point. OK. So we've been talking about module 7 and, in this section, how you control reactive oxygen species for signaling. We were going through the generic overview. And at the end of the last lecture, this is the system we were talking about using epidermal growth factor receptor, which we've now looked at quite a bit as an example. But what I wanted to point out is that it's not limited to epidermal growth factor receptor. So you have insulin growth factor receptor, nerve growth factor signaling, VEGF, IL-1, IL-4, et cetera. And all of these things are all distinct. They all have different signaling cascades. But the generic approach that we've been looking at in the Kate Carroll paper is also, I think, applicable to these other systems. And so what I wanted to do was just make one more point with this, and then what I'm going to do is summarize the general principles of post-translational modification by anything-- we're using post-translational modification by sulfenylation and then briefly come back to the methods used. But we spent a lot of time in recitations in 11 and 12 focused on methods, so I'm not going to spend very much time on that. It's also in your PowerPoint handouts. So the key thing here is the general-- is we have EGF, OK, so that's Epidermal Growth Factor in the membrane. We have epidermal growth factor receptor, which you all know has to dimerize and you all know, at this stage, is a tyrosine kinase. And the key thing we're going to be focused on is if we modify these proteins, what is the biological consequence, OK? Do you have any biological consequence? And if you don't, it's probably just an artifact of the fact that cysteines react rapidly with hydrogen-- not rapidly, but they react with hydrogen peroxide at some level to give you modification. So this is all, I'm just going to say, tyrosine kinase activity. We've already gone through that. And what happens is you activate the NOX proteins. And in this case, it's the NOX2 isozymes. And this is outside, and this is inside the cell. And NOX2 can generate superoxide-- OK, so let's just put this in parentheses-- which can rapidly generate hydrogen peroxide. And so the issue is that the superoxide and all of the hydrogen peroxide needs to come from the outside of the cell to the inside of the cell. OK. So we have hydrogen peroxide. And what is hydrogen peroxide doing? So the model is-- and this is what we've been focusing on-- that the hydrogen peroxide can modify the cysteine by sulfenylation, OK? So we can go from SH to SOH. And in the case of the tyrosine kinase and in the paper you had to read, it turns out that tyrosine kinase by activity assays was more active. So it's phosphorylated. It's sulfenylated. That leads to higher activity. That means it's potentially biologically interesting. 
And we also, in the Kate Carroll paper, looked not only at the activity, but we looked downstream at the signaling pathways, and we saw signaling as defined by phosphorylation events. We saw more signaling. So those are the kinds of peak criteria people are looking at for being biologically interesting. Now, what we also have is a key control, and, in these cascades, like over there, we also have PTP. And that's Protein Tyrosine Phosphatase. And these proteins all have a cysteine at the active site. We talked about this before. And the cysteine at the active site, what can it do? It can really sort of dephosphorylate the tyrosine kinase. And if you remove the phosphate, the activity is lowered. OK. So again, you have something that activates, something that removes it. But what we also know-- so this is the active form, and this is the key in all these signaling events. And so what we also have-- so let me go over here, since I didn't leave quite enough room. So we have PTP that can also react with hydrogen peroxide to become sulfenylated. That's the inactive form. So when it's in this state, basically, you put a roadblock in this pathway. So this is inactive. OK. And the Carroll paper spent a lot of time trying to define-- there are lots of protein tyrosine phosphatases inside the cell-- not anywhere near as many as kinases. So one protein tyrosine phosphatase services many proteins. But both of these guys are regulated by sulfenylation. And there's one third thing, and so this is just giving us the big picture now. If you have hydrogen peroxide in the cell, I've already told you that there are enzymes that can degrade hydrogen peroxide-- peroxiredoxins. And so that removes the hydrogen peroxide, which then prevents these things from happening. So you have peroxiredoxins, which I already talked about. And so the hydrogen peroxide concentration goes down. So that's another mechanism of control. OK. So the take-home message is shown in this slide. It's shown in the papers you had to read. And there are many proteins that have some variation on this theme, and this is a really active area of research to look at this in more detail. OK. Yeah? AUDIENCE: The tyrosine kinase activity, [INAUDIBLE] 160% or something. I was just wondering how they actually classified that as [INAUDIBLE]. JOANNE STUBBE: Active? So, I mean, in biology, that's a huge effect. AUDIENCE: OK. JOANNE STUBBE: So, I mean, to somebody that's doing something in the test tube, a factor of two is nothing. In biology, that's all it takes. So the question is, is it enough? And you should always ask that question. And then you've got to look at the consequences, and you do more experiments. If you hadn't seen any effect, well, maybe you didn't have the right proteins in there, and you need five more proteins to assay, which would give you a bigger effect. OK. So that's the issue with all of these problems. That particular experiment, if you go back and look at it, was done in crude extracts, OK. And the activity is extremely low. They had to use a luciferase assay to be able to measure this and amplify the signal, OK, which probably has a lot of issues with-- can have a lot of issues. So if you're not happy with that, then you're going to have trouble in biology. So the question is, what is the baseline? How much slop do you have? And then you have to do the experiments many, many times. It's all a question of statistics. And then do you believe it? And does the rest of the community believe it? So that's a good question. 
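To make the "it's all a question of statistics" point concrete, here is a minimal sketch of the kind of test you might run on replicate fold-change measurements; the numbers below are invented for illustration and are not from the Carroll paper.

```python
# Hypothetical replicate fold changes in kinase activity (treated / untreated).
# A one-sample t-test asks whether the mean differs from 1.0 (no change).
import numpy as np
from scipy import stats

fold_changes = np.array([1.5, 1.7, 1.4, 1.8, 1.6])  # invented numbers, for illustration

t_stat, p_value = stats.ttest_1samp(fold_changes, popmean=1.0)
print(f"mean fold change = {fold_changes.mean():.2f}, p = {p_value:.3g}")
# A small p-value says the increase is reproducible across replicates; whether a
# change of that size is biologically meaningful is the separate judgment call
# being discussed above.
```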
But if that's what they saw, that's what they saw. And their interpretation was, based on this and other-- they did a lot of experiments in the paper, and that's why we chose that paper-- suggested this is a good working hypothesis. So I'm one of these people, you always start out with the simplest working hypothesis. You do experiments. It always gets more complicated, always. And then you expand it, or you change it. There's nothing wrong with that. That's what science is all about. So NOX. We've been talking about NOX. We talked about it for the last couple lectures. And I had already told you that there were seven isozymes of NOX. OK. We had focused on this guy in the phagosome. We now are focusing on NOX2 again. And this guy is also important, but this guy is not. That's the phagasome oxidase. So you're changing the factors inside the cell that govern what happens. And so each one of these guys-- you Google it, you find another 100 papers on this. People are trying to understand the details of what's going on with these systems. OK. So that sort of just shows you, again, with generic, it affects a lot of growth factors, or a lot of cytokines can use these signaling pathways that NOX is important in. Many of them, in the model, sulfenylation is also important. But in many cases, that remains to be established. OK. So what I want to briefly do, then, is look at the general principles of regulation. OK. And I'm just going to briefly outline these. And we've gone through each one of these examples in the two recitation sections. So I'm just sort of reviewing this and making a point. So is post-translational modification important? OK. And I think your generation needs to think about this, because as the methods become more and more sophisticated, OK, we've got really amazing mass spec methods if you can figure out how to do them correctly. Everything, almost any metabolite in the cells, can modify a protein. Acetyl-CoA, it acetylates things. S-Adenosyl methionine, the universal methylating agent, methylates things. OK. So you have hundreds of modifications on your protein, OK, and it is, in fact, related to, in part, I think, the metabolites interacting with the proteins not enzymatically. The question is, is it interesting? So I want you to think about that. So that's why the general principle is, what are you going to use as a control? You can see it. Are you going to spend five years of your life chasing this? Or is it not interesting? So you need to think about that question. It's not an easy question. And that's what everybody is into now. That's the future for the next five years. So one of the things we see is that-- and I told you this 20 times-- it needs to be reversible. OK. So in our case, it doesn't matter whether it's phosphorylation, dephosphorylation, acetylation, deacetylation, methylation, lots of the methyl group. OK. Ubiquitination, deubiquitination. We seen many examples of this. It needs to be reversible. In our case-- and this is related to, again, oxidative stress-- so this is forward-- how do you reverse this? So you need a reductant. And this could be any one of a number of things. There are lots of reductants inside the cells. So I'm just going to say reductant. I've used thioredoxin here, but this has not been identified in the case of the NOX2 system in the epidermal growth factor receptor. We've already looked at this. You can convert this back to the cysteine. This is reduced. Something else needs to be oxidized. OK. 
So that's a basic thing that you need to think about. OK. The second thing, which I think is very important, and I think this is a general principle used in biology over and over and over again, is increasing the effective molarity. OK. And so we say increasing effective molarity. And why is that important? Because if you have two things reacting-- here, we have the things reacting. Here, we have two things reacting, hydrogen peroxide and a protein. So if we can generate them right next to each other, the concentration is much higher. The rate of the reaction has to be faster. OK. And so how do you increase the effective molarity in the case of the epidermal growth factor receptor? We looked at this. These guys were in the membrane, and NOX interacted with the epidermal growth factor receptor by the immunoprecipitation experiments that we looked at in the recitation. So an example is, I'll just say, NOX dot EGF receptor immunoprecipitation. OK. So another way that-- and we're talking about signaling-- nature has used over and over again is, lots of times, we have these little G proteins, GTPases. And these GTPases can be in the cytosol. But a lot of the time, they do all of the signaling at the membrane. How do you get them to the membrane? Anybody got any idea? So these things move around inside the cell. Localization becomes really key. How would you get a little, soluble G protein to the membrane? AUDIENCE: Through post-translational modification, like a GPI anchor. JOANNE STUBBE: Yeah, so you would put an anchor on it. And what do you use as anchors? You can use isoprenes. So farnesylated, geranylated are frequently used. You've seen that in the first module, module 5, of the second half. And you also can put a fatty acid on there. It's used over and over again. So the prenylation reaction, people have been looking for the reversibility of that for a long time. And as far as I know, no one's found it. But the fatty acid, which is put on usually as a thioester or as an amide, you can hydrolyze it off. So what you can do, then, is have-- let's just use fatty acid whatever. So you have CoA fatty acid. And so this gets modified. And then this goes to the membrane. So what that does, then, is it takes it out of solution under a certain set of conditions. So you've modified your protein, just like we had by sulfenylation. And you bring it to wherever the proteins are it's interacting with. So you're increasing the effective molarity. OK. So this happens all the time. Putting that into a big picture related to what I'm going to say next is, I think, incredibly important. So this is a general principle, but one that needs to be studied or described in a lot more detail. So the third thing is the post-translational modification must have a biological phenotype. So this is the question that Shiva was just asking. If this is increased by whatever, 50%, is it interesting? Is it important? OK. So you need to do additional experiments if you don't think, based on what you know about the system, that that's true. And so in the case of the NOX system, what do we use? Remember, we talked about this. We did two things. We have increased activity of the tyrosine kinase. And then we also had increased downstream signaling. And how did they look at that? We looked at that by phosphorylation. So we used antibodies to serine phosphate. OK. So by those two criteria, the NOX system in the Carroll paper was interesting. And then the fourth thing that I think is also really important is relating to this one. 
Whatever the signaling agent is, if you have ways of removing it, you then decrease the signaling. OK. And so this is frequently observed in many of these systems. So enzymes can modulate the concentrations of the signaling agent. And the example I used up there with the NOX system-- so we're looking at hydrogen peroxide, and I'm not going to draw the structures out, because I've already done this before, the peroxiredoxins. We've gone through that two lectures ago. Something can remove that signaling agent. OK. So to me, these are the key things you need to think about if you're looking at whether you think your post-translational modification is interesting or not. And a lot of people are doing that. So we see lots of modifications because of the power of mass spec. The question is, are they interesting? And so finally, the only other thing I wanted to say here is in the last little section. And I'm not going to look at this in any detail, either. But if you look at methods-- so this is the last-- how do you look at this? So what you saw in the Carroll case-- and again, it's not unique to the Carroll case-- is you need to develop a reagent that's specific for the post-translational modification. So number one, you need to develop a reagent specific for post-translational modification. It needs to be specific. It needs to be fast. So the kinetics are important under physiological conditions. And it needs to be cell permeable. OK. Because ultimately, with something like hydrogen peroxide or NO or many of the other signaling agents, these guys are really reactive. And you crack open the cells, and you do things out, and you add more oxygen. You can change the levels of modification all over the place. So you really want to look at this contained within the cell under controlled growth conditions. And this is what the two papers we looked at by Carroll were focused on. OK. So you have a reagent. Hopefully, you believe dimedone was a good reagent. OK. So I'm not going to-- but NOX for NOX, sulfenylation, we use dimedone. OK. We discussed this. We've discussed the mechanism. And then what we looked at is MS analysis and how you had modified the reagent so it worked effectively inside the cells so you can enrich. And then use modern methods. We break down the protein into peptides and sequence doing this MS, MS. OK. So I'm not going to talk about that more, because we had two whole recitations on these topics. OK. So that's what I wanted to say in this module on reactive oxygen species. Reactive oxygen species, I think, are front and center. You can't pick up any journals or even listen on the radio or newspapers, if you read newspapers, without seeing reactive oxygen, reactive nitrogen species. I think you now know what you need to think about. And here's an example-- reactive oxygen species can modify cysteines. Cysteines, you've seen over and over and over again, play central roles in enzymatic reactions and control of signaling pathways. And I think the growth factor receptor is a good example of that, of the kinds of things you need to do to try to determine whether these modifications, which are everywhere, are really, in fact, real. OK. So that's what I wanted you to get out of this little module. What I want to do now is move into the next module. And the next module, last module, module 8, is going to be on nucleotide metabolism. How bad am I? Oh, good. I've got lots of time. All right. OK. So let me just erase something so we have some place to start. 
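As a brief recap of the probe chemistry referred to above (it was covered in the recitations): dimedone is a 1,3-diketone whose nucleophilic carbon, the one between the two carbonyls, attacks the electrophilic sulfur of a protein sulfenic acid to give a stable thioether adduct, which is what gets enriched and then sequenced by MS/MS. Schematically, with the mass shift quoted only as an approximate, commonly cited figure:

```latex
% Sulfenylated cysteine trapped by dimedone (5,5-dimethyl-1,3-cyclohexanedione);
% the adduct adds roughly +138 Da relative to the unmodified cysteine thiol.
\[
  \mathrm{protein{-}S{-}OH} \;+\; \text{dimedone}
  \;\longrightarrow\;
  \mathrm{protein{-}S{-}(dimedone\ adduct)} \;+\; \mathrm{H_2O}
\]
```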
So nucleotide metabolism is something that, in our introductory course, we don't talk about at all, because we just don't have time, and we just focus on glycolysis, sugar biosynthesis and degradation, fatty acid biosynthesis and degradation. But you all know, and I'll show you that, nucleotides are everywhere. And so, in my opinion, nucleotides had their heyday when I was your age. Everybody and his brother was focused on nucleotide metabolism. The data is really old. We learned how to make nucleotides back in those days. But we didn't have any of the tools we have now. We used a T60 rather than an 800 megahertz machine to look at [INAUDIBLE]. I mean, you had to take the spectra 20 times to remove the spinning sidebands. Anyhow, we didn't have any of the modern methods. But everything back in those days was correct, because people really cared about the truth back in those days, as opposed to publishing in Nature, Cell, and Science. OK. So that data, if you want reproducibility and you go back in the literature, is absolutely going to be reproducible. OK. So I'm going to show you where we are. But I would say, in the next decade, it's going to be the era of nucleotides. But what we need is ways of looking at nucleotides inside the cell. And I'll show you the complexity of this. But nucleotides are everywhere. They control everything. OK. And we really don't know that much about regulation. And to understand regulation, you need to be inside the cell. I can tell you what all these enzymes do. I know a lot about the enzymes. But the question is, how do they work inside the cell? And how are they regulated? So I'm going to try to give you sort of a picture of what the issues are and teach you something about pathways, because a purine pathway, to me, is sort of an amazing-- it's not erasing-- it's sort of an amazing pathway. And in fact, one of my heroes, when I first moved to MIT, is Jack Buchanan, whose picture is on the first slide. He was still here. And I just remember talking to other people. He was older than me. I think he was probably 75. And he was just my hero. I mean, if you read his papers, it's totally mind boggling what the guy did with what he had. OK. And everybody was dumping on him, because he had moved into the state of the art back in those days. OK. But if you took what he did in perspective, he'd done so much more than all the people that were dumping on him. It drove me nuts. So I used to have fights with everybody when I got here, telling everybody what a great scientist this guy was. And I'll try to point out why I think he was such a great scientist when we look at the pathway. Anyhow, the purine biosynthetic pathway, we'll see, was elucidated in pigeons. He used to catch the pigeons in the Boston Common. And then I'll tell you why. They have a different metabolism of excretion than humans do, and so you could feed the pigeons N15. This was back in the 1940s, 1950s. You could feed them stable isotopically labeled nitrogen stuff. And we'll see purine's got nitrogens all over the place. And then you isolate the poop and then characterized it. And that's how we unraveled the pathway. OK. All right. So where am I? All right. I just want to make sure I'm in order. So reading. So what I've assigned you to read in 5.07, people haven't done nucleotide metabolism. So we put it online for the chapter on nucleotide metabolism from Voet and Voet. There's a lot of stuff in there that's not right, but it gives you sort of an overview. 
And you can take it out of any book if you use Stryer or if you use whatever. You can use any book you want. It just sort of gives you a big picture. And the picture keeps changing, and the books don't keep up to date. OK. I gave you an article to read by Benkovic, which is a review not just focused on the papers that we've talked about and we'll talk about today in recitation. And so what I want to do is, after introducing you to the nomenclature, I'm going to give you a general overview of nucleotide metabolism, focus a little bit on the biology of purines. Then we'll talk about the pathway and why I think the pathway is interesting. And we were going to close with this section, which is what we're doing on today. One of the reasons I talked about this is because I think this idea of purinosomes, complexes of transiently interacting proteins, has captured people's attention for decades. And when this paper came out in 2008, it was one of the first examples where people thought they might have gotten evidence inside tissue culture cells-- so it's still in vitro-- to show that these transient interactions of pathways play another regulatory mechanism inside cells. OK. So that's where we're going. OK. So nomenclature. OK. So many of you probably have seen this before if you took 7.05 instead of 5.07. I guess they taught in-- did they teach you in 5.07 nucleotides? Any of you have Ting and Klibanov? Didn't they teach you about nucleotide metabolism? I thought they taught about DNA replication. AUDIENCE: They talked about DNA replication. JOANNE STUBBE: Well, how can you talk about DNA replication without knowing what a nucleotide is? Sorry. All right. So anyhow, I'm not going to draw. I'm not going to draw these structures on the board. But this is like the amino acids. I think you should know the nucleotides, OK? People hate me for the amino acid side chains, and the pKa is something else you can dislike me for. But anyhow, these are the bases. The names are not so easy to remember. But, I mean, it's central to all of genetic material. So it's pretty darn important, no matter what kind of a biologist, biochemist you are. So we're going to be looking at the purines-- adenine and guanine. So these are the bases-- thymine, cytosine, and uracil. OK. And if you take the base and stick on a sugar-- OK, so this sugar is ribose-- you now have the nucleoside. OK. And this is in the introductory-- if you don't know this, you should read the first few pages of Voet and Voet, and they'll introduce you to this nomenclature again. But you can come back to your notes. So I've redone these notes again, and I will repost them again-- whoops-- putting in more detail, because I didn't really know what your backgrounds are. So this is something that I think-- so we have adenosine, cytidine, guanosine, uridine. What about thymidine? Why don't I have that up there? So this is a take-home message from the next few lectures. AUDIENCE: Because they're [INAUDIBLE].. JOANNE STUBBE: So these all have [INAUDIBLE],, two prime, three prime sys hydroxyls. There is no ribothymidine. OK. You only have deoxy. OK. So thymidine, some people write "deoxy." That's redundant. It is deoxy. Thymidine is deoxy. So this hydroxyl is replaced with a hydrogen, OK, on thymidine. So that becomes really important in connecting nucleotide and deoxynucleotide metabolism, because you have to get from the nucleoside to the deoxynucleoside. And it's not straightforward. OK. There are many, many steps. The metabolism is complicated. 
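Before going on, here is the base/nucleoside bookkeeping from the last few sentences collected in one place. This is an illustrative lookup, not something from the course handouts; the None entries encode the points just made, that thymidine is deoxy by definition in this bookkeeping and that uracil is the RNA base.

```python
# Illustrative lookup: base -> (ribonucleoside, 2'-deoxyribonucleoside).
# None means "not a canonical building block" in this bookkeeping.
NUCLEOSIDES = {
    "adenine":  ("adenosine",  "deoxyadenosine"),
    "guanine":  ("guanosine",  "deoxyguanosine"),
    "cytosine": ("cytidine",   "deoxycytidine"),
    "uracil":   ("uridine",    None),       # U is the RNA base; dUMP appears only as an intermediate
    "thymine":  (None,         "thymidine"),  # no ribo form here; thymidine is deoxy by definition
}

for base, (ribo, deoxy) in NUCLEOSIDES.items():
    print(f"{base:9s} RNA: {ribo or '-':15s} DNA: {deoxy or '-'}")
```

With that bookkeeping in hand, back to the metabolism-- as just noted, getting from the nucleoside to the deoxynucleoside takes many steps.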
And I'll show you one of them. But every organism is slightly different. OK. So one of the things I want you to remember is you have bases, and you have bases in the sugar. Those are the nucleosides. These are the bases. And in DNA, you have T, or, as in RNA, you have U. So you need both uridine and you need thymidine in DNA as the building blocks for DNA biosynthesis. OK. And what we're going to do-- and this was, again, developed mostly from the work of Jack Buchanan's lab a long time ago. And you don't need to remember this. But what pigeons excrete is uric acid. And so this is the molecule they isolated from pigeon poop, OK, which allowed them to tell, ultimately-- which is the key to these isotopic labeling experiments-- the source of all of the different atoms in purines. OK. And we're going to come back to this. But what I want you to see-- this is true in both purines and pyrimidines. And what we're focusing on, what we're going to be focusing on, is de novo purine biosynthesis. But what I'm going to also show you, of course, is you have salvage. So you can get purines from the diet your DNA breaks down, your RNA breaks down. So all of that stuff can then be used, as well. And so it's a question of de novo, and it's a question of salvage. I think it's really underappreciated how important salvage pathways are. And now, with mass spec and isotopic labeling, we can actually figure that out fairly recently. And people interested in making chemotherapeutics are finding, really, sort of things nobody ever expected in terms of how much comes from salvage versus how much comes from de novo. OK. And the salvage is easy to understand. I'll show you. That's chemically simple. The de novo is much more complicated. OK. So anyhow, it's these labeling-- we'll come back to this in a minute. But I think this is important. All of these atoms come from simple building blocks. And you'll see that when we look at the pathway. So glutamine. Glutamine is the major source of ammonia in all metabolic pathways. How does that happen? I'm going to show you. That will be one of the generic reactions I talk about, because the same approach is used over and over again by nature. And the nitrogens play a key role in these heterocyclic purines and pyrimidines. Glycine. We'll see where glycine comes from. Aspartic acid, formate, and bicarbonate. OK. So you can't get much simpler than that. And most of you probably know these all self assemble, allowing you to maybe think about the evolution of this process. You can throw them all together, and you can get a purine out the other side with varying degrees of success. OK. So that's a purine. So what I want to do now is sort of give you an overview. So I've introduce you to the nomenclature and what the purines are going to be. But I want to give you an overview to nucleotide metabolism in general. OK. There's a lot of stuff, so the way I'm going to do this is up and down. OK. So you need a piece of paper, if you're writing this down, that goes up and down. OK. So what's central to everything is phosphoribosyl pyrophosphate. OK. So this is a central player. So this is PRPP. And in your recitation and also in your handout, I've given you the horrible names that are involved with the purine pathway. If we have a test, I will give you the structures of the purine pathway. I don't expect you to remember the details of the purine pathway. It's complicated, and I'm not sure I would have designed it that way to start with. 
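For reference, the classic labeling experiments referred to above (the pigeon work) gave the standard textbook assignment of where each purine ring atom comes from. It is collected here as a simple lookup purely as an aside; it is not a slide or board drawing from this lecture.

```python
# Standard textbook atom-source map for the purine ring, from the classic
# isotope-labeling work referenced above; listed here only as a reference.
PURINE_ATOM_SOURCES = {
    "N1": "aspartate",
    "C2": "formate (via N10-formyl-THF)",
    "N3": "glutamine (amide nitrogen)",
    "C4": "glycine",
    "C5": "glycine",
    "C6": "CO2 / bicarbonate",
    "N7": "glycine",
    "C8": "formate (via N10-formyl-THF)",
    "N9": "glutamine (amide nitrogen)",
}

for atom, source in PURINE_ATOM_SOURCES.items():
    print(f"{atom}: {source}")
```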
So it's not like it's so logical, like some of the other pathways, which are straightforward. OK. So where do you think phosphoribosyl pyrophosphate comes from? Does anybody have any idea? What did you learn from basic metabolism? This is something that's covered in most introductory courses. Where does PRPP most likely come from? AUDIENCE: Out of the pentose phosphate pathway? JOANNE STUBBE: Yeah, out of the pentose phosphate pathway. So the two things that play a really critical role in nucleotide metabolism are the oxidative and non-oxidative pentose phosphate pathways, where you form ribose 5-phosphate and NADPH. OK. So over here, we have ribose 5-phosphate. And phosphate and pyrophosphate, from now on, I'm going to abbreviate so I don't have to draw the structures out. But the charges are important, so you need to remember the structures that are charged. So this is ribose 5-phosphate. 1 prime, 2 prime, 3 prime, 4 prime, 5 prime. OK. Let me ask this question. Why do you think this is the major form of ribose inside the cell? I don't know if they teach you this or not, but I think it's important. Why is ribose always phosphorylated inside the cell? AUDIENCE: To keep it in the cell? JOANNE STUBBE: To keep it in the cell. But what happens if you don't phosphorylate it? Yeah, to keep it in the cell. Phosphates keep things inside cells. What happens to that structure when it's not phosphorylated? AUDIENCE: The [INAUDIBLE] it can open. JOANNE STUBBE: So it can open. Is that what you see inside the cell? AUDIENCE: No. JOANNE STUBBE: No. What do you see? What kind of a sugar is this if you look at-- it's a five-membered ring sugar. What's that? Anybody remember that? This is what happens. I digress, and then we don't get to finish the course. So it's a furanose. OK. If you ring open this thing, then it can close. It either forms a five-membered ring or a six-membered ring. It forms the six-membered ring almost all the time. That's a pyranose. So it's not in the right state. So then you have to have your enzymes. And there are enzymes that do this, that can catalyze the conversion back into the ribose form. So phosphorylation-- it keeps it inside the cell, which is incredibly important. But it also controls the state with which you want to deal in metabolic pathways. OK. And we're going to be talking about-- hopefully, we'll get this far. I'm not sure. But if we start with glutamine, then I'll abbreviate it like this. Bicarbonate and aspartate. OK. So those are the same three things I just told you were involved in making the purine ring. These are also involved in making the pyrimidine ring. So what we're looking at now is the de novo pyrimidine pathway. OK. And what we'll see if we do this is that we have-- I skipped a number. So there are four steps. And you make the molecule called orotate. OK. And what we're doing now, which is going to be completely distinct from the purine pathway, is you make the base. OK. So you make your nucleotide base. Let me go back. You make your nucleotide base first. And then you're going to stick on the ribose 5-phosphate. In purine biosynthesis, you make the base on the ribose 5-phosphate. So the strategy is distinct. OK. So here, we have no base on it. So what happens here is it interacts with the phosphoribosyl pyrophosphate. We'll talk about this reaction, because it's a major way you use salvage proteins if you get a base-- how do you put them back together to form the nucleotide? It allows you to form OMP. OK. So OMP, we're not there yet.
OMP, we'll see, can get converted. It loses CO2. We'll look a little bit. The chemistry in this pathway is really pretty simple. So this is enzyme five. This is enzyme six. So you lose CO2, and you form UMP. OK. So UMP is one of the nucleotides we need to actually make RNA. To make DNA, we need deoxythymidine. OK. And we also need deoxycytidine. So this pathway does not give us cytidine. And so the way we go from UMP to the cytidine monophosphate is complicated. OK. So you're going to see there's a couple little-- central to everything is sort of straightforward. But then you'll see it's going to be organism specific. And there's a lot of messing around you have to do with kinases and hydrolases to get you into the right stage to get you all of the building blocks required for RNA and DNA biosynthesis. OK. So I'm going to go over here. And I'm going to say many steps. And we'll look at this to form CTP. And this does not go through CMP. OK. So there are many steps here. And so let's just put a question mark there. Also, we need to have TTP. And again, there are many steps. And we're going to have to figure out how to do this. OK. So it's not simple to get from UMP to CMP or deoxy TMP. OK. So I'm just telling you where you're going to see the complexity in the end. OK. So phosphoribosyl pyrophosphate is central in what it does. I'm not going to have enough room to do this. But anyhow, there are 10 steps. And you've already seen this if you had recitation on Thursday. Or hopefully, you read the paper. This is the purine pathway de novo. OK. And so what we're doing is we have the sugar. And so in every single step in the pathway, what you're doing is you're building up the base. OK. So you're adding it. So that's why there are so many steps. And I showed you whatever on the first slide or maybe the second one where all of the pieces come from. So again, let me just emphasize this. These all come from small building blocks. Let me do that over here. So you have glycine, bicarbonate, aspartic acid, and formate. OK. So the other thing from PRPP is salvage. And the salvage pathways are really important when you're scarfing up bases that are provided by the diet or from breakdown of DNA and RNA. So you have the salvage pathways. OK. And so this can come from the diet or from nucleoside or tide, tide being having a phosphate on it break down. And why is this important? It's important because many organisms like parasites, like malaria parasites, don't make purines. The only way they can get purines for anything is from salvage pathways. It's a major target focusing on anti-malarial and, in some cases, antiviral systems. So here, we have 10 steps. And at the bottom of this, I'm not going to draw the structure out. We don't get to AMP and GMP, which is what we were looking at in the previous slide. We get to IMP. OK. And then IMP, that's a branch point. IMP can get converted either to GMP and AMP. So those are the two purine nucleotides that we need as building blocks to make both RNA and DNA. So we end up over here with AMP and GMP. OK. So when you get that down, there's one other thing I want to say up on the top board. And that's to introduce you to a co-factor that many of you probably haven't thought about before, which I plan to talk about. And that's folate. So any of you think about chemo therapeutics, folates have been around for decades. And it's a major target, successful target, of drugs that are used clinically in the treatment of a wide range of cancers. 
So folate, this is a key co-factor. And what I will show you is that it can do chemistry. It does one carbon transfer, so one carbon at a time. And what's really interesting about it, and I'll draw the mechanisms out-- it can transfer the methyl-- it can transfer one carbon in an all oxidation state in the methyl state, the aldehyde state, and the acid state. So for example, in the purine pathway over here-- I'm just going to draw this out-- you need it in this state, the acid state. OK. In this state, what you're going to see is you need it in this state. We'll come back and look at this again. Sorry. Methylenetetrahydrofolate, which is a key player. So this is going to end up being the methyl group and thymidine. OK. The only interesting co-factor chemistry in the purine pathway is folate. And folate plays a central role in therapeutic design. OK. So then we're down here, and we still haven't gotten finished. How are we doing? All right, I'm over. So just let me say this. So now, you're into kinases. OK. And there are lots of different kinases. So the kinase story gets complicated, but it's extremely important. So if you're going to make deoxynucleotides, you have to have it in the diphosphate stage. So there are kinases that can convert these guys and also the pyrimidines from over there into NDPs. OK. So we're going to have to think about kinases. And in all organisms-- again, this is de novo-- deoxy NDPs are made by ribonucleotide reductases. OK. So this is the only way, de novo, that you could make deoxynucleotides. If you think about the substrates for DNA replication and repair, they need to be triphosphate. So again, you need kinases again. So I'm going to stop here. I will finish off the last half to get this to go back together. And we will talk about folate metabolism, introductory and folate metabolism, so I don't sort of digress. And then we're going to look at the purine pathway and things that I think are interesting about the purine pathway.
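To keep the three oxidation states straight, here is the folate point above as a small lookup. These are the standard textbook assignments, summarized here only for reference; it is not a slide from the lecture.

```python
# One-carbon units carried by tetrahydrofolate (THF), organized by oxidation
# state. Standard textbook assignments, listed here only as a reference.
FOLATE_ONE_CARBON_POOL = {
    "methanol / methyl level":        ("N5-methyl-THF",        "methionine synthesis (homocysteine -> Met)"),
    "formaldehyde / methylene level": ("N5,N10-methylene-THF", "the methyl group of thymidylate (dUMP -> dTMP)"),
    "formate / formyl (acid) level":  ("N10-formyl-THF",       "C2 and C8 of the purine ring"),
}

for level, (carrier, used_for) in FOLATE_ONE_CARBON_POOL.items():
    print(f"{level:32s} {carrier:22s} -> {used_for}")
```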
MIT_508J_Biological_Chemistry_II_Spring_2016
31_Metal_Ion_Homeostasis_7_Reactive_Oxygen_Species_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: --iron homeostasis in module 6. And then, we're going to move on to module 7. And the readings have been posted. And the PowerPoint has been posted for today's lecture as well. OK. So we were talking about last time peptidoglycans in gram-positive bacteria. And they're actually quite thick, depending on the organism. And so, what is the strategy nature uses to be able to pick up iron where you have a bacteria surrounded by this huge peptidoglycan? And the strategy in Staph aureus that has evolved is they attach everything, almost everything, covalently to the peptidoglycan itself. So at the end of the last lecture, we were looking at-- we introduced you to the operon and Staph aureus. And there were proteins called sortases, which are transpeptidases, really, basically. And they have different sequence specificities. And I'll just go back one. So where we were last time, we have this operon. And all of the proteins that are going to be involved in uptake of heme into the cell are called the Isd proteins. And so, here is the operon. And you can see that each part of this operon is activated by a Fur box. A Fur box is the transcriptional factor that turns off or on regulation at the transcriptional level by the presence or absence of iron, which we're not going to talk about further. And what I wanted to point out here before we move on is that all of the yellow anchors are proteins that are attached covalently by Sortase A, which has a defined zip code. The blue in IsdC has its own sortase, Sortase B. And IsdE is not attached covalently to the peptidoglycan, but actually has a membrane that's attached covalently to a lipid, which is then inserted into the membrane, which is the other strategy that's used in these systems. And let me just point out-- you saw this. I'm not going to spend much time on this today because you've already had a problem in your last problem set on this. But you have these little N domains, or the NEAT domains, which are 120 amino acid domains that allow you to pick up heme, and are involved in transfer of heme down a bucket brigade. So what we're trying to do is attach these Isd proteins to the peptidoglycan. How it does that, how they end up being organized, how many there are, is all an active area of research. We don't really know that. Somebody might know it, but it's not in the published literature. And so, the idea is, again, you can see all of these proteins in some way are tethered via usually a single transmembrane spanning a region in the protein. And here is one Isd protein here with a little zip code. And it involves the Sortase A, which you've now seen over and over again, happens to have a cysteine in its active site to go through a covalent bond formation. And you're going to be cleaving one peptide bond and forming another peptide bond. Where have you seen that before? That's how you did cross-linking, the transpeptidation reaction we talked about last time that allows you to form the bag. And the bag of peptidoglycans becomes cross-linked. And that's the key to survival of the organism. And so, you cleave this bond between T and G. The specificity is understood. And people in the Pentelutes Lab use sortase quite a bit. 
But it has been engineered by the Liu lab to be more efficient as a catalyst. So this is a natural catalyst. You do a transpeptidation reaction. And then you have-- this, to me, is what's amazing. You have this lipid II, which is that C55 isoprene system with the pentaglycine hanging off the pentapeptide, all of which is generated on the inside of the cell and gets transferred to the outside of the cell. One is then set up to do peptide bond formation, just like we saw with the transpeptidase reaction. And so you end up, then, with this lipid II attached to the Isd protein. And then, that's taken-- so this is the detailed chemistry that we just discussed. And you can go through it again if you don't remember it. But it's very similar to things we've now seen many, many times in this course. So once you attach this, what happens is it gets transferred. The whole thing now gets transferred by a mechanism similar to forming the polymer. But now you have the pentaglycine with an Isd protein attached as well. And so you can see, then, this is the growing peptidoglycan with alternating units of N-acetylglucosamine and N-acetylmuramic acid we talked about last time. And now we have the Isd protein covalently attached. So this is a pretty amazing strategy. And you can imagine trying to study this under natural conditions. With everything attached, it's quite challenging. And so the problem you had in the problem set, what did they do? They cut everything off. And all you're looking at is a little piece of the protein, not the whole protein anchored in this way with the right topology in the active site. So, to me, even thinking about how you make a peptidoglycan that's 50 nanometers thick-- so you add things. How does it construct itself, a sacculus around this system, and build up? I think nobody knows that. And recent studies have been able, for the first time, to reconstitute the transpeptidase and glycosyltransferase reactions in Staph aureus. So maybe, with the sort of imaging technology we have, in the next five or six years somebody will figure out how this amazing thing is organized. So anyhow, we were able to attach these things covalently. And so, here we are. We've got all of these things anchored. And then, in your problem set-- and in the last part of this, I just wanted to very briefly go through how people are trying to study how the heme that's up here gets transferred. And so, this is a cartoon model. And so, we have hemoglobin. We've been able to get-- we've been able to lyse red blood cells, get hemoglobin to come out, or the haptoglobin-hemoglobin complex. And now somehow you have to extract the heme. And that's done by two proteins, IsdB and IsdH. And then we somehow have to get it to the plasma membrane. And so, the model is that this is transferred from IsdB through a NEAT domain. So all of these transfers up to here are through these little domains that can bind heme. And we have a structure of it. I'll briefly show you that. And then the question is, is there an order? If you get it out of IsdB versus IsdH, is it transferred? Here we have A closer to this than B. But is C closer than A? What is the organization? How many of these guys are there? Can they be transferred back and forth between the IsdAs? I mean, there are 100 questions you can ask. And I think we don't know the answer to any of them. And the methodology, the problems that you looked at-- again, they were looking at something that wasn't the intact system. And the key to all of this, of course, is the kinetics.
And I think when you start taking apart the pieces-- so you have this piece and this piece, and half the spinach is missing, then you have this issue of how do you design the experiment so you have the right concentrations, so they meet in an efficient fashion? And so the kinetics of what you were looking at in your problem set are not realistic. Because things aren't covalently attached. And so the question is, does it really give you a good representation of what's going on in the intact system. So the model has been from studies like the ones that you saw in your problem set, that there is a pathway. That this protein can extract-- the B and the H proteins can extract the heme. It's transferred to the A protein. And so, here we have, can it be transferred directly to the C protein? Or does it need to go through the A protein? So then you can start doing experiments like that. And you can take this protein and look for transfer to this protein. Or you can add in this protein and see if the rate, if you set the experiment up correctly, increases. You can also ask the question, are these reactions stoichiometric or catalytic, which was also asked in the problems that you were looking at. If there is an order, how fast is the transfer from here to here versus here to here? And if it's really fast, what can happen is you don't need stoichiometric amounts of this. You can use small amounts of this to get the transfer to work. And those are the kinds of questions that people are actually focused on. So this is one model that came out of the types of studies that you guys had on your problem set. And eventually, you need to get down here to transfer this to the ATPase, which allows the heme to be transferred into the cytosol. So these proteins are structurally distinct from this protein, which is structurally distinct from that protein. We have structures of all these things. And they all do heme transfer. That's what their function actually is. And so the details, the molecular details of how the heme is transferred is a major focus of a lot of energy right now. And so, if you actually look at this, I want to just say a few things about how this happens. And so, one of the things that you need to think about, so what are the methods to examine Isd-heme transfer? And this is through these NEAT domains, which I defined before. And so, one of the first things that people did was number one, you need to clone, express, purify, the Isd proteins. And most of these studies have been done in pieces versus full length. And when you do-- so you could try to do the whole thing. Some of these things have three NEAT domains. Some have one NEAT domain. If you look in those cartoons I gave you, the number of NEAT domains are defined by sequences. So you can study the full-length thing. Or the other one-- and, in fact, I think in your problem set, I can't remember, but in the problem set I think you actually did an expe-- was an experiment that you had one NEAT domain versus two. And what was the differences in the transfer? So you can make these things. But when you make them, heme is quite often-- because there is biosynthetic machinery like there is to insert iron clusters, biosynthetic iron clusters, there's also biosynthetic machinery to insert hemes. So it comes out-- these come out to be apo. So then, what you need to do is you need to add apo plus protoporphyrin IX to get presumably holo-- holo, whatever Isd is. And then, you need to purify that so you can get rid of any heme. 
You don't have a bunch of heme floating around. So, it's, again, yet another purification to look at all of this stuff that may or may not be easy. And so, then after you do that, what you want to do is you need to characterize the spectra of the Isd proteins loaded with heme. And so this is the key to the solution of the problem, which is the same thing you saw in your problem set. But if you look at hemes-- and I'll show you one set of data, a different one from what you had in your problem set. But hemes, you all know this from looking at hemoglobin, when you prick yourself and you bleed, if it's red or it's blue, depending on the oxygenation state, they have a very strong band called the Soret band at 400 nanometers-- about 400. And that's a key thing. Why do they like the Soret band? Because it has extremely high extinction coefficient. I don't know what the number is. But its extinction coefficient at 400 is approximately 10 to the fifth. So it's easy to see that's why people have studied hemes over the years. Because heme is so much easier to see than any non-heme iron system. And what you can see here, which also turn out to be quite useful, is these much weaker bands between 500 and 650. And those bands can also be used if you have enough sample to look at this. And they're more distinct. They're indicative of the coordination environment in heme, whether it's hexacoordinate or whether it's pentacoordinate. So all of this, I'm not going to talk about the spectroscopy. The spectroscopy of hemes has been extremely well-studied and is extremely rich. And so, you need to do that. You need to have your little proteins characterized and loaded with heme. You need to quantitate the amount of heme balance. They want to make sure you don't have any free heme around which can interfere with all of your experiments. And then you can ask the question-- you can start asking your questions once you've got that information. If you start out with IsdA loaded with heme, does it get transferred to IsdC in the apo form? And so, how do you monitor this reaction? Anybody got any suggestions? One way would be-- so you want to-- the question that they're focused on here is, does IsdA with a heme plus apo-IsdC, does it transfer the heme to give you apo-IsdA plus heme-IsdC? And the question is, what's the mechanism of this transfer? So I just told you we have spectroscopy. So I'm going to show you you could potentially do that. That's not so easy to do. But is there another method you can think of so we could monitor the reaction? You need an assay. And that's what you saw in your problem set. So we can take advantage of the region of the Soret band, or perhaps in the longer wavelength. What other method might you be able to use to monitor this reaction that we've learned about recently in recitations, which hasn't been used in the papers that you've read, but actually has been used to study these systems? AUDIENCE: Mass spec? JoANNE STUBBE: Yeah. Mass spec. And so, what people have done-- so you have to carry-- what you will see is that if you look at the sizes of these things, some have two NEAT domains, some have one. The apo also is distinct in size from the non-apo. So theoretically, if you do all of your homework, you can use mass spec quantitatively to measure these reactions. And a lot of people have recently gone to that method, because this method is challenging. 
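As a quick worked example of why the Soret band is so convenient for quantitating heme loading (using the approximate extinction coefficient quoted above; the exact value depends on the protein and the coordination state), Beer's law gives

\[
c = \frac{A_{400}}{\varepsilon_{400}\,\ell} \approx \frac{0.5}{(1\times 10^{5}\ \mathrm{M^{-1}\,cm^{-1}})(1\ \mathrm{cm})} = 5\ \mu\mathrm{M},
\]

so an absorbance of only 0.5 in a 1 cm cuvette already corresponds to about 5 micromolar heme. That is why low-micromolar protein is enough for these transfer experiments.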
Now, again, the caveat is, so you always need to remember this, is that all the Isd proteins are not covalently attached to the peptidoglycan. So to me, this immediately raises this issue of how do you decide how to do your experiments? So how much-- do you use micromolar? Do you use millimolar? Is it widely different, depending on whether it's attached or not attached? And all of that is going to affect the kinetics of this transfer. So you can see transfer. But what you really want is the rate constants for transfer. So you have two questions, is do you see transfer? So that's the first question. And so, one could tell that by either of these two assays. So you could use one and two. And what you really want are the rate constants for transfer. And the rate constants for transfer are dependent on the concentrations. So how you set this up is something you've got to do a lot of messing around with. And so, if you look at this model, you can ask the question, how is this transfer occurred? Do you go through a ternary complex? So does this form a complex with this? And then the transfer occurs through the complex? Or you can ask a question, does the heme dissociate, and then the heme get picked up? So you can ask the question is, what is the order of addition? So you can look at the mechanism of transfer, and specifically, the order of addition. Do you need the second protein there to see transfer? So they've done a whole bunch of experiments like this. I'll show you using the Soret band what they actually monitor in this particular reaction. But they did an experiment where they just took IsdA, IsdA loaded with heme, and asked the question, does heme go into solution? So that's not a trivial experiment either because it can rebind. It depends on the on-rate and the off-rate. So you have to have a way of making sure that if it comes off, you pull it to the right to be able to measure the rate constant. So this whole problem is really associated with thinking about detailed kinetic models, which I'm not going to go into. But if you look at the data, that's what you need to think about to believe the data, whether the data has been interpreted correctly, that I'm very briefly going to show you. You need to derive the equations, and look at what your expectations are. Are they consistent with the kinetic analysis of what's going on? So here, they see a rate constant. So here, they're just looking simply at this reaction. Does this go to IsdA apo plus heme? And if they look at that rate constant, it's .0007 per second. So it's really slow. So then the question is, is this transfer-- does anybody remember with the rate constants were for transfer in the problem you worked at? Was it seconds? Was it minutes? Did you think about it? I don't remember what the numbers are off the top of my head. AUDIENCE: Like, tenths per second. JoANNE STUBBE: Yeah. So that's much faster than this. And so, this is a really low number. But the question is, how low-- remember, we're missing part of a whole system. And so it could be really low, just because we don't have the system set up to mimic what we see in the native organism. So that's the problem you always face. And so, if you look at the data, here's what they're monitoring. So here's the Soret band at 400. And so you need to get a good spectrum and convince yourself you're looking at the stoichiometric loading. So you need to know how much heme. Because that's going to affect your absorption spectrum. So you need to know how much is loaded. 
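To make the two limiting mechanisms explicit, here is a sketch with A = IsdA and C = IsdC, using the heme release rate constant quoted just above.

Dissociative route, with heme released into solution first:
\[
\text{holo-A} \xrightarrow{\;k_{\mathrm{off}}\,\approx\,7\times10^{-4}\ \mathrm{s^{-1}}\;} \text{apo-A} + \text{heme}; \qquad \text{heme} + \text{apo-C} \xrightarrow{\;k_{\mathrm{on}}\;} \text{holo-C}
\]

Direct transfer through a protein-protein complex:
\[
\text{holo-A} + \text{apo-C} \;\rightleftharpoons\; \text{holo-A}\cdot\text{apo-C} \xrightarrow{\;k_{\mathrm{transfer}}\;} \text{apo-A} + \text{holo-C}
\]

If the observed transfer is orders of magnitude faster than k_off, the dissociative route cannot carry the flux, and that is the argument for the complex-mediated pathway.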
And then you can do the experiment outlined over there. And what they're monitoring is these small changes. And so, when they do that, they come up with, if you look at the analysis, and I'll show you a few pieces of data, they come up with-- they favor the model associated with the kinetic analysis of all of their data, that you form a tertiary complex-- sorry, a binary complex of the two proteins, and that the heme is transferred from one to the other. And then the apo IsdA dissociates. So this is analogous to what you've seen. And here's some data. So they've done it with every single pair in this particular paper. And what you can do-- and they've done all the experiments the same way, been able to see differences in Soret bands. These were all done with changes in the spectra, the visible spectra. But if you look at this, for example, from the transfer of the heme from methemoglobin. So that's the first step up here. And you look at all the rate constants they measured. The fastest number is 0.31 per second. So that's slow. But you might expect it. It might not be so easy to extract the heme out of hemoglobin. So you might expect this to be slow. But, again, if you look here, and this is a thing that I think hopefully some of you thought about, if you look at the rate constants, these reactions are all bimolecular. But what do they have up there? They have first order processes. So that has got to be telling you something about the interaction-- let's go down-- whether this interaction is rapid and reversible. And in the paper describing this work in detail, and I give you the reference in the PowerPoint for those of you want to look at it, you need to think about the kinetic analysis. So when you're looking at rate constants, you need to think about whether it's first or second order. Somehow they have to get together. If this is doing half and rapid and reversible, you still have a term for that equilibrium step. But then you're looking at a first-order process. So, if you look-- I'm not going to go through the whole thing. But if you look at the transfer from holo IsdB, so that's the second step. It got the B, got the heme out of the hemoglobin. And then you can look at the transfer to all the other proteins. You see that in this case one is 114 per second. So that's a fast transfer. So you can look down. And they've done every single one of these steps. And they've also then also asked the question, can these proteins act catalytically? So then they put it in a small amount of one, and look at the rate constants in the presence or the absence of one. And they conclude that these proteins can act catalytically as well. So these are the kinds-- I don't want to spend a lot of time discussing this detailed setup. Because I think you still have to worry about being covalently attached to the peptidoglycan. But these rate constants are pretty darn fast. And so then the other thing that's interesting is, why does it have an order? Does it have an order? So if you were going to take B, can I transfer it to C? And what are the differences in the rate constants? And here, it's 114 versus 15. So now the question is, did they set up the experiment correctly? They probably used all the same concentrations in the experiment. But one might have a higher affinity than the other. And so, you need to think about all that stuff. And if they thought about that correctly, they really have learned something about the order of addition and the ability of these proteins to act catalytically. 
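If you want intuition for how a sequential "bucket brigade" behaves on the timescales just quoted, a minimal simulation is easy to write. This is only a sketch: it treats each transfer as an irreversible first-order step, and the assignment of the quoted constants (0.31, 114, and 15 per second) to particular Isd pairs is illustrative, not the authors' fitted model.

```python
# Minimal sketch: heme flowing down a three-step bucket brigade,
# each step treated as an irreversible first-order transfer.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.31, 114.0, 15.0   # s^-1, effective first-order rate constants

def brigade(t, y):
    hb, b, a, c = y              # fraction of heme on each carrier
    return [-k1 * hb,            # hemoglobin -> first receiver
            k1 * hb - k2 * b,    # first receiver -> next carrier
            k2 * b - k3 * a,     # next carrier -> terminal acceptor
            k3 * a]

sol = solve_ivp(brigade, (0, 30), [1.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0, 30, 301))
print(sol.y[:, -1])  # by ~30 s essentially all the heme sits on the last carrier
```

The slow first step, extraction from hemoglobin, dominates the overall time course, which is why the downstream proteins never accumulate much heme and can, in principle, work catalytically.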
So this is state of the art right now, the way people are studying this. And this kinetic data has allowed us to come up with that model. I just showed you that there is an ordered way to transfer these systems. And then I just want to say very briefly-- I just want to show you very briefly the structures of these, and just show you again where the state of the art is in this area. All of these NEAT domains-- so NEAT domains are the heme-binding domains, 120 amino acids. They are found in A, B, C, and H. They are found in four different proteins. This is super position of all the NEAT domains. You can see they all look alike. Furthermore, if you go down here, and you look at binding, they all have a pentacoordinate heme with one axial ligand being a tyrosine. The other one, the top face, is apo in this version of it. So they also have a structure of two of these things bound together. Again, these are little domains. And so the question, then, you have to ask yourself, which is this question of rates of exchange of ligands. How is this transferred? Do you have-- how does this interface help this guy move from this protein to that protein? And that's what people are focusing their energies on, trying to think about the detailed structures to come up with a model for how this transfer occurs. And I'll just show you, this is a-- if you go all the way down, you're going to go from C to E. E has a different structure. So the mechanism of heme transfer is different. People have a model for that. You need to think about the details. And now, furthermore, you can go from E to F, all the way through the plasma membrane to the ATPase, which then helps you get the heme, provides the driving force getting heme into the cytosol. So we have a lot of structural data. But what's disappointing, I think, from reading those papers, which I have read, is we still really don't have a good model for how these transfers actually work. So this is an active area of research, and the people interested in the bioinorganic chemistry and how you get heme into cells. So that is the end of module 6. I think we've learned a lot in the last few years about these proteins. But, as you can see, we still have a long way to go in terms of molecular understanding. And so the next module, module 7, is going to be the shortest module. And I'll give you an outline of what I'm going to be talking about. And then, today's-- the first lecture is much longer than the second lecture. And we'll see the second lecture is going to be focused a lot on what we're doing in recitation this week. So if you notice, maybe you haven't, but we posted the readings. And one of the papers for the course, this part of the course, the lecture part of the course, is a Carroll paper you're supposed to read for recitation. So there's a lot of overlap. And so, the second lecture will be much shorter because we're going to draw on what we're doing in recitation, actually today. So let me give you the outline. So module 7 is the shortest. And this is the required reading. So we've posted a review article by Winterbourn, who, in my opinion-- this area of reactive oxygen species and how [INAUDIBLE] to how you control reactive oxygen species. I'm going to show you they can be good. They can be bad. Just like we saw with iron, it's all a question of homeostasis. The most thoughtful discussions have been described by Winterbourn, who is in New Zealand, who really thinks about the kinetics of what's going on. 
And I would argue, you can't do anything in this field without thinking about kinetics, which most people, most MDs in this field, don't think about at all. So the literature is a mess. But I think the last few years it's become-- it's starting to get unmuddy. And I think it's an incredibly important area. I guarantee you that that's true. So unmuddying an incredibly important area is going to be up to you guys. But I think it's going to happen in the next five years or something. I think we've already learned a lot in the last couple of years. So I'm going to have an outline. So we have that paper. And then we have the Carroll paper that you guys hopefully have already looked at in some form. All right. So here, let me just switch. So where are we going? And so, we're going to have a couple of lectures. First of all, what is ROS? So ROS-- a reactive oxygen species. So automatically, there are a bunch of molecules that are reactive. And so, what you need to think about is what does reactive mean? So the first thing is we're going to identify them. The second thing is we're going to look at the chemical reactivity. And, again, the question of chemical reactivity can be quite complex. But I'm going to give you my view of the chemical reactivity and what that view is based on. And then we'll very briefly look over-- since we move from an anaerobic world, whatever, a billion years ago into an aerobic world, like we learned from the last module, the question is, how do we defend ourself against the presence of oxygen with reduced metals? And that's the issue we raised last time. So what are our defense mechanisms? Because we saw we had copper. We had iron. And now we have oxygen. And we'll see that that can be a recipe for disaster unless you can figure out how to control all of that. So, again, it's all homeostasis. So that's the second. That will be the first part of today's lecture. Then we're going to move into the question of the battle between bacteria or viruses or parasites in humans. And what I'm going to talk about specifically is destruction of bacteria by neutrophils. And we'll see that neutrophils are white blood cells. And we will see that they are the first responders. So if you have a bacteria in our bloodstream, the first guys there are the neutrophils. And that's what we're going to focus on. So let me see. All right. I'll go over here. So neutrophils are the first responders. Now, we know quite a bit about this. And really, what we're going to be focusing on in both today's lecture and in the next lecture is the group of isozymes called Nox proteins. N-O-X-- NADPH oxidases. And we're going to talk about that particular protein. And we're going to be specifically focused on Nox2. And we'll see that Nox2's professional job, we'll talk about that, is to generate a reactive oxygen species, superoxide, which is then going to be used in some way to kill bacteria. So we're going to be talking about neutrophils. We're going to be talking about the Nox protein. Also, if you've read the recitation paper for today, what are we talking about in signaling that's oxygen-dependent? The Nox proteins. So here we have bad. We're killing the bacteria. Here we have good. We're using the Nox proteins for signaling. So that's sort of the take-home message is homeostasis. How do you control it for bad versus for good? We've already seen that in the case of the iron system. So the other protein we're going to talk about today, or probably won't get that far today-- how bad am I? 
So we have another protein called myeloperoxidase. And we're going to see that this-- so this guy is going to be involved with superoxide. This guy is involved in the neutrophils with generating hypochlorous acid. So these are the proteins. And these proteins together-- I'll give you the model, the current model. But the current model I'm going to give you is much simpler than reality. But those are the proteins we're going to focus on. And those are two of the reactive species we're going to be focusing on. And then the second lecture goes back to the Nox2 proteins, and the question now of not killing, but signaling. And as we already saw in the last recitation, how were we signaling? We were signaling by a reaction of thiol groups with hydrogen peroxide, which can be generated from superoxide. I'll show you how that happens. So, signaling. And we're focusing on signaling by sulfenylation which, again, is the topic of today's recitation on the epidermal growth factor receptor. So that's where we're going. And I'm going to give you that-- we're going to follow this outline. And I think you'll get a pretty good feeling for it, an overview of reactive oxygen species, even though we're not going through it in a lot of details. It's really complicated. So what I want to do before I get to this slide is give you the big picture. So this is the take-home message. So we have a big picture. And so here we have a cell. And in this cell we have reactive oxygen species. And there were also things called reactive nitrogen species. And you'll see that in some of the slides in the PowerPoint presentation. We're not going to talk about that chemistry. It's interesting. If I had an extra three or four lectures I would also be talking about that. It's central. It's as important as reactive oxygen species. But I've decided to focus on-- that's what I've decided to focus on. So we have hydroxyl radical. We have hydrogen peroxide. We have superoxide. And we have hypochlorous acid. So these are the four species that we're going to be focused on. And you can already see that some of them are radicals. And some of them are not. So reactive oxygen species doesn't mean they have a free radical. They can do one-electron or two-electron chemistry. And we'll talk briefly about that. And so, the question is, where do these come from? So remember, we made a transition from an anaerobic to an aerobic world a billion years ago. And during that process, we have a respiratory chain. In humans, the respiratory chain is found in the mitochondria. Otherwise, it's found in the plasma membrane. So we have complexes I, II, and III. And these guys do chemistry-- their ultimate goal, if you're in an oxygen-dependent world, is to reduce oxygen to water. So this is the goal, oxygen to water. Although, you all know in bacteria if there's no oxygen, you have to have some other terminal electron acceptor. And so what you get from these complexes is uncoupling. So 100% of the time it doesn't do what you want it to do. And so, you have to-- you get side reactions that you have to deal with. All right. Now, what did I do with my chalk? Anyhow. So you get uncoupling. So that's one way they're generated. A second way they're generated we've already been through. We went from an anaerobic world to an aerobic world. What do we have under those conditions? We have iron, and we have copper. And, again, we have oxygen. And so, we can generate reactive oxygen species. 
We've already talked about the fact that iron just isn't freely floating around inside the cell. But if something happens and you have an imbalance in iron homeostasis, it leads to imbalance in oxygen homeostasis. And hopefully, you remember that one of the proteins regulated by the iron-responsive element, iron-responsive binding proteins, was the oxygen transcription, oxygen-sensing transcription factor. Another way that we get these things are from xenobiotics or environmental pollutants. If you smoke, which I guess people don't do. But when I was in graduate school everybody smoked. And they smoked in the lab. Anyhow-- you get-- so pollutants can generate reactive oxygen species. So that's how we get them. What do we do with them? So there's some important things that we can do with them. Here it is. This is what I want. So what are we going to do with them? So one of the things we can do is we have white blood cells and neutrophils. And we kill bacteria or viruses or parasites. So that's one of the good things we can do with them. A second thing we're going to do with them, which is what we're focusing on, is signaling. And we'll see that while we're looking at signaling of growth factors or hormones or cytokines, which is good signaling, we'll see that you can have all kinds of signaling. So it's not limited to the one system we're going to be studying. So it's very broadly defined. This is a huge area of research right now, people looking at this. And the third thing that happens when this is completely out of control is you modify, you damage all the macromolecules, the small molecules inside the cell. So you have extensive damage. You can have extensive damage of DNA-- this thing is not writing very well-- proteins. But it's not limited to that. You have lipids. Lipids are modified by hydrogen peroxide. And so, that's the big picture. That's where we're going. We're focusing on these guys. And we're focusing on good things. And we're focusing on bad things. And how do we control all of that. So in the next-- what I want to do is show you why this is-- why I decided to talk about this. I've always found this area fascinating. And, I must say, I've been going to meetings off and on for decades. And I sort of quit going because I lost information every time I went to a meeting because it was so confusing. Because everybody used different cell types. And they have different kinds of assays. And they didn't pay attention to what the assays were really telling them. But now, I think we're at the time when people really need-- people are doing good experiments. I think we've turned a corner. And so, one of the things that has been in the front pages of all the newspapers since 2007, there was-- we actually-- there was Jim Collins, who was at Boston University, received a huge amount of press on a paper he published in 2007. And since that time it's been extremely controversial. And so, I think it brings up a lot of issues about, again, how you do controlled experiments. So his observation and his conclusions were, from the experiments he published, were very interesting. So we just talked about antibiotics. What do they target? They target cell walls. We have penicillin, vancomycin. They can target the ribosome. You saw that in the first part of the course. You have aminoglycosides. They can target DNA replication. So those are the three sort of major targets. You have the quinolones that do that. 
His conclusion from the paper is that all of these things, the mechanism of cell kill is not involved with the primary targets at all. But it's involved with a downstream target, that somehow these guys undo oxygen homeostasis, resulting in bad radicals, reactive oxygen species, that end up damaging macromolecules, damaging the cell, and resulting in bacterial cell death. So that's the model. This has been quite controversial. I've given you some papers to read. The latest paper was just published online saying reactive oxygen species play an important role in bactericidal activity of the quinolones that target topoisomerase in DNA replication. But we've had two articles published in Science saying killing by bactericidal antibiotics does not depend on reactive oxygen species. So we've had quite inflammatory responses to this paper, which I at one stage didn't believe anything that he did. Because I think if you look at a lot of the original experiments, they use reagents that were completely nonspecific for what they thought they were. But I think I am now-- I think his observations, in fact, were correct. But not-- some of his observations were correct. But the reasoning behind the observations has changed. And I think there is something about this. You do signaling up here. And then where do you see something? Way down here, because you trigger off a signaling cascade. And that's the way the world works inside the cell. So a second example of this which also received a lot of press, this guy-- everybody know who this guy is? The DNA guy. The famous-- he's a male chauvinist pig, but the DNA guy. Anyhow, what he-- there's been a big fight. Are reactive oxygen species good or bad in fighting cancer? So some people say, again, it's the reactive oxygen species that eventually kill the cancer cells. Again, you do something up here. You trigger off a set of events. You generate a pathway that generates reactive oxygen species that helps kill the tumor cells. Or do they stimulate growth-- reactive oxygen species stimulate growth? Or do they trigger apoptosis? So people are still debating that. And there's probably some truth in both of these statements, depending on how you look at it. Anyhow, that's just to get you thinking about the fact that this is an important area that a lot of people are actually focusing energy on. So what I want to do now is we've given a big overview. What I want to do now is look at what are the identification of the species. And the species we're going to be focusing on, again, are superoxide-- I'll write this down later-- hydrogen peroxide, H2O2, hypochlorous acid, and hydroxyl radical. So this is a scheme that was taken from a Winterbourn article. So it's in all of her papers. I don't know if it's the exact same scheme in the reading assignment, but you'll see something similar. And we're not-- I told you we're not going to talk about reactive nitrogen species. We're really focusing on reactive oxygen species. But this gives you the big picture with both. So where are we going to be focusing? Oxygen picks up an electron and goes to superoxide. Superoxide, in the presence of another electron and protons, goes to hydrogen peroxide. So we're going to be focusing here. Hydrogen peroxide in the presence of Iron(II)-- if we've somehow screwed up our iron and we're in the reduced state-- generates hydroxyl radical. Hydrogen peroxide with myeloperoxidase in the neutrophils forms hypochlorous acid. 
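The piece of that scheme we will keep coming back to can be written compactly as follows (dismutation, spontaneous or enzyme-catalyzed, is one common route from superoxide to peroxide):

\[
\mathrm{O_2} \xrightarrow{\;e^-\;} \mathrm{O_2^{\bullet-}}; \qquad 2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2}
\]
\[
\mathrm{H_2O_2} + \mathrm{Fe^{2+}} \longrightarrow \mathrm{Fe^{3+}} + \mathrm{HO^-} + \mathrm{HO^{\bullet}} \qquad (\text{Fenton chemistry})
\]
\[
\mathrm{H_2O_2} + \mathrm{Cl^-} + \mathrm{H^+} \xrightarrow{\;\text{myeloperoxidase}\;} \mathrm{HOCl} + \mathrm{H_2O}
\]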
And so-- at which eventually can chlorinate everything. They chlorinate amino acids. It chlorinates lipids, and can result in extensive damage to whatever hypochlorous acid is adjacent to. So this is where we're going to be focusing, this part of the scheme. Those of you who have seen reactive nitrogen in the species can look at how that gets integrated into this big picture. So I just want to write down one of these things that I think it's important to think about. Ultimately, we're doing chemistry. This-- whoops. I mean, this is like, it's terrible. It's all of a sudden, I look up. It's a good thing I looked up. Because I would have kept going. Anyhow. Sorry. The time is over. But next time we will come back, and we will talk about what I outlined on the board. And I didn't even digress today. I'm just-- anyhow. All right.
MIT_508J_Biological_Chemistry_II_Spring_2016
R5_Overview_of_CrossLinking_Including_PhotoReactive_CrossLinking_Methods.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: What we're going to do today is just discuss a few aspects of cross linking. So we decided it was important to introduce this within recitations this year, because cross-linking comes up time and time again. And there's different ways to do this, and different strengths and limitations to different approaches. So I guess in just thinking about this, what is cross-linking? So if you say, oh, I'm going to use a cross-linker for my experiment, what does that mean? AUDIENCE: Forming a covalent linkage between two molecules of study. ELIZABETH NOLAN: Yeah. So there's going to be formation of some sort of covalent linkage between two or maybe more-- right? Because some cross-linkers can have more than two reactive groups, OK, of study, right? So we're chemically joining two or more molecules. So why might we want to do this? What are possible applications? AUDIENCE: Study protein-protein interactions. ELIZABETH NOLAN: So that's one. So protein-protein interactions, right. And that could be identifying unknown protein-protein interactions or maybe you know two proteins interact, act but you don't know how, right? And you decide to use cross-linking as a way to probe that. So how might cross-linking help with studying a known protein-protein interaction? AUDIENCE: Start getting an idea of where the proteins are actually interacting or which residues [INAUDIBLE] ELIZABETH NOLAN: Yeah. AUDIENCE: It could allow you to isolate them. [INAUDIBLE] ELIZABETH NOLAN: Right. So maybe there's an unknown one, and you fish that out, because a cross-linker was used, right? And you know what one of them are. Or maybe, say, we know that these two proteins interact somehow, but we don't know how. So is it on an interface on this side versus maybe the other side versus maybe behind the board, et cetera. And so, there's many ways to study protein-protein interaction. And really, how I'll present cross-linking today is in the context of this particular application, but there are many others. But if we just think, we've seen a lot of protein-protein interactions in this course, right? So just even today, ClpXP is an example, right? We saw protein nucleotide interaction with the ribosome GroEL GroES is an example of protein-protein interactions, right? And they've been studied by many other methods, like crystallography for instance. But sometimes maybe it's not possible to get a structure, right? And you want to define an interaction surface or know exactly what residues are important. So here, say, is protein-protein. But that could be generalized to any other type of molecule, like RNA, DNA, right? What about a single protein? So can you use cross-linking to learn more about tertiary structure, quaternary structure? So imagine for instance, rather than two separate proteins, we have one protein where there's some flexible linker. And we have reason to believe these different domains interact. But how do they interact? Again, is it something like this undergoes some conformational change and they're like this versus other possibilities here? So what about just other applications of cross-linking chemistry before we look at some examples of molecules? 
So we can capture and identify binding partners, as Lindsey indicated. We can study known interactions. Where else could this come up? While it wasn't defined in this way, we've seen certain technology that takes advantage of cross-linking chemistry often. AUDIENCE: Within the realm of biological things, it's used for-- I mean, if you want to find a functional root. So like bioaccumulation or general bioconjugate chemistry for [INAUDIBLE] ELIZABETH NOLAN: Right. So general. Exactly, general bioconjugate or conjugation chemistry. So maybe you want to attach a tag to a purified protein. Maybe you want to modify an antibody. Similar chemistry can be employed. And likewise, even from an application standpoint, immobilization. So say you need to make your own resin to do some sort of affinity chromatography and you want to attach a protein or an antibody to that, you can use the types of chemistry shown here. So we're going to talk about a few different types of cross-linker and the chemistry, and pros and cons. And just as a general overview, I'll describe types. So we just heard the word homobifunctional. So homobifunctional versus heterobifunctional. OK. And this refers to the reactive groups. So we need to talk about what types of chemistry is going to be used to do cross-linking. So this refers to reactive groups. And then another classification will be non-specific versus specific. And so, this doesn't refer to, say, the chemical reaction between the cross-linker and whatever it's hitting, but rather whether or not the cross-linking reagent is site-specifically attached to a protein or biomolecule of interest or not. If we just think about this non-specific versus specific, if we want to attach a cross-linker at some specific site in a protein, how can we do that? So think back to the ribosome discussion, where unnatural amino acid incorporation was not attached, but was introduced. So that's one possibility. If you have an aminoacyl-tRNA synthetase and a tRNA that can allow some sort of cross-linker to be introduced site-specifically, and it works for your experimental situation, you can do that. So we saw benzophenone, which is a cross-linker, in the evolution of that orthogonal ribosome, Ribo-X. But let's say you can't do that, right? So for instance as far as I know, there's no tRNA-aaRS pair for benzophenone in a eukaryotic cell, right? Or maybe in some circumstance. What is something just using standard biochemistry you could do? So what type of residues can be modified in a protein? AUDIENCE: Cysteine. ELIZABETH NOLAN: Yeah. So cysteine, lysine. These are common side chains that are modified. And what would you say is more typically employed if you want to introduce a site-specific modification using chemistry? AUDIENCE: Cysteine. ELIZABETH NOLAN: Cysteine, right? So if you have an individual cysteine that's in the protein or maybe you use site-directed mutagenesis, you know where that cysteine is, and then you can modify it with some reagent there. We'll come back to that in a minute. So in terms of reactive groups then on the protein, we can think about lysines, right? We have the epsilon amino group, cysteines. We have the thiol. What do we need to think about for our chemistry when thinking about these types of side chains and wanting to do a reaction? So under what conditions do we have a good nucleophile? Pardon? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Yeah. So we need to think about the basicity, right? The pKa of these groups, right? That's very key here for that. 
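To put a number on that pKa point: taking a typical lysine epsilon-amine pKa of roughly 10.5 (an assumed, environment-dependent value), the Henderson-Hasselbalch equation gives the fraction present as the reactive free base at pH 7.4:

\[
f_{\mathrm{RNH_2}} = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}} \approx \frac{1}{1 + 10^{10.5-7.4}} \approx 8\times 10^{-4}
\]

Raising the pH to 8.5 increases that reactive fraction roughly tenfold, which is one reason amine acylations are often run at mildly basic pH; the alpha-amino terminus, with a lower pKa, carries a correspondingly larger reactive fraction at neutral pH.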
What else do we need to think about? What other factors might govern reactivity, just thinking broadly? So pKa. For your amine, it will be type of amine. For a cysteine, redox will play a role, right? You can't have your cysteine in a disulfide. It needs to be the free thiol form. So these are all things to keep in mind. So Alex has used a homobifunctional cross-linker. Why did you use a homobifunctional cross-linker? AUDIENCE: It was to stabilize a nanoparticle. ELIZABETH NOLAN: To stabilize a nanoparticle. OK. So very different type of application here. AUDIENCE: Yeah, that's why I didn't mention it. ELIZABETH NOLAN: That's fine. Yeah. We're not doing much with nanoparticles here. But let's say we want to use a non-specific homobifunctional cross-linker. So this was a non-specific cross-linker to look at some protein-protein interaction, right? So if we just suppose, for instance, we have some protein A and we think it interacts somehow with protein B, how can we use cross-linkers to study this? So let's take a look at an example of a homobifunctional cross-linker in terms of design. So this one will be amine reactive. And its name is DSS here. So effectively, if we want to dissect this structure into different components, what do we have? AUDIENCE: Two leaving groups kind of linking. ELIZABETH NOLAN: So we have two reactive groups, or leaving groups, separated by a linker. And in this case, we have two NHS, or N-hydroxysuccinimidyl, esters, right? That are amine reactive. So what's the product of reacting an alpha amino group or a lysine epsilon amino group with an NHS ester? What do we get? AUDIENCE: Amide. ELIZABETH NOLAN: An amide, right? We get an amide bond. And then we have this linker or spacer region. OK? Here. So two amine reactive groups and a linker, or spacer. And in this particular case, this linker or spacer is about 11 angstroms and it's flexible. And it's stable and cannot be cleaved. So in the case of Alex's project, this was used to stabilize a nanoparticle. Did you have a pure nanoparticle? Or was this in a very complicated mixture? AUDIENCE: It's very not in this course. ELIZABETH NOLAN: So what's going to happen if this reagent, say, is added to cell lysate? What are you going to get? AUDIENCE: Random cross-linking with a bunch of different lysate proteins [INAUDIBLE]. ELIZABETH NOLAN: Yeah. So there's a high, high likelihood of a lot of different cross-links, right? So potentially a big mess, right? High likelihood, right? Because you have no control over where these reactive groups are going to hit. And do most proteins have lysine residues? Yeah. Do all proteins have an alpha amino group? Yeah. Well, some might be modified, but anyhow. You have very little control with this type of reagent. So then the question is, if you use it, how are you going to fish out your desired protein-protein interaction? Or even if you're working with two purified proteins and they have multiple lysines, you can end up getting multiple cross-links, right? So maybe that's helpful for initially identifying that an interaction exists. But in terms of getting more detailed information in terms of how do these actually interact, that may be tough here. OK? So easy to come by, but potential complications. Just in terms of thinking about this in the linker, why is it important to think about the linker and your choice of some reagent here? So what properties does the linker give? 
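Schematically, the reaction at each end of DSS is the same amine acylation (R stands for the rest of the cross-linker; N-hydroxysuccinimide is the leaving group):

\[
\mathrm{R\!-\!C(O)\!-\!O\!-\!NHS} \;+\; \mathrm{H_2N\!-\!(CH_2)_4\!-\!protein} \;\longrightarrow\; \mathrm{R\!-\!C(O)\!-\!NH\!-\!(CH_2)_4\!-\!protein} \;+\; \mathrm{NHS\!-\!OH}
\]

With both ends reacted, the two proteins end up joined through two amide bonds and the suberate spacer, protein-A-NH-C(O)-(CH2)6-C(O)-NH-protein-B.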
AUDIENCE: [INAUDIBLE] to the link, then I guess its flexibility will determine how close the two proteins have to be in space for those to be [INAUDIBLE]. ELIZABETH NOLAN: Yeah. So there's some constraints imposed by the linker in terms of how close together or far away are groups that react. What else comes with the linker? How does it affect the properties of the molecule? Alex? AUDIENCE: I was going to say it can dictate how likely you get a cross-linking on the same molecule between two amines. If you make it short enough, so that it can't reach the next lysine or something, then it can prevent [INAUDIBLE] ELIZABETH NOLAN: Yeah. May be able to. So what's an inherent property of a molecule? AUDIENCE: It might affect solubility. ELIZABETH NOLAN: Yeah. Right. It may affect solubility. So linkers can be-- this is a bunch of CH2 groups, relatively hydrophobic, right? There can be more hydrophilic linkers or other strategies. And then the question is, does that matter? Do the solubility properties work with your experiment or not? But imagine if you want to do cross-linking in a live cell, you need that cross-linker to get into the cell. So you need to think about membrane permeability and what happens after that. Here. So the linker is another critical aspect. And so, if you're ever working with a cross-linker, that's something you want to think about in addition to what types of side chains or what types of biomolecules do you want to modify. So let's look at an example of a heterobifunctional linker. It's not linker. Yeah. Well, it is cross-linker. OK. So this one will have a different type of spacer group. So it will be with a cyclohexyl. So what do we have in this case? Steve? AUDIENCE: So you have an NHS ester and also a maleimide. And then the sulfonate group probably helps the solubility. ELIZABETH NOLAN: Right. So there's a bunch of interesting aspects to this molecule. So we have the NHS ester to react with an amine. Right here we have a maleimide, which will react with thiols. So heterobifunctional, because there's two different reactive groups for different types of side chains. And then, as Steve mentioned, we have this group here. And so, this is to improve water solubility. OK. And then what do we have in this linker region? AUDIENCE: A cyclohexyl instead of the aliphatic-- ELIZABETH NOLAN: Yeah. And what does that give? AUDIENCE: Isn't it rigid? ELIZABETH NOLAN: Yeah, exactly. Like cyclohexyl, right? Think about chair conformation, rather than what I have done here. But it will give a more rigid linker, and also shorter than what we see up here. So this is on the order of eight angstroms. So how might this molecule be used? What could you do with it that you can't do with this one? AUDIENCE: Cross-link cysteine and lysine [INAUDIBLE] ELIZABETH NOLAN: Yeah. Well, that's the first point, right? You can have two different groups. One end will react with a cysteine. One with some lysine. So is this specific, or non-specific, or both? AUDIENCE: Probably depends on the context. ELIZABETH NOLAN: Yeah. Right. Could depend on the context. And then from the standpoint of specific cross-linking-- which I would argue is a better use of this compound-- what can you do? Just imagine you have some protein of interest and maybe you want to label it here. And you have some side chain. So site-directed mutagenesis to put in a cysteine. And then you can modify that there, such that you have a cross-linking reagent, right? And then you can imagine whatever your experiment is here. 
So again, thinking about using this compound in, say, a complicated mixture, like a cell lysate-- you want to see if there's any binding partners or whatever. What's the limitation in terms of reactivity of this amine group that you would use in that second step? Where do you lack control? AUDIENCE: You still can't control for the alpha for the N-terminal reaction, right? ELIZABETH NOLAN: What do you mean by that? AUDIENCE: So if the [INAUDIBLE] is free, then would you have comparable reactivity between the N-terminals, and, for example, your desired lysine? ELIZABETH NOLAN: OK. So that could be an issue. So do lysines and N-terminal alpha amino groups have different reactivity? Do they have different pKas? And is that something you could control? Maybe, maybe not. But more broadly than that, so you have an issue that it will react, let's say, with any amine, right? Can you control when it reacts? AUDIENCE: To some extent [INAUDIBLE] pH. ELIZABETH NOLAN: So what are you thinking? AUDIENCE: If you-- ELIZABETH NOLAN: So if you think about just experimental design, right? And say you were to try to use pH to control reactivity-- and I'm defining this broadly-- reacting with any amino group. So we're not going to try to do something to selectively label one, right? This is reactive. It will react, right? So would pH change your whole buffer? Or pH change the cell lysate, and then switch to turn on reactivity? Probably not. Probably not, right? That, I'd say is not very likely. So the issue I'm getting at here is that you have little temporal control or spatial control of an NHS ester. It will react with an amine provided your conditions are appropriate. So just getting back to this pH issue and a little digression, if you want to use something like an NHS ester in a test tube experiment, what do you need to think about beyond pH? So what do you need to think about with the buffer? AUDIENCE: You don't have something [INAUDIBLE] buffer [INAUDIBLE] so you might want to use the phosphate buffer, something that doesn't [AUDIO OUT] ELIZABETH NOLAN: So this is a key point. You need to think about cross-reactivity with the buffer. So if you have Tris buffer, you have an amine. If you have a buffer that's like glycine, there's an amine, right? And your buffer concentration in most instances is much higher than whatever the concentration is of the molecule you want to actually modify, right? If you think about 10 millimolar Tris or 75 millimolar Tris compared to micromolar or nanomolar of some protein, so you need to have a buffer that's not reactive. You need to have an appropriate pKa. Those are important considerations. You need to know that your reagent is good. Sorry, appropriate pH. What about the thiol here? What do you need to think about if you're doing a test tube experiment and want to modify a thiol with a maleimide or something else, like iodoacetamide that we saw last time? AUDIENCE: Buffers need to avoid DTT. ELIZABETH NOLAN: So what's DTT? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Right. Or BME, beta-mercaptoethanol. Right. Even before that step, what do you need to make sure? So what if there's multiple cysteines? AUDIENCE: That they're not [INAUDIBLE]. ELIZABETH NOLAN: Right. So either inter- or intramolecular, right? So if a reducing agent's added and the reducing agent is thiol-based, again, you're going to have much more reducing agent than your protein of interest, right? So you don't want your thiol-reactive probe to react with the reducing agent in the buffer here. 
So that needs to be removed. And then if you remove it, you need to ask, does the thiol stay reduced or is it susceptible to air oxidation? So these are just all practical considerations to keep in mind. If a reaction doesn't work, why doesn't it? And was it something that wasn't right with the buffers there? OK. So back to this issue of not having much timing control for reactivity of these types of groups, what could be done to overcome that? So what other types of cross-linkers are out there? Yeah. Photo-active. Photo-reactive cross-linkers. So what's the idea here? AUDIENCE: [INAUDIBLE] the appropriate [INAUDIBLE] ELIZABETH NOLAN: Yeah. So what do we have? And what can we do? So just the first point to make is that we want to think about specific labeling here. So we can attach site-specifically to a protein or some other biomolecule, maybe it's by cysteine modification with something like a maleimide. Maybe it's unnatural amino acid incorporation. And it's chemically inert locally until irradiated. OK? And so, basically irradiating this photo-reactive cross-linker will activate the photo-reactive group, and then you get cross-linking. OK. So this type of approach is often used to capture binding partners. It can be used in the test tube or in cells. What are the types of photo-reactive cross-linkers? AUDIENCE: Aryl azides. ELIZABETH NOLAN: Yeah. So aryl azides are one type. What's one we saw in class? Although we didn't talk about photochemistry. Yeah, benzophenone. And there's some other examples. So what's another example? AUDIENCE: Fluorinated [INAUDIBLE] ELIZABETH NOLAN: Yeah. So they fall in here, right? So we can think about, either just phenyl azides or fluorinated phenyl azides. So another way to do this is to generate carbenes via diazirines here. We'll pretty much focus on these types, which are major types. So where did this idea come from? How new is this type of work to stick a photo-reactive group on a protein, and then use it in a cross-linking application? And where did the idea come from in the first place? What types of chemists often study photochemistry? AUDIENCE: DNA [INAUDIBLE] ELIZABETH NOLAN: More broadly. So physical organic chemistry, right? There's a whole component of photochemistry there. Let's take a vote. 2000? First photo cross-linker. 1990? '80? '70? '60? Just no clue? So around 1962 was the first paper using a photoreactive group on a protein here, Westheimer. And then Jeremy Knowles in 1969 was the first example of an aryl azide. OK? So this work came out of physical organic chemistry and at a time where physical organic chemists were transitioning into enzymology. So we don't have time to go into a lot of the photochemistry of these different moieties, but it was quite rich there. So how does this work? What types of reactions and groups get modified here in the cross-linking? So let's think about them. So let's consider an aryl azide. So what happens when aryl azides are irradiated with UV light? AUDIENCE: Took all of the nitrogen gas. Get a nitrene. ELIZABETH NOLAN: Get a nitrene. Yeah. So if we just think about nitrenes for a minute, what types of chemistry do nitrenes do? Are they reactive? So can they insert into C-H bonds? N-H bonds? Add to double bonds? Can they do other things as well? OK. So here we have our protein. What's going to happen? As Steve said, we're going to generate a nitrene. So how does that happen? We irradiate with light to get our nitrene. 
So what happens with these aryl azides is some interesting photochemistry when you're at, say, room temperature. So rather than this nitrene reacting, say, with a C-H bond or an N-H bond, it actually undergoes a ring expansion. So what we get-- and this is very fast. So on the order of 10 to 100 picoseconds. So this happens before it has a chance to react with something else. OK. To give us this intermediate. OK. And so, this species has very different chemistry than a nitrene. And what happens is it will react with nucleophiles. So imagine our amino group to give the cross-link. So this pathway is the dominant pathway if just an aryl azide is used here. To think about from the standpoint of wanting to do cross-linking. So let's say you attach this aryl azide to a protein of interest, and then you irradiate with light and look for it to cross-link with something, is this an issue? It will form a cross-link. Would you rather have a nitrene react or this reaction with the seven-membered ring? AUDIENCE: [INAUDIBLE] nitrene, but it would depend on what you're actually looking at, like what you were investigating. ELIZABETH NOLAN: OK. So why would you argue for the nitrene? AUDIENCE: Because we were talking about the nitrene does have the capacity to do a [INAUDIBLE] So if you wanted to do something like that kind of chemistry, then having this be the dominant pathway would be inefficient. ELIZABETH NOLAN: Yeah. There's a lot more C-H bonds than there are lysines or N-termini. So that's one aspect. We've lost that chemistry. And then to another point, how well did these reactions work? So nitrene reactions are very fast. Relatively speaking, this is kind of sluggish there. And so the question is, what can be done in order to improve upon this? OK. And Steve mentioned these fluorinated phenyl azides there. And so, photochemical work, unrelated to any sort of biological cross-linking chemistry, showed that if you fluorinate aryl azides you can get nitrene reactivity, rather than this other pathway here. OK? And so, if we just take a look at that, what happens? For instance, imagine we have this tetrafluoro analog here. We can imagine irradiating this and getting to our nitrene. I'm going to skip the steps. OK. Now what can happen? Imagine we have some C-H bond nearby. We get this cross-link. And this reaction is very, very fast here. Very, very fast. Can ring expansion occur in this situation? OK. So I'm pointing this out, because the language in the packet was a little strong. If there is something for this nitrene to react with nearby, it will react. But this can undergo ring expansion. It's just much slower than the case above. So the studies I've read say about 170-fold slower there. So it's not that the pathway is completely blocked. It also depends on the experimental conditions. But anyhow, this is quoted to be near diffusion controlled here for that. I mean, this is pretty interesting when you think about it, right? Because aryl azides, they can be fed to cells to do unnatural amino acid incorporation, right? They're used in click chemistry for instance, types of conjugation chemistry. But here, the photochemistry can be taken advantage of to give a cross-linker that can be controlled in a temporal manner there. So what about the benzophenone? What does benzophenone react with after being irradiated with light? So imagine you have your protein. And maybe in this case you did unnatural amino acid incorporation to site-specifically attach a benzophenone. What happens? 
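One way to see why perfluorination helps is as a simple kinetic competition, using the numbers quoted above as order-of-magnitude values (ring expansion within roughly 10-100 picoseconds for the parent phenylnitrene, and about 170-fold slower for the tetrafluoro analog): the fraction of nitrenes captured by an insertion partner is just a branching ratio.

\[
f_{\mathrm{insertion}} = \frac{k_{\mathrm{insert}}[\mathrm{C\!-\!H}]}{k_{\mathrm{insert}}[\mathrm{C\!-\!H}] + k_{\mathrm{ring}}}
\]

When ring expansion dominates, slowing k_ring by about 170-fold buys you up to the same factor in useful insertion yield, which is why the fluorinated azides behave like real nitrene probes while the parent azide funnels almost everything through the seven-membered ring intermediate.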
What happens is that there's formation of a triplet diradical. And what will this do? Here, it's going to react with some C-H bond to get the cross-link. Let's say you have this guy here and you want to do a cross-linking experiment. So we can imagine some different possibilities. What do you think you'll get out? Are you only going to get out your desired cross-link? What might happen? 2:00, 1:50 on a Friday. Let's get some jumping jacks. Come on. Should I dismiss all of you, because there is a major energy low today, I have to say there. Yeah. Do you think you'll get one product, 10 products, 100? AUDIENCE: There will be lots of side reactions. ELIZABETH NOLAN: Right. There's still the possibility for many side reactions, right? And you always need to be aware of that. So if you cross-link something, the next question is, is this something that's actually relevant or not? Or is it an artifact there? So the analysis can be very complicated. And so, that's just something to think about. Say you have cross-linked species from cell lysate, what are you going to do to analyze that? Just think about some of the things that have come up in other contexts here. We talked about protease digest and mass spec for looking at substrates of GroEL GroES, that's something that can be applied. And there's many sophisticated new tools to get a lot of information out of the mass spec, which we won't talk about. But having tags within the cross-linker, right? So then you need to ask, how well is the coverage going to be? So even after this step, there's a lot more work, which we won't go into details in this recitation today. What about inherent efficiency of cross-linking in terms of these benzophenone versus the aryl azides? We want to think about relative cross-linking efficiency. Any sense of that? AUDIENCE: I think the benzophenone compared to the diazirine is a lot less efficient. I don't really know [INAUDIBLE] AUDIENCE: I have a question. When we're talking about efficiency, is it purely based on the speed of this reactivity? Or is it also taking into account the different cross-reactions that could occur? Because it seems like there are more possibilities for more cross-reactions. Even though it might be more reactive, it's not-- ELIZABETH NOLAN: Yeah. The former, right? Just thinking about the reaction. There's the possibility of cross-reactions for all of these. They're highly reactive. A nitrene is highly reactive. The benzophenone triplet diradical is highly reactive. A carbene, if you're going to get that from some diazirine is very reactive. And yes, it's something important to think about in terms of your experiment. What is the relative efficiency of the reaction? So I said that aryl azide is a little sluggish compared to the others. Something to consider, right? You know what is the timescale of whatever it is you're trying to trap. So the wavelengths. What is it about these wavelengths that might be undesirable? AUDIENCE: For in vivo studies, one shifting towards UV means that you can have issues undesirable, like DNA cross-linking stuff, but also it means that it's not going to have deep penetrants [INAUDIBLE] shift towards [INAUDIBLE].. ELIZABETH NOLAN: What wavelength would you like? JOANNE STUBBE: I would like it around 650. These are all UV visible interface. And you have hundreds of things that absorb length are very incredible inefficient. [INAUDIBLE] Most people never identify what they get out of the other side. 
They just see two things stuck together, and that's the extent of it. They never describe the molecular details. ELIZABETH NOLAN: So let's actually just close-- JOANNE STUBBE: [INAUDIBLE] ELIZABETH NOLAN: Right. So one of the questions I asked in the discussion section, is it worth the effort if you're going to site-specifically put in a cross-linker? And imagine you find this protein-protein interaction, if one chooses, you can do quite a bit more experiments in terms of where you place this cross-linker and mapping out that interaction region there. And so, that's I think also just a take-home is often you need to put your reactive group in more than one place to really get at the answer to the question you're asking. And so, there's folks around doing that there. But is it 20 positions? Is it 10? Is it 50? Because if you don't know at the beginning, you may need to do a lot of just systematic trial and error for that. Yeah. So I think you should all read the packet. And there are some suggestions for reading if you're curious to learn more, one of which is a manual from Thermo. So often, the companies give a lot of good general background information, and there's many different types of chemistry included in that as well. I'll also point out, Ed is here for those of you who don't know Ed. So he'll be presenting next week on cryo-EM. And you should definitely read the fatty acid synthase paper beforehand. The structures are incredible. And fatty acid synthase serves as a base for our discussions of polyketide and polyketide synthases, which is where we'll begin module four in thinking about the biosynthesis of natural products there. OK. Have a good weekend.
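One quantitative footnote to the wavelength discussion above: the energy per mole of photons is E = N_A h c / lambda, so the near-UV light used for azides and benzophenones carries bond-breaking amounts of energy, while light around 650 nm does not. A quick check in Python using only the standard physical constants:

# Photon energy per mole, E = N_A * h * c / wavelength.
h = 6.626e-34    # J s
c = 2.998e8      # m / s
N_A = 6.022e23   # 1 / mol

for nm in (254, 302, 365, 650):
    kJ_per_mol = N_A * h * c / (nm * 1e-9) / 1000.0
    print(f"{nm} nm -> {kJ_per_mol:.0f} kJ per mole of photons")

# Typical C-C and C-H bond energies are roughly 350-420 kJ/mol, which is why
# the UV wavelengths needed for these cross-linkers can also damage proteins
# and nucleic acids, whereas ~650 nm light (about 180 kJ/mol) cannot.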
14_Protein_Degradation_3.txt
NARRATOR: The following content is provided under a Creative Commons license. Your support will help MIT Open Courseware continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseware at ocw.MIT.edu. PROFESSOR: So we have the five-step model. And what we're going to do-- this model was presented last time. And what we'll do is look at experiments that were designed to look at the denaturation, translocation, and degradation processes here. So one question is, can we separate denaturation from translocation in experiments to learn about the rates of each process. And also, how can we examine the role of ATP? Because that's a question key question here-- how is ATP hydrolysis by ClpXP allowing this macromolecular machine to work? And so we're going to begin with some experiments that involve a GFP substrate. So these are some studies of ClpXP activity with a substrate that has radio-label GFP ssrA. And so if we think about this substrate here, we have a radio-label-- bless you. We have green fluorescent protein. And this has a particular fold. So we have a folded substrate that's fluorescent. And here we have our tag that will direct the GFP to the ClpXP degradation machine. And so this substrate has been used to look at both degradation and unfolding. We'll get to the translocation issue in the second type of substrate we examine today. And so if we think about degradation, this is where the radio-label comes in. And if we think about unfolding, this is where the fluorescence comes in. And so what we're going to look is at degradation and denaturation assays using this substrate. And so just as a reminder, for anyone not familiar with green fluorescent protein, I might just show you the barrel-like structure here and the chromophores in the interior. And so in order for GFP to fluoresce, it needs to have its proper fold. And if it's denatured, that fluorescence emission is lost. So let's first look at a degradation assay. So this is experiment one. So what is the experiment? We have GFP here. And it has this ssrA tag. And we're going to incubate GFP with ATP and with ClpXP for some period of time. And then we're going to stop this reaction with a quench. And the quench will be acid. And so if we think about this protein and this degradation process by ClpXP, what are possible products? So maybe there's some GFP ssrA that hasn't yet been degraded, depending on your time point. And we can imagine some of these short polypeptide fragments of seven to eight amino acids. And so we have this radio-label. And what we want to do is track the radio-label. So here, we have radio-labeled protein. And here, we have radio-labeled peptides. And so if we want to quantify how much degradation occurred, somehow, we need to separate these. And so what is a way we can do that here-- something simple. And so what you want to think about is just relative solubility under acidic conditions. So if we have this large GFP that's folded, when that is treated with acid, GFP is going to precipitate. So this will be insoluble. And in contrast, these peptide fragments will be soluble, in most instances. So as a result, we can take advantage of this differing solubility, effectively to centrifuge the mixture. And we can measure the radioactivity in the pellet and in the supernatant. And then we can quantify degradation here. And so what are the results? 
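Before looking at the results, the quantification step just described can be written out explicitly. A minimal sketch in Python, assuming scintillation counts are measured for the acid-insoluble pellet (intact GFP ssrA) and the acid-soluble supernatant (released peptides); the count values below are invented purely for illustration.

# Percent degradation from an acid-quench / centrifugation assay.
# The cpm values are hypothetical placeholders, not real data.

def percent_degraded(cpm_supernatant, cpm_pellet, cpm_background=0.0):
    """Fraction of the radiolabel released as acid-soluble peptides."""
    soluble = cpm_supernatant - cpm_background
    insoluble = cpm_pellet - cpm_background
    return 100.0 * soluble / (soluble + insoluble)

timepoints_min = [0, 15, 30, 60]
counts = [(150, 9850), (2400, 7600), (4300, 5700), (6900, 3100)]   # (supernatant, pellet)

for t, (sup, pel) in zip(timepoints_min, counts):
    print(f"t = {t:2d} min: {percent_degraded(sup, pel):5.1f} % degraded")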
So the result-- here, we can imagine a plot where we have percent of the substrate versus time. And these reactions were conducted over the course of an hour. And what's observed here-- for instance, we have reactions where the substrate was incubated with ClpXP and ATP. So we see that over time, there's a decrease in the percent of GFP ssrA. And from these data, we can get some degradation rate. And we'll come back to that degradation rate in a little bit here. So what we see here is degradation. What happens if we add an inhibitor of the protease? So in the introductory lecture, we talked about a number of different types of inhibitors. And so that experiment was done. And so here, if we take ClpXP plus ATP plus inhibitor, what we see is no degradation. And the name of this inhibitor is DFP. And effectively, it covalently modifies the serine, in terms of what was used. So what do we conclude from these data? If the active site serine of ClpP is covalently modified with an inhibitor, which is diisopropyl fluorophosphate, we lose activity. So that serine is important. So what about unfolding or denaturation? How can we get at that? So that will be experiment two. And in terms of thinking about denaturation, rather than the radio-label, we're going to think about the GFP. And so imagine we have our folded GFP-- however we want to show that here-- that has this ssrA tag. So this is folded and fluorescent. So this gets denatured by the ClpX component of the machine to give us some unfolded polypeptide that has this ssrA tag-- unfolded and non-fluorescent. And then what happens? This gets degraded. And we get these fragments. And these fragments are also non-fluorescent. So effectively, we can perform the exact same assay as we did in experiment one. But we'll look at fluorescence as a readout rather than quantifying radioactivity. STUDENT: So if you take ssrA on any protein, would ClpXP break it down? [INAUDIBLE] PROFESSOR: Yeah, so GFP probably isn't a native substrate of-- definitely not of E. coli ClpXP. What happens in a system that expresses GFP natively, I'm not sure. But yes, this has been a wonderful tool for experiments because many different protein substrates can be modified with this ssrA tag and directed to ClpXP. This is just one example. So I think broadly, we can think that there are many, many possibilities for what can be delivered. Are there certain proteins that ClpXP just can't deal with? That's a possibility. So the problem set for the upcoming week has a case where there's a disulfide bond, for instance, and asks what happens when we have some other types of structural features within a designed substrate. But for the purposes of this, yes, we can attach ssrA onto some protein that we can use to study the system and therefore do the experiments. And does that make sense, also, just thinking from the standpoint of what types of polypeptides might get directed to ClpXP in vivo? The ribosome could stall with many different types of proteins being synthesized there. So pretty versatile here. So we're going to perform the same assay. But we're going to measure fluorescence rather than radioactivity. And so what is the result? So here, we have fluorescence. And again, now, we have the percent of folded GFP-- 100. And again, we can imagine this going down from zero to 60 minutes. So here we have ClpXP. What happens if we have the inhibitor, for instance? What they found-- and I'll draw the inhibitor in a minute because I'm sure some of you are wondering. And here we have ClpX alone.
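Before interpreting these curves, it is worth noting how a number is actually extracted from them: the early, roughly linear part of each fluorescence trace gives an initial rate of unfolding. A small Python sketch, assuming numpy is available; the trace below is simulated with made-up values rather than taken from the real experiment.

# Initial unfolding rate from the early part of a fluorescence-loss trace.
# Data points are simulated placeholders (percent folded GFP vs. time).
import numpy as np

time_min = np.array([0, 2, 4, 6, 8, 10])
percent_folded = np.array([100.0, 94.0, 88.5, 82.8, 77.4, 72.1])   # hypothetical ClpXP + ATP trace

slope, intercept = np.polyfit(time_min, percent_folded, 1)   # linear fit to the early points
print(f"initial unfolding rate ~ {-slope:.1f} percent of GFP-ssrA per minute")
# A flat trace (for example, ClpX alone) gives a slope near zero,
# i.e., no detectable unfolding.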
So how do we interpret these data? So if we have the full machinery-- ClpXP and ATP-- we see a loss in fluorescence over time, which indicates a loss in folded GFP. So the substrate is being denatured. What about this case here when we only have ClpX present? And also, it won't have ATP. What's happening there? STUDENT: Without ClpP, there's no actual degradation that goes on. PROFESSOR: Do we need to see degradation in this assay? That's true, but what is this assay giving us a readout on? Just unfolding. So what do we learn from that? Rebecca? STUDENT: ClpX needs to be found to ClpP to be in the correct confirmation to unfold. PROFESSOR: Yes, yes. So this indicates that ClpX and ClpP need to be in complex in order to allow unfolding to occur. So thinking to the cellular environment, does not make sense? Yeah, I'm seeing nodding heads. Yes, right. So we wouldn't want just ClpX to be able to bind and unfold anything it comes into contact with there. And in terms of this inhibitor, we're seeing that it's not unfolding very well. So this inhibitor is for the protease. Just for that structure, effectively, what we have here-- you actually saw this in the lecture slides from last time. So this is DFP. And effectively, what it does is it will modify a serine side chain to give us this species here. And that will block proteolytic activity. So how did these data compare? How does the denaturation and degradation data compare? And so we can look at what was done. And effectively, what we want to ask is, how did the steady state kinetic data compare? And so steady state experiments were done, of course, with varying substrate. And the data were re-plotted. And so those data are shown here. And what we're looking at on the y-axis is the loss of substrates-- so GFP ssrA versus the concentration of substrate. And what we see is that in circles, we have the fluorescence data. And in triangles, we have the data from radioactivity. So what does this analysis tell us? STUDENT: The data set doesn't look that complete. But it looks like they're on about the same time scale. PROFESSOR: They look very similar. We're getting the same steady state kinetic parameters for both analyses here. And yes, it might be nice to have more data. But that's just not available. So all of these data can be fit to the same kcat and km. So what do these data tell us about a rate determining step, for instance? Not very much. And we also haven't yet thought about this issue of translocation. We're just seeing the unfolding step and seeing the degradation step in this assay here. So we need some more information. So if we think about this, we have denaturation versus translocation degradation. And so far, we've been able to look at this and this. And our intuition tells us degradation by the protease should be very fast. So can we learn something about translocation which we weren't able to see in these experiments here? And so that's what we want to focus on now because there was no readout on this step from experiments one and two here. So is it possible to separate denaturation and translocation with some strategically designed substrates? STUDENT: From this experiment, can't we deduce that translocation step is much slower than denaturation? PROFESSOR: Can we? How? Yeah, there's just no readout because this loss in fluorescence is just telling us that the protein is folded or unfolded. And the degradation is just telling us what happens in the protease chamber. 
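To restate the steady-state comparison just made in equation form: both readouts give rates that saturate with substrate and can be fit to the Michaelis-Menten expression v = kcat[E][S]/(Km + [S]), and the conclusion above is that both readouts return the same kcat and Km. A minimal fitting sketch in Python, assuming scipy is installed; the substrate concentrations and rates are placeholders, not the published data.

# Fit steady-state rates from the two readouts to the Michaelis-Menten equation.
# All numbers here are illustrative, not the actual data set.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S_uM = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])                  # GFP-ssrA (hypothetical)
v_fluorescence = np.array([0.30, 0.60, 0.90, 1.20, 1.50, 1.60])   # unfolding readout (uM/min)
v_radioactivity = np.array([0.28, 0.62, 0.88, 1.22, 1.48, 1.62])  # degradation readout (uM/min)

for label, v in [("unfolding (fluorescence)", v_fluorescence),
                 ("degradation (radiolabel)", v_radioactivity)]:
    (Vmax, Km), _ = curve_fit(michaelis_menten, S_uM, v, p0=(2.0, 1.0))
    print(f"{label}: Vmax ~ {Vmax:.2f} uM/min, Km ~ {Km:.2f} uM")

# If the two fits agree, the steady-state data alone cannot say whether
# unfolding, translocation, or proteolysis is rate-determining.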
So what happens from that point-- unfolding to degradation-- in between, we don't know here. So what we need is a new set of substrates that are going to let us get at this and allow us to separate denaturation from translocation experimentally. And so what was the idea for doing this? The idea was to take some protein that's been studied and take that protein and a series of mutants of that protein that have also been studied. And the key here is that the mutants of the protein have varying instabilities-- so varying instabilities of the fold. And so you can imagine that there have been many studies of protein folding out there asking the consequences of making point mutations in a given protein fold on stability there. And so that's exactly what was done. So what we need is a new set of substrates to probe effectively denaturation and translocation in more detail here. And the key question is, is it possible to separate denaturation from translocation? And so what was done is to take an immunoglobulin-like domain from a protein found in striated muscle that has been the subject of many studies and mutants of this protein and to employ them in assays. So we're going to take a protein plus variants with varying stabilities and perform this assay and compare the data. And so here is the protein that was used as a model substrate. So shown here, this is the titin I27 domain that has an ssrA tag attached. OK so if we take a look at this protein that has a beta sandwich fold, we see that there's a disulfide bond. There's a single tryptophan residue. And this is helpful because tryptophan residues have intrinsic fluorescence that's sensitive to the environment. And we see it's buried in the inside here. So in a hydrophobic versus hydrophobic environment, the fluorescence will differ. And so we can use that as a readout of unfolding here. And this is just an example of data from a prior study where this protein and various mutants of the protein like here, valine 11P Y9P were studied for stability of the fold. So guanidinium, we learned that's the denaturant in the folding section. So these various point mutations have different stabilities. And we can see that in these denaturation curves here. So what was done in their experiments were very similar to what was done before. So we take this titin radio-labeled-- bless you. So this is experiment three with the ssrA tag. Incubate with ClpXP with ATP and asks what happened. And in terms of these substrates, we have the wild-type, we have the mutants, as shown up here. And we have CM, you'll see in the data, which is chemically modified. And these chemically modified variants are completely denatured-- we can consider them. And so effectively, what was done here with cysteine modification, with iodoacetamide. So we saw that in discussions-- introductory discussions-- about unnatural amino acid incorporation. So the disulfide bond is completely disrupted here. The disulfide can be reduced, the cysteines modified, and we get an unfolded version here for that. And here what do we find? So there's a number of different point mutants that are listed here. And we're just going to look at a few, in terms of what they found. So in terms of degradation assay, which is how they did this readout, we're going to have the percent titin remaining. So again, using radioactivity in the supernatant or pellet versus time-- what did they find? So if we take a look at a selection of the data-- just put three examples-- here, what do we have? Here, we have wild-type. 
Here, we have one of the mutants, V13P. And here, we have chemically modified wild-type. So what do these data tell us? STUDENT: Degradation is faster. It's [INAUDIBLE] PROFESSOR: Yeah. That's one thing we see here. So this chemically modified protein is denatured. And we see that the denatured protein is easier for ClpXP to degrade than the native protein. We also see that the mutant is more rapidly degraded than the wild-type. So ClpXP is having an easier time with this one here too. So there's an apparent correlation here between the ease of unfolding and the ease of degradation. A protein that's already unfolded, or is relatively easy to unfold, is degraded more rapidly than the wild-type protein that has this beta sandwich fold here. If we think about the processes happening in each of these and we think back to that five-step model, what's happening? So here, we have denaturation plus translocation plus degradation. And likewise, here, we have these three parameters as well. And in this case, we don't have denaturation. We just have the translocation and the degradation here. STUDENT: Why are the rates linear here? And it was not linear in the previous one. PROFESSOR: Just imagine this is-- well, one is a completely different substrate. The time frame, I haven't given here. Don't worry about that. We're just looking at that one part. So here's the actual data from the report. And now, what we want to do is, using this whole family of substrates-- so the native I27 domain, the various point mutants, and these chemically modified forms-- look at the details of the steady state kinetic analysis. And we also want to look at what's going on with the ATPs. So what is the rate of ATP hydrolysis? And how many ATPs are hydrolyzed? We know nothing about that yet, in terms of the data that's been presented so far. So what we're going to do is take a look at this dataset and see what we learn here. So there's quite a bit of data in here. But we're just going to systematically work through it. So what do we see? Here, we have all of the different I27 domain-based substrates they used. And the table is divided basically in terms of whether or not the protein was chemically modified. So on top, we have wild type. And then we have these four-- or sorry, five-- point mutants. And in the bottom half of the table, we have the chemically modified wild type and the chemically modified point mutants. So these begin with a fold, and depending on the mutation here, there's differing stability of that fold. And here, we have unfolded variants because the disulfide was disrupted. So what are we looking at? We have degradation, we have km, we have denaturation, and then we have the ATPase rate, and the number of ATPs per I27 domain degraded here. So the question is, what do we learn from each column of data? So if we take a look at these degradation rates here, what do we see? So what happens amongst the proteins that are not chemically modified? And don't try to over-analyze it, just look for the obvious differences here. So what's the slowest? Wild-type, right? Similar to what we saw here, and that makes sense because wild-type has the most stable fold, just based on what we saw here. And then what do we see for the mutants? There's variability. And all of these values are greater. How do they compare to the chemically modified variants? And what do we see here? These are the fastest. And they're all pretty similar. So these data agree with what we drew up here. What about the km values?
So are these all similar or different? All pretty similar, yeah. And why does that make sense? So that indicates that ClpX binds all of these substrates in a similar way. They all have the ssrA tag there. So we can't attribute any changes we're seeing in rate to this km value here. What about these denaturation rates? So we don't have any values for the chemically modified forms because they're already denatured. What do we see? We see the wild-type is more difficult to denature-- so the slower rate-- than these point mutations here. And you could imagine if you were the researcher going back and comparing these data to what's known about the relative stabilities of each fold from other data in the literature from studies like that guanidinium denaturation on the prior slide here. So what about the data in these columns? What do we see? So here, we're looking at ATPase activity. STUDENT: In that case, it's slower and less efficient for wild-type than chemically modified. PROFESSOR: Yes. Yes. That's certainly the case. So, first, if we look at wild-type, and even for that matter, these single point variants, versus these chemically modified forms, we see that the wild-type has a value of about 150 per minute. And these are slightly higher. We see these are on the order of about 600 per minute. So in a way, these fall into two groups-- the chemically modified forms defined one group. And this wild-type and single point mutants define another group here. And the wild-type has the slowest ATPase right here. And then in terms of efficiency, as you mentioned-- maybe that's in terms of the number of ATPs degraded-- what do we see here? What is incredibly striking about these data? We're seeing about 600 ATPs for I27 domain degraded for this wild-type that's a huge number of ATPs-- so 600. What do we see for these denatured variants? They're all around 115 ATPs per substrate consumed here. So many, many ATPs are consumed here. Many ATPs are required to denature that native substrate. And it looks like many ATPs are required for translocation here. And if the substrate is less stable, what we see is that fewer ATPs are consumed. So these are all filled in within your notes. And there's some additional details here. So these data indicate that the easier the protein unfolds, the faster it's degraded. And just to reiterate, these denatured titins, we can think about ATP consumption as being indicative of that translocation event because they're already denatured. And for these native titin, the rate of ATP consumption is indicative of both the unfolding or denaturation process and the translocation process here for that. Here is just another way of plotting the data in the table, where they're just highlighting ATP hydrolysis and then the different types of substrates here. So we see the rate's highest for denatured and that it decreases with increasing stability of the substrate to degradation by ClpXP. Another interesting thing they found in these studies is that the ATP is consumed very linearly with time. So if we look at ATP consumed versus the average denaturation time-- here, wild-type, and down in this region, the mutants. So we have a linear relationship. And what came out of this is about 144 ATPs consumed per minute of unfolding from these experiments here. So what does this tell us about how ClpXP works-- how it works here? So basically, this machine has been described as having a relentless try and try again mechanism here. And it's effectively explained in this cartoon. 
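Two pieces of arithmetic tie this table together. First, the "ATPs per I27 degraded" column is just the ATP hydrolysis rate divided by the degradation rate. Second, comparing a folded substrate with its chemically modified (already unfolded) counterpart lets you back out roughly how much time, and therefore how much ATP, goes into the unfolding step alone. A short Python sketch using the approximate round numbers quoted above (about 150 and 600 ATP per minute for the ATPase rates, and about 600 and 115 ATP per substrate); the implied degradation rates are back-calculated here only for illustration.

# ATP cost per substrate = (ATP hydrolysis rate) / (substrate degradation rate).
# Values are the approximate round numbers quoted in the discussion above.

atpase_rate = {"wild-type titin-I27-ssrA": 150.0,          # ATP per min
               "chemically modified (unfolded)": 600.0}
atp_per_substrate = {"wild-type titin-I27-ssrA": 600.0,    # ATP per I27 degraded
                     "chemically modified (unfolded)": 115.0}

deg_rate = {}
for name in atpase_rate:
    deg_rate[name] = atpase_rate[name] / atp_per_substrate[name]   # substrates per min
    print(f"{name}: implied degradation rate ~ {deg_rate[name]:.2f} per min")

# Time per molecule is the reciprocal of the rate; the difference between the
# folded and unfolded substrate estimates the time spent just on unfolding.
t_unfold = (1.0 / deg_rate["wild-type titin-I27-ssrA"]
            - 1.0 / deg_rate["chemically modified (unfolded)"])
print(f"estimated unfolding time for wild-type: ~{t_unfold:.1f} min per molecule")

Multiplying that unfolding time by the roughly 144 ATP consumed per minute of unfolding quoted above gives a number in the same ballpark as the ~600 ATP total, which is a useful internal consistency check on the model.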
So ClpP is omitted, but imagine it's there. So what's happening? We have some folded protein that's been condemned and has the ssrA tag attached. And so ClpXP needs to deal with it. There's the tag-mediated substrate binding. So the substrate binds, there can be ATP hydrolysis. And that results in ClpX trying to unfold the protein. But frequently, the substrate can get released. And this cycle of binding and pulling can happen many, many times. And that consumes a lot of ATPs here. And then at some point, there's going to be a successful unfolding event, which results in the polypeptide being translocated and entering the degradation chamber. So when thinking about a hard to denature substrate, you want to think about this substrate binding ClpX many times. There might be multiple instances of binding and release before it's successfully denatured and before translocation occurs. And so that uses a lot of ATPs. STUDENT: Does it all confine the substrate to many different places or just in one spot? PROFESSOR: So the ssr tag is what's going to allow it to bind to the pore. And recall, for instance, there can be the adapter protein SspB to help ssrA tag proteins make their way to the pore. So think of it less as some undesirable protein-protein interaction than a failed attempt at unfolding by this ATPase here. So why might ClpX want to do this? When we think about the cell-- just some possibilities? STUDENT: It would make sense, I guess, the more unstable the protein is, the easier it is to degrade it because proteins that are more unstable are already partially unfolded and are probably ones that need to be degraded. PROFESSOR: Yeah, so that's one way to think about this. And then maybe another way to phrase that is perhaps this helps to avoid jamming the protease. If there's things that need to be degraded versus other things, if you have something that's very difficult to degrade, you don't want that to block the protease such that something unfolded can't be dealt with. Also, just dealing with a mixture, that maybe ClpXP likes to get rid of the substrates that are easiest to degrade first. So is it energetically wasteful there-- just to think about. On one hand, it might seem like it. So many ATPs-- just think back to the TCA cycle, for instance, and how many ATPs you get from one cycle there versus 600 ATPs being consumed here for that wild-type titin domain. But this makes sense because in the cell, it does have to deal with many different types of substrates. And these substrates can have varying structure and varying stabilities. So how does ClpX actually work? What's going on with this ATP hydrolysis? How are denaturation and translocation coupled? And how do we even think about this translocation process? Effectively, we saw the cartoons, where it looked like ClpX was somehow pulling on this polypeptide. And so we'll close with some discussion about that, which we'll continue on Monday. So effectively, we have our general paradigm of somehow having ATP hydrolysis leading to conformational change that provides some mechanical work. And so here in this system, conformational change in ClpX will drive unfolding and translocation. And of course, the big question is how? And so a key observation that's not intuitive with this system and that we'll build upon in the first 10 minutes or so of Monday is the fact that ClpX is a homohexamer. We saw that when we went over the structure. But this hexamer has some inherent asymmetry to it, despite the fact that each subunit is the same. 
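The "try and try again" picture also explains the ATP numbers in a simple way: if each grip-and-pull attempt costs a handful of ATP and succeeds with some probability, the number of attempts follows a geometric distribution, and the expected cost is the ATP per attempt divided by the success probability. A toy calculation in Python; the per-attempt cost and the success probabilities are assumptions chosen only to illustrate the scaling, not measured quantities.

# Expected ATP cost under a try-and-try-again model.
# If each attempt costs n_atp and succeeds with probability p, the expected
# number of attempts is 1/p, so the expected ATP cost is n_atp / p.
# Both n_atp and the probabilities below are illustrative assumptions.

n_atp_per_attempt = 6.0

for label, p_success in [("stable fold (wild-type-like)", 0.01),
                         ("destabilized point mutant", 0.05),
                         ("already unfolded", 1.00)]:
    expected_atp = n_atp_per_attempt / p_success
    print(f"{label}: expected ~{expected_atp:.0f} ATP spent on unfolding attempts")

# The real data add a further ~100 ATP of translocation cost per substrate
# on top of this, which the toy model above ignores.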
So a key observation here-- ClpX is homohexamer. But it has inherent asymmetry. And this asymmetry arises from nucleotide ATP binding. And the observation from a variety of studies is that ATP binds to some of the ClpX subunits but not others. OK And also it can bind to different subunits with different affinities, just as another detail. And what we'll see is that this is also dynamic-- so just some subunits. So although we think about this as a homohexamer in terms of the ATP loading at different points, we don't have six ATPs bound. And where we'll begin on Monday is looking at individual ClpX subunits and then how the ClpX subunits work together and lessons learned from studies there. So effectively, this asymmetry is thought to be quite important, in terms of how ATP hydrolysis is allowing the movements and activity of the ATPase.
19_Cholesterol_Biosynthesis_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: What I want to do is sort of introduce you to the second half of the course, where we're going, and what topics we're going to be covering. And then I'll start in module 5. So as with the first half of the course, we have four modules. The first half the courses was pretty well organized-- that is, you went from here to here to here. It all sort of made sense. This half of the course doesn't do that. We're talking about-- there are hundreds of topics in biochemistry, and any one of them is exciting and important. And these are the ones we're talking about this semester. And again, the focus will be sort of trying to get you to think about, what is the evidence that supports the model I'm going to present. So in module 5, we're going to be talking about the terpenome. And I'm going to be talking about, most of you have seen in the last module, with polyketide synthases, you have aldol reactions and claison reactions to form carbon carbon bonds. In this module, you're going to be introduced to another way to form carbon carbon bonds. And that's through C5 units. And C5 units are the basis for forming cholesterol, which is really the focus of module 5. And what we're going to be talking about is initially cholesterol biosynthesis, that will be this lecture and probably most of the next lecture. And then you all know from eating McDonald's hamburgers, you get a lot of cholesterol in your diet. And the question is, how do you control cholesterol levels? And so this semester, the second part of the semester is really focus on the question of homeostasis. Cholesterol is essential. If you have too much you've got problems. Doesn't matter what pathway you're talking about, if you have too much or you have too little, you have problems. So how do you control everything? And we're going to be talking about cholesterol sensing and regulation. And we're going to come back to a topic you covered in the first half of the course with ClpX and ClpP protein mediated degradation, because it plays a central role in controlling cholesterol homeostasis. So that will be the first module, module 5. Module 6 is also going to be a module on homeostasis. It's my feeling that in introductory courses, people don't get introduced enough to metals. And 50% of all the proteins inside of us bind metals and do something with them. And so in this module, I'm going to talk about metal homeostasis. And I'm really going to focus on iron. And you'll see why I'm going to focus on iron when we get there. But we're going to talk about iron sensing and regulation, initially in humans, and then we're going to focus on the war between pathogenic organisms for iron and humans for iron, since iron is essential for almost all organisms to survive. The third module is sort of-- module 7-- sort of follows from module 6. And it's a topic that I've been following for 30 years, and it irritates the hell out of me. So everybody talks about reactive oxygen species, and how bad they are. You can't-- you can listen to NPR, you can read it in The New York Times. What are reactive oxygen species? As a chemist, what are they? So we'll define what reactive oxygen species are. Many of you know that they're bad. 
That's why you eat vitamin C and vitamin E. And they're involved, in fact, in defense against some microorganisms. But also they're good. They're now known to be involved in signaling. So again, it's a question of homeostasis. And so you need to understand the chemistry of what these species do, and where they can go astray, or where you can harness the reactivity to do something important. And the last section-- hopefully we'll get there this year, I'm trying my best-- this is the area I know most about, we're going to talk about nucleotide metabolism. In 5.07, they don't talk about nucleotide metabolism at all. And I would say in the next decade, you're going to see a lot about nucleotide metabolism. Where have you seen ATP and GTP in the first part of this course? Everywhere. How do we control all of that? Pretty important. What are the questions we're going to be asking? Where does it come from? And how do we control the levels, which is central to all of metabolism? Hopefully we'll get there and discuss that. So the required reading has now been posted on Stellar. And there are-- there is really sort of three things you need to read. One is about a short review on the terpenome, which is what I'm going to start talking about. The second is lipid metabolism that's been taken from Voet & Voet, you can go back in any basic textbook and read the section, because it's central to thinking about what's going on with cholesterol. And then you'll see that these are two of my favorite guys, Brown and Goldstein. They won the Nobel Prize for their work, but in reality they should have won at least two Nobel prizes for their work. I mean, you can never not listen to them talk and not get excited. I mean, they always have something important and exciting to say. So they-- last year we used this review. There's a new review. They're different, they're both pretty short. Pick which one you want, they cover the material. And then this one doesn't cover the most recent material as well. So here's another short review that covers that material. So these guys here will give you an overview of what I'm going to be talking about. And they've all been posted on the website already. So cholesterol homeostasis-- I never end a lecture on time. You'll find out that I'm trying my best, anyhow. We have about five lectures we're going to be covering. And this is where we're going. And the first one, today's lecture, we're going to be talking about a new way to form carbon carbon bonds through C5 units. And we'll be talking about the terpenome, where there are a huge number of natural products, distinct from polyketide synthases and non-ribosomal polypeptide synthases that you just finished talking about. We want to get to-- we'll see in the biosynthetic pathway to get to these C5 building blocks, we need to get to a metabolite called mevalonic acid, that's front and central in cholesterol biosynthesis. So we need to get that far. And then we need to get from mevalonic acid into cholesterol. And so we'll be talking about this the first couple of lectures. In addition to making cholesterol, you get a lot of cholesterol from your diet. And so the question is, how does it get from your diet, transferred through the plasma, and taken up into cells? And this section, we'll describe the discovery of receptor mediated endocytosis, by Brown and Goldstein, that's now known to be prevalent everywhere. So the module-- module 6, you take up iron by receptor mediated endocytosis. 
In module 7, growth factors are involved in receptor mediated endocytosis. So it represents a general paradigm that happens all the time over and over again in biology. And then we're going to ask the question, how do you sense the cholesterol? And we're going to be doing two recitations on this. And then we'll have a few lectures on the machinery that sense cholesterol sterile responsive binding proteins, in a molecule called SCAP, and another molecule-- both of these are proteins-- called INSIGs. Everything is located in a membrane-- that's something you haven't been exposed to. How do you control everything when you have stuff stuck in a membrane. And then we'll come back at the end to look at the rate limiting step in formation of mevalonic acid, HMGCoA reductase, and how that plays a key role, since it's involved in making cholesterol. How do we control and regulate that protein, also in the ER membrane. And we'll see it requires ubiquitin mediated protein degradation. And so that's why I'm going to come back, and we're going to spend a little bit of time talking about this process in eukaryotic systems. And actually, in all the other modules, you're going to see ubiquitin mediated protein degradation. And finally, this week in recitation, there's a new-- not new, it was discovered in 2003-- a new target for drug therapy in controlling cholesterol levels. And there was a paper published this year to show that this is, in fact, a good target. And this paper used CRISPR-Cas9 technology. So I'm going to-- even though I'm not an expert in that, my lab hasn't used it-- I'm going to introduce you to this technology, and then focus on why we think this is a good new target, and what the targeting is. So that's where we're going. So the terpenome-- let's see if I can remember what I'm going to say. The first thing I want to say is-- let me just get all this-- OK, the first thing I want to tell you something about is the nomenclature-- and all terpenes are either called isoprenoids or terpenoids, and they're all made from C5-- a C5 hydrocarbon skeleton. And this C5 hydrocarbon skeleton is an isoprene. So this is the key building block that you're going to see over and over again over the course of the first couple of lectures. So an isoprenoid is, in general, linear. And it's made of n of these C5 units. So n can be 2 to thousands. And I'll give you examples of these. And again-- and then the terpenoids-- so let's just use terpenoids over here-- in general are also made of C5-- C5 units. But often, they're oxidized, cyclized, and sometimes rearranged. So my goal today really sort of is introduce you to this huge class of natural products. And give you some examples of this, and then start focusing on how we get the building blocks. Do you think iso-- this isoprene can be a building block? It's chemically not very reactive. So no, we have to convert-- we have to convert this into the chemically reactive building block. So while isoprene gives you the C5 unit, our focus today is going to be on creating the building blocks. And again, the building blocks are going to be-- these are the guys you're going to see over and over again. 1, 2, 3, 4, 5-- so does everybody know what PPI means, so I don't have to write it out? pyrophosphate-- we're going to see this over and over again. You've seen this in the first part of the semester. And this isopentenyl pyrophosphate, or IPP. And we're going to see this-- if you look at this this hydrogen, what's interesting about this hydrogen, chemically? 
What's the pKa of that hydrogen, compared to if it was this hydrogen? It's much lower. Yeah, so it can form allylic cations or anions. And so this can readily isomerize to form this species, which also plays, which is dimethylalyl pyrophosphate, which is the other key building block that we're going to be looking at. So currently, it's estimated from the latest paper that I've read that these are the building blocks for what we call the terpenome. And it's estimated that there are greater than 70,000 natural products. Now in contrast to what you've learned about with non-ribosomal polypeptides synthases and polyketide synthases, where you sort of can find everything in an operon, and you can sort of understand how your molecules could be put together. It's not so trivial with terpenes. There is no such logic in these systems. So let's just look at what some of these molecules actually are so you know why they're important. And, OK, so so in the center of everything are these two guys, our two building blocks. And these building blocks can go to the fat soluble vitamins-- so for example A and K. So if you look over here, you have vitamin A-- I knew this was going to happen. My late-- my pointers are not working very well. I need-- OK, so anyhow you have vitamin A and you have vitamin K. And where do you see-- you can see here readily that you have these C5 units somehow stuck together. And you have to ask the question, where does the rest of this come from? So fat soluble vitamins, which we're not going to talk about, what you also have is prenylated proteins. And prenylated proteins are shown here. So quite frequently, you have small little g proteins, GTPases, you've seen these before. EFTU, EFGG, they're all over the place. There are hundreds of g proteins-- we talked about them in the recitation section on Rodnina that I gave you. Anyhow, a lot of those little g proteins go to the membrane and come away from the membrane. They do that by sticking on a tag. This tag can be geranylated or gerynalgeranylated-- it just the hydrophobic tag that allows things to interact with the membrane, increasing the effective molarity. Nature uses this trick over and over and over again. Another thing you can generate is natural products of medicinal interest. And I just show here taxol and artemisinin. So taxol is used in the treatment of breast cancer. Artemisinin, anybody heard of that? Yeah, so this has been the major target-- in fact, this pathway we're talking about today has been a major focus of many synthetic biologies, trying to make this mevalonic acid pathway so they can make potential drugs, but also jet fuels-- which, again, you want some kind of hydrocarbon. So this pathway has been studied a lot from a point of view of metabolic engineering. It's also involved in-- this is one of my favorite-- the perfume. You can tell from the way I smell the perfume industry. Any of you ever break a pine cone? A pine needle? Yeah, doesn't it smell great? No, you don't think so? I think it's wonderful. It's called pinene. Anyhow, it has-- I think terpenes wonderful smells, and it's the hallmark of the fragrance industry. Is that on here? Yeah, so menthol. Limonene is orange-- in fact, if you were here when Barry Sharpless used to teach-- I can't digress, because that's why I never get to the end of the course. But anyhow, Barry used to bring to organic class-- he had boxes of smells. And he used to pass around the smells, and they were all wonderful. And they were almost all terpenes. 
And we're going to be looking at things like dolichol-- we aren't going to be looking at it. We will see it a little bit. But what you can see here, Suzanne Walker is giving a talk here April 4th. And she works on-- one of the things she works on is peptidoglycan biosynthesis. And so sugars are carried around on these lipids. Some of them are C19 to C55. If you look at these, you can see these little units stuck together. Barbara Imperiali, in our department, the biology department, works on a asparagine-linked glycosylation. Again, the sugars are carried around by these kinds of terpenes. So plays a central role in putting sugars onto systems. And then what we're going to be focused on today-- and this is the focus in general-- is on cholesterol. Do I have that up there? I think so. So what we're going to-- do I have cholesterol up there? Yeah. OK, here it is. Cholesterol. That's what we're going to be focusing on. And that's not a C5, But a C30. So how do we get from these C5 units into the C30 units? So that's an introduction to the terpenome. They're everywhere. And so you can't become a biochemist without seeing carbon carbon bond formation by these C5 units, in many, many kinds of reactions-- in both primary and secondary metabolism. They're very important. So what I want to do before we get into looking at one of the pathways that you can make the building blocks, IPP and DMAPP-- this is abbreviated DMAPP. One of the ways is through the mevalonic acid pathway, and here's a picture of a cell that I took from something off the web. But I want to introduce you to where we're going to be going, cholesterol biosynthesis. So where do we break down fatty acids? Does anybody know? You remember from your introductory course? I want to try to put this into the big picture on metabolism, so you're not-- we're just not pulling it out of the air. Anybody know where you break down fatty acids from the diet? AUDIENCE: You're asking in the cell, specifically? JOANNE STUBBE: There in the cell. Yeah, where in the cell? These are-- we're talking about eukaryotes now. Bacteria don't make cholesterol. Yeah. What? OK, you don't even know that. OK, so you should go back and read the chapter on fatty acid biosynthesis and degradation. That would be a good thing for you to do. Anyhow, fatty acids are broken down in the mitochondria. We'll see this in a second. So the mitochondria play a role. We're going to see today-- so here's the nucleus, here's the endoplasmic reticulum. The endoplasmic reticulum is the key sensor in cholesterol homeostasis. And so we're going to-- and we're going to see that it controls transcription factors. Transcription factors are stuck in a membrane in the ER. That's-- how weird is that? Because where do transcription factors need to go? They need to go to the nucleus. So how can you do that? How can you take something stuck in a membrane and get it to the nucleus? So they need to go through a golgi stack. They do some stuff we're going to learn about to eventually get into the nucleus, where they control not only the levels of cholesterol proteins, but also of metabolism of phospholipids, triacylglycerols. so this takes us into the big realm of all lipid metabolism, which most of the time people don't spend a lot of time talking about in an introductory course. And I mean, one of the interesting questions-- we're going to see the key rate limiting step in cholesterol homeostasis. 
The protein is bound to the ER membrane, and a lot of the proteins involved in cholesterol biosynthesis are in the ER membrane. 50% of all the cholesterol ends up in the plasma membrane. how does it get there? Does it just go through solution? You need to think about the properties of cholesterol. So when you get confused about where we're going, come back to the picture. And I'm going to show you one other big picture, which we use when I teach-- when I've taught 5.07 with Essigmann-- again, this is the picture we can back to over and over and over again. Because everything is interconnected. So we're going to be talking about cholesterol biosynthesis. We're going to see a key player is acetyl-CoA. Where have you seen that? You've learned a lot about acetyl-CoA in biosynthesis, and use in biosynthesis and polyketide. Synthases, you talked about biosynthesis of fatty acids. So fatty acids are biosynthesized in the cytosol. But I just told you fatty acids are degraded in the mitochondria. What are they degraded to? Degraded to acetyl-CoA. Can acetyl-CoA get from the mitochondria to the cytosol? Nobody knows? Let's get some energy, you guys. What do we know about acetyl-CoA? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: It's what? AUDIENCE: A transport system [INAUDIBLE]---- JOANNE STUBBE: So you think is the transport system that takes it from the mitochondria to the cytosol. So that's wrong. And in fact, this is again another thing that I think maybe isn't emphasized enough in an introductory course. A lot of these things cannot transfer across these membranes. So-- and this may or may not be logical to you, when you take this and you're saying, oh my god, this is so complicated. It's really not that complicated when you put all primary metabolism into the big picture. Acetyl-CoA goes into the TCA cycle, and it condenses with oxaloacetic acid to form citric acid. We're going to see citric acid again with iron homeostasis. Anyhow, it's citrate that is able to go across the mitochondrial membrane, as is malate, as is pyruvate. Acetyl-CoA is not able to do that. And so to get acetyl-CoA, then you have to enzymatically break down citrate. So if you don't know what citrate is, it's a central metabolite. You're going to see it again and again. Pull it up, Google it. Put it in your brain. You form acetyl-CoA. An acetyl-CoA, as you learned in the first part of course, can form fatty acids. Where do fatty acids go? They can attach to glycerol. And where does glycerol come from? It comes from breakdown of sugars through the glycolosis pathway. And they come together to form phospholipids, which make up all our membranes. Pretty important. What else can fatty acids do? They can interact with glycerol without a phosphate, to triacylglycerols. Triacyl-- esterified triacylgycerols. Does everybody know what glycerol is? Everybody know? OK, so-- and this is another thing we're going to find. We have huge amounts of phospholipids and triacylglycerols in our diet. So we have to deal with those things. But acetyl-CoA, we'll also see, is the building block to form mevalonic acid through a pathway we're going to describe now. So mevalonic acid is a key player, and its formation is rate limiting in cholesterol biosynthesis. The enzyme that makes mevaloinc acid is located in this little messy thing here, and that's the ER. So it's bound to the ER. It makes cholesterol. 
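Going back for a moment to the acetyl-CoA bookkeeping: the citrate route out of the mitochondrion can be summarized in a single cytosolic reaction, catalyzed by ATP-citrate lyase (standard textbook stoichiometry, written out here for reference):

\[
\text{citrate} + \text{ATP} + \text{CoA-SH} \longrightarrow \text{acetyl-CoA} + \text{oxaloacetate} + \text{ADP} + \text{P}_i
\]

So every cytosolic acetyl-CoA destined for fatty acid or sterol synthesis costs an extra ATP just to get it out of the mitochondrion in a usable form.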
And then ultimately, how does cholesterol move-- and a lot of the precursors to cholesterol stay solubilized, and then get distributed to all the membranes. 50% of the cholesterol, for example, is found in the plasma membrane. So that's the big picture. And so when you get confused about where we're going, you need to go back and see how central a player acetyl-CoA is to everything. And so its regulation, you're going to see, is governed by the same transcription factors that regulate cholesterol homeostasis. Because they were all linked. So where are we going? Let's see if I can remember where we're going. So where are we going? So I'm giving you an overview of where we're going. And I'm only-- this is a long pathway to get to lanosterol. I'm not going to look at all the steps in the pathway. I'm just going to tell you how we use these C5 units to generate terpenes. So we've got to get to the C5 units over here. We've got to get to these two intermediates. And then we're going to use them to get eventually to a C30. And we'll see the same chemistry, once we know a few rules, just like with aldol reactions and claison reactions, are used over and over and over again. There are a few basic rules. Every protein is different, but I'm going to make sweeping generalizations, which is a good place to start. So we have to use three molecules of acetyl-CoA. So you're all-- you all should be very familiar with acetyl-CoA. And we're after trying to form C5. So three of these gives us C6, so we have to get rid of a carbon. So from this we need to lose whatever the pathway is. It turns out we lose one carbon as CO2. And this whole process-- so this can be multiple steps-- is called initiation. And it forms IPP, which can isomerize to form DMAPP. And then, again, this is our C5 unit that we're after. So this is C5. And we've lost one carbon as CO2. So then we're going to have what I'm going to call elongation. And what we're going to see is that to get to lanosterol, all we need to have a C30. So we need six of these guys. So we have six C5 to form a C30. Which is lanosterol. This is a precursor to steroids. [INAUDIBLE] talk about cholesterol-- uh oh. I knew that that was going to happen. I jump around. Usually this falls off, so you'll have to get used to this. It's a good thing-- I spent all morning trying to figure out where the-- where this cord came, because I knew my cord wasn't going to fit, and the cord was going to be shot, and then I was thinking, how am I going to run from one door to the next? I need to get this back on me. Can you hear me? OK. Let me get back on gear. So C30-- but then what's going to happen-- so we get to lanosterol, and then from lanosterol, and going to lanosterol here, we're going to have to do, like, I think, the most amazing chemistry in the whole world. We're going to have to do an oxidation and a cyclization. So this is going to be a terpene. So we're going to put these things together to form a C30, a linear C30, and then they have to come together to form this guy. So we're going to talk about this reaction, because it's such a cool reaction. Anyhow, we're going to talk about how this linear molecule gets to this. That's-- that is the coolest reaction, in my opinion, in biology. I remember when I first heard about this when I was in graduate school in 1968. A long time ago. That's what made me decide I didn't want to be an organic chemist. I said, how amazing is this? That you can do one step and you can put in all of these asymmetric centers and 100% yield. 
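Backing up to the carbon bookkeeping in this overview, it is worth checking the counts explicitly, since it is easy to lose track of the carbon that leaves as CO2. A three-line check in Python using only the numbers stated above:

# Carbon bookkeeping for the route sketched above.
carbons_per_acetyl_CoA = 2
C6 = 3 * carbons_per_acetyl_CoA    # three acetyl-CoA condense to the C6 (mevalonate) skeleton
C5 = C6 - 1                        # one carbon is lost as CO2, giving the C5 unit (IPP/DMAPP)
C30 = 6 * C5                       # six C5 units give the linear C30 that cyclizes to lanosterol
print(C6, C5, C30)                 # prints: 6 5 30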
So that was it. That was a turning point in my life. Anyhow, hopefully it'll be a turning point in your life too. So we have C30. And then we're not there yet. So this is going to be, for us, the-- after the elongation, these cyclizations and oxidation is going to be the termination to get to this ring structure. But then to get to cholesterol, we have 19 more steps. So this is really complicated pathway. And I'll tell you what had the chemistry actually is not so hard to understand, but the details are really still not understood. Because all of the proteins are membrane bound. So what I want to do now is come back over here. And we're going to talk about initiation, elongation, and the termination steps. And I'm going to focus on a few of the reactions that I think are important, and a lot of the details are written down-- are written down on the PowerPoint. So let's look at the first few steps. And so let's start the pathway. And the first molecule we're dealing with is acetyl-CoA. And what is special about acetyl-CoA. Why is nature-- you've just had a whole bunch of units on acetyl-CoA-- why does nature use thioesters? What are the two key things you need to think about in terms of its reactivity? You guys should be experts on this now. Yeah? AUDIENCE: There's a low pKa [INAUDIBLE].. JOANNE STUBBE: Right, so you have a reduced pKa, is reduced from, say, 22 to 18. So this is the alpha hydrogen acidity. And what else does a thioester do? What is the other reactive part of CoA? This should be like the back of your hand. I mean, this is part-- this is central to everything in biochemistry. Yeah. AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Pardon me? AUDIENCE: The leaving group. JOANNE STUBBE: The leaving group. You are going to have a leaving group. But that's-- and that is important, but that's not the key important thing. That is a part of the game. It can drive the reaction to the right, if you look at the free energy of hydrolysis. What else is activated when you have a sulfur ester as opposed to an oxygen ester? AUDIENCE: Carbonyl. JOANNE STUBBE: The carbonyl, because of the decreased resonance stabilization. So what you've done then is you have activation for nucleophilic attack. And you see this-- nature uses this in cholesterol homeostasis as well, over and over and over again. So in the first step, and I'm not going to draw out the details, what you can see here is that you're taking two molecules of acetyl-CoA and you're forming acetoacetyl-CoA-- that should be good practice for you for thinking about the exam on Wednesday. And this is an example of a claison reaction, one of the three types of mechanisms, to form carbon carbon bonds. The next step, we need three acetyl-CoA's to get eventually to isopentenyl pyrophosphate and dimethylallyl pyrophosphate. So we're going to use another molecule on acetyl-CoA to form hydroxymethylglutaryl-CoA. So we need to add another one of these guys. And this is HMG-CoA synthase. So before I go there, let's go through what we know. So this is-- so we're starting here. So here-- acetyl-CoA plays a central role. Why thioesters? It's important in claison reactions. Here's the example of a claison reaction involving a carbanion intermediate that you guys should all be experts at at this stage. What about this step? The next step? Formation of hydroxymethylglutaryl-CoA? So here it turns out that this enzyme uses covalent catalysis. 
Frequently enzymes-- you've already seen this as well, and we're going to see this again and again over the course of the rest of the semester-- one of the major mechanisms of rate acceleration is covalent catalysis. Here the thioester-- the CoA has been removed and the acetyl group is attached to a cysteine in the active site of the enzyme. And then this can react with, in this case, a ketone-like carbonyl in an aldol reaction. So here's the second example. We take acetoacetyl-CoA. We add another acetyl-CoA. And what we form, then, is hydroxymethylglutaryl-CoA. And what we're going to see is, during this reaction, we also have to do a hydrolysis reaction, because we start out with acetyl-CoA and we only end up with a single thioester. And this reaction forms hydroxymethylglutaryl-CoA. So we've lost-- in this reaction over here, you can see where this is lost. So you have an acetyl-CoA to form the thioester in the active site. You've lost a CoA. Is everybody with me? So you're using the third molecule of acetyl-CoA. You've already lost the CoA. And then what you end up with in the end is you have to hydrolyze this off. So if you didn't realize it went through a covalent intermediate, it would be like you just lost a CoA, which you did. But you lost it in this step, because you went through a covalent intermediate. And you're not responsible for the details. Many, many enzymes that use acetyl-CoA go through covalent intermediates just like this. But you have to study each one to figure out why they do that. Why do they do that? Because covalent catalysis gives us rate accelerations of 10 to the 4th, 10 to the 5th. So nature has used that as part of its repertoire for catalyzing reactions at amazing rates. So now we're at this stage. And the next step in this pathway-- so we're still trying to get to the C5 over here. And to get to this now, we're going to have to do a reduction. And we're trying to get to-- so we did this reaction here. I will fix my slide for next time-- OK. So now what we're doing is we're going from hydroxymethylglutaryl-CoA to mevalonic acid. And this is wrong. They should all be NADPHs. When you're doing biosynthesis, what do you use? You don't use any NADH. You use NADPH in almost all biosynthetic pathways. So what happens? You're reducing, basically, the thioester down to-- a thioester down to an alcohol. Everybody should know at this stage how this kind of reaction goes. Everybody, this is one of the two major redox cofactors in all of biology. Can somebody tell me how this redox reaction goes? This is one of the vitamins on your bottle, niacin, which gets metabolized into NAD, NADP. How does that do a reduction? Can somebody tell me? Yeah. AUDIENCE: Form an aromatic ring by eliminating the hydride-- so the hydride attacks the-- JOANNE STUBBE: Right, so that's it. And where does the hydride attack? AUDIENCE: The carbonyl. JOANNE STUBBE: What part of the carbonyl? AUDIENCE: The carbon. JOANNE STUBBE: Yeah, OK. So this is something that I fight with kids all the time in 5.07. It doesn't attack the oxygen. It attacks the carbonyl carbon, because the carbonyl is polarized delta plus, delta minus. So you have-- this, again, of all the vitamins on your vitamin bottle, this is the simplest one. So hydrogen moves with a pair of electrons-- that's called a hydride-- to do this reduction. And over here, I think I have the details written out. So I'm not going to write this out in more detail.
But you generate this intermediate-- and the oxygen may or may not be protonated. You need to look in the active site of the enzyme. But then what happens to this intermediate? This tetrahedral intermediate is not very stable. It can break down to liberate CoA. And what are you left with? You're left with an aldehyde. So that's one reduction. And where do we want to go? Where we want to go-- and so I'm not going to draw the whole thing out, but I'll draw a part of this out-- so this gives us, then, through a tetrahedral intermediate, an aldehyde. And then what can happen to the aldehyde? We use another molecule of NADPH. And what happens with that? The same thing. You now do a hydride transfer. And so we need another molecule of NADPH to form the alcohol. So this is mevalonic acid. So this is written out in more detail here, for those of you who have trouble thinking about this. But of all the factors that nature has evolved to help us expand the repertoire of reactions in biology, NADPH and NADH are the simplest. Hydride-- it's always hydride. Flavins, much more complicated. We'll see some of those-- the chemistry is much more complicated. This is really straightforward. So what do we know about this? Why are people interested in this? And this enzyme is called HMG-CoA reductase. And in your handouts, it's abbreviated-- I think these things are terrible. I will give you a list with all the acronyms on them. I can't remember the acronyms. And people change them. And people name things-- enzyme names are extremely difficult. The older they are, the worse the issue is. Because do you know what NAD used to be called? Do any of you have a memory of that? Any of you read the old literature? It used to be called DPN-- diphosphopyridine nucleotide. So this ring is a pyridine, and that's where they got the name from. And I used to teach with somebody, [INAUDIBLE] a long time ago, and everything was DPN. So anyhow, if you read the old literature, nothing will be called NAD, NADH. And in fact, a lot of the seminal experiments that elucidated the pathways came out of the old literature. So why is this protein interesting? So we're going to spend a little bit of time on this protein. People have spent a huge amount of time looking at this protein in detail. Does anybody know why? AUDIENCE: [INAUDIBLE] people target it-- like statins target it, or cholesterol. JOANNE STUBBE: So the key thing in this system is it's the rate-limiting step in cholesterol biosynthesis. And it's the target of, I would say, wonder drugs-- the statins. So people really care about the detailed mechanism. We don't care about the detailed mechanism. We do care that hydride attacks the carbonyl, and it attacks the carbon and not the oxygen. But the details, if you're interested in that, you can go read about this in the reference. A lot of people have focused a lot of energy on this, trying to make better statin inhibitors. So what do we know about this? And there's a few things I want to say about this. And so if we look at the protein, we're going to come back to this in lecture 3. So this is important to remember. So this is the protein. I'm going to use this cartoon. And Liz used these cartoons as well. But what we're going to see is the protein has eight of these things-- eight transmembrane helices. Each one of these things-- OK, [INAUDIBLE], and we need three more; I'm not going to fit them all in here. So it has eight transmembrane helices.
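Putting the two hydride transfers just described together, the net reaction catalyzed by HMG-CoA reductase is the standard one below (the thioester is reduced through a thiohemiacetal/aldehyde to the primary alcohol of mevalonate):

```latex
% Net reaction of HMG-CoA reductase: two successive hydride transfers from NADPH.
\[
\text{HMG-CoA} + 2\,\mathrm{NADPH} + 2\,\mathrm{H^+}
\;\longrightarrow\;
\text{mevalonate} + 2\,\mathrm{NADP^+} + \text{CoA-SH}
\]
```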
And the protein itself is, again, 888 amino acids. And what's interesting about this-- if you have this many transmembrane helices, where's the protein going to be located? It's going to be located in a membrane. So five of these are called the sterol-sensing domain. HMG-CoA reductase is going to be a key player in regulation of cholesterol levels. And it exists-- it's found, this protein is found, in the ER membrane. And SSD is the sterol-sensing domain. And we're going to come back to this when we start talking about homeostasis. We're going to see that there are other proteins that also have transmembrane helices that somehow bind and sense cholesterol, and that are going to help us control cholesterol levels. Now, what's really interesting about this-- so the protein is huge. It's stuck in the membrane. What's really interesting is that you can cut the protein in half, about in half. That's what you're looking at there. You have a soluble protein-- soluble proteins are much easier to crystallize than membrane proteins. And it turns out, if you cut the protein in half, this half, the catalytic part, is active. And it's soluble. And the activity is the same as the protein bound to the membrane. So it has very high activity. So it's like you have two separate domains. Furthermore, if you cut this in half, you can still target the other half to the ER membrane, and you can still sense cholesterol. So somehow these two things have come together. They have two really independent activities. But we're going to see, they work together to control cholesterol levels. So what I want to do-- how am I doing? Oh, see, time goes by too fast. Isn't time going by too fast for you? Anyhow, what I want to show you-- and we'll come back to it next time-- is that HMG-CoA reductase is the target of the statins. I mean, this is like an amazing thing. Cholesterol biosynthesis was only elucidated in 1955. And it turns out this guy, Endo, was the first one to discover a natural product that somehow could lower cholesterol, in 1976. And actually, when I was a young person, Al Alberts used to work at Merck. I used to consult for Merck back in those days. It was an incredibly exciting time, because he discovered really sort of the first real statin that worked, that wasn't toxic-- lovastatin. And really, within a period of only seven years, this was approved by the FDA. So that's amazing. People are still gobbling down statins everywhere. There are issues with them, but they make $30 billion for the companies that own them. So you might have heard of Lipitor or Crestor-- anyhow, they are out there. And it really is a wonder drug. And it works, we're going to see next time, because it looks like the substrate, hydroxymethylglutaryl-CoA, so that it acts as a competitive inhibitor for binding to the active site of HMG-CoA reductase and prevents the reduction process. And we'll come back next time and look a little bit at the details. We're not going to spend a lot of time looking at the details, but then we'll finish on to get to IPP and DMAPP, the building blocks we're after to make all terpenes. OK.
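Since the statins are described as competitive inhibitors that mimic the HMG-CoA substrate, a minimal Michaelis-Menten sketch shows the expected kinetic signature: the apparent Km rises with inhibitor concentration while Vmax is unchanged. All constants below are placeholder numbers, not measured values for HMG-CoA reductase or any particular statin.

```python
# Competitive inhibition sketch: v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S]).
# Every constant here is an illustrative placeholder, not a literature value.

def rate(s, vmax=1.0, km=4.0, inhibitor=0.0, ki=0.01):
    """Initial velocity with a statin-like competitive inhibitor present."""
    km_apparent = km * (1.0 + inhibitor / ki)   # only Km is affected, not Vmax
    return vmax * s / (km_apparent + s)

for i_conc in (0.0, 0.01, 0.1):                 # arbitrary inhibitor concentrations
    v_low = rate(1.0, inhibitor=i_conc)         # low substrate
    v_sat = rate(1000.0, inhibitor=i_conc)      # near-saturating substrate
    print(f"[I] = {i_conc:5.2f}: v at low [S] = {v_low:.3f}, v near saturation = {v_sat:.3f}")
# v at low [S] drops as [I] rises, while v at saturating [S] stays close to Vmax --
# the hallmark of a competitive inhibitor.
```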
MIT_508J_Biological_Chemistry_II_Spring_2016
23_Cholesterol_Homeostasis_3.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: --that Brown and Goldstein carried out, which in conjunction with many other experiments and experiments by other investigators have led to the model that you see here. And so we'll just briefly go through this model, which, again, was the basis for thinking about the function of PCSK9 that you learned about recitation last week, as well as providing the foundation for thinking about the recitation. This week, we really care how you sense cholesterol levels in membranes, which is not an easy thing to do given that it's lipophilic and so are many other things. OK. So the LDL receptor-- that was their model, that there is a receptor-- is generated in the endoplasmic reticulum. If you looked at the handout, you'll see that it has a single transmembrane-spanning region, which means it's inserted into a membrane. And the membrane where it functions, at least at the start of its life, is in the plasma membrane. So somehow, it has to get from the ER to the plasma membrane. And this happens by forming coated vesicles. We'll see a little bit of that, but we're not going to talk about this methodology in any detail. But Schekman's lab won the Nobel Prize for this work, either last year or the year before, of how do you take proteins that are not very soluble and get them to the right membrane. And they do this through coated vesicles that, then, move through the Golgi stacks that we talked about at the very beginning. And then, eventually, they arrive at the plasma membrane and become inserted. So these little flags are the LDL receptor. OK. So that's the first thing that has to happen. And I just know that this whole process is extremely complex. And patient mutants are observed in almost every step in this overall process. It's not limited to the one set of types of experiments, where something binds and doesn't bind to LDL receptor that we talked about last time. So the next thing that has to happen-- again, and we haven't talked about the data for this at all, but not only do these receptors have to arrive at the surface, but they, in some way, need to cluster. And it's only when they cluster that they form the right kind of a structure that, then, can be recognized by the LDL particles that we've talked about. And so they bind in some way. And that's the first step in the overall process. And then, this receptor, bound to its cargo, its nutrients-- and, again, this is going to be a generic way of bringing any kinds of nutrients into cells. It's not limited to cholesterol-- undergoes what's now been called receptor-mediated endocytosis. And so when the LDL binds to the receptor, again, there's a complex sequence of events that leads to coding of the part that's going to bud off, by a protein called clathrin. Again, this is a universal process. We know quite a bit about that. And it buds off. And it gives you a vesicle. And these little lines along the outside are the clathrin coat. I'll show you a picture. I'm not going to talk about it in any detail, but I'll show you a picture of it. So the LDL binding, we talked about. We talked about binding in internalization. Those are the experiments we talked about last time in class that led, in part, to this working hypothesis. 
And so we have clathrin-coated pits. And it turns out that there's a zip code. And we'll see zip codes throughout-- we'll see zip codes again, in a few minutes, but we'll see zip codes which are simply short sequences of amino acids that signal to some protein that they're going to bind. So how do you target clathrin to form these coated pits? How do you form a pit, anyhow, in a circle? And how does it bud off? And where do you get the curvature from? Many people study these processes. All of these are interesting machines that we're not going to cover in class. So you form this coated pit, and then it's removed. So once it's formed, and you've got a little vesicle, it's removed. And then it can go on and do another step. And another step that it does is that it fuses with another organelle called an endosome, which is acidic pH. How it does that, how it's recognized, why does it go to the endosome and not directly to the lysosome-- all of these things, questions, that should be raised in your mind if you're thinking about the details of how this thing works, none of which we're going to discuss. But it gets into the endosome, and then what you want to do is separate the receptor from its cargo, the LDL. And we know quite a bit about that. If you read-- I'm not going to talk about that either, but if you read the end of the PowerPoint presentation, there's a model for actually how this can happen. And you can separate the receptor from the cargo. And the receptors bud off, and they are recycled in little vesicles to the surface, where they can be reused. The LDL particles can also, then-- and what's left here can then fuse with the lysosome. And that's, again-- we've talked about this-- it's a bag of proteases and a bag of esterases, hydrolysis, lipids. That's what we have in the LDL particle-- hydrolysis. We talked about ApoB being degraded with iodinated tyrosine, last time. That's where this happens and gives you amino acids and gives you cholesterol. OK. And then, again, depending on what's going on in the environment of the cell, the cholesterol would then be shuttled, somehow, to the appropriate membranes. OK. So you can see the complexity of all of this. If the cholesterol is present, and we don't need anymore in the membranes, then it can become esterified with long-chain fatty acids. Those become really insoluble, and they form these little globules inside the cell. And then the process can repeat itself. And the question we're going to focus on in lectures 4 and 5, really, are how do you control all of this. OK. So this is the model. And so I think what's interesting about it is people have studied this in a lot of detail. It was the first example of receptor-mediated endocytosis. So we know something about the lifetime of the receptor. We know it can make round trip from surface inside, back to the surface in 10 minutes. We also know it doesn't even have to be loaded to make that round trip. It could be one of the ones that isn't the clustering of the receptors, which is required for clathrin-coated vesicles to form. And so you can tell how many trips it makes in its lifetime. And so the question, then, what controls all of this? But before we go on and do that, I just want to briefly talk about, again, mutations that have been found in the LDL receptor processing. And they're really, basically, at every step in the pathway. So the initial ones we found, that we talked about, we'll come to in a minute. But we had some patients with no LDL receptor express at all. 
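On the receptor recycling numbers mentioned above (a roughly 10-minute round trip from the surface and back), here is the back-of-the-envelope arithmetic for how many trips one receptor could make. The receptor lifetime used below is an assumed, illustrative value, not a number given in the lecture.

```python
# Rough estimate of LDL receptor round trips per lifetime.
round_trip_min = 10           # stated above: ~10 min surface -> endosome -> surface
assumed_lifetime_hr = 20      # ASSUMPTION for illustration only, not from the lecture

trips = assumed_lifetime_hr * 60 / round_trip_min
print(f"~{trips:.0f} round trips over an assumed {assumed_lifetime_hr} h lifetime")
# With these numbers a single receptor could recycle on the order of 100+ times.
```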
So somehow, it never makes it to the surface. OK? There are other examples-- and these have all been studied by many people over the decades-- where it takes a long time to go through this processing. And it gets stuck somewhere in the processing. That may or may not be surprising, in that you have transmembrane, insoluble regions. And if the processing goes a little astray or some mutation changes something, then you might be in trouble. So we talked about this last time. We talked about how they had looked at 22 patients. Some of the patients had no binding of LDL to the surface of the fibroblasts that they were using as a model, at all. Some had defective binding. So if they compared it to a normal patient, they had a range of dissociation constants. And we'll talk quite a bit about dissociation constants, not this week but next week, in recitation. It's not so easy to measure dissociation constants when things bind tightly. And thinking about how to measure them correctly, I think, is really important. And I would say, probably, I could pull 10 papers out of current journals, really good journals, where people haven't measured dissociation constants correctly when you have tight binding. So this is something that we put in because I think it's important that people know how to think about this problem. So anyhow, let's assume that Brown and Goldstein did these experiments correctly, which I'm sure they did. And they got a range of binding. And we also saw that the patient we looked at, JD, had normal binding. That indicates he was the same as normal patients, but something else was problematic. And that something else wasn't that it failed to form coated pits, but that it failed to bring this into the cell. So it failed to internalize the LDL. That was JD's defect. We also, in recitation last week-- hopefully, you've had time, now, to go back and look at the paper a little bit. But the LDL receptor, in the model we were just looking at, gets recycled. It goes in and gets back to the surface. But what happens if, on occasion, instead of budding off into vesicles and returning to the surface, it, with the LDL cargo, goes to the lysosome and gets degraded? Well, that was the working hypothesis for what PCSK9 did. It targeted the receptor to the wrong place, and it got degraded. And the phenotypes of those patients were interesting, and that's why it was pursued. So there are many, many defects. And despite the fact that we have these statins, people are still spending a large amount of time thinking about this because of the prevalence of coronary disease. So I'm not going to talk about this, but I'm just going to show you two slides. And you can go back and think about this yourself. But this is the LDL receptor. We know quite a bit about it now. And one of the questions you can ask yourself, which is an interesting question we're not going to describe-- you have LDL particles that are different sizes. How do you recognize all these different sizes? And how does the clustering do that? And so that's done up here. And there's calcium binding. We know quite a bit about that, but I don't think we really understand the details. You have a single transmembrane helix in the plasma membrane. And this is the part-- this part up here-- that actually binds the LDL particle. And the last thing I just want to briefly say, because we're going to see this again but without going through any details: remember that eventually we form what are called clathrin-coated pits. That's a picture of what the clathrin-coated pits look like.
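On the point above about measuring dissociation constants when binding is tight: the usual hyperbolic fit assumes free ligand is approximately total ligand, which fails when the receptor concentration is comparable to or above Kd. A minimal sketch of the exact (quadratic, tight-binding) solution versus the naive hyperbola makes the problem concrete; the concentrations are arbitrary examples.

```python
import math

def fraction_bound_exact(r_tot, l_tot, kd):
    """Exact fraction of receptor occupied, from the quadratic (tight-binding) equation."""
    b = r_tot + l_tot + kd
    complex_conc = (b - math.sqrt(b * b - 4.0 * r_tot * l_tot)) / 2.0
    return complex_conc / r_tot

def fraction_bound_naive(l_tot, kd):
    """Simple hyperbola that assumes free ligand ~ total ligand (valid only when [R] << Kd)."""
    return l_tot / (kd + l_tot)

kd = 1e-9                      # 1 nM, an arbitrary "tight" Kd
l_tot = kd                     # ligand added at the Kd
for r_tot in (1e-11, 1e-8):    # receptor well below Kd vs. well above Kd
    print(f"[R]tot = {r_tot:.0e} M: exact = {fraction_bound_exact(r_tot, l_tot, kd):.2f}, "
          f"naive = {fraction_bound_naive(l_tot, kd):.2f}")
# When [R]tot is well below Kd the two agree (~0.50); when [R]tot >> Kd the naive
# hyperbola badly overestimates occupancy because it ignores ligand depletion.
```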
And the key thing-- and I just wanted to mention this briefly because we're going to see this again, over and over-- is the LDL receptor, itself, has a little zip code. And that's enough-- it's at the tail. That's enough for it to attract this green protein called to AP-2, which is key to starting clathrin binding, and formation of the curvature, and eventually being able to bud off these vesicles surrounded by clathrin. And when you do that, you start budding. And then, somehow, it turns out there's a little machine, a GTPase-- we've seen GTPases all over the place-- that's involved-- this is the name of it-- that allows you to bud off. And you use ATP energy to do all of this. We've seen this over and over again. And so the point I wanted to make here is we've seen this with these seminal experiments, by Brown and Goldstein. But in fact, we now know that this is sort of a generic mechanism for taking nutrients into the cell. So it's not limited to LDL receptor and LDL. And in fact, we're going to see, we're going to talk about, in module 7, Epidermal Growth Factor Receptor. And we're going to talk, in module 6, the receptor that takes iron into the cell, both of which do this kind of signaling. So this is a generic mechanism to do that. All of these things are interesting. We know quite a bit about it. And if you want to study that, you could have spent another weeks worth of lectures studying this. So the idea, then, is that we have nutrient sensing. And this is a general way to try to get nutrients into the cell, that is, you have a receptor, and it's undergoing receptor-mediated endocytosis. So that's the end of lecture 3. I think I'm one lecture behind, but that's not too bad. So what I'm going to do now is-- let's make sure I get this right-- I'm going to start on lecture 4. And now we're sort of into the question of how do we sense cholesterol. OK. So what I've done in the original handout, I had lecture 4 and 5 in the single PowerPoint. They're still in a single PowerPoint, but I've just split them into two. So I'll tell you how I've split them. So lecture 4 is going to be focused on sensing and transcriptional regulation. And lecture 5 will be focused on sensing and post-transcriptional regulation by a protein-mediated degradation. So I'm going to split that in two parts. And so today's lecture will be mostly focused on transcriptional regulation. And the key issue is how do we sense cholesterol-- what is the mechanisms by which we sense cholesterol. And the outline for the lecture is that the transcriptional regulation involves a sterol-responsive element. So this is sterol-responsive element. This is a DNA sequence of about 10 base pairs. And it also involves a transcriptional factor, so TF. This is a transcriptional factor-- transcription factor. And this is called SRE-BP. So this is Sterol-Responsive Element Binding Protein. So BP is Binding Protein. OK. So the first thing I'm going to talk about, then, is the discovery of SRE-BP. So that'll be the first section. And then what we're going to do is we want to know what are the players that allow us to understand how this transcription factor works. What we'll see that's sort of amazing-- it was amazing at the time, but now it's been found in a number of systems-- is where would you expect a transcription factor to be located? AUDIENCE: In the nucleus. JOANNE STUBBE: In the nucleus. OK. And what they found from their studies that it's located in the ER membrane. So this was a major discovery. 
So this protein is located in the ER membrane. They didn't know it at the time. But now, you're faced with the issue that transcription factors do their work in the nucleus. So somehow, we have to get it from the ER membrane into the nucleus. And so to do that, what we need are players for SRE-BP to go from the ER to the nucleus. And we're going to see that these players are called SCAP and INSIG. And we'll come back, and we're going to talk about those in some detail. And then the last thing we'll focus on is-- we'll see it throughout. What I usually do when we're talking about some complex mechanism is give you the model upfront, so you sort of see where you're going. Hopefully, you've all had time now-- we've been in this module for a long time-- to read the review articles. But we want a model for transcriptional regulation. So that's where we're going. And so what I want to do, before we get into the model, is come back to where we started, to try to keep you grounded on what we're doing. And what we're doing here is our cartoon of the cell that I showed you in the very beginning. We know that metabolism of carbohydrates, fatty acids, and cholesterol all focuses on a central player. And the central player is acetyl-CoA. Acetyl-CoA can be obtained from fatty acids in the diet. We've talked about the distribution of fatty acids using lipoproteins, including LDL. And we get to acetyl-CoA-- this all happens in the mitochondria. But acetyl-CoA cannot get across membranes. And that's true-- there are a number of things that can't get across membranes. And so carriers in the mitochondrial membrane are key to metabolism. And I think once you look at it and think about metabolism overall, it's not so confusing. But you might not have chosen those. If you were the designer, you might not have chosen these to be the carriers to move in between organelles. So I think this happens quite frequently, so you need to pay attention to it. And so what happens in this case is acetyl-CoA combines with oxaloacetic acid to form citrate. Citrate is an intermediate in the Krebs cycle. The TCA cycle is part of all of central metabolism. We're going to see citrate again. It plays a central role in iron homeostasis as well. And citrate-- there is a transporter that gets this into the cytoplasm. So here's the cytoplasm. There's an enzyme, ATP-citrate lyase, that uses ATP to generate acetyl-CoA. OK. So acetyl-CoA is a central player. And really, what we're thinking about now, in general-- I'm going back through this-- is what do we expect sterol-responsive element-binding protein to regulate. And I'm going to show you it doesn't just regulate cholesterol homeostasis. There's a big picture [AUDIO OUT] all of this. So you can make-- you talked about this as a prelude to the polyketide synthases, the natural products Liz introduced you to. Anyhow, you can make fatty acids. Fatty acids can do a number of things. If you have a ton of them, then you can react them with glycerol to form triacylglycerols. And they're insoluble messes. If you look at the structures, they form little globules. So we have all these little insoluble globules inside the cell. And people are actually quite interested in studying these things. Now, we don't know that much about the proteins or metabolic enzymes that could be sitting on the surface of these globules. A lot of people are trying to figure that out.
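For the citrate shuttle step just described, the standard net reaction of cytosolic ATP-citrate lyase is:

```latex
% Cytosolic ATP-citrate lyase: regenerates acetyl-CoA outside the mitochondrion.
\[
\text{citrate} + \mathrm{ATP} + \text{CoA-SH}
\;\longrightarrow\;
\text{acetyl-CoA} + \text{oxaloacetate} + \mathrm{ADP} + \mathrm{P_i}
\]
```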
But also, fatty acids are required in the presence of glycerol 3-phosphate, which comes from the glycolysis pathway, the other pathway that everybody learns about in an introductory course, to form phospholipids, which are the key component of all of your membranes. Alternatively, acetyl CoA, depending on the regulation of all of this-- that's the key-- gets converted to hydroxymethylglutaryl-CoA and mevalonic acid. Mevalonic acid-- that reduction between these two is a target of statins-- then ends up making cholesterol. And where does cholesterol have to go? So cholesterol is made, and a lot of it's happening in the membranes. A lot of it is associated with the ER, but only a small amount of the total cholesterol is in the ER membrane. Somehow, it's got to be transferred to all these other membranes. So that's a problem we haven't talked about. That's a big problem. Most of the cholesterol is in the plasma membrane. If you have excessive of cholesterol, you can esterify it, and, again, form little droplets of fats, which have fatty acids and cholesterol. So that's the big picture. And so this is the picture of the regulatory network. So I'll say this is a PowerPoint for the regulatory network. And it's governed by-- it turns out there are three SRE-BPs. They have a slightly-- and they're structurally homologous to each other, and they work in ways that they interact with other protein factors and control this whole homeostatic process between fatty acids and cholesterol biosynthesis. So I think there are two things that you need to think about. So we want to control basically its lipid metabolism. And I should say at the outset, we're focusing on SRE-BP, but some of you, in maybe a more advanced biology course, know that there are other transcription factors involved in regulating cholesterol homeostasis. This is a major one, and that's all we're going to talk about in this class. But what else do you need to make molecules, if you're going to make fatty acids, if you were going to make cholesterol? What you need is NADPH. So that's the other thing that you need to think about when you're looking at the regulatory network. So we need to control-- how do we make lipids? Where did they come from? They come from acetyl CoA. And the second thing we need to think about is a source of energy to actually form the molecules. We're after the long-chain fatty acids. Go back and look at that-- or cholesterol, if you go back and look at the pathway we talked about in the first couple lectures. So NADPH is at the center. And I forgot to point out before and probably many of you have heard of but never really thought about malic enzyme in the cytosol. You can go back and think about that, but that's a major source of NADPH. What is another source of NADPH in the cytosol. Anybody know? Where do you get most of your NADPH from? It's key to biosynthesis of any kind of anabolic pathways. Does anybody know? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: No. OK. Did you ever hear of the pentose phosphate pathway? Well, hopefully, you've heard of it. Reproducing it might be challenging, but the pentose phosphate pathway is central to providing us with NADPH. It's central for controlling reactive oxygen species, which is going to be module 7. It's central for providing NADPH for nucleotide metabolism. So the pentose phosphate pathway and malic enzyme are the key sources of NADPH. 
And if you're becoming biochemists, I think, now, all of these pathways, these central pathways that we talked about in 5.07, should just-- you don't need to know all the details, but you need to know how things go in and out. And it's central to thinking about anything. And if you ever do any genetic studies, you can never figure out anything unless you know how all these things are connected. So knowing these central pathways and how things go in and out and connect is really critical in thinking about many, many kinds of reactions you might be doing in the lab. Because you might see something over here, but it might be way over here that you had the effects. And knowing these connections, I think, is why I spent another-- whatever-- five minutes describing the regulation. OK. So if we look at this, what we see here-- and this is an old slide, so this might have changed. But all of the enzymes in italics are all regulated by SRE-BP. So here's acetyl CoA. What do we see in this path, where we're making cholesterol? So many of the enzymes-- we're not going to talk about them-- that we talked about when we went through the pathway are all regulated by SRE-BP and is predominantly-- again, there's overlap of the regulation between the different forms of the sterol-responsive element-binding protein. But you can see, we have HMG CoA reductase, which is the rate-limiting step. So that might be expected. But many of the other enzymes that are also controlled by this transcription factor. And the one that turns out, I think, to be quite interesting for most recent studies is-- remember, we briefly talked about how you get from a linear chain, and then we had to use a monooxygenase to make the epoxide. That enzyme is a key regulatory enzyme, people now think. It wasn't thought to be so not all that long ago. So anyhow, all of these enzymes that we've talked about are regulated in some way by SRE-BP. But it doesn't stop there. If you go over here, you sort of have a partitioning between acetyl CoA also going into lipids and forming phospholipids or triacylglycerols, depending on whether you store or whether you're dividing and need more membranes. So all of this, again, it's about regulation. And if you look at this, you can see that many of the enzymes in this pathway, for formation of monoacylglycerol and triacylglycerols are also involved. OK. So that gives you the big picture that I want you to think about. So when you wonder where you're going, you should go back and take a look at the first few slides. So what I want to do now is really focus on the first thing. The first factor was how did they identify. So this is identification of SRE-BP. And so probably most people wouldn't talk about this, but I think it's sort of amazing. So I'm going to just show you what had to be done. And this is not an easy set of experiments. First of all, transcription factors, in general, aren't present in very large amounts. To get them out, they also stick to DNA. So that poses a problem. Unlike using his tags and all this stuff, none of that stuff works to isolate transcription factors. And this was all done before the-- a long time ago. And so this was this is quite a feat. And the key to this feat was that Brown and Goldstein recognized that in the front of the gene for HMGR-- Hydroxymethylglutaryl-CoA reductase-- in the LDL receptor, they found a 10-- I'm not going to write out the sequence-- base-pair sequence that was the same. 
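The key observation just mentioned was a shared ~10 base-pair sequence upstream of the HMG-CoA reductase and LDL receptor genes. As an illustration of that kind of comparison, here is a minimal sketch that finds 10-mers common to two promoter sequences. The sequences and the planted element below are made up for illustration; they are not the real SRE-1 element or the real promoters.

```python
# Find 10-mers shared by two (hypothetical) promoter sequences -- illustration only.
def kmers(seq, k=10):
    """Return the set of all k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Made-up sequences with a common planted 10-mer; NOT the real SRE-1 or real promoters.
promoter_hmgr = "GGCTAATTCC" + "ACGTTGCAAC" + "GGTTACGATC"
promoter_ldlr = "TTGACGGATA" + "ACGTTGCAAC" + "CCGTAGGTTA"

shared = kmers(promoter_hmgr) & kmers(promoter_ldlr)
print("Shared 10-mers:", sorted(shared))   # -> ['ACGTTGCAAC'], the planted element
```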
So that suggested to them that there's a little piece of nucleic acid with 10 base pairs that might be recognized by a protein, which could be the transcription factor. So this was the key, this 10 base-pair sequence. And I'll just say, see PowerPoint. And this is the SRE, before the genes, again. And this has now been found in front of many genes. I just showed you that many, many genes are regulated, in some way, by these proteins. But this was an observation they made a long time ago. OK. So where would you expect-- we just went through this. Where would you expect SRE-BP, the transcription factor, to be located? You'd expect it to be in the nucleus. OK. That's a reasonable expectation. And so what step might you do, in the very beginning, to try to help you purify this protein? And let me just tell you at the outset that the protein had to be purified 38,000-fold. OK. Now, you guys, none of you have ever experienced, really, protein purification, starting with kilograms of anything. I have done that and spent three months purifying a microgram of protein. And I would argue that some people still need to do that, because when you do recombinant expression, lots of times, you miss a lot of stuff. So somewhere along the way, somebody needs to really know what the endogenous protein is like, and not the recombinant protein. So we're going to have to do a 38,000-fold purification. And I would say that's not uncommon. I've done 20 liter by 20 liter gradients that take three weeks to get through the gradients and looking for your proteins. So if your protein is not stable, even if you're in the cold room, what happens? Or if there are proteases, it gets degraded. So I'm just saying, transcription factors are not easy to deal with. And this was sort of an amazing feat. Anyhow, they started with-- over here-- 100 liters of tissue culture cells. So most of you have probably seen tissue culture plates. And that's what you work with. They started with 100 liter, and that's why they're using HeLa cells, because you can grow them on this scale. You can probably grow a lot of things on this scale, now. We have much better ways than-- this was a long time ago. So their approach was-- so the first thing-- I got sidetracked again. But the first thing is that if it's in the nucleus, what would you do to try to enrich in the transcription factor? What would be the first thing you might do after you've isolated the cells? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: I can't hear you. AUDIENCE: Maybe, something involving nuclear-binding proteins that transport things into [INAUDIBLE]---- that have transported things into the-- JOANNE STUBBE: OK. So I still can't hear you. You're going to have to speak louder. I'm going deaf. And I will get a hearing aid, but I don't have one now. So you have to speak loud, and you have to articulate. Yeah? AUDIENCE: Wait, so just the absolute first step? JOANNE STUBBE: Yeah. AUDIENCE: How we're just lysing cells and pelleting them? AUDIENCE: Yeah. JOANNE STUBBE: But is there a certain way you would pellet them? AUDIENCE: You would have to do a sucrose gradient. JOANNE STUBBE: You would do some kind of gradient to try to separate the-- well, you have to pellet the cells first. But then, what you want to do is separate the nucleus from all the organelles. The issue is-- we already told you this-- most of the protein is not found in the nucleus. And that was part of this. They didn't know that at all, but that's what they did. 
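To make the 38,000-fold number above concrete, fold purification is just the ratio of specific activities (activity per mg of total protein) between the final and starting material. The numbers in this sketch are invented placeholders, chosen only so the ratio comes out near the quoted 38,000-fold; they are not Brown and Goldstein's actual values.

```python
# Fold-purification arithmetic (placeholder numbers, not the published values).
start_total_protein_mg = 50_000.0   # hypothetical: total protein from ~100 L of HeLa cells
start_total_activity = 1.0e6        # hypothetical activity units (e.g., a DNA-binding assay)

final_total_protein_mg = 0.05       # hypothetical: tens of micrograms recovered
final_total_activity = 3.8e4        # hypothetical

spec_act_start = start_total_activity / start_total_protein_mg
spec_act_final = final_total_activity / final_total_protein_mg
fold_purification = spec_act_final / spec_act_start
yield_percent = 100.0 * final_total_activity / start_total_activity

print(f"fold purification ~ {fold_purification:,.0f}, yield ~ {yield_percent:.0f}%")
# With these placeholders: ~38,000-fold purification at a few percent yield,
# which is the scale of effort being described for a low-abundance transcription factor.
```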
They did some kind of a gradient to separate nuclei from the rest of it, because they were trying to enrich, which was a totally reasonable thing for them to have done. OK. So I'm not going to write that down, but that's the first thing they did. The second thing they did is they made an affinity column all out of the SRE. So this is a nucleotide affinity column. And they ended up using that a couple of times. And they ended up using a couple of other kinds of columns and eventually got protein out after a lot of effort. After a lot of effort, they got protein out. And the size of the protein-- so they went through this column. And they went through additional columns. I'm not going to go through the-- and they ended up with proteins that were actually smaller than the SRE-BP, but they still bound to the affinity column. So they ended up with proteins-- I don't remember. And again, the details of this really aren't so important. But they ended up with smaller proteins. Somewhere, I have the size written down. 59 to 68 kilodaltons. So either protein had been degraded, or we will see the protein has been processed, or was being processed during all this workup. And there are many things that could have happened to this process. But what this allowed them to do-- and this was the key to allowing them to do this better-- was they could generate antibodies. So they took this protein that they isolated, and they generated antibodies. And we're going to be talking about antibodies this week. But we're going to be, also, talking about use of antibodies with fluorescent probes, the last recitation, as well. So what did this allow them to do? The antibodies, then, allowed them to go back into the cells and look for expression of SRE-BP. And instead of finding it in the nucleus, what they found was that most of it was localized in the ER membrane. So these antibodies revealed that SRE-BP is predominantly in ER membrane. And again, this question of antibodies-- which Liz brought up-- and the question of specificity, and, moreover, the question of sensitivity is really key. Because now, when you're looking at eukaryotic cells, we know things move around. They move around all over the place, and they move around dependent on the environment. So you could easily miss location. This might be the predominant one under the conditions you looked, but it could be somewhere else. And I think they didn't realize so much about that back in these days, but we now know that a lot. So anyhow, that was a surprise. And then, that provided the basis for them going back and thinking much more about this system. And so what I'm going to show you is the model that's resulted. And if some of you have started working on problem set 7 that's due this week-- the problem deals with some of the experiments-- then I'm going to tell you what the answer is. And you're supposedly looking at the primary data from where this model came-- a small amount of the primary data from where this model came. OK. So this is the model. And I'll write this down in minute. But the model is at low sterol concentrations. So at low sterol concentrations, what do we want to do? We want to-- this transcription factor-- I should write this down somewhere. But the transcription factor activates transcription. It could repress transcription, but it activates. So if you have low sterols, what do you want to do? You want to turn on the transcription factor. So it needs to somehow move from this location in the membrane to the nucleus. 
So that's where this model is coming from. And we'll walk through it step by step. So what you'll see-- these are cartoons for the factors we're going to be looking at. So this SRE-BP has two transmembrane regions. We'll come back to that. This little ball here, which turns out to be at the N terminus, is a helix-loop-helix, which is a DNA-binding motif. We'll come back to this in a minute. I'm just giving you an overview, and then we'll come back. There's a second protein. And this is the key sensor that we're going to see of cholesterol levels, called SCAP. And it also resides in the ER membrane. And it has a little domain on it that recognizes and interacts with part of SRE-BP. And so this is located in the lumen. And these guys, especially this guy, are located in the cytosol. And we don't want it inside the lumen, we want it on the outside so it can go into the nucleus eventually. So what happens is somehow, when you have low sterols-- and we're going to look at the model for how this happens-- both of these proteins, SCAP and SRE-BP, are transferred by coated vesicles-- we'll come back to this in a minute-- into the Golgi. So they go together into the Golgi. And I would say that, right now, a lot of people are asking the question, once you do the processing to get SRE-BP into the nucleus, what happens to SCAP? And there are lots of papers, now, that are focusing on the fact that SCAP can recycle from the Golgi back to the ER. So it's never that simple. These things are continually going on, so that not that much is wasted. So this can actually recycle. And I'm not going to talk about that. And then, in the Golgi apparatus, there are two proteins, called S1P and S2P. And they're both proteases. We'll come back to this in a second. So what's unusual is that we want to get this guy into the nucleus. And one of the proteases cuts here. So then we get this piece. And then the second protease cuts here, and then we get a little soluble piece that can move into the nucleus. Now, this is also revolutionary, in that nobody had ever known there were proteases that are actually found in membranes. Now, we know there are lots of proteases found in membranes. Do any of you work in Matt's lab? What is the factor that is regulated just like SRE-BP? Do you know? OK. So go look up ATF4. Anyhow, to me, what's common is that once we found this, we've now discovered it in many other systems. So this system is a paradigm for many things that people have discovered since the original discovery. But of course, the thing that's amazing-- first of all, this was amazing. The fact that this thing is in the membrane and gets to the nucleus is amazing. And at low cholesterol, what you want to do is activate transcription. And you saw all the genes that could be activated in the previous slide. And it's complicated. There are many factors involved. And so the key question, then, is how do you sense this movement from one place to the other, and what do we know about that? So what I'm going to do is look a little bit at the model. So the model will start with-- and the players. So this is part 2-- the players. And the players are-- so if you look at the ER membrane, what we have is two domains. And whenever you see a line through the membrane, that means a single membrane-spanning helix. We see that a lot. So I'm not going to write that out. But this is really sort of a single transmembrane helix. And the key thing is at the N terminus, you have the helix-loop-helix. And this binds to DNA.
So this is a DNA-binding motif. And so this is the protein SRE-BP. And the second protein-- and this is the protein you're focused on for your problem set-- is SCAP. 1, 2, 3, 4, 5, 6, 7, 8. So it has eight transmembrane helices. And they've studied all of this using some of the methods that you're going to be looking at in your problem set. And to me, there's a couple of things that we're going to be talking about in detail, but your problem sets are focused on-- all right. So I haven't really shown you where the loops are, but there are a couple of loops, loop one and loop six, which is what the problem set is focused on. And how do you know these are interesting and important? And we'll come back to this in a little bit. So now, at low sterols-- so we want to turn on the machinery to make more cholesterol-- so that's low sterols. And one of the key questions is, what is the structure of the sterol? Can more than one do that? We'll see different sterols turn on different domains. And we'll see that there's a domain within SCAP-- so this protein here is called SCAP. And we'll see that SCAP has a sterol-sensing domain, as does another protein called INSIG, as does HMG-CoA reductase. So somehow, you have these transmembrane regions that can bind some kind of sterol, that then changes the conformations, that is going to allow all of this chemistry to happen. So here, for example-- we're not going to talk about this now. We're going to talk about that in the last lecture. But here's SCAP with its sterol-sensing domain. So what happens, then, is this has to move. And as I said before, this can return. This moves to the Golgi. So this is the Golgi. And the Golgi is complicated. And so I haven't defined where within the Golgi this is. And these are transferred by COPII vesicles. OK. And so what you then have, again, is your 1, 2, 3, 4, 5, 6, 7, 8. And you have your sterol-responsive element-binding protein. And now what you see-- and so nothing happens in terms of processing until you get into the Golgi. And then, there's one protein, S1P, which is a protease. And I'm not going to go into the details of it, but if you look over here, what's unusual about this protease? If I gave you this cartoon, what would you say about that protease? Is it unusual compared to, say, trypsin or chymotrypsin? Can you see it? You can pull out your handouts. What are the catalytic groups? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Huh? Where have you seen those before? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Yeah, so they're aspartic acid, histidine, and serine. You see these over and over and over again. There are 150 serine-type proteases. OK. But what's unusual about this? Huge-- huge. OK. And then, the other thing that's unusual about it is that you have a transmembrane region. So it's completely different from typical serine proteases, so there's got to be some little domain that's actually doing all of this. So I just want to note that it's huge. But you could still pick up D, H, S and know that that's the protease domain. And you could study that. You could mutate the serine to alanine or something. And then you have S2P. So we've gotten here. And this protease ends up clipping. So within the membrane-- so somehow, these things have got to come together. And the active site of this protease needs to clip SRE-BP. So it does that. And when it does that, what you end up with-- I'm not drawing the whole thing out, but what you end up with, then, is your helix-loop-helix. So this part is still embedded in the membrane.
And then you have your second protease. I don't know. I probably have the wrong numbers. So this is S2P. And if you look at S2P, what's unusual about it and what people picked up on is that it has another little sequence motif. And this is what you see over and over again, in enzymology. Once you sort of know something in detail, you know, even though there's no homology between the proteins at all, you can pick up little motifs, just like you can pick out little motifs that are zip codes that move things around inside the cell. This little motif is the key player that tells you that this is probably a zinc-dependent metalloprotease. So this turns out to be a zinc metalloprotease. And this, then, does cleavage. But now, we actually-- it's pretty close to the membrane. OK. It does cleavage. And now what you've done is you've released this thing. It pulls itself out of the membrane. And what you can do, then-- I'll just put this in here for a second. But what you can do now is we now move to the nucleus. And we have our pieces of DNA. And we have our SRE. And now we have this helix-loop-helix that activates transcription. OK. So this is really sort of what I just told you in the other cartoon. And I just want to repoint out again that we now believe that these SCAP proteins can recycle back into the ER and be used again. And so controlling the levels of all these things-- we're going to see at the very end-- is also related to protein-mediated degradation that we're just now beginning to appreciate. OK. So here's the model. This now sets the stage for you to solve problem set 7 that's due. Because the key question you want to ask yourself is how do we know about the structure of SCAP. And so problem set-- sorry, I'm over again. But problem set 7 is focused on how do you know that this little loop here, this little loop here, and this little zip code plays a key role in this whole process of moving from the ER into the Golgi. OK. And we'll come back and talk about this briefly. We're not going to talk in detail about the experiments. And then we're going to move on and look at the post-transcriptional regulation of cholesterol homeostasis.
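To summarize the model just drawn, here is a compact, qualitative sketch of the regulatory logic: at low sterols SCAP escorts SRE-BP to the Golgi in COPII vesicles, S1P and then S2P cleave it, and the released helix-loop-helix fragment activates SRE-dependent transcription; at high sterols the complex is retained in the ER (INSIG's role comes in the next lecture). This is only a cartoon of the logic, not a quantitative model, and the threshold is an arbitrary placeholder.

```python
# Cartoon of the SRE-BP processing logic described in this lecture (qualitative sketch only).
def srebp_response(sterol_level, threshold=1.0):
    """Return the qualitative fate of SRE-BP for a given sterol level (arbitrary units)."""
    if sterol_level < threshold:
        steps = [
            "SCAP/SRE-BP exit the ER in COPII-coated vesicles",
            "Golgi: S1P (Asp-His-Ser protease) makes the first cut",
            "Golgi: S2P (zinc metalloprotease) cuts near/within the membrane",
            "helix-loop-helix fragment enters the nucleus, binds SRE, activates transcription",
        ]
        return "SRE-regulated genes ON", steps
    # High sterols: SCAP senses sterol and the complex stays in the ER (with INSIG; next lecture)
    return "SRE-regulated genes OFF", ["SCAP/SRE-BP retained in the ER membrane"]

state, steps = srebp_response(sterol_level=0.2)
print(state)
for s in steps:
    print(" -", s)
```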
MIT_508J_Biological_Chemistry_II_Spring_2016
15_PK_and_NRP_Synthases_1.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: --by talking about ClpX. And then we're going to move into module 4-- which is the last module before spring break-- on synthases and assembly-line biosynthesis. So basically last time, where we left off is, we went over experiments that were done to look at denaturation, translocation, and degradation by ClpXP. And we closed with a question about what actually is going on in ClpX with this ATP binding and hydrolysis to allow for these condemned protein substrates to be unfolded and translocated into the degradation chamber. And I left you just with the statement that although we think about ClpX as this hexamer that has six identical subunits, what studies have shown is that there's some inherent asymmetry within this AAA+ ATPase. And that's what we're going to talk about a little bit. So this is just a slide from a few lectures ago that's showing the top view and side view of ClpX and how we've thought about this. And I think these studies just really highlight how complicated these machines are and that there's still a lot more we need to figure out here. So as I said last time, this asymmetry comes from whether or not each ClpX subunit is bound to nucleotide. And so basically, from looking at many different crystal structures, the ClpX subunits can be divided into two different types based on conformation here. And so in thinking about this, we want to first think about the ClpX domain organization. And if we just think about this, what ClpX has is an N-terminal domain, followed by a domain that's called the large domain, and then followed by a small domain. So 633 amino acids, just to give you a sense of size, and about 69 kilodaltons per subunit. And so what we're going to focus on are the large and the small domains and what's observed from many different crystal structures. And so these two different types of subunit have been described as loadable and unloadable, and that depends on whether or not nucleotide is bound. So if we consider these two types, just thinking about the large and small domains, we have this loadable arrangement, which binds nucleotide. And in cartoon, something like this. So we have the large domain. We have the small domain. And we have this region that's called a hinge. So this is one ClpX subunit. So ATP binds. And so the other type is described as unloadable. And this type of subunit does not bind nucleotide when in this unloadable conformation. And so we can draw this. Here, again, we have the large domain. And there's a change in conformation. And here's the small domain here. So what's been found from looking at many crystal structures is that within the ClpX hexamer, there's an arrangement of these loadable and unloadable subunits. So in many crystals, what's found is that there's four loadable-- I'm just going to denote these with "L"-- plus two unloadable subunits arranged with about two-fold symmetry, so LULLUL. So there's some asymmetry in the subunits. And so also from these crystal structures, there are some more observations that we don't see with just these cartoons of the 6-mer. So we can learn about how subunits interact, of course, and this is what's shown.
So if we look at these structures and consider how these subunits interact, what we find is that the small domain of one ClpX subunit interacts with the large domain of the adjacent ClpX subunit. And so we can draw this. Basically, if we consider a large domain-- and let's say this is subunit 2-- then what we find is that there's the small domain and then the large domain of the subunit next door. So let's call this subunit 1. So here's the hinge of subunit 1, and ATP binding happens in here. So we can think about this arrangement. And then what's been defined is something called a rigid body. And so this rigid body is comprised of the large domain of one subunit and the small domain of the next. Rigid body. So large domain of one ClpX and small domain of another subunit that's adjacent. So in thinking about this, we can consider the ClpX hexamer in another way. So how I initially presented it to you when we introduced these oligomers is just as a 6-mer, right? 6 subunits. But another way to think about ClpX is that it's actually six rigid bodies that are connected by hinges, where each rigid body has a component from two subunits, a large domain from one and a small from another. And so the hinges are within a single subunit, based on this cartoon, where ATP binds. And so the thinking is that ATP binding and hydrolysis results in changes in the hinge geometry, and that this change in conformation in the hinge with ATP binding and hydrolysis allows for conformational change in another subunit here. So six rigid bodies connected by six hinges, effectively, as opposed to just six standalone subunits. Each subunit's communicating with one another here. So this is pretty complicated, right? It's another level of sophistication within this hexamer here. So what about these loadable and unloadable conformations? I've told you that in these crystal structures, what's seen often are these four loadable and two unloadable subunits with a particular arrangement. So we can ask the question, do these individual subunits maintain the same conformation during these attempts to denature and translocate polypeptides? So is one subunit just committed to being loadable and another subunit committed to being unloadable? Or do they switch dynamically? And so recently, there were a number of studies looking at that. And effectively, as of a few years ago, many studies support switching by a given subunit. And they also indicate that every ClpX subunit must bind to ATP at some point during these cycles of unfolding and translocation. But they're not all doing it at the same time. So the way to think about this is that there's some dynamic interconversion between these loadable and unloadable subunits within the hexamer, and somehow-- yep? AUDIENCE: I just want to ask you-- ELIZABETH NOLAN: Going to make trouble? JOANNE STUBBE: --a question. Yeah. So when you have all these structures, are they all with an ATP analogue? ELIZABETH NOLAN: I don't know the answer to that. JOANNE STUBBE: OK. Because ATP analogues-- you guys have already seen ADPNP or ADPCH2P-- really have very different properties when you study these ATPases. So if these-- and probably they don't have ATP because they probably-- ELIZABETH NOLAN: Right. They want to get a stable-- JOANNE STUBBE: So anyhow, that's something to keep in the back of your mind. ELIZABETH NOLAN: So, is this an artifact from use of a non-hydrolyzable ATP analogue-- that's what JoAnne's suggesting.
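A small bookkeeping sketch may help keep the geometry just described straight: six subunits, each with a large and a small domain joined by a hinge; a "rigid body" pairs the large domain of one subunit with the small domain of an adjacent one; and in many crystal structures the nucleotide states follow the L-U-L-L-U-L pattern. This is only an illustration of the bookkeeping, not a structural model, and which neighbor contributes the small domain is written here as an assumption (the lecture just says "adjacent").

```python
# Bookkeeping sketch of the ClpX hexamer described above (illustration, not a structural model).
from dataclasses import dataclass

@dataclass
class Subunit:
    index: int
    state: str            # 'L' = loadable (binds nucleotide), 'U' = unloadable

pattern = "LULLUL"        # arrangement seen in many crystal structures (pseudo two-fold)
hexamer = [Subunit(i, s) for i, s in enumerate(pattern)]

# Each rigid body = large domain of subunit i + small domain of a neighboring subunit;
# here the neighbor is taken as (i - 1) mod 6 for illustration. The hinge (where ATP binds)
# sits within a single subunit, between its own large and small domains.
rigid_bodies = [(f"large({i})", f"small({(i - 1) % 6})") for i in range(6)]

for sub in hexamer:
    print(f"subunit {sub.index}: {'binds nucleotide' if sub.state == 'L' else 'no nucleotide'}")
print("rigid bodies:", rigid_bodies)
```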
JOANNE STUBBE: And there's many examples of that in the literature. Everybody uses it. It's just something you need to keep in the back of your mind. That's the best we can do. ELIZABETH NOLAN: So next week, someone should ask about that during recitation. So what about the mechanical work? How this is often depicted, in terms of grabbing and pulling on a polypeptide substrate, is via these rigid bodies. And we're not going to go into details about this, but just to describe the typical cartoon picture effectively, imagine we have some polypeptide that needs to enter the degradation chamber. So those pore loops we heard about that are involved in substrate binding are in the large domain of ClpX. So here we have one large domain, and then we can have the small domain of the adjacent subunit here. And just imagine here we have another large domain with its pore loop. And then we'd have the adjacent subunit here. So effectively, it's thought that these pore loops in the large domains grip the substrate and help drag the substrate to allow for translocation into the degradation chamber. So this would be going to the chamber, that direction here for that. So somehow the ATP binding and hydrolysis is allowing this to occur-- so to ClpP here for that. So next week in recitation, you're going to have a real treat because an expert, Reuben, will be discussing some single molecule methods that have been applied to studying this degradation chamber. So bring your questions to him because he really knows the state of the field right now for this. So we've talked a lot about how the substrate needs to get in. We have the ssrA tag. We have all of this ATP consumption, unfolding, and translocation by ClpX. And then we talked about the serine protease mechanism in terms of how peptides are degraded in the chamber. So then the final question I'm just going to touch upon is, how does the polypeptide that's been degraded get out of the chamber? So ClpXP will give products that are 7 to 8 amino acids in length, so short polypeptides. So how are they released? And we can think about two possibilities for how these polypeptides are released. One is that they're released through the axial pores. So somehow those pores that allow polypeptide substrate to go in also allow product fragments to go out. And then the second option is that there's release through transient side pores between the ClpP 7-mers. So effectively, if we imagine coming back to our ClpP, we have a 7-mer-- back-to-back 7-mers, do the fragments come out, say, of the hole? Or somehow do they come out from this region here? To the best of my knowledge, this is a bit unclear, and I don't think they're mutually exclusive. So questions have come up. If they're to come out of an axial pore, does that mean ClpX has to be dissociated? In terms of this equator region, there are structures showing that this degradation chamber can breathe. And there's a picture of that in the posted notes from Friday where you can see the opening here. And there have been some experiments done where people have put cysteines in this region by site-directed mutagenesis. So you can imagine, for instance, if you have a cysteine here and a cysteine here, and you oxidize to form a disulfide such that those two 7-mers are locked together. You can ask, if we load the chamber with small polypeptides and we have these effectively cross-linked by disulfides, can the polypeptides get out? 
And then if we reduce this to have them no longer attached to one another, do those polypeptides stay put or not? Those experiments gave some evidence for release of peptides through this region here, but there's also evidence for release of peptides through the pore. And in terms of cartoon depictions in the lecture notes, if you take a close look, you'll see that both come out there. So I'd say if you're curious about that, you can read some of the literature and come to your own conclusions. One last point on the Clp system before we move on to module 4, you should just be aware that there are other Clp family members. So not only ClpX and P. And so in the Clp system-- actually, I'm going to make one other point after this too, about degradation chambers. So there are players ClpA, ClpB, in addition to ClpX. So these are all three different AAA+ ATPases. And you actually encountered ClpB last week. So this is Hsp100, which came up in question 2 on the exam there by another name. And then in addition to ClpP, there's also ClpS and some other players here for that. They each have their own personality within protein quality control here for that. And then we've only looked at this degradation chamber from bacteria. You might want to ask the question, what happens in other organisms? And the answer is that the complexity varies and systems become tremendously more complex as you move from bacteria into eukaryotes there. And so if we consider the different degradation chambers, what do we see? So we find these proteasomes in all forms of life. And as I just said, the level of complexity varies depending on the organism. And so what we've seen with ClpP is the most simple system where we have two rings that have only one type of subunit. So just say E. coli. One type of subunit. What happens if we go to archaea? We find that we have four rings, each of which is a 7-mer. And these four rings include two different types of subunits. So I'll call these alpha and beta. So what we find is that there's a 7-mer of 7 alpha subunits, then two 7-mers that each have 7 beta subunits, and here a 7-mer with alpha. So we see two types of subunit and four rings. So then what about yeast? Tremendously complex. So we have this architecture again of four rings, organized alpha, beta, beta, alpha. But what we find in this case is that in each of these-- I'm not going to draw it like that, but each of these has seven different subunits. There's a depiction of this in the notes. So just imagine-- how does this get assembled? I have no clue. But somehow each of these heptamers has to be assembled with seven different subunits. And then they're put together in this series of four rings. And then as you'll see after spring break in JoAnne's section, the eukaryotic proteasome has this 19S regulatory particle that's involved in recognizing condemned proteins that have polyubiquitin chains. And compared to the ClpX ATPase, it's much, much more complex. So there are many different proteins that constitute this necessary part of the machine. But there is a hexamer, ATPase hexamer, within there to facilitate translocation of the polypeptide into the degradation chamber. So some of this will come back again in the latter half of the class. So with that, we're going to close on degradation and move into module 4, which is focused on macromolecular machines that are involved in the biosynthesis of natural products, specifically polyketides and nonribosomal peptides. 
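Before leaving the degradation chambers, here is a small bookkeeping sketch, in Python, of the ring architectures just described. The names and counts come straight from the discussion above; the dictionary layout itself is just one convenient way to tabulate them, not anything from the lecture slides.

```python
# Compact summary of the degradation chamber architectures described above.
# ClpP: two identical 7-mer rings; archaeal 20S: alpha7-beta7-beta7-alpha7 with
# two subunit types; yeast 20S core: four 7-mer rings built from seven distinct
# alpha and seven distinct beta subunits.
chambers = {
    "E. coli ClpP":   {"rings": 2, "subunits_per_ring": 7, "distinct_types": 1},
    "archaeal 20S":   {"rings": 4, "subunits_per_ring": 7, "distinct_types": 2},
    "yeast 20S core": {"rings": 4, "subunits_per_ring": 7, "distinct_types": 14},
}

for name, c in chambers.items():
    total = c["rings"] * c["subunits_per_ring"]
    print(f"{name}: {total} subunits, {c['distinct_types']} distinct subunit type(s)")
```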
And so we're completely taking a loop back to thinking about a biological polymerization, like what we were thinking with the ribosome from the process of breaking down a polypeptide. And so where are we going? We can think about assembly lines, although this is a helpful way on the board to think about these systems. But it's not really what they look like. And you'll learn about that in recitation this week. Yeah? AUDIENCE: Could you explain the interaction between ATP and the hinge area? ELIZABETH NOLAN: OK. So the ATP binding site is just rewinding here in that hinge region. And there's going to be conformational change in the hinge with ATP binding and hydrolysis there. And that's sufficient in terms of the level of detail for this. But the main thing to keep in mind, each subunit binds ATP. But on the basis of the information gathered with the caveats JoAnne brought up, different subunits bind ATP at different times in the cycle. AUDIENCE: OK. Thank you. ELIZABETH NOLAN: And changes in this subunit, conformational changes that result from that, can be translated to the next door subunit here. AUDIENCE: OK. ELIZABETH NOLAN: OK. So where are we going? By a week from now, you should have a good handle on how to think about the biosynthesis of structures like erythromycin, of penicillin. These are products of assembly lines. And so where we'll go is with a brief overview of fatty acid biosynthesis and then look into polyketide synthase and nonribosomal peptide synthetase assembly lines here. And then some case studies. So on the topic of ATP, where we just went back to with ClpX, just taking a look here, what do you know about these systems in ATP by the names? This is just a little language use and definition. So there's a subtle difference here. What's the difference? AUDIENCE: Synthase versus synthetase? ELIZABETH NOLAN: Yeah. And what does that tell you right off the bat? About ATP. So it's a subtlety, right? Synthase is a general term. Synthetase indicates ATP is involved. So as we'll see, these nonribosomal peptide synthetases employ ATP. And we're going to see chemistry very similar to what you saw with the aminoacyl-tRNA synthetases in terms of activating amino acid monomers. But in this case, the machine is forming a nonribosomal peptide rather than a ribosomal peptide here. If you are not familiar with fatty acid biosynthesis, I highly encourage you to go do some review, either from your 5.07 notes last term if you were in the class or from a biochemistry book. And there'll be some additional slides of overview information posted online. So we'll just touch upon it today but not go into tremendous detail here. So what are our questions for this module? I think for most everyone in the room, this module will contain the most new information from the standpoint of a new system compared to what we've talked about so far. So what are polyketides and how are these molecules biosynthesized by polyketide synthases? What are nonribosomal peptides and how are they made by these machines called nonribosomal peptide synthetases? And what we're going to look at is the assembly line organization, so effectively the organization of domains that provide these linear polymers. So what is the assembly line organization and logic for PKS? And likewise for NRPS. And then we can ask, how can a given assembly line for a given PKS or NRPS natural product be basically predicted from the structure of the natural product? 
So you should be able to work back and forth in terms of looking at a structure and coming up with a biosynthetic prediction and also seeing biosynthetic machinery and getting a sense as to what that small molecule metabolite's backbone might look like. How are these studied experimentally? And we'll look at the biosynthesis of a molecule called enterobactin as a case study. And so one thing I'll just point out right now is that these synthases and synthetases do not look like an assembly line. And we'll draw domains in a linear order which really facilitates thinking about the chemistry, but the structures are not just a line of domains or proteins next door to one another. And this week in recitation, you'll get to see some cryo-EM studies on fatty acid synthase and related machines, which will give you a sense of their dynamics. So just a review. If we think about template-dependent polymerizations in biology, we're all familiar with DNA replication, transcription, and translation. And what you'll see in this unit is that these template-driven polymerizations occur in the biosynthesis of natural products here. And effectively, these assembly lines, in a way, provide this template. So they're small molecules being biosynthesized by microbes using some pretty amazing machinery. So when we think about template-driven polymerizations, we think about an initiation process, elongation process, and termination. We saw that with the translation cycle. And we'll see the same type of systems here. So what do some of these structures look like? Here are just some examples, on the top, of polyketides-- two examples. They look very different at first glance, and they are. So we have tetracycline. We have four fused 6-membered rings. It's an aromatic polyketide, an antibiotic. We have this erythromycin here, which is a macrolide. We encountered macrolides in the translation section because they bind the ribosome, another type of antibiotic. If we look at some nonribosomal peptides, these too can be used clinically. We see the penicillins. So we have a fused 4- and 5-membered ring system here, a beta-lactam. This comes from three amino acid building blocks initially. We have vancomycin. This is an antibiotic of last resort. And this structure looks really quite complicated, but what we'll see is that it's based on seven amino acid building blocks. So it's the 7-mer peptide backbone that gives rise to this structure here for that. And then we see there's some sugars, so these can be put on by other enzymes here. So on top we see a lot of ketones and OH groups. Those are good hints that maybe polyketide logic is being used. Here we see a number of peptide bonds, amide bonds, a good indicator of NRPS at play. And here, just to point out, these systems get very complex. And there are natural products out there that are biosynthesized from a combination of polyketide synthase logic and nonribosomal peptide synthetase logic here. These include molecules like yersiniabactin, which is an iron chelator produced by Yersinia pestis and some pathogenic E. coli, and this immunosuppressant rapamycin, as examples. So as we move forward, I put a lot of structures of small molecule metabolites in the slides. You can go back and use them as a way to study and try to make predictions about what the machinery at play is. For instance, what gives all of these heterocycles? How are those made? We'll see the assembly line does that. So what organisms produce these molecules? Largely, bacteria and fungi. 
And there are some correlations out there, I'll just point out, related to genome size and the number of metabolites being made. So bioinformatics guides a lot of current studies of the biosynthesis of these types of molecules. So you can imagine that you sequence a genome. You have some information about gene clusters. So these are groups of genes where the proteins work together to biosynthesize the molecule. And often, the genes that encode proteins in these metabolites are clustered. And so bioinformatics approaches can help find these. What's found is that for bacteria, some phyla are more prolific producers of these molecules than others. And what's been shown in a general way is that organisms with small genomes-- so something like E. coli-- produce fewer of these metabolites. That's not to say none. So enterobactin, which we'll look at for a case study, is made by E. coli. But they don't make as many. And effectively, organisms with larger genomes produce more. And so here is just a correlation between the number of genes. And the genome size of the organism where they see around 3 Mb, there's a switch here. Often, these molecules-- yeah? AUDIENCE: Is there any hypotheses about an evolutionary driving factor for the development of this machinery and why it correlates to genome size? ELIZABETH NOLAN: If there is, I don't know. I don't think about evolution very well, quite frankly. What is thought is that many of these molecules are thought to be involved in defense and that an organism with a smaller genome size uses other strategies. And so for instance, E. coli, which I cited as a small genome, will use a number of ribosomal peptides as defense molecules that get post-translationally modified after the fact. But why that organism chooses to do that versus say something like Streptomyces that produces many, many different natural products, I'm not sure about that. So let's look at an example of a gene cluster, just so you get a sense of how much machinery is required to do the full biosynthesis of a molecule. So this is for a nonribosomal peptide shown here. It has some structural similarities to the vancomycin we saw on a prior slide, and it is a member of the vancomycin family. So this gene cluster for the biosynthesis of this metabolite contains 30 different genes and is depicted here. So each one of these arrows indicates an open reading frame. So each one begins with a start, ends with a stop codon. And it's assumed to be the coding sequence of the gene. And so what is encoded in these 30 genes? Well, first there are the genes for what we call the assembly line. And if it isn't clear what assembly line means, as we move forward through this week, it will be. So there's genes required to make the 7-mer polypeptide backbone. There's genes required for modification of the backbone. So how do these sugars get attached, for instance? Those are going to be some tailoring enzymes. And then if you take a close look, there's a number of non-proteinogenic amino acids in this molecule, and that means they have to come from somewhere. And so this gene cluster also includes genes that are required for the biosynthesis of those monomers. So there's a lot of effort going in to making this molecule by some organism. And so presumably, under some set of conditions, it's important. So moving towards the chemistry, with that background in hand, what are some points to make? 
So what we'll learn and see is that the assembly lines that produce the polyketides and nonribosomal peptides are macromolecular machines. So there's dedicated macromolecular machines for the biosynthesis of these secondary metabolites. And so what are secondary metabolites versus a primary metabolite? So what's a primary metabolite? AUDIENCE: I'm not even totally sure how to define metabolites. Isn't metabolites what goes in? Or what comes out? ELIZABETH NOLAN: Rebecca? AUDIENCE: Or easily produced directly from the materials the cell's consuming? ELIZABETH NOLAN: So presumably, the cell needs to get materials to biosynthesize the secondary metabolites too, right? Somewhere, these amino acid monomers or the monomers that are used for polyketide synthetase need to-- they'd have to come from somewhere, right? So are primary metabolites important for growth? AUDIENCE: Yes. ELIZABETH NOLAN: Yes. Development? Reproduction? AUDIENCE: Yes. ELIZABETH NOLAN: Yeah, right. Under normal conditions, right? We're in trouble if we don't have our primary metabolites there, whether they're ingested or biosynthesized. What about a secondary metabolite? Just taking that-- AUDIENCE: I'm guessing it's not necessary. AUDIENCE: --something we can make from primary metabolites? ELIZABETH NOLAN: No. Well, you can. You can. So a secondary-- AUDIENCE: --necessary? ELIZABETH NOLAN: Yeah. A secondary metabolite is not required for normal growth, development, reproduction. So for some reason, under some circumstances of need, these secondary metabolites get produced. So for some of these antibiotic molecules, maybe the organism needs to defend itself. In the case of enterobactin or yersiniabactin, maybe that organism needs iron. And so it's producing a molecule that will help it obtain that there. So what is going on? We've seen some pretty complex molecules. What we're going to see is that these assembly lines convert simple acid monomers, if it's a polyketide synthase or amino acid monomers for a nonribosomal peptide synthetase, into linear polymers. So we're going to look at template-driven polymerizations that initially give linear polymers. And in the case of PKS, this is very similar to fatty acid biosynthesis. What we see is that the assembly lines allow for iterative additions of malonyl and methylmalonyl units. And they catalyze carbon-carbon bond formations. In the case of nonribosomal peptide synthetases, what we'll see is that these allow for condensations of amino acids to form peptide bonds and effectively form nonribosomal polypeptides. So polypeptide synthesis without the ribosome. So even though the PKS and NRPS are forming a different type of bond and that requires different chemistry, what we'll see is that they use very similar logic. And just getting the logic sorted out initially makes life much easier down the road. So take some time to look over the depictions in the notes outside of class as we go forward. So these assembly lines use acyl or aminoacyl thioesters as the activated monomer units. So then how do we get from this linear polypeptide to some more complex structure? The short message on that is that the, quote, "polymers" that are produced-- and they may be short, right? We just saw-- they are short, 7 amino acids for vancomycin. They can undergo further elaboration to give these complex structures. So there can be tailoring enzymes that work on the products of the assembly line. 
Or there can be domains in the assembly line that give additional activities that allow for methylation or cyclization here. So we can think about fatty acid synthase as a paradigm here. And so if we think about fatty acid biosynthesis making some molecule like this oil here, just as brief overview in the last few minutes of class. Fatty acids are synthesized by FAS. And what happens is that there's elongation by one unit at a time. And each unit provides two carbons. So there's two carbon atoms per elongation. And so hopefully you're all familiar with two ways to form a carbon-carbon bond, at least related to biochemistry, one of which is Claisen condensations. So Claisen condensations allow for carbon-carbon bond formation and join the units. To keep in mind, the monomers are always thioesters, not oxoesters. And for fatty acid biosynthesis, the two monomer units are shown here. So we have a starter and an extender, acetyl CoA or malonyl CoA here. And here we have coenzyme A. So just as a brief review, if we think about these monomer units-- so here we have acetyl CoA. So what can we say about this guy here, in this thioester? So is this acidic or not? Compared to an oxoester. How many of you have heard about fatty acid biosynthesis? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: So why are thioesters used and not oxoesters? AUDIENCE: [INAUDIBLE] use the other end? ELIZABETH NOLAN: OK. So we'll go into a little more detail on Friday to make sure the chemistry is straight here because I'm not certain it is. So-- AUDIENCE: Is oxoester referring to not that [INAUDIBLE]---- ELIZABETH NOLAN: OK. So for Friday, think about a thioester versus an oxoester, and how do properties differ? And why might we want to be using thioesters? And also review the Claisen condensation because that's the chemistry that's going to be happening to form the carbon-carbon bonds in the fatty acid synthase and in the polyketide synthases. And what we're going to see is that the monomers in each case, they're tethered as thioesters. So why is that? And I will turn around and point at somebody, and you can let us know. Are you excited? OK. So you're off the hook for Wednesday. I need to be out of town, and I'll see you on Friday.
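As a concrete check on the two-carbons-per-elongation logic just outlined, here is a short Python sketch of the stoichiometry for palmitate (C16), the standard example. The cycle count and monomer counts follow directly from the logic above; the NADPH and CO2 numbers are standard textbook values (two reductions and one decarboxylative Claisen condensation per cycle), not figures quoted in class.

```python
# Stoichiometry sketch for de novo synthesis of an even-chain fatty acid by FAS.
# One acetyl starter supplies the first two carbons; every elongation cycle adds
# two carbons from a malonyl extender via a Claisen condensation.

def fas_stoichiometry(n_carbons):
    assert n_carbons >= 4 and n_carbons % 2 == 0, "even-chain fatty acids only"
    cycles = (n_carbons - 2) // 2
    return {
        "acetyl_CoA_starter": 1,
        "malonyl_CoA_extenders": cycles,   # one per condensation
        "elongation_cycles": cycles,
        "NADPH_consumed": 2 * cycles,      # ketoreductase + enoyl reductase steps
        "CO2_released": cycles,            # decarboxylation drives C-C bond formation
    }

print(fas_stoichiometry(16))  # palmitate: 7 cycles, 7 malonyl-CoA, 14 NADPH
```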
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. EDWARD BRIGNOLE: My name's Ed. I'm a postdoc in Cathy Drennan's lab, and previously, I had worked at the Scripps Research Institute with Francisco Asturias. So some of the work that I did there is what Liz and Joanne like, and we'll talk about that. So I thought I'd start with just finding out, has anybody here done electron microscopy. You've done some EM. OK, on-- AUDIENCE: Gold nanoparticles with a [INAUDIBLE] spirit thing. EDWARD BRIGNOLE: OK. Over here? AUDIENCE: No. In St. Louis. EDWARD BRIGNOLE: In St. Louis, OK. So you sat at the microscope and worked on obs and-- AUDIENCE: That was my favorite [INAUDIBLE] EDWARD BRIGNOLE: Yeah. Anybody else? So how about a light microscope? You guys used? High school biology, maybe? OK. Everybody's used a light microscope. All right. So that's good. And then I guess at this point, you guys have had two lectures on fatty acid synthesis, so you sort of have some feel for the enzymes and who's involved and what they do. All right, so I thought we'd spend the first 20 minutes talking about electron microscopy and what it can do and how it could be used. And there is actually quite a bit that's changed since this paper in 2009. There's a lot that's happened in the last few years. And we can talk briefly about that. And then we can move into fatty acid biosynthesis and tie that into what you guys have learned already. And then there was a bonus paper at the end. If you are really excited about this, there's some polyketide synthase structures that have come out in the last year or two. And those are pretty interesting. So if you've got the handout handy, there's some questions. These are what I thought we'd focus the conversation around. The first part's about fatty acid synthase in EM, and then there's these bonus ones at the end. So when you guys were looking in the light microscope, you're probably looking at biological samples, I would guess. So you were probably looking at, say, a cell. But all the bits and pieces of the cell that perform all these interesting functions, we want to understand these. And so being able to actually see them and see them in action allows us to understand how they work. So for instance, if you pick out this piece of machinery here-- and you can see that it's got an active site where it binds substrate and maybe moves it around or acts on it in some way. And you might have allosteric subunits, and you can find that it's got four subunits that are round like wheels. And it can move from one place to the other. This is just an analogy, but same thing would go for, say, motor proteins transporting cargo, or in this case, fatty acid synthase. So if you've used a light microscope, the electron microscope is conceptually very similar. You've got the light source at the top versus an electron source. Condenser lens will focus that on the specimen. The objective lens forms the image. It's magnified by the projector lens, and what you get out is this enlarged image. So you can see things that you couldn't see by eye. Light microscope, you can get up to about 1,000x. And in an electron microscope, you can go up to 500,000x or even beyond that. 
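To put rough numbers behind that magnification gap, and to anticipate the wavelength argument developed just below, here is a short Python estimate of the relativistically corrected de Broglie wavelength of the imaging electrons. The 100-300 kV accelerating voltages are typical values assumed for illustration, not parameters taken from the paper.

```python
# Relativistically corrected de Broglie wavelength of an electron accelerated
# through a potential V, compared with visible light.
import math

h = 6.62607015e-34      # Planck constant, J*s
m0 = 9.1093837015e-31   # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength_pm(kilovolts):
    V = kilovolts * 1e3
    lam = h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c ** 2)))
    return lam * 1e12   # meters -> picometers

for kv in (100, 200, 300):
    print(f"{kv} kV electrons: ~{electron_wavelength_pm(kv):.2f} pm")

print("green light: ~500 nm, i.e. ~500,000 pm")  # about 5 orders of magnitude longer
```

Since the diffraction limit scales with wavelength, picometer-scale electron wavelengths are what make the far finer resolution discussed next even possible.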
So maybe an interesting place to start here is why electron microscopes can do this but light microscopes only that. Why can you get only this magnification with a light microscope? Guesses? If you look back up at the top, what are the sources? You're using light versus electrons. Why would you be able to get a higher magnification image using electrons than light? AUDIENCE: Diffraction limit. EDWARD BRIGNOLE: Yeah. Do you know why? Why would light have a diffraction limit that's in the micron range or nanometer range versus electrons, which are actually in the picometer range? AUDIENCE: Since your wavelength could be-- EDWARD BRIGNOLE: Exactly. That's what I was looking for. Yeah, wavelength is what I was looking for. So with either visible light or even a UV light source, you're talking about nanometer-sized waves, versus electrons, which at the typical acceleration voltages that are used have wavelengths of a few picometers. And so that's the main reason. You can magnify these images further. But you're not going to get any higher resolution versus an electron microscope. But you don't often hear about picometer resolution images by EM. So I guess maybe I'll flip back to this slide. So this is differences in the source. But you could actually theoretically go 100 times or more beyond these magnifications by EM. Why do you not typically hear about that? What else could be limiting resolution as you go down through this path here? A number of you wear glasses. Do they perfectly correct your vision? They don't for me. Yeah, so same thing with these lenses. No lens is perfect, and you've got different aberrations. So the way light or electrons that are coming in are bent is never perfect. They're not going to achieve-- and there's correctors that you can use to compensate for this. Also the wavelength of the light, having it perfectly tuned to a particular energy of either photons or electrons, there is going to be some distribution. And so you're going to have some that are a little more redshifted or blueshifted, higher energy or lower energy. And those are going to also not come to complete focus there. And so this is, for the lenses and the source, why you would typically be limited to about an angstrom resolution unless you buy some fancy correctors for your microscope to correct for spherical aberration or energy filters to correct for chromatic aberration. So what kinds of things in the cell could we look at by electron microscopy? Maybe you guys have seen images in papers, probably in your textbooks, EM images of-- what? Help me out. You see in sections about, say, muscle where there's a section of some muscle fiber where you can actually see some of the proteins that are involved, the filaments. So you've seen things like that. Tissue sections. You could look at tissue sections by EM. You could look at individual cells. Could you look at an elephant by electron microscopy? Have you ever seen that? No. So why would you not image an elephant in an electron microscope? AUDIENCE: Simply because they're too [INAUDIBLE]. EDWARD BRIGNOLE: OK, yeah. Exactly. You'd have a really hard time preparing that elephant even if you had a microscope that was big enough. But you could, say, take an x-ray of an elephant. Right? But what is it about electrons, maybe, that you wouldn't have to deal with with x-rays? AUDIENCE: Killing the elephant? EDWARD BRIGNOLE: I mean-- [LAUGHTER] Right. So why would the elephant have to be dead to image it in an electron microscope? 
AUDIENCE: When you're shooting it with electrons, even just for the cell, it'll kill the cell. [INAUDIBLE] you do it to a-- Oh, wait. Don't you have to [INAUDIBLE]? EDWARD BRIGNOLE: Yeah. So in some cases, you might negatively stain. Typically, you would stain a sample in some way or another. But what I was getting at is the vacuum. So the microscope is under high vacuum. So electrons have mass, and they're going to interact strongly with the matter that they're going through. You actually couldn't get an electron through an elephant. You could get x-rays through an elephant though. So thickness is one issue. You'd have to cut really thin sections of your elephant, so about 200 nanometers thick. You can go thicker than that, but then there are some other issues with resolution that occur. So this is about the high end of what you would want to be for a good EM specimen. And then if we're looking at, say, individual macromolecules, the size of those macromolecules, to be able to look at the image and pick them out, would have to be-- if there are single particles floating around like a virus particle or something, you can usually do that. Because they're much bigger than 100 kilodaltons. Probably in your textbook, you've seen EM images. Or even in newspapers, you pick up the New York Times, and there's an article on Zika virus or something, and there's an EM image of it. So those are much bigger than 100 kilodaltons. But this is about the lower end for individual particles. So I thought I'd just throw this up so you could look at the different bits and pieces of an ant in a light microscope. An electron microscope largely overlaps with the high end of the light microscope, where you could look at cells or sections of cells if the cell is a micron or more thick. Bacteria, bacteria-like viruses, bacteriophage in this case. Electron microscopy has resolution down to this range. But in order to visualize things like this, you'd have to be able to pick them out of your image. So you could assemble 10 kilodalton particles into, say if it's actin or something, into a large polymer. Then you can pick out the polymer, and in the process of reconstructing it, identify, say, 10 kilodalton-sized subunits. But to look through an image and pick out a 10 kilodalton piece, that would be impossible. And then x-ray crystallography and NMR are typically imaging structures of about this size down to resolutions in the angstrom range. All right. So we've sort of gone through the different kinds of things you can see. And then the one last thing I wanted to say is, there's lots of different kinds of cellular structures you can look at in an electron microscope. And I hinted at, say, if you can assemble smaller pieces into larger structures, then you can image them. So if you can coax, say, a G protein-coupled receptor that you're interested in into a two-dimensional array, then you could visualize these small very interesting proteins as part of this 2D crystalline array. Or in the case of actin, polymerized into a filament. And so there are different ways to reconstruct molecules that arrange themselves into arrays by, say, electron diffraction or in filaments because each unit is related to the unit that's before and after it in the filament. But the brand of electron microscopy I'm largely going to be talking about today is what we call single particle EM, where you've got these freestanding proteins or virus particles in solution. 
And you're going to try to pick individual ones out and figure out what their 3D structure is. And each molecule is independent and not necessarily related to the other ones that are around it. All right. So we talked a little bit about why an elephant wouldn't survive in the microscope. And that had to do with specimen preparation. So electrons, because they scatter strongly off of the matter that it's traveling through, if you have gas in your column. Then the electrons are going to scatter off of that before they get to your protein. So the really good microscopes have really high vacuums, and the specimen has to be preserved somehow to survive that. So you probably wouldn't want to just put your protein in buffer and stick it into the microscope because basically, all the buffer would evaporate, and you'd just have a dried out protein. So you need some way to either, if you're going to dehydrate it, to stain it, which is what we're going to talk about in this paper. Or you can cryogenically preserve it and then keep it at liquid nitrogen temperatures while you're imaging it. And then I think one of you guys also mentioned radiation damage, that the elephant wouldn't survive being bombarded by radiation. And so at the specimen level, these radiation damage doses are equivalent to an atomic bomb going off if you scale it up. And so basically, this is what you're doing to your specimen while you're imaging it. And so in your case, you're looking at gold nanoparticles, you had said. And so you can hit a hefty dose on a gold nanoparticle. But on biological specimen, you'd be breaking carbon-carbon bonds, and your protein is rupturing as you're imaging it. So typically in electron microscopy, we'll only expose the area that we're going to-- and for biological specimens, just as we'll focus adjacent where we're actually going to expose and then expose the area. So the first time that area sees a decent dose of electrons is when you're actually acquiring an image of it. And I guess one last thing I could point out about this 30 electrons per angstrom squared is, even by 30 electrons per angstrom squared dose on your specimen, a large amount of the high resolution signal is already lost. So the first five electrons per angstrom squared has most of the high resolution information. But it doesn't have enough information in it to actually visualize your whole structure. So you want to give it enough dose that you can see the whole thing but not so much dose that you've destroyed the whole thing. And I guess one other thing that limits what we can see in the microscope is, if you want to image something at atomic resolution and the stage that's holding the specimen is moving by a few angstroms at the same time, then it's going to be blurry. The features you're looking for are blurred out. I mentioned at the beginning that this paper was in 2009 that we're going to talk about. So in the last two to three years, there's some new detectors that have come online. And these are revolutionizing the field. So if you look at structures by single particle EM that are at less than five angstroms resolution, it went from, around the time of this paper, there were one or two to now there are tens to even 100 a year in the last-- like in 2015. So there's a whole mess of developments that are responsible for this, but the one that's the most important of these is these direct electron detectors. So did you guys have a chance to look at, say, the figures of the paper we're going to talk about today? 
Did you have a chance to look through the methods at all? Did anybody notice how the images were acquired? So this predates direct electron detectors. So what sort of detectors were used? Anybody notice? Going once. OK, so some of the images were collected on CCD cameras. And some were collected on film. So can you think of an advantage of one versus the other? Anybody here into photography? Friends who are into photography? Does anybody still shoot their images on film? Maybe some purists of image quality, something? Anyway, but why do most people use digital cameras these days? You don't have to go and develop your film, for one. Right? You probably don't even know about having to go and develop, though. So that's a distinct disadvantage, is the throughput. You can snap 100 pictures on your camera. You don't have to wait a couple of days to see the results. So the same thing would be an electron microscope. So if you're imaging your specimen on film and then you have to take the film cassette out and go into the darkroom and develop your film and then realize that there was some parameter wrong or somebody didn't change the developer recently, the whole batch would be gone. So throughput with film is low, but the signal-to-noise ratio and the point spread function of detecting the electrons where they strike the film is good. So the image quality is better with film. And also, the area that you would expose is bigger on film too. So typical CCD cameras are, say, 4K by 4K pixels. Film would be like 10K by 6K. So you'd have a much bigger area, which means more particles per image. So that would be the advantage of film. CCD cameras, I mentioned, are a little worse performing. So what they have is a scintillator layer. So it's like a phosphor layer. So the electrons would come down, and some of them hit the scintillator and bounce off. Some of them will hit the scintillator layer and go through. Some of them will hit the scintillator layer and zig around for a little bit and then give off some photons. So you can see what the disadvantages are if you're limited to, say, a 30 electrons per angstrom squared dose. If a decent number of your electrons are being lost or not detected accurately-- in this case, you have a point spread about the area where that electron struck, where you're actually picking it up. So this is going to cause a blurring of your high resolution signal. And then this is just to convert the electrons to photons. Then you would typically have some sort of fiber optic coupling, where you would also lose some signal and also has a point spread. And then this is connected to the actual detector, like what's in your phone, basically. So this is how some of the images were collected in the paper. Basically, the nice thing about the CCD camera is its high throughput. You can get lots of images really fast. But for the data that was used to generate the 3D reconstructions, that was collected on film. And then I guess I'll just say one more word about then, these direct electron detectors. Basically, they cut out all this extra business. You basically detect the electrons directly. So they come in, and actually, each pixel has the ability-- on some of these, the newest top-of-the-line detectors, can actually figure out which quadrant in the pixel the electron struck and can actually count each electron event on each pixel as it's happening. So you have some electronic noise in here, so there's little bits of noise. 
And on a typical CCD camera, you would integrate this whole signal over time to come up with-- this is down here. So you'd integrate the charge that's accumulated over time. But in these counting detectors, you could say, here's my threshold for an electron event. And you can filter out all this noise. So you can say an electron struck here, an electron struck over here, an electron struck over here. And so you've got better signal-to-noise, much tighter point spread than you would have in a CCD camera. And this is what's allowing, say, in the last-- there was a really nice structure in Science a few weeks ago of p97, which is a AAA+ ATPase with an N domain. So this sort of ties back in, I think, to maybe some of the proteasome stuff that you were doing before. P97's not a protein degradation machine, but it's a AAA+ ATPase. And that was at 2.3 angstroms resolution. And basically, what's making this possible is these developments. AUDIENCE: Can you explain what the fiber optic [INAUDIBLE]? EDWARD BRIGNOLE: Sorry, I didn't label anything up here. So this is your phosphor layer. It's like a scintillator. Phosphor scintillator. And then down here, you've got your detector. And then the fiber optics is basically coupling the photons that you see here, channeling them down to pixels in the detector. Some detectors just don't have the fiber optics. They'll have a lens of some sort here. All right. So any other questions about EM before we dive into fatty acid biosynthesis? OK. So if I put up this cartoon with lots of different two- or three-letter colored abbreviations for these enzymes, these names, I guess, look familiar to you now, probably. OK. So this is the scheme for eukaryotic cytosolic fatty acid synthesis. There are some differences in the bacterial system, and the yeast system is a little bit different also. But basically, to sort of-- I don't know-- to help me remember what all these enzymes do, I like to group them into the enzymes that are responsible for chain elongation and the enzymes that are responsible for chain processing. So basically, the malonyl acetyl transferase-- so in our fatty acid synthase, we have this bifunctional enzyme that can transfer either malonate from malonyl-CoA or acetate onto the carrier protein. In different systems, so in bacteria, they've got a malonyltransferase, and then they have a specialized ketoacyl synthase that picks up the starter unit. So there's some differences like that. But basically, these are the enzymes that are responsible for collecting the starter unit and the elongating unit and joining them together. And then you've got these three enzymes here, the ketoacyl reductase, the dehydratase, and the enoyl reductase that are responsible for processing this beta carbon. So you've got the hydroxyl, the alkene, and then the saturated chain. And then it goes around again. All right. So I mentioned that the different organisms have different systems. So our mitochondria, plant chloroplasts, and most bacteria have a system like this, where the individual enzymes are the dissociated players. In fungi, some of these enzymes are joined into one of two different polypeptides. And some of the names here might look unfamiliar. So like this malonyl palmitoyltransferase. So in this case, it's got an acetyltransferase to select the starter unit and a malonyltransferase to select the elongating unit. And then the palmitoyltransferase, which transfers the product back onto CoA. 
And this is a bifunctional in this case. And then in our cytosol, we've got this giant monster enzyme that's got all of the catalytic domains fused into one humongous polypeptide. This is what attracted me to this project in the first place, just how bizarre it is to have all of these enzymes all tied together. And then we had this one section of the protein. It has homology to methyltransferases, and we called it the structural domain in the paper. So there is a bonus question in the handout, which is, where did this domain come from? Why do we have this non-functional methyltransferase domain in our fatty acid synthase? So think about it. If we have time, we'll come back to it at the end. Then the other cool thing about this enzyme is it has to dimerize to be active. And so you end up with a 550 kilodalton monster protein. So I mentioned that the enzyme's responsible for elongation and for processing. And the cool thing is when you look at the sequence of the protein, you've got the elongation enzymes clustered at the N-terminus, processing enzymes clustered together in the middle, the carrier protein's way out here at the end, and the thioesterase is there. So there was some decades of controversy about how the acyl carrier protein, which is way out here, would be interacting with the enzymes, which are way over here at this end. And so a model was proposed where the enzymes sort of come together in a head-to-tail fashion. So you would have one going this way and the other one going the other way. But then, that didn't jive with some of the biochemical results, which said that this acyl carrier protein could interact with the enzymes on its own chain. And so there was this controversy in the field, which was resolved in part by this crystal structure. So now we could see how the two subunits associate with each other. So one of the chains is just colored in white. The other one's got the catalytic domains all colored in. And so you can see the cool thing here is the elongation enzymes are all clustered together down here like in the legs. And up here, in the torso and arms, you've got the processing enzymes. And the other cool thing about this structure is if we cartoon in-- we know the acyl carrier protein has to be tethered to the C-terminus of the ketoacyl reductase by a 10 amino acid linker. So that puts the acyl carrier protein right here, and it would be completely surrounded by all of the catalytic domains that it would need to contact. So that's kind of cool. Oh, yeah and then the thioesterase has a 25 residue linker. And so it would be somewhere around here. And so basically, all of these enzymes are just sitting there in a chamber, and the acyl carrier protein just has to bounce around to the different things. So if I make a cartoon version of the acyl carrier protein with its phosphopantetheine arm docked into each of the catalytic sites, you can see where the acyl carrier protein would have to go on this reaction chamber. And then the same thing would have to happen on the other side. So now I've got a question for you. What happens if we make a mutant heterodimer. So this is actually an experiment that was done, but if we make a mutant heterodimer where we knock out the ACP on this subunit but leave this other subunit intact. So if the wild type has 100% activity, how much activity would this mutant have? Any guesses? What would you think? It's firing on one of its two cylinders. AUDIENCE: 50%. EDWARD BRIGNOLE: 50%, exactly. So that's all good. That makes sense. 
What if we do another experiment where we knock out the elongation enzymes in the other reaction chamber? Now what do you think? AUDIENCE: Expect it wouldn't be active. EDWARD BRIGNOLE: You'd expect it wouldn't be active at all. But the experimental results show that it had about 25% activity. So the only way that could happen is if this acyl carrier protein can elongate with the enzymes from the opposite chain. Right? So that looks like a pretty long reach, but let's figure out how far that would be. So a 10 residue linker to the acyl carrier protein would be about 35 angstroms. The acyl carrier protein itself is about 23. Then you have the phosphopantetheine arm, which would be these black things coming off of our acyl carrier protein. So if we draw that to scale from the C-terminus of the keto reductase to the end of the red sphere, it would be about 60 angstroms. So if we draw how big that would be, that's this gray sphere right here. So this is how far the acyl carrier protein can reach, and you can see these are clearly out of range. And actually, even the elongation enzymes on its own side are also sort of at the limit of what the acyl carrier protein can reach. So it's hard to imagine what would happen, but you would need to have some sort of conformational change to make these things happen. And I'll point out one other difficulty, which is access to the enoyl reductase and the dehydratase-- they're sandwiched in the space in between there. And so you'd need to have some other separation of these domains, possibly, to get the acyl carrier protein in there. So we wanted to look at this by EM. Do you know why that would seem like a good idea based on what we talked about with EM so far? Does fatty acid synthase seem like it would be a good target for EM? I mean, there was already a crystal structure of it. So should we have tried crystallography, say, to answer questions about conformational changes? AUDIENCE: You might not be able to crystallize it in the conformation you wanted? EDWARD BRIGNOLE: Yeah, as it was, that was a difficult molecule to crystallize. There were crystals of it from back in the maybe '70s, '80s, but it wasn't until the mid-2000s that they had actually gotten-- they solved it initially at, I think, six or seven angstroms. And then this structure was, I think, also not the highest resolution, somewhere in the three to four angstrom range. So yeah, you would have to find ways to trap the conformations that you want, lock it in, and cross your fingers to get crystals. Why look at it by electron microscopy? Is it big enough? It's 550 kilodaltons, so you can see individual molecules. Possibly we could even see them in different states, and we might even be able to perturb those states if we threw in some substrates. Then we had a whole panel of mutants that our collaborator had made. The experiments that I had described about knocking out the ACP in one chain versus the elongation enzymes in the other, there was a whole battery of mutants that we had available to us. All right. So to do the electron microscopy, we need to put our protein on something that we can stick into the microscope. And typically, that's a metal mesh grid that's supporting a thin carbon film. And then we stick the protein onto the thin carbon film. So this is about three millimeters across, this little grid. You can put about five microliters on it. And then to get a good dispersion of particles on the grid, you need about 15 nanograms per microliter. 
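To get a feel for what those grid-loading numbers mean for a 550 kilodalton particle like fatty acid synthase, here is a back-of-the-envelope Python calculation. The 5 microliter drop, roughly 15 nanograms per microliter, and roughly 550 kilodaltons are the values quoted above; the molarity they work out to is also why the Kd caveat that comes up in a moment matters.

```python
# Back-of-the-envelope numbers for loading an EM grid, using the values
# quoted above (5 uL drop, ~15 ng/uL, ~550 kDa dimer).
AVOGADRO = 6.022e23

drop_uL = 5.0
conc_ng_per_uL = 15.0
mw_Da = 550_000.0                              # grams per mole

total_ng = drop_uL * conc_ng_per_uL            # 75 ng of protein in the drop
particles = (total_ng * 1e-9 / mw_Da) * AVOGADRO

grams_per_L = conc_ng_per_uL * 1e-9 / 1e-6     # ng/uL -> g/L (0.015 g/L)
nanomolar = grams_per_L / mw_Da * 1e9

print(f"particles in the drop: ~{particles:.1e}")   # roughly 8e10 molecules
print(f"concentration: ~{nanomolar:.0f} nM")        # roughly 27 nM
```

So a complex that dissociates with a Kd much above a few tens of nanomolar would start to fall apart at working concentration, which is the limitation mentioned just below.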
If you go too much above that, you get protein everywhere, and you can't pick one particle from another. And if you go much below that, then you have to collect lots and lots of images to get a few particles. So this is sort of the sweet spot, in the 15 to 20 nanogram per microliter range. This is one limitation for EM, the concentration dependence. So if you have a molecule that falls apart, it has a high Kd and it falls apart at these concentrations, that could be difficult to work with, for instance. All right. So there's a couple of different ways to prepare specimens. I think we already talked about staining the specimens or cryogenically preserving them. So the way that would look like for a stain experiment is, you've got your thin carbon film, you put your drop with your protein molecules on it, you blot off the excess solution, replace it with a heavy metal salt solution-- typically a uranium salt-- and then you let it air dry. And the specimen is then embedded in this heavy metal. And that's why we call it negative stain because what we're imaging is, you've got your protein molecule, and it's embedded in this dense stain layer. What's scattering the electrons most strongly is the material around your specimen. And so you're imaging where your protein isn't, basically. Or the stain excluded area is what you're imaging. So what you have, in this case, is a dark background, and your particles look light. The other way to prepare specimens is to cryogenically preserve them. So the first part starts out the same. You would put your proteins on the grid. And sometimes, you could have a grid that's got little perforations in the carbon, so you actually would have your protein suspended in these perforations when you blot it. And then you plunge it into liquid ethane that's cooled to just about to liquid nitrogen temperatures. Here is a picture of the dewar with the liquid nitrogen. And then there's a little cup in the middle with the liquid ethane. So why not just plunge it directly into the liquid nitrogen? Does anybody know? Does anybody do rapid freeze quench for any of your experiments or anything like that? So have you ever messed around with liquid nitrogen that any splashed onto you? Did you get burnt? No. So the reason is liquid nitrogen has a lower heat capacity, so if it touches you, basically, there's a layer of gas between the liquid and your hand or whatever it spilled on. But with liquid ethane, the heat transfer-- basically, this grid will go in there, and it'll freeze so fast that ice doesn't have a chance to form crystalline ice. So basically, everything is, on a microsecond scale, frozen. So now you've got this amorphous ice with your protein embedded in it. Can you think of some advantages or disadvantages? If this gives you something preserved and it's happy in its buffer, why wouldn't you always use that? Why would you use stain? Can you think of some advantages? Maybe the obvious thing, why don't I ask you for some disadvantages. Why would you not want to use stain? AUDIENCE: The stain used could possibly disrupt your specimen. EDWARD BRIGNOLE: Yeah, and that does happen. Sometimes people have to play around with different stains. The uranium salt stains, the uranyl acetate, for instance, is a low pH. And if you try to pH it, it crashes out. So if your protein isn't happy in that low pH stain, that could be a problem. Also the stain layer is dried, and so your specimen is dehydrated and dried out here. And typically, that-- I drew my specimen like this. 
But when it dries out, it flattens out like this. So that's a disadvantage. What about contrast? So if this is my amorphous ice, my water layer with my protein, do you know what the difference in the density of protein versus an aqueous buffer is? They're pretty closely matched, actually. Protein's like 1.2 or something like that. So basically, you have pretty weak contrast in a frozen hydrated specimen because here, you're looking at the difference in density of your protein versus the buffer around it whereas here, you're imaging the difference between the density of, say, your protein and uranium. So you get a lot better signal here. But you've got some specimen distortions. And so we basically just went through these. There's one other advantage I'll mention to sticking your protein onto a carbon surface as opposed to freezing your protein in a hole. And that is that most proteins tend to have a preferred orientation. Many do, and in the case of fatty acid synthase, it's sort of this. It looks like a headless person that's got arms and legs. It'll very rarely hit the grid and stand straight up. It usually falls back onto its back. And so in some circumstances, that could be an advantage, and other circumstances you would actually want to have many different views to make a 3D structure. So I listed that both as an advantage and a disadvantage, the preferred orientation. Depends. You could use it to your advantage. In other cases it would be a disadvantage. All right. So you said you used an FEI microscope. It might have looked like this one. AUDIENCE: Yeah. [INTERPOSING VOICES] EDWARD BRIGNOLE: OK. Yeah, this is an F20. This is the microscope that all the images in the paper were collected on. So there is a specimen port on the side. The electron source is up here at the top. The column with the lenses and apertures in it is here. There's a phosphorus screen here that you can look at through the binoculars to see what's going on. There's the knobs that you can use to control the microscope, focus, move the stage around. And then the camera is right here below the column, right where you can knock your knees into it when you look in here. Yeah, I mean, you put a half a million dollar detector on there. And you can knock your knees into it. Actually, the newer microscopes these days actually look more like giant refrigerators. And basically, all of this is housed in this environmental chamber, and you operate the microscope from the room next door. So we put the grid in the microscope. At low mag, you can get an image like this. Little higher, just zooming in on one of these squares here, you can get an image like this. This is negative stain specimen so there's little chunks of stain around. If you ever happened to do some negative stain experiments, I usually like to look for areas that have this smudgy appearance. It looks like little pencil lead shavings that somebody wiped their hand across. That's usually a good sign. And then if you zoom in another tenfold, you can get an image like this. And if you look carefully at it, there's all the individual 550 kilodalton fatty acid synthase molecules. So how do you get any information out of that? Any ideas? You can pick out the individual molecules here. If you squint at it, can you maybe make out the legs and arms, the processing portion, and the elongation portion? Maybe? OK, it's tough. Does anybody here do spectroscopy? AUDIENCE: No. EDWARD BRIGNOLE: No. 
So based on what you know, electron microscope images can have potentially high resolution information in them. But you're limited in dose you can apply to the specimen before radiation damage becomes a problem. So what we have is a signal-to-noise problem here. You've got high resolution signal buried in lots of noise. It's like having a low exposure image of something. What could you do to boost your signal? If you're going to take a picture at night, what would you do? You need a really, really long exposure. Right? But you can't take a really, really long exposure. So what would be a different way to do it? Say like, in the case of spectroscopy, if you had a sample that's damaged every time you stuck the cuvette in the area, but you could say, take a cuvette and take a spectra, take another one, take a spectra, take another one, take a spectra, and you can average lots of them together, that would boost your signal-to-noise. So that's what we have to do here. We have to extract all these particles out and find a way to average them together. So if we put soccer players on a EM grid-- if any of you are soccer fans-- and you collect an image of them. You get this noisy image like this. In the computer, you can go through and pick the particles out. And the computer can do its best to line them up for you. And if it does a good job, and you get lots and lots of particles, when you average them together, you get your high resolution signal out. So that's all fine and good, but not every protein is going to land in exactly the same orientation. And in the case of soccer players, you probably would have a hard time finding soccer players that are in exactly the same conformation every time you image them. So in this case, a soccer player might prefer to kick with his right foot or left foot or might have his right arm or left arm up or down. These are just a couple of different conformations maybe that you would observe. So now what do you do? You've got these averaged together, and you're like, I got an insect. Does anybody here-- have you looked at, say-- you could do this by spectroscopy. But it sounds like nobody here does spectroscopy. So you've got different sorts of things that you want to categorize, basically. So say, sequence alignments, that would be analogous to this, where you've got sequences that you've lined up. Here we've got images that we've lined up. And then what would you do? You'd look through columns of residues or the computer would do this for you and say, this cluster of sequences all have these particular residues. So I'm going to put them into one bin. And these have a different sequence, and I'm going to put those into a different bin. The computer can do the same thing in this cage, basically. It'll look at these images and say, some of them have a density here, and some of them have a density there and split them up based on differences in the intensities of these pixels. If this is a dataset of 100 images, then you split it. Now you've got 50 and 50. You might have 25, 25, 25, 25 if everything's evenly distributed. And the one thing you'll notice as you split things down further, you're averaging fewer and fewer particles together. And so your signal-to-noise is getting worse and worse. So you can split these down. And the way I typically do this is a little bit empirical, but I'll split it and then split it some more and split it some more and look till I get to a point where I'm not seeing anything new. 
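To make the averaging argument above concrete, here is a minimal numpy sketch (an editorial illustration, not code from the lecture or the paper): it simulates many noisy, pre-aligned copies of the same toy "particle" and shows that the signal-to-noise ratio of their average grows roughly as the square root of the number of images.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "particle": a bright disk on a 64x64 field, standing in for one projection view.
size = 64
y, x = np.mgrid[:size, :size]
signal = ((x - size / 2) ** 2 + (y - size / 2) ** 2 < 15 ** 2).astype(float)

def noisy_copies(n, noise_sigma=5.0):
    """Simulate n pre-aligned images of the same particle buried in Gaussian noise."""
    return signal + rng.normal(0.0, noise_sigma, size=(n, size, size))

def snr(image):
    """Crude SNR: mean intensity inside the disk over the noise spread outside it."""
    return image[signal > 0].mean() / image[signal == 0].std()

for n in (1, 10, 100, 1000):
    average = noisy_copies(n).mean(axis=0)
    print(f"{n:5d} images averaged -> SNR ~ {snr(average):.2f}")
# The SNR of the average improves roughly as sqrt(n), which is why thousands of
# picked particles are aligned and averaged before any detail becomes visible.
```

The same kind of pixel-by-pixel intensity comparison is what the classification step exploits: images whose residual densities differ systematically get split into separate bins before averaging.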
Because if you split this image more, it's basically going to be split on the basis of noise because there's no other conformational change. The same thing would go for orientation. If you put this on the grid and it landed in three different orientations, you would want to separate things out using the same strategy. All right, so this gets us to the averages, like the averages that you see in the fatty acid synthase paper. There are also 3D structures in the paper. So how do you go from information like this to a 3D structure? So I mentioned one way is if you've got lots of different orientations of the molecule on the grid and you've got each one of these averages is a different view, you can use the computer to try to put those different views together to come up with a 3D structure. And there's lots of ways to get that wrong. In this case, we've got a preferred orientation where they're all lying on their backs. We don't have lots of different views. So what would you do instead? Yeah? AUDIENCE: Get it from the sides? EDWARD BRIGNOLE: Yeah, exactly. So that's the thing to do. So now you've got this stereo view of your molecule where you've got a tilted view and an untilted view. An extreme example of this would be if every particle-- let's say if you're looking at cells, no two cells are the same. You couldn't do these averaging methods, but to get a 3D reconstruction of a cell, what you would have to do is take the stage and tilt it by a degree, tilt it by a degree, tilt it by a degree, go up as far as you can one way, and then do it again the other way. So you'd have up to maybe plus or minus 70 degrees. And that would be equivalent to the way a CAT scan or something might be done, where you've got images of your broken leg or something like that from all the way around. And then you can have the 3D reconstruction of it. Right. So we've got these two images of our specimen. They're related to each other by some tilt that you know. If you take out these particles from this image and you line them up, that tells you what view you've got of them in this. So take, for instance, this molecule here. If you have to rotate this 90 degrees clockwise, that tells you you're looking at him in the tilted view with his feet up in the air. And this one here had to go 90 degrees counterclockwise. That means that you're looking down on him in this tilted view with his head up. And so you can take the alignment information from this image and apply that as a projection parameter for these images. And so you could take now these tilted views of these soccer players, and you know which view they are. And you can come up with the 3D reconstruction that way. So this is actually a fairly old method. I think it's still widely used and very elegant because it's basically just two images of the same thing. And then you can get a reconstruction that's pretty easy to get out. There is one disadvantage to this approach, one major disadvantage, which is that you can only tilt the stage so far. So if you could tilt the stage up to 90 degrees, then you would have views exactly all the way around. And you could have a reconstruction that's fully complete. In this case, you can only tilt to 70 degrees, and so you've got a missing cone of information in the reconstruction. And so basically what that means is you've got better resolution in x and y than you do in z. Yeah, sure? 
AUDIENCE: Is there some graphene packet thing that came out of the [INAUDIBLE] lab where it's this packet filled with solution that you shoot your EM at the-- EDWARD BRIGNOLE: Yeah, to keep your protein hydrated, basically. So you encapsulate it in some sort of graphene tube or something. AUDIENCE: [INAUDIBLE] exactly [INAUDIBLE] EDWARD BRIGNOLE: I vaguely remember seeing something like that. I think there are groups working on things like that, but it's not widely adopted or used yet. But yeah, there are some pretty exciting things like that that might allow you to directly image your molecule while it's tumbling in solution, isolated from the vacuum on the microscope. AUDIENCE: But then there's some limit with what your computer can reconstruct. I mean, it's just infinitely many tumbling orientations or something. EDWARD BRIGNOLE: Yeah, it's a tough experiment to do. The other issue is compressing all your dose into a pretty short amount of time so that you basically obliterate the molecule in this field of view but you capture the image of it faster than it's tumbling, say, or something like that. So I don't know. Maybe it depends on its tumbling rate. Yeah, something like that. But yeah, I think it's an exciting time for EM right now because now there's new detectors. There's actually some other examples of advances that I put on that slide that allow us to look at smaller things, potentially specimens that are still hydrated. Yeah. I don't know if my email address is on this, but if you come across that paper or see anything like that, feel free to bounce it to me. All right. So through the methods that I just described to you, basically from the fatty acid synthase image that we looked at a moment ago, we can sort out some images. And if you look at this image, it looks a lot like that crystal structure I showed you earlier. There's the legs you can clearly see in the average and the processing enzymes in the upper proportion. Then we could sort out a whole bunch of other different classes. And these puzzled us at first. They're kind of fun to look at because we thought maybe it's winking at you. It's got one eye open and one eye closed or like the other one. And we described these as different views. This looked like it had a pirate's hat on or something. But one of the other things that puzzled us at first too is, this lower portion of the structure here, it looks like maybe we were getting some sort of proteolysis and these malonyl acetyl transferases at the legs, we thought maybe they were getting cut off. So we were relieved when we generated the 3D reconstructions then, that these actually weren't getting cleaved off. They're just rotated 90 degrees on the grid and coming out towards us. I'll say a quick word now about-- have you read any crystallography papers yet for class when they talk about resolution of the structure, what kinds of things you can see in the structures? So in crystallography, you have a defined resolution limit based on the highest angle of scattering data you collect. So it's defined in the experiment. In EM, we don't have that. We just have images. So the way we calculate resolution in EM is we would take our dataset-- so the data that went into this reconstruction here if there is 1,000 particles-- we'd split it into two subdatasets, one with randomly selected 500 particles and the other with another randomly selected 500. And we'd generate a reconstruction from both, from each of those. 
And we'd compare those, the reconstruction of this half of the data to the reconstruction from this half of the data and see how similar they are to each other. And that's how we would figure out resolution by EM. So there's some problems with that. Can anybody think of one way this would be biased or any way that this would be biased if you just take your data, split it into two halves, reconstruct it, compare the two? For one, it's sort of like if you have one person do the experiment and they do it again, but nobody else can do it. There is a bias in-- you're taking the exact same approach to initialize both of these experiments. They're both going to converge to the same local minima. So you could be precisely wrong and have a false high resolution. So typically, what you'll see in EM papers is a curve like this, where you basically are comparing the two half reconstructions to each other, one from first half of the dataset, the other from the other half of the dataset. You compare them, and then you look at, say, at low resolution, how similar are they. Add a little bit higher resolution, how similar are they? If you go to, in this case, 20 angstrom resolution, how similar are they? And in this case, they've got a correlation of about 10% or 15%. So in the case of this paper, the way we reported the resolution was when the correlation between the two halves of the data fell to about 50%. So we reported a resolution of about 30 angstroms for these structures, but like I said, that doesn't necessarily mean they're right. And in our case, the advantage we have was that there was a crystal structure. So if we just dropped the crystal structure right into the EM reconstruction, that looks like a pretty good match. And one thing that we didn't do for this paper, but people sometimes do is, instead of comparing half of our data to the other half of our data, we could have compared our data versus the crystal structure and come up with a similar curve to compare our data to this high resolution data and see where things fall off. So in these reconstructions, there's lots of different conformations. You can see the lower portion swinging back and forth, the upper portion twisting relative to the lower portion. And then, you see this other conformational change. So we take the arms off of the end and look at what's happening in the middle here. The enoyl reductase and dehydratase are rotating like this relative to each other. So one side opens up while the other side closes. They sort of cross over each other like that. And so when one side rotates, they sort of rotate, but then one side tightens up while the other side comes loose. So we wanted to know how these related to catalysis. So I guess if you've looked at the paper-- I'm going to try to speed up here a little bit-- what we did was we looked at some mutants in the presence and absence of substrates to look at how this-- we've got all these different conformations-- how the frequency that we see different conformations changes. And I'll sort of cut to the chase. I think in the paper, there is histograms, but I think the pie charts are a little more telling. But basically, what happens is if you add substrates, the conformation that becomes most prevalent is the one that's represented in blue here. So if I go back, you've got these, basically, four different categories where the lower portion is perpendicular or parallel and then whether the upper portion has this asymmetric appearance or not. 
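Backing up for a moment to the resolution estimate: the half-versus-half comparison described above is usually computed as a Fourier shell correlation, and the quoted ~30 angstroms corresponds to where that curve crosses 0.5. Here is a minimal numpy/scipy sketch with made-up half-maps (an editorial illustration, not the code used in the paper).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_shell_correlation(map1, map2, n_shells=16):
    """Correlate two half-maps shell by shell in spatial frequency."""
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in map1.shape], indexing="ij")
    radius = np.sqrt(sum(f ** 2 for f in freqs))   # cycles per voxel
    edges = np.linspace(0, radius.max(), n_shells + 1)
    shell_freq, fsc = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (radius >= lo) & (radius < hi)
        a, b = f1[shell], f2[shell]
        denom = np.sqrt((np.abs(a) ** 2).sum() * (np.abs(b) ** 2).sum())
        fsc.append((a * np.conj(b)).sum().real / denom if denom else 0.0)
        shell_freq.append(0.5 * (lo + hi))
    return np.array(shell_freq), np.array(fsc)

# Toy half-maps: the same smooth 3D "structure" plus independent noise in each half.
rng = np.random.default_rng(1)
truth = gaussian_filter(rng.normal(size=(32, 32, 32)), sigma=2)
truth /= truth.std()
half1 = truth + rng.normal(scale=0.5, size=truth.shape)
half2 = truth + rng.normal(scale=0.5, size=truth.shape)

freq, fsc = fourier_shell_correlation(half1, half2)
# Read off where the correlation between the independent halves drops below 0.5,
# the criterion behind the ~30 angstrom number quoted for these reconstructions.
below = np.where(fsc < 0.5)[0]
if below.size:
    print(f"FSC drops below 0.5 near {freq[below[0]]:.3f} cycles/voxel")
else:
    print("FSC stays above 0.5 out to the Nyquist limit of this toy example")
```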
And the one thing that jumps out right away is that when you add substrates, you get lots of asymmetry in the upper portion, with the lower portion parallel to the upper portion. All right. So why would that be? So if we look at one of these conformations where the lower portion's parallel to the upper portion, the upper portion has this asymmetric appearance. What might this reaction chamber be good at doing? So these enzymes in the lower portion, the ketoacyl synthase and the malonyltransferase, they've come up close to where the acyl carrier protein would be. And at the same time, this side is closed off, so it would have a harder time doing processing on this side. So at the same time over here, this side's opened up. The acyl carrier protein can easily get in here to do the processing, but these enzymes over here are out of reach of the elongation enzymes. So what this means is one side could be elongating while the other side's processing. And then the structures are symmetric. So if you flip it around, then this side, once it's done elongating, could process. And then this side could elongate. So it's kind of cool because it can sort of balance out what it's doing from one side to the next. And then we have these conformations where the lower portion is perpendicular. And remember, at the beginning I had said that we know that this acyl carrier protein can elongate with the enzymes in the opposite portion. So because of the symmetry in the system and also the resolution of the structures, we can't tell the difference between whether the lower portion is flipped 180 degrees relative to the top or not because it would look the same. But the fact that we can see these go 90 degrees is suggestive that it could probably unravel and go the rest of the way around. So in the crystal structure, the way it had them is, they were coiled like this. And so it's pretty easy to imagine that they would just uncoil. One other line of evidence that I think is telling-- so our collaborators made a mutant that has all of the active sites and the acyl carrier protein knocked out of one subunit. So there is one subunit totally wild type and the other subunit that's totally dead. So the interesting thing about this mutant is it has to do the condensation reaction, the elongation, in this conformation, sort of crossed over. But to pick up its starter unit or elongating unit, it has to coil back around for this. And we know based on the rate of this enzyme, that this probably happens about 100 times per minute. There's a functional catalytic event that happens 100 times a minute. So it probably is sampling these much more rapidly. Sure. AUDIENCE: So this isn't compensating for when you knock out one half that's naturally [INAUDIBLE] EDWARD BRIGNOLE: Yeah, that's what we think. It's sort of naturally sampling both sides. It's interesting to think about because let's say this side picks up acetate and-- well, let me think about this for a second. But if one side is ready to elongate and the other one's got a starter unit, and this side over here is already loaded with-- so basically, this will pick up an acetyl group and transfer it to the ketoacyl synthase. And then it comes back, and then it picks up an acetyl group again. So then it would be stuck because it would be trying to extend an acetyl with an acetyl. But it needs a malonyl. So what it could do is flip around, sample the other side, which might have a malonyl group, to continue on that side and then come back around.
And it could transfer the acetyl group on. So it'd allow maybe one way that it doesn't have to necessarily go backwards, though the malonyl acetyl transferase can function in reverse. So that's another thing, is it won't get stuck if everything's all loaded up with acetate because the malonyltransferase will run backwards to cut the acetates off. But this would be one way to maybe not. It wouldn't have to necessarily rely on going backwards to get unstuck. It could just twist around. All right. So I think we'll just finish up with a quick movie that shows these different conformations. So the bottom parts picks up a substrate, elongates it, goes up here to do the processing. Meanwhile down here, it's elongating. You can see how up here in the upper portion, the separation of where the dehydratase and enoyl reductase is so the acyl carrier protein can fit in. Here we go. So then there was that bonus question at the beginning. I know you guys probably have to run. So the bonus question at the beginning is, what is that structural domain that's-- the methyltransferase, where did that come from. Any ideas? You guys have talked a little bit about polyketide synthases yet? A lot of them have domain architectures identical to our fatty acid synthase with functional methyltransferases. So it seems like we probably picked up our fatty acid synthase from something like a lovastatin synthase or something like that. So it's interesting to think about. And then we didn't need it, so it's now not functional. All right. Cool. Thanks guys.
MIT_508J_Biological_Chemistry_II_Spring_2016
35_Nucleotide_Metabolism_2.txt
NARRATOR: The following content is provided under a Creative Commons license. Your support will help MIT Open Courseware continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: So what I want to do is finish up purines today and talk about some interesting aspects of purine metabolism. I hope I'm going to be able to get through. I've given you handouts for pyrimidines and deoxynucleotide biosynthesis as well. The pyrimidines are pretty straightforward, much simpler than the purines. And so I think if I have time today, depending on when I get finished, I might talk a little bit about deoxynucleotide metabolism, since both Drennan's lab and my lab work in that area. So it would be good for you guys to know what's going on in the department, and it's central to nucleotide metabolism. We started out-- we were drawing this. These are my notes that I tried to reproduce for you to look at. And I'm not going to read them. So this was a big overview slide of where we're going. And so central to everything. This wasn't in the original packet, but I will put this up. I'll try to get a better version of that. But PRPP is central. And we are talking about de novo purine biosynthesis, but again, not only is de novo important, so is salvage. It depends on the cell type. You know, if you have cancer cells that are rapidly growing or B cells and T cells, de novo becomes really important. In other types of cells almost everything is salvage. And so I have that PRPP, at least in the purine case-- and I'll show you an example of that in a few minutes-- goes directly-- can make your nucleotide directly. That's a salvage pathway. And we'll see that the de novo pathway, which is what I was describing at the end-- and you've already seen this in recitation from last week-- is 10 steps to get to IMP. But then you need to get to GMP and AMP. And I showed you how all of this branches off with the cofactor folate between purines and pyrimidines. And in the end, we need both purines and pyrimidines. We need them at the nucleotide level. And the ribonucleotides, with their two prime, three prime cis hydroxyls, are reduced to the deoxynucleotides at the diphosphate stage, which is also unusual. Most of the time you don't see high levels of diphosphates inside the cell; either they're monophosphates or triphosphates. So part of the complexity, I think, of nucleotide metabolism is figuring out where the kinases are and the phosphatases are. And you'll notice that I've avoided that. And that's because every organism is different and every cell type is different, and the regulation is a little bit different. But I think it's important to realize that making deoxynucleotides, which are required for DNA replication and repair, is done at the diphosphate level. So you make deoxynucleotides, but they still have to be converted to deoxy NTPs for DNA. And over here you need to, again, make NTPs for RNA. So that's sort of the big picture. We have a purine pathway de novo. We're not going to be able to talk about pyrimidines, but the salvage pathway with pyrimidines is extremely important. It's a major target of cancer therapeutics now. I think only in the last few years has it been realized that in many cancers you have both pathways going on. And it turns out now, with isotopic labeling and mass spec, metabolomics is coming into its own. So you can tell actually by feeding the cells-- this is all done in tissue culture.
But you can tell by feeding the cells whether the deoxynucleotides came from de novo or whether they came from salvage. And so we're getting a really different view of nucleotide metabolism. And as I said in the very beginning, I think in the next decade we're going to understand a lot more about how all these things interact and about the kinases and phosphatases that put the nucleotides into the correct phosphorylation state. So that's key to everything, and it's complicated. So what I want to do now is briefly talk about the purine pathway. So we can look at the biology. I'm not going to write this down, because most of you already know this. So we'll just go through it again. Purine nucleotides are central to everything. So knowing where they come from and how you control them is really pretty important. And we don't understand that much. So I mean, NTPs and dNTPs are central to our genetic material. So we need to get them and we need to control them. If these levels become imbalanced, you have mutator phenotypes in DNA replication. And so fidelity of DNA replication is really important and is regulated by ribonucleotide reductases. Building blocks for cofactors. We've seen flavins. We've seen NAD. We've seen CoA. None of this is an accident. Adenine can self-assemble from cyanide and formate in the prebiotic world. And so that's why they're central to everything. So they're in a lot of the cofactors we've already talked about-- they're not necessarily the business end, but they've got the phosphates and the adenines stuck on the end, which presumably helps in some way for binding. We're using GTP and ATP everywhere in the course of this semester. You've seen it in your macromolecular machines that you've talked about, especially in the first part of the course, where translation and protein folding and protein degradation all require energy, ATP. We will see in today's pathway and today's lecture on purine biosynthesis de novo, it turns out 5 out of the 10 enzymes use ATP. So we'll see. And what you hopefully now will know is what ATP does. I'm going to show you two examples. But you see the same thing really over and over and over again. So this should be sort of-- you might not know whether it uses ATP to get at the gamma position-- chemistry at the gamma position, or at the alpha position-- but the chemistry is the same over and over again. And so that part hopefully is part of your repertoire now, about thinking about the role of the ATP in primary metabolic pathways. And we've also seen in the last-- in the reactive oxygen species section, we have signaling by many mechanisms, and signaling by phosphorylation is all over the place. And a lot of people are trying to understand that. And I think one of the futures is, how do you integrate signaling and primary metabolic pathways? And we're almost there. I decided-- I wrote a lecture on this and decided-- it's really still very phenomenological. But all of these key regulators and signaling pathways are linked to purines and pyrimidines in some way. I think the linkages aren't totally clear, in my opinion. So why else do we care about purines? When I was your age, purine and nucleotide metabolism was front and center. Why? Because people were successful at making drugs based on these molecules. The central role they play in replication and repair has made them successful targets at many different levels. Here, this has both purines and pyrimidines, but I'll just pick out a few. This guy, acyclovir, is what we use as an anti-herpes medicine.
In fact, I think I've taken it. Here, mercaptopurine cures childhood leukemia. Clofarabine is something that's been studied in my lab. It's a drug that-- it's not particularly effective, but it's used clinically against certain hematological cancers. And so these are all anti-metabolites, not focused on signaling, which is what everybody is focused on. In reality, I think the success-- if there is success against cancer-- is going to be mixing the two. I think you need combinations of metabolic inhibitors. They're toxic. So is everything. But somehow figuring out how to use multiple approaches to avoid the resistance problem, which is a really important problem, and to combine the two once we understand the interconnections better, I think is how this will get more successful. Therapeutics, but ultimately what we would like to do is catch it in the bud, rather than waiting to try to treat something where it's completely out of control. So I'll just show you one of my favorite ladies, Gertrude Elion. She worked at Burroughs Wellcome for many, many years. She went to Hunter College-- in New York City-- as did many outstanding women scientists. And she was involved at Burroughs Wellcome in the discovery of mercaptopurines, acyclovir for treating herpes, and AZT. She made several contributions. Never had a PhD. So anyhow-- So what I also wanted to show you, we're going to talk about de novo pathways. I just want to show you this is a typical-- in the case of the purines-- salvage pathway. So what does that mean? You get the bases, the nucleic acid bases, from your diet. Or you're breaking down your DNA and your RNA. You have nucleic acid bases. Or you have nucleosides. So can you take those and make them into the right components to do RNA biosynthesis and DNA replication, make ATP, et cetera? And so here's an example of hypoxanthine reacting with our central phosphoribosyl pyrophosphate, which I had in the original slide that I talked about last time, to make, in this case, the nucleotide. And why is this interesting? It's interesting because it turns out that many parasites, like the malaria parasite, can't make any purines. So where do they get their purines from to replicate the DNA? They have to use salvage. So the salvage pathways-- for treatment of those things-- have become front and center. Can you make specific inhibitors of the phosphoribosyl pyrophosphate reaction with the bases? And we're pretty good at that actually. Vern Schramm's lab has done some beautiful work. And there's a lot of things in clinical trial targeting salvage pathways. So again, there's something different about the metabolism of us and whatever is invading us. That's not true in cancer. So cancer is a much tougher problem, because you get normal cells as well. It's a question of what the therapeutic index is. So that's all I want to say in the introduction to the biology. And then I want to talk about one cofactor. And then I'm going to talk about the pathway itself. So there's one cofactor, which I sort of told you I was going to in the first place. Let me break this down over here. So the one cofactor that I wanted to talk about is folate. Let me also show you. You don't have to sit and look at this. But I'm going to show you it's all written out. So you don't have to bob up and down. It's all written out on the handout. So this is folate. And let me just point out a few things. This is going to be the business end of the molecule. So I want you to know where the business end of the molecule is.
I don't expect you to remember the structures. But what does this sort of look like? Anybody? This is the kind of chemistry-- I mean, I think there's a bunch of heterocyclic chemistry that you find in biology that most of you haven't been exposed to. And it's not intuitive what the most reactive positions are. This cofactor is much simpler than flavins, which we very briefly talked about before. So this has a polyglutamate on the end. So this is folate. And what you really need to know is that this is 5, 6, 7, 8. And this is 10. So the active part of this cofactor is here. So everything's going to happen at either N5-- if you have a copy of this, you can just circle N5, N10 and N5. That's where all the chemistry is going to happen. And it turns out the way this cofactor works-- so this is 1, 2, 3, 4. And so this is 4a, and this is 10a, 8a. It sort of looks like flavins. And it sort of looks like pterins. And pterins actually can undergo redox chemistry under certain sets of conditions. These molecules are only involved in one carbon transfer. So the major focus is one carbon transfers. And it can do it in the methyl state, in the aldehyde state, or it can do it in the acid state. So all three oxidation states from one carbon transfers. And so then how does it do it? And the chemistry actually is fairly simple compared to the chemistry that we've looked at before. And we looked at a little bit at hemes. We looked at a little bit at flavins. This is much simpler. And so what we're after in the end-- and I'll show you how we get there-- so here's N5 methyl. And we'll see this is tetrahydrofolate. So this ring is completely reduced. And so this is tetrahydrofolate. And this can undergo oxidation and reduction. And that becomes very important in the pyrimidine pathway to form thymidine, which is a major target of fluorouracil, which is a drug that's still used clinically. Anyhow, this is the reduced state here. So this is where the tetrahydro is. So both of these can be oxidized. And that would be folate. So you can make dihydrofolate, folate, and tetrahydrofolate. And the oxidations occur here and here. And we're not going to look at that, because we're not going to have time to look at pyrimidine metabolism. But the dihydrofolate plays an important role. It's the target of methotrexate. If you have rheumatoid arthritis, you take methotrexate is one of the drugs that people take nowadays. So what's unusual about this-- and this is key to the purine pathway, it's also key to the pyrimidine pathway-- that's why folate have been central. People made folates for decades. Even when I was your age, people were making folates for treatment therapies in cancer. And it's been successful. In fact, and if you've gone to Princeton's chemistry department, the whole department was funded on an anti-folate that Ted Taylor made 25 or 30 years ago. And they've tried it again under different conditions, and it's now being used clinically. So how does this work? So we have this oxidation state. We have this oxidation state. And then we'll see that this can ring open. And so this would be the aldehyde state. And this can hydrolyze. And that would be the acid state. So I'm going to show you in a second, I'm going to walk you through where those different states came from. So methyl state, aldehyde state, acid state. So there's the model, because I like to have the windows open, you probably can't see the model very well up there now. But you can pull it up on your computer if you want. 
I'm going to write out the model. So we start out over here with tetrahydrofolate. So this is tetrahydrofolate. And we have nothing here, which you'll notice means we could have something at N5, something at N10. We'll see the methyl group is always at N5. It could be at either, chemically, but it's always at N5. We'll see the aldehyde group is always at N10. It could be either chemically, but it's not. So these are going to be the key stages. And here we have no carbons. So somehow we have to get the carbons into the molecules. So we start out with this molecule, tetrahydrofolate. So what happens is you can start out here, and use formate. So formate is going to be the source of the one carbon in this case. So the names in this pathway are, again, horrible, just like the purine pathway. And on the next slide I've written out the names. So it turns out that one enzyme can do three of these activities. So this is one of the enzymes. And so this is activity one. And it attaches a formate, so they call it a formate ligase. The names again, in my opinion, are horrible. But what it allows you to do is-- so what I'm going to draw out now is not the whole structure. I'm just going to focus on the business end of the molecule over here, and skip this ring over here. But that ring is there, and is key to making all of this work. So I'm just going to do this like that. And so what can happen is that you can formylate and form. And so this is now N10-formyltetrahydrofolate. So this is N10, and we'll call this R. So that's the first step. That's the enzyme. The same enzyme catalyzes the next step. And what you can picture happening here, if you watch me, is this nitrogen is juxtaposed to this amide. So it can attack to form a tetrahedral intermediate and then lose a molecule of water. So that's called a cyclohydrolase. So this guy is attacking. And then you have loss of water. And this is a cyclohydrolase. So this is the same enzyme. So this is two. This is one. But they're both on the same polypeptide. So there are three of these on one polypeptide. You've seen that before in recitation last week. And so now what you've formed is-- and again, this is a cyclohydrolase-- now what you've formed is this structure. So we've lost a molecule of water. So you can draw NR. And so if you hydrolyze this, you can get back to the aldehyde stage. So if water adds here, this is an iminium system. Water can add, it can collapse, it can ring open, it can ring close. So the chemistry here-- we're going to see some really similar chemistry actually, because we can use N10-formyltetrahydrofolate in two steps in the purine pathway. So this chemistry I'm drawing right now is related to the pathway in general. And so this is called 5,10-methenyl-- the names, again, are horrible-- tetrahydrofolate. And then the third enzyme in this pathway is a dehydrogenase, so DH. And so what you can imagine you could do here is we have an iminium system. And NADPH is the reductant. So you can reduce this down to methylene tetrahydrofolate. So this NADPH is converted to NADP. So this is the dehydrogenase. We've seen that used over and over again. This is the same enzyme. So this is also MTHFD. And I've given you the nomenclature on the next slide. So if you want to look at-- So this is tetrahydrofolate whatever. So it has a formate ligase, it has a cyclohydrolase, and it has a dehydrogenase, all on one enzyme. And so what do you generate then? You generate-- so this is methylene tetrahydrofolate.
And this is the key player in pyrimidine biosynthesis, which we are going to talk about. And it's an enzyme called thymidylate synthase, which makes thymidine, which is a major target for drugs in the treatment of cancer. So now you can even take this a step further and reduce this further. We're still now here. If you ring open this, you're at the aldehyde stage. You can reduce the aldehyde stage down to the methyl group. And that's then getting us into the methyl state, the aldehyde state, and the acid state. So I think when you sit down and look at this, it looks complicated at first. It's really not that complicated. So this can just ring open. And conceivably, it could ring open in either direction. It depends on the enzyme that's catalyzing it. But we always get N5 methyl tetrahydrofolate. That's what's used inside the cell. People don't find N10 methyl tetrahydrofolate, but chemically, that could happen. So what happens? This is now a new enzyme. And again, it's a dehydrogenase. So NADPH is going to NADP. So this is a new enzyme. I'm not going to write out the name. But this then reduces this to N5 methyl tetrahydrofolate. So what we've done then is, in the pathway I've drawn out here is, where do we get the one carbon from? Here, we got it from the formate. And we can change the oxidation states to get all three of these oxidation states, depending on what we need to do with it. You have to have the right enzymes and the right complexes to be able to make this all work. Now many of you might not recall this, but in the Benkovic paper you read for recitation last week, one of the controls with this tri-functional protein. And it does not exist in the purinosome. Benkovic's been interested never in these enzymes, and channeling of reactive intermediates in these systems. This does not exist in the purinosome. So then the question is how do you get back? And so there are three methylating agents inside the cell in a biology. Does anybody know what the other two are? So this is unusual, N5 methyl. So this is N5, this is again, N10. STUDENT: [INAUDIBLE]. JOANNE STUBBE: So S-adenosyl methionine is probably the most prevalent. What's another one? STUDENT: Methylcobalamin. JOANNE STUBBE: Yeah. So methylcobalamin. So S-adenosyl methionine is the universal methylating agent inside the cell. And then you also have-- I'm not going to draw the structure out. We're not going to talk about it, but methylcobalamin. And there's a single enzyme that uses all three of these methyl groups. And if I had another five lectures, I would talk about this enzyme. This was studied extensively by Rowena Matthews' lab, who was one of Cathy's mentors. And then Cathy was involved in getting the first structures many years ago with the little pieces. So it's one of these enzymes. It's huge. And it's got to juggle these three methyl groups to do the chemistry. It's really sort of fascinating. And so what it does is it takes homocysteine-- so this is homocysteine-- and converts it to methionine. I'm not going to draw the structure. So you methylate it. So you're going to methylate that cysteine. And then you're back to tetrahydrofolate. So there's another important reaction that I just want to put in here is that there's another way to go from tetrahydrofolate to this one, which is methylene tetrahydrofolate. 
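Before turning to the other major one-carbon donor, here is a compact summary of the formate-derived interconversions just described, written as a small Python lookup table (an editorial summary of the lecture content, nothing more).

```python
# One-carbon units carried on tetrahydrofolate (THF), grouped by oxidation level.
# The first three activities below sit on the single trifunctional MTHFD polypeptide
# discussed above; the final reduction and the methyl transfer are separate enzymes.
ONE_CARBON_ON_THF = {
    "acid (formate) level": {
        "carriers": "N10-formyl-THF and N5,N10-methenyl-THF",
        "made_by": "formate + ATP (formate ligase activity), then the cyclohydrolase "
                   "(ring closure with loss of water)",
    },
    "aldehyde (formaldehyde) level": {
        "carriers": "N5,N10-methylene-THF",
        "made_by": "the NADPH-dependent dehydrogenase reducing the methenyl species",
    },
    "methyl level": {
        "carriers": "N5-methyl-THF",
        "made_by": "NADPH-dependent reduction of N5,N10-methylene-THF (a separate reductase)",
        "used_by": "methionine synthase (homocysteine -> methionine), regenerating THF",
    },
}

for level, info in ONE_CARBON_ON_THF.items():
    print(f"{level}: {info['carriers']}")
```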
And so in addition to being able to put on the one carbon with formate, does anybody have any idea what another major way-- it's probably the major way of doing one carbon transfers, from metabolic labeling experiments? It comes from an amino acid. What amino acid could you use? So somehow we want to get from here to here. This is also a major target of therapeutics. Anybody got any ideas? We need to get one carbon out of an amino acid. What did you say? STUDENT: Thymine? JOANNE STUBBE: Thymine? That's not an amino acid. STUDENT: Thiamine. JOANNE STUBBE: Oh, thiamine. STUDENT: Methionine. JOANNE STUBBE: Oh, methionine. No. See, I guess I'm deaf. OK, I didn't hear you. No, that's not it. So I'm not going to spend a lot of time, but serine-- so this is ours-- I'll draw this out, because I think this is really important. This can be converted into formaldehyde. Does anybody know what the cofactor would be that would do that? And then what you end up with is glycine. So this is the major way-- serine is a major one carbon donor. So serine is going to generate the formaldehyde equivalent, which then can get picked up here and make methylene tetrahydrofolate. Anybody have any idea of how you would convert serine into glycine? You do learn about this cofactor. What is the cofactor that works on all amino acids, if you want to do something to it? There's only one. STUDENT: PLP. JOANNE STUBBE: PLP, yeah. So this isn't unusual-- PLP is sort of an amazing cofactor. It can do alpha decarboxylations, racemizations. It can do aldol reactions. And then it activates the beta positions so you can do beta eliminations and replacements. It can do probably 10 or 15 different reactions. This one is unusual in that what you're doing is you're doing an aldol reaction. So you're cleaving that bond, in a reverse aldol reaction in this case. And then the other thing is, if you want to link this into pyrimidines, you have dihydrofolate. So this is dihydrofolate. And that's a major player in pyrimidine metabolism to make thymidine. I'm not going to have time to talk about this. But folate is a central player in both purine and pyrimidine metabolism. And people have spent a lot of time thinking about it. And I think the chemistry of interconversions-- once you sit and walk through this yourself, start over here and see if you can draw out the mechanisms. It's the same mechanisms we've seen over and over again: addition to a carbonyl and loss of water. So that was the introductory part. And really what I want to do now is-- we can put that up here for those who still want to stare at it-- what I want to do now is talk about the pathway. And what I want to do is write out the pathway, and then use a PowerPoint to talk about a few features of the pathway that I think are the most interesting, and that you can make generalizations from to other pathways, like, what is the role of glutamine? That's universally conserved. What is the role of ATP? And we're going to see the roles you see in the purine pathway are used in many metabolic pathways. So those are the ones I decided to focus on. So what I want to do is go step by step and just make a few comments. And then I'm going to use a PowerPoint over here so you can see what I have written down. I'm going to write down a few things. So that's the nomenclature. There's the pathway. We will start there. So I told you that the first step in this pathway is we start with phosphoribosyl pyrophosphate. That's central to a lot of things. It's chemically very unstable. It falls apart.
It's hard to isolate. And the first step in this pathway-- we talked about this briefly in recitation-- is to make phosphoribosylamine. So the interesting thing about this pathway is you start out-- and again, the nomenclature, I've written out. On the exam, you probably will have something about purines there. I will give you the pathway, and I will give you all the names and the enzymes. So you don't need to memorize that. I'm probably the only one that knows the names, because I've worked on it. Very confusing. So what's unique, again, and we've already mentioned this, is you start out with ribosyl phosphate. And what you're going to do-- and this is what we're going to walk through-- is that the first thing you do is you build up the imidazole moiety of your purine. So using sort of basic metabolites and ATP-- there are five steps out of the 10 that use ATP-- you make this aminoimidazole ribonucleotide. And then what you do again, step by step, is convert this into the pyrimidine moiety. So you make your purine. So that's a step, one step at a time. And this was unraveled using metabolic labeling experiments. So the first enzyme I'll spend a little bit of time on, because I think it's a paradigm for many enzymes in metabolism in general: where do you get ammonia from most of the time? The major source of ammonia is glutamine. So that's something that you see in this pathway. So glutamine-- you all know glutamine has this part in this side chain-- is going to glutamate. And so you form glutamic acid. And the ammonia from the amide is going to interact with phosphoribosyl pyrophosphate, which is always bound to magnesium, to form phosphoribosylamine. And so I'm now going to start being sloppier. Instead of writing phosphate here, I'm going to have a phosphorus with a circle around it. That means we always have the five prime phosphate. And furthermore, what I'm going to do is replace all of this with an R group-- ribosylphosphate is present at every single step in the pathway. And in fact, one of the reasons I thought this pathway was interesting, every enzyme in the pathway has to have a binding site for ribosylphosphate. Well, have any of you ever thought about how metabolic pathways evolved? Where does it come from? You have these really complicated pathways. Where do you start? How do you think about that? Well, this might be a fantastic place to look at that. Why? Because you might have a ribosyl binding site for everything. So maybe it starts with something that binds ribosylphosphate. Anyhow, this is an unusual pathway in that you have something-- you have a really good handle to hang on to. And as we already talked about-- so this, we're going to call R-- what's unusual, there are a couple things I want to say about this. But we already talked about this a little in terms of channeling and this question of why you would ever want to have clustering of enzymes. And that's because the half-life of this is about 10 seconds at 37 degrees. So it took a lot of effort to see this thing. I mean, you couldn't see it by normal methods. People inferred its presence because Buchanan actually was able to see the next intermediate in the pathway and inferred the existence of this. And many of these intermediates in the pathway, which is why Benkovic focused on this, are chemically unstable. Let's see if I have one of these. I don't have it in this pathway.
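To put that roughly 10-second half-life in perspective, here is a short back-of-the-envelope sketch (an editorial illustration; the waiting times are arbitrary round numbers, not measured values).

```python
import math

half_life_s = 10.0                      # PRA half-life at 37 degrees quoted above
k = math.log(2) / half_life_s           # first-order decay constant, ~0.069 per second

def fraction_surviving(t_seconds):
    """Fraction of phosphoribosylamine left after t seconds free in solution."""
    return math.exp(-k * t_seconds)

for t in (0.1, 1, 10, 60):              # arbitrary waiting times before the next enzyme acts
    print(f"after {t:5.1f} s: {fraction_surviving(t) * 100:5.1f}% remains")
# Roughly 99% survives 0.1 s, 50% survives 10 s, and under 2% survives a minute --
# one argument for clustering or channeling between PurF and the next enzyme in the pathway.
```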
Anyhow, I'll show you another one, which has a half-life of five seconds or something like that, and it took forever for people to identify it, because when you try to work it up as a chemist, it falls apart. And I would say this is something to keep in mind if any of you get into metabolomics-- people are looking for metabolites now. There's one metabolite that people have found quite frequently, and it seems to be involved in regulation of glycolysis. It's this one. See, where am I? Aminoimidazole ribo-- this one. And that's because it's stable. And a lot of the other ones are not very stable. So I wouldn't be surprised if you ended up finding a lot more metabolites that are playing a central role in regulating enzymes in primary metabolism, because where does the serine come from? Does anybody know where the serine comes from that plays a key role in making this folate analog? Anybody have any idea? So serine comes from 3-phosphoglyceric acid in the glycolysis pathway. It's actually very straightforward to write a mechanism of how you get there. It intimately links the glycolysis pathway to purine metabolism. And we'll also see here, of course, this is folate, but we also need glycine. That's the next small molecule in this pathway. It needs glycine. So everything is integrated. Once you see-- you sort of see the big picture and have central pictures of primary metabolism, everything becomes much more integrated. So how does this happen? So what I want to do is I want to talk a little bit about this enzyme. So here, let me just talk about this. So if we call this Pur F, just so we have a name, Pur F is called an amidotransferase. And what it's going to do is it's going to take glutamine-- and it turns out these enzymes have a domain. They always have multiple domains. And the domain that uses the glutamine can be the same. There are actually two different convergent evolutions of glutamine binding domains that do the same chemistry. So what you do is-- we've seen this again many, many times-- so you form a covalent intermediate, which then hydrolyzes to glutamate, regenerating ESH. And what happens during this reaction is you generate ammonia. So the goal of these amidotransferases in general, in many, many metabolic pathways, is to generate ammonia. And so to me, what's striking about this is the way nature evolved these metabolic enzymes that generate ammonia. And so what you see in a cartoon view-- so we are always going to have all of these enzymes. They may be a single polypeptide. They may be two polypeptides, but they all have a glutaminase domain. So the glutaminase is just generating the ammonia. But what do we have? We start out with phosphoribosyl pyrophosphate. So once we generate the ammonia, what can happen? You can now-- it turns out by a dissociative mechanism-- displace a pyrophosphate to form phosphoribosylamine. So all of these kinds of reactions involve dissociative rather than associative transition states. That's not important. But what's amazing about this is that PRPP, in this case, binds to one domain, and the glutamine binds to this second domain. So ammonia, what would happen to ammonia if it went out into solution? STUDENT: Protonated. JOANNE STUBBE: Yeah. Gets protonated really rapidly, becomes unreactive. I don't know why nature designed this. But what you see with all these enzymes is she makes a tunnel across the domain interface that's about 25 to 40 angstroms long. So the ammonia that's released never gets out into solution.
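For a rough sense of timescale, here is a small order-of-magnitude estimate (editorial; the 25-to-40-angstrom tunnel length is the figure from the lecture, while the diffusion coefficient is an assumed aqueous-like value for ammonia).

```python
# Order-of-magnitude transit time for NH3 diffusing the length of the tunnel,
# treated as one-dimensional diffusion: t ~ L^2 / (2 D).
D = 1.5e-9                                  # m^2/s, assumed aqueous-like value for ammonia
for length_angstrom in (25, 40):
    L = length_angstrom * 1e-10             # tunnel length in meters
    t = L ** 2 / (2 * D)                    # transit time in seconds
    print(f"{length_angstrom} angstrom tunnel: ~{t * 1e9:.1f} ns")
# About 2-5 ns -- negligible next to the enzyme's turnover time, so the tunnel costs
# essentially nothing kinetically while keeping NH3 away from bulk water.
```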
This is another example of channeling a reactive intermediate, which we talked about as potentially a reason for channeling in the purine pathway. So there's a tunnel. And the tunnel can be 25 to 40 angstroms long-- we have a number of structures of these ammonia channels. And I have no idea-- I mean, this surprised the heck out of me. I thought the way nature would hold on to this is by hanging on to not the covalent intermediate, but the preceding tetrahedral intermediate. And then when the right substrate was there, release it and then bind it, sitting right next to it. But nature, in all these designs, has done this thing where you have this channel. And here is an example of Pur F. This is the glutaminase domain up here. And here is where the phosphoribosyl pyrophosphate binds down here. You can't see the channel, but this is the work of Janet Smith from a number of years ago; hers was the first structure that showed the channel in this pathway. So that's common. And we're going to look at another glutamine-requiring enzyme in this pathway. It's the fourth enzyme in the pathway. It also has a channel. Again, it's distinct. It also goes through this glutaminase covalent intermediate, but the structure of the glutaminase domain is distinct. So what's the next enzyme in the pathway? So the next enzyme in the pathway, again, is a paradigm for many, many enzymes in primary metabolic pathways. And if you look at the structure-- let's just go back to the pathway. If you look at this pathway, what you now want to do-- so we keep the ribose 5-phosphate all the way through the whole thing. That's the scaffold. Now what are you going to add? You're going to add glycine. So here is your phosphoribosylamine. And you're going to add glycine. How do you activate an amino acid? You've seen activation of amino acids now many times. What are the two ways you can activate amino acids? STUDENT: [INAUDIBLE]. JOANNE STUBBE: So either adenylate or phosphorylate. So that's a paradigm that you see over and over again in nature. This enzyme uses ATP. This is one of the five enzymes. And it forms inorganic phosphate. So you're phosphorylating, not adenylating. And so I'll show you what the mechanism is up there. You've already seen this mechanism, but the idea is you phosphorylate this. You're going to form the phosphoanhydride. And then the phosphoanhydride can react with the amino group. And kinetically-- this is something that one of my students worked on a long time ago-- there was evidence that this intermediate, which is chemically unstable, could channel between the two proteins. So you don't generate this in solution, where it can fall apart and it can anomerize. It gets transferred directly. In fact, in the early days when we invented the first biochemistry labs at MIT, they used this system. I really pushed them to the limit, because they were dealing with a substrate that had a very short half-life. Anyhow, they learned a lot from the exercise. So what you're going to have then is ribos-- I'm just going to call it R. And so here's our glycine. Whoops. Guess I'd better get the structure right. So this is from glycine. So what we will see is that this is another-- we're not getting very far-- but this is a member of the ATP grasp superfamily of enzymes. They all do the same chemistry. So let me just move forward a little bit. I'm not going to draw this out. You guys have seen this chemistry many times. So what's happening in this chemistry is you have a carboxylate.
ATP phosphorylates it, and then you have attack by a nucleophile-- in this case, the nucleophile is the amino group of phosphoribosylamine. So what I just want you to see here, if you look at this, is that there are four enzymes that are involved in purine metabolism that all have the same structure. They all have ATP grasp structures. They all go through phosphoanhydride intermediates. And you can, from bioinformatics, pick these structures out. So this is again, an example. Once people defined-- there's almost no sequence homology between these proteins-- but by knowing this chemistry, you can actually pick out that these are going to be family members. And if you know how they are organized in bacteria, in operons, you can even guess at the substrate. And then you can test this model that they go through phosphoanhydride intermediates. And I'm over, but the next step in this pathway-- the next step in this pathway-- we're going to come back. And what are we going to use? We're going to use N10-formyltetrahydrofolate. That's why I went through this. We're going to put a formyl group here. And again, the chemistry is just the same. Go home and think about the chemistry of how you generate all the different oxidation states of the carbon. And then I think you can see the chemistry in this pathway actually is pretty simple, once you know a few basic reactions. So the ATP grasp family is interesting. The amidotransferase and the channel are interesting as being general in metabolism.
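As an editorial aside on the bioinformatics point above: one common way such family members get flagged in practice is a profile-based search, for example with HMMER's hmmscan against a Pfam-style profile library. The sketch below is hypothetical-- the file names and the domain-name keywords are placeholders, and the exact profile names depend on the database release.

```python
import subprocess

# Hypothetical inputs: a FASTA file of candidate purine-pathway enzymes and a local,
# hmmpress-prepared Pfam-style profile database (both paths are placeholders).
SEQS = "purine_pathway_candidates.fasta"
PROFILES = "Pfam-A.hmm"

# hmmscan writes a parseable per-domain table with --domtblout.
subprocess.run(
    ["hmmscan", "--domtblout", "domains.tbl", PROFILES, SEQS],
    check=True,
)

# Keep hits whose profile name suggests an ATP-grasp or glutamine amidotransferase
# (GATase-type) domain; the keyword list is illustrative, not exhaustive.
keywords = ("atp-grasp", "atp_grasp", "gatase")
with open("domains.tbl") as handle:
    for line in handle:
        if line.startswith("#"):
            continue
        fields = line.split()
        profile_name, query_protein = fields[0], fields[3]
        if any(key in profile_name.lower() for key in keywords):
            print(f"{query_protein}: matches profile {profile_name}")
```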
MIT_508J_Biological_Chemistry_II_Spring_2016
36_Nucleotide_Metabolism_3.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: We're talking about the purine biosynthetic pathway. Here's the pathway. I told you we were going to go through this part of it, so at least you saw what the steps in the pathway are. The key thing is you start out with the ribose 5-phosphate, and then you build up the base a step at a time, which is completely different from pyrimidines, where you make the base, and then stick on the ribose 5-phosphate. And I told you at the very beginning, there were a few interesting steps in this pathway that are universal in almost all metabolic pathways. And one of them we were going over-- two of them we already went over. I'm going to briefly go back over this, but the role of glutamine in the purine and pyrimidine pathway as the source of nitrogen. There were five of these enzymes. That's not an accident. Glutamine is one of the major ways you deliver ammonia into molecules. And purines and pyrimidines both have a lot of nitrogens. The second thing we were talking about, and we had gone through the first few steps here, was the second enzyme in the pathway, where we use ATP, and in this particular pathway, this is the mammalian version of the pathway, which is pretty similar to the bacterial, but there were five different steps that require ATP. This pathway demonstrates how you see ATP use over and over and over and over again. There are defined structures for the binding sites of the ATPs. Once you have these in your brain, it becomes easy. You might not know which one of these mechanisms it is, but after you do a little bit of reading, or bioinformatics, you can immediately tell what the structures of the enzymes actually are. The other thing we talked about already was the role of folate. Those are the three things I want you to get out of this, and we're going to go through the rest of that today, and then, after we finish that, we'll come back to the purinosomes, which is the reason I chose this topic a long time ago, because it speaks to the question of the importance of transient protein-protein interactions in metabolism inside the cell, which has been something that people have been interested in for decades, and this paper in 2008 that you read for recitation was very interesting to a lot of people, and we'll come back and talk about that at the end. The first enzyme-- the names are horrible. I gave you the names of all these things. If you look at last year's exam, you will have the purine pathway with the name stuck at the end. I don't expect you to remember this, but we go from PRPP-- we've already gone through this step-- and the enzyme is PurF-- I'm not going to write it out-- goes to PRA. The reason I'm writing that again is because a key reason that Benkovic's lab and my lab, many years ago, were focused on this is the instability of the intermediates in this pathway. This guy has a half-life of 15 seconds at 37 degrees, so this is chemically unstable. This is enzyme 1, and this is the first place we saw glutamine going to glutamate as the source of ammonia. And I wanted to go back and say one more thing about that. Again, there are two enzymes that use glutamine as a source of ammonia.
This one is simple: if you look at the pathway, it's ammonia displacing pyrophosphate-- you have a nucleophile displacing pyrophosphate which, when complexed to magnesium, is a good leaving group. The idea here is that all of these proteins, and there were five of them in the purine and pyrimidine pathways, have two domains. Sometimes the domains are separate polypeptides. Often they're linked together. The glutaminase domain is one of these domains, and the chemistry-- where the ammonia is going to displace whatever the leaving group is-- happens in the second domain, requiring a tunnel that varies from 25 to 40 angstroms to actually deliver the ammonia. PurL is the fourth enzyme in the pathway. Again, here's the glutaminase domain. It's upside down, and here's where the chemistry occurs in the other system. What I wanted to say about that is that all of these enzymes have a cysteine in the active site, and you should go back and look at the PowerPoint, because I'm not going to write this out on the board. You've seen this chemistry now, over and over again, but, in some way, the glutamine is going to be attached covalently, with loss of ammonia, to a cysteine in the active site. Let me show you what the mechanism of that is. Here is a generic mechanism, but it could be a cysteine protease. These are the same things we've seen over and over again, so this should now be part of your basic vocabulary. So the goal, then-- here's our glutamine-- is simply to liberate ammonia. The cysteine needs to be activated somehow for nucleophilic attack. How is that done, normally? With a histidine. This particular enzyme-- there are two superfamilies of enzymes that do this-- this one doesn't use histidine, but it still needs to be activated. You go through a tetrahedral intermediate, which collapses to form an acylated enzyme, and, in the end, you need to hydrolyze this off to give you a glutamic acid. One of the reasons I wanted to go back to this, again, is because, in the Bankovic paper, we talked about, but didn't go through in any detail, the fact that, to study whether these purinosomes could assemble and disassemble, they used an inhibitor of the purine pathway, which should then cause the enzymes to assemble, because they need to make purines because you've blocked the pathway. And the inhibitor they used is a molecule that looks like this. They used azaserine; this one, which has another methylene in it, is DON. And this is a diazoketone. This is a natural product, and it was discovered by Buchanan's lab at MIT, and it was the first diazo compound that people had seen. And it inhibits all-- this is something that's important when thinking about what's happening when you're treating cells with it to stop purine metabolism-- it inhibits all glutamine-requiring enzymes, because the mechanisms are similar. So the mechanism, if you sit down and think about it, is pretty simple. You have a diazo group, and the proposal is that this needs to be protonated by the cysteine in the active site. And now you have an N2 that's dying to leave, N2+, and so you just do an SN2 reaction, forming a covalent bond. That's the basis for how azaserine in the Bankovic paper works. There was another way that they blocked the pathway, which hopefully we'll have time to come back to at the end. So, again, this idea of coming together and going apart-- how do you perturb this? One way they perturbed it was depletion of purines. We discussed that.
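Going back to the glutaminase chemistry and the DON inactivation just described, one compact way to write it out (my shorthand, with E-S(-) standing for the activated active-site cysteine; these are sketches, not the exact structures on the slides):

\[
\mathrm{E\!-\!S^-} + \mathrm{Gln} \;\longrightarrow\; \text{tetrahedral intermediate} \;\longrightarrow\; \mathrm{E\!-\!S\!-\!C(=O)R}\ (\text{glutamyl thioester}) + \mathrm{NH_3}
\]
\[
\mathrm{E\!-\!S\!-\!C(=O)R} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{E\!-\!SH} + \mathrm{Glu}
\]
\[
\text{diazo inhibitor, after protonation:}\quad \mathrm{E\!-\!S^-} + \mathrm{R'\!-\!C(=O)\!-\!CH_2\!-\!N_2^+} \;\xrightarrow{\ \mathrm{S_N2}\ }\; \mathrm{E\!-\!S\!-\!CH_2\!-\!C(=O)\!-\!R'} + \mathrm{N_2}
\]

The released ammonia is what travels down the tunnel to the second active site, and the dead-end thioether formed by azaserine or DON is why one inhibitor shuts down every glutamine-dependent step at once.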
We didn't really discuss this particular step. The next step in this pathway. Now we have R, which is ribose 5-phosphate. I'm not going to write that out, because every single step now has ribose 5-phosphate as a scaffold. And what we added was glycine. Again, here's the first time that we need to use ATP, going to ADP and Pi. Lots of times, you don't know, when you look at this, whether you're going to transfer pyrophosphate or you're going to phosphorylate, that is, where you have attack on your ATP. Almost all the enzymes, but not all of them, in the purine pathway have ATP going to ADP, so that tells you the attack has to be on the gamma position. This is an ATP grasp superfamily member, and they all go through the same mechanism, which I briefly talked about last time, so I'm not going to write this out again, but basically, you're going to go through a phosphoanhydride, which is then attacked by a nucleophile. We're converting the hydroxyl group of the carboxylic acid into a good leaving group. You've seen this used over and over again over the course of the semester. But over here, this is all written out for you. Here we have glycine. R is CH2NH2. You phosphorylate to form the anhydride. You still need to neutralize this to make it into a good leaving group, which is done in the active site, and then you can have a variety of nucleophiles that could come in and attack to form the covalent linkage. In this case, the nucleophile is not the NH3+. It needs to be converted to the NH2-- sorry, the nucleophile is over here. It's phosphoribosylamine. So it's the NH2 of the phosphoribosylamine that's attacking. Again, to be a nucleophile, it's got to be deprotonated. Hopefully, you all know that at this stage. So what do these enzymes look like? They all look the same. It turns out that if you look, globally, at purine biosynthesis, not just focusing on mammalian systems, there are four or five enzymes that actually are ATP grasp superfamily members in the purine pathway. And they all look like this. They have a little domain with a lid, and all the chemistry happens in between, and the lid opens and closes. You can pick these out by bioinformatics. That's the second step in the pathway. And this just shows what all of the products can be, so if you go back and you pull out the pathway, there are ATP grasp superfamily members, and these are the products that are formed by this common type of mechanism through a phosphoanhydride. The next step in the pathway. So now we formed-- the next step in this pathway, let's see if I put this in. All right. Sorry. I thought I put another copy of this in. The next step in the pathway is we need to formylate. What do we use for formylation? That's why we spent the introductory part of this course talking about folates, which can transfer carbon at three different oxidation levels. What you have here, and I'm not going to draw the whole thing out, is the part I told you was the business end. This is N10-formyltetrahydrofolate. Theoretically, this could be either here or here, and chemically they can actually interconvert under certain kinds of conditions. But we know, for all purine pathways that people have looked at, it's always the N10. That's distinct from methylation, where it's always from the N5. I don't know how things evolved, but that's what the results are. How does this happen? Hopefully, you all know this without me having to write this down, but this needs to be a nucleophile. It needs to be deprotonated.
You need a base to remove a proton, and then you form a tetrahedral adduct, and then the tetrahedral adduct high energy intermediate collapses, and the formyl group gets transferred from here to here. This then becomes a molecule that looks like that. I've just transferred the formyl group, which is called FGAR. Formylglycinamide ribonucleotide. Horrible names. This molecule is unstable. It loses its formyl group actually quite rapidly. It took them a long time to figure this out. One of the premises is a purine pathway, because people were interested in it, is that it falls apart. When you're trying to look at metabolomics, which is the next decade-- hundreds of people are using mass spec, which you guys have thought about, to look for metabolites-- you need to know something about the stability of the molecules you're looking for, and how you separate them from everything else. So this is going to be a major focus, and most people haven't found very many intermediates in this pathway, and I guarantee you it's because they break down. I think that was clear from Buchanan's work really early on. The next enzyme in the pathway. We've seen this, again, before. Now we're going from an amide to an amidine. That's all we're doing, so an oxygen is being replaced by ammonia. So what are we going to use? We use glutamine. The next enzyme in the pathway uses glutamine to glutamate, and again, this is the source of ammonia. As I showed you before, there's a channel where this happens. This is another way you can use ATP going to ADP and Pi. This is the second kind of mechanism. This enzyme is called PurL. Anyhow, we're using ATP again. Why are we using ATP in this case? What we're trying to do is convert this amide into an amidine. We're converting this into this. So we need a source of ammonia. That's the source of ammonia. What we have is, we're using ATP to facilitate a dehydration reaction. Again, you've seen this before with a carboxylic acid. Now we're doing it with the oxygen of the amide. The ATP is used to remove oxygen of the amide. What I'm going to show you, and then we'll come back to this again, is the generic mechanism for this. Let me show you now, before we move on, the next enzyme in the pathway. Here is using glutamine, and we use ATP to help us attach the glutamine to the carbonyl. The next enzyme in the pathway. What you're doing, basically, I'll show you this in a second, but you're just cyclizing. This amino group becomes this amino group, and this guy has to attack that position. That position, again, is an amide, and the mechanism, again, uses ATP, just like this enzyme, PurL, and I'm going to show you how it works. These two enzymes in the pathway are structurally homologous to each other. The product of one enzyme is the substrate for the next enzyme in the pathway, and they clearly evolve from each other. This is something that everybody's been interested in. How can you tell something about the evolution of a biosynthetic pathway, and thinking about how to control this. Why? Because everybody and his brother now is focused on bioengineering of metabolic pathways. So the more you understand about the basic principles of how nature designed this, the better off you're going to be in trying to get this to happen robustly and control things by using an enzymatic system and enzymes from many different sources. So what's the generic mechanism? This is called-- this enzyme is part of the PurM-- the nomenclature is horrible-- superfamily. 
So I just told you this ATP was the ATP grasp superfamily. This is the PurM. Why is it called the PurM superfamily? Because it was the first structure of any molecule that looked like this, and it was the PurM enzyme. So that's where the horrible name came from. This enzyme is PurL, and this enzyme is PurN, and they're structurally homologous to each other. How do they work? Again, I think once you see it. Here's the general mechanism. Here we have our amide, and what we want to do is facilitate dehydration of the oxygen. What you're going to do is phosphorylate the oxygen of the amide. Now what you have is a system that is activated for nucleophilic attack by a nucleophile. That's the generic mechanism. There is a generic mechanism where you simply phosphorylate this. Now, if this is positively charged, this is activated for nucleophilic attack, and then you lose phosphate. People have studied this over the course of years, and the mechanism for this is understood. I don't have the structures but, again, this enzyme and then the next enzyme in the pathway use the same sort of approach. The next enzyme in the pathway takes the amidine. What it's going to form is a cyclized product. This is aminoimidazole ribonucleotide. So we finally found-- Remember, I told you, you form the imidazole ring, and then you're going to put on the pyrimidine ring afterwards. How does this happen? It looks sort of wonky. But what you can see is that this guy-- so let's just put a box around this guy-- becomes this guy. This guy is where we're doing the chemistry. That's the one we're going to attach, we're going to phosphorylate. What you have here, now, is an intramolecular attack. So, the nucleophile, instead of being ammonia, which is external, now happens intramolecularly. In the end, after you activate this, you get intramolecular chemistry. This was the site. This was the site that was activated in the beginning. The chemistry in these two systems is pretty much the same, and now we've got our imidazole ring, and now what we need to do is build up the rest of this system. Is everybody with me, or am I going too fast? I'm probably going too fast. Anyhow, that gives you the generic mechanism for this. I didn't draw the structures all out. The folates we've already talked about. So I'm not going to talk about that again. We're going to see the folate-requiring enzyme again later on in the pathway. Now the pathway just repeats itself. Really, I think what's most striking, this is really an ancient pathway. There are huge numbers of ATPs used in this pathway. I think, if any of you wind up thinking about cancer therapy and stuff, and whether you have de novo biosynthesis because you need a lot of purines fast, or whether you use salvage, this really requires a huge amount of energy to make this pathway actually work. Now we have this molecule, and then the next step in this pathway. In the human system, what you do in the human system-- this-- it's not right. This enzyme, cross this off. This is a Bankovic's lie. Cross that off. It doesn't use ATP. So you need to cross that off. It just picks up CO2. If you look at this, what do you have happening here? We're going to go from here, and we're going to pick up CO2 there. CO2 actually can react really rapidly at this position. So you need CO2, and let me write this down. No ATP. I don't know why. I probably didn't look at this very carefully, but there's no ATP required for this step. What's unusual? Do you think it's unusual to use CO2? This is called PurE. 
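Before taking up that CO2 question, here is a minimal way to draw the generic PurM-type activation from a moment ago (again a sketch; Nu is ammonia delivered from glutamine in the PurL case and the intramolecular amine in the cyclization step):

\[
\mathrm{R\!-\!C(=O)\!-\!NHR'} + \mathrm{ATP} \;\longrightarrow\; \mathrm{R\!-\!C(OPO_3^{2-})\!=\!\overset{+}{N}HR'} + \mathrm{ADP}
\]
\[
\mathrm{Nu{:}} + \mathrm{R\!-\!C(OPO_3^{2-})\!=\!\overset{+}{N}HR'} \;\longrightarrow\; \mathrm{R\!-\!C(Nu)\!=\!NR'} + \mathrm{P_i}
\]

Phosphorylating the amide oxygen is just another way of turning a poor leaving group into a good one, the same trick as the acyl phosphates earlier, only applied to an amide instead of a carboxylate.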
How much CO2 is there inside the cell at physiological concentrations? Think there's a lot or a little? Where have you seen CO2 used before? Remember fatty acid biosynthesis? Do you use CO2 in fatty acid biosynthesis? Anybody remember? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: The what? Anybody know how you-- do you use CO2 directly? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: You use bicarbonate? OK, why do you use bicarbonate? That's where the equilibrium is at pH 7. There is almost no CO2 unless you go down to acidic pHs, so almost no enzymes use CO2. So this is unusual. That's also true of biotin. And, in fact, so this is the human enzyme, and it generates that product. In bacterial systems, it turns out that it does use bicarbonate and ATP, and generates a carbamate of the same molecule. What do we know about the stability of carbamates? So, number one, why are we using bicarbonate and ATP? Have you seen that before? What does ATP do to the bicarbonate? We just saw this reaction two seconds ago. What does ATP do to bicarbonate? AUDIENCE: Phosphorylates it. JOANNE STUBBE: Yeah, phosphorylates it. You need to neutralize your charges. These are all magnesium ATP, and you form carboxyphosphate, which has a lifetime on the order of a millisecond. That's the way carboxybiotin is made inside the cell. Almost all organisms do not use CO2, they use bicarbonate. And to activate bicarbonate, the other reason-- What's wrong with CO2? How do you hold on to CO2? You think that's easy to bind in the active site? No, there's nothing to hold on to. There's no charge. It's symmetrical. So what nature does is put bicarbonate, which is charged, into the active site, and uses ATP to phosphorylate it, to form carboxyphosphate, which then reacts with the nucleophile, in this case the amino group. Yeah. Did I screw something up? AUDIENCE: So you do need ATP? JOANNE STUBBE: You do need ATP for the bicarbonate-dependent reaction. So, there are two different reactions. This is eukaryotes, and this is bacteria. They have two different pathways. I think this is sort of amazing, because what happens now is the bacterial enzyme then takes this, and generates this. So nobody even knew that this intermediate-- my lab discovered this a long time ago-- existed on the pathway. Why? Because its half-life is on the order of 15 seconds. Carbamates. That's how you carry CO2 from the tissues back to the lungs. It's carried on the surface of hemoglobin with lysines forming carbamates, and what's striking is that these enzymes-- one enzyme uses this substrate, one uses this substrate. The proteins are structurally homologous to each other. Nobody really understands that. Nature has done a shift here: in the eukaryotic system, CO2 is added; in the prokaryotic system, you need bicarbonate and ATP. And I think that tells you something about where these things evolved. What was the pH? And was there enough CO2 to be able to do this kind of chemistry over the evolution of these systems? I think things change. You've now produced this molecule, which is called CAIR. So, that's carboxyaminoimidazole ribonucleotide. Then the next step in the pathway. Now, we only need this and another carbon to complete the pyrimidine ring. It turns out aspartic acid-- which is also a major player in pyrimidine biosynthesis-- this nitrogen is going to come from aspartic acid. What are we going to do? We need to activate this carboxylate to attach the amino group of aspartic acid. How do we do that? With ATP.
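One brief aside before that ATP chemistry, to put a number on the CO2-versus-bicarbonate point above. Using the Henderson-Hasselbalch equation with an effective pKa of roughly 6.3 for the CO2/bicarbonate couple (a standard textbook value):

\[
\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2}]} = 10^{\,\mathrm{pH} - \mathrm{p}K_a} \approx 10^{\,7.4 - 6.3} \approx 13
\]

So at physiological pH well over 90 percent of the pool is bicarbonate, and dissolved CO2 only takes over as the pH drops, which is the argument for why most carboxylating systems, the bacterial enzyme here and the biotin-dependent carboxylases included, activate bicarbonate with ATP rather than trying to bind CO2.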
We phosphorylate it, and then we have nucleophilic attack. I'm going to go up onto the board up there, so you can still see what's happening. This next reaction. CAIR now reacts with aspartic acid. And we need, again, ATP, and we go to ADP. Now what we have, aspartic acid, we form an amide linkage. R is ribose 5-phosphate. What we've done now is attached-- we're deviating, but we're going to see it near the end of the purine pathway. We use this strategy again. Almost all the time, if you have to guess at this, the source of ammonia or nitrogen is going to be glutamine. So if you don't know and you're seeing a new pathway, use glutamine. But here's an example where nature has used something different. She's used aspartic acid. And the ATP is, again, activating the carboxylate. So we're using the same strategy over and over and over again. Then the next enzyme in this pathway. What are we going to do? We convert this intermediate called SAICAR. We now lose fumarate. Where have you seen fumarate before? Does everybody know what fumarate is? That's an intermediate in the TCA cycle. This is an anaplerotic pathway, and you've got to feed the fumarate back in. What are we doing here? We're going to lose fumarate, which has the two carboxylates transfer the double bond. We're going to do an elimination reaction. The next step in this pathway is catalyzed by PurB, and we'll see that nature uses the same strategy to convert IMP into AMP at the end of the pathway. Uses the same enzyme, actually. So you lose fumarate. So what we're doing now is we're going to do-- actually, the enzymes have been very well studied. We have structures of all these things. You use fumarate. Now what we have is this guy. And this guy actually has now been found as a regulator of glycolysis. So we're linking now. You're going to see this, and I think you're going to see more of this. The only reason these guys have been found is this guy's pretty stable, so people can find it using metabolomics. But this molecule is a regulator of glycolysis, and I think the more we look, the more we're going to find basic intermediates and metabolic pathways controlling fluxes through other things. We need glycolysis to ultimately generate energy, because we need a lot of ATP to synthesize things, but the connections between all these things, I think, remains to be established. So this is involved in regulation of glycolysis. If I'd had another couple of lectures, I would have showed you how that fit in. And then, where are we? We're not very far away. We only need one carbon left. Where do we get the one carbon from? AUDIENCE: Folate. JOANNE STUBBE: Yeah, from folate. So here we have it again. Now we have N10-formyltetrahydrofolate. That's why I spent the time in the beginning. And this guy, through the same kind of a mechanism, is going to be attached to that guy. Once we have the one carbon there, then you can cyclize. You attach that. Now we're ready to cyclize and lose a molecule of water. So, the last step is cyclization and loss of water to form inosine monophosphate. Inosine monophosphate is the end goal. That's the first time we now have it purined. So we have both the imidazole ring and the pyrimidine ring, generating this purine, which then is the branch point to form GMP and AMP. Both of these are going to involve two steps. And this tells us something about the overall regulation of the pathway. Pathways are often regulated by feedback inhibition. 
The end products can come back and inhibit the first step, so things don't build up. If we come over here, if we look at PurF, this is a stop. These are the inhibitors-- AMP and GMP. We're going to see, in this pathway, AMP inhibits its own biosynthesis, and we're also going to see GMP inhibits its own biosynthesis. So what you see is, ultimately, we want to control the relative ratios of purines and pyrimidines, which we're not going to get to, and these are examples of simple allosteric effectors. They bind outside the active site and shut things down. And we actually understand a lot about how that works, we just don't-- we're not going to have time to discuss that. So what we've gotten to through all of this is inosine monophosphate. If you look at the next step in this pathway-- if we go back here, here is IMP, and we want to go to AMP, and we want to go to GMP. If we look at AMP, what do we see? Have you seen this before? We're attaching aspartic acid. Where have we just seen that? We've just seen aspartic acid attachment. And what's interesting about this is, instead of using ATP, it's using GTP. Is that an accident? I don't know. GTP is regulating the flux to form AMP. So again, AMP, ATP, GTP, you've seen this over and over again over the course of the semester. You saw, with translation, it was all GTP. In other cases, you saw, with folding, with the proteasome, it's all ATP. You've got to control all of these ratios. Here is a place where the ratios are controlled. So how does this happen? What are we going to do with the GTP in that molecule? We want to go from here to here. This carbonyl is replaced with the nitrogen of aspartic acid. What are we going to do to that oxygen? AUDIENCE: Phosphorylate it. JOANNE STUBBE: Phosphorylate it. And that's done by GTP rather than ATP. So what you do is you phosphorylate through the mechanism that we just went through, that I wrote over here somewhere. Where did I write it? All right. I can't see where I wrote it, but it's in your notes. You then have the amino group of aspartic acid displace this, and then what happens in the last step? This is exactly what we saw over here. We're kicking out fumarate. So this is the same enzyme. So PurB also acts here. So it's kicking out fumarate. Now, what about this pathway? This pathway is of great interest, because it's a major target-- when you have a transplant, to prevent rejection-- of mycophenolic acid. There are many compounds that inhibit this step in the pathway, and it's widely used for organ transplant, subsequent to the transplant. This is called IMP dehydrogenase. How do you get from here to here? This is not so trivial. What you see, and this is the unusual thing about this, hopefully now you could actually think about this, is that we're adding an oxygen here. So, somehow, we have to add water, and then we're using NAD, so we're going to have to do an oxidation and NAD gets reduced. If you look at this, what happens is this molecule is activated for nucleophilic attack at this position to add an OH here. So what you generate is-- then this guy needs to get oxidized by NAD. That's an unusual step. You should go back and you should think about that. It took people quite a while to figure this out. What about the last step? How does this work? Where have we seen this before? Glutamine. What we're doing is converting this oxygen to an amino group. What's doing that? I told you there were five glutamine-requiring enzymes in the pathway. This is one of them.
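Backing up for a moment to the feedback-inhibition point at the top of this stretch: if it helps to see the idea as an equation, a deliberately oversimplified rate law for the first committed step, with the end products as inhibitors, might look like the expression below. The symbols are generic placeholders, and the real allosteric behavior of PurF is more cooperative than this, so treat it only as a cartoon.

\[
v = \frac{V_{\max}\,[\mathrm{PRPP}]}{K_m\left(1 + \dfrac{[\mathrm{AMP}]}{K_{i,\mathrm{AMP}}} + \dfrac{[\mathrm{GMP}]}{K_{i,\mathrm{GMP}}}\right) + [\mathrm{PRPP}]}
\]

As AMP and GMP accumulate, the inhibition term grows and flux into the pathway falls, which is the "so things don't build up" logic.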
What do we need to do to this oxygen to make it into a good leaving group? We need to phosphorylate it. Use ATP to phosphorylate this, and then glutamine supplies the ammonia, and that's how you get GMP. As an exercise, you should go back and think about these interconversions. If you have trouble, you can come back and talk to me. I put on the Stellar site a new version of a chapter on purines from a book by Appling that has come out last year. Within this section, it's by far and away much better than any of the others. So those of you who want to look at the chemistry of this, they've written this all out in detail. So you can pull it out and flip to that page. You don't have to read the whole chapter. You can flip to the page where they describe all of these things. So that's the purine pathway. My goal was to try to show you that everything has made from ribose 5-phosphate as a scaffold. You build up the imidazole, you build up the purine, and you use three types of reactions that are used over and over and over again in metabolism. One of the reasons I picked this topic, besides the fact that I like deoxynucleotides, which I never get to talk about, is the discovery of what we talked about in recitation 13, this purinosome. What's the purinosome? You all know what it is, And we talked about some of the experiments, but the idea is that you have proteins from all over the place that organize transiently. So you have transient protein-protein interaction that arise to the occasion. There's going to be some signaling mechanisms that they know they're depleted in purines. That's the model. They come together, they do their thing. Why would you want to do this? The choice that everybody has looked at has been the purine pathway for this idea of multi enzyme complexes that form transiently, and I've asked you this question recitation, why would you want to do this? One reason you might want to do this everybody agrees on, and that's because if you have unstable intermediates, and these intermediates go into solution, they can degrade. So that would be a waste. Is that true? We don't know. But one reason would be to protect unstable intermediates. A second reason that you might want to do this is if you have a long metabolic pathway-- this is tensed up, it's a long pathway-- oftentimes, in the middle, you can have branch points to other pathways. Say you want your intermediate to go this way and not that way. If you have this organized, you can control where it goes. If you have a pathway, and you have some intermediate X, and it can go another way, so this would be a branch point, you can prevent formation going into another pathway. And in the purine pathway, this feeds into histidine metabolism and thiamin biosynthesis, and tryptophan biosynthesis. So, there are intermediates in this pathway, and when you start looking at metabolism, you find these connections all the time. We know a lot of these connections. I don't have time to go through them, but that would be another reason that you would like to be able to do that. The reason Bankovic got into this, and that's whose work you've been reading, is he was interested in the question of whether N10-formyltetrahydrofolate-- remember we talked about all the interconversions-- whether all of those intermediates were sequestered. That's why he got into it. And what is the answer? He was interested in this question of tetrahydrofolate metabolism-- central to both purine and pyrimidine metabolism. And what do we know about that? 
In the control experiments in the paper you needed to read, what did he use as a control? He used-- remember we talked about this trifunctional protein that has three activities? It puts on the formate, it does a cyclohydrolase, it does a dehydratase. So if you go back and you look up the enzyme in his notes, this is not in purinosomes. And that's one of the first experiments he did. It's not there. So why isn't it there? I don't know. And maybe that means that we should be thinking about these things in other ways. In the last minute or so. So, that summarizes the key thing. Unstable intermediates and multiple pathways, and sequestration. I think there's no debate about that. If you have things sequestered, can you increase fluxes through pathways? A lot of bioengineers say you can, other people say you can't. This, to me, becomes really important to metabolic engineering. If you read metabolic engineering papers, people will take a polymer, and they'll stick all the enzymes in the pathway onto a polymer. Why? Because they think it's important to have these things in multi-enzyme complexes, where you increase the effect of molarity. That's something else we've talked about extensively over the course of the semester. Methods used to study this. OK, we've talked about that in recitation 13. We talked about what the issues are. In all cases, he used the enzyme fused to a green fluorescent protein. You could have problems with aggregation. You could have problems with altered activity. We talked about all of that last time. Looking at these-- punctate staining, if you look at the punctate staining with one protein and another, they're widely different. The shapes of the stains are widely different. Azaserine and hypoxanthine-- Azaserine we just talked about. Hypoxanthine-- hopefully, you now remember that that-- IMP, hypoxanthine, with PRPP, this is salvage. You should now be able to, thinking about this, go back and read that experiment he did. That experiment makes no sense to me. That was an experiment he did because he made a prediction, knowing how all these things fit together, and it didn't do what he predicted. So then he made up something else. These are the kinds of things you need to think about when you're trying to test a model like this. It's a very appealing model, but it's also a very controversial model. I'm sort of at the end of my time, so I think I'm going to go to the end. We've looked at all of these-- punctate staining with no purines, when we add purines, we lose it. And I just want to go to a paper that was recently published. This is probably hard to see, but this just shows this is an ongoing area of research. The latest is, now, instead of looking at this fluorescent stuff where a lot of you commented, you really can't see the green overlapping with the red to form yellow. The pictures were terrible, and if you go back and you look up there, I can't see it either. Fluorescence changes, and red and green on top of each other showing yellow showing they're sort of in the same general area are often hard to see. So now, they've turned to super-resolution, and if you look at when you turn off the lights, this is mitochondria, and these little purple things are the putative purinosome using green attached fluorescent proteins. And what you can see is there-- and again, you need to look at the statistics of all of this-- they appear to be associated with the mitochondria. Does that make sense? I don't know. That's where you need purines to make all of your ATP. 
Anyhow, it's linked to signaling pathways, and they do that in this paper. But again, to me, this is just another example. I don't think they expected to find this. And they found that, and so now we have more complex systems to really try to understand why these things-- do they sequester, number one, and if they do sequester, what is the advantage to biology? So, we end here, and the bottom line is, when you think about all the data, it's a moving target. You can't prove something. If you're a mechanism person, you can't prove a mechanism. It keeps changing. That's the way life is. So you have a model. You make it as simple as possible, you get some data, you find something that doesn't agree with your hypothesis. You've got to change it. That's why science is so much fun. That's the end of purines, and I'm sorry I didn't get to tell you about ribonucleotide reductases. It's much more chemically complex than anything you saw in purines, so I am sure you are delighted that you didn't have to look at all the radicals. So we'll see you on, I guess, Tuesday.
22_Cholesterol_Homeostasis_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: Where we were at at the end of the last lecture was trying to figure out what do we do with the fact that cholesterol-- its solubility is five micromolar. Yet if you look inside your blood, the levels would be 5 millimolar. And so the question is, how does it gets transported? And it gets transported in a complex fashion. We need to deal with that with any kind of very insoluble lipophilic materials. And I briefly introduced you to lipoproteins, which are mixtures of different kinds of lipids, triacylglycerols, phospholipids, cholesterol, cholesterol esters. And the key question we learned in the first couple lectures that cholesterol could be biosynthesized. And what we started focusing on in the last lecture was that it can be taken up by the diet. That's what we're focusing on now. And then after we do a little more background, then how is it taken up and then how is this all regulated? How do you control biosynthesis versus cholesterol from the diet. What are the sort of major mechanisms? So at the end of the last lecture I'd given you a second picture. And the PowerPoint-- the original PowerPoint didn't have this figure. This is taken out of a new Voet and Voet-- the newest Voet and Voet-- which I think better describes what's going on. But really sort of what you need to know is you form these particles, chylomicrons, if you look at the handout I gave you have lots of proteins, all kinds of lipids, cholesterol. And they get into the bloodstream and they pass off as they go through adipocytes or as they go through muscle. The surface of these cells have lipases, phospholipases that can clip off the fatty acids that you need for metabolism at most cells. And what happens is the size of these particles just change. And so in the end, you remove the triacylglycerols and you remove phospholipids. And what you're left with is more of a cholesterol. And that-- and so what happens is the chylomicrons change size. They call them the remnants. And there are receptors on liver cells, which can take up these remnants, these lipoprotein remnants. And then they repackage them into other lipoproteins. And again, the differences in the lipoproteins we talked about very briefly, we have an outline. Somebody measured these with a-- again, they're variable, but they're based on density. And so the liver repackages these things to a particle that's very low density, lipoprotein. And then again, they can dump off components into the tissues where you can use the lipids to do metabolism, changing the size, intermediate density, eventually low density lipoprotein which is what we're focused on now. And then today what we're focused on is how does the low density lipoprotein get taken up by the liver? And also, can it get taken up by other kinds of cells? And if you have excess cholesterol produced in any of these extrahepatic cells, it can be taken up to form particles called high density lipoproteins. And they can come back. So they act as cholesterol scavengers, come back and deliver it back into the liver by a mechanism that is really different from what we're going to be talking about today. So that's the overview picture. 
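As an illustrative unit conversion (my numbers: a molecular weight of about 387 g/mol for cholesterol and a typical total plasma cholesterol of roughly 200 mg per 100 mL), the clinical units map onto the millimolar figure like this:

\[
\frac{200\ \mathrm{mg/dL}}{387\ \mathrm{g/mol}} = \frac{2\ \mathrm{g/L}}{387\ \mathrm{g/mol}} \approx 5\ \mathrm{mM}
\]

That is roughly a thousand times the approximately 5 micromolar aqueous solubility, and that thousand-fold gap is exactly why all of this lipoprotein packaging is needed.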
And so what I want to do now is focus on the question, why do we care about cholesterol and what was the motivator for Brown and Goldstein's discovery of the low density lipoprotein receptor. So this is the motivator. They were seeing when they were at medical school, a number of children that presented at an early age. These guys were six and eight. And the way they present, if they turn out to have both genes, both copies of the gene are messed up for low density lipoprotein receptor, that's called familial hypercholesterolemia. The way they present is they have these little xanthomases that are apparently yellow. And what they are is they're full of cholesterol. OK, and so if you have someone that's heterozygous rather than homozygous-- these guys are homozygous-- you still see these but you see it at a much later time in their life. And so again, what it is, it's a function of the fact that you have too much cholesterol and this is the way-- one of the ways-- it manifests itself. The second way it manifests itself is if you look at the concentration of low density lipoprotein and the plasma, which is given in milligrams for 100 mils, what you see is the concentrations of cholesterol are actually 5 to 10 times higher. So that's the manifestation. And children that manifest at this early age die of heart attacks by the time they're 30. And so this was the motivator. They were trying to figure out what is the basis or bases for this disease. So that's what I said. This is a dominant effect. At the time, the gene or genes responsible for this were not known. It turns out from the data that I've gotten from some paper, one in 500 people are heterozygotes. That's quite prevalent, actually. But the ones that manifest themselves in this really terrible way early on is something like one in a million. And so-- but even the heterozygotes, Brown and Goldstein study all of these people, also manifest in this way. They have elevated cholesterol levels. And so this was is a huge problem. And so they decided they wanted to really devote their life to it. And I think they didn't know this in the beginning, but it's really associated with one gene. Most diseases are much more complicated than that. And so I think because of the, quote, "simplicity" unquote, you'll see it's not so simple, they were able to make progress. And these experiments were carried out really sort of in the-- started in the 1970s. So I think Brown and Goldstein-- we talked about the cholesterol biosynthetic pathway. And we talked about what was rate limiting. So hopefully you all know that the rate limiting step is the reduction of hydroxymethylglutaryl CoA down. So the CoA is reduced all the way down to an alcohol and that product is mevalonic acid. And if you can't remember this, you should pull out the biosynthetic pathway. And that was proposed to be by other people working in this field to be the rate limiting step in this overall process. And when you take an introductory course in biochemistry, you talk about regulation. I guess it depends on who's teaching it, how much you talk about regulation. But of course, one of the major mechanisms of regulation that's sort of easy to understand in some fashion, is that oftentimes the end product of a pathway can come back way at the beginning and inhibit the pathway. So that's called feedback inhibition. We saw that cholesterol biosynthesis was 30 steps. And if you go back and you look at the pathway, you know, I think this is step four or five. I can't remember which one it is. 
And so the model was-- and there was some evidence that suggested that from what had been done in the literature-- that cholesterol was potentially acting as a feedback inhibitor. And that's what their original working hypothesis was. So the hypothesis was-- this is how they started it out. And what we'll do is just look at a few experiments of how they were trying to test their hypothesis and then how they change their hypothesis to come up with a new model for cholesterol regulation. So you start out with acetyl CoA and you go through mevalonic acid. And then we get to cholesterol. And so the model was that-- this is HMG reductase-- that this was a feedback inhibitor. And that it inhibited by allosteric regulation. And that's true of many pathways. And often, that's one out of many mechanisms that are involved in regulation. So the first problem they faced-- and for those of you who want to read about this in more detail, the original experiments, I'm just going to present a few simple experiments and I'm going to present them in a simple way. OK, everything with human cells is more complicated than the way I'm presenting it. But for those of you would like to read a little bit more about the actual experiments, there are two papers that I think are particularly compelling. And in previous years, I've actually used these papers in recitation. OK, so this is one of them. I'll put the other one up later on so that you can look at the detail, more about the experimental details. And I think in these particular experiments, what you're being introduced to, which most students don't experience, is the fact that you have-- all you do with these insoluble membrane-like proteins and how do you deal with membrane proteins. Most of us-- I haven't had any experience with this at all. So this week's recitation, for example, sort of shows you what they had to go through to be able to answer these questions. And it's complicated. And I think reading the experimental details in the end, if you're going to do something like this, this provides a nice blueprint of how you try-- how you try to design experiments. And you'll see some of the complexity from the few experiments I'm just going to briefly describe. OK, so what they needed was a model system. And of course, you can't do experiments on humans. So what they wanted to do was have some kind of tissue culture system. So they wanted a model system. And there was some evidence in the literature that human fibroblast skin cells were actually able to biosynthesize cholesterol. So they wanted to ask the question, do these skin cells recapitulate what people had seen from the biological studies in humans? And so the first experiments I'll show you, does recapitulate that. It didn't have to. But then this became their model, human fibroblast cells became the model for which they're carrying out all of these experiments that we're going to very briefly look at. OK, so the experiments, I think, are simple, at least on the surface. Although I think it wasn't so easy to figure out how to do these experiments. So what they wanted to do, they had patients-- whoops. I didn't want to do that. Anyhow, sorry I'm wasting time. OK, this patient is JD. And all of the experiments I'm going to show you is JD. But they had 25 other patients. And what you'll see is they all manifest themselves in different ways. And we're going to see that that, in the end, becomes important in sorting out really what was going on. 
OK, so the first set of experiments they did was the following. So they had some kind of normal control. And then so we have a normal-- so we have skin cells from a normal person. And this is the control. And then you have the FH patient, JD. And in the two papers I'm going to reference, they did a lot of experiments on JD's fibroblasts. And so they did some simple experiments. And remember, the rate limiting step is proposed to be hydroxymethyl-- HMG CoA reductase. And so they wanted to first ask what happens if you treat the cells, so you have them growing. OK, and you let them grow for a certain period of days. And then what you do is you take the media, change it, and remove low density lipoproteins from the media. I don't know whether they removed them all. They said they removed 5%. I don't know what percentage was there. And so we're going to do that for both the experiment and the control. So this is the experiment. This is the normal person. This is the experiment, the FH patient. And if you look at the axis, they're measuring HMG CoA reductase activity. So what they're going to do is look at plus or minus LDL. So in this panel, they've removed the LDL, OK? And if we remove the LDL, you remove the cholesterol, what might you expect to happen to the normal HMG CoA reductase levels or activities? If you remove the cholesterol from the plasma, what might you expect to happen to the activity? What would you want to do? Would you want to turn it on? Would you want to turn it off? STUDENT: Turn on. JOANNE STUBBE: Turn it on, right. So that's what they're going to be assaying. They remove it and if you look at the normal patient, the normal control, what's going to happen is the biosynthesis is turned on. So to look at this, then, you need to have an assay-- and this goes back to things we've talked about a little bit in class-- and in fact, the original recitation that we had on radioactivity was completely focused on Brown and Goldstein's work. So we're going to see that they use a lot of radioactivity in all the assays I'm going to be describing today. So what we're going to be doing is revisiting radioisotopes. They couldn't have done that without these radioisotopes. And this is converted to this. OK, what's the cofactor for this reaction? So, I'm not going to draw out the rest of this. This is mevalonic acid. What's the cofactor required for this process? NADPH. So you have NADPH. OK, so how would you assay this? So we're doing this now in tissue culture systems. That's what-- we are doing this in fibroblast cells in tissue culture. So we don't have very much material. You might have a plateful of cells. How would you do the assay? So this is the first thing you have to figure out. And I would say, almost everything in this class, when you're studying the biology, the first thing you have to do is figure out a robust assay. In this case, I think it turned out to be quite easy. But it's not necessarily easy in many cases. So this is something, as a chemist, you bring a lot to the table. Yeah? STUDENT: You would measure the change in the absorption at 340. JOANNE STUBBE: 340. So that's the way chemists would do that. Why can't you do that here? STUDENT: You have to isolate the HMG CoA reductase or somehow be able to parse it from everything-- JOANNE STUBBE: Well, you might be able to do it in crude extracts if you had a lot of it. But it's tough. NADPH is used in hundreds of reactions.
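To put a number on that sensitivity problem before the discussion continues, a rough Beer-Lambert estimate is enough; the numbers are illustrative, not from the paper, using the standard NADPH extinction coefficient of about 6,300 per molar per centimeter at 340 nm that comes up just below:

\[
\Delta A_{340} = \varepsilon\, l\, \Delta c = (6300\ \mathrm{M^{-1}\,cm^{-1}})(1\ \mathrm{cm})(1 \times 10^{-6}\ \mathrm{M}) \approx 0.006
\]

Turning over a micromolar's worth of NADPH changes the absorbance by only about 0.006, which is down at the noise level in a crude extract where many other enzymes are also consuming NADPH, and that is why, as discussed next, they moved to a radioactive assay.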
It's a great assay because the absorption change is removed from where most of the material absorbs, which is, you know, 280, 260, 280. It's not that sensitive. The extinction coefficient is 6,300 molar inverse centimeter inverse. And the bottom line is if you look at it, it's nowhere near sensitive enough. So if it's not sensitive enough, then what do you need to go to? That's what-- the radioactivity. So what you're going to be doing here is-- so you could use either 14c-- hopefully you remember that's a beta emitter, which then gets converted into mevalonic acid. And then you need a way of separating starting material from products. And there are many ways that one could do that. But in the original paper, they use TLC. And that's how they monitored their reactions. And you need to have material that's of hot enough radioactivity so you can see these into conversion. So that's the assay that they used. And so in the PowerPoint, I decided not to draw out. So if you PowerPoint, you look at the data, what do you see? What you see is that if you look at the experiment where they removed the low density lipoprotein from the media-- so they've taken it out. They've grown the cells they have HMG CoA reductase activity. What do you see immediately-- and the control and the patient's cells are growing exactly the same way. What do you see immediately? You see a huge difference in the amount of activity. So this is 2. This 150 or something. And so there could be a number of reasons for all of that. And so the question is, what is the basis for this increase in activity due to increased huge amount, the amount of HMG CoA reductase. Has the activity changed? Is there a mutation that changes the activity? There are lots of explanations. And so what they then did, when they remove this, they started doing assays over 24 hours. And they crack open the cells and do this radioactive assay. And then they looked at the rate of formation of mevalonic acid. And so what do you see with the normal control? You see exactly what you might predict. So if the cholesterol levels become low, you might want to biosynthesize it. But then what do you see with a homozygote, the JD patient? What you see is the levels start out high you have complete absence of regulation by changing the concentration of cholesterol. That's what you're seeing. So it seems like a simple experiment. It is a simple experiment. The basis for these observations is still open to debate. But the experiment turned out to be straightforward. Then what they did is at 24 hours, they then started adding low density lipoprotein back into the media. So they start over here, they removed it. They add it back. Here's with non. Here's with two micrograms per mL. Here's with 20 micrograms per mL. And what do you see with a normal patient? With a normal control? What you see with the normal control is a loss of activity. So that's exactly what you would expect that cholesterol-- you have a lot of cholesterol, you don't need to make it anymore. So this data, then, this simple data told you-- the control told you that minus LDL, you increased HMGR activity. And plus LDL, you decreased activity. And what about the patient? The FH JD patient? So here what you see is that removing cholesterol from the plasma has no effect. What about adding it back? Has no effect. So somehow the patient is-- the patient's cells is oblivious to the presence or absence of cholesterol. So in this case, plus or minus LDL had no effect. 
So we say loss of cholesterol regulation, which could be due to feedback inhibition, it could be due to something else. We'll see it is due to something else. And so this was consistent with what they predicted. And they furthermore learned that these fibroblast cells might be a good model for actually studying what's going on in the liver. I mean, you always have this issue. You have to figure out what you can study as a model system since we don't work on the humans. And so you always have to worry about how that extrapolates to humans. So basically, you're looking at cholesterol in the media. You're looking at cholesterol not in the media. And these are the experiments we just described. And so one of the questions you can ask, then, is what happens now? Another thing that can happen is what if cholesterol can't get into the cell? So what they did is another experiment where they-- they did two things to look at the HMG CoA reductase activity in the normal control and in the FH patient. And one of them was they repeated this experiment in the presence of ethanol, where they dissolved the cholesterol. And apparently that allows the cholesterol to get across the membrane. OK, so we're bypassing what we now know is going to be a receptor. So they did a second experiment and they used ethanol cholesterol. And it goes across membrane. And then they looked at the HMG CoA reductase activity. And the activity of both the patient and the normal controls was the same. OK, so the activity, HMGR activity the same. They don't report the details of this experiment. But another way you could do this is you could pull out the protein or partially purify the protein in crude extracts and try to measure the activity using this assay. And if you have a good measure of the amount of protein, which is key, so you can measure specific activity, micromoles of product or nanomoles of product produced per minute per milligram of protein, you could actually see that the HMG CoA reductase activity was the same in the wild type in the normal and in the patient. So you could also measure this using assay. And again, the result was that they were the same in both the normal and the patient. So then the elevated levels could be-- elevated levels, you saw in the very beginning of the HMG CoA reductase activity, could be due to the fact, they had a huge amount of protein, more so than you do with the fibroblasts. And so, there's no reason to think a prior if you looked at that previous slide, that the control, that normal control in the wild type-- I don't know what the scatter is in the data for HMG reductase activities, but that's something you need to think about. But a 60-fold change is a huge change. So this data, the initial set of data said that, yeah, cholesterol may be acting as a feedback inhibitor. But here, we can get cholesterol into the cell and the activities are the same. So they needed to come up with an alternative hypothesis. OK, so they then, using these two sets of data, came up with an alternative hypothesis. So they concluded that it's not cholesterol feed back regulated. And so then they set out to do a second set of experiments based on a new hypothesis. And the new hypothesis is that there would be some protein that might be involved in taking up the LDL particle, which has a cholesterol into the cell. So the new hypothesis was there is an LDL receptor, so r is receptor. That's how I'm going to abbreviate it. That's key to taking up LDL. And so that's what's shown here. 
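One small piece of bookkeeping before the next set of experiments: specific activity, which the comparison above rests on, is just activity normalized to the amount of protein. As a purely hypothetical worked example (numbers invented for illustration):

\[
\text{specific activity} = \frac{\text{nmol mevalonate formed per min}}{\text{mg total protein}}, \qquad \frac{12\ \mathrm{nmol\,min^{-1}}}{0.5\ \mathrm{mg}} = 24\ \mathrm{nmol\,min^{-1}\,mg^{-1}}
\]

Equal specific activities in the normal and FH extracts would mean the enzyme itself behaves the same, and any difference in total activity then reflects how much of the enzyme is present.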
And so then the question is, what sets of experiments do they do next. So this is a second set of experiments that was done in a paper that's also quite interesting. And so, for those of you who want to look at the details of this, this was published in 1976. And so this is where the data that I'm going to show you on this slide came from. Because I think they actually put it in one of the two review articles I gave you to read. But if you want to read the original data, the papers aren't that long. And they go through the details of the rationale of how they design their experiments. OK, so what we want to do now is test the idea that to get cholesterol into the cell, there is an LDL receptor. And that's going to play a key role in controlling cholesterol levels. That was the working hypothesis. OK, so how would you go about testing this experimentally? So these are the results of the experiments. And the question is, how would you go about testing this experimentally if this were your hypothesis? And so if you think about it, you might like to know, does the LDL particle bind to the surface of the cell? Does it bind? OK, so that would be one thing you could do. And in fact, Brown and Goldstein were treating many, many patients. So they had fibroblasts for many patients, 20 to 25 patients. They all had different phenotypes. And again, these differences in the phenotypes actually helped them to try to dissect this process. And so could it bind? And so we can ask the question, how would we look at binding? I'm going to ask you that question. We're going to have a recitation on binding, I think, not this week, but next week. Then it gets into the cell. OK, so how do you know it gets into the cell? And so that's another question. Inside, outside. And then the next question is, what is LDL? Hopefully you remember it's a lipoprotein that has a single protein on it, apoB. And then it's full of cholesterol, cholesterol esters, and phospholipids. What happens to that stuff once it's inside the cell? OK, so those are the questions in this experiment that they set out to ask. OK, so what I want to do-- so binding, internalization, and then the fate of LDL inside the cell. So that's what they were focused on. So what I want to do is show you the tools that they developed to try to answer these questions. OK, I'm going to show you a few things because this isn't such an easy set of experiments to carry out. And then what they observed on the normal cells and the patient cells. OK, so the tools that I want to talk about are the following. OK, so we just talked about the fact that, to do the assay, we needed radioactivity. We needed to be sensitive enough. If you're going to be looking at binding on the surface, how do you do that? Do you think there are a lot of receptors? Are there a few receptors? So you might not know that. But in general, there aren't huge numbers of receptors. So measuring binding to the surface of the cell usually requires a very sensitive assay. So the first thing they needed to do was they decided that they needed to make the LDL radiolabeled. And if you go back and you look through your notes in recitation three where we talked about radioactivity, we saw that we have C14, a beta emitter, which is what they used up there. But they also used 125I, which is a gamma emitter, which is much more sensitive. And so what they decided they needed to make was 125I-labeled LDL. So if you have it radiolabeled, can you somehow see it sitting on the surface of the cell?
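The quantitative framework for that binding question (presumably what the upcoming recitation on binding will formalize) is a simple one-site saturation isotherm; Bmax and Kd here are generic symbols, not values from the Brown and Goldstein papers:

\[
B = \frac{B_{\max}\,[\mathrm{LDL}]}{K_d + [\mathrm{LDL}]}
\]

Here B is the surface-bound 125I-LDL, Bmax reflects the number of receptors per cell, and Kd the affinity; titrating in 125I-LDL and correcting for non-specific binding (for example with an excess of unlabeled LDL, a standard trick) gives you both numbers.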
So the question is, how can you do that? Well, we talked about the composition of the LDL particle. There is cholesterol. There are cholesterol esters, phospholipids, and one protein. And so what they're doing to put the iodine in is putting it only into the protein. OK, so what they use is a method called Bolton-Hunter, which uses radiolabeled iodide and a reagent. I'm not going to go through the details-- you can look it up if you're interested. And what it does is it takes a protein-- this is still actually widely used. So this would be apoB. And it iodinates at the ortho position. So what you end up with, then, is iodinated apoB. So that's going to be your handle. You can make this at a very high specific activity. OK, so that's one thing that they needed to do. OK, the second thing that they needed to do-- if they're going to look for binding to the surface, how would you design that experiment? What might you need to do to figure out how you're going to look at binding only, and not binding and uptake? What parameter could you change that would help you do that? Temperature. So everything-- and you'll see this also in experiments this week-- the temperature is really critical. Why? Because hopefully you all know lipid bilayers are very fluid. And if you lower the temperature, you prevent uptake and other things. You have to test all this out. They did a huge number of controls. So the second thing that they wanted to do is they used temperature. So four degrees is what they're going to use to look at binding. Or if they're looking at a time course and they want to stop the reaction-- the reaction is normally done at 37 degrees, so uptake experiments would be at 37 degrees. OK, so again, temperature is the key parameter. If you wanted to do a time course and stop the reaction, you could cool it down to four degrees. I mean, this was a hypothesis they had. And so that's the second tool that they're going to use. And the third tool, which I think isn't necessarily so intuitive, is that if you're looking at something binding on the surface, you always have to worry about non-specific binding. You'll talk about that in the recitation. That's always a problem. You're using really hot, iodine-labeled materials, so you could get non-specific binding. And so you need to wash it. Now, if the LDL particle bound loosely to the LDL receptor, that would make the problem extremely challenging, because when you're trying to wash away the excess as you change the concentration of the LDL, you're going to have an equilibrium and you're going to start to lose binding. It turns out it binds really tightly, so they could have some kind of a wash. So they figured out and optimized a wash. So you need to have a wash. So if you have a wash and then you're still looking at the receptor with the particle bound-- so that's the LDL-LDL receptor complex-- then the question is-- and it's tight binding-- how do you get that off? And remember, you're also going to have LDL that's been internalized. So the creative approach they used was to use the molecule heparin. OK, so heparin-- I'm not going to draw out the structure-- but this is a third tool and this was key. And so they have heparin-sensitive and heparin-resistant. And what does this mean? Heparin turns out to-- it's a sugar. Many of you have probably heard about it. It plays a key role in blood coagulation.
But anyhow, from the point of view of today's lecture, you just need to know it's a sugar and it's got sulfates all over the outside of it. So it's negatively charged. So heparin is a sulfated sugar. So basically, you have something like this with SO3 minuses on the outside. And what you want to do is release the LDL particle from the receptor. And apparently, treatment with heparin at certain levels-- I think they tried a lot of things-- was able to release the surface LDL. So this is involved in release of surface-bound LDL. Then what you have left after you release this is the radiolabel that's been internalized. That, then, is the heparin-resistant fraction. And so you can count that. And so then you have bound and internalized. Now, if you're studying this as a function of time, what can happen once you internalize the LDL particle-- what can happen to the iodinated LDL particle? So this is something else you need to think about in these assays. So now we have internalized, I-125-labeled LDL. What can happen-- if you remember from recitation this past week, you remember what happened to the LDL? So you've got protein. You've got lipids. What's going to happen? You might not know the details. That's what Brown and Goldstein uncovered, which we're going to talk about in the next few minutes. But in LDL, you have a protein. What can happen to proteins? They can get degraded. So if you have the apoB, what can happen inside the cells-- so this is inside-- is you could have proteases that degrade this down to peptides. This happens in the lysosome, where you still have the iodinated tyrosine in peptides. Or it can be broken down all the way to just free iodinated tyrosine. And if you're breaking this down all the way, the iodinated tyrosine could likely exit the cell. So you need to really think-- what do you do to control for this aspect of the metabolism? What happens to the LDL inside the cell? And so to do this, as a chemist, what could you do to distinguish the intact LDL protein from small peptides or free amino acids derived from it? So the key question was, what sort of bulk method do you use to try to distinguish between these two things? So then you can incorporate that into the analysis, which is what's on the slide here. So what happens if you treat proteins in general with acid? They what? They hydrolyze? No-- peptide bonds are really strong. If you want to hydrolyze a peptide bond, you have to heat it for something like 16 hours at 100 degrees. So that's not going to happen. So that's not the answer. But what else happens? What do you do when you put a protein into acid? What happens to the protein? It what? Yeah, it crashes out. So proteins in general-- not all proteins, but most proteins-- precipitate, whereas these kinds of small pieces would stay soluble. So they were able to take advantage of that-- you have to, again, treat the samples in a certain way so that you can look at what's still intact LDL versus what's undergone degradation. OK, we're going to see that's key to the model we're going to come up with. OK, so those are the tools that they needed to develop. And so the question is, then, what did they observe? OK, so we're doing these same experiments. We're looking for binding on the outside, internalization, and breakdown. That's what we're looking for. And so here is the patient and here is the control. So if we look here, these are the binding data.
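Before looking at the patient data, here is a minimal Python bookkeeping sketch of how the three readouts just described map onto counts: heparin-releasable 125I is scored as surface-bound LDL, heparin-resistant cell-associated 125I as internalized LDL, and acid-soluble 125I in the medium as degraded apoB. The cpm values and the specific activity are made up for illustration; only the logic follows the assay described above.

SPECIFIC_ACTIVITY_CPM_PER_NG = 200.0   # assumed 125I specific activity of the labeled LDL (hypothetical)

def cpm_to_ng(cpm):
    """Convert counts per minute to ng of LDL protein using the assumed specific activity."""
    return cpm / SPECIFIC_ACTIVITY_CPM_PER_NG

# One hypothetical time point for one dish of fibroblasts at 37 degrees:
heparin_released_cpm  = 4000    # 125I released by the heparin wash      -> surface-bound LDL
heparin_resistant_cpm = 12000   # 125I still cell-associated after wash  -> internalized LDL
tca_soluble_cpm       = 6000    # acid-soluble 125I in the medium        -> degraded apoB (iodotyrosine, small peptides)

print(f"surface-bound LDL : {cpm_to_ng(heparin_released_cpm):6.1f} ng")
print(f"internalized LDL  : {cpm_to_ng(heparin_resistant_cpm):6.1f} ng")
print(f"degraded LDL      : {cpm_to_ng(tca_soluble_cpm):6.1f} ng")

With that bookkeeping in mind, here is what they observed.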
So Brown and Goldstein, in this particular paper, which is underneath here but in the cell paper, looked at 22 patients. And out of the 22 patients, most of them were binding deficient. They could see no binding at all. Some of them were binding modified. That is, they had lower levels of binding. And this one patient, JD, had normal binding. So in this experiment, we're looking at here-- so in the PowerPoint, this is one of 22 patients they had normal binding. And the others-- and that's because we'll see that there are multiple ways you can have defects in your LDL receptor. We'll come back to that in a minute. But you can have deficient binding or you could have no binding. Or you could have normal binding. So those are all possible. And the one that we've taken the data for here and that's described in the paper, is normal binding. And they did a lot of experiments I'm not describing to try to show you that this experiment, which suggests normal binding, is in fact normal binding. They looked at off rates. They looked at competition with HDL and LDL. And so if you look at that, if you look at the levels of binding, they really aren't very different between the experiment and the control. And so now what happens, if you look at the normal, what happens is with time, the LDL on the surface goes away. And that's because it's becoming internalized. Whereas down here, what happens? You started out the same, but now you can see over-- this is hours down here, it really hasn't changed very much. It's not becoming internalized. And so then they wanted to use their method to look at internalized LDL. And so internalized LDL, using the heparin-resistant versus heparin-sensitive, that's the assay they used, what you see is as the surface binding at least early on decreases, the amount internalized increases. But what happens over here to the patient? With the patient, you get nothing internalized. And the other question is, what happens to the LDL-- and it's labeled on the protein-- does that get degraded? And so using a method with TCA, they used a couple of different methods, what they see is that you slowly degrade the protein into small pieces. And again, with the patient, it's not internalized so you can't get degradation. So this type of experiment with this particular patient and also with the other patients that I talked about, one through 21, they drew a strong conclusion that there are two things that have to happen for cholesterol to get into the cell. Number one, it has to bind. And number two, there's got to be some mechanism for internalization. So the conclusions from this is we need binding, which is consistent with the LDL receptor. And then we need, in some way, internalization. And of course, JD was the only one out of all of these patients where they can study internalization because in the other patients they didn't-- they had really poor binding or no binding at all. So they needed to have this spectrum of patients to be able to start to sort out what was going on in these experiments. So I think on the surface, the experiments look pretty-- you'll look at them, they look like they're really simple. But technically, they're not so simple. And if you care about the technical details, which we'll see again in this week's recitation dealing with these membrane proteins and stickiness, becomes the key how creative you can be. And usually, we're not really plugged into that. 
And you usually don't do experiments like that unless you work in a lab that is focused on membrane-bound proteins. So this resulted in the model. So this kind of experiment, and many other experiments, resulted in the model for receptor-mediated endocytosis. And you've seen this before. You saw this in recitation last week, because we saw interference by PCSK9 with receptor-mediated endocytosis. So we're back where we started last week. And the first slide I showed you was this slide. And so what is the model? So there are many, many more experiments that have gone into coming up with this model. And the model is really still incomplete. I have a cartoon here, but the whole process-- every step along the pathway, how you go here and there, and what the kinetics are-- it's all complicated. But this is the working hypothesis. And so the first thing is you make the LDL receptor. It's a membrane protein with a single transmembrane-spanning region. It's made in the ER. And because of this transmembrane-spanning region, it's got to be transported to the surface. And it's done so in little coated vesicles, which keeps things soluble. And it does this by passing through the Golgi stacks, which we talked about at the very beginning. Eventually, it gets to the surface. These little things here are the LDL receptors. You can go home and sleep on this and look at it again, because I'm over time. And it just seems like I just started and it's already over. I'm sorry. OK, I must have spent too much time talking about something I wasn't supposed to talk about. But anyhow, hopefully you can all go back and look at this and think about this, because we're going to be talking about this again in recitation this week.
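Since the recitation will deal with binding, a minimal one-site binding sketch is useful to have in mind for these surface-binding measurements: the fraction of receptors occupied as a function of free LDL concentration. The Kd used below is an arbitrary placeholder, not a measured value for the LDL receptor.

def fraction_bound(ldl_nM, kd_nM):
    """One-site model: fraction of receptors occupied = [L] / (Kd + [L])."""
    return ldl_nM / (kd_nM + ldl_nM)

KD_NM = 10.0   # placeholder dissociation constant, for illustration only

for conc in (1, 3, 10, 30, 100, 300):
    print(f"[LDL] = {conc:4d} nM  ->  fraction of receptors occupied = {fraction_bound(conc, KD_NM):.2f}")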
MIT_508J_Biological_Chemistry_II_Spring_2016
3_Protein_Synthesis_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: We're going to continue where we left off last time. So briefly I'll make a few points about initiation of translation in prokaryotes. And then where we're going to spend the bulk of the time today is with a review of tRNAs and then discussing the aminoacyl-tRNA synthetases, which are the enzymes responsible for loading amino acids onto the three prime end of the tRNA. And these points are important because this process has to happen in order for the amino acids to be delivered to the ribosome, which is where we'll go on Wednesday. So the first questions are, how does initiation happen? So how does this ribosome, the 70S ribosome, get assembled with the mRNA and initiator tRNA bound? And then we're going to ask, how do we get an aminoacyl-tRNA, such that the amino acids can be delivered to the ribosome? So first, for initiation in prokaryotes, there's a few steps to this process. We'll just look at these at a basically superficial level of detail. But recall that there are translation factors. And during initiation, there are three initiation factors-- so IF 1, 2, and 3-- that are required to help assemble the 70S ribosome here. So first in terms of initiation, what happens is that the mRNA needs to bind to the 16S rRNA of the 30S subunit. And so I point this out because at this stage in the process, the 70S ribosome isn't assembled yet. So we have the mRNA binding to the small subunit. And this process requires initiation factor 3. And effectively what happens is that the mRNA has a region called the Shine-Dalgarno sequence in prokaryotes, which is the site of ribosome binding. And then downstream of that is a start codon that signals the start of translation. So if we think about the mRNA from the five prime end, somewhere there's a sequence that signals for ribosome binding. OK, and then we have our start codon that signals the start of translation. OK. And so this gets translated here. OK, so this start codon pairs with the initiator tRNA. And this initiator tRNA is special. One reason why it's special is because the amino acid attached is an N-formylmethionine. So sometimes the initiator tRNA is called fMet-tRNA-fMet as an abbreviation there. So just as some overview here, what we're seeing in this alignment is a number of ribosome binding sites, or Shine-Dalgarno sequences, in prokaryotes. We have the start codon that pairs with the initiator tRNA. And here's a schematic depiction of what I've indicated here on the board. OK, so the mRNA binds to the 16S of the 30S subunit. So the 70S is not assembled at this stage. And IF3 is involved, as I said. The Shine-Dalgarno sequence determines the start site. And that determines the reading frame, as well. So here it's just indicating translation of a polypeptide. What happens after that? So after that, it's necessary to assemble the 70S ribosome, have the initiator tRNA in the P site, and have everything ready to go for translation. And here's just one cartoon overview that we'll use as a description of this process. OK. So what do we see? We've talked about this step so far. We see there's a role for initiation factor 1.
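To make the ribosome-binding-site idea concrete, here is a minimal Python sketch that scans an mRNA for a Shine-Dalgarno-like element and an AUG start codon a short distance downstream. The consensus used (AGGAGG) and the 4 to 12 nucleotide spacer window are simplifications chosen for illustration, and the sequence is made up.

SD_CORE = "AGGAGG"   # simplified Shine-Dalgarno consensus used for this sketch

def find_start_sites(mrna, min_gap=4, max_gap=12):
    """Return (SD position, AUG position) pairs where an AUG sits min_gap..max_gap nt downstream of the SD element."""
    mrna = mrna.upper().replace("T", "U")
    hits = []
    i = mrna.find(SD_CORE)
    while i != -1:
        sd_end = i + len(SD_CORE)
        for gap in range(min_gap, max_gap + 1):
            codon_start = sd_end + gap
            if mrna[codon_start:codon_start + 3] == "AUG":
                hits.append((i, codon_start))
        i = mrna.find(SD_CORE, i + 1)
    return hits

example = "GGCUAACAGGAGGAAUUACCAUGGCUAAAGGCGAA"   # made-up sequence
for sd, start in find_start_sites(example):
    print(f"SD-like element at {sd}, AUG start codon at {start}, ORF begins: {example[start:start + 9]}")

With that picture in mind, back to the cartoon of initiation.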
And in this cartoon, if we imagine the E site, the P site, and the A site, what we see is that IF1 is binding to the A site of the ribosome. And one way we can think about this is that the initiator tRNA has to get to the P site. And so the A site is blocked to facilitate the initiator tRNA getting to the P site. OK, we see the initiator tRNA binding to the P site. And this happens via formation of a ternary complex with IF2 and GTP. So initiation factor 2 hydrolyzes GTP. There's an event that results in joining of the two subunits. And there has to be dissociation of these initiation factors for the ribosome to be ready to accept its first aminoacyl-tRNA in the A site. OK, so the outcome of this process is that we have an assembled 70S ribosome with the initiator tRNA in the P site. The A site is empty, so it can accommodate an incoming aminoacyl-tRNA. And the E site, or exit site, is also empty. So that's the main take-home for initiation. And that's the extent to which we're going to discuss it within this class. So in order to get to the elongation cycle, we need to get the aminoacyl-tRNA into the A site. And that's going to require the help of EF-Tu, elongation factor Tu. Before we discuss how elongation factor Tu is going to help deliver that aminoacyl-tRNA, we need to talk about how we get the aminoacyl-tRNA in the first place. So what is the tRNA structure, just as a review to get everyone up to speed? How are amino acid monomers attached to the tRNA? And how is the correct amino acid attached? So this is an aspect of fidelity, which came up as a concept last week in lecture. And so we'll look at the mechanism of aminoacyl-tRNA synthetases to see how the correct amino acid is attached, and then what happens if the wrong amino acid is selected. Are there mechanisms to correct that? And if it's not corrected, what are the consequences? So moving forward with that, we're going to focus on the tRNAs and address those questions. So just as a review, we can think about tRNA secondary structure, which is often described as a cloverleaf. So we have a five prime end. The tRNA has several arms. OK. So we have a D arm. This arm here has the anticodon that pairs with the codon of the mRNA. We have a variable arm, this arm here. And we have this three prime end here, where the amino acids get attached. So in terms of base numbering, we have C74, C75, A76 here, with the OH. This is often called the CCA acceptor stem. And the amino acids are attached here via an ester linkage-- I'm going to abbreviate amino acid as AA. And these ester linkages are important for the chemistry that happens in the ribosome. OK. So imagine we abbreviate the tRNA structure like this and think about the sugar of A76-- bless you. OK. We have one prime, two prime, three prime here, this type of connectivity here. And this is abbreviated throughout as aminoacyl-tRNA, aa in general terms, or with the three-letter abbreviations, like what we saw for fMet-tRNA-fMet with the initiator tRNA here. So here's a schematic of a tRNA secondary structure with a bit more detail than what I show you on the board. And something we need to keep in mind is that even though we often draw the tRNA in this cloverleaf-type depiction, it has tertiary structure. And so it's very important to think about this structure as we think about how the tRNAs enter the various sites of the ribosome. OK. So this structure is L-shaped.
And I like this depiction here because regions of the secondary structure are color coded with the corresponding regions of this tertiary structure here. OK, so we see the shell shape of an L, rather upside down here, where we have the CCA acceptor stem over here and the anticodon arm and anticodon region down here. So what is a consequence of this structure? The tRNA is quite narrow. So we're thinking about 20 to 25 Angstroms in width. And if we think about this in the context of the ribosome and the peptidyl transferase center, 3 tRNAs need to fit into that catalytic center during the elongation cycle. So it makes sense that they're relatively narrow. This allows three to fit there. So as we think about the translation process and also think about some of the translation factors, we want to keep this type of structure in mind here. Here's just another view of that, with some additional descriptions of the overall structure. And this includes the numbering of the tRNA bases within that structure here. Just a point to make, this won't be a major focal point in the course, but do keep in mind that tRNA contains many post-transcriptionally modified bases, so you'll see an example of that in problem set one. Up to 25% of the bases can be modified. Typically, we see about 5% to 20% of them modified here. OK, you're not responsible for these structures, these modified structures, in the context of this class. So the key question for today is how are amino acids attached to the tRNA, as shown here? And in order for that to happen, there's a family of enzymes called aminoacyl-tRNA synthetases, or abbreviated aaRS. OK, so this name tells you right away, synthetase, that these enzymes use ATP. And these enzymes catalyze the attachment of amino acids to the three prime OH, or sometimes two prime OH, of the tRNA here for that. And so we're going to consider this overall reaction. And then we're going to think about the reaction mechanism and experiments that were done to give support to the mechanism that we see. So all aminoacyl-tRNA synthetases require ATP and hydrolyze ATP to AMP and PPI. And so they catalyze this overall reaction where we have an amino acid monomer. We have the tRNA that encodes this-- that is for this amino acid. ATP to give us the aminoacyl-tRNA AMP and PPI. So if the ATP is being hydrolyzed to AMP and PPI, what phosphate is being attacked? So we saw on Friday there's the alpha, beta, and gamma phosphates of ATP. OK, pardon? AUDIENCE: Beta. ELIZABETH NOLAN: Beta. Any takers? AUDIENCE: Alpha? ELIZABETH NOLAN: Any takers? Gamma? Yeah, so it's alpha. If you're getting AMP, it's attack at alpha. If you're getting ADP, it's attack at gamma here. OK, so P alpha is next door to the ribose of the nuc-- there. Yeah. OK. So if we consider this overall reaction, how does it work? Just before that, one other observation I just want to point out, if we're thinking about these enzymes and asking what is it that they recognize of the tRNA, so we have the anticodon. And that goes in hand-in-hand with the identity of the amino acid. Just keep in mind that it's not just the anticodon. So here we're seeing an example of an aminoacyl-tRNA synthetase with its tRNA bound. And we see that there's many contacts between the tRNA and this enzyme here. OK, so here we have the amino acid end, the anti-codon end, and all throughout here. So what is the mechanism to get us where we need to go? We have our overall reaction that I'll put up on the board, just to keep it straight as we move forward. 
So amino acid plus ATP plus the tRNA for that amino acid, with the aminoacyl-tRNA synthetase, gives us the aminoacyl-tRNA plus AMP plus PPI. So let's consider a mechanism. This is going to be a two-step mechanism. And so in the first step of this mechanism, we have the amino acid plus ATP. And we have formation of an O-AMP intermediate, plus PPI here. So this intermediate is called an amino adenylate-- adenylate because of the adenosine here. And we need to think about why this intermediate might form. Why would we propose this in a mechanism? And then in step two-- we'll come back to that in a minute-- we can take our amino adenylate and our tRNA-- this is the three prime end here-- and we can have attack with release of AMP. OK, so here we have the ester linkage at the three prime end, like what we see on that board here, to give us our aminoacyl-tRNA. OK, so we see in step one, there's formation of this amino adenylate intermediate. And in step two, there's transfer of the amino acid monomer to the three prime end of the tRNA here. So why might these enzymes go through that O-AMP intermediate? What needs to happen for this chemistry to occur? AUDIENCE: You need a more activated leaving group to have that acyl substitution form an ester from a carboxylate. ELIZABETH NOLAN: Right. We need to activate the CO2H group there. So this affords that. So what might be another possible mechanism, right? Imagine you're the experimentalist and you've combined your amino acid, ATP, tRNA, and this enzyme you've isolated in a test tube. And you see you've got this as a product, and this as a product. And you're wondering, how did we get from reactants to products? This is one possibility. Maybe there's also a possibility of a concerted mechanism where there's no intermediate like the one I'm showing you here. These are just things to keep in mind when thinking about reactions. This two-step mechanism is the accepted mechanism for the aminoacyl-tRNA synthetases. And so what we're going to think about are the experiments that were done to support this mechanism. So what are the things we need to think about? We're going to think about this by examining one aminoacyl-tRNA synthetase as a paradigm. And this is the one for isoleucine here. OK, so what are the experiments that need to be done to characterize this reaction and determine mechanism? OK. So one thing we need to confirm is reaction stoichiometry. So there's a stoichiometry in what I've written above. But experimentally, that needs to be determined. So one, reaction stoichiometry. And so how can we think about this? We can think about the equivalents of the amino acid. So in this case, isoleucine. How many equivalents of isoleucine? And presumably, this isoleucine binds to the enzyme. We can think about the equivalents of isoleucine bound. And we also see that ATP is consumed, right? It's hydrolyzed to AMP and PPI. So how many equivalents of ATP are consumed in this reaction? What else do we want to know? We need to know something about kinetics. So what are the rates of formation? What is the rate of formation of the product, the aminoacyl-tRNA, and, since I've told you this intermediate forms, what is the rate of formation of the intermediate? And since this is an intermediate, it's something transient. So we need to think about how we as experimentalists are going to detect this intermediate over the course of this reaction. It forms and decays in order to get product here. So, rates of formation.
And so we have formation of our product, which in this case-- and then formation of the intermediate, which I'll just abbreviate Ile-AMP. And what else would we like to know? We can figure out how, in addition to rate of formation of the product and the intermediate, we can think about the rate of transfer of Ile from the intermediate to the tRNA. So what this tells us is that we need a way to look for or detect the intermediate. Here. So imagine let's just have a hypothetical situation. If we find the intermediate, that tells us something about the reaction. If we don't find the intermediate, what can we conclude? Pardon? AUDIENCE: That there was no intermediate? ELIZABETH NOLAN: So that's one possibility. Are there other possibilities if our method doesn't let us detect the intermediate? AUDIENCE: Second step is to test. ELIZABETH NOLAN: Can it be hard to detect an intermediate? It can be very hard, right? So they don't always-- there aren't around all the time very much or in very abundant quantities. So if it's not detected, could it be there? Yeah, it might be there. And the method just didn't allow for it to be seen. So you always need to keep that possibility in mind. This will be a case where there is a robust method that allows us to detect this type of intermediate. But always keep that in mind. OK, so first thinking about reaction stoichiometry. We're not going to go over the experiments that were done to define this. I'll just tell you some facts that result from some experimental studies. So this isoleucine aminoacyl-tRNA synthetase binds 1 equivalent of isoleucine as indicated in the overall reaction. And it consumes one equivalent of ATP, also as shown in this overall reaction, to make one equivalent of the aminoacyl-tRNA. OK and these stoichiometries were determined experimentally. So now we need to think about points two and three to characterize the reaction kinetics. So what experiments were done? So there are several different sets of experiments, some of which we're familiar with from 7.05 or 5.07 and others that will be new and presented in more detail in recitation this week and next week. So we can imagine doing steady state kinetic experiments, as well as pre-steady state kinetic experiments. And the general aims here are, one, to determine the rate of aminoacyl-tRNA formation, to determine the rate of amino adenylate formation, so this intermediate-- and again, we need a method to detect the intermediate. And at the end of the day, we'd like to know what is the rate determining step. So a method that is commonly employed for these types of studies involves the use of radioactivity. And we'll just go over a few points about radioactivity now to help with understanding these experiments. And you'll hear more about this method in recitation this week. So the experiments I'm going to tell you about are going to involve the use of radio isotopes like C14, P32. And the question is, why do we like to use radio isotopes in biochemical experiments? And they're really excellent probes. It's the bottom line. And one reason for that is that if you can use a radio isotope like C14 or P32, it's introducing minimal perturbation into your system. So you're not needing to attach a fluorophore whether it be a small molecule or a protein. You're not modifying the structure of a component of your system. So the overall size and the chemical properties are maintained when you use different isotopes of the same element. 
And some of the ones we'll see today are, for instance, C14 labeled isoleucine, P32 labeled ATP. They have the same chemical properties as the unlabeled forms, and same size. The other point to make is that we can detect very small amounts of radioactivity in a sample. And you'll see some of those calculations and how to do them in recitation this week. So we can detect small amounts, and that's good for looking for something like an intermediate. And there's readily available techniques for quantifying radioactivity in a sample. So if you see nomenclature like this, the NX nomenclature indicates the radioisotope in this sample. And I'll just say in passing here, we all know the isotopes are atoms bearing the same number of protons but different numbers of neutrons. And radioactive isotopes have an unstable nucleus, which means there's a radioactive decay. And typically-- well, we often use beta emitters in biochemical studies. And that's what you'll see today. So what are some of the experiments? We're first going to consider looking at the steady state kinetics to ask what do we learn in the steady state. So from our steady state experiments, we're able to get our Kcat and our Km and the catalytic efficiency, which is the Kcat over Km. We're going to compare our Kcat values or turnover today. So experiment one is to monitor formation of product. So how is this done? This reaction is done by taking C14 labeled isoleucine and unlabeled tRNA and watching for transfer of that radio label to the tRNA. And so what comes from these studies is a Kcat on the order of 1.4 per second. And now we have a way to detect this amino adenylate intermediate. And we'll talk about that assay in a minute, after we get through this comparison. We do a steady state experiment to monitor formation of this amino adenylate intermediate. And this assay also uses radioactivity. And it's called ATP PPI exchange assay. And we'll go over how this works in a minute. So the results of these experiments give a Kcat on the order of 80 per second. So what does this comparison tell you? These values are quite different, correct? So we're seeing that this ATP PPI exchange assay is telling us that ATP PPI exchange, which is a measure of formation of this intermediate, is about 60-fold faster than formation of product here. That's an important observation to have. So how are we going to figure this out? How are we going to see this intermediate? That's the question we need to ask next. And so we need to go over this ATP PPI exchange assay. And this is an assay that will come up again in module 4 when we talk about the biosynthesis of non-ribosomal peptides. So we'll return to this type of assay and data many times. So the question is, if we have this reaction, OK, how do we detect this? OK, it's not so easy. And we need an assay. And this is some of the background towards the development of this assay. So we need to suppose that our amino acid and ATP react with the aminoacyl-tRNA synthetase in the absence of tRNA. And that's indicated by step one, more or less. But that doesn't show it experimentally. So in the absence of tRNA, this amino acid and ATP react with the enzyme and they form the aminoacyl AMP intermediate and PPI. And they do this reversibly. OK, so the reversibility of this reaction is key for ATP PPI exchange to work. So if this occurs and they do this reversibly, therefore we can deduce formation of the aminoacyl AMP. 
If we add radio labeled PPI, the amino acid, and ATP to the enzyme and we see that radio labeled phosphorus from the radio labeled PPI incorporate into ATP. That's only going to happen if this chemistry is reversible. And bear in mind, we can detect very small quantities with radioactivity. So it's not that it has to be reversible to some large degree. We're relying on the detection of this radio label. So how does this work chemically? Let's take a look. OK, so imagine here we have our ATP. We have our amino acid. And we have our enzyme. And step one, we have binding. So there's some ATP binding site to the enzyme and some site for the amino acid to bind. And I'm leaving magnesium out of this depiction, but remember that magnesium and ATP come together. Now what? Step two, OK, we're going to have a chemical step where we have formation of the amino adenylate and PPI. And they currently are bound to the enzyme. We have step three. So imagine in this step our PPI is released. And this is another key aspect of this assay. So what does this mean? We now need to think about going backwards. If the PPI is released and we spike this reaction with radio labeled PPI and work our way backwards, will the radio label end up here in the ATP? OK. So this is going to be going backwards. We've left off with this enzyme with the amino adenylate bound. We have the PPI that was released. And then we spike this reaction with our radio labeled or hot PPI. So then what happens? Step four, working backwards. Imagine that some of the radio labeled PPI binds. Then what? Working backwards another step. 32 P ATP and the amino acid. And then we have release here. OK, so then the question is, can you detect this? And so if you can detect some incorporation of this radio label into the ATP, that indicates that this enzyme worked through that type of intermediate. AUDIENCE: So are PPI not also sometimes [INAUDIBLE] and then if you had some competing hypothesis where it made ATP and ADP, then your PPI would maybe sometimes turn into just a single radio label phosphate that could then have the same reverse reactions as the [INAUDIBLE]? ELIZABETH NOLAN: Yeah. So whether you initially end up with PPI or PI is going to depend on how the ATP is hydrolyzed. And so you could imagine maybe there could be some background ATP hydrolysis that gives ADP and PI in this type of assay. That's something you always need to look out for. For the purpose of this, let's assume that we're not having some background problem in terms of the ATP source, and also that the enzyme is specific in terms of what it's doing to the ATP. But yeah, certainly background ATP hydrolysis can be a problem. So how will this be detected? And how will you know the radio label is associated with ATP and not something else in your mixture? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Pardon? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: No. So we're going to look at the radioactivity. So this will come up more in recitation this week. But we need to be able to measure radioactivity by, say, scintillation counting here. But what's also needed is a separation because you need to know where that signal's coming from. You need to know it's coming from ATP and, say, not a background from however much of the PPI you introduced. Or if you have no idea what's going on with your chemistry, maybe the data are going to tell you it's not this mechanism. So you need to have a separation. So how might you separate ATP from all of these other components? AUDIENCE: Based on affinity column. 
ELIZABETH NOLAN: Some affinity column. So I like the column. But we're not going to have some sort of tag on the ATP. That might be a problem for that enzyme. But your notion is correct in the sense that we'll use some sort of chromatography in order to separate. OK, so maybe HPLC. How many of you have used an HPLC, or at least know what one is? AUDIENCE: [INAUDIBLE]. ELIZABETH NOLAN: Right. So typically you're looking at UV-vis. But you can imagine hooking up an HPLC to a detector that allows you to do scintillation counting, and some sort of column that will allow you to look for ATP. Is all of the ATP going to be radioactive in this assay? No. So again, we can detect small quantities. And as long as there's a little bit of reversibility, we can see this here. OK, so what's critical in this assay is the reversibility of steps 3 and 4. What would happen in this assay if the PPI is not released? AUDIENCE: [INAUDIBLE]. ELIZABETH NOLAN: Right. Under those conditions, if for some reason the PPI is not released, we're not going to see this exchange reaction. We're going to have a readout that doesn't give us this. Does that mean this didn't form? No. OK, so there's many caveats and details that you need to think through when thinking about a reaction and then the experiment that is done to test it. So in the case of these aminoacyl-tRNA synthetases, these ATP PPI exchange assays work well. And these assays can be used to get steady state kinetic parameters-- to get Kcat, Km, and Kcat over Km-- which is where this type of value comes from, in this case here. So back to these analyses up here: what they're telling us is that formation of this amino adenylate intermediate is about 60-fold faster than formation of the product. OK. And what we all want to recall when thinking about steady state experiments is that they're set up with a great excess of substrate over the enzyme concentration. Under saturating substrate, the reaction is zero order with respect to substrate. And you'll have some additional notes about that in your recitation materials this week for review. So something else biochemists like to do when looking at reactions and understanding reaction mechanisms is to look in the pre-steady state. And this came up briefly in lecture 1 as a method. And again, you'll hear more about it in recitation over the next two weeks. In these experiments, the goal is to look at the very first, early moments of a reaction. And they're set up quite differently. So limiting substrate is used. There's no turnover, so it's a huge contrast to what we know about steady state experiments. And one of the goals is to look at the formation and consumption of intermediates. So this type of chemistry often happens on a fast timescale-- you can imagine a millisecond timescale here-- which means that we need a special apparatus that has fast mixing capabilities, because there's no way for one of us to do this on our own with a pipette. And so the type of apparatus used is called a stopped-flow. And I just show one depiction of a stopped-flow apparatus here. You'll get some other variations on this theme in the recitation notes. OK, but effectively what happens is that you have two drive syringes, A and B, and each of these syringes will contain certain components of your reaction. And this stopped-flow has a drive motor and a stop syringe. And it effectively allows you to rapidly mix the components of these syringes in a mixer, shown here.
And then you either have some way to detect product-- so maybe, if you can use optical absorption, you have a UV-vis detector or a fluorescence detector. Or, in other cases, what you'll do is quench the reaction at a certain time point. So you need a third syringe, not shown here, with a quencher. So you can imagine, if you're working with an enzyme, maybe you quench by addition of acid or base, something that will denature and precipitate that enzyme. And then you can take that sample and analyze it in some way that fits in terms of what you need to detect. So this type of methodology was used in order to monitor transfer of isoleucine to its tRNA. And so where we'll pick up in lecture on Wednesday is the design of that experiment in terms of what we'll put in each syringe, and then what are the results of those experiments? And ultimately, what does that tell us about the rates of transfer? That's where we'll continue.
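As a preview of what that kind of pre-steady-state experiment reports on, here is a toy Python sketch of the two-step scheme treated as two sequential first-order steps under single-turnover-like conditions, enzyme in excess and substrate limiting. The rate constants are arbitrary, chosen only so that adenylate formation is much faster than transfer to the tRNA, in the spirit of the roughly 60-fold difference discussed above; they are not fitted to any real data.

import math

k1 = 60.0   # s^-1, pseudo-first-order formation of the enzyme-bound Ile-AMP intermediate (assumed)
k2 = 1.0    # s^-1, transfer of Ile from Ile-AMP to the tRNA (assumed)

def species(t, s0=1.0):
    """Analytic solution for the consecutive scheme S -> I -> P with initial [S] = s0."""
    s = s0 * math.exp(-k1 * t)
    i = s0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    p = s0 - s - i
    return s, i, p

for t in (0.01, 0.05, 0.1, 0.5, 1.0, 3.0):
    s, i, p = species(t)
    print(f"t = {t:5.2f} s   unreacted Ile/ATP {s:.3f}   Ile-AMP {i:.3f}   Ile-tRNA {p:.3f}")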
MIT_508J_Biological_Chemistry_II_Spring_2016
16_PK_and_NRP_Synthases_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: We're going to get started, and what we'll do today is continue with fatty acid synthase, because that's the paradigm for these macromolecular machines, like the PKS, and then we'll go over the logic of polyketide synthases. So we left off last time with this discussion about some molecules that will be involved, in particular thioesters, and I asked about the alpha H. So just going back to introductory organic chemistry, what are the properties of this atom here? AUDIENCE: [INAUDIBLE] acidic. ELIZABETH NOLAN: Yeah. OK, right. So this is acidic. So if you have-- OK? So what that means is if there is a base that can deprotonate that, we can get an enolate. OK, and this is the type of chemistry that's going to be happening with the thioesters that are used in fatty acid synthase and also polyketide synthase. And just to rewind a little bit more, if we think about carbon-carbon bond forming reactions in nature, which is what's happening in fatty acid biosynthesis and in polyketide biosynthesis, effectively, nature uses three different types of reaction. OK, so one is the aldol, two is the Claisen, and three is prenyl transfer. OK, and so we're going to see Claisen condensations in FAS and PKS biosynthesis. And then after spring break, when JoAnne starts with cholesterol biosynthesis, that will involve prenyl transfers. And hopefully, you've seen aldol reactions sometime before within biochemistry. OK? So we need to think about just what the general Claisen condensation is that we're going to be seeing here and the consequences of this acidic proton. So also just keep in mind, rewinding a little more, nature uses thioesters, not esters, and so the alpha H is more acidic and the carbonyl is more activated for nucleophilic attack. And there's some resonance arguments and orbital overlap arguments that can guide those conclusions, if you wish to go through them. OK. So let's imagine that we have a thioester. We have a base, OK, that's going to deprotonate it, which is going to get us to here. So this is our nucleophile, and what you'll see coming forward is an enolate. So imagine we have that, and we add it to another thioester, and here's our electrophile. What do we get? We get formation of a beta-keto thioester, which is the Claisen condensation product. OK, you've joined two thioesters. OK? So effectively, this acyl thioester is doubly activated, so it can be-- did I lose it? Oh no, problems. Sorry about that. It can be activated as an electrophile at the C1 position, so next door to the sulfur. And it can be activated as a nucleophile at the C2 position here. So this is the general chemistry that's going to be happening in FAS and PKS in terms of forming carbon-carbon bonds between monomers. OK? So in fatty acid synthase, we have two monomer units. OK? So we have acetyl-CoA and malonyl-CoA. Acetyl-CoA is the starter unit, sometimes called unit 0, and then malonyl-CoA is the extender. And so recall that in fatty acid biosynthesis, each elongation event adds two carbons, and if we look at malonyl-CoA, we have three carbons here. Right? So there's decarboxylation of malonyl-CoA to generate a C2 unit, and there's details of that in the lecture 15 notes.
And SCoA is coenzyme A, here, and there's some information as to the biosynthesis of these starter and extender units in the notes. We're not going to go over that in lecture here. So in terms of using these monomers to obtain fatty acids, first what we're going to go over are the domains in FAS. And so we can consider domains that are required for extension of the fatty acid chain and then domains that are required for tailoring of that effectively to reduce the carbonyl, as shown. And we're going to go through these, because what we're going to find is that with polyketide biosynthesis, the same types of domains are used. So this logic extends there. OK. So first, we have domains required for elongation of the fatty acid chain by one two-carbon unit. OK. So these include domains that may be abbreviated as AAT or MAT, and they can be grouped as AT and stand for acetyl or malonyltransferase. OK. We have an Acyl Carrier Protein, ACP, and this carries the growing chain between the domains of fatty acid synthase. And so in recitation this week, you're going to see how these domains move around and talk about the length of this acyl carrier protein. We also have the ketosynthase. So what the ketosynthase does is it accepts the growing chain from the acyl carrier protein, and it catalyzes the Claisen condensation with the next monomer. And what we'll see is that this ketosynthase uses covalent catalysis, and via a cysteine thiolate residue. So these are the key domains required for elongation of the chain. OK? And then what we also need are domains required for tailoring, and just to clarify, I'm defining domain here as a polypeptide with a single enzymatic activity. So domains can be connected to one another, or they can be standalone in different types of synthases, but domain means polypeptide with a single enzymatic activity. So what are the domains required for tailoring? And these work after addition of the C2 unit to the growing chain. So first, there's a ketoreductase. And as indicated, what this enzyme does is it reduces the carbonyl of the previous unit to an OH and uses an NADPH H plus. We also have the dehydratase here, and this forms an alpha, beta-alkene from the product of the ketoreductase action. And then we have an enoyl reductase that reduces this alpha, beta-alkene, and this also requires NADPH H plus here. And then some fatty synthases use a domain called a thioesterase for chain release, and that's noted as TE. And we'll see thioesterases in the PKS and in our PS sections here. So one comment regarding the acyl carrier protein, and then we'll just look at the fatty acid synthase cycle and see how these domains are acting. So in order for the acyl carrier protein to carry this growing chain, it first needs to be post-translationally modified with what's called a PPant arm. And that arm provides the ability to have these monomers, or growing chains, linked via a thioester. And so just to go over this post-translational modification, so post-translational modification of acyl carrier protein with the PPant arm. OK. If we consider apo acyl carrier protein, and apo means that the PPant arm is not attached. There's a serine residue. An enzyme called the PPTase comes along, and it allows for post-translational modification of this serine using CoASH, releasing 3', 5'-ADP to give ACP post-translationally modified with the PPant arm. OK? And we'll look at the actual chemical structures in a minute. 
What I want to point out is that throughout this unit, this squiggle, some form of squiggle here, is the abbreviation for the PPant arm. OK? And this is very flexible and about 20 angstroms in length. So what does this actually look like? So here we have CoASH. So PPant is an abbreviation for phosphopantetheine-- here, this moiety-- and here's the 3', 5'-ADP. And so effectively, what's shown on the board is repeated here, except that here we're seeing the full structure of the phosphopantetheinylated acyl carrier protein. So this squiggle abbreviation indicates this post-translational modification on a serine residue of the ACP. Just as an example of structure, here is a structure of acyl carrier protein from E. coli. It's about 10 kilodaltons, so not very big, and we see the PPant arm attached here. OK? So if we think about fatty acid biosynthesis, we can think about this in three stages that get iterated. OK. So first we have loading, so the acyl carrier proteins need to be loaded with monomers. Sometimes the reactions in this step are described as priming reactions. We have initiation and elongation all grouped together here and, three, at some point, a termination. OK? So we've thought about these before from the standpoint of biological polymerizations. So what about the FAS cycle? Here's one depiction, and I've provided multiple depictions in the lecture 15 notes, because some people find different cycles easier than others, but let's just take a look. So this charts out the various domains-- the starter and the extender-- and then the chemistry that occurs at these steps. And so what needs to happen is that there needs to be some loading and initiation where the acetyl-CoA is loaded onto an acyl carrier protein. So that's shown here via a transferase, and then, from the acyl carrier protein, this monomer is loaded onto the ketosynthase. If we look here, we have one of our extender units, the malonyl-CoA, and the CO2 unit that gets removed during decarboxylation, as shown in this light blue. OK? We need to have this extender unit also transferred to an acyl carrier protein via the action of an AT. So we see loss of the CoA. Here we have the acyl carrier protein with the PPant arm-- it's not a squiggle here-- and this is the next one, with the malonyl unit loaded. There's a decarboxylation, and what do we see happening here? We have a chain elongation event, so a Claisen condensation catalyzed by the ketosynthase between the starter and the first extender to give us this beta-keto thioester. So once this carbon-carbon bond is formed to give us the beta-keto thioester, there's processing of the beta carbon via those tailoring domains-- the ketoreductase, the dehydratase, and the enoyl reductase. And so we see reduction of the beta-ketone here, we see formation of the alkene, and then we see reduction to get us to this point. And so this cycle can repeat itself until, at some point, there's a termination event. And in this case here, we see a thioesterase catalyzing hydrolytic release of the fatty acid chain. This is the depiction you'll see in recitation today, or saw before. And I guess what I like about this depiction is that you see color coding separating the domains involved in elongation from the processing of the beta-ketone here and then termination. OK. So we get some fatty acid from this. And so where we're going to go with this overview is looking at the polyketides and asking what is similar and different in terms of polyketide biosynthesis.
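Before moving to the polyketides, here is a small accounting sketch, in Python, for the iterated FAS cycle just outlined: an acetyl-CoA starter plus some number of elongation cycles, each adding a C2 unit from malonyl-CoA and consuming two NADPH, one at the ketoreductase step and one at the enoyl reductase step. This is a simplified tally for illustration; it ignores upstream costs such as the ATP used to make malonyl-CoA.

def fas_requirements(chain_carbons):
    """Simplified tally of inputs for an even-chain fatty acid of the given length."""
    assert chain_carbons >= 4 and chain_carbons % 2 == 0, "even-chain fatty acid expected"
    elongation_cycles = (chain_carbons - 2) // 2   # the acetyl-CoA starter contributes the first C2
    return {
        "acetyl-CoA (starter)": 1,
        "malonyl-CoA (extender)": elongation_cycles,
        "NADPH (KR + ER)": 2 * elongation_cycles,
        "CO2 released (decarboxylations)": elongation_cycles,
    }

for n_carbons in (4, 8, 16):
    print(f"C{n_carbons} fatty acid:", fas_requirements(n_carbons))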
And so where we can begin with thinking about that is asking what are the starters and extenders? And so these are the starters and extenders we saw for fatty acids, and here are the starters and extenders for polyketides, so very similar. Right? We just see that there's some additional options, so we also have this propionyl-CoA here. In addition to malonyl-CoA as an extender, we see that methylmalonyl-CoA can be employed. So what are the core domains of the PKS? They're similar to those of FAS, and we'll just focus on the PKS side of this table. So this is a helpful table when reviewing both types of assembly lines. So the core means that every module, which I'll define in a moment, contains these domains. So we see that there's a ketosynthase, an acyltransferase, and a thiolation domain. So this thiolation domain is the same as the acyl carrier protein. So there's different terminology used, and within the notes, I have some pages that are dedicated to these terminologies. OK? So for PKS, here, we have the ketosynthase, we have acetyltransferase, and then we have this T domain which equals acyl carrier protein here. OK? So then what about these tailoring domains that were required to produce the fatty acid? What we see in polyketide biosynthesis is that those domains are optional. So one or more of these domains may be in a given module. So that's an overview, and then we'll look at an example of some domains and modules. So we're going to focus on type 1 polyketide synthases. And in these, what we're going to see is that catalytic and carrier protein domains are fused, and they're organized into what we'll term modules. So a module is defined as a group of domains that's responsible for activating, forming the carbon-carbon bonds and tailoring a monomer. So there is an individual module for every monomer within the growing chain. And the order of the modules in the polyketide synthase determines the functional group status, and that functional group status is determined by whether or not these optional domains are there. OK? How do we look for modules? The easiest way is to look for one of these thiolation or ACP domains. So each module has one of these. So you can count your number of T domains, and then you know, OK, there's 7T domains, so there's 7 monomers, for instance. So each Claisen condensation is a chain elongation and chain translocation event. Keep in mind, the starting monomer-- so whether that's acetyl-CoA or propionyl-CoA-- does not contain a CO2 group. So there's no decarboxylation of the starting monomer, but decarboxylation of malonyl-CoA occurs, like in fatty acid synthase, and if that's the case, it provides a C2 unit. And if methylmalonyl-CoA is the extender, this decarboxylation provides a C3 unit because of that methyl group. So key difference, as we just saw, in fatty acid biosynthesis, we have complete reduction of that beta-keto group in every elongation cycle because of these three tailoring domains-- the KR, DH, and ER. In PKS, what can happen is that reduction of this beta-keto group may not happen at all, or it may be incomplete in each elongation step. So what that means is that polyketides retain functional groups during chain elongation. And if you look back at some of the structures that were in the notes from last time, you'll see that, in terms of ketones, hydroxyls, double bonds, et cetera. 
And also, the other point to note is that there can be additional chemistry, and that these assembly lines-- both polyketide synthases and non-ribosomal peptide synthetases-- can contain what are called optional domains. So these are additional domains that are not required for formation of the carbon-carbon bond, or of the amide bond in non-ribosomal peptide synthetases. But they can do other chemistry there, so imagine a methyltransferase, for instance, or some cyclization domain. So how do we show these domains and modules? Typically, a given synthase is depicted from left to right in order of domain and bond-forming reactions. So let's just take a look. So if we consider PKS domains and modules, we're just going to look at a pretend assembly line. OK? So this I'm defining here as an optional domain. So in this depiction, going from left to right, each one of these circles is a domain, so a polypeptide with a single enzymatic activity. Note that they're all basically touching one another, which indicates in this type of notation that the polypeptide continues. It's not two different proteins; we have one polypeptide here. I said that there are modules, and we can identify modules by counting T domains. So here, we have three T domains. So effectively there's three modules. So we have a module here, we have a module here, and we have a module here. What do we see? Two of these modules have a ketosynthase, so that's the domain that catalyzes the Claisen condensation. We have no ketosynthase here, in this first module. Why is that? We're all the way to the left. This is effectively our starter or loading module. So the propionyl-CoA or acetyl-CoA will be here, as we'll see, and there's nothing upstream to catalyze a condensation event with. So there's no KS domain in the starting module, or loading module. OK. So this is often called loading or starter. So let's think about these optional domains for a minute and think about how they work. If we go back to fatty acid synthase, let's just imagine we have this species attached. We have the action of the KR, the dehydratase, and the ER to give us the fully reduced species, where here we have a CH2 group rather than the beta-ketone. So what happens in PKS in terms of the different optional domains? So we could have this and have full reduction. We can imagine maybe there's no enoyl reductase. So the module has the ketoreductase and the dehydratase but no enoyl reductase, and so as a result, this polyketide ends up with a double bond here. OK? What if we have no dehydratase, like this? OK. We just work backwards from the FAS cycle. We'd be left with this OH group at the beta position. Right? And if we have none of them-- so no ketoreductase, dehydratase, or enoyl reductase-- the beta-ketone will be retained. So what this also means is that you can just look at some polyketide and assess what the situation is from the standpoint of these optional domains. So let's just take an example. If we have three cycles of elongation, let's imagine we had an acetyl-CoA starter plus three malonyl-CoA. So what do we end up with? Let's imagine our chain looks like this. What do we see? So two carbons are added to the chain during each elongation cycle, and we can see those here, here, here, and here. OK? So a total of four C2 units, one from the starter and then three from these three extenders. And then we can look at what the functional group status is and say, OK, well here, we have no ketoreductase.
And here, there was ketoreductase action, but there's no dehydratase. And here, what do we see? We see that there was a reduction of the beta-ketone and then the action of the dehydratase, but we're left at the alkene, so no enoyl reductase. Right? So just looking, you can begin to decipher in a given module what optional domains are there. So what we'll do is take a look at an actual PKS assembly line and then look at the chemistry happening on it here. These are just for your review. This is a polyketide synthase responsible for making this molecule here. So D-E-B or DEB is a 14-membered macrolactone. It's a precursor to the antibiotic erythromycin here, and this is the cartoon depiction of the polyketide synthase required for the biosynthesis of this molecule. So what do we see looking at this polyketide synthase? So it's more complicated than this one here, but the same principles apply. And what we'll see is that it's comprised of three proteins. There's seven modules, so one loading or starter module and six elongation modules, and there's a total of 28 domains. OK? And I said before, the placement and the identity of these domains dictates the identity of the growing chain. So let's take a look. So first, how do we know there's three proteins? We know that in this type of cartoon because we end up seeing some breaks between different domains. So here, for instance, the AT, the T, the KS, et cetera, they're all attached to one another in the cartoon. That means it's all one polypeptide chain, but this one polypeptide chain has many different enzymatic activities in it, because it has different domains. When we see a break-- so for instance here this T domain and this KS domain are not touching one another. That means we have two separate proteins. So this T domain is at the terminus of DEBS 1, and DEBS 2 begins with this ketosynthase. OK? Likewise, we have a break here, between the T domain and this ketosynthase. So three proteins make up this assembly line, and so when thinking about this, these proteins are going to have to interact with each other in one way or another. And so there's a lot of dynamics in protein-protein interactions happening here. How do we know there's seven modules? And remember each module is responsible for one monomer unit. We count the T domains, so we have one, two, three, four, five, six, seven T domains. So like the acyl carrier proteins of fatty acid synthase, these T domains will be post-translationally modified with a PPant arm. And that PPant arm will be loaded with the acetyl-CoA or methylmalonyl-CoA or malonyl-CoA monomers. We have a loading module. So the loading module has no ketosynthase, because there's nothing upstream over here for catalyzing a carbon-carbon bond formation event. And then we see modules one through six, so sometimes the loading module is module zero. We see that each one has a ketosynthase, so there'll be carbon-carbon bond formation going along this assembly line. And we see that the optional domains vary. So for instance, module one has a ketoreductase as does module two. Look at module four. We see all three domains required for complete processing of that beta-keto group here. Here, only a ketoreductase, and here only a ketoreductase. OK? So just looking at this, you can say, OK well, we'll have an OH group here, here. Here we have complete processing. Just ignore this. It's in lower case, because it's a non-functional reductase domain. It's not operating as annotated here. So what happens? 
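As a rough check on reading the DEBS cartoon, the same lookup can be applied to the six elongation modules as they were just described. This reuses beta_carbon_state from the sketch above and approximates the lecture's cartoon; it is not an authoritative annotation of DEBS:

```python
# Applying beta_carbon_state (from the sketch above) to the DEBS elongation
# modules as read off the cartoon in the lecture. Module 3's KR is drawn but
# non-functional, so it is treated here as absent.
debs_tailoring = {
    "module 1": {"KR"},
    "module 2": {"KR"},
    "module 3": set(),               # inactive KR -> no reduction
    "module 4": {"KR", "DH", "ER"},  # complete processing of the beta-ketone
    "module 5": {"KR"},
    "module 6": {"KR"},
}
for module, domains in debs_tailoring.items():
    print(module, "->", beta_carbon_state(domains))
```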
So again, there's post-translational modification of this T domain, so it has a serine. The serine gets modified with the PPant arm, as shown here, and we use that squiggle depiction, as I showed for the acyl carrier protein of FAS. So post-translational modification of these T domains has to happen before any of the monomers are loaded onto this assembly line. And these PPant arms allow us to use thioesters as the linkages, through the chemistry I showed earlier. So here, what we're seeing in this cartoon, going from here, this indicates that the T domains are not post-translationally modified. And here, we see the assembly line after action of some phosphopantetheinyl transferase loading these arms. OK? So each T domain gets post-translationally modified. What happens next? We have loading of monomers. And we'll look at module zero and one on the board and then look at how the whole assembly line goes. AUDIENCE: Do you ever get selective post-translational modification of the T domains and if so, does that facilitate different modules being like on or off, so to speak? ELIZABETH NOLAN: I don't know. I don't know in terms of the kinetics, and say, does one T domain get loaded by a PPTase before the other? These enzymes are very complex, and there's a lot we don't know. But that would be interesting, if it's the case. I wouldn't rule it out, but I just don't know. One thing to point out too, these assembly lines are huge. So this is something we'll talk about more the next time, as we begin to discuss how do you experimentally study them? But some are the size of the ribosome for the biosynthesis of one natural product. And what that means, from the standpoint of in vitro characterization, is that often you just can't express a whole assembly line, and sometimes not even one protein that has a few modules. So often, what people will do is individually express domains or didomains and study the reactions they catalyze and their chemistry there. And so it would be very difficult even to test that in terms of in vitro. Is there an ordering to how the T domains are loaded? And then there's the question too, do you even know what the dedicated PPTase is? So there's some tricks that are done on the bench top to get around not knowing that, which we'll talk about later. So back to this assembly line to make DEB. So we're just going to go over the loading module and module 1 and look at a Claisen condensation catalyzed by the KS. And this chemistry pertains to the various other modules and other PKS. So we have our AT domain and our thiolation domain of module 0, and then we have the ketosynthase, the AT domain, the ketoreductase, and the T domain of module 1. OK. I'm drawing these a little up and down just to make it easier to show the chemistry. So sometimes you see them straight, sometimes moved around here, but it's all the same. So we have these PPant arms on the two T domains. So what happens now, after these have been post-translationally modified? We need the action of the AT domains to load the monomers onto the PPant arms here, so action of the AT domain. So what do we end up with? In this case, the starter is a propionyl-CoA, so we can see that here. And we have a methylmalonyl-CoA as the extender, that gets loaded, and I'm going to draw the cysteine thiolate of the ketosynthase here. So what happens next? We need to have decarboxylation of the methylmalonyl-CoA monomer to give us a C3 unit. And it's C3 because of this methyl group, but the growing chain will grow by two carbons. 
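Because the carbon counting trips people up, here is a quick arithmetic check using only the numbers given in the lecture, assuming the six elongation modules of the DEBS line:

```python
# Quick carbon bookkeeping, using only the numbers given in the lecture and
# assuming the six elongation modules of the DEBS line: the propionyl-CoA
# starter contributes 3 backbone carbons, and each methylmalonyl-CoA extender,
# after decarboxylation, adds 2 backbone carbons plus a methyl branch.
starter_backbone = 3
extender_modules = 6
backbone_carbons = starter_backbone + 2 * extender_modules  # 15
methyl_branches = extender_modules                          # 6
total_carbons = backbone_carbons + methyl_branches          # 21 carbons in DEB
print(backbone_carbons, methyl_branches, total_carbons)
```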
And then we need to have transfer of this starter to the ketosynthase. So the ketosynthase is involved in covalent catalysis here. So what happens, we can imagine here, we have attack, and then here, we're going to have the decarboxylation. We have chain transfer to the ketosynthase, and here, decarboxylation leaves us this species. OK? OK. So now, what happens? Now, the assembly's set up for the Claisen condensation to occur, which is catalyzed by the ketosynthase. Right? So what will happen here? You can imagine that, and as a result, where do we end up? I'll just draw it down here. And what else do we have? We have a ketoreductase. So this ketoreductase will act on the monomer of the upstream unit, and that's how it always is. So if there's optional domains in module 1, they act on the monomer from module 0. If there's optional domains in module 2, they'll act on the monomer for module 1. OK? So we see here now we have reduction of the ketone in module 1 to here via the ketoreductase. OK? So if we take a look at what's on the PowerPoint here, what we're seeing is one depiction of this assembly line to make DEB indicating the growing chain. OK? So as we walk through each module, we see an additional monomer attached. So the chain elongates, and then you can track what's happening to the ketone group of the upstream monomer on the basis of the optional domains here. If we look in this one, which I like because they color code. So they color code the different modules along with the monomer, and so it's pretty easy to trace what's happening. So for instance, here we have the loading module, and we have the starter unit in red. And here we see that it's been reduced by the ketoreductase of the blue module. Here, we have the green module, here is its monomer, and we see its ketoreductase acted on the blue monomer from module 1, et cetera here. So I encourage you all to just very systematically work through the assembly lines that are provided in these notes, and it's the same type of chemistry over and over again. And if you learn the patterns, it ends up being quite easy to work through, at least the simple assembly lines. So as you can imagine, complexity increases, and we'll look at some examples of more complex ones as well. So where we'll start next time with this is just briefly looking at chain release by the thioesterase. And then we'll do an overview of non-ribosomal peptide biosynthesis logic and then look at some example assembly lines. So we have the exams to give back. I'll just say a few things. So the average was around a 68, plus or minus 10, 11, 12 for the standard deviation. I'd say, if you were in the mid 70s and above, you did really well. If you're into the low 60s, that is OK, but we'd really like things to improve for the next one. In terms of the exam and just some feedback-- and I'll put feedback as well in the key which will be posted later today or early tomorrow. There wasn't one question that, say, the whole class bombed, so that's good. There were a few things for just general improvement, and I want to bring this up, so you can also think about it in terms of problem sets. One involves being quantitative. So there's certainly qualitative trends in the data, but there's also quantitative information there, and that can be important to look at. And one example I'll give of that involved question one. If you recall, there was an analysis of GTP hydrolysis and an analysis of peptide bond formation. 
And quantitative analysis of the peptide bond formation experiments will show that all of the lysyl-tRNAs were used up in the case of the codon that was AAA. Whereas, some of those tRNAs were not used up when the codon contained that 6-methyl-A in position one. Right? And if you linked that back to the kinetic model along with the other data, what that indicates is that proofreading is going on. Right? Some of those tRNAs are being rejected from the ribosome there. So that was one place where quantitiation, a fair number of you missed that. And another thing I just want to stress is to make sure you answered the question being asked. And where an example of that came up was in question one with the final question asking about relating the data back to the kinetic model. And so if a question asks that you really do need to go back to the model which was in the appendix and think about that. So many of you gave some very interesting answers and presented hypotheses about perhaps the 6-methyl-A is involved in regulation and controlling like the timing of translation. And that's terrific and interesting to think about, but it wasn't the answer to the question. Right? Which was to go beyond the conclusions from the experiments with GTP hydrolysis and formation of that dipeptide, and ask how can we conceptualize this from the standpoint of the model we studied in class? And then just the third point I'll make is related to question two and specifically to GroEL. But the more general thing is that if we learn about a system in class, unless there's compelling data presented in a question to suggest the model is something other than what we learned or its behavior is something other than what we learned, stick with what you know. So in the use of GroEL, the idea in that experiment was that, if you recall, this question was looking at these J proteins and asking, how do J proteins facilitate disaggregation? Right? And so a GroEL trap was used that cannot hydrolyze ATP, which means it's not active at folding any polypeptide. But the idea there is that these J proteins end up allowing monomers to come out of the aggregate, and then GroEL can trap and unfold the monomer to prevent reactivation. And so a number of people came to the conclusion that GroEL was binding that aggregate somehow in its chamber. And what we learned about GroEL is that its chamber can't house a protein over 60 kilodaltons. Right? We saw that in terms of the in vitro assays that were done looking at what its native substrates are. Right? So always go back to what you know, and then you need to ask yourselves, are the data suggesting some other behavior? And if that were the case, like what is your analysis of those data there? So please, even if you did really well, look at the key and see what the key has to say. And if you have questions, you can make an appointment with me or come to office hours or discuss with Shiva there. OK?
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: OK, so what I want to do today is hopefully finish up or get pretty close to finishing up module 6, where we've been focused on bacterial uptake of iron into cells. In the last lecture, I briefly introduced you to gram-positive and gram-negative big peptidoglycan, small peptidoglycan, outer-cell membrane. They both have the same goals. They've got to get-- They take up iron the same way from a siderophore, which is what we talked about last time, or by a heme. And we'll talk a little bit about that. And that's what you focused on in your problem set. But they have different apparati to do that, because of the differences between the outer-- because of the cell walls' distinctions between gram-negative and gram-positive. So we were talking, at the end of the class, about, this was for the siderophores which we talked about. We need to take them up. These are common to all uptake systems. You have some kind of ATPase system and ABC ATPase. We're not going to talk about that in detail, but it uses ATP to bring these molecules and also heme molecules across the plasma membrane. And then, in all cases, you have this issue of how do you get the iron out of whatever the carrier is, be it a siderophore where the carriers can bind very tightly or heme where you also have to do something to get the iron out of the heme so that it can be used. And so what I want to just say, very briefly-- and this you all should know now. So now we're looking at heme uptake. I'm not going to spend a lot of time drawing the pictures out, but, if you look at the PowerPoint cartoon, what you will see is there is a protein like this, which hopefully you now have been introduced to from your problem set. So this could be IsdB or IsdH. And we'll come back to that, subsequently. And it sits on the outside of the peptidoglycan. So this is the protein. The key thing that is present in all these Isd proteins is-- let me draw this differently-- is a NEAT domain. OK? And we'll come back to that later on. But this domain-- So you have a big protein, and there's one little domain that's going to suck the heme out. And so what happens is we'll see in Staph. aureus, which is what we're going to be focused on, you have hemoglobin. And somehow-- and I'm going to indicate heme as a ball of orange, with a little planar thing as the protoporphyrin IX. OK, are you all with me? And then somehow this gets sucked out into the NEAT domain, where-- And again, all of these gram-positive and gram-negative systems are slightly different, but in the Staph. aureus system we'll be talking about today and you had to think about in the problem set you basically have a cascade of proteins which have additional NEAT domains from which, because this is such a large peptidoglycan, you need to transfer the heme to the plasma-membrane transporter. And what's interesting about these systems and is distinct is that they end up, they're covalently bound to the peptidoglycan. And I'm going to indicate peptidoglycan as "PG." And we'll talk about that reaction today-- the enzyme that catalyzes those reactions. And all of these guys end up covalently bound to the peptidoglycan-- which is distinct from all of the experiments you looked at in your problem set. 
Nobody can figure out how to make the peptidoglycan with these things covalently bound. So what you're looking at is a model for the actual process. OK, so, also-- so that's the gram-positive. And in the gram-negative, one has two ways of doing this. And again, these parallel the ways with siderophore uptake. So you have an outer membrane-- So this is the outer membrane. And you have a beta barrel, with a little plug in it. And so these beta barrels, they're at, like, 20 or 30 of these things in the outer membranes. And they can take up siderophores, as we talked about last time, but they can also take up hemes. OK? So each one of these is distinct, although the structures are all pretty much the same. And so what you see in this case is, there are actually two ways that you can take heme up. So you can take up heme directly. And we'll see that what we'll be looking at is hemoglobin, which has four alpha 2 beta 2. So this could be hemoglobin. That's one of the major sources, and it is the major source for Staph. aureus. And so this can bind directly to the beta barrel-- gets extracted. The heme gets extracted. The protein doesn't get through. And so the heme is transferred through this beta barrel. OK. So that's one mechanism. And then there's a second mechanism. And the second mechanism involves a hemophore. And the hemophore is going to pick up the heme. And so every organism is distinct. There are many kinds of hemophores. And I have a definition of all of these-- the nomenclature involved. And so, after class today, I'll update these notes, because that's not in the original-- the definitions aren't in the original PowerPoint. OK? So what you have, over here, is the hemophore that somehow extracts the heme out of hemoglobin or haptoglobin. We'll see that's another thing. So this gets extracted and then gets transferred, in that fashion. And so these hemophores come in all flavors and shapes. They're different-- for example, in Pseudomonas or M. tuberculosis. And we're not going to talk about them further, but the idea is they all use these beta-barrel proteins to be able to somehow transfer the heme across. And what happens, just as in the case-- if you go back and you look at your notes from last time, there's a periplasmic binding protein that takes the heme and shuttles it, again, to these ABC transporters. OK? So, in this system, again, you have a periplasmic binding protein. And this goes to the ABC transporter, which uses ATP and the energy of hydrolysis of ATP, to transfer this into the cytosol. OK, so this is the same. That remains the same. And the transporters are distinct. And then, again, once you get inside the cell, what do you have to do? You've got to get the iron out of the heme. So the problems that you're facing are very similar to the siderophores. So, in all cases-- So the last step is, in the cytosol, you need to extract the iron. And you can extract-- usually, this is in a plus-3 oxidation state. So you extract the iron. And this can be done by a heme oxygenase, which degrades the heme. OK. In some cases, people have reported that you can reduce the iron 3 to iron 2, when the heme can come out, but that still probably is not an easy task because you've got four-- you've got four nitrogens, chelating to the heme, and the exchange, the ligand exchange, rates are probably really slow. So I would say the major way of getting the iron out of the heme is by degradation of the heme. And we're not going to talk about that in detail at all, either. OK. 
So that's the introductory part. And here's the nomenclature, which I've already gone through. I've got all these terms defined. And if you don't remember that, or you don't remember it from the reading, you have a page with all the names-- which are confusing. And so the final thing I wanted to say, before we go on and actually start looking at peptidoglycans and gram-positive bacteria and heme uptake in Staph. aureus, which is what I was going to focus on in this little module, is to just show you, bacteria desperately need iron. So what do they do? This is what they do. OK, so, here you can see-- and some bacteria make three or four kinds of siderophores. Others only make one or two kinds of siderophores, but what they've done is they've figured out how to scavenge the genes that are required for these beta barrels. So they can take up a siderophore that some other bacteria makes. OK? And that's also true of yeast. Yeast don't make siderophores, but most yeast have, in their outer membranes, ways of picking up siderophores and bringing it into the cell, since-- and remember we talked about the fact there were 500 different kinds of siderophores. But you can see that the strategy is exactly the same. You have a beta barrel. You have-- these are all periplasmic binding proteins. This picture is screwed up, in that they forgot the TonB. Remember, there's a three-component machine, TonB, ExbB and D, which is connected to a proton motive force across a plasma membrane, which is key for getting either the heme or the iron into the periplasm. And you use a periplasmic binding protein, which then goes through these ATPase transp-- ABC-ATPase transporters. So what I showed you was heme uptake, iron uptake, but in all of these cases, like Staph. aureus we'll be talking about, we can also get iron out of transferrin. We've talked about that. That's the major carrier in humans. The siderophores can actually extract the iron from the transferrin. And remember the KD was 10 to the minus 3, so somehow, again, you've got to get iron transferred under those conditions. And that's how these guys survive. So they're pretty desperate to get iron. And inside, once they get inside the cell, you have all variations of the theme to get the iron out. But they're all sort of similar. Somehow, you've got to get rid of whatever is tightly binding it. And if you're creative, you can reuse whatever is tightly binding it, to go pick up some more metal. OK. So that just summarizes what I just said. And so, in two seconds, I'm going to show you, now-- we've spent one whole lecture, a little more than a lecture, talking about iron uptake in humans via DMT1, the iron-2 transporter, and the transferrin transfer receptor. So, in the plus-two and plus-three states, we just started looking at the strategies by bacteria and saw how widespread they are. And then the question is, how do you win? OK, bacteria need iron. We need iron. And the question is, how do you reach-- and we have a lot of bacteria growing in us, [LAUGH] so we've reached some kind of homeostasis. But with the pathogenic ones, of course, we really want to get rid of them. And so that's what the issue is. And there have been a bunch of articles. You can read about this in a lot of detail, if you're interested in the more medical aspects of this. But this war between bacteria and humans. And really it's sort of fight for nutrients. And, in this case, the nutrient is iron. 
Has received a lot of attention, because we're desperate for new kinds of targets for antibiotics, because of the resistance problems. And so nutrient limitation and iron sequestration from a pathogenic organism might represent a new target. Of course, what are the issues? The issues are, we also need iron. And so, if you lower the amount of iron, then you might be in trouble, as well. So what we know is, bacteria, viruses-- bacteria have been extensively studied; viruses, less so, also protozoa, such as the malaria system-- all are known to depend on iron for growth. And so, again, if you want to read about this, you can read about some of the strategies these organisms [LAUGH] use to get iron away from the human systems. And it's sort of amazing, when you look at the details of how things have evolved, back and forth, back and forth, [LAUGH] in terms of survival. And so really what it's all about is homeostasis. OK? And that's what was all about in cholesterol. And we'll see, with reactive oxygen species, that's what it's all about. So, somehow, using hepcidin-- which is the human master regulator, the peptide hormone-- we need to figure out how to keep ourselves alive while killing off these bacteria, in some way, by sequestering the iron from the bacteria. OK. So this is an important problem that has received a lot of attention. And most of you know that the Nolan Lab is doing beautiful studies in this area. OK. So what I want to do now, for the rest of the lecture, is focus on Staph. aureus. OK? And Staph. aureus is-- methicillin-resistant Staph. aureus is a major problem, throughout the world. We don't have any ways to kill this guy. And so that's why I decided to pick this target, but there are many other [LAUGH] of these pathogens around that have problems-- have also resistance problems, Staph. aureus being the one that's been most extensively studied in the last decade or so. But bacteria has come back in vogue. For years, nobody on campus cared anything about [LAUGH] microorganisms or bacteria. The microbiome has brought it back in vogue, because people think they're going to be able to figure that all out. OK. But anyhow, bacteria have always been extremely important, not only in terms of human health but in terms of how the whole world functions. There are so many of them, and they do so much interesting stuff. And we have to live with them, side by side. So anyhow, we're going to look at Staph. aureus. That's what we're going to focus on, because of this problem. And I think Staph. aureus, which many people don't realize, is that 30% of all people have Staph. aureus on your skin or in regions that are not breaching into the bloodstream. So we all have Staph. aureus. So 30% of us have this bacteria. If you get-- if wherever it's localized is breached, and it gets into our bloodstream, then it's all over, because Staph. aureus can colonize almost anywhere. That's different from other organisms. Some organisms can only colonize in the lungs. Some colonize in the heart. So these can colonize almost all tissues. And what you know is, if you start thinking about physiology-- and again, I'm not an MD-- but different tissues have different environments. OK? And so a lot of organisms find siderophore an environment where they can best live and then take up-- make their home there. But Staph. aureus is one of these guys that can go anywhere. And so this makes it specifically very insidious. 
And you can get septicemia, or you can get endocarditis, or you can get all kinds of horrible diseases associated with Staph. aureus, once it breaches the barrier. OK. So what we need to do, as you've already seen from your problem set, to understand how Staph. aureus can get heme into its cytosol to be able to function, to be able to grow effectively, is, we need to look at the outer cell wall or the peptidoglycan. So what I'm going to do is spend a few minutes talking about the structure of the peptidoglycan. And then we'll go back in and we'll talk about how these proteins that you worked on in the problem set covalently bind to the peptidoglycan and allow you to take up iron to the cell. And why is heme a major target? Heme is a major target for Staph. aureus. They've evolved. The major source of iron, we all know, is hemoglobin now, in red blood cells. And so Staph. aureus has developed proteins-- endotoxins, really-- that can go in and-- there's proteins that can insert into red blood cell membranes, make a pore. The blood cells lyse, and now the bacteria are extremely happy because they have huge amounts of heme. And then they want to take that heme into-- to help them survive. So Staph. aureus are amazingly creative, in terms of getting the heme that they need for survival. OK. So, peptidoglycan. Most of you have probably seen peptidoglycan before. I'm just going to say a few things about peptidoglycan. So let's look at-- let's see. Where do I want to do this? All right. So I'm going to erase this. We're going to look at the cell wall. OK. And what you can see, here, I'm going to draw just a few things on the board. But what you can see here, in this cartoon, is you have two kinds of sugars-- N-acetylglucosamine and N-acetylmuramic acid. N-acetylglucosamine is a precursor to N-acetylmuramic acid. And what you see, attached to N-acetylmuramic acid, are little blue balls. And that's the peptide that turns out it starts out with a pentapeptide and goes to a tetrapeptide. And what you see here, in the purple balls-- and this is unique to Staph. aureus-- is, other amino acids, they're all the same, and this is glycine. So, if you look down here, here are the disaccharides, shown up here. Here is-- yeah, one, two, three, four, five. Here is the pentapeptide. And what do you notice unusual about the pentapeptide? You have a D glutamine. OK? And I was just reading a whole bunch of papers on somebody's thesis-- tomorrow, actually. And you're trying to make this guy, nobody can study this stuff. Why? Because you have to make a peptidoglycan. And I'll show you. It's complicated. You have to stick on a pentapeptide. You have to stick on the glycines. And how do you get the substrates for your enzymatic reactions? So we've known this pathway for decades, but it's taken really good chemists to be able to figure out how to look at these individual steps. And so what's unusual, here, is, if you replace glutamine with a glutamate, it doesn't work very well at all. OK, so it's that subtle. Here you've got this huge macromolecule, and you're replacing an NH2 with an OH, and you alter the resistance to different bacteria. And again, you have this unusual pentaglycine. And you'll see in the cartoon, in a few minutes, where do you think this glycine, pentaglycine comes from? Well, it actually comes from a tRNA that binds glycine. OK, you've seen that before. But, instead of using the ribosome to make this little peptide, it uses nonribosomal peptide synthetases. 
And this all happens in the cytosol of the cell. So, what do we know about the structure? I'm just going to draw N-acetylglucosamine. And what I'm going to do is put some R groups on here. So I'm going to put OX. And then here we have N-acetyl. So that's an acetate group. Here I'm going to put another OR group. OK, so the two things I want to focus on, the two things I'm going to focus on, is this X and this R. So is N-acetylglucosamine. And then the second one is N-acetylmuramic acid. And, in both of these cases, X is equal to UDP. So we're going to come back to this in the last module on nucleotides. So nucleotides play a central role in RNA and DNA, but they also play a central role in moving around all sugars inside the cell. So what you have here, actually, is a pyrophosphate linkage to UDP. OK? And if we look at N-acetylglucosamine, R is equal to H. OK? But if we look at muramic acid, what we're going to see is that nature has put on a lactic acid in this position. OK, so here's your methyl group, from your lactic acid. And here's the carboxylate. So this is the R group in N-acetylmuramic acid. OK. Now, what we're going to see is, while most sugars-- and this is true in humans, and it's also true in bacteria-- are carried around and transported within the cell as linked nucleotides, what we'll also see in the cell wall-- which has made them extremely challenging to study, made the whole pathway extremely challenging to study-- in addition to X equal to UDP, X can also be equal to sort of an amazing structure. And the structure is slightly different in different bacteria, but this strategy is also used in humans, where you have a lipid and you have a lipid that acts as-- is made from-- hopefully you now know-- is isopentenyl pyrophosphate. OK? And there are seven of these, where you have the trans configuration. There are now three of these, which have the cis configuration. Just make sure I get my-- is that right? Yeah, that's right. OK, so you have three of these that have the cis configuration. And then you have a terminal dimethyl L configuration. And this is C55. So, if you're a synthetic chemist, and you're trying to stick on a couple of these sugars with hydrocarbon on the tail, with C55, you can imagine you would have one heck of a trouble, number one, synthesizing it but, number two, dealing with it. And so this goes to the question which I think is really interesting is, many people think about polymerization reactions. We're going to see this polymer is non-template-dependent, in contrast to polymers of DNA RA, where you have a template. And furthermore, DNA and RNA are pretty soluble. These things become insoluble. So you're making a phase transition from soluble state to an insoluble state, around the bacteria. And I think it's really sort of a tribute to Strominger, who worked on this many years ago, that he figured out sort of the pathway. But now it's only with recent studies, and really some very hard work synthetically, and also in terms of the microbiology and biochemistry, that it's really allowed us to elucidate this. So X, in this case, can also be this lipid. So I'm just pointing out what the issues are. And if you look at the cell wall, biosynthetic pathway-- so this is inside-- you're not going to be responsible for the details of this. But this is outside. OK. So you start out with a couple of sugars. These are the sugars we just talked about. OK. So now what you do is add on these five amino acids. 
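As a quick arithmetic aside on the C55 lipid just described, the carbon count follows directly from the isoprene units given above:

```python
# Arithmetic check on the C55 carrier lipid using the counts given above:
# 7 trans isoprene units + 3 cis units + the terminal unit, 5 carbons each.
isoprene_units = 7 + 3 + 1
carbons = 5 * isoprene_units
print(isoprene_units, carbons)  # 11 units, 55 carbons
```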
So, over here, we ultimately need to add on five amino acids. And what do we see about the amino acids? They're unusual, because they can have the D-- they are not necessarily L-amino acids. They can be D-amino acids. And these things unfortunately are unique to different organisms. So, if you worked out a synthetic method for one, you're still faced with the problem that every one of them has different pentapeptides stuck on the end of it. Now, how would you attach-- you've now had a lot of biochemistry, where you've dealt with amino acids, in the first half of this course. How would you attach amino acids-- form and the linkages-- to this lactic acid? Can anybody tell me? What would you do, to make that attachment? AUDIENCE: You activate the carboxylate. JOANNE STUBBE: Yeah, so we have to activate the carboxylate. How do you activate the carboxylate? AUDIENCE: Make an AMP. JOANNE STUBBE: Yeah. So you make an AMP, just like you've seen with nonribosomal-- the adenlyating enzyme of nonribosomal polypeptide synthases, and you've seen with tRNAs. OK. So you see the same thing, over and over and over again. So you add these things on. The difference is that, again, these things, which are all soluble, down here, these are all soluble with the nucleotides. Now, because ultimately this needs to go from the inside of the cell to the outside of the cell, what you do, presumably, is take this lipid-- so you have the C55 lipid, with one phosphate on it. And then you attach it to one sugar. So here it's attached to the muramic acid, and that's called "lipid 1." You add N-acetylglucosamine with a glycosyltransferase. That's lipid 2. And that's the substrate for the polymerization reaction. What is the issue? The issue is, it's in the cytosol and all the chemistry happens on the outside of the cell. But, of course, if you move it from the inside to the outside, you don't want your substrates to float away. You've got to keep them there. OK? And that's especially [LAUGH] true in gram-positives, where we have no outer membrane. So the question is, how does this species get from this side to this side? OK. In the last couple years, people have proposed-- and so this has taken a long time. People have been looking for these proteins for decades. These are called "flipases." So you still have this issue-- again, this big, huge thing that needs to be transferred. And I think what's even more amazing, in the case of Staph. aureus, is that you put on the pentaglycine in the cytosol. So, here, what you'll see-- I think this is E. coli. I can't remember one from the other. But, instead of having DAP, which is diaminopimelate, you actually have lysine. So, here, what you have in Staph. aureus is a lysine, and the lysine has an amino group. And attached to this amino group is the pentaglycine. And this all occurs in the cytosol. So this is quite remarkable. So then, not only do you have to get the disaccharide with the pentapeptide on it, you need to have, here, the pentaglycine on it, as well. And this becomes really important in thinking about trying to study what's going on in the polymerization reaction, which is the target of natural products that are currently used, clinically. OK. So this thing's got to flip. And then what you have is a substrate. You have a growing chain. OK, and then what you need to do is extend this chain, so you have a glycosyltransferase. So you have two things. You have phosphoglycosyltransferase. And then the other thing you have is a TP, which is a transpeptidase. OK. 
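For reference, here is the sequence of cell wall assembly steps just described, collected as a plain checklist (paraphrased from the lecture; this is a summary, not a mechanism):

```python
# An ordered summary of the cell wall assembly steps described above,
# paraphrased from the lecture.
steps = [
    "Build UDP-GlcNAc and UDP-MurNAc-pentapeptide in the cytosol",
    "Transfer MurNAc-pentapeptide to the C55 lipid phosphate -> lipid 1",
    "Add GlcNAc via a glycosyltransferase -> lipid 2",
    "In Staph. aureus, add the pentaglycine branch (still in the cytosol)",
    "Flip lipid 2 across the plasma membrane (flippase)",
    "Polymerize the glycan chain (glycosyltransferase)",
    "Cross-link stem peptides (transpeptidase)",
]
for number, step in enumerate(steps, start=1):
    print(number, step)
```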
And so the transpeptidase-- we're going to come back to this in a second, but-- is ultimately responsible for making a cross-link, which is what gives the bacterial cell wall rigidity. Now, in many organisms, the glycosyltransferase and the transpeptidase are on the same protein. They're two domains. But, in many organisms, they're not. OK, so you have two separate proteins. And furthermore, in Staph. aureus there are now five of these kinds of proteins. So the question is, what are all five of these glycosyltransferases doing? Which ones are involved in which? Which ones are involved in antibiotic resistance? And I think, when you start looking at it like this, you know, it's very complex. You realize what a hard problem this actually is. But we now have the tools, I think, because of beautiful studies that have been done in the last few years, to start investigating this. So this just shows, here, again, we have our lipid 2. We have our growing chain. And here we have our pentaglycine. So this is Staph. aureus. And we take D-alanine D-alanine, and form a cross-link and kick out D-alanine. And many of you have probably seen this before. I used to teach this in high school. [LAUGH] So that D-alanine D-alanine looks like penicillin. And we understand how this works-- it looks amazingly like a serine protease, which you're all very familiar with. We've seen this hundreds of times, now, in the earlier part-- to form this cross-link. And that cross-link is essential for the viability of the organism, in different ways. And you can imagine, if a bacteria is dividing, that you might have different peptidoglycan structure at the site, where the two dividing bacteria are going to split apart. So that might be why you want to have multiple glycosyltransferases in this overall process. OK. So this is just a cartoon that shows you targets. These are all natural products. Here's penicillin. It targets-- It looks-- not in this picture, but you can use your imagination. It looks just like D-alanine D-alanine. Binds in the active site, and covalently modifies a serine involved in that reaction. Moenomycin. What does this look like? This is sort of amazing. It's got this lipid thing, hanging off the end. That's a natural product. It binds, also, to the glycosyltransferase. And people are actively investigating this. You can imagine, this is not so easy to make as a new antibiotic. And then we have vancomycin, and vancomycin is able to bind D-alanine D-alanine. So these are all natural products that target the cell wall. And, by far and away, the penicillins are the ones that are used much more prevalently. We have hundreds of variations of the theme. And, again, it's the war between the bacteria and the human, to figure out how to keep themselves growing. And so we have many variations on the beta-lactams. And you can take this even a step further, if you go-- in addition to the peptidoglycan, you have polymers of teichoic acid-- which I'm not [LAUGH] going to go into. But now people, for the first time, this year, have been able to reconstitute this polymer biosynthetic pathway. And this is a new target for design of antibacterials. So I think it's exciting times, and we have really smart people working on this problem. And they now, for the first time, can set up the assays, so they can screen for small molecules that hopefully can target the cell wall, which is unique to bacteria. OK. So what I want to do is talk about, in the last few minutes, as we're now moving into Staph. aureus. OK? 
And we're going to focus in on heme uptake rather than siderophore uptake. But if you look at this, what do we know about Staph. aureus? We know what a bit, because everybody and his brother has been studying it because of the problems with resistance. So, here, again, Staph. aureus actually has two biosynthetic pathways encoded in its genome. And what these pathways code for are these two siderophores. OK? And if you look at this, what's unusual? Does anybody see anything unusual about the siderophore structure, if you look at it carefully? I don't want to spend a lot of time on this, but what do you see in the structure? Can you read it? Or, if you brought your handout, you can probably read it. Since I insist on having the windows open, it's harder to read this. But what do we see, in siderophore, in this siderophore, Staphyloferrin A? See anything you recognize? Yeah. AUDIENCE: Some citrates? JOANNE STUBBE: Yeah, citrates. So, again, we're using citrate. We saw polycitrate can bind iron as a siderophore in itself. And, in fact, most gram-negative bacteria have iron-siderophore uptake system. Here, actually, all of these-- if you look at this carefully, the biosynthetic pathway, you know, is made out of basic metabolites. OK? That you see out of normal, central metabolic pathways. And what happens is, there's an ABC transporter and an ATPase-- FhuC is an ATPase-- all of this is written down in your notes-- that allow the siderophore to bring iron into the cell. And I think what's interesting here, and I've already pointed this out, in addition to the siderophores that the organism makes it also has a generic transporter that allows siderophores made by other organisms to bring iron into the cell. And so, again, that's a strategy that's used over and over again. So here's a xenosiderophore transport system, desperately trying to get iron. OK. So the ones we're going to be talking about and focusing on specifically are the heme uptake systems. And these are the ones you've already hopefully thought about, now, from your problem set. We have to extract-- I just told you that red blood cells have most of the iron. So Staph. has been incredibly creative in generating endotoxins that lyse red blood cells, allowing the heme-- hemoglobin, OK? So we have endotoxins from the organism that lyse red blood cells. And so what you get out, then, is hemoglobin. Which, again, has four hemes and iron. And you want to get-- the key thing is to get the iron out of the heme. So you want to be able to extract the iron out of the heme. And also-- and I have this down in your nomenclature-- it turns out red blood cells have another protein, called "haptoglobin," that binds to hemoglobin. And that's another place that these organisms have evolved to extract the heme-- to extract the heme. So, in all of these cases, you're extracting the heme out of the protein. And so, over here, you see the two different ways to do that. And we have different proteins that are able to do this. And then, eventually, the heme that's extracted is passed through this peptidoglycan, eventually to the plasma membrane, where the heme goes into the cytosol. And in this organism, to get it out, you have to break down the heme. You have to cleave it into pieces by the enzyme called "heme oxygenases." OK. So I don't want to really say very much about the siderophores, except to say-- let me comment on iron sensing. And you saw-- and this would be Staph. aureus, but it's in true iron-sensing for most bacteria. 
You saw iron-sensing predominantly at the translational level. Which was unusual. That's why we talked about it, in humans. Here, iron-sensing is predominantly at the transcriptional level. So this sensing occurs transcriptionally. And so you have a transcription factor, which is called "Fur." And Fur is a transcription factor. That name is used for almost all organisms. And I'm not going to say much about this, but we're going to look at the operon in a minute. But here's Fur. And if Fur has iron bound, what it does is a repressor. And it shuts down transcription of all the proteins that you might think it would shut down. They can no longer take up iron into the cell, because you have excess iron and you don't need anymore. Again, you want to control iron, because you have problems if you have too much iron with oxidative stress. OK. So, if you look at the operon-- let's see. So look at the operon, here. So here's the operon. And we're going to see that the key proteins involved in heme uptake are called the "Isd" proteins. And so, if you look at all of these Isd proteins, this Isd protein and that one, they all have these little Fur boxes. [LAUGH] So we have a Fur box ahead, which regulates whether you're going to make a siderophore or whether you're going to make all this equipment required to take up heme. So all of that makes sense, and people have studied this extensively, in many of these organisms. OK. So what I want to do now is, I'm going to show you this cartoon overview. And then we'll look at a few experiments that people have done to try to look at what basis in reality this cartoon model has to what actually happens inside the cell. So let's look at-- I can never remember the names of these things. I'm just going to call it the "Isd proteins." And so there are two proteins, we're going to see, that are closest to the surface, that directly interact with hemoglobin-- or haptoglobin and hemoglobin-- the other ones that are going to somehow get the heme out of the proteins. And then these each have little NEAT domains. So N1 is a NEAT domain. So they have a name for that, which I've also written down. It's, like, 120 amino acids. And each one of these proteins sometimes has two, sometimes has three, sometimes has one, and they're structurally all the same. But it turns out that you can't just pick up one and replace it with another. There's something about the spinach on each side of these NEAT domains that is key, you can imagine, for the directionality of the transfer. So you want something that the heme is going to get down here. You don't want something where the equilibrium is going to stay up there. So this is not an easy problem. And this is a problem that we discussed in the beginning-- the importance of exchange ligands. Because somehow we're going to have a heme in a little NEAT domain, but it's going to move into the next domain. It just doesn't hop. It's covalently bound. So how do you transfer one heme to the next heme? And we have a lot of structural information, but I would say we still don't understand how these transfers actually occur. OK. So there's a couple other things that I want to point out, here. So IsB and IsH extract from heme and hemoglobin. This gives you a feeling, which you also saw from the problem set, that these little domains-- N1 domains, N2 domains-- are all NEAT domains. So we have multiple domains. 
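To keep the directionality of the transfers straight, here is a sketch of the hand-off order usually drawn for the Staph. aureus Isd relay, consistent with the proteins named here and in the problem set; the ordering is the commonly cited model, and, as emphasized above, the mechanism of each NEAT-to-NEAT transfer is still not understood:

```python
# A sketch of the hand-off order usually drawn for the S. aureus Isd relay,
# consistent with the cartoon being described here and in the problem set.
# The ordering is the commonly cited model; how each NEAT-to-NEAT transfer
# actually occurs is not well understood.
relay = [
    "hemoglobin (or haptoglobin-hemoglobin)",
    "IsdB or IsdH (wall-anchored surface receptors)",
    "IsdA",
    "IsdC",
    "IsdDEF (membrane transporter, with lipoprotein IsdE)",
    "cytosol: heme oxygenases (IsdG/IsdI) release the iron",
]
for donor, acceptor in zip(relay, relay[1:]):
    print(f"heme: {donor} -> {acceptor}")
```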
And what we're going to see, and this is key to the way these organisms function, is that these Isd proteins are covalently attached to the peptidoglycan. So the issue is, we need to covalently attach the Isd proteins to the peptidoglycans. And the protein-- There are two different proteins that do this. So the Isd proteins have ZIP codes. Where have we seen this? We see this over and over and over again. We have little sequences of peptides that are recognized by another protein. OK? So we have ZIP codes. And the ZIP codes, I'll just say "see PowerPoint" for the sequence. And it turns out, if you look over here, all of these proteins with a yellow anchor have little ZIP codes in them. OK? [LAUGH] And they're recognized by a protein called "sortase A." OK. So we'll see that, in addition to the Ist proteins, we have sortases. And we have sortase A and B, and they recognize the ZIP codes, distinct ZIP codes, and are required to attach the Isd proteins covalently to the peptidoglycan. And in the peptidoglycan of any gram-positive, a lot of things are covalently attached to the peptidoglycan. So, I mean, can you imagine-- how dense do you need these proteins, to be able to do these switches? I mean, this is a cartoon overview that really doesn't tell you anything about the complexity of all that-- what does a peptidoglycan look like? Well, it's got a lot of water and a lot of space in between these N-acetylglucosamine, N-acetylmuramic acids. So this is involved in the covalent attachment. And it, in fact, involves what you've seen over and over again-- involves covalent catalysis with a cystine in its active site. OK? So what I want to do is briefly look at what these sortases actually do. I'm not going to write it on the board. I'll walk you through it and then, next time-- hopefully, you've already thought about this in some form, but I'll walk you through it and go through it next time. And then what we're going to do is simply look at a few experiments with Isd proteins, to look at this movement of heme across the membrane, similar to the kinds of experiments that you had on the problem set that was due this week. OK. So, because I don't have much time and I can't write that fast and you can't write that fast, either, [LAUGH] I'm going to walk you through sort of what's going on in this reaction. OK. So, remember, all of these things are anchored to the plasma membrane. OK, so that's the other thing. Sometimes they have single, transmembrane-spanning regions. Sometimes they have lipids that are actually bound. I wanted to say one other thing, here. So these yellow things are anchored by sortase A. The blue thing is anchored by sortase B. And IsdE is anchored by a lipid, covalently bound. OK, so we have three different strategies, to anchor. OK? And every organism is distinct. Whoops, I'm going the wrong way. OK, so what happens in this reaction? So here's our ZIP code. OK, and what we know about this-- and here's sortase A. Sortase A is anchored to the plasma membrane. In a further cartoon, they don't have it anchored, but I tell you it's anchored. And we know we get cleavage between threonine and glycine. And we know we have a sulfhydryl on the active site. So, this chemistry, we've seen over and over and over and over again, whether it's with serine or with a cystine, you have to have the right equipment to acylate the enzyme. So what happens here is you acylate the enzyme. 
And so this is the part of the protein that's going to get transferred, ultimately, to-- this is lipid 2, with the pentaglycine. And at the end of the pentaglycine you have an amino group. That's-- you're going to attach this protein, IsdA or IsdB to this lyse-- the end-terminal amino group of glycine in the pentaglycine. So you form. Again, you cleave this peptide bond. And you have this piece left over from your Isd protein. You now have this covalently attached to the sortase. And again, what you're doing is going to regenerate the sortase, so you can do more of these reactions. And here you're forming your linkage to the Isd protein. OK? Does everybody see what's going on in that reaction? So another cartoon version of this, and then I'll stop here. This is a more chemical version. Again, this is the sortase. Here is your amino-acid sequence. You go through a tetrahedral intermediate. This is all a figment of our imaginations, [LAUGH] based on what we think-- what we do understand, in the test tube of peptide bond hydrolysis-- not so much in the enzymes. But you generate an acylated attached protein. And then we have our pentaglycine, the terminal amino group that goes through, again, a tetrahedral intermediate to form this linkage. So what's happening-- I think this is, like, so, again, amazing-- what's happening is, you're transferring-- you've got your lipid 2, and you've transferred it across this membrane into the outside of your bacteria. So you've gotta hang it there. That's why you need these big, huge lipids. And what you're going to do is attach to this pentaglycine. You're going to attach each of these Isd proteins, covalently. And then what you do-- So you make this guy. Then you attach this whole thing onto the growing polypeptide chain. I mean, this is, like, an amazing machine that they've unraveled, I think, from studies that have been done in the last five years or so. So, next time, we'll come back and talk a little bit about the Isd proteins, but I think you should be fine, looking. You've looked at-- all you're doing is transferring heme, and we don't understand the detailed mechanism of how that happens. That's something hopefully some of you will figure out.
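Before leaving the sortase chemistry, here is a compact sketch of the transpeptidation just walked through, using string bookkeeping only. The substrate is hypothetical, and the exact sorting-signal ("ZIP code") sequences are the ones given on the slides; sortase A's signal is commonly written LPXTG, cleaved between the Thr and the Gly:

```python
# A compact sketch of the sortase transpeptidation just described, using string
# bookkeeping only. The substrate and its sorting signal are hypothetical.
def sortase_transpeptidation(surface_protein, thr_index):
    # Step 1: the active-site cysteine attacks the Thr-Gly bond, giving a
    # covalent acyl-enzyme (thioester to the Thr carbonyl) and releasing the
    # C-terminal piece of the substrate.
    acyl_enzyme = surface_protein[: thr_index + 1]
    released_tail = surface_protein[thr_index + 1 :]
    # Step 2: the amino group of the pentaglycine branch on lipid 2 /
    # peptidoglycan resolves the acyl-enzyme, giving the wall-anchored protein.
    anchored = acyl_enzyme + "-(Gly)5-Lys(peptidoglycan)"
    return acyl_enzyme, released_tail, anchored

substrate = "IsdA-like-protein...LPETG...membrane-anchor"   # hypothetical
thr_position = substrate.index("LPETG") + 3                 # index of the Thr
print(sortase_transpeptidation(substrate, thr_position))
```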
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. SHIVA MANDALA: And so just as an overview, today we're going to be talking about different techniques that are used to determine protein structure. We're talking a little bit about the protein data bank, or the PDB. And then the latter half of the recitation will be on-- we'll be doing a worksheet to look at the structure of ubiquitin and di-ubiquitin using PyMOL. So that's just to get you familiar with using PyMOL. And so the first question I'd like to pose to you is why should we determine protein structure, what can we learn from determining protein structure? And so I look to you for answers, a lot of answers. But does anybody have any ideas? Yeah? AUDIENCE: [INAUDIBLE] SHIVA MANDALA: Yeah, absolutely. Knowing protein structures does help you determine enzyme mechanisms. Anything else? AUDIENCE: Structure can indicate function. SHIVA MANDALA: Structure what? AUDIENCE: Can indicate function. SHIVA MANDALA: Yes, absolutely structure does indicate function. Can you be a bit more specific with respect to that? AUDIENCE: [INAUDIBLE] SHIVA MANDALA: Yeah, absolutely. Yeah, you can determine the active sites of enzymes. Any other ideas? Still a lot more to go. What about can you learn interactions with other macromolecules? OK. Well, let's just go through some of them. So yes, structure does, indeed, determine function. And the idea is if you know the structure of a protein, you can learn a lot about its function in vivo. And so some of the things that you can study is you can study enzyme mechanisms. It's relatively hard to do in X-ray crystallography, because every time you solve a crystal structure, it's only one snapshot of the enzyme. But you can definitely do it. You can design drugs or substrates that bind to the protein if you know what the active site looks like. You can design a high affinity inhibitor, and this is used in the pharmaceutical industry a lot. You can study translation and transcription, which is what we'll be learning about in class this week, and we'll be learning next week, as well. You can also make co-crystals of proteins with other proteins and nucleic acids and study interactions between macromolecules. And this is something that has been emerging. It makes it harder to solve a crystal structure, but X-ray crystallography is a very powerful tool for this. You can also study immune system functions. So this is more on the biological side of things. You can study host-pathogen receptors and their interactions. And many, many more. Really there's no reason why you shouldn't have a structure for whatever protein you're studying. And so the key idea is that if you know the protein structure, that allows you to carry out biochemical studies. But then also if you determine the structure, that can help rationalize results that you get from biochemical studies. So it goes both ways. And the final point that is perhaps not emphasized that much is that structure is a result of sequence. And ideally, what we would like to know is if we know the primary amino acid sequence of a polypeptide, we'd like to be able to predict its complete three dimensional fold. And that's sort of the idea behind protein folding. But we're not quite at that stage yet computationally. 
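Since the worksheet mentioned at the start of this recitation uses PyMOL to examine ubiquitin and di-ubiquitin, here is a minimal sketch of the kind of commands involved, written against PyMOL's Python API and run inside PyMOL. The PDB entry is a common example (1UBQ, monomeric ubiquitin), not necessarily the one used on the worksheet; di-ubiquitin structures can be loaded the same way with cmd.fetch:

```python
# Minimal PyMOL sketch for the kind of inspection the worksheet asks for.
# Run inside PyMOL; 1UBQ is used here only as a common example entry.
from pymol import cmd

cmd.fetch("1ubq")                         # download ubiquitin from the PDB
cmd.hide("everything", "1ubq")
cmd.show("cartoon", "1ubq")
cmd.color("marine", "1ubq and ss H")      # helices
cmd.color("orange", "1ubq and ss S")      # beta strands
cmd.select("lys48", "1ubq and resi 48")   # lysine 48, used in K48-linked chains
cmd.show("sticks", "lys48")
```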
And so that's why we need experimental data. But that's something we're moving forward, moving towards as a science. And so just to go over, I'm going to cover the three main techniques in protein structure determination. And so the most common one is X-ray diffraction. And I'll go over some of the details of X-ray later. I'll be focusing on X-ray diffraction in this talk. Some of the selling points of X-ray diffraction are that you can study a protein of any size-- small proteins, large proteins. You can study complexes of proteins. There is a need to crystallize your sample, which makes it challenging. It's very hard to crystallize proteins. In your body, no protein is crystallized. So you're trying to make proteins do something that they don't really like doing. You can obtain a high resolution structure. So a resolution of two Angstroms or even less is pretty common for good protein X-ray structures. But then also it's difficult to observe dynamics. So an X-ray structure is just a snapshot of the protein at a specific time. And so you often need a series of X-ray structures to really learn something about the mechanism or the dynamics of these proteins. The second most popular technique is nuclear magnetic resonance, or NMR. NMR is typically used to study small proteins. And the reason for this is that if you look at-- so this is a 2D NMR spectrum. If you look at the 2D NMR spectrum, there are a lot of peaks, right? And the more residues you have, the more amino acids you have, the more peaks you have and it gets very crowded. And so the limiting factor with using NMR to study large proteins is that you can't resolve all the chemical shifts. The plus point is that there is no need to crystallize your protein. You can study it in solution state or solid state. Solid state NMR is typically used for membrane proteins. You can use solution NMR to study any soluble proteins. You do need isotopically labeled samples. So these are 13C or 15N enriched samples, which is very hard and expensive to do. So that's one of the drawbacks with NMR is that it's a relatively expensive technique while X-ray is more accessible. You can obtain a high resolution picture with NMR, as well, but it often requires more work than X-ray crystallography in that you need to do about five NMR experiments. That can sometimes take months to determine a high resolution structure. X-ray is more accessible. But really, the big upshot of NMR is that you can observe dynamics within proteins. So you can really see-- proteins are living, breathing machines. And you can see that with NMR better than you can with any other technique. Another quick point which I don't have written up here is that NMR is sensitive to protons. And you can study hydrogens with NMR. You cannot study hydrogens with X-ray diffraction, for reasons that I'll come back to later in the talk. AUDIENCE: Can you explain how exactly NMR observes dynamics? SHIVA MANDALA: Yeah, so there is a series of different-- I mean, I don't know how much detail you guys know about NMR. But basically, you can study the relaxation of nuclei. That's often one that's used. So you study T1 and T2 relaxation of nuclei. And more mobile residues and more mobile atoms relax faster. But really, the idea is you can use-- there are a whole bunch of different experiments in NMR. And you can access timescales from the nanosecond up to the millisecond. So 10 to the negative 9 to about 10 to the negative 3 seconds of motion. 
So it's quite-- and they use different experiments for different parts of that timescale. And we can talk more about that in detail. The third technique is electron microscopy. So this is restricted so far to large proteins. The reason for this is that resolution is not so good. Again, you don't need to crystallize your proteins. So that's an option. You don't need to use labeled samples, either. So the sample preparation is probably the easiest for electron microscopy. The picture that you get is sometimes lower resolution, but the technology is moving forward to the point where we can get a resolution as good as X-ray structures. And I know that there's a 2.2 Angstrom resolution structure out there definitely, and there are others that are 3.2 Angstroms. But maybe there's something better than that out there in the literature. Again, it's difficult to observe dynamics. So similar to X-ray, it's just a snapshot of your enzyme. AUDIENCE: Is that picture-- is the concept similar to a normal microscope? SHIVA MANDALA: Yeah, absolutely it is very similar. And the only thing is you're looking at how electrons interact with with your sample as compared to light, right? Visible light, I guess. So each of these particles here is your protein, is a protein molecule. And then these are three dimensional reconstructions. So there's computer software that does this. So to go from this, you basically signal average over all of these different molecules. And then you signal average over all of your different orientations of the protein that are trapped in your static electron microscope image. And then using some image processing, you generate a three dimensional image of your protein that has a higher resolution than what you can see just with one single photo, I guess. So there's lot of computer processing that happens behind the scenes. AUDIENCE: So is the electron-- the interactions with electrons, is that similar to fluorescence microscopy? Because that's where you're seeing where your proteins are located, right? SHIVA MANDALA: So the difference with fluorescence-- I mean, here electrons can interact with any atoms, right? Any material. AUDIENCE: Oh, so you can distinguish what different atoms the electrons are interacting with? SHIVA MANDALA: Yes, because different atoms interact with-- different nuclei interact, and electron densities interact with electrons differently. But with fluorescence microscopy, you're usually looking at just a single molecule a fluorophore that's reporting on where your protein is. But electron microscopy is a much higher resolution picture. It's actually an atomic level-- well, maybe a few atoms level-- picture. Fluorescence microscopy is usually just used to study where your protein is if proteins are interacting. So that's more macromolecular interactions. But you can get single molecule resolution with fluorescence microscopy if you use the correct techniques. And so just as an introduction to the protein data bank, so the first graph tells you the number of structures in the PDB as a function of a year going from 1975 all the way to 2015. So you'll see today there are about 110,000 structures, of which 100,000 were determined using X-ray crystallography, and about 10,000 using NMR, and about 1,000 using electron microscopy. So really quite a nice ratio there. And if you see the yearly increase in the number of PDB structures, you'll see that X-ray is, of course, really big. NMR has been fairly consistent over time. 
I think that has to do with the fact that it's expensive and it takes time to prepare your samples. But you also see a huge spike in electron microscopy of late. And so with the advent of cryo-EM, a lot more people start using cryo-EM to determine protein structure. AUDIENCE: Doesn't [INAUDIBLE] produce [INAUDIBLE] or you just put it in [INAUDIBLE] SHIVA MANDALA: Yeah, it can be. But the problem with that is when you put your sample in the [INAUDIBLE], you can get chemical shift information. But chemical shift doesn't tell you anything about protein structure. I mean, it tells you a little bit. It tells you about what the electron density is at the atoms. But what you really need to get from NMR experiments are distance restraints. And so these are through-space experiments. So you can say that, oh, this one carbon nucleus is at a distance of 6 Angstroms away from this other carbon nucleus. And you typically want to accumulate about five restraints per atom. And so to collect five times however many atoms you have in your sample, it can take time. It's hard to do. So chemical shift by itself doesn't tell you much about protein structure. Any other questions so far? All right. So now we will focus the rest of the talk on-- well, another part of the talk on X-ray crystallography. And so crystallography is the science of determining the three dimensional position of atoms in a crystal. And so what a crystal is, a crystal is a solid material whose constituents are arranged in an ordered pattern extending in all three spatial dimensions. And so the key idea is that this translational symmetry-- so if you go in any of the three directions for a certain length, you'll come back to the same pattern that you start off with. And so this is a crystal of your protein of interest. What you want to know is how the proteins are packed or arranged within this crystal structure. And also as a result, how the atoms are arranged within the crystal structure. And the way this works is by diffracting X-rays through your sample of interest. And with this slide, I just wanted to point out that it's not restricted to proteins. You can study salts, you can study your favorite small organic molecule. Whatever you want, really. And so the general workflow is that you have a source of X-rays-- that can be a synchrotron or a local source; synchrotrons are much brighter than local sources-- that you shine on your crystal. And you obtain what is known as a diffraction pattern. And so this tells you something about how the X-rays are interacting with the atoms in the crystal. And so this used to be collected on a photographic plate. This particular image is on a photographic plate, but now people use CCD sensors. It's a lot easier. And knowing the-- sorry, one more thing. Each of these dots, light and dark, on the diffraction pattern is called a reflection. And that contains some information about the electron density and the crystal structure. And from your diffraction pattern, you can then back calculate the electron density in your crystal structure that gave rise to this diffraction pattern. And the way you do that is by looking at the intensity of these reflections. And you also need-- there's also something called phase, so you need to determine phase. And sometimes you'll see in the literature you'll see heavy atoms being introduced, or mercury being introduced. 
And that's often to determine the phase, which is essential for calculating the electron density. Once you determine the electron density, you know what protein you started off with. And so you know what your protein looks-- you know the sequence of your protein. And so then you just take your electron density and fit whatever polypeptide chain you have to that. And then usually this is all automated nowadays. So you press a few buttons and it goes through; the software does everything for you. But it was much more challenging early on. And even now, the computers will get you up to a certain point. And then in the last, last stages of refinement, you always want to-- usually people do that by hand. Any questions about X-ray crystallography? AUDIENCE: [INAUDIBLE] how strongly or complex, but how do you get that electron density from the diffraction pattern? Like in organic chemistry, in basic [INAUDIBLE] I thought that you can-- from the diffraction pattern, you can learn the distance between atoms in the lattice points. But here with proteins, every point is itself a protein, right? In the lattice? No? SHIVA MANDALA: No, no, no, no. Because you're still looking at-- every point in the lattice is still an atom if you're doing proteins. It's just that there are a lot more atoms, and the lattice is a lot bigger, which is what makes protein crystallography hard compared to small molecule crystallography. So it's harder to solve a protein crystal structure than it is a small molecule crystal structure just because there's so many more atoms in your lattice. But the idea is exactly the same as a small molecule. It's just a lot harder to do. And for more information, I actually have a resource at the end that goes in-depth into the math of the process. But just briefly, this diffraction pattern is collected in what's called reciprocal space. And to go from this to electron density, you need to do a Fourier transform into real space, which is where the electron density lives. But I will provide a reference for more information on that. And so the next part of this recitation will be thinking about some of the limitations of X-ray crystallography. So there's a lot of them, but I'll turn to all of you for your inputs. AUDIENCE: Is it difficult to develop crystals of certain types of proteins? SHIVA MANDALA: Yes, absolutely yes. First point, it's really hard to purify and crystallize proteins. It's really not a trivial task. It can take months or even years to do so. And nowadays you have robots that can set up reactions under hundreds of different crystallization conditions. It's sort of black magic. It's hard to predict what crystallization conditions are going to give you a high quality crystal. Anything else? Yes? AUDIENCE: Like the crystals you get may or may not be physiologically relevant? SHIVA MANDALA: Yes, absolutely. Right on point with the questions. But yeah, it's hard to tell whether the crystal structure that you get is depicting what's happening with the protein in vivo. And I mean, this is a problem that's inherent to X-ray crystallography, right? Every time you solve a crystal structure, you don't know whether it's relevant. But usually I think it turns out that it's pretty close, if not completely accurate in solution. But sometimes you do have to be careful about this. It's especially challenging for stuff like membrane proteins, where you don't really know. Any other ideas? 
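As a small aside on the Fourier-transform step just described, here is a minimal sketch, not from the recitation, of going from a grid of structure factors in reciprocal space back to an electron-density map in real space. The structure-factor array is synthetic and purely illustrative; in a real experiment the amplitudes come from measured reflection intensities and the phases have to be estimated separately.

import numpy as np

# Synthetic example: a small 3D grid of complex structure factors F(hkl).
# The amplitudes and phases below are made up for illustration only.
rng = np.random.default_rng(0)
amplitudes = rng.random((16, 16, 16))                 # stand-in for |F(hkl)|
phases = rng.uniform(0, 2 * np.pi, (16, 16, 16))      # stand-in for the phases
F = amplitudes * np.exp(1j * phases)

# The electron density rho(xyz) is the inverse Fourier transform of F(hkl).
rho = np.fft.ifftn(F).real

print(rho.shape)             # (16, 16, 16) grid of density values
print(rho.min(), rho.max())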
So when proteins are translated, do they usually-- are they usually used just like that, or does something else happen to the proteins in most cells? AUDIENCE: [INAUDIBLE] SHIVA MANDALA: Yes, absolutely. Post-translational modifications. So a lot of proteins are post-translationally modified. And so when you're growing a crystal of your protein, you usually just use a purified version of your protein, so you often can't really capture those modifications. And sometimes these PTMs are essential for the function of the protein. So you're missing some part of the picture. Anything else? What about movement? Can you tell what proteins are flexible, what parts of the-- sorry. AUDIENCE: Well, it's like if part of the protein is mobile, then you won't have the density for it. SHIVA MANDALA: Yes, that is true. Can you discern anything about dynamics and flexibility? And the answer is you can tell something. You can tell something about the relative motion of different parts of the protein with respect to each other, but it's hard to tell something about the absolute motion of these proteins. So you can't see, say, larger scale motions, right? Most proteins are living, breathing machines, and it's hard to capture that in an X-ray structure. And one more thing. So is there any element that you cannot detect in X-ray crystallography very well? This has to do with the way-- so in X-ray crystallography, you're seeing interactions of X-rays with electrons, right? So does anybody know? Yeah, I heard somewhere. AUDIENCE: Protons. SHIVA MANDALA: Protons, yeah. So hydrogens have just one electron. And so their X-ray signal, so-called, is really weak. Really, really weak. And so you can't really see protons with X-ray crystallography. And so you can't really study hydrogens or hydrogen bonds. And if you look at the structure of a protein that has hydrogens in it, those hydrogens were put there as a result of an average bond length-- a typical bond calculation. So it's not actually experimentally determined. You can use neutron diffraction to get around this, but neutron diffraction is hard to do because you need to grow very large crystals to study. And I think there are about 80, I think, neutron-- around 100 neutron structures in the PDB so far. But for small molecules, neutron diffraction is much more accessible. And so the idea is the same as X-ray crystallography, except you're using neutrons. And the final point is that one structure only tells you part of the story. Again, this is emphasizing the fact that one structure is just a snapshot of the protein at a certain time. And if you want to correctly interpret your data and learn something more about the protein, you often have to use complementary biochemical techniques. Are there any questions at this point? So the last part of the talk is on how to assess the quality of structures in the PDB. They're large structures, and you want to be able to know whether the model that's presented to you is actually accurate, actually reflects the data that was collected. And so the first point is what is the resolution of the structure? And so the take home message is that a lower number means a greater resolution. And the resolution here is actually referring to the distances between the atoms in the plane. And so that's where that's coming from. So that's why if you have a lower number, that means you can resolve atoms that are closer together. A one Angstrom resolution means that you can resolve atoms that are one Angstrom apart on parallel planes. 
But the take home message is that at a one Angstrom resolution, you can see individual atoms and you can discern the identities of those atoms by looking at their electron density. But if you compare that to a four Angstrom structure, you'll see the benzene ring doesn't really have a clearly defined electron density, and there's no hole in the center. But if you look at the one Angstrom structure, you can see that there's even a hole in the center of the benzene ring in the electron density there. And then if you look at-- so these are the data statistics. So this is just pulled from the PDB for 2JF5. So this is the PDB ID for di-ubiquitin, and we'll be looking at the structure later in the worksheet. The resolution tells you, of course, about the resolution of the crystal structure. And so in this case, it's at 1.95 Angstroms or two Angstroms, and so that's pretty high resolution. Reflections are each of those points in the diffraction pattern that you collect, and a unique reflection refers to the fact that you've only collected it once. So usually when you collect diffraction patterns, you put your protein in a certain orientation with respect to the X-ray beam, and then you collect the diffraction pattern, and then you rotate your crystal a whole bunch of times, and you collect a whole bunch of different diffraction patterns. And then you superimpose all of those together to get the master diffraction pattern. Redundancy refers to how often each reflection was observed. And so this is signal averaging. The more redundancy you have, the greater the number of times you observed that particular reflection. Completeness refers to how many of the data points were actually measured. And so this is, when you've created your model, you can back calculate your diffraction pattern. And then you see how many of those reflections were experimentally observed in your data set. And so for this you usually want as close to 100% as possible, and anything above 95% is considered fairly good. R merge is an indicator of how consistent measurements are. So this is a measure of what the difference between different measurements of the same reflection is. So if you look at the intensity for the same reflection measured a number of different times, you're seeing the standard deviation of that. So you want as low a number as possible. So usually you want it to be about 1/10 of the resolution of your crystal structure, which is about 0.2 in this case. I forgot to mention this, but the values in parentheses are for a high resolution bin. So this is just a certain subset of this data set that's considered to be higher quality than the original data set. And that's actually coming from this value, which is signal intensity over sigma of signal intensity, and that's a measure of the signal to noise ratio. So a higher signal to noise ratio is better. And this is fairly good. And the higher the better. And the cutoff is at 2 for the high resolution bin of your data points. All right, so this is just the raw data. And then the refinement statistics tell you something about your refinement process that gave you the crystal structure that you calculated. So R cryst, which is also called R work, and also called the R factor-- that tells you how well your model and your data match. And so this is where you calculate the difference in the diffraction patterns between what you experimentally observed and what you calculated using the model of electron density. 
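To make the R factor definition concrete, here is a minimal sketch, not from the recitation, of how R cryst (R work) can be computed from observed and calculated structure-factor amplitudes. The arrays and the roughly 5% held-out split are hypothetical placeholders; R free, which comes up next, is just the same formula evaluated on the held-out reflections.

import numpy as np

def r_factor(f_obs, f_calc):
    # R = sum |Fobs - Fcalc| / sum |Fobs|
    return np.abs(f_obs - f_calc).sum() / np.abs(f_obs).sum()

rng = np.random.default_rng(1)
f_obs = rng.random(10000) * 100               # made-up observed amplitudes
f_calc = f_obs + rng.normal(0, 5, 10000)      # made-up model-derived amplitudes

# Hold out about 5% of reflections before refinement; they define R free.
test = rng.random(10000) < 0.05
r_work = r_factor(f_obs[~test], f_calc[~test])
r_free = r_factor(f_obs[test], f_calc[test])
print(round(r_work, 3), round(r_free, 3))     # R free should stay close to R work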
R free tells you how well your model and data match when corrected for overfitting. So the idea behind this is that when you collect your data set, you put aside about 5% of the reflections that you observe in the data set to prevent overfitting. And then when you've created your model, you go back and see how well those 5% of data points fit the model that you've come up with. And if they fit well, that means you've accurately predicted your crystal structure using the data that you have. If they don't fit well, that means you've just fit whatever data points you have to some model. You've used a bunch of data points and you fit it to some model, but that's not actually accurate to the protein structure that you determined. And so for this you want it to be about 1/10 of the resolution. That's also true for R cryst. But then the other key point is that it should be very close to the R cryst, because it's telling you that it's random error, or it's the quality of your data set that's causing this, and not some over-refinement that you've done during the refinement process. B factor tells you about how mobile atoms are in the crystal lattice. And this is something that's not particularly useful if you look at the bulk statistic. But if you need to use it, it's important to evaluate this by residue. And so if you look at amino acids in the loops of proteins, you'll find that they usually have a higher B factor, meaning that they're more mobile. And so you can often tell what residues are important for function by looking at the B factor. B factor is kind of-- there's inherent vibrational motion in all atoms, right? So there will be a B factor at any temperature greater than 0. But it also does tell you a little bit about the disorder in protein structures. And the B factor of water is often included just so that people who are looking at the crystal structure afterwards can decide whether that water is really there, or really was part of the electron density, or whether it's just an artifact of over-refinement, or something like that. And then the final statistic that you can look at is the RMSD from ideal geometry. So the geometries of these bonds and angles are usually well known. And so if you compare the results from your structure to the known stereochemistry, you'll find that this is actually just at the threshold of the cutoff for what is considered good. So 0.015 Angstroms, this is the standard deviation of the bond lengths of your model versus what is already known. And so 0.015 Angstroms is just about the cutoff that's considered good for X-ray structures. And same for bond angles. 1.5 degrees is considered the threshold of what is considered acceptable. And you can also look at Ramachandran statistics-- so this is looking at the phi-psi backbone angles-- to tell you if there are any steric clashes, say, for side chains that really shouldn't be there. And if you do have a steric clash nowadays, you have to report it to the PDB. The only exception is for glycine, which doesn't have a side chain. And so it can adopt a strange phi psi angle that's outside the Ramachandran plot. Any questions on this? Anything else about X-ray crystallography? So this brings us to the end of the talk, and this is a resource that I found that has more information about X-ray crystallography, and the math behind X-ray crystallography, and more of the theory behind it. But again, I want to emphasize that today it's a very automated process-- you click a few buttons and anybody can do it.
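As a small illustration of evaluating B factors by residue rather than as a bulk statistic, here is a hedged sketch that averages the B-factor column of ATOM records in a PDB-format file. The file name is a placeholder, and the fixed-column positions are those of the standard PDB text format.

from collections import defaultdict

def b_factor_by_residue(pdb_path):
    # PDB fixed-column format: chain ID in column 22, residue number in
    # columns 23-26, B factor (temperature factor) in columns 61-66.
    sums = defaultdict(float)
    counts = defaultdict(int)
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith("ATOM"):
                res = (line[21], int(line[22:26]))   # (chain, residue number)
                sums[res] += float(line[60:66])
                counts[res] += 1
    return {res: sums[res] / counts[res] for res in sums}

# Hypothetical usage: residues with high average B factors are often in
# flexible loops.
# for res, b in sorted(b_factor_by_residue("2jf5.pdb").items()):
#     print(res, round(b, 1))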
17_PK_and_NRP_Synthases_3.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: Last time we were working on this PKS assembly line that makes a macrolide. And just as a review of that, we left off having gone over the domains and module architecture for this assembly line. So recall, each module activates a given monomer. And we can use depictions like this to show how the PKS builds a growing polyketide chain. OK, and as you saw in recitation last week, the actual structure of one of these synthases is very different from what's depicted by this left-to-right kind of assembly-line depiction there. So you saw some amazing conformational changes of the fatty acid synthase. And they're all different, but just keep that in mind when thinking about these. So this sort of notation is very helpful for us in terms of thinking about how the biosynthesis goes, but it's not an accurate representation of structure. OK, so where we left off was with looking at how these optional domains can do chemistry on the upstream monomer. And the last thing we're going to do related to this assembly line is, one, ask how the polyketide is released from the assembly line when the biosynthesis is over. And then we'll just do one exercise looking at the macrolide and working backwards. So last time, we were looking at the domain organization to determine what sort of chemistry happens to a given monomer. We can do, effectively, the opposite, looking at a natural product and identifying what those monomers and properties of the assembly line are. OK, so in terms of chain release, there are thioesterase domains. And these domains are involved in chain release from the assembly line. So if you take a look in the final module here, what we see at the end is a TE for the thioesterase. OK, and so what happens in the case of DEBS is, ultimately, the chain gets transferred to a serine residue on the TE domain. OK, and I'm just going to draw the polyketide like that. And then in this case here, we remember we have the propionyl-CoA from the loading module, so the starter unit. In this case, what happens is there is a macrocyclization. So we can imagine deprotonation-- oh, excuse me, I forgot the linkage here. So for this TE, we no longer have the growing chain tethered by a Ppant arm. With the TE domain, it's tethered to the serine residue. So it's transferred from the thioester to this serine. And this here is just the polyketide in between. I'm just abbreviating it. So we can have that. We can have attack and loss. OK, so in this case what we end up with is the TE domain plus a macrocycle. And so that's how we end up with the structure as shown here. So some TE domains will result in formation of a macrocycle. Some TE domains will catalyze a hydrolytic release. And you get the linear chain. So you need to look at the natural product structure. And based on that structure, you can make an assessment as to how the TE works. And so that's also shown in this depiction and one other depiction in the notes. So here, the entire chain is drawn. And we're seeing deprotonation here and then attack here to give the macrocycle. So here is the product of this DEBS. 
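A minimal sketch, not from the lecture, of the "read the optional domains" logic used in the exercise that follows: given which optional domains are present in a PKS module (using the KR, DH, and ER abbreviations from lecture), predict the state of the beta-carbon contributed at that position. The module-by-module sets in the example are chosen for illustration, not taken from the actual DEBS domain list.

def beta_carbon_state(optional_domains):
    # optional_domains: set of abbreviations, e.g. {"KR"} or {"KR", "DH", "ER"}.
    d = set(optional_domains)
    if not d:
        return "ketone (no optional domains)"
    if d == {"KR"}:
        return "hydroxyl (ketoreductase only)"
    if d == {"KR", "DH"}:
        return "alkene (ketoreductase + dehydratase)"
    if d == {"KR", "DH", "ER"}:
        return "fully reduced methylene (KR + DH + ER)"
    return "combination not covered in this sketch"

# Illustrative module-by-module sets (invented, not the real DEBS order):
for module, doms in enumerate(
        [{"KR"}, set(), {"KR", "DH", "ER"}, {"KR"}, {"KR"}, set()], start=1):
    print("module", module, "->", beta_carbon_state(doms))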
And what we're going to do as a last exercise with the PKS is just look at this structure and work through identifying the monomer units and what optional domains acted on each monomer. And basically, where can we start? So if the thioesterase catalyzes a macrocyclization, that's an easy starting point, because basically, the final monomer needs to be involved there. And we know that the only place we can get a structure like this is from the starter, from that propionyl-CoA. So here, if we just look, we have the monomer from module 0, the loading. And then as we learned last time, each additional unit that gets attached to the growing polyketide gives two carbons, so two carbon units to the growing chain. So we can work our way around by two carbons, 1, 2. OK, here we have module 1, another two carbons, module 2 here, module 3, module 4, module 5, and here, module 6. So looking at a structure, you can begin to dissect what the assembly line will look like in terms of the number of modules by counting C2 units to the growing chain here. And then the other thing we can do is look at the functional group status and ask what types of optional domains needed to be there in order to give a given functional group. So for instance, in this case, in module 1 here, we're seeing an OH group. So we know there had to be the action of a keto reductase to reduce the ketone. Here we see we have this carbonyl, so there was no optional domain. In this case, what happens? We have a methylene. So that ketone we started with was fully reduced. So in this case, we have the keto reductase, the dehydratase, and the enoyl reductase. Again, here we can look at this unit. We see an OH, which tells us that there was action of a keto reductase. And here we have another OH, so we have a keto reductase. And in this case, we have none, this final one. And here, I didn't write it, but none in terms of optional domains. So this can be pretty fun. This is a pretty simple structure, but as structures get more complex, you can map out what are the optional domains there. And maybe you'll see, some other unusual structural features will indicate there is other optional domains beyond these three. And we're going to see some of that as we move into the non-ribosomal peptides. So that's given in the notes if you want to practice on that. So with this, we're going to transition into an NRPS and look at the assembly line logic for non-ribosomal peptides. And so this is a slide from last time, where we considered the starter units and extender units for fatty acids and for polyketides. And so in non-ribosomal peptides, we also have starter units and extender units, but in the case of the non-ribosomal peptide, as the name indicates, we're going to be thinking about amino acid monomers. And we're also going to be considering examples where there is aryl acid monomers. So these NRPS assembly lines will form polymers that incorporate amino acid and aryl acid monomers. And this is another slide from last time that is just summarizing the core domains and then examples of optional domains for the PKS and NRPS. So we learned last time that for PKS, every module will have a KS and a T domain, with the exception of the loading or starter module. That has no keto synthase, because there is no upstream group here. For NRPS, the core of a module is CAT trio. So condensation domain, or C domain, this is the domain that's going to catalyze peptide bond formation between two of the monomers. We have an adenylation domain. 
And we'll see this does chemistry similar to the aminoacyl tRNA synthetases. And then we have the T domains that are carrier proteins for the monomers and growing chain. OK, and then within a given NRPS module, there can also be optional domains. And just two examples are shown here. So maybe there is an epimerization of an amino acid. Maybe there is a methyl group and there needs to be a methyltransferase to put that on. There is a lot of diversity that comes into these structures on the basis of these optional domains. And just to highlight that, I've presented here a list of possible optional domains you can find in NRPS, or for that matter, a PKS here, so all sorts of things. Look at halogenase, cyclase, reductase. There is tremendous structural diversity that can occur. OK, so if we consider the NRPS assembly line structure and notation similar to what we did with the polyketide synthases, what do we see? So I'll just draw one with two modules here, although n can indicate more. So initially what we have here is a starting or loading module, OK, so for instance, module 0. OK, here we have module 1, 2 for extenders. And here we have a thioesterase for chain release. So we'll find that in the final module like what we saw with the polyketide synthase for DEBS. So this whole thing can be called an NRPS here. And what happens in terms of the action of these different core domains-- so A, we have adenylation. OK, and what these domains do is select and activate the amino acid or aryl acid monomers. OK, and after these monomers are activated, the A domain also transfers them to the T domain. And we'll go over the chemistry in a minute. OK, this T domain is like what we saw with the PKS. We can call it a thiolation domain or a peptidyl carrier protein. So these T domains are going to be modified with the Ppant arm, like what we saw for PKS. We have the C domain, condensation. And so this domain catalyzes peptide bond formation. And I'll just point out here that, in contrast to the keto synthase we saw in PKS-- so we saw the keto synthase doing covalent catalysis via its cysteine residue-- the condensation domains of NRPS are involved in non-covalent catalysis. So that's just an important distinction. The growing chain does not get attached to the C domain here. And then we have the TE, so thioesterase, as we saw, for chain release. And this can be hydrolytic or macrocyclization. OK, so let's consider just the example of an NRPS that is responsible for synthesizing a tripeptide. So what is the net reaction? So imagine that we have three amino acid monomers. And I'll just point out here too that beyond knowing an epimerization domain epimerizes an amino acid, you're not responsible for stereochemistry in terms of the various structures we'll look at going through here. So I'm just not drawing stereochemistry here. So we have three amino acid monomers. There is going to be some NRPS that's responsible for formation of the tripeptide. And what we'll see is that making a trimer requires three ATPs, so one ATP per amino acid or aryl acid monomer, giving us three AMP plus three PPi here to give us our tripeptide plus three water molecules here. OK, so how does this happen? How does the NRPS take these monomers and build, say, a tripeptide? We're going to look at the ACV synthetase as a model for this. And so the ACV tripeptide is important. It forms the backbone of antibiotics of the penicillin and cephalosporin classes. So many of these are used clinically. 
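A tiny worked example, not from the lecture, of the stoichiometry just stated: one ATP is consumed per amino acid or aryl acid monomer loaded, so an n-mer costs n ATP and releases n AMP and n PPi.

def nrps_loading_stoichiometry(n_monomers):
    # One adenylation event per monomer: ATP -> aminoacyl-AMP + PPi,
    # then AMP is released when the monomer is transferred to the T domain.
    return {"ATP consumed": n_monomers,
            "AMP released": n_monomers,
            "PPi released": n_monomers}

print(nrps_loading_stoichiometry(3))   # the ACV tripeptide case: 3 ATP, 3 AMP, 3 PPi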
So here are the structures of penicillin N And the cephalosporin. So at first inspection, you might not guess that these are effectively built from a tripeptide, but what happens is that a non-ribosomal peptide synthetase, the ACV synthetase, is responsible for forming two amide bonds between the three starters-- or the three monomers. And then there is additional enzymes that are responsible for modifying that peptide scaffold to give, say, this four-five fused ring system or this four-six fused ring system. OK, so what is the overall reaction of this? So similar to having these three amino acid monomers here, what we have are aminoadipate. We have L-cysteine and L-valine. And the synthetase takes these three monomers and makes this molecule here, which is called ACV. And so if we look at the synthetase in cartoon form, this is the cartoon. So we see a loading module, so just AT. Similar to the PKS, there is no catalytic domain to make a new bond in the loading module because there is nothing upstream. We see a module here, CAT. We have another CAT trio here. And then what's this? This is our first example of an optional domain within an NRPS. OK so this E is for epimerization. I mean what we'll see is that the synthetase epimerizes L-valine to D-valine during the synthesis, and then the thioesterase. So similar to what we did with the PKS, for the NRPS, you can count T domains as a way to identify the modules and to figure out how many monomers are involved. I also just point out-- and this builds upon Colin's comment from last time-- is that this assembly line is responsible only for the synthesis of a tripeptide, but look at its size. It's greater than 450 kilodaltons. That's quite big-- so a large enzyme, 10 different domains, all just for synthesis of this one tripeptide here. So what happens? We're going to go over the action of the A domains and the T domains first. And then we'll look at a cartoon in the slides. So the first points to make are that we need to have loading of the assembly line. So amino acids need to be selected and activated. And that's where these A domains come in. So what's happening? So we have some amino acid monomers-- so maybe it's the L-cysteine, for instance, or the L-valine-- plus ATP. The A domain does chemistry similar to what we saw with the aminoacyl tRNA synthetases to form an activated intermediate. So we get an amino adenylate here. And then what happens? So the T domain-- OK, and this T domain must be modified by a PPTase, like what we saw for the PKS, to have the Ppant arm. After we have activation of the amino acid or aryl acid monomer, the A domain is going to assist with transfer of this monomer to the Ppant arm of the T domain here. OK, so we got an aminoacyl-S-T covalently tethered via a thioester linkage. So one ATP is consumed per monomer loaded. And the ATP PPi exchange assay we discussed back in the translation module for studying the aminoacyl tRNA synthetases is used all the time to study new A domains and ask what amino acid or aryl acid monomers do they activate here. So that assay comes up in this type of work. So what happens then in terms of formation of a peptide bond, we're going to consider condensation by the C domains. And so let's just imagine-- we're just going to draw two modules. So we have a loading module and then a first extender module. And the T domains have been post translationally modified with the Ppant arm. And the action of the A domains has loaded the amino acids at this stage. 
OK, so we have some amino acid loaded here. And then we have some amino acid loaded here. OK, and what happens? We're going to have nucleophilic attack from the alpha amino group onto the upstream monomer and then transfer of this monomer. And this occurs via the action of the C domain. We have R2. And now we have formation of our new peptide bond. Sorry, this is R2 here. And as I noted above, there is no covalent catalysis with the C domain. Somehow it's helping to bring these chains together and to allow this nucleophilic attack to occur and to allow the monomer to be transferred, but this unit is never transferred to the C domain itself. Yeah? AUDIENCE: So is the C domain responsible for deprotonating the NH2? Or is that just always-- ELIZABETH NOLAN: Yeah, I don't-- how this gets deprotonated, I don't know. But this is back to similar, like what we saw in the ribosome. And somehow, this alpha amino group needs to be deprotonated. And there is something in the environment of this machine that's allowing that to happen, but whether it's the C domain or something else, yeah, I don't know the answer to that. So let's look at a cartoon of this with this ACV synthase. So here we have on top the synthase loaded with the amino acid monomers. OK, so we see the loading module and then two extender modules. We have the aminoadipate. So it's not a canonical amino acid, but it's amino-acid-like. We have the cysteine and the valine. What happens is, as these condensation reactions occur, we get chain elongation. So this is depicted here in a similar manner to how that PKS assembly line was depicted. So formation of two peptide bonds, and then what happens? Ultimately, we have chain transfer to a serine residue on the thioesterase domain. And this is a case where the thioesterase domain catalyzes this hydrolytic release. So as opposed to macrocyclization, we're seeing activation of a water molecule and attack, which releases this ACV tripeptide. OK, and I've drawn the ACV tripeptide here to indicate effectively getting to this structure. So what happens after this tripeptide is released from the assembly line is that there are additional enzymes that play a tailoring role. So like, for proteins we talk about post-translational modification, for these types of natural products, we talk about post-assembly-line tailoring. And so in this case, there are some enzymes, such as IPNS, a non-heme iron enzyme that's responsible for oxidative cyclization to give the fused ring system characteristic of these beta-lactams like isopenicillin N. We can look at this in another cartoon form. So here is the holoform. Recall, we called the T domains apo when the serine is not post-translationally modified with the Ppant arm. And the T domains are holo when the Ppant arm has been attached, as indicated by this squiggle. We then have loading of the amino acid monomers via the action of the A domains. So formation of that aminoacyl AMP or amino adenylate intermediate, so one monomer per module. We have chain elongation events catalyzed by the condensation domain. We have chain transfer to the TE domain as shown here, chain transfer, and then chain release here, and then post-assembly-line tailoring. So with that in mind, what we're going to do now is look at another non-ribosomal peptide synthetase. This one synthesizes the backbone of the antibiotic vancomycin. And the structure of vancomycin is shown here. This is an antibiotic that's basically considered one of last resort for bacterial infections. 
And there is a huge problem of vancomycin resistance in the clinic these days. So at first glance, this molecule might not look like it's based on a peptide. But then if you look more carefully, you see there is a lot of amide bonds. And there is also some other things going in to get this final structure. So effectively, the backbone of vancomycin is a polypeptide that's a sevenmer. So within this heptapeptide scaffold, there are two proteinogenic amino acids and five non-proteinogenic amino acids here. And because we have seven amino-acid-type monomers, we need an assembly line that has seven modules, one module per amino acid monomer. And what we'll see is that these seven modules are distributed over three proteins. We have a case of a thioesterase catalyzing hydrolytic release. And then we're going to need to think about what are the other tailoring enzymes involved in giving vancomycin this structure. So for instance, look here. We see there is this aryl-aryl C-C bond. We see these aryl-ether connections. And we also have these sugars attached. And look, there is also an N-methylation here of leucine 1, so a lot happening. And the consequence of this post-assembly-line tailoring is that, what's a linear sevenmer polypeptide ends up having an architecture that's described as a dome, so a dome-shaped architecture. And what vancomycin does is that it blocks biosynthesis of the bacterial cell wall by binding to a certain lipid precursor in that. So let's look at the assembly line. And this is just an overview of the tailoring I just told you about. And this is the amino acid sequence in order of the different monomers there and the identities of the non-proteinogenic amino acids. So here is the assembly line. And if we take a look, we have the loading module, AT. We can count the T domains to give us the modules involved in extension. So there is seven T domains. And look, CAT, CAT, CAT-- we have a number of optional epimerization domains. And at the end, we see this TE domain. And so you can walk through and look at each monomer being attached to the growing chain. And then what do we see? What we see happening down here is that when we have the linear polypeptide attached to this module here, what happens is that there is some tailoring happening while the polypeptide is still attached to the assembly line. So enzymes that are not parts of the assembly line but are involved in the biosynthesis can come in. And sometimes they'll modify the chain when it's still attached to the NRPS or PKS. Or sometimes they do the chemistry after the chain is released. And often, this is a question that people need to sort out experimentally. So in this case here, we see that there is some oxidative cross-linking that occurs while the chain is still attached to the T domain. So there is formation of the aryl-ether bond and this aryl-aryl bond here. And then after the chain is released in a hydrolytic manner, what happens is the sugars get attached post-assembly-line here. Do you have a question? AUDIENCE: Yeah, are the enzymes ever actually in the assembly line, like the optional domains of PKS? Or in this case, is it always such that the enzymes are separate? ELIZABETH NOLAN: It will depend on the assembly line. Yeah, so that's something you need to look for in the assembly line from the bioinformatics. 
So in this case, we're only seeing epimerization domains in the assembly line, but there can easily be methyltransferases, or reductases, or cyclases-- any number of possibilities within the assembly line itself there. And these optional domains will work on the upstream monomer. This is just an example of the tailoring enzymes involved for cross-linking of this vancomycin scaffold. In this case, there are three cytochrome P450 enzymes that are needed in order to make these cross-links. And that chemistry is shown here to get to what's called the vancomycin aglycone, which means that there are no sugars attached. And I won't draw this one on the board, but you can do a similar exercise with this molecule or any others in terms of identifying the monomer units from the structure for yourself. So if we're looking here, we have effectively the N-terminus, so the starter, and then effectively look at the peptide bonds and work your way through to find the different monomers here. So by doing that, if you're given a natural product, you can figure out how many modules are needed in the assembly line. And you can also make an assessment as to what other types of chemistry might have to happen. And I'll just keep in mind, for something like this-- let's just take this for an example with this halogen. You might ask, well, is that part of the monomer? Or is that atom incorporated sometime down the road? OK, those are types of questions people who explore biosynthesis of these molecules think about. OK, so with that in mind, let's take a look at some examples. And the questions are, what kind of assembly line is this? How many monomers? And maybe there will be some extra questions as we go. So here we have an assembly line that's required to make an antibiotic called daptomycin. And a company down the street in Lexington called Cubist has done a lot of work on this natural product. So how many monomers are here? Yeah, 13, right-- so count these T domains based on what's seen here. How many optional domains? AUDIENCE: Three. ELIZABETH NOLAN: And then what else do we see? So we see that this assembly line is divided over three proteins, effectively, here. And similar to what we saw with DEBS, when we have a break in the cartoon, that indicates a new polypeptide chain. What's missing? AUDIENCE: Loading module. ELIZABETH NOLAN: Yeah, there is no loading module here, right, no AT at the beginning. So what's going on? So in this case, I haven't shown you a structure. It highlights there is always exceptions to the rule. What happens here is that the loading module actually loads a fatty acid, so not a standard monomer for NRPS. So that fatty acid has to come from somewhere. And you can think about discussions here as to where that may have come from. Look at how big this is, 624 kilodaltons, 783, 256-- we're on the order of 1.5 megadaltons. This is huge for a 13-mer natural product. What about this one? What do we see here? So this is a natural product-- this makes the natural product produced by Streptomyces that has insecticidal activity. And it kills parasitic worms. But anyhow, what kind of natural product is produced by this assembly line? We have a polyketide, right? How many modules? [INAUDIBLE] the T domains. AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Yeah, 13 again, right-- four proteins, 13 modules, so how many unmodified beta ketones? What would you want to look for for a modified beta ketone? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Exactly, no optional domains-- so how many of those? 
Yeah, right, so two modules, we have one here and then one over here with no optional domains. What about this one? This is for a molecule called bleomycin. JoAnne is an expert on the mechanism of this molecule. What's going on? OK, there is a lot going on. This one is very complicated. But in terms of making an assessment about the type of biosynthetic logic, what do we see here? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Right, so what we see is that there is both non-ribosomal peptide synthesis happening and polyketide biosynthesis happening in this assembly line. And that tells us that the product metabolite is a PKS-NRPS hybrid. OK, so what do we see? We see all of these CAT trios which are indicative of non-ribosomal peptide biosynthesis. And then what's happening here? We have a module that's using polyketide machinery. And then we go back to non-ribosomal-peptide-based logic here. We have many proteins, right? So this assembly line is divided over many proteins. And look, we see that even some of the modules are divided up. So for instance, this CAT trio is divided between two proteins. So you may not have all domains of a module on a given protein. AUDIENCE: What happens if you have two C domains in a row? ELIZABETH NOLAN: So where do you see two C domains in a row? AUDIENCE: Between BlmV and BlmX. ELIZABETH NOLAN: Five and-- AUDIENCE: Is that actually in a row? ELIZABETH NOLAN: Yeah, so then that's the question. Are they actually in a row? AUDIENCE: Further down, four Cy cyclases without any C domain. ELIZABETH NOLAN: Yeah, so that's actually where I was going next. So what's going on with the Cy without a C domain? So what's happening-- and we'll probably, if there is time, go over an example of this on Friday-- is that Cy, so these cyclization domains are a variant on a condensation domain. And what they do is, they both catalyze formation of the peptide bond and then they catalyze-- after that, they catalyze formation of a heterocycle. So if you recall, I believe we looked at the structure of yersiniabactin during the first lecture on these. It has a number of heterocycles. And those form by this Cy domain. And we can see that here in the structure. So what I've done on this slide is just present to you the structures, so the natural products that result from these different assembly lines. And if we take a look at the bleomycin, what do we see here? We have these two heterocycles that are fused together. And those are formed via the action of these two cyclization domains down here. So effectively, these originate from cysteine. So cysteines, and serines, and threonines can end up forming structures like these if there is the appropriate type of domain. This molecule is extremely complicated here. And so it's a good puzzle to look at it and try to sort out what are the monomers in it in here. Does anyone know what this does, bleomycin? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Well, so it's an anticancer antibiotic here. It can intercalate into DNA. And these heterocycles are important for that. And then it causes strand breaks. And I've actually learned recently it's also used for, like, treating arts. So it will kill HPV that causes warts. Anyhow, all of these compounds have interesting activities, which is one reason why they can be of interest. So with the logic in place, where we're going to close this module is thinking about how folks study these in lab. So say you want to figure out the biosynthesis of a molecule like daptomycin or bleomycin, what is it that one needs to do? 
And something just to keep in mind with this right off the bat, is that these are huge. So some of these examples here, if you take a look at the sizes, they're, like, comparable to the prokaryotic ribosome. That's a huge protein assembly. And that presents a limitation from the standpoint of doing experimental work, because trying to overexpress or produce these assembly lines in something like E. coli is typically just unreasonable. And in terms of a native producer organism, say, something like Streptomyces, we may or may not know conditions that cause the organism to make the natural product, so conditions that cause it to express this machinery, and then even if it made at a-- in an amount that's useful. So what happens? What are we going to do as experimentalists? So as I said, we need to keep in mind that these machines are enormous. And so we need to take this into account during experimental design. And these days, bioinformatics drives a lot of the studies. So rather than first finding a natural product and determining its structure and then hunting down the protein machinery, a wealth of genomes are becoming available. And so you can use bioinformatics to search for PKS or NRPS gene clusters. And then you can make some assessment as to what type of molecule these gene clusters might be responsible for making. So bioinformatics plays a huge role. And it allows us to predict the domains, to predict their locations, and predict their boundaries here. So as I just said, overexpression of a complete assembly line is generally not feasible. So what do people do? People will typically express individual domains or maybe di-domains and study those in the test tube. So you can imagine PCR amplifying an A domain or a T domain, or maybe the A and T domain together, and then creating some plasmid that allows you to express that in E. coli. So there is a lot of overexpression. The proteins need to be purified, so maybe something like affinity chromatography that we've spoken about before. And then a key point is that, in order to have any of this chemistry work, these T domains need to be post-translationally modified by the Ppant arm. And if you're overexpressing a T domain from Streptomyces or some organism in E. coli, you can pretty much assume there is no PPTase in E. coli that's going to do this for you. So you need to do that after the fact. And so there needs to be a PPTase. And what we'll see is that there is a PPTase from B. subtilis called SFP that's very promiscuous. It will basically modify any T domain. And so experimentally, this is what people use, because often, one has no clue what the endogenous PPTase is here, so SFP to the rescue. In terms of activity assay, so once you have your domains or di-domains purified, what happens? This is the typical flow. So the first is to characterize the A domains and to ask, what amino acid or aryl acid is activated by the A domain and what is the selectivity? And by getting that information, you have a good clue as to what monomer a given module is responsible for. And the ATP-PPi exchange assay we discussed in the context of the aminoacyl tRNA synthetases is commonly employed. So this is where we use the radiolabeled ATP and took into reversibility there. So go back and review that assay as needed. There will be some examples of this in the problem set. So once the A domain activity is known in terms of preferred monomer, the next question is, will that A domain transfer the amino acid monomer to a given T domain? 
So you design assays to look for transfer of the activated monomer to the post-translationally-modified T domain here. So in these assays, there is a lot of work with radiolabels, with HPLC, and mass spec. So once these T domains are loaded, you can look for peptide bond formation. So imagine you have an isolated T domain from a loading module that you've stuck the amino acid on and then you have this guy, the next question is, does the C domain catalyze bond formation reaction? And again, we'll see there is a lot of use of radiolabels, HPLC, SDS-PAGE here. And then you know, there is the question of the TE domain and the TE domain catalyzing chain release. So it's quite systematic in terms of how you work through from identifying an assembly line to then teasing apart the various activities of the different domains and different modules. And so where we'll close this module on Friday is with looking at the experiments that were done for the biosynthesis of an iron chelator produced by E. coli and working through basically you know, how was it that this NRPS was found? What were the experiments done to identify the different activities of the different domains? And it's really that work that has served as a foundation and a paradigm for many, many further studies of these systems here. And so with that, we'll close for today. And there is no class Wednesday, so I'll see you on Friday.
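As a hedged sketch of the module-counting exercises from this lecture, here is a small function that takes a flat list of domain abbreviations for an assembly line (the abbreviations used in lecture), counts the T domains to get the number of modules, checks for a TE domain, notes any optional domains, and flags whether the logic looks like NRPS, PKS, or a hybrid. The example domain list is invented and only loosely shaped like the ACV synthetase.

def summarize_assembly_line(domains):
    # domains: flat list of domain abbreviations, listed N- to C-terminus.
    optional = {"E", "MT", "Cy", "Ox", "KR", "DH", "ER"}  # illustrative, not exhaustive
    return {
        "modules (T domains)": domains.count("T"),
        "chain release (TE present)": "TE" in domains,
        "optional domains seen": sorted({d for d in domains if d in optional}),
        "NRPS logic (C/A domains)": any(d in ("C", "A") for d in domains),
        "PKS logic (KS domain)": "KS" in domains,
    }

# Invented example shaped like the ACV synthetase: A-T, C-A-T, C-A-T-E, TE.
example = ["A", "T", "C", "A", "T", "C", "A", "T", "E", "TE"]
summary = summarize_assembly_line(example)
summary["hybrid PKS-NRPS"] = summary["NRPS logic (C/A domains)"] and summary["PKS logic (KS domain)"]
print(summary)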
MIT_508J_Biological_Chemistry_II_Spring_2016
6_Protein_Synthesis_5.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: So where we left off last time, we were talking about using antibiotics as tools to study the ribosome. And recall that antibiotics have many different structures and can bind to the ribosome at different places. And we closed with talking about this antibiotic, puromycin, that can bind to the A-site and cause chain termination, and also molecules that are derivatives of puromycin, such as that more elaborate one with a C75 there. And so an example of a system where puromycin has been employed, and this is just one of many, many examples, but also gives us a little new information about players in translation, involves studies of elongation factor P, so EF-P. And if you recall, where I closed last time was with the comment that this EF-P over the years was implicated in a variety of cellular processes, but its precise function remained unclear. And so Rodnina and co-workers conducted a series of experiments to ask, what is the effect of EF-P on peptide bond formation when different dipeptides are in the P-site? OK? And their experiments were motivated by the fact that there was some preliminary work out there suggesting that EF-P accelerates peptide bond formation, but really, the details were unclear. So we're going to look at the experiment, the initial experiment they did, which led to some new understanding about how EF-P affects the translation process. So what is it that they want to do in this experiment, effectively? Imagine we have our ribosome, and we have our three sites, OK? And so what they do in this experiment is they have a dipeptide loaded in the P-site, OK, where x is some amino acid. OK, and then what they want to do is have puromycin in the A-site and then effectively monitor for peptide bond formation with or without EF-P added, such that the product is effectively a tripeptide, where we have fMet, the amino acid, and puromycin, OK? And keep in mind, if this is what's being monitored, there needs to be a step to hydrolyze this tripeptide off the tRNA that's in the P-site, OK? And throughout this work, how they monitored this is that they have a radiolabel on the formylmethionine. So you can imagine that you can somehow separate and see the dipeptide as well as this tripeptide-like molecule with the puromycin attached. So how to set up an experiment to test this? So they do a stopped-flow experiment, so you heard some more about that method in recitation last week. And so in thinking about this, we need to think about what will be mixed. So what are the components of each syringe? How will this reaction be quenched? And so beginning to think about that, the question is, how do we even get the ribosome we need to start with in order to see the reaction? Right? So imagine that the goal is to have a post-translocation ribosome, so effectively that dipeptide is in the P-site, and the A-site's empty. And then that assembled post-translocation ribosome needs to be mixed with puromycin such that puromycin can enter the A-site and peptide bond formation can occur. OK? So there's quite a bit of work that needs to happen to even get this experiment set up, because somehow that post-translocation ribosome needs to be made.
OK, so if we think about this from the standpoint of the experiment and using the stopped-flow to rapidly mix, we have syringe 1, and we have syringe 2, and we have our mixer. OK, so what are we going to put in syringe 1? OK, so here, we're going to have the post-translocation ribosome. The A-site is empty, and the P-site holds the dipeptide attached to the tRNA. And then in syringe 2, we're going to have puromycin here. OK, so before we get to EF-P, and thinking about how we're going to look at that in this reaction and what it does, how are we going to get here? So what needs to be done to get this post-translocation ribosome? Is it in the Sigma catalog? Bio-Rad? No way! And even if it were, you would be broke needing to purchase enough to do this experiment, right? You talked about needing high concentrations in recitation last week for these types of experiments. So where does this come from? What do you need to do before even getting this into your syringe here, to do a rapid mixing experiment? AUDIENCE: You have to isolate it from cells? ELIZABETH NOLAN: OK, so what is the likelihood of isolating-- well, what is it, what do you need to isolate from cells? AUDIENCE: Well, you're going to need to modify it afterwards because there'll be all sorts of other things. ELIZABETH NOLAN: Right, but what is it? AUDIENCE: The ribosome. ELIZABETH NOLAN: OK, so we need a ribosome. Right? What else do we need? So we need the ribosome, and we need to get this into the P-site. So how are we going to get that dipeptidyl-tRNA into the P-site? AUDIENCE: You need an mRNA. ELIZABETH NOLAN: We need an mRNA, and we're going to design that mRNA based on what amino acids we're interested in. So we need to come up with an mRNA. What else do we need? So think back to the whole cycle. AUDIENCE: You need EF-Tu, GTP. You need everything necessary to form the fMet-x peptide. ELIZABETH NOLAN: Yeah. So what does that mean first? And when does that bond form? That's the next thing, right? So can we deliver this species to the P-site, based on what we understand about translation from the past four or five lectures? No. Right? So first, the initiation complex needs to be prepared in lab, which means you need initiation factors, a ribosome, mRNA, the initiator tRNA. And then that initiation complex needs to be purified, which is done by a type of sucrose gradient centrifugation. OK, and then what? Once that initiation complex is formed, there needs to be a round of elongation, where the ternary complex of EF-Tu, the aminoacyl-tRNA, and GTP comes in to deliver that x-tRNAx to this A-site, and then have peptide bond formation occur. OK? And then, we also need the help of EF-G to move that to the P-site, right? So that whole cycle we've talked about from a fundamental perspective needs to be done at the bench in order to get here. So there are a lot of factors that need to be purified and obtained, quite a bit of effort to just even set this experiment up. OK? So always think about where these things come from. So we have this. We have puromycin, right? And then we want to look at the effect of EF-P. So the idea is, are there differences in peptide bond formation? Is it accelerated in the presence of EF-P, as some of this preliminary data indicated? And if so, is that for all amino acids? Or is it specific for certain amino acids, right? So we need to include EF-P. And in these experiments, it was either omitted or included in each syringe.
And something just to think about when thinking about these rapid mixing experiments is what happens in the mixer, right? If you're having the same volume, which is the case coming from syringes 1 and 2, you're going to have a dilution in here of all of the components. Right? So these are going to be rapidly mixed in the absence or presence of EF-P. There'll be some time to allow for reaction to occur. And then, in this case, the reaction is going to be quenched. So it's the quench-flow-type setup that came up in the recitation notes from last week. So in this case, we're going to have a syringe 3 with a quencher. And in this particular work, they used base, so sometimes it's acid, sometimes it's base. And this was a solution of KOH. OK, so then after some time, OK, we can have the reaction quench. OK, and then there'll be some sort of workup and product analysis. OK? So in this case, they chose to hydrolyze the peptidyl-tRNAs and look at the peptide fragments. So you can imagine you need a method that's going to separate fMet-x, whatever amino acid x is, from that product there. And then the radiolabel on the fMet is used for quantification. So what happens here? And I'll just give a summary, and then we'll look at it in more detail. So what they did in these experiments-- and recall that JoAnne talked about in recitation last week, when doing these kinetic experiments, you have to tweak them quite a bit to get exactly the right conditions to observe what you want to see. So imagine that happened. We have our k observed, and I'm going to show these on a log scale. So always keep in mind to pay attention to what type of scale the axes are on. And so what we're going to look at is the k observed for formation of this tripeptide, depending on the amino acid. And I'm going to generalize a bunch of the data here, and then we'll look at all the individual cases. OK? So here, we have x does not equal proline, OK, and here, not colored in, is no EF-P. And shaded is the k observed for the reactions conducted in the presence of EF-P. OK? So what was observed in these studies, looking at having many different amino acids here? What they found is that many of these amino acids showed negligible difference, whether or not EF-P was included in the reaction. OK? And we can look at that data in more detail from the paper on the slide. What was very striking about these initial experiments was what happened in this case, when x equals proline here. So effectively, what they observed in this case was about a 90-fold rate acceleration. Effectively, if we compare the k observed for peptide bond formation in the absence of EF-P, we see it's significantly diminished for proline if EF-P isn't there. And along those lines, it was known before that proline attached to its tRNA is a poorly reactive aminoacyl-tRNA. So different aminoacyl-tRNAs react differently in the ribosome. So there's that layer of complexity we haven't really talked about in this class yet here. So if we take a look at all these different examples, this one is the outlier. OK? So what these data indicated is that EF-P has some special role in accelerating peptide bond formation for peptide bonds that contain a C-terminal proline residue here for that. And so these experiments were just a starting point for many additional experiments that ended up showing EF-P is really critical for helping the ribosome translate sequences that have consecutive prolines in a row. So either three prolines or maybe a PPG sequence here.
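To make the kinetic readout concrete, here is a minimal sketch of how one might extract a k observed from quench-flow time points and compare the reactions with and without EF-P. The time points and amplitudes below are invented, chosen so the fold difference comes out near the ~90-fold effect described above; this is not the authors' data or their exact analysis pipeline, which quantified the radiolabeled fMet peptides after hydrolysis and separation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-exponential model for fraction of fMet-Pro-puromycin formed vs time.
def single_exp(t, amplitude, kobs):
    return amplitude * (1 - np.exp(-kobs * t))

np.random.seed(0)

# Hypothetical quench-flow time points (s) and simulated fraction product;
# true values chosen as kobs = 0.05 s^-1 (no EF-P) and 4.5 s^-1 (plus EF-P).
t = np.array([0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 30, 60])
frac_minus_efp = single_exp(t, 0.85, 0.05) + np.random.normal(0, 0.01, t.size)
frac_plus_efp = single_exp(t, 0.85, 4.5) + np.random.normal(0, 0.01, t.size)

popt_minus, _ = curve_fit(single_exp, t, frac_minus_efp, p0=[0.8, 0.1])
popt_plus, _ = curve_fit(single_exp, t, frac_plus_efp, p0=[0.8, 1.0])

kobs_minus, kobs_plus = popt_minus[1], popt_plus[1]
print(f"kobs without EF-P: {kobs_minus:.3f} s^-1")
print(f"kobs with EF-P:    {kobs_plus:.3f} s^-1")
print(f"fold acceleration: {kobs_plus / kobs_minus:.0f}x")
```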
And in the absence of EF-P, what can happen is that the ribosome stalls. So these aminoacyl-tRNAs are not very reactive, and the ribosome just gets kind of stuck. And you can imagine that's not good for the cell. And then if we bring these observations back around to some of these early works that were suggesting EF-P has a role in a diversity of different cellular processes, what might we ask? We might ask, well, where do these sequences of multiple prolines come up? So what types of proteins have three prolines in a row some place in their sequence? Or something like PPG. And so they took a look at that. And if we think about E. coli, there's about 4,000 different proteins, and there's a subset of around 270 that have these types of sequences in them. So not hugely common, but they exist. And so then ask, what do these proteins do? Right? Provided a function is known. And so what we see is that within that subset of about 270 proteins, there are examples of proteins that are involved in regulation, in metabolism, you know, important cellular processes. So you can begin to understand why it might be that this protein got implicated in all these different types of phenomena, right? But in terms of the details, it's really back here in terms of how this translation factor is helping the ribosome make a certain subset of peptide bonds there. So if you're curious about this, the paper's really wonderful. There are a number of additional interesting experiments that are done and additional methods beyond these kinetics there. I'm happy to point you in that direction. So yes? AUDIENCE: Does this rate of the reaction have an effect on folding at the ribosome? ELIZABETH NOLAN: It could. I mean, basically, you're talking about what happens as the polypeptide extrudes from the ribosome, right? And if you're stalled and have some piece of this nascent polypeptide on the outside. Ribosome stalling, yeah, what does that do in terms of how trigger factor, for instance, interacts? That's something we'll talk about in the next module, and we'll be getting there on Wednesday, I hope, if not Friday. So with that, we're going to close discussions of module 1 on the ribosome with looking at some biotechnology and thinking about how we can use this fundamental understanding of the ribosome to do some new things. And so we're going to talk about re-engineering translation and ways to use this machinery to incorporate unnatural amino acids. And so to begin thinking about this, we can just consider some questions. And so many of us in this room are chemists or chemistry majors. We can think about organic chemistry, so 5.12, 5.13, and all of the different organic transformations that are presented. So if we think about all these organic transformations and how they're available to synthetic chemists, we see a lot of versatility. And we can simply ask ourselves, can such versatility be achieved for protein modification? What is the toolkit? How can that toolkit be expanded? And then thinking about this further, can we use the translation machinery? So is it possible to modify the translation machinery to allow us to make peptides or proteins that have unnatural amino acids? So amino acids or moieties that are not the canonical ones. And can we do this in cells? Can we do this in a test tube? And if we can, what does that provide us with in terms of possibilities? So the answer is yes, and we're going to focus on the how and the strengths and limitations in terms of our discussions of this machinery here.
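Circling back for a moment to the tally of polyproline-containing proteins mentioned above, here is a toy sketch of how you might count PPP or PPG motifs across a set of protein sequences. The sequences below are made up; a real analysis would run over the annotated E. coli proteome (roughly 4,000 proteins, of which around 270 carry such motifs, as discussed).

```python
import re

# Hypothetical protein sequences; in practice these would come from a FASTA
# file of the annotated proteome (for example, parsed with Biopython's SeqIO).
proteins = {
    "hypotheticalA": "MKTAYIAKQRPPPGSLNAA",
    "hypotheticalB": "MSTNPKPQRKTKRNTNRRP",
    "hypotheticalC": "MAEPPGLLQKWPPPVN",
}

# EF-P-relevant motifs discussed in lecture: three consecutive prolines,
# or a Pro-Pro-Gly stretch.
motif = re.compile(r"PPP|PPG")

hits = {name: bool(motif.search(seq)) for name, seq in proteins.items()}
n_hits = sum(hits.values())
print(f"{n_hits} of {len(proteins)} sequences contain a PPP or PPG motif")
for name, has_motif in hits.items():
    print(f"  {name}: {'yes' if has_motif else 'no'}")
```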
I also note-- I believe, JoAnne, this will come up. Will you be talking about this in the nucleotide parts, too? JOANNE STUBBE: If we get that far. ELIZABETH NOLAN: If we get that far. So in addition to here, this may come up again towards the end of the course, as a tool. So hopefully we'll get that far, because that's exciting. So let's think about re-engineering translation. And we can think about two things. We can think about the genetic code here, and we can think about the ribosome. And so I'll just present you with the questions. If we consider the genetic code, what can be done to this genetic code to change an amino acid in a protein? And if we think about the ribosome, what can be done to the ribosome to change an amino acid in a protein? And effectively, can we expand the genetic code to encode something other than what it's supposed to encode? So can this code allow us to encode an unnatural amino acid? And from the standpoint of the ribosome, is it possible to design new ribosomes? So can we make a new ribosome that can incorporate unnatural amino acids into proteins? So these are separate but related, and we're going to first discuss, basically, reassigning-- is it possible to reassign a codon? So why would we want to do this? And let's think about that for a minute. And what do I mean by expanding the genetic code? So if we think about the genetic code, we all know that it encodes these 20 amino acid building blocks, and there's the start codon and the stop codons. And effectively, the codons are all used up, right? There aren't extra codons floating around that we could poach and assign to something else here. So can we overcome this? And why would we want to do that? Just broadly, if we think about being able to put something other than a natural amino acid in a protein at a specific location-- so exactly where we want it-- that opens up many possibilities for experiments. And we can think about those experiments both happening within a cell or outside of a cell. And these are experiments that just wouldn't be so easy or feasible otherwise. So maybe we'd like to study protein structure. What could we do? So fluorine is used in NMR quite a bit. Imagine if you could site-specifically incorporate an unnatural amino acid that has a CF3 group, for example, and use that in your NMR studies. So that's something you'll get to think about in the context of problem set two. Ways to study protein function, protein localization. So for instance, instead of attaching GFP, which is big, to a protein of interest, maybe it's possible to incorporate a fluorescent amino acid that lets you see that protein in the cell. Protein-protein interactions. And maybe we'd like to make a new protein that has some desired characteristic. So there are a lot of possibilities to such technology. Just to keep in mind, what do many of us do? Many of us are familiar with site-directed mutagenesis, where we can change an amino acid in a protein. And we learn many, many things from this, but it is limited to naturally occurring amino acids. Right? So we'd like something more versatile. If we think about strategies also, just a little bit, backing up here. OK, the first thing I'll just point out is how I'm going to divide this, just in case this wasn't clear: considering the native ribosome and then considering engineered ribosomes. And this is where we're going to focus today.
And if we consider strategies, other strategies to incorporate unnatural amino acids, and I guess I'll call these standard, we can imagine chemical and biosynthetic. And I'm not going to go over a plethora of examples for either route. There'll be some slides included in the posted lecture notes that gives examples and pros and cons. But one example I will give here is just thinking from the standpoint of a chemical modification, what's an example and why we might want to do better. OK, so this is independent of something like site-directed mutagenesis, where you're having an organism do the work. So if we just consider an example of a chemical modification, there's certain amino acid side chains that are amenable to modification. So imagine you purify a protein, and you want to somehow tag that or label it, right? One option is to modify cysteine residues. And so iodoacetamide and related reagents are commonly employed, so imagine that you have some cysteine. You can react this with iodoacetamide that has some R group, right? What happens? Here, OK. You can get a covalent modification, and maybe this is a fluorophore or something else, right? So this is terrific, but what are some potential problems? AUDIENCE: Sorry, would this be a way to modify the amino acid before it's incorporated into the protein? Or would this be something you would do to modify the cysteine in an assembled protein? ELIZABETH NOLAN: Yeah, this would be after the fact. So imagine you have some protein. You've isolated your protein, and you have some cysteine. Right, and you'd like-- for some reason, you'd like to modify this protein. So maybe a fluorophore to see it. Maybe you know, a CF3 group for NMR here, which then gets to the point, what are possible problems with this? AUDIENCE: Do you have to use a mild base to be deprotonated, or is it maybe deprotonated based on where it is in the protein? ELIZABETH NOLAN: Yeah, so that gets to an initial issue, which is what's required to have this chemistry to happen? Right? The cysteine needs to be deprotonated. So probably the pH of your buffer is going to be elevated some. Does your protein or enzyme like that or not? Maybe, maybe not. Yeah? AUDIENCE: You can also run into selectivity issues-- I mean, having free cysteine residues isn't common, but it could be a potential problem. ELIZABETH NOLAN: Yeah, so you need-- well, it will depend on the protein, right? Is the cysteine free or a disulfide? Is it a native cysteine, or have you done site-directed mutagenesis first to put this cysteine in the position you want? Right? And then what happens if your protein has multiple cysteines building on what Rebecca said, and you want to have this label at a site-specific location? Right? What are you going to do about that? Are you going to have non-specific labeling? Are you going to mutate out the other cysteines? If you do that, what could that mean for your protein fold or function? There's a number of caveats that need to be considered. Nonetheless, it's a possibility to do. In terms of time, this is a pretty extreme example, but I'll just show one example here in thinking about this whole process and what you do, which also builds upon Rebecca's question. So imagine a protein with two subunits. And subunit 1 has a cysteine, and subunit 2 doesn't. So for some reason, you want to do this labeling. This is actually a protein from my group. And we wanted to stick a fluorophore on it. So we have a cysteine on one of the two subunits. 
You can run this reaction and get this fluorophore modified form here. And then you can see that's the case, looking at SDS-PAGE. So here we're looking at Coomassie stain that shows us total protein, and we see there's two subunits, 1 and 2. So the molecular weights are a little different, and we can separate them on this gel. And then if we look in the fluorescence channel, what do we see? We only see fluorescence associated with subunit 1 and not subunit 2, which tells us our labeling strategy has worked well. Like, what we're showing in this equation. But what's everything that needs to be done? Well, we need to overexpress the protein in some organism. In this case, E. coli. We need to purify the protein. And once we have this purified protein in hand, we need to do the chemical reaction for the labeling. And then we need to purify that product somehow, and that's going to depend on the system you're working at. And then it needs to be analyzed, right? You always want to know what you're working with, right? So was this reaction to 100%? Did we end up with a mixture? If it's a mixture, what to do about that? So what does this mean in terms of time? And this is not for all cases, OK? This is for this exact case involving this protein shown as a cartoon here. So it takes about six days from start to finish to overexpress and purify it. Steps 2 to 4, based on the purification, we do another four days, right? So that's 10 days from start to finish, just to get this protein you'd like to use in your experiment. Right? And you can imagine if somehow a label could be put on in vivo, during this initial step here, that that would save some time at the end of the day. So before moving on to what's done for unnatural amino acid incorporation by what we'll call the Schultz method out of Professor Peter Schultz's group, just to think about biosynthetic methods for a minute. So some common ones are done for structural studies. So for instance, you can imagine feeding an organism something like selenocysteine or selenomethionine. Another example is labeling nitrogens or carbons for NMR, where the organism is fed, say, a labeled amino acid, maybe with N15 or C13 there, right? So that's just a biosynthetic method, where you're changing the growth conditions, rather than doing something to manipulate the genetic code or the ribosome. So what's the conclusion here? What we want is we want a method of site-specific incorporation of unnatural amino acids in vivo. So in a cell and in a desired organism, depending on what you want to do with high efficiency and also fidelity, so getting back to that idea and before. OK, so why do we want to do this in vivo? It allows for studies within cells, and you also can purify protein from cells, so you can do in vitro experiments as well. OK, and you can imagine, if you could have all of the pieces of this machinery in a cell, maybe there's some technical advantage to that. So this is what we're going to consider here. So to this question, can the ribosome incorporate unnatural amino acids into proteins? Effectively, what do we need to think about? One, we need to think about relaxing the substrate specificity of the aminoacyl tRNA synthetase to accommodate some unnatural amino acid, right? Somehow that unnatural amino acid needs to get to the ribosome. So if this can be done, and we can make a tRNA that has an unnatural amino acid attached to it, can this aminoacyl tRNA get to the A-site and do the work? 
So this is the method we're going to talk about in some detail for the rest of today and into Wednesday, this Schultz method. So the idea is that there's a tRNA that's dedicated for this unnatural amino acid. We see this unnatural amino acid shown here, where the UAA is indicated by the probe. We need an aminoacyl-tRNA synthetase that will take this unnatural amino acid and attach it to the 3'-end of the tRNA to give us this aminoacylated tRNA. And then what? Imagine this tRNA can make its way to the ribosome. What happens? We need a codon for this aminoacyl-tRNA. It needs to carry the anticodon, and we're going to talk about this in some more detail in a minute. So we can have a plasmid that has the DNA with the gene of interest in it, right? This plasmid DNA can be transformed into, say, E. coli that has this machinery here. We can have transcription to give the mRNA from this plasmid DNA. And then imagine translation such that this unnatural amino acid is incorporated. So effectively, where we're going is that we need a general method. We want this method to be broadly useful, where we can genetically encode this unnatural amino acid and have it incorporated in response to a unique triplet codon, here. So in thinking about that, what are the pieces that we need? And we'll think about E. coli for the moment, but this could be others. So yeast, mammalian cells, right? Let your imagination run wild with this here. We want the incorporation of the UAA in response to a unique triplet codon. So if we're going to do this, what do we need? OK, effectively, we need some new components of the translation machinery-- the protein biosynthetic machinery, right? So we need to rewind and think about the whole translation process. OK, so the first order of business is that we need a unique codon. Right? So this only designates, or uniquely designates, the UAA. And so we need to ask, where does this come from? Because we just went over the fact that the codons are used up for the amino acids, start, and stop. We need a new tRNA. OK, so this tRNA needs to be specific for the unique codon. OK, and we need the corresponding aminoacyl-tRNA synthetase, right? And we need this to load the unnatural amino acid onto the unique tRNA here. So what is a key feature of all of this? A key feature is that if we want to do this in some organism, we need this machinery to be orthogonal to the machinery in that organism. We cannot have cross-reactivity, because then there's not going to be any selectivity of this incorporation. So no cross-reaction. So what do we need to consider, in terms of these? We need to think about all of the machinery, right? And I just list some considerations here. So this new tRNA can only allow for translation of the codon for the UAA. It can't be a substrate for any of the endogenous aaRS, because then it will become loaded, potentially, with the wrong amino acid. So think back to lectures 2 and 3. This new aminoacyl-tRNA synthetase can only recognize the new tRNA and not endogenous tRNAs. So cross-reactivity again. This unnatural amino acid also can't be a substrate for endogenous enzymes. And also keep in mind, there needs to be some way to get this unnatural amino acid into a cell if we want to do this in a cellular context. So there's just a transport issue that needs to be kept in mind. Will this unnatural amino acid get into the cell? OK, so what we're going to do is consider these requirements and what was done to build up this methodology during initial work.
So the first issue is this unique codon, and what is its identity here? And so if we consider the 64 codons, they're used up with the 20 common amino acids. We have the three stop codons and the one start codon. And so in thinking about this, can ask, do we really need three stop codons? We certainly need our start, and we need codons for the amino acids. But is there some wiggle room here? And so in terms of these stop codons, we have TAA, TAG, and TGA. And these all have names. Ochre, amber, and opal. OK, and so the idea we're going to see is just the question, can we reassign a stop codon? And can we reassign a stop codon such that it's the codon for the unnatural amino acid? And so basically, if we want to reassign a stop codon, how do we choose? Right? So two things to consider. One, how frequently is each stop codon used? So what do we know about that? And then does this stop codon terminate essential genes? So we can imagine that if we were to reassign a stop codon that's used frequently by E. coli or another host, or if we were to reassign a stop codon that's important for terminating the synthesis of essential genes, in either case, the outcome could be pretty bad, right? So what was found in thinking about those issues is this amber stop codon, TAG, one, it's the least frequently used. And just for an example, about 9% in E. coli and about 23% in yeast for terminating genes. And additionally, it rarely terminates essential genes. OK, so based on this, it was decided to reassign TAG as the codon for the unnatural amino acid. OK, so we've gotten through to here. So the question is now, what about requirements 2 and 3? So yeah? AUDIENCE: I have a question. So it seems interesting to choose a stop codon to change because if the stop codon messes up, it seems more catastrophic to us all than one of the other redundant amino acids. Like, why use a stop codon? I think it's interesting. ELIZABETH NOLAN: Yeah, so there is a risk. There's certainly a risk, right? But these considerations were made to try to diminish that risk. Right? So you could make the argument that maybe all of these stop codons aren't essential, right? And what is more deleterious? Will it be to try to use a stop codon that's infrequently used, or to reassign a codon that's for an amino acid that comes up in many, many different proteins in the cellular pool? Right? So there's a judgment call there. But if we consider in E. coli, this TAG stop is for about 9% of proteins. How does that compare to, say, reassigning one of the codons to incorporate a lysine or a valine. I don't know, but that was just want to think about, how frequently is that codon used? Because certainly there's different codon usage in different organisms. Do you have something to say? JOANNE STUBBE: So it depends on what you want to put the unnatural amino acid in for. So if you want it in endogenous levels, it could be a problem. But if you're overproducing your protein-- ELIZABETH NOLAN: Yeah, it may not be a problem. JOANNE STUBBE: Then it's not a problem, because you induce, and then you flood it with that and you get high levels of cooperation. So it depends on what your purpose is. ELIZABETH NOLAN: Yeah, so is JoAnne's point clear to everyone? So you could imagine expressing at an endogenous level, right? Or you could imagine causing the cell to overexpress the proteins, like off a plasmid, like what many of you have done in lab class or maybe in research there for that. Are there many examples of reassigning a different one? 
JOANNE STUBBE: I think it's really tough. I mean, inside the cell, you really do have problems if you don't re-engineer, because you get truncations also. ELIZABETH NOLAN: Yeah, we're going to talk about that. So there is a big problem with the stop that we're going to talk about, once we get to how this is done, which is premature termination. AUDIENCE: I'm kind of confused, more or less, at that point, because the stop codon, we're just using it because it's not currently-- it doesn't code for anything, it just ends, like, endogenous sequences that are not the UAA, so if we replace it, we might not get those. But we'll get the one that we're trying to synthesize. ELIZABETH NOLAN: Right, so we need a codon for the unnatural amino acid. And right now, we're limiting our space to triplet codons, which is what was initially done when this type of methodology was developed. So the question is, we have four options in terms of bases. AUDIENCE: That are not coding, right? ELIZABETH NOLAN: No, no, in terms of our codons, right? So triplets, right? Four bases, three positions, so there's 64 codons, and they're all used up. AUDIENCE: Yeah. ELIZABETH NOLAN: Right, so there's not some extra. AUDIENCE: Yeah. ELIZABETH NOLAN: So then what can be reassigned? AUDIENCE: These stop-- so the stop. ELIZABETH NOLAN: Well, yeah, well, we can't reassign the start. Then there's no proteins, right? There's only one start codon. So the thinking was, is a stop codon dispensable? Right? And then a decision was made, based on basically the frequency of use of the stop codon and whether or not the stop codon terminates essential genes. Is this something foolproof? No, there's major problems in terms of yields that come up as a result. And we'll see that on Wednesday, right? But you need a starting point to get a method underway. So where we'll begin tomorrow is talking about where this tRNA and the aaRS come from to do this.
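As an aside on the codon-usage argument from this discussion, here is a toy sketch of how you could tally which stop codon terminates each open reading frame in a set of coding sequences. The sequences below are invented; a real count over the annotated E. coli genes is what gives the roughly 9% figure quoted earlier for TAG (amber).

```python
from collections import Counter

# Hypothetical coding sequences (5'->3', each ending in a stop codon);
# a real analysis would read these from an annotated genome.
coding_sequences = [
    "ATGGCTAAAGGTTAA",   # ends in TAA (ochre)
    "ATGCCGTTTGAGTGA",   # ends in TGA (opal)
    "ATGAAACTGCACTAA",   # ends in TAA
    "ATGTCTGGCAAATAG",   # ends in TAG (amber)
]

stop_names = {"TAA": "ochre", "TAG": "amber", "TGA": "opal"}
counts = Counter(seq[-3:] for seq in coding_sequences)

total = sum(counts.values())
for codon, name in stop_names.items():
    n = counts.get(codon, 0)
    print(f"{codon} ({name}): {n}/{total} = {100 * n / total:.0f}% of these ORFs")
```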
MIT_508J_Biological_Chemistry_II_Spring_2016
11_Protein_Folding_4.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: So where we're going to begin today is continuing with our discussions of the substrates for GroEL/GroES and analysis of the data. And after that we'll talk about the DnaK/DnaJ chaperone system here. So recall last time we left off with the question of the GroEL/GroES substrates. So inside an E. coli cell, what are the polypeptides that are folded by this macromolecular machine? And so there was the pulse-chase experiment, there was immunoprecipitation, and then analysis. And so in this analysis, we talked about doing two-dimensional gel electrophoresis, and then trypsin digest and mass spec of the various spots. So where we left off was with these data here and the question, how many polypeptide substrates interact with GroEL in vivo, so inside an E. coli cell? And what we're looking at are the various gels for total soluble cytoplasmic proteins on top, at either 0 minutes-- so at the start of the pulse-- recall that these cells were treated with radiolabeled methionine, and then there was a chase for a period of time when excess unlabeled methionine was added. So here we're looking at total soluble cytoplasmic proteins 10 minutes into the chase. And then at the bottom, what we're looking at are the polypeptides that were immunoprecipitated by treatment of this cell lysate with the anti-GroEL antibody. So the idea is this antibody will bind to GroEL, and if polypeptides are bound those will be pulled down as well. So it's kind of incredible this experiment worked. There were a bunch of questions after class in terms of the details of this immunoprecipitation, just to think about: is it a GroEL monomer, or is it a GroEL heptamer? How tightly are these polypeptides bound? How do they stay bound during the course of the workup? Where's GroES? These are a number of questions to think about and to look at the experimental section to see about answers. So where we're going to focus right now is looking at these gels. And so what we need to ask is, what do we learn just from qualitative inspection of these data? So on these gels, along the y-axis we have molecular weight, and along the x-axis the pI. So if we first take a look at the total soluble cytoplasmic proteins at 0 minutes and 10 minutes, what do we see? Do we see many spots or a few spots? Many spots, right? And we see many spots both at 0 minutes and at 10 minutes. So the E. coli genome encodes over 4,000 proteins-- roughly 4,300. And if one were to go and count all of these spots, how many do we see? It's on the order of 2,500. So they detected on the order of 2,500 different cytoplasmic proteins on these gels. What do we see in terms of distribution by molecular weight? Is it a broad distribution, or a narrow distribution? Broad, we're seeing spots of all different molecular weights, so from low to high on this gel. What about pI? AUDIENCE: It's also broad. ELIZABETH NOLAN: We also have a broad distribution in these gels, right? So we see polypeptides of low through high pI on this scale from 4 to 7. So now what we want to do is look at the gels obtained for the samples from the immunoprecipitation and ask, what do we see, and is that the same or different from what we see for the total cytoplasmic proteins up here?
So if we look at the data here, which are the polypeptides that were obtained from immunoprecipitation at 0 minutes, what do we see? So do we see a few spots, a lot of spots? AUDIENCE: It's still a lot, and it's still distributed over a pretty wide range. ELIZABETH NOLAN: OK, so let's start with the first point Kenny made, which is that we have a lot of spots, and I'd argue that's true. In this gel, we see many spots, where each spot indicates a distinct polypeptide. Do we see the same or less than here for the total cytoplasmic protein? AUDIENCE: It's less. ELIZABETH NOLAN: We see less, right? AUDIENCE: And they seem more concentrated. ELIZABETH NOLAN: Yeah, just wait a second. Right, so we see less, and that's a good sign, because an antibody was used to pull down some fraction of this pool. So about how many are here? They found about 250 to 300 polypeptides there, so about 10% of these cytoplasmic proteins were found to be interacting here. So on the basis of the experiment, we can conclude these are polypeptides that interact with GroEL here. OK, so now Kenny has a few additional observations in this gel. What are those? So how are these polypeptides distributed? And we'll just focus on C for the moment. So in terms of molecular weight, what do we see? AUDIENCE: It's all scattered, a pretty wide range of molecular weights. ELIZABETH NOLAN: And so we have a wide range, and where is that range, and how does that range compare to here? So I agree, but look at the subtleties. AUDIENCE: Most of them are above 8 kilodaltons? ELIZABETH NOLAN: Yeah, so let's roughly say in the range of 20. So if we look at the bottom part of the gel versus the top part of the gel here, and we compare that to the bottom part of the gel here and the top part of the gel here, we see some differences that aren't just the total number of spots. Rebecca? AUDIENCE: So it's like the ones that are smaller-- so the spots that correspond to the smaller proteins, they seem to be more highly charged. ELIZABETH NOLAN: More highly charged. Yeah, so let's first stick to the size. So we're seeing that in the bottom region of this gel, where we have lower molecular weight species, we see fewer of these here than here. So why might that be, if there are fewer polypeptides with molecular weight smaller than 20 kilodaltons? Steve? AUDIENCE: If you just consider the total number of possible conformations a protein or peptide can adopt as an exponential function of its size, larger proteins are more likely to have more non-productive folding pathways. So it's just less likely to have something that needs a chaperone at a smaller size. ELIZABETH NOLAN: Right, so maybe these smaller polypeptides, they need less help. Their domain structure is more simple, for instance. They're easier to fold, and other machinery can take care of that here. And then if we look at pI, what do we see? So how is the distribution in terms of pI? AUDIENCE: Large molecular weight proteins are pretty evenly distributed, but the smaller ones have more of a charge. ELIZABETH NOLAN: Yeah. How do you use the word charged? AUDIENCE: Sorry, I was looking at the scale. They actually have a pI closer to 7. ELIZABETH NOLAN: Yeah, just like you heard in recitations 2 and 3, pay attention to the scale and what kind of charge-- if you're talking about charge, you have negatively charged and positively charged amino acids. So where in that regime are you? But if we look at these areas here, we see a wide distribution.
And maybe when they're smaller we're seeing some more over here, but then ask yourself, is 22 an outlier there? So what can be done in terms of these data? This is actually an analysis of the gels looking at total proteins and GroEL-bound proteins for the total percentage in terms of pI and in terms of molecular weight. And so you can compare. And so what we see is that overall, and look a bit closer, the pI distributions are quite similar. For molecular weight, we see some differences. We also don't see that many proteins that are greater than 90 kilodaltons being folded by this machine. And then again, why might that be? We learned that the chamber can accommodate polypeptides up to about 60 kilodaltons, so maybe they're just too big here. So what are the identities of these proteins here? So this is where the trypsin digest and mass spec comes into play. So you can imagine extracting the spots, digesting them with the protease trypsin, and then doing mass spec analysis to find out the identities, and comparing that data to databases of E. coli proteins. And so from that, of the 250 to 300 proteins that they identified in these immunoprecipitation gels, they were able to identify 52 without a doubt. And what are some of those 52 proteins? So I've just highlighted a few examples. What do we see? So here's our friend EF-Tu as one example. We see a subunit of RNA polymerase, ferritin, and certain ribosomal proteins. So just thinking about these proteins and their roles in translation, in RNA polymerization-- ferritin is an iron storage protein. What do we think? What are our thoughts about these proteins? They're pretty important, right? Imagine if EF-Tu couldn't adopt its native conformation. There might be some major problems. And recall when I introduced GroEL/GroES, we learned that they fall into the category of chaperonins, so they're essential for life. So that makes sense in terms of seeing some of these proteins as being very important. And what about structural motifs? So then we ask, OK, these are the 52 proteins we identified, what are their structural features, and what does that tell us about this chaperone? The conclusion is that overall, the proteins identified have quite complex structural features. So these can range from complex domain organization to beta sheets, including those that are buried and have large hydrophobic surfaces here. And so we can speculate that maybe some of these hydrophobic surfaces interact with the GroEL apical domain to have these polypeptides enter into the chamber. Here, was there a question? AUDIENCE: Well, I was going to ask, I don't know for ferritin, but I know that you need a lot of ferritin molecules to form the thing. But all of those are also-- and again it's only 4 out of 52, but they're all proteins that exist in relatively high abundances. So could you also be making the argument that proteins that are more likely to have high concentrations, and therefore a higher probability of aggregating just because it's a bimolecular reaction, could favor binding to GroEL? ELIZABETH NOLAN: Yeah, I even thought about it in terms of they certainly are abundant. It could be, I just don't know. AUDIENCE: The experimental setup also biased it towards more abundant proteins. ELIZABETH NOLAN: Yeah, so could that have happened in the experimental setup? It's a possibility. So we learned that EF-Tu is on the order of 10% of the total soluble protein. So that's something also to keep in mind, and a good thought there. So what else can we learn?
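To make that distribution comparison concrete, here is a minimal sketch of how spots from the total-lysate gel and the GroEL immunoprecipitate might be binned by molecular weight and compared as percentages. The spot lists below are invented placeholders, not the published data; the real comparison came from quantifying the 2D gels themselves.

```python
import numpy as np

# Hypothetical molecular weights (kDa) for spots on each gel; in the real
# analysis these would come from calibrated 2D-gel image quantification.
total_mw = np.array([12, 18, 25, 30, 34, 42, 55, 63, 71, 88, 95, 110])
bound_mw = np.array([24, 31, 36, 41, 48, 52, 57, 33, 45, 59])

bins = [0, 20, 40, 60, 90, 200]          # kDa bin edges
labels = ["<20", "20-40", "40-60", "60-90", ">90"]

for name, mw in [("total lysate", total_mw), ("GroEL-bound", bound_mw)]:
    counts, _ = np.histogram(mw, bins=bins)
    pct = 100 * counts / counts.sum()
    summary = ", ".join(f"{lab}: {p:.0f}%" for lab, p in zip(labels, pct))
    print(f"{name:>13s} -> {summary}")
```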
One more observation from these experiments before we move on to DnaK/DnaJ. So recall last time, when we talked about the actual pulse-chase experiment, they took samples at multiple time points. And so why did they do that? You can imagine doing this analysis not just at 0 minutes and 10 minutes, but at a variety of time points, and ask, if we compare gel to gel and we compare spot to spot-- so going back, many of these spots are labeled in here-- we can ask the question, how does the intensity of that spot change over time? And what does that tell us about the interactions of that polypeptide with GroEL? So just for example here, imagine at time equals 0 we see some protein or polypeptide x. So then what happens at, say, time equals 2 minutes? If we do not see it, let's consider two options. If we do not see x, maybe we conclude that x dissociates quickly or folds relatively easily. Imagine if we do see x after 2 minutes here, maybe the conclusion is x is not yet folded here. And then we can imagine doing this at different time points, and they went out to 10 minutes here. So maybe if we see x at 10 minutes, the conclusion is x is difficult to fold. And, too, we want to think about these time points from the standpoint of what we saw in terms of the residency time of a polypeptide in the GroEL chamber. So we saw from the various models that that's somewhere on the order of 6 to 10 seconds. So there can be multiple binding and release events that occur. So in this paper, what the authors did is trace the spots and compare the intensities of the spots over time. And you can do a little exercise from these gels, looking at spots they circled, and just ask qualitatively, what's happening to the spot? Is the intensity staying the same? Is it being reduced? So for instance, it's easy to look at spot number 22 here at 0 minutes versus spot 22 at 10 minutes. And what do we see? Does it look the same, more intense, less intense? Less intense, right? What about spot number 12 at 0 minutes versus 10 minutes? They look quite similar by eye. So you can imagine doing this type of exercise through each gel, and actually doing it quantitatively using some instrumentation. So what do they see? Effectively, in this work they divided the data into three groups based on certain trends. And that's shown here, where what we're looking at is the relative intensity change versus time. So you can imagine at some time point that spot has a maximum intensity that they've put at 100. So we see the three groups here. And the question is, if we look at these as groups, what do the data show? So in group one, we see examples where the spot at time equals 0 is at a maximum, and then the intensity of that spot decreases over time. And the other thing we see is that at some time that isn't very long, the intensities go to approximately 0. So we're not seeing these polypeptides bound any longer. And then effectively what we want to ask is, do these polypeptides have any similar features? And what the authors observed is that the polypeptides falling into this group, showing this behavior, are smaller than 60 kilodaltons. And as shown here, they're seeing them completely released over the time course of this experiment, and in general within the first 2 minutes. So what does that correspond to? How they interpreted this was that these polypeptides are either binding GroEL once or having several rounds of binding, and ultimately reach their folded state in this relatively short time period. So how does group two differ?
Looking at these data, what do we see in group two that's different from group one? [INAUDIBLE] Yeah, we're seeing the relative intensity never go all the way to 0. So here we've gone to 0; here we see 20% to 30% as the cutoff. So how are these data interpreted in this work, and what are the identities of these polypeptides? So similar to group one, these polypeptides are also all smaller than 60 kilodaltons. And how this behavior is interpreted is that even after 10 minutes, there's some fraction of these polypeptides that are still associated with GroEL. So they haven't reached their native fold and are remaining bound. What's going on in this group here, group three? This behavior is very different. AUDIENCE: You see peak intensity is a little bit later than the rest of them, and they also don't go to 0 after 10 minutes. ELIZABETH NOLAN: Yes. So these proteins are interacting with GroEL, because they were pulled down, but it looks like they're interacting at later time points. So we see this growth in terms of increase in intensity over time, and then they go down. And here we see 40% or higher. So they are not readily dissociating, binding at longer time points. So one question here is, are these dead-end species? And within this work, the authors did some additional controls, for which there's some detail in the notes I'll post with lecture. But effectively asking, what happens if we add in GroES, what happens if we add in ATP? Do we still see these species or not? And some of them were released under those conditions there. So in summary, what we see from this is a method to look at chaperone substrate selection in the context of a cell. We see that GroEL folds proteins over a range of sizes, but not really the small ones. So under 20 kilodaltons not so much, and over 60 kilodaltons not so much here, and these polypeptide substrates have complex native folds. So where we're going to close the chaperone unit is with looking at the machinery DnaK/DnaJ. And so we'll introduce that system and then look at a similar series of experiments where the substrate scope for this chaperone system was evaluated. So if we go back to the overview from the start, where all of these players were introduced, this is where we are now. So we're looking at DnaK and its co-chaperone DnaJ. So these are downstream of trigger factor. What do we have for DnaK and DnaJ? So these are heat shock proteins. DnaK is an Hsp70, so 70 kilodaltons, and Hsp70s are ubiquitous. So just to note, they're involved in a variety of protein quality control functions. So we have folding, as we'll talk about in the context of today's lecture in this module, but also roles that range from protein transport to assisting with protein degradation. So here we have Hsp70 for DnaK and Hsp40 for DnaJ. So in this system, DnaK is the chaperone and DnaJ is the co-chaperone, and DnaK is ATP-dependent. It's monomeric, so with this system we don't have a chamber like we have with GroEL/GroES, and it's ATP-dependent. DnaJ is the co-chaperone here. So what happens in terms of this system? So effectively DnaJ, the co-chaperone, scans hydrophobic surfaces of proteins or polypeptides, and it associates with them, so it binds. And then what DnaJ does is it delivers non-native polypeptides to DnaK. And then how we think about DnaK is that DnaK binds and releases unfolded polypeptides. And this is another case where there can be multiple cycles of binding and release.
So DnaK will bind to a polypeptide that has an unfolded region, there'll be some period of time that that complex exists, and then DnaK will release it. And so in terms of where it likes to bind, these are typically six to nine amino acid segments that are hydrophobic. So it likes residues like leucine and isoleucine. And statistically, this type of region occurs about every 40 amino acids. And for these segments, just to note that there's a range of binding affinities. You can imagine there's a variety of possibilities here. And what's found from studies is that the KD of DnaK for various polypeptides can range from about 5 nanomolar to about 5 micromolar, so by several orders of magnitude. In terms of size of polypeptide, it's stated that DnaK has some preference for polypeptides on the order of 20 to 30 kilodaltons, but it can bind larger ones, and it can bind polypeptides greater than 60 kilodaltons, as we'll see later. So in this system there's another player that we need to think about, and that's this GrpE-- "grip E." And what we have here is a nucleotide exchange factor, so an NEF, and it's also a thermal sensor. And what GrpE does is it regulates DnaK binding to a substrate by inducing ADP release. So what we'll see is that the ATP- and ADP-bound forms of DnaK have different affinities for these polypeptide substrates. So what we're going to do is look at the structures of the components of this system and then look at the cycle. And so if we consider DnaK, we think of this protein as having two different domains. So there's an N-terminal domain and a C-terminal domain. And in this N-terminal domain what we have is the nucleotide-binding domain, NBD. So this is where the ATPase activity occurs, and this is about 44 kilodaltons here. There's a linker region, and then in the C-terminal domain we have the peptide-binding or substrate-binding domain. This is 27 kilodaltons. So here, if we think about this part just in cartoon form, what's observed is that there's a cleft for binding ATP or ADP. So ATP or ADP binds here, and this is also where the nucleotide exchange factor GrpE will interact, because that's its job as a nucleotide exchange factor, to help with that. So basically we have GrpE here. What we see in this domain is often described as a beta sandwich plus an alpha-helical latch. And the idea is that this latch closes in the presence of the polypeptide. So effectively, if we look at this as a cartoon-- and we'll look at actual structures in a minute-- this peptide-binding domain can either be in an open form, and this is the latch. You have the alpha-helical part; here's the beta sandwich. And if there's some polypeptide to bind, what happens is that the latch closes and the polypeptide is bound here. So this is the closed form, and this pocket is hydrophobic. And that makes sense based on what we know about DnaK liking to bind hydrophobic stretches. So let's look at some structures of DnaK. I present two slides of structures here, one from the assigned review and this other version. And I'll just focus on this one here. So here we're looking at the domain organization. What we have here is the nucleotide-binding domain. So here's that cleft for ATP binding. Here we're looking at the peptide-binding domain. So the beta sandwich region is in green, the alpha-helical latch is in yellow, and we see that there's a model polypeptide here, and this is in the closed form. Here's another view of DnaK with a peptide bound.
So we see the beta sandwich; here's the alpha-helical latch. This depiction here from the review is showing the closed and open states, and closed and open is referring to the green area here. So don't get confused with the nucleotide-binding domain and how these are shown. So what we see here, again, there's a bound polypeptide in this peptide-binding domain, and here there's no bound polypeptide. And we see that now this alpha-helical region is sticking up there. So what about DnaJ? If we consider DnaJ, we're just going to focus on the domain organization and just a more simplified view than what's on the slide. We have two domains: one for DnaK binding, and then one for peptide binding. So DnaJ is going to go out there and find some polypeptide that needs the help of DnaK. It's going to bind that polypeptide and deliver it to DnaK. So effectively, it interacts both with the polypeptide substrate and it also interacts with DnaK when delivering this polypeptide. So just to point out, DnaJ is part of an Hsp40 family, and these are quite diverse. I just illustrate that from the range of different sizes, so from about 100 to about 2,000 amino acids. And all of these Hsp40s have what's called a J domain, and in this more detailed depiction here it's indicated that these 70 amino acids at the N-terminus are the J domain, and they're important for interacting with DnaK or another Hsp70. So what about GrpE? This nucleotide exchange factor, GrpE, is a homodimer. And if we just look at one monomer, and then I'll show you the structure. So in '97 a crystal structure of GrpE with the DnaK nucleotide-binding domain was published, and this is what came from that. So just use your imagination; maybe I'll draw this a little differently. Basically what we see with GrpE is that there's a beta sheet, and this is the C-terminal region. And then what we see here is an extended alpha helix, and this is the N-terminal region. And this is just a cartoon of the monomer. So what happens is that the GrpE homodimer uses the beta sheet of one monomer to insert into that ATP-binding cleft here of DnaK. And when that happens, it forces it open there. So let's look at the structure, and this is something that actually puzzled me for quite some time, but there's been a recent update. So this is a crystal structure of the GrpE homodimer. So we see one monomer in blue and one monomer in green bound to an N-terminal nucleotide-binding domain of DnaK, which is shown in pink. And so we see the beta sheet region of each monomer; we see the extended alpha helix. The C-terminal end is here, the N-terminal end is here. And I note that not shown in this structure, there is an unfolded region beyond the end of GrpE shown here. And so we see this nucleotide-binding domain interacting with one of the beta sheets. So a one-to-one stoichiometry. So the idea, as we'll see when we go through the cycle, is that GrpE is inserting the C-terminal beta sheet into the nucleotide-binding cleft of DnaK. And this happens for the ADP-bound form, and it facilitates ADP release. So what's going on down here? Why is there this extended alpha helix? And I'll just note there is a study just in the past year where interactions between DnaK and GrpE were studied in some more detail. So they used some biochemical experiments, some cryo-electron microscopy.
And what they learned is that the interactions between GrpE and DnaK are more complex than what's seen here. What they observe in their cryo-EM is evidence for this N-terminal region interacting with the substrate- or polypeptide-binding domain of DnaK. So there are some dynamics and flexibility that we can't appreciate from this crystal structure. And that raises the question, how else is GrpE facilitating this cycle and modulating the conformation and function of DnaK? You're not responsible for these details, but if it's something you're curious about, I've included the reference. So effectively, GrpE accelerates the release of ADP, and that in turn promotes binding of ATP. So what is the functional cycle? We'll look at this depiction here; there's another depiction in the notes from the reading. This is the current model, and in this model, we're going to start here. So what do we see? We have DnaK in the ATP-bound form. We have the two domains-- the nucleotide-binding domain, and here the polypeptide substrate-binding domain. In this cartoon, we see that the alpha helical latch is open, so no polypeptide is bound. And what we also see is that the ATPase activity here is very, very low. So DnaK is not hydrolyzing its ATP. So then what happens? DnaK-- sorry, DnaJ, the co-chaperone, has found some polypeptide substrate-- indicated by this S-- that needs the help of DnaK. So DnaJ binds the polypeptide substrate, and it delivers that polypeptide to DnaK. What does this cartoon tell us? It tells us that J is interacting with K, and here we see the polypeptide substrate being delivered. When DnaK is in the ATP-bound form, it binds peptides with relatively low affinity and in a reversible manner. So there's fast exchange; that polypeptide is going to come on and off. And when DnaJ binds and delivers the polypeptide, it activates the ATPase activity of DnaK. That's indicated here. The ATPase activity is enhanced substantially, and you can compare the values for some quantitative insight. There's ATP hydrolysis, and ATP hydrolysis results in release of DnaJ and Pi. So now what do we have? ATP is hydrolyzed, and now we have ADP bound in the nucleotide-binding domain. And what do we see? The latch has closed-- open, closed. So like what we saw in the structures with those model polypeptides bound, we have the substrate clamped in this latch. Here we have a form of DnaK that binds the polypeptide with high affinity and slow exchange. This state is considered to be long-lived, on the order of 10 to 15 seconds. So the question is, if this form is binding the polypeptide with high affinity and slow exchange, how do we release it? And that's where the nucleotide exchange factor GrpE comes into play. So here comes GrpE. GrpE binds, and GrpE binding results in release of ADP from the nucleotide-binding domain. So GrpE is inserting its beta sheet into that cleft, and it looks like something else is happening with that long alpha helix to facilitate this. But this was drawn before that 2015 study, so we just see it interacting here. But imagine that this region here is maybe interacting down here and doing something to facilitate peptide release. So now what? No nucleotide is bound, according to this model. Since the ADP is released, ATP binding is facilitated, so ATP can bind. And what do we see? There's release of the peptide, release of GrpE, and this cycle can start over again. So effectively, the release of ADP is accelerated about 5,000-fold by the action of GrpE.
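A quick back-of-the-envelope sketch of what that 5,000-fold acceleration means for the clamped state. The cycle states are summarized in the comments; the basal ADP off-rate is an assumed, illustrative number chosen only to echo the qualitative points above, not a measured value from the reading.

# Cartoon DnaK cycle:
#   1. DnaK*ATP, latch open      -> DnaJ delivers substrate and stimulates hydrolysis
#   2. DnaK*ADP*substrate, closed -> GrpE binds and ejects ADP (~10-15 s lifetime)
#   3. DnaK*substrate, no nucleotide -> ATP rebinds
#   4. DnaK*ATP, latch reopens   -> substrate and GrpE released; cycle restarts
# Below: how GrpE's ~5,000-fold acceleration of ADP release shortens state 2.
# The basal off-rate is an assumed, illustrative value.

ACCELERATION = 5000.0        # factor quoted in lecture for GrpE action
k_adp_off_basal = 0.02       # s^-1, assumed basal ADP release rate (illustrative)

for label, k in [("without GrpE", k_adp_off_basal),
                 ("with GrpE", k_adp_off_basal * ACCELERATION)]:
    lifetime = 1.0 / k       # mean lifetime of the ADP-bound, clamped state
    print(f"{label:13s}: k_off(ADP) = {k:10.2f} s^-1, mean clamped lifetime = {lifetime:8.3f} s")

The point is simply that nucleotide exchange, not ATP hydrolysis, is what sets how long a substrate stays clamped, which is why tuning GrpE activity (for example under heat shock) tunes holding time.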
GrpE is also called a thermosensor, and we can begin to think about why that might be. If, say, there's a condition of heat shock or stress, maybe the cell wants DnaK to be able to hold on to this polypeptide rather than release it. So GrpE won't be doing its job under those conditions. So this is another example of ATP binding and hydrolysis modulating the activity of these chaperones. So we need to think about what the substrates for DnaK/J are, and what the chaperone system is doing. We defined the possibilities as foldases-- like what we saw with GroEL-- holdases, unfoldases; what's happening here? And so in thinking about the in vitro substrates-- or sorry, in vivo substrates-- what are the experiments we're going to do? Can we take the method used for GroEL/GroES and adapt it to this system? Are you convinced that method was useful, or are you down on that method? AUDIENCE: It can probably be adapted. ELIZABETH NOLAN: Yeah, right, it can be adapted. So we can imagine again doing a pulse-chase here, and we can imagine the same experiments where we have our E. coli with no methionine, to deplete it. We can pulse with radiolabeled methionine-- again, 15 seconds, 30 degrees Celsius-- to let us see newly synthesized polypeptides. And this gives us a way to ask which newly synthesized polypeptides DnaK and DnaJ acted on. Then we can chase with excess unlabeled methionine for 10 minutes. And again, we can take samples at varying times and do rapid lysis. In this case, rather than using EDTA to quench, what they did was rapid ATP removal by adding an ATPase here. So just realize that there are themes and variations in terms of how you can quench these. So what do they find? We'll go over the data in more detail starting on Monday. And what do they need to do to find that? In this case, they need an antibody to DnaK if there's going to be an immunoprecipitation, right? So in these experiments, effectively we're to this point. They immunoprecipitated with their DnaK antibody. Of course, the specificity of this antibody needed to be studied, and then they used SDS-PAGE to analyze the immunoprecipitates. And as we'll see when we discuss the data next time, the experiments were analogous to what was done with GroEL/GroES, but with a few differences. They were less sophisticated in terms of the approach. They used just standard 1D SDS-PAGE rather than 2D, and they didn't go through the process of doing trypsin digestion and mass spec to identify the polypeptides. So it's more of a qualitative look. But we're going to ask, starting on Monday, what they learned from analyzing these gels about the substrate scope of DnaK/J. And then we have to ask the question, how does that help our understanding in terms of the type of chaperone activity that's occurring? So with that, I'll close. We'll end the chaperone unit with those experiments on Monday, and then we'll transition into module 3, the proteasome and degradation chambers.
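As a side note on how this kind of pulse-chase/co-immunoprecipitation data gets turned into numbers: for each chase time one compares the radiolabel that co-precipitates with DnaK to the total label in the lysate. The band intensities below are made-up, illustrative values; only the arithmetic is the point.

# Toy quantification of a pulse-chase / anti-DnaK co-IP experiment.
# Intensities are arbitrary made-up numbers (e.g., phosphorimager counts);
# the point is the fraction-bound calculation and how it decays over the chase.

chase_times_min = [0, 1, 2, 5, 10]
coip_signal     = [420, 300, 190, 90, 35]      # label co-precipitating with DnaK
total_signal    = [1000, 1000, 980, 970, 960]  # total label in the lysate

print("chase (min)   fraction of labeled protein bound to DnaK")
for t, bound, total in zip(chase_times_min, coip_signal, total_signal):
    frac = bound / total
    print(f"{t:8d}       {frac:6.2f}")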
Recitation 12: Mass Spectrometry of the Cysteine Proteome
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: OK, so I think what I'm going to do is I'll give you an overview of what the hypothesis is that we'll be talking about in class the next time, probably the next time and a little bit into Wednesday. And then this module will be over. So this is covering the second lecture, the good NADPH, the good function of NADPH oxidases. OK, so it's taken from this Kate Carroll's paper who-- and we'll continue where we left off last time. We'd gone through part of the paper you were supposed to read in Angewandte Chemie, but we hadn't gotten through the whole paper. And there were parts of it that I wanted to look at in more detail. But this is sort of a model. So this is really sort of-- and I think the model is easy to understand. And these kinds of models have been in the literature for 10 or 15 years. But really, there hasn't been experiments that sort of show how this all goes together using multiple methods. And the underlying theme-- and let me also say, in the handout I will give you, I've given you an Annual Reviews in Biochemistry paper by the [INAUDIBLE] lab, which shows how much more complicated life is than this simple cartoon. OK, so everything we look at, you start out with the simple cartoon. But when you study it, the more you study it, the more complicated it gets. So if you care about things only at the circle and square level, then you get a cartoon. But if you really care about how it works, you have to do a lot of work to figure it out. OK, so this sort of model is similar to all-- we're going to be talking about epidermal growth factor. And it's similar to how many growth factors work that I think wasn't outlined in this paper, but maybe it was the first paper, that this model is sort of a generic model for many of these systems. Although, the signaling pathways are distinct. So you have the epidermal growth factor receptor, which has a single transmembrane region. And this little-- what is this little domain in the cytosol? That's the tyrosine kinase activity. And it can react on itself. And this little ball is the growth factor. And so the model has been that, when the growth factor is around, that the proteins can dimerize. And so that's what's indicated here. So this is the model that was taken out of the paper. And in the presence of ATP, they can phosphorylate each other. OK, so that kinase domain has become active. And one of the questions is how do you know that phosphorylation makes it active. So you have to design experiments to test this. And if you go back, and you think about in introductory biochemistry course, you saw many cascades. And sometimes, phosphorylation activates. And sometimes, it inactivates. So all of that needs to be studied in some fashion. And so what we're going to be talking about in the module 7, which is the reactive oxygen species, is this protein, the NOX protein, NADPH oxidases. And what's the function of NADPH oxidases? What are the cofactors required? Anybody get that out of the paper? So you have an NADPH oxidase. So obviously, what's one of the cofactors? AUDIENCE: NADPH. JOANNE STUBBE: NADPH. OK, so what is another cofactor? Did any of you look at that? 
Or you haven't read the handout, first handout on reactive oxygen species, because it's also-- there are six isozymes of these NADPH oxidases. And they all form different complexes, but the basic chemistry is the same in all of them. So does anybody know? No. Any guesses? Ultimately, we have to convert the function of this protein-- hopefully, you know this is an unusual function. It converts oxygen into superoxide, which is one of those reactive oxygen species we saw this morning. OK, and this is the only known protein that does that, that's specific function is to generate superoxide. A lot of times, you do generate superoxide, but it's an artifact of uncoupling reactions or other consequences inside the cell. So we have NADPH, and that's on this side and the cytosolic side. This is the extracellular side. No guesses as to what the cofactor is? OK, so it has two kinds of cofactors-- FAD, OK, and the second cofactor are two heme b systems. So you have three cofactors-- FAD that are bound within the membrane and two hemes. And so that's true of all these systems. So NADPH is a two-electron donor. And what are we generating over here? We're generating superoxide from oxygen. So that's one electron. So the major function of NADPH is not to go from two electrons to one electron, but it transfers electrons to the flavin. They have similar redox potentials inside the cell. And then the flavin, because it's long and planar, can do both two-electron and one-electron chemistry. And if you have a heme, you can only do one-electron chemistry because you toggle between iron(III) and iron(II). So we'll talk about that in class next time. But somehow, you do chemistry over here. And this to me is the most amazing thing, and it should have upset you when you read this paper. OK, you should now start-- we're at the end of this course. You should now start, you know, you say how the heck can that happen. I don't believe any of this. This is the way I read all papers. I don't believe of this. Right? [LAUGHTER] So I come in. I'm at the one extreme. I don't believe anything I read because I think people are really sloppy experimentally. But you guys should start thinking like that. So we're going to generate superoxide. And it's on the outside of the cell. OK, This is a transmembrane system. So is there anything that links, if you read this paper carefully, and we'll look at data, but is there anything that links these two proteins in this paper that you were supposed to read? Which, even if you read it, there's not that much data there. You know, I guess there is a lot of data within each panel. But what the data is in each panel, they show you something, and then they quantitate it adjacent to it. So it's the same data just transferred into some kind of a bar graph because that's the way biologists do things, rather than actually drawing some kind of-- showing you some kind of a model with kinetics on it. So what do we know about these two proteins from what's in this paper? Any of you get anything out of that? AUDIENCE: They form a complex. JOANNE STUBBE: Yeah, they form a complex. So we'll come to that. But one of the experiments they did was they had antibodies to this, antibodies to this, and they showed you a picture where green and red went to yellow. And that was the evidence that these things might be linked. So what's unusual is that you're generating superoxide extracellularly. OK, and it's charged. In general, it doesn't go through membranes. 
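Since the rest of the discussion leans on this model, it may help to have the underlying chemistry written out in one place. A minimal LaTeX sketch (assuming amsmath; a summary of the standard stoichiometries implied by the cartoon, not a figure from the paper) of the three reactions involved: NOX-catalyzed superoxide production, proton-driven disproportionation to hydrogen peroxide, and oxidation of a cysteine thiolate to the sulfenic acid.

\begin{align*}
\text{NOX:}\quad & \mathrm{NADPH + 2\,O_2 \;\longrightarrow\; NADP^+ + H^+ + 2\,O_2^{\bullet-}} \\
\text{disproportionation:}\quad & \mathrm{2\,O_2^{\bullet-} + 2\,H^+ \;\longrightarrow\; H_2O_2 + O_2} \\
\text{sulfenylation:}\quad & \mathrm{RS^- + H_2O_2 \;\longrightarrow\; RSOH + OH^-}
\end{align*}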
But what you see, if you read the paper again, is the function of the NOX protein is to modulate the activity of the tyrosine kinase, which is intracellular. So the first thing that's bizarre is how do you get this superoxide inside the cell. OK, so that should have bothered-- it still bothers me. I mean, we have a model for how this works, but that's what the model is in all of these systems. And superoxide we'll see in the presence of protons. And protons, you don't have to have very acidic to have protons. It can cause disproportionation of superoxide to form oxygen and hydrogen peroxide. That happens actually quite fast. So the proposal is that somehow this disproportionates to form hydrogen peroxide. And it's the hydrogen peroxide that gets inside the cell. Hydrogen peroxide is neutral. Superoxide is charged. OK, and so the model has been that this molecule, AQP-- does anybody know what that was? Did-- AUDIENCE: It's an aquaporin. JOANNE STUBBE: Yeah, so aquaporin, so what does that do? They won the Nobel Prize for this two years ago for it supposedly is a water channel. AUDIENCE: For water, yeah. JOANNE STUBBE: Yeah, and we three-dimensional structures that people have gotten very excited about. So hydrogen peroxide sort of looks like water. AUDIENCE: And you can fit it through the aquaporin pore. JOANNE STUBBE: Yeah, so well, that's the model. Yeah, and there is a model from people that have looked at the structures, have modeled that this could go through. So now you generate hydrogen peroxide. OK, and so the hydrogen peroxide, what we've been talking about is post-translational modification by reacting-- what are we reacting? We're reacting a sulfhydrl group. Let's see if I brought my chalk. I don't have any good chalk. They don't make fat chalk anymore. All right, so they're converting-- we what did we talk about last time? We were talking about this reaction. This was the major focus. So we're forming the sulfenic acid. OK, and this whole paper is about, number one, do you form sulfenic acid inside the cell. And does it affect signaling? That's the focus. And her hypothesis was that it does. And so then you need to know something about the signaling pathway, which other people have studied. And so there are apparently two signaling pathways, which we're not going to pay that much attention to, but you need to pay attention to it enough because some of the figures in the paper-- what did some of the figures in the paper look at? Anybody remember that? Were you confused by that? It was in figure 1 if I remember correctly. AUDIENCE: Yeah, was it phosphorylation of these [INAUDIBLE]. JOANNE STUBBE: Right, so they looked at phosphorylation. And how can you do that? You can have an antibody. AUDIENCE: Western. JOANNE STUBBE: Yeah, by some kind of Western. So in the first figure, if you hadn't looked carefully at this figure, you wouldn't know why they were looking at phosphorylation at all. But what they're trying to look at, like in many of these systems, is a signaling pathway where something phosphorylates then dephosphorylates. OK, so these are the two signaling pathways, and they integrated that into their analysis. And so now the key questions, if you look at the tyrosine kinase activity, why would you think-- what did you learn in this paper sort of grossly about the potential for sulfenylation? Anybody read? Did anybody look at the supplementary information? I think that's always the best part of the paper. See, this is what the issue is nowadays. Now I'm digressing again. 
So I'm allowed to do this. I only have another five lectures to teach, and I'm finished anyhow. I mean, to me, it's sort of like that's where all the data is. And so nowadays, nobody puts data in papers. OK, it's impossible to evaluate a paper. It's all in the supplementary information. And what irritates the hell out of me is that most people just dump it. That is they don't present it in a way that's thoughtful so that someone like me-- I don't care. Most people don't want to read all the details. That's fine. But then if you're going to present it, it should be so that I can read it and go back and forth and figure out what's going on. Anyhow, here's a case where they talk-- do you remember that they talked about the kinase activity in this paper? Anybody remember that? We will get to that data if I get off this slide. AUDIENCE: I think in Blake they talked about the fact that it enhances the kinase activity if you have the post-translational modification. JOANNE STUBBE: Right, so what did they measure? Did you look at that? There's a figure where they focused on that. I can't remember what figure it was, probably figure 4, 5. So were you confused? It was one thing where you should have been confused because you wouldn't know what any of the words meant. So well, no. So what did you do? I mean, I had to do the same thing. I went and googled it and looked up, and I figured out what the words meant. And then I understood. And we're going to go through that because the reagents are a key thing. You need to understand what the reagents do to be able to think about what you're looking at. So anyhow, we'll come back to this, but, you know, we have-- this is a major target of drugs. There are drugs that are used clinically in treatment of cancer that target the epidermal growth factor receptor. And they inhibit the kinase domain, which were used to ask the question-- remember we talked about this last time, the importance of reversibility of this. And if you modify it, what is the effect? OK, so is the effect that you alter the downstream signaling, whatever is downstream if you know something about that? Or the other effect can be is do you modulate the catalytic activity. And does anybody get that out of one of the figures? Or was this paper so hard we didn't even get that? Look in figure-- I need a copy of the paper. Oh, I don't think this paper was that hard, but you had to work at it because the figures were small, and there was a lot of information. So if you look at figure 5, anybody look at figure 5? We'll come back to that in a minute. So to me, when I look at-- and you'll see the way I write this down. You have a panel of stuff. You have a panel of all these figures. Well, you know, usually, the title tells you what the whole figure is about. But then at the end of this, you'd like to be able to look at the data without any input from the person writing the paper and draw your own conclusions. So at the end of this, this is what I do in every paper. I look at the figures, and then I draw a conclusion from the figures. And if I can't draw this-- I do this with my students too. If I can't understand from looking at the figures what the key conclusions are, they didn't write a good paper. So this here, if you look at figure 5, you will see that there's activity. And so remember, if you're going to do this, why would you want to do it? It's got to have some effect. Right? Or it's not interesting. 
And so the key issue is that you could get chemistry like this that's happening, and it isn't interesting. And it's very challenging to look at this chemistry inside the cell because thiols get oxidized fairly easily. So what you want to do is not only-- and this is what we were focused on last time is seeing if this happens. OK, we haven't gotten to inside the cell yet. But then the question is is it connected to signaling. And that could be related to the activity of the tyrosine kinase, which triggers off the phosphorylation cascades. In a way you, you have to look up what other people have done to sort of understand that. OK, so now we have the issue is that we made hydrogen peroxide. OK, supposedly, by this model, it's gotten inside the cell. And so now the model is that it does this. OK, so what you see is the issue with this model, which is why people have been fighting over this, is the rate constant for this reaction. Even if you have a thiolate, it's not very fast. OK, so that's something we'll talk about in class. But the Winterbourn paper, when you read that, focuses on the rate constant. So this is a second order rate constant. It's like 1 per molar per second, really slow. So if this was happening over the period of hours, and your signaling is finished in 15 minutes, you've got a serious problem. So you have to deal with that problem. And people are finally dealing with that problem. And there are proteins you'll see in the next lecture that, you know, the pK of the sulfur isn't all that perturbed, but they're able to react with hydrogen peroxide much, much faster. And that that's going to be a key piece of information. So if you get modification, then the question is what is the consequences of the modification, which is what the paper is about. So the other key player in all of this, if you have something phosphorylated, so here we are something phosphorylated. So that's a kinase. It phosphorylates itself. But then whenever we have a kinase, we usually have something that clips off the phosphate, which is a phosphatase. So that's PTP. And we'll see that the phosphatases we're dealing with all have cysteines in the active site. OK, and the cysteines are all-- so the PTPs all have an active site cysteine. And this active site cysteine-- you've seen this over and over again-- is involved in covalent catalysis. OK, we've seen hundreds of examples of this now. So you might not know what's involved, but that would be a good guess based on everything we've seen. So it turns out the question is this is the active form. OK, so how could you shut it off? You might be able to shut it off by sulfenylation. So this would be active, and this would be inactive. So what you're looking at is, again, another method of post-translational modification-- this is the hypothesis-- that can affect the activity in these cascades. And does anybody remember what conclusion people drew about the phosphatase? That was another figure in the paper. Does anybody remember? There's a lot of information. But in the end, there aren't that many conclusions you could draw. But part of the problem is that I haven't gone over this cascade in lecture. AUDIENCE: They were dark. JOANNE STUBBE: So I realize that's putting you at a disadvantage. But you've had a couple of weeks to read this now, so yeah. AUDIENCE: They were differentially sulfenylated depending on whether or not you had EGF present. JOANNE STUBBE: Right, so they were differentially-- so nobody knew what the phosphatase was. 
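Stepping back for a moment, the rate-constant worry raised above can be made concrete. With a second-order rate constant of roughly 1 per molar per second, the pseudo-first-order half-life of a typical protein thiolate toward hydrogen peroxide is enormous at plausible intracellular peroxide levels. The peroxide concentrations and the "fast peroxidatic cysteine" rate constant below are assumed, round illustrative values used only for comparison.

import math

# Pseudo-first-order half-life of a thiol(ate) reacting with H2O2:
#   t_1/2 = ln(2) / (k2 * [H2O2])
# k2 ~ 1 /M/s is the slow value discussed for a typical thiolate;
# 1e7 /M/s stands in for a highly tuned peroxidatic cysteine (assumed round number).

def half_life_s(k2_M_s, h2o2_M):
    return math.log(2) / (k2_M_s * h2o2_M)

for k2, label in [(1.0, "typical thiolate (k ~ 1 /M/s)"),
                  (1e7, "tuned peroxidatic Cys (k ~ 1e7 /M/s, assumed)")]:
    for h2o2 in (1e-7, 1e-6, 1e-5):   # 0.1, 1, 10 micromolar H2O2 (assumed)
        t = half_life_s(k2, h2o2)
        print(f"{label:42s} [H2O2] = {h2o2*1e6:5.1f} uM  t1/2 = {t:12.3g} s (~{t/3600:10.3g} h)")

At 1 micromolar peroxide and k of 1 per molar per second the half-life comes out in the range of days, which is exactly the mismatch with a 15-minute signaling response that makes the kinetics the sticking point.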
OK, so they had a bunch of candidates because we have the whole genome sequence. We know what all the phosphatases are. Just like we know there are 500 kinases, there are 100 phosphatases. And they know which ones are connected to certain kinds of signaling pathways. So one of the key conclusions from this paper is that they identified, or they claimed to identify-- you may or may not-- when we get that far, you can look at the data and see whether you believe that. Based on what they reported here, they claim to know that it was SHP. I think it is. I don't remember the name of the phosphatase, but that's the one that was modified. And it turns out even the phosphatase can be modified further in a cascade by proteins called peroxiredoxins, which aren't in this paper. Anyhow, so that's the overview. And what I wanted to do was spend a little bit of time going back through what we had gone through last time, not this part. We're going to go through this very fast because we got this far. Again, the development of a specific reaction with either iodo-dimedone or dimedone to modify either the sulfenic acid or the sulfhydryl group. OK, and the issue is you can do this inside the cell. These are cell permeable. But then how do you ever find it? Right, so you've got 10,000 proteins. They can all get modified to some extent. We don't know how much of the protein is there. We don't know whether it's been partially modified. But there's no way to identify this currently. And of course, the first thing is you're assuming that the linkage is stable. That's important. But even so, there's no handle on this. So the focus of the Nature Chemical Biology paper was to figure out how to make this so you could find this inside the cell and use this in some way. And so the mass spec method we had focused on last time, which is also used again in this paper, but if you didn't read supplementary information you won't know that, is they use isotopically labeled materials. OK, so this is only if you have an extra mass of 6. You have the deuterated methyl groups. And that means you have a sulfenic acid that you've modified because she's shown that it's specific versus the methylated, which reacts with thiols only. So that's the basis of the assay. And again, it's not easy to find a reagent that allows you to do this. And this is where we were last time at the end of the class. I didn't get this far with all the classes, and I can't remember who was in the class. But we were looking at this. This was just proof of concept. So we have a glutathione peroxidase, which was also in the current paper. And what do we know about the glutathione peroxidase in this paper? If you read the paper, did this ring a bell from the previous paper? So what was unique about glutathione peroxidase? AUDIENCE: It has an active site cysteine JOANNE STUBBE: Yeah, it has an active site. AUDIENCE: --that they can use to validate their approach. JOANNE STUBBE: So it has an active site cysteine, which can get modified. There's something reactive about that. And it catalyzes. It has peroxidase activity. So it plays a very important role in controlling these reactive oxygen species. So it's a small protein that's been very well characterized. And so if you look here, what do you see? This is where we were at the end of the lecture. What do you see? So this is an in vitro experiment, not an in vivo experiment. So we're in the test tube. And so what do you see? 
So they treated it either here with dimedone, OK, which labels sulfenic acids, or they treated it with iodo-dimedone. And so what did that tell you? This is where we were last time. When you looked at that, did that say anything to you without looking at the analysis out the other side? So you're looking at the figure. What did you think when you looked at the figure? What's your name? AUDIENCE: Nicole. JOANNE STUBBE: Nicole, what do you think? AUDIENCE: You can see that, as hydrogen peroxide increases with the dimedone, the levels increase. But with the iod0-dimedone, the levels decrease. JOANNE STUBBE: OK, so that's good. And then you can say one more thing. And what you could say is, in this lane-- so this is where they're looking at the iodo-dimedone, and there's no hydrogen peroxide. What does that tell you? So that's an extra piece of information that was more subtle out of this. So if you look at this, you could think about it. You could do it. So look at this, so we're increasing hydrogen peroxide. And you saw it increases. Here we have just cysteine. Of the 100%, we're hitting it with something that reacts with sulfhydryl groups, the iodo. And look at this compared to this. What do you see? What does it look like? AUDIENCE: It looks like there's more [INAUDIBLE].. JOANNE STUBBE: So that's exactly it. And it's the eyeball method. So you can't tell anything by the eyeball method. You have to have a way of quantity-- we talked about phosphorimaging. That's what people do. But what this tells you is, if you looked again at the details, you know, they use 50 micromolar of the protein. And they went to I think with 100 micromolar. And even when they went to 100 micromolar, they didn't inactivate. They didn't modify all of the GPx-3. OK, now would you expect them to? I don't know. They probably tried a lot of different conditions. I mean it's concentration dependent. It's time dependent. And that wasn't given in the details. OK, but this tells you then-- that takes you to the next one of the sets of data here, which you could have gotten in some form by looking at that data. So now what you're looking at is they're looking at a ratio of, you know, what's sulfenylated versus what's a sulfhydryl plus sulfenylated. And what do you see? Cysteine 36 is known to be at the active site, but they showed that in this experiment. And you see that you don't reach 100% labeling. OK, so that you saw in the previous set of data. You couldn't tell what the ratio was. And this is sort of you're just looking at this is the mass. OK, and if you know what the protein is, remember, and you cleave the protein down with trypsin, you're going to get all these little peptides out. That's what we talked about last time. And we're looking at charge-to-mass. We're looking for charge-to-mass differences in ratios depending on what the charge is for deuterium versus protons to tell whether it's sulfenylated, or it's just an SH group. And so what they found was this charge-to-mass of 541 versus 554. And that told them-- if you look at the sequence that I told you in the computer can you analyze all this, it told you you were looking at peptide 36 to 43. And we know cysteine 36 is within that peptide. So that didn't tell us that the modification is based out that cysteine. But then you could go back in, and you could sequence. Yeah? AUDIENCE: Why is the m/z different by three? JOANNE STUBBE: Because you have charge. Because of the charge of the system. 
AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Yeah, so you can get this at every-- you know, depending on how many masses you see, you can still pick up the data, but the number will be different. OK, so you would like to have six, but you might not be in that mass range. I mean here you would be. So you could have looked at that to have a difference of six. So this tells you that you're not getting complete sulfenylation. So then the question is did any of you recognize that when you looked at the paper. Or did it just go by you? What could be going on? So we already talked about the fact that maybe they didn't do the experiment right. They didn't have high enough concentrations. They didn't let it go long enough. Let's ignore that, OK, because they're good scientists. With some people, you might not want to ignore that. But then the question is what could be accounting for this result. AUDIENCE: Antibodies they used. JOANNE STUBBE: The what? AUDIENCE: They antibodies they used may not have the [INAUDIBLE]. So maybe we should [INAUDIBLE]. JOANNE STUBBE: OK, I mean, you know, you're going to use enough antibodies to be able-- you've go to check that out. So that would be another control you would need to do to make sure everything is under control. OK, so what we talked about last time in addition is hydrogen peroxide can oxidize this to these things as well. It's much slower. And so you don't know how would you look for this. So one explanation is that you have some competing rates-- again, we don't know anything about it-- but that you had changed it and that the dimedone no longer reacts with these forms based on the mechanism we described. So that's a possibility. And in fact, if you read the supplementary information, how might you distinguish between this and this? Say you thought that this was happening. I mean we're missing a lot of stuff. So there's a number of explanations. And that's just one of them. So what method are we using? AUDIENCE: Mass spec. JOANNE STUBBE: Yeah, how can you distinguish this from this? AUDIENCE: Per mass. JOANNE STUBBE: Yeah, by mass. And so if you look at that, you're going to have a different mass, right, on your little peptide species. You'll have the peptide species. And if you look at the supplementary data, they show mass in both of these states. OK, so that suggests that that might provide an explanation for what's going on. Maybe you could think of a lot of other explanations. Here I won't go through this. But they looked at mutants. And they wondered if all the cysteines could be modified, OK, because cysteines are reactive. Although, the ones in the active site often have lower pKa's. So this all-- there are thiolates. So you might expect the reaction to be faster. But again, the reaction, even with the thiolate, with hydrogen peroxide, is slow. OK, so that's not the whole story. And in fact, I think it was in the Nature Chemical Biology paper, there are some proteins they look at that all the cysteines can be modified. So this should start making you nervous in terms of how important sulfenylation is if it contributes a lot, if all of these things are heavily sulfenylated. And in this experiment, they made a few mutants, and then they repeat the experiments. And they see that they don't see any modification. So that suggests that the cysteine 36 is the one. In the case of in the test tube, glutathione peroxidase is the one that's interesting and is being modified. So it is selective. 
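Returning briefly to the earlier question about why the light/heavy pair differed by 3 m/z units rather than 6: the d6 label adds about 6 Da of mass, but the spectrum plots m/z, so the spacing between the d0 and d6 forms is 6 divided by the charge. A quick numeric check with a made-up peptide mass (the 1,065 Da value is an arbitrary example, not the GPx-3 peptide):

PROTON = 1.007276          # mass of a proton, Da
LABEL_SHIFT = 6.04         # d6 vs d0 dimedone adduct: nominally +6 Da (exactly ~6.04)
M_LIGHT = 1065.0           # monoisotopic mass of a hypothetical labeled peptide, Da (arbitrary)

def mz(neutral_mass, z):
    """m/z of an [M + zH]z+ ion."""
    return (neutral_mass + z * PROTON) / z

print(" z    m/z (d0)    m/z (d6)    spacing")
for z in (1, 2, 3):
    light = mz(M_LIGHT, z)
    heavy = mz(M_LIGHT + LABEL_SHIFT, z)
    print(f"{z:2d}  {light:10.3f}  {heavy:10.3f}  {heavy - light:8.3f}")

A spacing of about 3 therefore points to a doubly charged ion; the same label on a 3+ ion would show up as a spacing of about 2.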
It doesn't say anything about the biology because they're not looking at the biology of GPx-3. And so then this is the method that they used. So once you identify that peptide, now you can have a second mass spectrometer and use a method to do collision-induced dissociation, which allows you to sequence the peptide. And now you ought to be able to-- it depends on the stability and the fragmentation patterns, but we know peptide bonds can fragment in a number of ways. And people have figured this out over the years, namely Klaus Biemann figured this out that we have B and Y. And so one reports on the C-terminus of the peptide, the other on the N-terminus of the peptide. And those are the fragmentation patterns that are observed most frequently. And lots of times, one side, you don't see both with equal intensity. They fly differently. But that allows you then to sequence, and that's what they did in these papers. And this just is an example of-- this isn't from glutathione peroxidase, but you can see you can now assign. If you wanted to go through this, you can walk through all of this. And you can see B2, B3, B4, et cetera, and you can see Y1, Y2, Y3. And the two of them should add up to give you your peptide fragment back. I mean once you get used to looking at these things and then understand the mechanism of fragmentation, which is what Biemann worked out, you then have a picture of sequencing by mass spec. And this is just another example that, again, this is not related to this particular problem. But you should see all these B fragments, all these Y fragments. And the two of them should sum to give you the total mass of whatever your peptide is. OK, so they then looked at glyceraldehyde 3-phosphate dehydrogenase is another control in the last paper. That's an enzyme in the glycolysis pathway that has a cysteine in the active site that plays an essential role in catalysis. And they asked the question does it get sulfenylated. So it's implicated in all kinds of regulatory mechanisms as well. And they did the same experiment. And what did they see here? So this is, again, the same ratio, deuterium over the sum of deuterium and protonated. And here they see one. OK, so this one, they are able to titrate stoichiometrically on this active site. So again, it's a question. What that's telling you, which is an issue in the end, is, you know, they all have different reactivities is what-- yeah? AUDIENCE: In the other slide, was there only one cysteine in the protein that they sent, like the fragment of the protein that they sent through the mass spec? JOANNE STUBBE: No, so they actually-- whoops. Where is it? Yeah, so this one-- no, so they were looking at-- so they mutated this so they now had two cysteines, the one in the active site, and they had another cysteine, cysteine 64. AUDIENCE: So was it still measured by-- you said it reacts initially with-- JOANNE STUBBE: Well, you react it with both dimedone-- you react it with hydrogen peroxide, and then you react it either with dimedone or with iodo-dimedone. And what they found is they saw no-- so they found fragments. OK, so the fragment that they were interested in, since that was the cysteine, is 64, OK, with 58 through 67. So that's the mass charge 639. So that's what they focused on. OK, and so then they asked the question do you see D6. OK, D6 would be indicative of sulfenylation, and they saw no sulfenylation. So they can get modification. And they pulled out that cysteine, but they get no sulfenylation. Only the 36 was sulfenylated. 
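Here is a worked example of the b/y bookkeeping described above for collision-induced dissociation. The monoisotopic residue masses are standard values; the peptide itself is an arbitrary example, not one from the paper. The check at the end shows the complementarity: for singly charged fragments, b(i) plus y(n-i) equals the singly protonated precursor mass plus one more proton.

# Singly charged b- and y-ion ladder for an arbitrary example peptide.
RESIDUE = {  # monoisotopic residue masses, Da
    "S": 87.03203, "A": 71.03711, "M": 131.04049, "P": 97.05276,
    "L": 113.08406, "E": 129.04259, "R": 156.10111,
}
PROTON, WATER = 1.007276, 18.010565

def ion_ladder(pep):
    masses = [RESIDUE[aa] for aa in pep]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(pep))]           # b1..b(n-1)
    y = [sum(masses[-i:]) + WATER + PROTON for i in range(1, len(pep))]  # y1..y(n-1)
    return b, y

pep = "SAMPLER"
b_ions, y_ions = ion_ladder(pep)
mh = sum(RESIDUE[aa] for aa in pep) + WATER + PROTON   # [M+H]+ of the intact peptide
for i, (b, y) in enumerate(zip(b_ions, reversed(y_ions)), start=1):
    # complementary pair: b_i with y_(n-i); their sum is [M+H]+ plus one proton
    print(f"b{i} = {b:9.3f}   y{len(pep)-i} = {y:9.3f}   sum = {b + y:9.3f}")
print(f"[M+H]+ = {mh:9.3f};  expected pair sum = {mh + PROTON:9.3f}")

Reading the spacing between successive b ions (or successive y ions) gives the residue masses in order, which is what lets you place a modification such as the dimedone adduct on a specific cysteine.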
So they did a whole bunch of experiments like this. This is again just proof of concept, but I think this data, just comparing the two proteins they looked at, shows you that you have different reactivities, which I think is part of the issues with these labeling methods that you're trying to use in cells. Yeah? AUDIENCE: So in the method where they were just looking at the cysteine 36, they got 50% sulfenylation. JOANNE STUBBE: So 50% sulfenylation, right. AUDIENCE: Yeah, and there were no other cysteines in that fragment because-- JOANNE STUBBE: There were no other cysteines in that frame. So if you look at the sequence, you know, I mean this is a small protein. And so you know-- OK, so and now let's move into today or the precursor to today's paper. But this was the major focus. So what they wanted to be able to do, they had proof of concept. And now the question is how do we show that this can-- can we do this inside the cell? OK, so what are the issues inside the cell? I've labeled some here. But have you guys done any thinking about this? So we want this thing. What do we need of any reagent we're going to use? And what you see-- I think they've probably tried 20 or 30 reagents. I think these are the ones that came closest to working. But if you look at this, there was also-- it probably was in supplementary information. I don't remember. There it is maybe. Let me-- does anybody remember looking at a figure where they address this issue? Nobody remembers. AUDIENCE: Which issue? Yeah, which issue are you referring to? JOANNE STUBBE: Oh, the issue is whether you're getting sulfenylation inside the cell. Yeah, OK, and which reagent? So the first thing you want to ask are you getting sulfenylation. And which reagent works best? AUDIENCE: It's too deep. JOANNE STUBBE: OK, so it's definitely in here. I don't remember which one it is. AUDIENCE: They talked about it in the supplementary [INAUDIBLE]. JOANNE STUBBE: Is it? OK, so let see if it's 2D. So I have this on a slide. I can go forward. So the question is which one of these guys do you want to use. AUDIENCE: Alkyne. JOANNE STUBBE: OK, why? AUDIENCE: In one of their preliminary experiments, it worked the best. JOANNE STUBBE: OK, so this is the experiment. So what is this? This is the experiment. So somebody want to describe this experiment to me? So these are the kinds of questions they're asking in this paper. OK, so this is the same. This is the same in vitro. That's what we just did with the GPx. That's what we just went through. And what conclusion? You know it's really clean. And so you're looking at an antibody to dimedone. OK, that's what they're using. And so what conclusion can you draw from this with respect to these three systems, these three reagents? AUDIENCE: That they're comparing basically if you label with the N3 on the guy that you're trying to look at or with the alkyne. And in vitro, they-- JOANNE STUBBE: Right, not only N3, but two different N3's-- you know, what is the nature of the linker? OK, that's something you need to pay attention to. AUDIENCE: I think that's in the supplementary. This is just with one linker. Yeah, they didn't do this experiment with the-- JOANNE STUBBE: OK, so here they have DAZ2 and DYN. So here they have-- whoops. I don't know where DYN2 is. AUDIENCE: So that's with the alkyne. JOANNE STUBBE: OK, so that's in-- so they're the same linker? AUDIENCE: Yeah. JOANNE STUBBE: OK, because I don't remember that. So that's what they did here. And then they clicked it. 
OK, we were going to talk about that. We haven't gotten there. And what did they end up seeing in this particular reaction? AUDIENCE: [INAUDIBLE]. Yeah, like they really that equal in vitro. JOANNE STUBBE: So you see it here, hydrogen peroxide. So they sulfenylated. OK, so without sulfenylation, without hydrogen peroxide, they don't see anything. So that's good. And then here, they have hydrogen peroxide. So they sulfenylated. This is with-- AUDIENCE: Alkyne. JOANNE STUBBE: --the alkyne. And this is with-- AUDIENCE: Azide. JOANNE STUBBE: --the azide. So both of these is seeing something. OK, so now what they do over here is do the same experiment inside these cells. OK, and so inside the cells, what are you going to end up seeing when you look at this? AUDIENCE: More stuff that gets sulfenylated. JOANNE STUBBE: A mess, yeah. So you see a mess. But what do you see here? So again, they're doing the same kind of thing. We haven't talked about the reaction yet of how you pull these out. But which one is most heavily modified? This one, and this is the one where you're using-- AUDIENCE: Where you label with alkyne-- JOANNE STUBBE: --alkyne and click with-- AUDIENCE: --and azide. JOANNE STUBBE: --azide. So this is the one that-- is this the one they use? This is the one they should have used. AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Yeah, so if you go back-- whoops. So if you go back over here, we're trying to decide which one we want to use. And so that's the kind of experiment they did to try to design. So what are you concerned about in this experiment? The first thing is what has to happen with this molecule. I needs to get in the cell. OK, so I guess they know it gets into the cell, but they didn't really do any experiments to determine how much got into the cell. And it could be one that could have gotten into the cell much more than the other, which would have given you the same result. Presumably, they looked at that. So it is cell permeable. The key thing, I think, is in the cellular milieu, you've got to be able to do all these reactions. OK, but what is the issue in the cellular milieu doing these reactions? How do you decide how to do the experiment? What do you have to worry about? AUDIENCE: You want to have-- I mean both of these are the case. You want to have some sort of chemistry that's orthogonal to all the processes in the cells. JOANNE STUBBE: So we want to do that. So that's important. But OK, so that's one thing you have to worry about. We think that they have that under control. What else do you need to worry about? AUDIENCE: That this doesn't alter the sulfenylation profile in any way. JOANNE STUBBE: Well, I mean you have to be able to sulfenylate first. OK, what governs sulfenylation if you look at these, if you look at these molecules? So you sulfenylate. Let's assume you can sulfenylate. We can't control that if you've got hydrogen peroxide in there, if you've generated hydrogen peroxide. So they've somehow got to do that. And you sulfenylate. Or maybe they don't. They do EGF stimulated reaction. Then what do you have to worry about with respect to these analogs? AUDIENCE: How well those react with the sulfenyl group. JOANNE STUBBE: Right, and so what do you have to think about? AUDIENCE: What the time scale is with the sulfenylation equilibrium versus like this reaction reacting with that species. JOANNE STUBBE: And so that's all bimolecular. And the rate-- and it's bimolecular meaning something might be there in nanomolar. Something might be there in micromolar. 
The rate of the reaction is automatically a factor of 1,000 fold difference. So how long do you let this go? If you let it sit for a very long period of time, sulfenylation, we're talking about the importance of reversibility. So are there redox systems? There are that can remove the sulfenyl group. So designing the experiment to be informative is, in my opinion, not trivial. So looking at the details is key. And you need to know a lot about the system, which I don't know anything about. And then, you know, this has going to be able to get into the active site. OK, so the active sites have got to be big enough. We know it can do that in glutathione peroxidase and GAP dehydrogenase. We looked at that. But this needs to be long enough so, wherever the active site is, you can do some chemistry out here. So all of these things are issues that you have to deal with. And then the thing we we were facing before, if we only labeled with this, we have no way to tell where the label is. And so again, the idea is that you use these things to click it to something that's going to inform you where the label is. OK, yeah? AUDIENCE: Was it synthetically driven that the [INAUDIBLE] linker they install at the like [INAUDIBLE] position of the reactive site versus-- JOANNE STUBBE: My guess is yes, but I didn't read the papers carefully enough to know that. So in the paper that you were assigned, they had a lot of synthetic chemistry. So if you read the details, which is key, I'm sure a lot of these things are going to be driven by what's easiest to make. You need to be able to make large amounts of it. And these are now all commercially available. OK, so this is-- so yeah, I think it is. But to me, you would do an experiment-- AUDIENCE: To show that? JOANNE STUBBE: --at different positions because one of them might be much, much more efficient at much lower concentrations. And you might be much better off. I don't know. Or you certainly might want to use more than one reagent because of the issues of trying to get this thing to react. And then you have this question of, even if you get these things, how do they react. And most of you have probably heard about click chemistry since it was invented by-- it wasn't invented by Barry, but Barry popularized it. And it's copper-catalyzed in most reactions. But to do that, you do it in cell extracts, OK, because copper is really toxic to cells. So it's not useful for looking at this inside the cell. And so the Bertozzi lab made a strained alkyne with a fluorine on it-- actually, Jeremy did that who was an Alice Ting undergraduate here-- that makes it click. But it's still not good in my opinion. This still needs a lot of work. And so I think the best methods actually are the new methods that are coming out of this guy's lab where he makes these tetrazine analogs. And then he's made cyclopropenes. And they react much, much faster under mild conditions. And people are using these to put on fluorescent probes. So anybody who is interested in that, you can read about-- this guy, he's done a lot of creative science. He's a young guy about Brad's age. But I read all his papers because I think there so interesting. There aren't many young people that I do get to. Anyhow, this guy is good. OK, so the issue now is how did how does this happen. And how do you do the analysis? And so now that we're finished, none of you have come to the board yet. OK, all right, so the reagents used, you need to think about the reagents used. So I'm going back to the model. 
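Before coming back to the model and the reagents, the "how much probe, for how long" concern above can be framed as a simple kinetic competition: the sulfenic acid is either trapped by the probe or lost to competing reduction and further oxidation. All of the rate constants and concentrations in this sketch are assumed, illustrative values; the point is only how strongly the capture efficiency depends on them.

# Fraction of a sulfenic acid captured by a dimedone-type probe when it competes
# with cellular processes (reduction back to thiol, further oxidation, etc.).
# Steady-state competition: fraction ~ k_probe*[probe] / (k_probe*[probe] + k_loss)
# All numbers are assumed for illustration.

k_probe = 10.0       # M^-1 s^-1, probe + sulfenic acid (assumed)
k_loss  = 1e-2       # s^-1, lumped pseudo-first-order loss of the sulfenic acid (assumed)

print("[probe] (mM)   fraction of R-SOH captured")
for probe_mM in (0.1, 0.5, 1.0, 5.0):
    probe_M = probe_mM * 1e-3
    frac = k_probe * probe_M / (k_probe * probe_M + k_loss)
    print(f"{probe_mM:10.1f}     {frac:6.2f}")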
I already gave you the model. All right, so one of the reagents they used is this guy-- dihydrochlorofluoroescein. What did they use that for? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: For what? AUDIENCE: For hydrogen peroxide detection. JOANNE STUBBE: OK, but is it hydrogen peroxide? Do you know how it works? AUDIENCE: No. JOANNE STUBBE: So to me, the first thing you should do is you should have googled it. And so it's dihydro. So this is in the reduced state. I mean, I could draw this structure up on the blackboard for you, but the importance is that it's in the reduced state. And furthermore, what does the DA stand for? So this is where the reagent-- you know, to me, I didn't have a clue either because I don't work on fluorescent probes inside the cell. So I immediately went and looked it up because I didn't know what it was, and I didn't know what I was looking at. And so here was one whole figure with this reagent. And so how could you understand what the reagent was telling you if you didn't know what it was? And so what it's got-- I can't remember the whole structure. You can google it. But it's got a couple of hydroxyl groups. And the hydroxyl groups are acetylated. Why might they want to acetylate? Why would they use the diacetate? AUDIENCE: It's not fluorescent, right? JOANNE STUBBE: It's not. The reduced form is not fluorescent. AUDIENCE: So in order for it to-- well, in order for it to be fluorescent, you're going to have to take off the acetates, which will happen in the cell [INAUDIBLE]. JOANNE STUBBE: Right, so the first thing happens. Number one, but how do you get the reagent into the cell? It turns out so it depends on the pKa of this hydroxyl. And so the acetate groups facilitate uptake into the cell. Then it gets inside the cell, and the acetate groups get hydrolyzed off. It's in the reduced state. And so this molecule in the reduced state-- AUDIENCE: Is fluorescent. JOANNE STUBBE: --reacts with, quote, "reactive oxygen species," unquote, to get into the oxidized state. And that's what becomes fluorescent. So it starts out in the reduced state. And it has two acetate groups, sorry, two acetate groups. OK, and so this gets into the cell. And then you have-- you still have the reduced state. And now you have a hydroxyl. And now this with reactive oxygen species-- so this is still non-fluorescent. And we're going to talk about fluorescence next time. And then it gets in. It gets oxidized by reactive oxygen species and becomes fluorescent. So that's the assay, but it doesn't just react with hydrogen peroxide. It can react with a lot of molecules. And in fact, the Collins paper that I talked about in class where I made some snide comment on it-- anyhow, I mean the problem was, in the original paper, people just used these things blindly thinking they're reacting with specific molecules. And in the last five years, there's been a huge number of people that have focused on making sensors specific for each reactive species. And that's really what you need to do if you're going to make a sweeping generalization about something like this. So yeah, we're at the end of our time. But what you should do is go back, and there are a whole bunch of reagents in this paper that, if you didn't know what they were, there's no way you can understand the data. OK, so the first thing you did-- or when I looked at this, the first thing I did is I made a list of these reagents because I understand what was going on. You know, like I never heard of-- what is it-- apocynin. 
That's a specific inhibitor of NOX2 isozymes. So there are a bunch of different isozymes. People are really interested in these therapeutically. So people have developed specific inhibitors. If you look at these guys, which they call them the wrong thing. They have a longer name, but these are covalent and irreversible inhibitors of epidermal-derived growth factor receptor. OK, so you need to know that to be able to look at each one of these panels to figure out what the data tells you. OK, so now I would suggest you go-- you might see this again. Maybe you'll see this on the final exam, some of this data. Anyhow, you should go back, and you should look at the data. This is what we've been trying to get you to do over this course is, you know, it takes a lot of energy to read a paper. that's one of the take-home messages from the course. OK, so we have finished.
Lecture 32: Reactive Oxygen Species 2
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: What I want to do is proceed where we left off. We're in module seven on reactive oxygen species. I'm introducing you to the concept, what was the big picture. And at the end of the last lecture, I gave you a few introductory slides, but what we're going to be focusing on, I told you was oxygen can be one electron reduced to superoxide. Superoxide can pick up another electron. They should have a couple protons here to form hydrogen peroxide. So these are two of the species we'll be looking at. If you have iron 2 around, so this goes back to the connection to iron homeostasis that we talked about in module 6, this is called Fenton's chemistry. And you produce hydroxide radicals, so that's the third reactive species. And hydrogen peroxide in the presence of chloride with myeloperoxidase can form hypochlorous acid, which then can chlorinate amino acids or sugars or a lot of other things. So you get rampant chlorination inside the cells. So this cartoon also has reactive nitrogen species. We're only focusing on the reactive oxygen species. So that's what we're going. And so at the beginning of the lecture, I wanted to introduce you-- and I did give you an overview of where we were going. We were going to look at what are the reactive oxygen species, what does reactivity mean, how do you define chemical reactivity-- I think that's a key issue-- and then what are defense mechanisms against these reactive oxygen species, before then focusing on the NADPH oxidases, which those are the enzymes that we're focused on in this whole module on reactive oxygen species. So identification, and I had put this on the board the last time. We have-- just repeating what's over there, superoxide hydroxide radical, hydrogen peroxide. Because this is together, and hypochlorous acid. And we're going to look at the reactivity of these guys. These are one electron oxidants. These are two electron oxidants. So reactive species don't need to have free radicals. They can do two electron chemistry, or they can do one electron chemistry. OK, so a key thing to think about, and I don't expect you to remember the detailed reduction potentials, but again, as the inadvertent consequence of our environment, you know, 1.8 to 0.8 billion years ago we moved from an anaerobic to an aerobic world, where we have metals all over the place. The question is how do you control these reactivities. And so we need to think about the redox chemistry of oxygen. So oxygen, and the details the actual reduction potentials, and these are some of them given here. And this one somehow got lost from here to here, which is 0.94. But it's in a table later on in the PowerPoints, depend on the balanced equation. So if you're looking at this, and you really want to think about the reduction potentials, you need to count the numbers of electrons and protons and balance the equation that you're looking at, because the reduction potentials vary. But what you need to remember is the more positive the number is, the easier it is to reduce, the more powerful the oxidant. So we're going to have oxygen. And in the first one, we have one electron reduction. And this is the only one where the reduction potential is negative. So this is uphill. 
It doesn't want to be reduced. And this produces superoxide. So this is one of the guys we're going to be talking about, and here's the reduction potential. This can be further reduced with an electron and a couple of protons to form hydrogen peroxide. I'm not going to write that out-- everybody hopefully knows what hydrogen peroxide is-- and this is the number that for some reason got left off that handout, and this is favorable. And so are all the other numbers. The numbers are actually large and favorable. That means they're good oxidants. OK, so more positive, better oxidant. These are all biological reduction potentials, relative to the normal hydrogen electrode. So these are the ones-- there's actually a table in there where you see the numbers that people interested in biological systems focus on. Now, we'll see later on that hydrogen peroxide gives rise to hypochlorous acid in the presence of chloride. But hydrogen peroxide can also get further reduced-- and this is one where I don't have the balanced equation-- to hydroxyl radical. And so the number I have written down here is 0.38 volts. But again, the numbers aren't so important. And then this can get further reduced to water, and this number-- you'll see it in a number of the tables I give you-- is 1.31 volts. So we have different states, and all of these states, a consequence of moving from the anaerobic to the aerobic world, give you species that are called reactive oxygen species. And the one that's hardest to form is superoxide, and this is the one that's probably the least reactive. But what I'm going to do is give you a couple sets of criteria for reactivity, because when people say something is chemically reactive, you know, what does that mean? It depends on what it is reacting with. And so this, I think, is the real problem in the field: people really don't define this very well. And so this is why I think the Winterbourn paper is so important. She has a table in there. I'm going to have the table up there. I reorganized the table to focus on what I want to focus on in this brief introduction to this topic and show you how they define reactivity. But if you are thinking about something, you need to define what the reaction is that you're interested in with any of these reactive oxygen species. So these are the guys we care about. And the second question I wanted to address is-- so the first, last time, was the identification. Second is chemical reactivity. OK. And I've taken this-- maybe it wasn't from her paper. I can't remember where I've taken this. I should have referenced it. But anyhow, what I'm going to do is give you a simple table that I think is useful for thinking about reactivity. So if we look at reactivity, we're going to look at the oxidant. Then another category is going to be the biological defense. And the third category is the thermodynamic properties. And you all know that things can be thermodynamically favorable but not happen at all, like oxygen oxidizing glucose on the table. So you not only need to think about the thermodynamics, which involves the redox potentials, and you need to define the sorts of conditions that you're looking under, but you also need to think about the kinetics. OK, so we have thermodynamics. So we have biological defense, thermodynamics, and then the kinetic properties. And the kinetic properties are often given in terms of reactivity with a molecule called glutathione. I'm not going to draw out the structure.
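Before getting to the table, it may help to collect the stepwise oxygen reductions walked through a moment ago in one place. The potentials shown are the ones quoted in lecture (versus NHE); as stressed above, the exact values depend on how each half-reaction is balanced and on the conditions, and literature tables differ (the O2/superoxide couple is only said to be negative, and published values for the hydroxyl radical/water couple are substantially higher, around +2.3 V at pH 7), so treat these as orienting numbers. The Fenton reaction mentioned at the start is included for completeness. A LaTeX sketch, assuming amsmath:

\begin{align*}
\mathrm{O_2 + e^-} &\longrightarrow \mathrm{O_2^{\bullet-}} & E^{\circ\prime} &\approx -0.16\ \text{to}\ {-0.33}\ \mathrm{V} \\
\mathrm{O_2^{\bullet-} + e^- + 2\,H^+} &\longrightarrow \mathrm{H_2O_2} & E^{\circ\prime} &\approx +0.94\ \mathrm{V} \\
\mathrm{H_2O_2 + e^- + H^+} &\longrightarrow \mathrm{HO^{\bullet} + H_2O} & E^{\circ\prime} &\approx +0.38\ \mathrm{V} \\
\mathrm{HO^{\bullet} + e^- + H^+} &\longrightarrow \mathrm{H_2O} & E^{\circ\prime} &\approx +1.31\ \mathrm{V}\ \text{(as quoted)} \\
\text{Fenton:}\ \mathrm{Fe^{2+} + H_2O_2} &\longrightarrow \mathrm{Fe^{3+} + HO^{\bullet} + OH^-} & &
\end{align*}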
And the kinetic properties are often given in terms of reactivity with a molecule called glutathione. I'm not going to draw out the structure, but it's a tripeptide, gamma-glutamyl-cysteinyl-glycine. So it's a tripeptide with an unusual linkage to the next amino acid. What do you know about glutathione? Have you guys ever seen it before? It's a major redox buffer inside human cells. Now, if you're in a microorganism, it may not use the same major redox buffer, so you need to look at that, but all organisms have redox buffers. And as you can imagine, and this is why we focused on sulfenylation with the mass spec stuff, one of the major targets of out-of-control reactive oxygen species is oxidation of cysteines. There are other amino acids that get oxidized, but the focus is on cysteine, all the modifications that can be made there, and most of the signaling through reactive oxygen species goes through cysteine oxidation. So glutathione is a reasonable benchmark. Let me just write it down: it's a tripeptide of glutamate, cysteine, and glycine, with an unusual linkage here, an isopeptide linkage. So the first two are one-electron oxidants, and the first one to talk about is hydroxyl radical. Hydroxyl radical, you'll see, is by far and away one of the most reactive. It is dying to be reduced, as you can tell from this reduction potential, no matter what the variation on the theme is. And there is no defense. So you don't want to get to hydroxyl radical; there's no actual defense. That's not completely true, because in reality, if you have glutathione around, the glutathione will reduce it by hydrogen atom transfer. So an important component is the redox buffer. Glutathione-- let me just write that down again, because we're not going to have time to talk about this, but redox buffers play a central role in dealing with reactive oxygen species. And the thermodynamics of this: it's dying to be reduced. The number here is 2.31 volts; I have slightly different numbers in different places, but it doesn't really matter. It's dying to be reduced. It's a hot oxidant. And then there's the rate constant for reaction with glutathione, which would be hydrogen atom transfer, and this is bimolecular. The rate constant is 1 times 10 to the 10th per molar per second. So this is really fast. If you've got glutathione around, your hydroxyl radical is gone. We're going to look at another way of trying to define reactivity, but this is the way that people who are trying to think about the kinetics of all this are starting to do it. OK, so that's one. The second species, which is also a one-electron oxidant, is superoxide. This is the one described most frequently as a reactive oxygen species. In reality, it's not very reactive at all. It is reactive, but not anywhere near as reactive as some of the others. And do we have a defense mechanism? We'll come back to this a little, and I'll write a balanced equation; for now I'm just going to list things. We have enzymes called SODs, superoxide dismutases. I'll come back and write a balanced equation in a minute. So we have proteins that are devoted to this, but in reality metals like manganese inside the cell can also function as a superoxide dismutase at reasonable rates. Protons cause rapid dismutation to form hydrogen peroxide and oxygen. This guy is also dying to be reduced, so thermodynamically this is a good oxidant. But the key is thinking about the kinetics.
And it obviously depends on the reaction you're looking at; the kinetics are going to be different with every small molecule or large molecule it interacts with. But, again, we're using glutathione as the example. And the numbers that people report for superoxide, compared to 1 times 10 to the 10th, are now 10 to 1,000 per molar per second. So this is chemically much less reactive than hydroxyl radical. And even for this one, we have a defense mechanism. This one, again, is problematic. OK, so now what we're going to switch to is the two-electron oxidants. And the one we're going to focus on today in this section is hypochlorous acid, which is part of what happens in neutrophils to defend against invading organisms, like bacteria or viruses or parasites. The major way that this becomes neutralized inside a mammalian cell is, again, with glutathione. So this is a small molecule. It also is a very strong oxidant, but the mechanism of oxidation is distinct: two electrons versus one electron. We're going to look at examples of this. And look at the rate constant for reaction with glutathione. Again, you need to really think about a balanced equation and the kinetics of all of these things; if you're ever going to work in this area, that's what you need to do. You need to educate yourself about the species with which you're going to interact. But you can see from this number, measured under conditions where they did everything the same way so they could compare the relative reactivity of these molecules, 2 times 10 to the 7th per molar per second. So this is much more reactive than superoxide, for example. And then the last one is hydrogen peroxide. This is also a two-electron oxidant. And for the two-electron oxidants, we will see that there are a number of proteins that mount a defense. These are called, and I'm going to show you this in a minute, peroxiredoxins. I'll show you what they do. Do you remember the enzyme that was used in the Carroll paper this week in recitation? To get rid of hydrogen peroxide, what did they use? Anybody remember? They used catalase. I'm going to come back and write the equation, so catalase, and then the other one, which we also talked about, though we didn't talk about its chemistry in the Carroll paper, was the peroxiredoxins. And there are like seven or eight different isozymes. So we have a number of ways of dealing with hydrogen peroxide. Again, it is thermodynamically favorable as an oxidant. But as we've already talked about, hydrogen peroxide is really not very chemically reactive at all. The numbers that they quote under these sets of conditions are 0.9 per molar per second. So it's much, much slower. You see numbers that range from 0.9 to 20, but this has really important implications for the paper we talked about in the mass spec analysis, where hydrogen peroxide is functioning as a signaling agent. We're going to come back to this later on. For years this made the more chemical-y type people not believe that hydrogen peroxide was involved in signaling, because the rate constants were just too slow compared to the timescale of the biological response. So this is sort of a superficial overview of the differences in reactivities, but the real take-home message is that these molecules have different chemistry and different reactivities. And I guarantee, if you're studying stuff inside the cell, you're going to worry about these kinds of things, and you need to educate yourself about the redox potentials and the kinetics of these systems.
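One way to feel what these second-order rate constants mean is to fold in an assumed intracellular glutathione concentration of a few millimolar and compute a pseudo-first-order half-life for each oxidant, t1/2 = ln 2 / (k [GSH]). The sketch below uses the rate constants quoted above, taking a representative value where a range was given; the glutathione concentration is an assumption.

```python
import math

# Pseudo-first-order half-lives for reaction with glutathione:
# t_half = ln(2) / (k * [GSH]).  The second-order rate constants are the ones
# quoted in the lecture (M^-1 s^-1); where a range was given, a representative
# value is assumed.  The glutathione concentration (~5 mM) is an assumption.
GSH = 5e-3  # M

k_with_GSH = {
    "hydroxyl radical (HO.)":   1e10,
    "hypochlorous acid (HOCl)": 2e7,
    "superoxide (O2.-)":        1e2,   # quoted range: 10 to 1,000
    "hydrogen peroxide (H2O2)": 1.0,   # quoted range: 0.9 to 20
}

for species, k in k_with_GSH.items():
    t_half = math.log(2) / (k * GSH)
    print(f"{species:26s} k = {k:8.1e} M^-1 s^-1   t1/2 = {t_half:9.2e} s")
```

Run as written, hydroxyl radical lasts on the order of ten nanoseconds while hydrogen peroxide, with glutathione as its only sink, would persist for minutes, which is the numerical version of the point made above about why chemists doubted peroxide signaling.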
So there's a second way. Those are just the redox potentials. There's a second way to look at reactivity, and this is also, I think, in the paper you had to read. The second way is by just looking at diffusion: how far, within a cell, can you still feel the effects of the oxidant. And so I'm going to say see the PowerPoint for the cartoon. I think this is a good way to look at this. And again, the numbers are squishy, but here we are inside the cell, and the question is, do some of these oxidants get out of the cell and go to the next set of cells. So if you look at something like hydrogen peroxide: hydrogen peroxide is kinetically the least reactive, and it goes way outside the cell. It diffuses farthest away, and that means it's the least reactive. So the distance is used to define reactivity. Again, this is a squishy number, but I think it's informative. Now, what do you see here? Hydrogen peroxide, we just went through, is the least reactive. But I also told you that there are many ways to remove hydrogen peroxide inside the cell: peroxiredoxins, glutathione, glutathione peroxidases, catalases. They all remove it, and they all have different rate constants. Peroxiredoxins account for, I think, something like 1.5% of the protein in mammalian cells, and they're very important in controlling redox balance. And what do you see here? If you are in an environment where you have a peroxiredoxin, what happens? The hydrogen peroxide diffuses a lot less far. Why? Because the enzyme rapidly reacts with this molecule. So we know that the enzyme can react; I haven't given you that number yet, but we'll see that this number, instead of being 0.9 to 20, is going to be on the order of 10 to the 6th per molar per second. So something about the active site of this protein matters, and it's not SH versus thiolate. We all know by now, everybody is good at this: thiolates are the reactive species. It has nothing to do with that; thiolates are always more reactive. There's something else special about these proteins that allows them to control hydrogen peroxide. Now, why might you want to do this? If you have a signaling agent like hydrogen peroxide, you don't want it going all the way over here. You want to control the effective concentration near where you want the chemistry to happen. So these peroxiredoxins play an incredibly important role in controlling the effective concentrations. And if you look here within the cell, again, we're only focusing on the oxygen species, but both hydroxyl radical and hypochlorous acid are very, very reactive. They can't go very far without reacting with something, and that, again, is consistent with the kinetic analyses that people have done over the years. So the take-home message from all of that is that these reactive oxygen species have different chemistry and different reactivities, and you've got to educate yourself. But some of these things, HOCl and hydroxyl radical, are very reactive no matter what.
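The diffusion-distance picture above can be put on a rough numerical footing: if an oxidant decays with pseudo-first-order rate constant k', a crude estimate of how far it travels before reacting is sqrt(6D/k'). The sketch below assumes a generic small-molecule diffusion coefficient and uses glutathione (and, for hydrogen peroxide, a peroxiredoxin pool) as the dominant sink; all concentrations are assumed, so only the orders of magnitude are meaningful.

```python
import math

# Crude diffusion-distance estimate: d ~ sqrt(6 * D / k'), where k' is the
# pseudo-first-order decay constant of the oxidant inside the cell.  Here k'
# is assumed to come from reaction with ~5 mM glutathione, plus, for H2O2, a
# peroxiredoxin-dominated sink for comparison.  D and all concentrations are
# assumed, generic values.
D     = 1e-9    # m^2/s, small-molecule diffusion coefficient (assumption)
GSH   = 5e-3    # M, assumed glutathione concentration
PRX   = 2e-5    # M, assumed peroxiredoxin concentration
k_PRX = 1e6     # M^-1 s^-1, the lecture's number for peroxiredoxin + H2O2

decay = {
    "HO.  (GSH sink)":         1e10 * GSH,
    "HOCl (GSH sink)":         2e7 * GSH,
    "H2O2 (GSH sink only)":    1.0 * GSH,
    "H2O2 (peroxiredoxin)":    k_PRX * PRX,
}

for species, kprime in decay.items():
    d_um = math.sqrt(6 * D / kprime) * 1e6   # micrometers
    print(f"{species:22s} k' = {kprime:9.2e} s^-1   d = {d_um:10.3f} um")
```

The output reproduces the cartoon: hydroxyl radical and hypochlorous acid die within nanometers to a fraction of a micron, hydrogen peroxide with only glutathione to worry about could in principle cross many cells, and a peroxiredoxin pool pulls its range back down to roughly the size of one cell.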
So the last thing I wanted to focus on in this section, which is basically the introduction, is the defense mechanisms. So this is the defense. I already have listed what the defense mechanisms are, but I wanted to give you a few equations. You've already seen that they can be enzymes or small molecules. One example we already described, but didn't look at in chemical detail, is superoxide dismutase. What does superoxide dismutase do? It takes two molecules of superoxide, and they disproportionate in the presence of protons to hydrogen peroxide and oxygen. These enzymes have a kcat over Km, a catalytic efficiency, on the order of 7 times 10 to the 9th per molar per second, so they are incredibly efficient. In fact, metals like manganese in solution, in some organisms that have a lot of manganese, can actually do the disproportionation; but it's all about the rate constants, and the enzyme is incredibly efficient. These enzymes are in all organisms, and obviously this reaction is very important: you don't want superoxide completely uncontrolled. These are all metal-catalyzed reactions. Some use iron, some use manganese, some use copper; humans use copper and zinc, and there are others that use nickel. They all have different properties, and they've all been studied in some fashion. So depending on where the organism lives, it might use different superoxide dismutases to control the levels of this reactive oxygen species. The second defense mechanism is the peroxiredoxins. I think I have this one up here. For any of you who are interested, there are like seven or eight isozymes, and they keep finding new isozymes everywhere inside the cell. They are at high concentrations, and they are clearly very important in controlling the redox balance. They do react with hydrogen peroxide, but they also react with other peroxides. Each one of these isozymes has its own characteristics. You don't need to remember the details, but the chemical mechanisms are sort of the same, even though some are dimers and some are monomers. It turns out they all have, in the monomer, two reactive cysteines. One is called CP, the peroxidatic cysteine; that's the one that reacts with the hydrogen peroxide. So they can be monomers or dimers; this is the protein. You have a CP, which can react to become sulfenylated, and then you have CR, the resolving cysteine, which reacts to resolve the sulfenylation and get rid of the sulfenic acid. I'm being a little sloppy here: this cysteine is probably protonated, and everything is controlled to generate the anionic form of the thiol, the thiolate, which then can form, in the case I'm drawing, an intramolecular disulfide. So this forms a disulfide, and this one is intramolecular. What do we see over here? Over here you can form an intermolecular disulfide, if the molecule is a dimer. The chemistry is exactly the same; sometimes it occurs within the monomer, sometimes across the dimer. And then the question is, once you have the disulfide, how do you re-reduce it? You re-reduce it with some kind of reductant, such as thioredoxin, which we will see if we get as far as talking about ribonucleotide reductase. So where have you seen these kinds of proteins before? Does anybody remember? So thioredoxin-- there are probably 10 different kinds of thioredoxins inside the cell, small little proteins, as is peroxiredoxin, and they're all really involved in redox balance. So we can intercept the hydrogen peroxide. Say we want to get rid of the hydrogen peroxide fast: we've done our signaling, we want to get rid of it, so you need something in there that can react with hydrogen peroxide fast enough to remove it. And then you want to reset your protein so it can react with another molecule, so you need a reductant. So those are two of the four defenses I was going to describe.
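To see what a catalytic efficiency of about 7 x 10^9 per molar per second buys the cell, the sketch below compares superoxide removal by SOD with uncatalyzed, proton-driven dismutation. The SOD concentration, the spontaneous rate constant near pH 7, and the steady-state superoxide level are assumed, typical-magnitude values, not numbers from the lecture.

```python
# Competition for superoxide: enzymatic dismutation by SOD versus spontaneous,
# proton-driven dismutation.  The kcat/Km is the lecture's number; the SOD
# concentration, the spontaneous rate constant near pH 7, and the steady-state
# superoxide level are assumed, typical-magnitude values.
kcat_over_Km = 7e9    # M^-1 s^-1, SOD catalytic efficiency
SOD          = 1e-5   # M, assumed cellular SOD concentration (~10 uM)
k_spont      = 2e5    # M^-1 s^-1, assumed uncatalyzed dismutation constant
superoxide   = 1e-10  # M, assumed steady-state superoxide level

v_enzyme = kcat_over_Km * SOD * superoxide   # M/s, first order in superoxide
v_spont  = k_spont * superoxide**2           # M/s, second order in superoxide

print(f"SOD-catalyzed rate = {v_enzyme:.2e} M/s")
print(f"spontaneous rate   = {v_spont:.2e} M/s")
print(f"rate advantage     = {v_enzyme / v_spont:.1e}-fold at these concentrations")
```

Because the uncatalyzed path is second order in superoxide, the enzyme's advantage grows as the steady-state superoxide level is driven down, which is what you want from a housekeeping defense.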
Another one is catalase. This is the one that, if you go back and look at the Carroll paper, which we actually discussed in class, these are heme proteins, and they are distinct from the myeloperoxidase that we'll talk about in this section. But they can take hydrogen peroxide and convert it to oxygen plus water. So what they've done is remove a putative reactive species, and again, how reactive it is depends on the environment, and turn it back into oxygen and water, which are completely unreactive. And the fourth, which is used quite frequently, are the glutathione peroxidases. I just told you what the structure of glutathione is; these are the peroxidases that use it. You all know from the Carroll paper that you have a single reactive cysteine in glutathione peroxidase 3; that's what we used as the model for all of our redox chemistry. Some of the glutathione peroxidases actually use selenium, so there the 21st amino acid, selenocysteine, comes in. These are among the few enzymes, as is thioredoxin reductase, in this whole redox-balance system that are selenoproteins. We're not going to talk about those. But the glutathione peroxidases actually take two molecules of glutathione plus hydrogen peroxide, and again there are many, many isozymes, and they oxidize the glutathione to the oxidized form. So this is the reduced form, where you have a cysteine thiol, and this is the oxidized form, where you have a disulfide. Now, again, where have you seen this thiol-disulfide system before? Bacteria have these things; right now we're focused on human systems. Do you remember what happens in the periplasm of bacteria? Did you talk about that this year? So you haven't seen this before. Bacteria have enzymes in the periplasm that control which thiols you have and the state of the thiol. So redox balance by this kind of thiol-disulfide interchange, very similar to this chemistry, is everywhere. And the chemistry is pretty simple. But if you go from a cysteine to a disulfide, you just don't get there directly with oxygen. I think this is something people get confused about all the time: you're doing an oxidation, so something has to be reduced. With hydrogen peroxide, you go through a sulfenic acid, and you can now picture general acid-catalyzed and general base-catalyzed steps for disulfide bond formation and cleavage. So just because you have oxygen around and reduced cysteines around doesn't mean you automatically, rapidly have disulfides around. Again, you need to think about the chemistry of what's going on. So now what I want to do is show you sort of the general model, and then we'll talk about the NADPH oxidases, which are the focus of module 7. The general model is as follows. You have oxygen, and the proteins we're going to be talking about are NOx2 or another NOx isozyme. I'm not going to write out the name; we talked about that in the last recitation section. These enzymes use NADPH, and I think this is important, because part of the redox switches that I think are underappreciated are the levels of NADPH and NADP; they play incredibly important roles inside the cell. So you have NADPH going to NADP. And we talked about the fact, and we'll come back to this, that this protein has a flavin and two hemes.
And it produces superoxide. So this is incredibly unusual. Superoxide is usually an artifact of some uncoupling reaction, which happens all the time inside the cell. This enzyme is a professional superoxide generator; that's what its goal is. Most other times you see superoxide, something has gone astray. So this is a professional superoxide generator. And what happens when you generate superoxide? You could have SOD, or you could have protons: if you're in a place where the pH is slightly lower, you very rapidly generate hydrogen peroxide. So superoxide doesn't sit around all that long. If you have iron 3 around, and it could be bound to something, what happens is that the superoxide combines with the iron 3 to reduce it to iron 2 plus oxygen. With superoxide, if you look at the reduction potentials, obviously what does it depend on? It depends on the ligand environment of the iron 3; that affects the redox potential. Hopefully you all know that and have thought about it at this stage, given the last module. So what happens now is that the hydrogen peroxide can react with iron 2, and this is the killer. That does what's called Fenton chemistry, which generates hydroxyl radical. So these two guys, hydrogen peroxide and iron 2, now combine by what is called, in the review, Fenton's chemistry. I'm not going to write out the detailed mechanism of how this works; in fact, I think we still don't completely understand it. But anyhow, you're generating this reactive species, hydroxyl radical, which is dying to be reduced. So this guy is responsible: it hits anything, and it reacts. It ultimately is responsible for modifying lipids, modifying sugars, modifying amino acids, modifying nucleic acids. Because it's so reactive, this guy damages proteins, DNA, RNA, lipids; I'm not going to write all of this out. This is the guy, and that's described in the review article you had to read. And what can the hydrogen peroxide also do? We'll see that the hydrogen peroxide, which is going to be generated inside the neutrophil, the white blood cells we're going to be focusing on that try to take care of the bacteria or viruses, reacts in the presence of chloride, and now you form hypochlorous acid. So these are the kinds of guys, HO dot and HOCl, that are going to actually do the destructive things when they react, and that's what helps us defend ourselves against bacterial insults. So that's the big overview.
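For a feel for the Fenton step just described, here is a back-of-the-envelope flux estimate, v = k[Fe2+][H2O2]. The rate constant is an often-quoted aqueous value and the concentrations are assumed, so the output is an order-of-magnitude illustration rather than a number from the lecture or the review.

```python
# Back-of-the-envelope Fenton flux: Fe2+ + H2O2 -> Fe3+ + OH- + HO.
# The rate constant is an often-quoted aqueous value and the concentrations
# are assumed; everything here is an order-of-magnitude illustration only.
k_fenton    = 76.0     # M^-1 s^-1, commonly cited for aqueous Fe2+ + H2O2
Fe2         = 1e-6     # M, assumed "labile" ferrous iron pool
H2O2        = 1e-7     # M, assumed resting hydrogen peroxide level
cell_volume = 1e-12    # L, roughly a mammalian cell (assumption)
N_A         = 6.022e23

v = k_fenton * Fe2 * H2O2           # M/s of hydroxyl radical
per_cell = v * cell_volume * N_A    # radicals per cell per second

print(f"HO. production = {v:.2e} M/s  (~{per_cell:.1f} radicals per cell per second)")
```

Even a handful of radicals per second matters, because essentially every hydroxyl radical hits and damages something nearby; that is why keeping both the labile iron pool and the peroxide level low at the same time is the real defense.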
And we're going to be simply focusing on two proteins. The first protein we're going to talk about is the one we went through in recitation, NOx2. I'm not going to write down that reaction; hopefully you all know it by now, and I just sort of said it over there. There are a number of different isozymes. And then myeloperoxidase. Both of these are found in the phagosome of the neutrophils. So the chemistry that goes on with the NOx proteins is complicated. It's not just the NOx protein. We're going to talk about the NOx protein, but as with everything, there are other factors that play a key role, and I'm going to show you a cartoon of what the other factors are. We're not going to talk about the details of how those factors help the NOx2 protein make superoxide-- make superoxide in a controlled fashion; that's the key thing, in a controlled fashion. So we have a NOx protein. The only one I do want to look at is the NOx protein itself, because we're going to use it not only in this lecture but also in the lecture on NOx proteins in signaling. The chemistry is the same in destroying the bacteria and in signaling, so you need to know what the protein does. So if you look at the NOx2 NADPH oxidases, what do you know? They exist in a membrane, and we'll see this membrane can be the phagosome, or it can be the plasma membrane, or it can be-- I'm going to show you a cartoon of this-- a vesicle membrane. These proteins are located in many places inside the cell, but they all sort of have the same overall arrangement. They have one subunit with a domain that has the FAD on it. Whether we're looking at epidermal growth factor signaling or at the neutrophils, this side would be the cytosol, and in the neutrophils the other side would be the inside, the lumen of the phagosome. And the FAD-- what is the function of FAD? We've talked about this. It's a major mediator between two-electron chemistry and one-electron chemistry. You've seen that before; hopefully that was grilled into you in the respiratory chain. In the respiratory chain you have iron-sulfur clusters, or you have hemes. Here we're going to have hemes. And so what we have on this face is NADPH going to NADP plus a proton. This turns out to be important, because that also controls the pH, and there are voltage channels controlled by pH that you need to think about if you look at the detailed biology. And so this protein is gp91. That is, it's 91 kilodaltons, and gp means it's a glycoprotein. And then you have a second protein that's also an integral membrane protein, and that's also critical; this is p22. And so what you see is that you have two hemes; this is a cytochrome b-type, heme-dependent system. And these are going to change redox state. The interesting thing is that these two hemes are completely coordinated. So where have you seen a heme before that reversibly binds oxygen? We need to do something with oxygen; oxygen is getting reduced. But what I'm telling you is that oxygen does not get reduced by binding to the heme. It's going to use this method of electron transfer that we talked about. So the cofactors have got to be close enough that you can do electron transfer, perhaps through the heme edge in the protein. Ultimately, oxygen is getting converted into superoxide not by direct binding to the heme. So this is distinct. We'll see this is completely distinct from the myeloperoxidases, and completely distinct from the P450s we alluded to when we were talking about cholesterol homeostasis. And the key that makes all of this work is that it can form complexes with other proteins. So let me just tell you what those other proteins are; that was described in some detail in the reading. We're going to have a GTPase: RAC2 is a G-protein. G-proteins can mediate phosphorylations, and this one mediates phosphorylations. It remains in the inhibited state until you need to trigger off your signaling cascade by another protein. The nomenclature is horrible, but there is an inhibitor protein that binds to the G-protein making it inactive; when some sensor comes in, they dissociate, and then the G-protein can function. And we're going to look at that kind of signaling. We already have looked at that kind of signaling in the Carroll paper, but we'll look at it again in the signaling by NOx. The second group of proteins-- again, they're named based on their size.
These were identified a long time ago. They are unique to the phagosome. They're called phagosome oxidases. That's going to be the organelle where we're going to kill the bacteria. And so this p47 needs to be phosphorylated and it's phosphorylated by the G-protein. And that's key to have everything come together to allow the chemistry happen. So this chemistry, in this form, it's inactive. It's only when everything comes together that you actually start doing the chemistry that's going to help us. So here's the model. So here's a resting cell. This is the nucleus. Here, the blue thing is the NOx protein. And the little blue thing is the second subunit. This is the 91. This is the 22. Here we can see that it's located in the plasma membrane. That's one of its locations. That's not the predominant location in the resting state. The predominant location apparently is in little vesicles within these neutrophils, the white blood cells that are the first defenders against invasion by bacterial systems. And then we have these little complexes. Here's RAC2, a GTPase that's inhibited. And here is the phagosome oxidase. And then what happens when the bacteria comes in, somehow the bacteria is coming in over here, it gets engulfed, and you form these little phagosomes inside the cell. And now what happens is the NADPH, the NOx proteins are located like this. The NADPH is on the outside. It's been activated by this GTPase. And now it's ready to generate superoxide inside the cell. So there's a lot of membrane fusion and reorganization. Obviously, the signaling is really complex. We know a lot about the signaling. How do these guys even know there's a bacteria out there? How do they sense all of that? And I don't know if this is going to work. But this is a sort of a cool picture if it does work. Although it worked in my office, but it might not work here. Oh, here it goes. So here we are. This is a white blood cell. These are red blood cells. The bacteria, these little things, floating around. It's sending off a signal. The white blood cell is chasing the bacteria. So there's some sequences chasing it through all of these cells, and you're going to see that in a minute, it gets it. There it goes. Gets inside. It's now in the phagosome, and puff, everything disappears. And that is really what's going on in the system. So the question is-- it's a really cool picture. The question is what's the chemistry that's actually going on in these systems. So the chemistry-- whoops, somehow I lost-- I'm already over. That chemistry, it's all over already. What we'll do next time is come back and talk about how this flavin works, and then we'll see in that little phagosome also is a myeloperoxidase. We'll talk about how that works, and those are the two things I want-- how did they degrade this bacteria once they get inside this little organelle?
Recitation 8: Application of CRISPR to Study Cholesterol Regulation (MIT 5.08J Biological Chemistry II, Spring 2016)
The following content is provided under a Creative Commons license. Your support will help MIT Open Courseware continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT Open Courseware at ocw.mit.edu. JOANNE STUBBE: Because we haven't gotten that far in class to understand what this protein is that's the focus of the paper, I still think the paper is straightforward to understand. I'm just going to put it into context. So I was having trouble trying to decide what to do. And maybe I shouldn't have done this. But the fact is that this technology we're going to be focusing on in a very sort of simple way, CRISPR-Cas, has taken the world by storm. And that's the take home message from this. So you can sort of get what it does. But really to look at the details, you have to go in and study it. And every time you pick up another journal, you look at Google Journals or something like-- there's another 100, 200, 300 papers published on this. So this is a current technology that has taken off really since 2012. And so very rarely is technology successful in that short period of time. And it happens to have been applied to one of the key enzymes that people are now focused on in terms of controlling cholesterol levels, which is what we're talking about. So I use it as an opportunity to just show you what this technology is. Have any of you ever done this technology? Nobody in the last class had done the technology either. But my niece is a sophomore here. She spent a whole EUROP doing this technology. So this technology has moved into the lab. My lab hasn't used it either. So I probably can't answer any of the details. But it's one of these things that it is extremely complicated. I think I can give you a cartoon overview of how it works. But if you're going to use it, just like every tool, you have to study it in more detail. OK. So I am going to ask you questions. And this is going to be different from the one I did on Thursday, because I spent too much time talking about this article. And then I'll come back to-- how many of you read these articles? How many of you didn't read these articles? OK. So your class is much worse than the other one. The other one had read all the articles. OK. So we won't have a very good discussion about this. And I'll tell you why I think you should read that, but I'm not going to focus on that till we end. OK. So the paper we are going to focus on is this one. And this is the gene product, the protein that has become a focus of attention of many people in terms of controlling cholesterol levels and as an alternative to statins or maybe better than statins, but we haven't gotten there yet. OK. So this is still on the drawing board in there. Many people focused on clinical trials targeting this particular protein. And so one of the questions that this paper focused on and asked-- hopefully, you all have read the paper. It was only three pages, so it wasn't very hard to read. Is this protein important in terms of controlling cholesterol levels? And they did experiments in tissue culture and in mice to try to address that issue using CRISPR-Cas as a way of destroying the gene, the gene which then destroys the protein. OK. So this other article sort of gives you an overview of the kinds of things we need to think about to make the technology better. 
And when technology is introduced-- just like if you look at unnatural amino acids that you guys looked at with the Schultz technology. I mean, first 15 years Peter was doing that, he collaborated with my lab. We never published a single paper, OK? Because the technology was not good. And so now, the technology is still not good. But it's getting there, and it's improved greatly. So you often see something. It looks, oh my goodness, you know, this is going to be fantastic. But the devil is in the details, OK? And that's one of the take-home messages from this course anyhow. OK. So what I want to do is just give you a very brief overview of what I had hoped to get to by the end of lecture today and didn't quite get there. And so we did get to the fact that we made LDL particles. And LDL is transferred in the blood and is a major carrier of cholesterol. So it takes cholesterol from our diet. And it's going to deliver it into different kinds of cells, and it does so. There's a receptor on the surface of the cell. And these little things here, these little flags, are at the receptors and the receptors. That's what we're going to talk about next lecture is low density lipoprotein receptor. OK. And this is the basic. Brown and Goldstein figured out that genetic mutations in this receptor and other steps associated with getting the receptor to the surface of the plasma membrane are responsible for children for the disease familial hypercholesterolemia where kids die at age 7 heart attacks, because of inability to control cholesterol levels. OK. So I need to just sort of briefly walk you through the model, because that model is related to the effect of this protein you were reading about in the paper that you were supposed to read for today. OK. So this lipoprotein can bind to the receptor. This is a plasma membrane. You see there are three receptors. The receptors have to cluster to be successful at somehow, by mechanisms that are moderately well-understood, can engulf the LDL particle and form a little vesicle. And the little vesicle is coated with a protein called clathrin. OK. And we'll see over the course of the rest of the semester this is used over and over again-- so is clathrin-- as a way of taking up nutrients into the cell. So this is a major mechanism of doing that. And then what happens is the clathrin is removed. Biochemically, it's removed enzymatically. And what you're left with is a vesicle that then fuses with an endosome. And so that's a little organelle with lipid membranes that is acidic. And when the LDL protein gets into the interior of this little vesicle and the pH is lower than the normal pH, goes from 7 and 1/2 to 5, the LDL receptor dissociates from the LDL particle. And so then what happens, by mechanisms that are really incompletely understood, the receptors can recycle to the surface. OK? And what happens is that, when some of them recycle to the surface, you're left with an LDL particle that fuses with another organelle called the lysosome. And then this LDL particle goes into the lysosome. The lysosome is sort of like a proteasome. It's a bag of proteases and lipases. So it just degrades everything in there-- amino acids, fats, everything-- allowing you to produce amino acids and cholesterol, free cholesterol. And then cholesterol in the liver is often stored. It gets esterified. And it's stored as triacylglycerol. Fatty acids esterify to cholesterols. OK? 
So the process, of course, of getting the LDL receptor to the surface is done in the rough endoplasmic reticulum. Because it's a membrane protein, it's transferred by things called little coated vesicles. And then somehow these little coated vesicles deliver the receptor to the protein. OK? So this is a very complicated process. And, in fact, mutations that are responsible for heart attacks occur in every step in this process. It's not just the LDL receptor. We'll see that in class next time. OK. So the key thing you need to know for today is that you have LDL receptors that interact with LDL. And that's key to taking the cholesterol into the cell. That's the take-home message. That's fairly easy to understand. OK. So the protein we're focused on today is this guy, PCSK9-- horrible acronym which I've written down. I can't even remember it. But it stands for Proprotein Convertase Subtilisin/Kexin 9. OK. So the important thing is subtilisin. Has anyone ever heard of subtilisin? So that's like [INAUDIBLE]. So it evolved convergently. And so it is a protease that has a serine, a histidine, and aspartic acid, like you learned in protein media degradation in the first part of the module. OK. And this protein was discovered-- we'll see in a minute-- again, because of patients. OK. So the patients presented themselves in a funny way. That's how the LDL receptor was discovered. If you've read Brown and Goldstein's article, which was one of the things I asked you to read, you've already gone through that. That was the thing that got Brown and Goldstein excited about this. What's going on? Why do these kids have heart attacks at such an early age? Can we figure out what's wrong? And can we do something to fix it? And so, here, what happens is this protein is made as a proprotein just like any kind of serine protease. Lots of times you are pre-proproteins. And they process, they usually self-process, into an active form. And why do they have that? Why does a protease have a pre-pro sequence on it? AUDIENCE: So you have, like, spatial temporal control of the sectors? JOANNE STUBBE: Yeah, over activities. So you're controlling the activity. Because if you produce a protease, nobody could ever overproduce proteases. Why? What happens inside the cell? Everything gets degraded. OK. Because proteins have specificity. But if you overproduce them, all your proteins have degraded. So it's not trivial to overproduce proteases. And so they have a mechanism-- hopefully, you learned about that in introductory biochemistry course-- that makes it inactive till you're ready to use it. And then it cleaves itself. Something triggers it, it cleaves itself. And then it's ready to go. And that's true here, too. So here you have this little purple worm that has to auto process to become active. And in some way, it's going to end up extracellularly. And so it's got to go through membranes. So it goes through the Golgi stacks, just like I just showed you with a cholesterol, the LDL receptor. And it gets extruded extracellularly. And that's where it is. It's out there. OK. It's processed from the original version of it. And so the working hypothesis is-- and this was based on a patient. They found a patient where the LDL levels were elevated, OK. And the child that had this had early coronary disease. That is heart attacks at an earlier age. And they studied this in some detail. 
And they found out that what this protein does-- I'm not sure we really understand the details of what the protein does-- was that it could bind to the LDL receptor. OK, so this little orange thing is what you just saw on the previous slide. And this little blue thing is LDL. So, now, what happens is instead of having just LDL, low density lipoprotein and the receptor, you've now got another protein stuck to this. OK. And so when this protein is bound, it also undergoes receptor mediated-- they don't show any steps here. I'm not sure if it's been studied in detail. It also undergoes receptor mediated endocytosis. So it's taken into the cells. And normally, remember, with the LDL receptor, the receptor gets recycled. Here, what happens? Something changes because of this complex. And so now this complex is in the endosome. But the LDL particle, which has a cholesterol, doesn't associate from the LDL receptor. The receptor doesn't recycle. But, instead, the whole gemisch, the protein, the receptor, and the LDL particle, fuse with the lysosome, which is a bag of proteases. And it's degraded. So what are the consequences of that? The consequences of that are that you lower concentrations of the LDL receptor on the plasma membrane. OK? And if you lower the concentrations of the LDL receptor on the plasma membrane, what happens to the low density lipoprotein concentrations? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Yeah, it increases. And so then you're in trouble. OK? So that's the model. Again, I haven't read a lot of papers on this. The discovery was made of this of patients that had that phenotype in 2003. But they also found patients that had a loss of function. And they found out that some of these patients-- they're different kinds of patients. They have different phenotypes. But they had a single mutation. And these patients with single mutation had reduced LDL cholesterol. And they had the same amounts or elevated amounts of the LDL receptor. And because they had LDL receptor, they had lower cholesterol and more LDL receptor to take up the cholesterol, they had reduction in coronary disease. OK? Everybody get that? Why do we care about that? OK. So can somebody tell me from the paper what was the take-home message from the paper? Why do we care about that? What's unique about this particular protein protein, PCSK9, compared to using statins, for example? Did you guys read the paper? OK. So the paper was pretty short. Even if you didn't understand all the details, I thought the paper was pretty easy to understand. So why do we care about? What was the take-home message? Why are we targeting this? AUDIENCE: To change the expression of proteins that create new-- JOANNE STUBBE: PC. AUDIENCE: Yeah. JOANNE STUBBE: Yeah. So but why do we want to do that? We have statins. Statins, you know, everybody's gobbling statins a lot. I mean, you probably know 20 people that take statins. I know many, many people that take statins. So it's a wonder drug in many ways. But when do you start giving statins? When do people start taking-- I'm probably not allowed to ask that. So you don't have to answer if you don't want to. But are any of you taking statins? No. OK. But there could be people that have, you know, high cholesterol. I mean, a lot of it is genetic. I eat McDonald's hamburgers all the time. And I eat huge amounts of ice cream. And I have extremely low cholesterol levels. OK? And it's genetic. OK. Other people might not eat any of that stuff, and they might have extremely high cholesterol. 
So when you see people, maybe your parents, basically, they're taking this. And it's after you have some issue, right? You have coronary heart problem. You have chest pains, whatever. So they start looking for what could be causing that. And the first thing they look for is clogging of the arteries. And that's when they start some kind of therapy like statins. The beauty of this is, if this model is correct that I just showed you, if you could figure out how to remove or greatly reduce that protein, then that would automatically, you know, prevent the normal function of this protein, which is to degrade the LDL receptor in the lysosome. And I'll get to that in the very end. So if you could figure out how to treat, you could diagnose the predisposition to having elevated cholesterol levels and start treating it much earlier. You have a much higher propensity for success compared if you take statins halfway through your life. I mean, there's really good epidemiological data that support that. So people are extremely interested in figuring out-- I don't think we know the details of the function of what this protein is-- but lowering this protein. Because the consequences of that are lowering cholesterol in the plasma. OK? So are we all on the same page? Everybody understand that? Because that's key to thinking about the paper. OK. So the reason I picked this is people these people in this paper wanted to understand is this protein really important. And so what they did was they decided they were going to knock out the gene or do something to greatly reduce the gene, which then would reduce the amount of protein, which then would allow you to analyze the phenotypic consequences. OK? And what was the analysis they used in this paper? They used two different kinds of analysis. Well, we're not in detail. Globally, what did they use? What were their model systems? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: You need to talk louder because I'm deaf. AUDIENCE: For [INAUDIBLE] they used a surveyor as-- JOANNE STUBBE: Yeah, so that used a surveyor on what, though? So that's too detailed. I want a bigger picture. So you're right. They used surveyor cell assays. That's more detail than I want right now. So they looked at it two ways if you look at the figures. So what were the assays? In the surveyor assay, what were they assaying? AUDIENCE: The blood samples. JOANNE STUBBE: The blood samples of mice. So that's one of the things from the liver. OK. So they took liver cells from mice. So they were using animal models. OK. So one of the questions that, if you read the paper carefully, you should be asking yourself-- and this is always a question when you're looking at therapeutics. Is this animal model any good? OK. And then the other way that they were looking at this was with tissue culture cells. Because, in general, when you start studying something, you don't start on humans, or you don't start on whole animals. You need to start on something simpler. And we haven't gotten to this yet, but Brown and Goldstein, if you've read the reading, have used fibroblast cells. And they showed fibroblast cells behave like liver. And it turns out it had great predictive power. It might not have, but it does. So you need some kind of a model system. And so they used both of those systems to try to test the idea that, if you could get rid of this protein, you could alter in a way that these patients, these loss of function patients, behaved in terms of the levels of cholesterol. So that was it. 
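The take-home logic here, that less PCSK9 means more surface LDL receptor and therefore lower plasma LDL, can be captured in a toy steady-state calculation like the one below. Every parameter is invented purely for illustration; this is not a model from the paper, just a sketch of why the loss-of-function patients trend the way they do.

```python
# Toy steady-state picture of how PCSK9 levels feed through to plasma LDL.
# Surface receptors: R_ss = synthesis / (k_turnover + k_pcsk9 * PCSK9)
# Plasma LDL (steady state): LDL_ss = production / (k_clear * R_ss)
# Every parameter is invented for illustration; this is not the paper's model.
synthesis      = 100.0   # receptors delivered to the cell surface per unit time
k_turnover     = 0.10    # basal receptor removal rate constant
k_pcsk9        = 0.05    # extra lysosomal routing per unit of PCSK9
ldl_production = 50.0    # LDL particles entering the plasma per unit time
k_clear        = 0.01    # LDL clearance per surface receptor

for pcsk9 in (0.0, 1.0, 2.0, 5.0):   # relative PCSK9 level; 0 = knockout
    receptors  = synthesis / (k_turnover + k_pcsk9 * pcsk9)
    plasma_ldl = ldl_production / (k_clear * receptors)
    print(f"PCSK9 = {pcsk9:3.1f}   surface LDLR = {receptors:7.1f}   relative plasma LDL = {plasma_ldl:6.2f}")
```

Driving the PCSK9 term toward zero maximizes surface receptor number and minimizes plasma LDL in this cartoon, which is the qualitative behavior of the loss-of-function patients and the motivation for the knockout experiments in the paper.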
And so how did they decide to do this? And so I would say, in general, we don't talk about this kind of stuff very much in 508. But if you're ever going to be a biochemist, you can't do biochemistry without being able to do gene knockouts inside the cell. So 25 years ago, that was tough, OK? In the mid-1980s, you could first do that well in bacteria. We still do a lot of that in my lab. It takes four months, three, four months. It's not easy. With the older technology, it works. But it's a rare event, and you've got to screen through a lot of things to find the ones that are interesting. And this technology, CRISPR-Cas, allows you to do this in a couple of days. It's revolutionized what you can do. So you might be studying something really complicated in the test tube. But the question is is what you're studying relevant to what's happening in the cell. And so if you're asking a chemical question, a mechanistic question, like how does isopentenyl pyrophosphate do its chemistry, you don't need to do that in a cell. You can do that in a test tube. If you're asking how things are regulated, which is what we're doing now, you must be in the cell. And the issues within the cell are that people overproduce stuff. You know, they have to mess around, so they can see something. And whenever they do that, they change everything. So the future, for anybody that's interested in biochemistry biology interface, is you've got to be able to do both. And so this technology, I guarantee, in some form you will be using if you pursue a career in doing biochemical and biological studies. OK. So the question really is we want to do manipulation of a gene. OK? And so people have wanted to do this forever. So you might want to delete the gene and see what the phenotypic consequences are. You can do that. You know, there's been technology around. They won the Nobel Prize for the technology in 1983. But, again, it takes months. And you have to screen through millions of cells to be able to figure out which one has your gene deleted or another gene inserted in place of the gene of interest where you've modified the gene of interest, which then gives you information about the function of the protein. And so having technology that can turn around rapidly is important. And so I'm just going to show you what the state of the art has been up until two years ago. And, really, they also work by the same mechanisms. It's just the CRISPR-Cas, even though it's really still early days, works much more efficiently. OK. So the idea is you have a piece of DNA that you care about, and you want to cleave it. And all of these cleavages are double-strand breaks. So double-stranded breaks are lethal to the cell, so you have to repair them. OK. And you have to have a way to repair them. And I'll show you what those two ways are. You've all seen it in some form. But you want to have cleavage at a specific site. And then when you have cleavage, the question, if you repair that, can you delete part of that gene which would make the entire gene inactive? Or, can you replace, in this cleavage site, a gene of interest with a mutation in it, et cetera? OK. You can do many, many, many genetic engineering projects, which are sort of covered in review articles. The more creative people become, the more things you can actually do. OK. So how do you do that? So what they do in the case of the zinc fingers, does anybody know what a zinc finger is? Has anybody seen a zinc finger before? 
So a zinc finger is a little small protein, I don't know, maybe 70, 80 amino acids that combine zinc and that its sequence specifically binds DNA. OK. That's a major way of regulating transcription inside the cell. OK. And there's not just one zinc finger. There are many, many zinc fingers. OK. So what people have done is taken these little motifs that combine zinc and designed these motifs, so they recognize a sequence. So these guys, these little zinc fingers, now are targeting the DNA that you want to cleave. So they're targeting it here, and the targeting it here. So what that means is every time you want to do an experiment like this, you have to make a little zinc finger. String them together to get enough binding affinity, so you get specificity. That's really key. And you could do it. We're pretty good at this, but it takes months. And so what they do is, once they have these binders, then they attach a nuclease. OK. So Fok1 is the nuclease. So that just means you're cleaving a phosodiester bond over your nucleic acid. And you cleave on one strand. And on the other strand, you have a double-standard break. And these enzymes work by giving you blunt-ended cleavage. There's no overhangs in the DNA cleavage. So people have used this for a long time. In fact, Carl Pabo at MIT, who's an X-ray crystallographer that studied regulation by zinc finger transcription, was one of the people that founded the companies that got this technology off the ground. But it's hard. OK. So the second technology which I think is much more widely used-- but I think it'll be completely displaced. I might be wrong. You can buy a kit Golden Gate, TALEN kit. That's right. You can buy it from some company. And it's the same idea. So, I mean, I don't know anything about this in detail. But it turns out that these little proteins, which are 34 amino acids, you can actually look at a sequence of DNA and design 34 amino acid repeats in a way that it can bind to double-stranded DNA. OK. So this is like, so you have a double-stranded DNA helix. You string a bunch of these little domains together. And you can actually design these little domains, the sequence of these little domains. And it forms a super helix around the double-stranded helix. So the protein forms a helix around the nucleic acid helix. And what it does is it targets the nuclease for cleavage. So it's the same idea. It's just the mechanisms of targeting are different. And so, I mean, they have structures of these things. It's sort of really an interesting problem in molecular recognition if any of you are interested. But I would say, if you want to use this to do something biochemical and biological, you probably want to go to CRISPR-Cas system now. OK. So both of these are the same. They have a nuclease and something that targets it to a DNA sequence of interest. And if you've read the paper on the PCSK9, that's exactly what they're doing. They're targeting a sequence for double-strand cleavage. OK. So this then brings us into the CRISPR-Cas system. And I've given you a hand out of this which is, again, a simplification. Now, I think there are six different moderately well-studied CRISPR-Cas systems. They're all different. So they all have different numbers of proteins. Although, the idea of how they work is pretty similar, I think this has turned out to be the best behaved in terms of biochemically putting it back together and having it work. OK. So what do we have here? So, hopefully, you all know now that what you need for this to work is a Cas9. 
What's Cas9? AUDIENCE: The CRISPR-associated [INAUDIBLE]. JOANNE STUBBE: So a CRISPR-- is that what the acronym is? AUDIENCE: ...it's a nuclease. JOANNE STUBBE: Yeah. It's a nuclease. OK. And what's special about this nuclease? AUDIENCE: Sequence-specific. That's like-- JOANNE STUBBE: It has what? AUDIENCE: A guide RNA that makes it-- JOANNE STUBBE: No. So just the nuclease, we're just talking about the protein now. We do have to worry about that, yeah. So if you look at the Cas9 sequence, what do you find out? That's not in the paper, but-- AUDIENCE: So it's got two different regions that can bind the two different strands-- JOANNE STUBBE: Right. AUDIENCE: --and, like, [INAUDIBLE] in a different [INAUDIBLE]. JOANNE STUBBE: Yeah. So you have two different nuclease domains. OK. I mean, this is not necessarily a given. One is going to go to one strand, and one is going to go to the other strand. OK. And we'll talk a little bit about that. And then, as you were saying, what's unique about this? In this picture, what's wrong with this picture? If you read the original discoveries in the bacterial system, what's unusual about this particular-- well, I guess-- OK, no. It's OK. OK. So what do you have here? What is this part? This should be tracr. What's tracr? AUDIENCE: It's the transactivator. JOANNE STUBBE: Yeah, so it's the transactivating RNA. OK. And then what's the gRNA? AUDIENCE: The guide. JOANNE STUBBE: So that's the guide that is part of this bigger piece of DNA that we're going to look at in a second. OK. So what you need, although this isn't what people use now for the technology, is two pieces of RNA. And you need the target for double-strand cleavage. And you only need a single nuclease. OK? And the key question is how do you make them assemble, and how do you make it as simple as possible, so that you can use this in bacteria but also use it in humans, which is what Eric Lander focuses on. So CRISPR, and we'll look at this in a minute, has this horrible name: Clustered Regularly Interspaced Short Palindromic Repeats. OK? So that's the name. And this just summarizes-- and we're going to come back to this in a minute-- that all three of these methods, the zinc fingers, the TALENs, and the Cas9 system, all do the same thing. They somehow recognize double-stranded DNA and cleave it, OK? And so they all give you a break in the DNA, which is lethal if you don't figure out how to deal with that break. OK. And there are two ways to deal with that break. There are two ways of repairing the break that we're not going to talk about in detail, but you probably have heard about somewhere. So what's the way that they deal with this double-stranded break in the paper? Did anybody read the paper carefully enough? And how do they know? So, somehow, you've got to put these things back together. Otherwise, your organism is completely dead, which, for the bacteria, is the whole point of having this CRISPR locus: they want to kill the invading virus. OK. But in this particular paper, which one of these two methods did they show, or did they propose from the data that they talked about, was involved in repairing the double-stranded break? If you look at this paper, they describe non-homologous end joining. Because in the end, if you looked at the paper carefully, when they were trying to tell whether they successfully got a double-stranded cleavage, they did a lot of polymerase chain reactions to figure out whether they got specific or non-specific cutting.
And when they did the sequencing on this, they could tell, because of the different mechanisms between these two, that most of the damage was repaired by non-homologous end joining. So what happens with this approach? What happens with this approach is that the repair is putting the things back together. When you have blunt ends, you've lost the information from the sequence. And you have a disconnect and, if you got a couple of cleavage sites putting them back together, is really tough. And so when you put them back together, you might have an insertion. You might have a deletion. You might have a frameshift. You get a mess. But then when you look at the very ends of your gene using the polymerase chain reaction, what happens is you get a mixture of things. And you can sequence them, so you can tell something about how the repair happened at the double-stranded break. So if you have a double-stranded break, OK, so the question is here do you have a deletion, so it's a little bit shorter. Or, do you have an insertion? Or, do you have an rearrangement? And what you do then is sequence these things using PCR. And then you can get information about the mechanism of repair. OK? So the alternative mechanism-- and this is really important if you want to replace one gene with another gene, a whole gene, rather than just removing the gene, which is what happens here. Here, you've made a cut in the middle of this chain. You've removed a few amino acids. Or, you a removed amino acid, and it's rearranged a little bit, so the protein is never going to get formed. Here, what you're doing with the homologous repair is you have a template. OK. So if you don't know anything about homologous DNA repair, you need to go back and look into it, your basic textbook, and at least read the definition of what's going on. But you have a template. Once you have a template, you can copy that template and replace one gene with another gene. So this template becomes really key in replacing, site specifically, one gene with another gene and, as a consequence, one enzyme or protein of interest with another one. OK. So this was taken from an article by Jay Keasling. And Jay Keasling is interested in synthetic biology. He's an Artemisinin in fame. We talked about that in class. That's the anti-malarial agent. He's also been a major player in trying to figure out how to make bacteria use mevalonic acid pathway, which is what we're talking about in class, to make jet fuel. OK. So how do you make hydrocarbons that are really energy efficient compared to ethanol or butanol? And so his whole lab is focused on figuring out how to use CRISPR-Cas to engineer genes from many different organisms back into the organism of choice. And this technology, apparently, allows you to do five or six genes simultaneously once you figure out how to do it. And so you can do a lot of manipulation in a really fast time compared to the months it used to take before. And so what does this tell us? I mean, I think this is the most amazing thing. If you read the Eric Lander historical perspective on the discovery of CRISPR-Cas, there was a guy in the late 1980s that lived in Spain and did all his research in a salt marsh. OK. And he got really interested in these archaebacteria, really weird bacteria. I don't think they're weird, but most people don't really think about they have really interesting chemistry. And when he was sequencing part of this, for some reason, he found palindromic repeats, many palindromic repeats. 
And that's those purple spacers. And he says, well, what that heck is going on with that? What is this? OK. So he had discovered this locus in the genome. OK. Now, bioinformatics over the years, if you read the history of this, played a huge role. So you go back, and you look at all of these sequences. You find even an E. coli, you have these little spacer repeats over and over again that are palindromic. And so then the community got really interested in why you would have a locus that looks like this. And what they found is, if you start looking at the genes on either side of it, you found genes that were conserved, that coded for the Cas9 protein. In this case, S. pyogenes is the one that was used in this paper, which is the nuclease. And they also found this transactivating RNA. And what's interesting about the transactivating RNA is it has a sequence that's homologous to one of the sequences in the spacer. OK. So they've got to be able to hybridize to each other. OK. So that started. It became very interesting. And the question was focused on how is this editing going to happen when you get cleavage of your gene. Or does this act like, for example, in SI or an SH RNA in controlling levels of gene expression? And so there were many people that contributed over the years to figure out how this locus is used. And that's what I'm going to briefly describe. So the idea is the following. And I think that discovery, in my opinion, is really a seminal discovery by some guy who was working in a marsh working on some bizarre archae, made this discovery, followed it through for the next 15 years, and discovered that bacteria have adaptive immunity systems. I mean, that's really sort of mind-boggling. I remember when this paper was published, a first paper was published where they knew this was happening. Somebody in my lab gave a group meeting on it. And my mouth just dropped to the floor, because nobody predicted this at all. This is what I would call a revolutionary discovery. And what they found was-- and, again, this is the bioinformatics data analysis now, which we can do better and better and better. What they discovered-- they were looking for what's in between these spacers, OK? So what's in between these repeats? I'm calling it the wrong thing. These little purple things are the repeats. Again, they are the palindromic sequences over and over and over again. What is in between the repeats of the spacers? OK. Where do the spacers come from? Well, they didn't know. OK. But when they started looking at sequences of many of these things, what they found was they came from phage, viruses. OK. So, here, they have a bacteria, and they have phage DNA. Because people would sequence a lot of, at that time, phage DNA. And so what happens is the virus, or it could be a plasmid born piece of DNA where information is transferred from one bacteria to another, they get into the cell. And then they have proteins. And these, again, are Cas genes. And people are still studying these that take the viral DNA or the plasmid DNA and cut it into little pieces and somehow insert it between these repeats. So all of these spacers are different sequences of DNA that come from the invading species, the virus, or a piece of plasmid that you got from another bacteria. And so that became really exciting. OK? And so then the question is, how do you take all this information and convert it into something that can kill the idea-- if you have adaptive immunity, how do you use this information to kill the virus? OK. 
So what we now know happens is that the DNA can be transcribed into RNA. OK? And so you have this piece of RNA with the repeat and the spacer. And then you can also transcribe the transactivating RNA. And they form stem-loop structures. That's what those little things are. So they have palindromic sequences-- so, you know, they base-pair; that's why they draw the picture like that-- and Cas9. OK. And so what we know now is that this strand of RNA, this pre-CRISPR RNA, can interact with the transactivating RNA. OK? So they have a way of hybridizing to each other. And that's what you see here. So you see this little purple repeat. And you see the hybridization there. And these two pieces of RNA can bind to Cas9. OK. So Cas9 is the nuclease. And in this particular type of CRISPR, there's a ribonuclease, RNase III, which takes off all this stuff. So you only have a single spacer that's actually going to be recognized. OK. So you typically could do this. You could do this again with different pieces of DNA and make many of these things, OK, and do many cuts. And so that's why people in engineering are excited about this. You can do more than one cut at once. But we're just going to focus on a single set of cleavages-- double-strand cleavage-- with one spacer. And in this case, the spacer is brown. OK? So we trim it. OK. And so this is our machine: two pieces of RNA and a protein. And then it goes searching. So what happens? The virus invades. The virus has this sequence somewhere in its genome. Somehow, the bacteria knows the virus has invaded. It makes this machinery. It goes searching for this sequence. This sequence then is recognized, because it can hybridize to one of the two strands of the DNA. OK. And the nuclease then simply cuts it in two pieces. So the idea is simple. I mean, obviously, this is an extremely complex process where it's going to be regulated at every step along the way. But, somehow, bacteria have figured out that, you know, if you have a virus that infects the bacteria, what often happens is the virus causes the cells to lyse. And the bacteria is dead. OK? So that happens. So to save yourself, you want to get rid of the virus. OK. And so this is a way that bacteria have evolved to be able to kill this invading virus that would otherwise kill them, which is what adaptive immunity is all about. OK. So this is the model. And so then what people have been focusing on, and what was focused on in this paper, is Cas9. OK. So the idea's easy. It cleaves double-stranded DNA and gives you blunt ends. Furthermore, it knows where that occurs relative to a P-A-M site-- a protospacer adjacent motif. It cuts in a certain region. So they've studied all of that. They know where it cuts. I'll show you that in a minute. And then if you want to target any gene inside the cell, you now can put in the right spacer. OK? Then you put the whole thing together. Now, the key issue is getting all of this stuff-- the protein and the two pieces of RNA, which are going to go in as DNA-- into the cell. Because if you can't get it into the cell, you can't do the double-stranded cleavage. So how did they get this into the cell then? Anybody notice that in the paper? AUDIENCE: Adenovirus. JOANNE STUBBE: Yeah, so adenovirus. So people are trying to do gene replacements using all kinds of methods. None of this is trivial. In this case, they're working on a mouse liver. And adenovirus-- I don't know very much about adenovirus. But, apparently, it likes to live in the liver.
So that's one of the reasons they chose looking at the mouse liver, but it happens to also be where all the cholesterol metabolism or the predominant cholesterol metabolism occurs as well. And so they wanted to try other things. In the end, you're probably never going to be able to use adenovirus. People have been trying to do that for years for gene replacement without success. So a key issue is going to be how do you get this into the cell, I mean. And so that's what a lot of people are trying to focus on now. But to do this in tissue culture, you can do it without any problems. There are ways of getting it into the cell. OK. So this is what we were talking about. Once you get the cleavage, you know, you can repair the cleavage by this method. If you want to read about this, you can go read about it. But what this does is gives you a deletion of your gene. Or, if you have a template, then you can use this template to remake a protein with a mutation in it, for example, or with a tag on the end, so you can purify it by affinity column, chromatography. And so then the question is in almost all cells-- and it depends on the organism-- you have both mechanisms of repair. And so one of the issues is how do you tweak the repair depending on what function you want to use the technology for to do this or to do that. And a lot of people are studying that. That's one of the focuses of many labs if you look at what are the issues, where are we going. OK. So if you look here, we were just talking about Cas9. So the blue is one nuclease. And the green is another nuclease. And in this case, this is the double-stranded DNA target they're after. And this little piece here, this TGG, is called the PAM site. And that's required for recognition by the Cas9 protein. And do we understand the basis of that? The answer is yes. I don't know. Did any of you hear Jennifer Doudna talk? Yeah. So, I mean, she's the one that discovered that. She just published two weeks ago the structure of this complex. And so, you know, I haven't had time to study it. And this is the kind of thing that, if you really want to use this, you got to roll up your sleeves and get in there and study it. But what they did was instead of having the guide RNA and the transactivating RNA, they put them together. And so that makes the genetic engineering simpler. But the question is how do you put them together. OK-- not trivial. OK. And the whole Eric Lander article about the history of this process, the Doudna, Charpentier group, figured out how to do this extremely efficiently for bacteria-- doesn't work in humans. So Lander's article is focusing on, you know, humans is much more important. And so Zhang, who is at MIT, had figured out another way. You can't just use these two little pieces of RNA. You need something much bigger to have the Cas9 work successfully inside the cell. And what's the basis of that? I don't know. But they did a lot of experiments to figure out how you could get this to work most efficiently. So these are the two partners. And the question is what's going on. And, you, know, frankly I haven't even had time to digest. My lab hasn't used this technology. I haven't had time to digest it. But what you see-- let me just point out one other thing. There are two domains. So you have the nuclease domain, which are not contiguous. And then you have a helical domain. 
And Doudna used Cryo-EM, which we've talked about at 25, 30 angstroms resolution-- not particularly good-- to show that when you started with Cas9, but you added the two pieces of RNA, you got a change in conformation. They could see that in the Cryo-EM, because it was huge. OK. And then when you add the targeting DNA, what happens is the nuclease domains change tremendously. The conformation of the protein changes tremendously. Putting everything together, you have the double-stranded DNA. So here you have the double-stranded DNA. Here, it's hybridizing to the guide RNA. And this is the tracr transactivating RNA. And the Cas9 simply surrounds this whole thing. That's what she talk about in the lecture this past week or past two weeks. So this was at 30 angstroms. And they had that model. And this was the result of an atomic resolution structure that just came out a week ago. And it just shows you, I've told you, you know, you have to separate the strands. I've given you a cartoon of that. And you need to stare at this a long time. But green is one nuclease. Blue is another nuclease. One can see that the blue and the purple are the DNA. And you can see the two strands separating, because they need hybridize to two different parts of the guide and the tracr RNA, which is in orange. And so what they're doing is looking at a model for how this works. OK. So the key issues, all of which were covered in this paper in some form, are shown here. And these are the key issues that everybody is facing. And I'm already over. But the delivery method into the cell, OK, we talked about that. They use adenovirus in this paper. So I would suggest you go back and you look at what they did. OK. Off target effects, did they look at that? Does anybody know? Did they look at that in this particular paper? Yeah, they did. And so how did they figure out what's going to be off target? How did they choose what to look for? I mean, you know, you got a billion base pairs, right? So how did they tell what to look for? One of the first things they did is what? How did they target the PCSK9? How did they figure out what to target? Can anybody tell me that? All right, nobody can tell me that? I guarantee you're going to have this on the first exam. You're going to have something on this in the first exam. We'll see if you go back and you read this. So the whole paper, really the whole first couple figures, is focused on how do you decide what to target in the PCSK9. And they looked of exon 1. And they looked at exon 2. And then they did experiments to try to look at that. That's how they got this idea about what the mechanism of repair was. And the sequence they targeted, they then look for other sequences that were three or four base pairs different. And then they also did PCR reactions on all those genes to see if you got cleavage or not. So if this is ever going to be used technologically in humans, which is the goal of this paper-- you know, we're very far removed from that. We have a lot of ethical questions. The bottom line is you need to remove all the off-target sites. You need to control, as we've already talked about, the two methods of repair of the double-stranded breaks. And I think now that we have structure, we ought to be able to even better design these three pieces to make more efficient chemistry of cleavage. I mean, it's amazing how efficient this was. So they did the whole thing in four or five days. I mean, that was really quite amazing. 
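To make the target-and-off-target logic above concrete, here is a small, purely illustrative sketch of the kind of search involved: find 20-nucleotide spacers sitting immediately 5' of an NGG PAM on one strand, note the approximate blunt-cut position, and compare a chosen spacer against a near-matching site. The sequences, helper names, and mismatch check below are made up for illustration; this is not the pipeline used in the paper.

```python
# Illustrative sketch: scan one strand of a sequence for 20-nt spacers
# adjacent to an NGG PAM (the S. pyogenes Cas9 requirement), and do a
# crude mismatch count against another candidate site.
import re

def find_spacers(seq, spacer_len=20):
    """Return (spacer, PAM, approximate cut index) for each NGG PAM on this strand."""
    hits = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):   # overlapping NGG matches
        pam_start = m.start(1)
        if pam_start >= spacer_len:                 # need a full spacer 5' of the PAM
            spacer = seq[pam_start - spacer_len:pam_start]
            cut = pam_start - 3                     # blunt cut ~3 bp 5' of the PAM
            hits.append((spacer, seq[pam_start:pam_start + 3], cut))
    return hits

def mismatches(a, b):
    """Count positions at which two equal-length sites differ."""
    return sum(x != y for x, y in zip(a, b))

exon = "ATGCCGTACGGTTAGCATCGGATCCGGTAGCTAGGCTTACGGATCGGTACG"  # made-up stand-in
for spacer, pam, cut in find_spacers(exon):
    print(spacer, pam, "cut near index", cut)

# Crude comparison between a chosen site and a near-matching off-target candidate:
site_a = "ACGGTTAGCATCGGATCCGG"
site_b = "ACGGTTAGCATCGGATCAGG"
print(mismatches(site_a, site_b), "mismatch(es)")
```

A real design also typically scans the reverse complement, searches the whole genome for sites within a few mismatches, and weights mismatches near the PAM more heavily, which is the spirit of the three-to-four-base-pair off-target check described above.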
And so what I suggest you do now in the rest of the paper is, I think, straightforward. It just tests this model by looking for what happens to low density lipoprotein cholesterol. What happens to the receptor? What happens to cholesterol levels? What happens to triacylglycerol levels? And does it conform to the model that people have for the function of this protein in controlling cholesterol levels? So what I would suggest you do is you go back now. Hopefully, you're now interested in this a little more. And go back and read this. And if anybody has any questions, they can come back and talk to me.
MIT_508J_Biological_Chemistry_II_Spring_2016
R10_MetalBinding_Studies_and_Dissociation_Constant_Determination.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: ...going to do today is some kind of superficial overview and focusing on things that are important to consider when either designing experiments that probe binding, or also when reading about experiments done by others, and thinking about their data and how the data was fit. So the readings this week were some excerpts from two different types of review articles-- so the Wedd paper and the Giedroc paper. And I guess, just to start, what did you think about these readings and the reviews? What kind of impressions? Did you like them or not like them? How are they different, and all that? AUDIENCE: I liked when they went into certain considerations you need to keep in mind. That was kind of helpful with the longer one. ELIZABETH NOLAN: All right, in the Wedd. AUDIENCE: The longer one? ELIZABETH NOLAN: So in the Wedd one. AUDIENCE: It was like there was a 15-page one. Yeah, yeah, this one. ELIZABETH NOLAN: Yeah, this one. AUDIENCE: Yeah. ELIZABETH NOLAN: Great. Challenges of different-- determining metal-protein affinities. AUDIENCE: Mhm. Understanding, like, pH effects, which wasn't something I'd thought about it in a while, I guess. ELIZABETH NOLAN: Mhm. Mhm. AUDIENCE: I really enjoyed the manganese review because I haven't been introduced much to metals in biology. That's kind of like where I want to go, so I really liked this review. ELIZABETH NOLAN: OK, so that's good, a good introduction to one aspect of the field of metal homeostasis. Any other thoughts? OK, so what I would say in terms of why we selected excerpts from these two papers, one, as Alex mentioned, this review by Wedd, it's extremely comprehensive. And the introductory parts give some really good, just brief and clear summary about considerations and pitfalls that happen when people are studying metal-protein interactions. And so right off the bat, there's an emphasis on some important things to think about when either designing your own experiment or reading about experiments done by others. And then we didn't assign this whole paper, but one of the great things about this review article is that there's this systematic consideration of many different types of binding problems. And the considerations are applicable to more than just a metal-protein interaction. But if you think about biochemistry in broad terms, there's many different types of binding problems. So it's just something to keep in mind for a resource if you ever need that down the road there. And then the other one is very much looking at the biological system and competition between host and microbe for metal nutrients. And so there's a lot of questions involving metal-protein thermodynamics, so what are relative affinities? There's also questions about kinetics there that aren't-- they're not really addressed in this. But a lot of effort these days is going towards trying to understand these metal transport systems and also host defense factors that are involved in this tug-of-war. And also it relates to topics that will come up in lecture. Joanne will be focusing on iron homeostasis and heme, but many of the concepts are similar. 
And another nice thing about this Giedroc review, and it's something that will come up as we talk about binding experiments more, is this figure 5. So they're talking about a technique called isothermal titration calorimetry and using this method to determine binding affinities. And they've done a lot of simulations. And so if you ever end up thinking about binding problems or doing experiments to look at binding, you can begin to have a qualitative appreciation for what data mean by studying simulations like this one. Or in the packet I've made, there's one looking at, say, optical absorption spectroscopy and what a titration curve will look like for different systems that have different affinities between a metal and a ligand there. So basically, what we'll do is consider just a simple one-to-one bimolecular complex in recitation today and talk about determining a dissociation constant value, which is often how biochemists report affinity-- different methods, and a lot of the things one needs to consider experimentally when studying metal-protein equilibria. And again, many of these aspects apply to other types of binding problems. So it could be protein-small molecule, a protein-protein interaction, protein-DNA. Some are specific to metals because they have their unique characteristics and behavior. OK, so if we think about a simple, bimolecular, one-to-one complex, all right, we have a metal, M. And often we think about a metal as being a Lewis acid. And then we have some ligand, L, and we can think of that as a Lewis base, forming a complex, ML. Often we talk about free versus bound-- so free metal or free ligand is metal or ligand that's not complexed, versus bound because it's in a complex. And today we'll think about the ligand as being some protein that has a site for a metal, but it could also be a small molecule, for instance. So in introductory chemistry, we typically talk about affinity constants for equilibria. In biochemical experiments, people often report affinity as a Kd, so a dissociation constant. So if we think about the equation for Kd, we have the concentration of the complex-- I'm sorry, that's the Ka. For the Kd, it's the concentrations of free metal and free ligand over the concentration of the complex, so the Kd also equals 1 over the Ka. And you can also think about the Kd in terms of a ratio of the rate constants for dissociation and association here-- k off over k on-- just as different ways to show this. So if we just look at this equation here, the units for a dissociation constant are concentration, so the units could be anything from millimolar, micromolar, nanomolar, et cetera, here. And if we think about a system having increased affinity-- so let's say the protein affinity is high-- that is a lower Kd. So a protein with a nanomolar Kd value for a metal, that's higher affinity than a protein, say, with a micromolar or millimolar affinity for that metal here-- so lower Kd, higher affinity. So what is the common data fitting that we might see in a textbook or in some experiments? We think about it as very similar to thinking about steady-state kinetics in terms of the plot and the equations. So imagine we have some protein, so we have our ligand. And we titrate in some metal, let's say a plus-2 metal. And we have some measure of response to see formation of that complex. So maybe it has a color-- like it's a protein that binds cobalt, and cobalt gives some new d-to-d transitions that we can monitor-- or maybe it's some other method here.
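For reference, the relationships just described can be written out compactly. This is the standard summary of a 1:1 equilibrium, not tied to any particular protein or metal:

```latex
% 1:1 equilibrium: M + L <=> ML
K_a = \frac{[\mathrm{ML}]}{[\mathrm{M}][\mathrm{L}]}, \qquad
K_d = \frac{[\mathrm{M}][\mathrm{L}]}{[\mathrm{ML}]} = \frac{1}{K_a} = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}}
% Fraction of ligand (protein) bound as a function of free metal:
\theta = \frac{[\mathrm{ML}]}{[\mathrm{L}]_{\mathrm{tot}}}
       = \frac{[\mathrm{M}]_{\mathrm{free}}}{K_d + [\mathrm{M}]_{\mathrm{free}}}
```

The second expression has the same hyperbolic form as the response equation that comes next.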
OK, so we can have a response that tells us about formation of the complex versus the concentration of free metal, here. And say we get something that looks like this. What we can say is that the response equals a constant times the concentration of free metal over the Kd plus the concentration of free metal here. So effectively, we get the Kd. So similar to thinking about KM in steady-state kinetics, but keep in mind the Kd and KM are two different things here for that. So if we think about this type of plot and we think about setting up an experiment, so say we have a protein and we want to determine its affinity for some metal. What do we need to know? AUDIENCE: A different concentration for what you're putting in. ELIZABETH NOLAN: Yeah, well, that's for sure, right? So you need to know the concentration, one, of the protein in your cuvette, or in whatever sample hold you're using, and then the concentration of the metal you're titrating in. But beyond that, based on this equation, what do we need to know? AUDIENCE: You're trying to determine Kd. ELIZABETH NOLAN: Yeah. AUDIENCE: Then we need to know M-free. I'm not sure what B is. ELIZABETH NOLAN: Yeah, so this is just-- I mean, think back to steady-state kinetics, right there. So I'm just putting it in as that because we don't know what this response is. But imagine you normalize the data to 1 such that your maximum response is 1. B would be 1. Yeah, so metal-free, so you just mentioned the concentration of metal you're adding in. So let's say you add in 1 micromolar of a metal, and you have 10 micromolar of protein. What is your free metal concentration? AUDIENCE: But you subtract the balance from the total metal. ELIZABETH NOLAN: Yeah, right. So the total is the metal going in, and then you have free and bound. Is it easy, always, to know what this is? AUDIENCE: Probably not. ELIZABETH NOLAN: Yeah, not always, right? And is there always free metal available, right? So this is something we're going to talk about a little bit moving forward. And so what we'll see is that this equation's great. In many instances, it can't be used because we don't know what the free metal concentration is, or we're in a regime where we don't have any free metal concentration. OK? AUDIENCE: Is this response [INAUDIBLE]?? ELIZABETH NOLAN: No. No, these are thermodynamic measurements, all right? So this-- let's say, for instance, that there is a system where, in the absence of metal, it's colorless. And one of the wonderful things about many transition metal ions is that they give us color. So imagine you add in a metal and you end up getting some transition. So perhaps this response is Amax at each addition of metal. So some sort of colorimetric titration. That's one example. You could also imagine using some sort of spectroscopy. And say there's some specific signal for your metal-bound protein that differs from the free metal there, and then you could use that and quantify it. So for instance, EPR NMR, any method like that, MCD, here. So no, this is not a rate here. This a response versus the concentration of free metal. And you see as the concentration of free metal increases, we're seeing an increase in whatever this observable is about the system here. So let's just consider a case. Let's just say we have something like this-- some UV this titration. And we have our ligand. And we titrate in some metal. And how do we often plot this? Let's say we have the ratio of metal over the ligand. 
And here, let's say we have some change in absorbance at some wavelength, like what we have here. And we'll take one extreme case. So here, let's say you get data that looks like this. OK. So you've done some titration. You've added some aliquot of metal. You let the solution equilibrate. And then you read the optical absorption spectrum. And so what do we learn from something like this? So what do we see in these data? AUDIENCE: There's a point of saturation. ELIZABETH NOLAN: Yeah. So something's happening here, right? So what we see is that over this regime-- which as I've drawn this, we have a ratio of metal to ligand of one. We see that this change in absorption occurs. It's quite linear. And then once we had a ratio of one to one, we see that there's no more increase in absorption observance at that wavelength. It plateaus. So what does that tell us about the interaction between this protein and the metal? AUDIENCE: Probably that one binds to one. ELIZABETH NOLAN: Yeah, right? This tells us something about stoichiometry, first of all-- that for whatever is causing this particular change in the spectrum, we see that change happens to one equivalent of metal and stops, which gives indication of a one to one stoichiometry here. What else does this tell us? So if you see something like this. What's happening in terms of the free metal concentration over this regime? So when there's less than one equivalent of metal added, where is that metal? AUDIENCE: It's probably with the protein. ELIZABETH NOLAN: Yeah, right? It's with the protein. So it's bound effectively. This is evidence for some sort of high-affinity complex, because what you see is that the absorbent change occurs up to one equivalent, and then it stops. Right? So we can contrast that to something like a case where it's more of a curve, like what we see up here-- where it takes more than one equivalent of metal to saturate that site. In this case, maybe it's one to one stoichiometry. Maybe it's something else. You need to do some more experiments to see. I say this is some high-affinity complex. So we have no or negligible concentration of free metal. Question one is, what does high affinity mean in terms of a range of Kd? And secondly, if there's no free metal, what are we going to do in terms of determining a Kd value? So what do we think of as high-affinity binding? AUDIENCE: Nanomolar? ELIZABETH NOLAN: Yeah. So that's pretty good, right? Nanomolar or lower Kd. So something like this, what happens if you see data like this is that typically, you'll say, OK, this indicates we have a one to one complex. And the dissociation constant has an upper limit that's typically in the regime of 10 nanomolars. So that sets the upper limit, right? It could be orders of magnitude lower, but we can't see that in these data here. And so that's something to watch out for when looking at how people analyze binding data, because sometimes, a Kd is reported as an absolute value from a direct titration. So this is what I would call a direct titration, meaning that we only have the ligand here, and the metal is titrated in, or whatever the binding partner is. OK, but if you're in a regime where you're just getting an upper limit, that value is just an upper limit. And it could be one nanomolar. It could be 10 picomolar. It could be femtomolar. There's some more experiments that need to be done to sort that out here. So let's just say we have a case where this Kd is one nanomolar. 
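As a rough numerical check of why a direct titration like this only sets an upper limit on the Kd, one can simulate the exact 1:1 binding curve at a fixed protein concentration. This is only an illustrative sketch; the protein concentration and Kd values below are arbitrary choices, not numbers from the recitation.

```python
# Simulate a direct titration of a 1:1 metal-protein complex using the
# exact solution of the binding quadratic, to see why a tight binder
# measured at high protein concentration looks purely stoichiometric.
import numpy as np

def fraction_bound(M_tot, L_tot, Kd):
    """Exact 1:1 result: [ML]/L_tot for given total metal, total ligand, and Kd."""
    b = L_tot + M_tot + Kd
    ML = (b - np.sqrt(b**2 - 4.0 * L_tot * M_tot)) / 2.0
    return ML / L_tot

L_tot = 10e-6                        # 10 uM protein in the cuvette (assumed)
equivalents = np.linspace(0, 2, 41)  # metal added, in equivalents of protein
for Kd in (1e-9, 1e-7, 1e-5):        # 1 nM, 100 nM, 10 uM
    theta = fraction_bound(equivalents * L_tot, L_tot, Kd)
    print(f"Kd = {Kd:.0e} M :", np.round(theta[::10], 2))
```

At 10 micromolar protein, the 1 nanomolar and 100 nanomolar curves are essentially indistinguishable-- linear up to one equivalent and then flat-- which is exactly why only an upper limit can be reported from such data.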
Thinking about this and what we know from steady state discussions earlier in this course-- and again, this isn't the same thing, but some of the same ideas apply. What concentration regime would you want to set up the experiment? So say you think your protein has a Kd up for a metal of one nanomolar. What concentration of protein do you want to use in the titration? AUDIENCE: Maybe high picomolar? ELIZABETH NOLAN: High picomolar. So why would you want high picomolar, and what does high picomolar mean? AUDIENCE: Because I think otherwise, you wouldn't be able to resolve the dissociation? Like, it'll basically-- if you're above that, it's just going to continue to look linear. There's going to be no curvature for you to observe what the dissociation would be. ELIZABETH NOLAN: So typically, you want to be around your Kd. So if the Kd is one nanomolar, you want to be a bit below or a bit above. And if you're really being rigorous, try a few different concentrations. Because at the end of the day, this response should be independent of that within a range of error. So what's the issue? Let's say your Kd is one nanomolar, or for that matter, one picomolar. And you'd like to set up an experiment. And you need an observable for this response. So this gets back to some of what JoAnne talked about in recitations two and three, and needing a detectable signal in the pre-steady state kinetic experiments, that you have to work with a high concentration of protein to see something. And so that becomes the same issue here. If your system would allow you to work at one nanomolar or one picomolar to have an observable, you would be in a range where you can see something other than this. But often, whatever we're observing, we need to work at a high-protein concentration, because the extinction coefficient is weak, or we just need a high concentration for whatever that type of signal is, which is what can put us in this regime here. So that's something to think about. So what can be done in order to get more information than what's shown here for a high-affinity site? So let's say you're not able to work at a concentration that's appropriate, based on the Kd of this high-affinity site, that you need to work at a higher concentration. What can be done? So effectively, what is often done is what I'll call an indirect approach. Another way this is described is to set up a competition titration, where you take your ligand or protein of interest, you take a competitor, and you titrate in the metal. OK, so what is this competitor? Typically, it's a small molecule with a known affinity, so a known Kd, for the metal of interest under the experimental conditions you're using. And so there's different flavors of using a competitor. And I'll just highlight a few in passing. So one way to use the competitor is to use some small molecule ligand that allows you to buffer the free metal concentration. So in these cases, it's some sort of system that will not affect the readout of, say, metal binding to your protein. So you can imagine, for instance, using EDTA, EGTA, NTA, like what's on the nickel NTA columns for affinity chromatography. And there are published affinity constants for these small molecules for different metals. And so you can set up a metal ion buffering system. And so the idea is that in addition to your normal buffer-- and we'll talk more about buffers in a minute-- you have a very high total concentration of metal and a high total concentration of a chelator. 
And you can make these buffers such that the buffer will buffer the free metal concentration. So you can buffer free metal, say, in the nanomolar or subnanomolar regime. So what does this mean? Your total metal concentration and total concentration of this competitor is much higher than the concentration of your protein. And so when you introduce-- you set up your titration, you have the protein in this buffer system, the protein will bind some of the metal. And then the buffer will adjust such that the free metal ion concentration you've set it at remains the same. So that gives you a way to get free metal concentrations. Another approach that's often used-- it's also controlling your overall metal concentration, but in a bit of a different way-- is to take a competitor that is also some sort of colorimetric or fluorescent indicator of the metal. And so in effect, what you do is you use the competitor as a readout for competition in the assay. And so what you can do is ask, OK, under these conditions, when the metal bind to the protein, there is no change in absorbance or fluorescence at some wavelengths. But there will be a change from the competitor at that wavelength. So if you put these together, you can ask, OK, as the metal is titrated in, where does the metal go? Do we see a response from the competitor or not? If not, it tells you that the protein won. If yes, and it's the same as the competitor in the absence of the protein, the competitor won. Right? So those are two cases of out competition where either the protein out-competes this competitor or the competitor out-competes the protein. That's not very helpful for actually determining an apparent dissociation constant value. It will give you information about limits here. But what you really want to have happen, and as this name suggests, is that you want the protein and this competitor to compete. So effectively, you see the response of the competitor attenuated, compared to the response in the absence of protein. So some metals here, some metals there. And then what you can do is a mathematical analysis to fit that data, based on knowing the affinity of the competitor for the metal, and knowing the concentrations of the competitor in the ligand here. So this is something that Wedd talks about quite a bit in the review that was assigned, in terms of setting up these competition titrations here. And so when done well, that can really be quite powerful here for that. And there's many other themes and variations about how to do that. But just to keep in mind, if your binding event is too tight to measure by a direct titration, you want to think about a way to do a competition titration here. So in the packet, I put in an excerpt from a paper that was published in 2003 showing some titration curves like what I sketched here, where there's some response to indicate how much is bound versus some concentration of metal. And one of the reasons I really like this plot is that it gives a qualitative sense for Kd values over a range of magnitudes and what that curve would look like here. And just having a sense of this qualitatively gives you a lot of leverage in terms of just looking at data and analyzing it, whether it's your own or someone else's in terms of, is this a high-affinity site? Is this a low-affinity site? Likewise in the Giedroc review with a different type of method called EITC here. 
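To make the metal-buffering idea concrete, here is a minimal back-of-the-envelope sketch. The chelator Kd and the concentrations are assumed values chosen for illustration, and the small amount of metal taken up by the protein is ignored.

```python
# Estimate the free-metal concentration set by an excess 1:1 chelator
# (an EGTA/NTA-type metal buffer), assuming tight binding so that
# essentially all of the metal is chelator-bound.
Kd_chelator = 1e-9     # conditional Kd of the chelator for the metal, in M (assumed)
C_total = 1.0e-3       # total chelator, M
M_total = 0.5e-3       # total metal, M (kept below the total chelator)

M_complex = M_total                 # ~all metal bound by the chelator
C_free = C_total - M_total          # chelator left over
M_free = Kd_chelator * M_complex / C_free
print(f"buffered free metal ~ {M_free:.1e} M")   # ~1e-9 M for these numbers
```

Changing the ratio of bound to free chelator tunes the buffered free-metal concentration around the chelator's Kd, which is the point of these buffering systems.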
So what we're going to do is talk a little bit about some general concepts and then some general considerations for, say, setting up these types of experiments. And so some of this relates to concepts in class. So JoAnne talked about the Irving-Williams series. So based on that series, if you're, say, looking at some protein, and you're interested, say, in the Kd for binding of manganese versus zinc, what would you expect qualitatively? So imagine each of these metals is bound at the same site. And today in class, we talked about the different types of ligands that proteins use-- so histidines or carboxylates, or maybe a cysteine. We'll leave tyrosine out for the moment. But what would we expect? Which metal will bind with higher affinity based on Irving-Williams? AUDIENCE: The zinc. ELIZABETH NOLAN: The zinc, right? So as we march along the first row from manganese, we see that the affinity increases, and copper can bind with higher affinity than zinc. So there's a swap at the end. So that's what we would expect. So what does that mean, just in terms of reading something in the literature? Right. If someone's reporting binding affinities for a protein, and you see that the values are of a similar order of magnitude for manganese and zinc, you might want to scratch your head a little bit and ask what's going on. Right? So is it a case where both metals are bound tightly and the titration didn't resolve a difference because you're just at an upper limit? Is there something unusual about this site that is causing the selectivity to be contrary to what we expect based on the Irving-Williams series there? So the point is you can use those generalities as a guide. And there's always exceptions to the rule. I missed class on Wednesday. Did you go over hard-soft acid base? So have any of you heard about this hard-soft acid base concept? No. No. Yes. So, like, what's the hard-soft acid base theory? AUDIENCE: So smaller or electronegative things will associate-- those are, like, hard things-- [INAUDIBLE] larger and fluffier atoms than-- ELIZABETH NOLAN: How is an atom fluffy? No. Right. So think about how polarizable it is. But that's along the right track. So basically, we can classify different metals and different ligands as being relatively hard or relatively soft. And then there can be the gray area in the middle, which is called borderline. So if we think about, say, a metal ion that's a hard Lewis acid-- that's something like calcium, for instance, or iron(III)-- these types of metals like oxygen donors, which are hard bases. Often it's a metal in a high oxidation state, if that's an option. So iron(III) versus iron(II)-- iron(III) is more hard. They're not very polarizable. And so, often, hard metals are bound by hard bases. So an example, like JoAnne brought up with enterobactin today in class-- and if you remember the structure from when we talked about siderophore biosynthesis, it uses six oxygen donors to bind iron(III). So from hard-soft acid base theory, that's a sensible ligand set. On the other extreme, what's soft? So a soft acid is some metal with a large ionic radius. So if we think about to the right in the periodic table-- mercury, cadmium, copper(I). And they like soft ligands, like cysteine. So sulfur-- that's quite polarizable. So soft, typically a lower oxidation state, more to the right in the periodic table. And then you get metals that are in the middle, like zinc, iron(II), cobalt(II), there.
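For reference, the trend being invoked here is the textbook statement of the Irving-Williams series for divalent first-row ions-- a general ordering of complex stability that holds largely independent of the ligand:

```latex
\mathrm{Mn^{2+} < Fe^{2+} < Co^{2+} < Ni^{2+} < Cu^{2+} > Zn^{2+}}
```

This is why zinc is expected to out-bind manganese at the same site, with copper tighter still.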
So this gives you some indication of a guide, and why I bring this up is we've talked about the Irving Williams series, but depending on the ligand set, that series might not make sense. Right? So something like an EF-hand domain that binds calcium ions, it uses many oxygen donors. It's going to prefer calcium, say, over copper, even though calcium is in another place in the periodic table and also not defined by that-- formally defined by the Irving Williams series there. OK. So that's something you can keep in mind when analyzing the data just qualitatively, right? And so in the Giedroc review, if you look at those data, it's the case in many of the systems where what's currently reported or reported at that time are Kd values that are similar for certain metals that are separated along the first row. So then the question is, what's really going on? And some of it is an issue related to methods and experimental design, in terms of finding conditions that allow high-affinity binding to be studied here. So let's consider just some practical considerations in terms of experiments as we go forward. So in the beginning of this Wedd paper, he talks about a bunch of pitfalls that can come up in terms of experimental design. Do any of you recall what some of these problems are? You know, when he brings up on page two, "reliable evaluation and comparison of metal binding affinities is important for quantitative understanding of medal selection and speciation. " So that's central to everything that JoAnne has been talking about in terms of homeostasis the past few days in lecture. And then what does he say? "However, estimation of these metal binding constants is problematic at the moment, as disparate values have been reported in the literature." And then he highlights a few examples that are illustrative of this wider problem here. And so what's striking about some of these issues he shows in that page two of this review? Did these things concern you when reading the review? So what do these highlight in general? Yeah. AUDIENCE: Wait. What was the exact question? ELIZABETH NOLAN: So in terms of in Wedd's paper, he begins this paper by citing a number of examples of problems in the literature. And I guess I'm asking, were these problems striking to you? And if so, why? And really, what is generally the issue here? AUDIENCE: I feel like there's such a wide range of magnitude of the Kds that kind of points to an inconsistency in experimental set-up. ELIZABETH NOLAN: Yeah. AUDIENCE: To where maybe somebody could give something else-- ELIZABETH NOLAN: Right. So these values are hugely different that he's citing here. Right? I mean, 10 orders of magnitude different-- you know, reported Kds that vary by six orders of magnitude. These are huge differences. This isn't one nanomolar versus 10 nanomolar. This is hugely different, and depending on what number you come up with, there's huge implications for what that means in a biological system. So what are some of the reasons for why there may be so many discrepancies? And in each case, we don't really know, but what we're going to do now is just think about some of the aspects of experimental setup that might be affecting determination of one of these values and how to think about these things. So in terms of pitfalls, I'll begin with one, which is just fitting the data in an inappropriate manner. So there are so many programs out there that will fit data. But the end of the day, you need to ask, what does this fit mean? 
Is it meaningful for the system that's being studied? So did it take into account all parameters? Is it the correct stoichiometry? Do the numbers that come out make sense? What other experiments can be done to try to test that? So that's a general issue. And then, as I've mentioned here in passing, often direct titrations are fit inappropriately, because the fit is concluded to mean some absolute Kd when it doesn't. It just gives you a limit here. So let's just think about taking a protein and titrating it with a metal. That experiment will happen in a buffer. So do we need to think about the buffer? AUDIENCE: Yeah, but then it could be, like, a cuvette here for metal that you're interested in. ELIZABETH NOLAN: So that's the first question. Does the buffer influence metal speciation in the cuvette by having some affinity for the metal of interest? So from that perspective, what buffers could be classified as problematic? So you need to think about the chemical composition, the chemical structure of the buffer. AUDIENCE: EDTA or something? ELIZABETH NOLAN: OK. So EDTA could be in your buffer for some reason, but that's not your buffer, right? So the buffer is what's going to control the pH there. So Tris is an example. What are other examples of common buffers? AUDIENCE: Bis-Tris? ELIZABETH NOLAN: Yeah, bis-Tris. Others? AUDIENCE: PBS? ELIZABETH NOLAN: Yeah, PBS. So a phosphate buffer. That's often used in tissue culture experiments and other experiments. So let's start with the Tris buffer. Is it a good idea to do a metal binding titration where you want to get a Kd in Tris buffer? Shaking head no. So why? AUDIENCE: Because if you're going for metal being bound with protein, if the Tris is poured into the middle, then it might alter your readout. ELIZABETH NOLAN: Yeah. OK. So let's break that down. So one, Tris-- that has an affinity for certain metals. You have an amine-based buffer. So that's one issue. And then the other thing you need to think about is, what are the relative concentrations of the buffer and your protein of interest? So what's a typical Tris buffer concentration used, say, in protein purification or some type of experiment? AUDIENCE: Like normally? ELIZABETH NOLAN: Yeah. Typically higher than one millimolar, too. Right? So maybe 20 to 75 millimolar, maybe even higher than that. So you have this substantial concentration of your Tris buffer compared to your protein concentration, which, if you have a micromolar Kd, you'd like to have in the micromolar range. So that will influence the metal binding equilibria in the experiment. So then the question is, if you're doing that titration under that type of condition, are you taking that Tris-metal interaction into account in the data analysis? Are there other buffers that are arguably more appropriate? And the answer is yes. So there's buffers like HEPES. These are buffers that are called Good buffers-- zwitterionic buffers that in general have lower metal affinities, and are often used for titrations. What about, say, metal contamination from the buffer or from the water? So what's important to think about there? Is that an issue? AUDIENCE: If you're using hard water or something, there's calcium that would bind to your protein. ELIZABETH NOLAN: Yeah, right. So you need to think about the water. You know, where did this water come from? Where did your Tris come from, or whatever other buffer?
Because again, if you have 100 millimolar buffer, it's not only the molecules of, say, HEPES, but it's whatever other contaminants are in there. And there's a lot more of that than your protein, which gets into this issue of the Irving-Williams series and zinc. So zinc contamination is everywhere. Zinc is everywhere. So are you getting a zinc contamination, say? And your metal binding protein-- some portion of it is complexing zinc, and you can't see that, because zinc is spectroscopically silent. That's going to be a problem. So that's something to think about and keep in mind. So for rigorous work, high-purity buffers can be used. Or there are tricks out there to demetalate buffers. Those tricks often have a few caveats as well for that. But I think contamination is something to keep in mind, and can be a bit of a nuisance. But you just need to know how to look for it and deal with it. And also, these contaminations-- it becomes an issue, too, in terms of what is your protein concentration. So if you have a one micromolar metal contamination, and you're working with one millimolar protein, it's probably OK. But if you're working with one or 10 micromolar protein, then there's a problem, because you're going to have more of that complexed there. So why are we using the buffer? We're using the buffer to control pH. So how do we want to think about pH from the standpoint of these titrations? AUDIENCE: You don't want to make something that you're trying to coordinate the metal with, so like the proteins [INAUDIBLE]. ELIZABETH NOLAN: Or even histidine, that has a pKa that isn't in the regime. And cysteine, right? That has a pKa. So often, we think about the pH of the buffer used in protein purification that will make the protein stay in a happy state. But then the question is, is that pH appropriate for the metal binding study? What is the effect of that pH on the ligands in the primary coordination sphere? So are they protonated or deprotonated, or a mixture of the two? And then how does that affect the affinity itself? So these Kds will have a pH dependence based on the pKas of the side chains here. And I mean, also, are there pH requirements for the metal? And is your experimental setup such that the pH remains constant throughout the titration? So an example-- iron(III). So JoAnne talked about iron(III) in class today, and this ridiculously low Ksp at pH 7 of 10 to the minus 18. You can't just have your iron(III) stock solution at pH 7 and have much of anything soluble. So what do people do about that? Often, the stock solution is stored in acid, because it's soluble there. Can you titrate that acidic solution directly into your protein? These are just things to think about here. What else can be in the buffer? So think about this, anyone who has purified a protein. So you brought up EDTA, right? And that certainly would be something that would need to be taken into account. Hopefully you would only have it present if you wanted to do something like a competition. Otherwise, that's going to be a major issue in terms of sorting things out. But what else might be in the buffer? So what if your protein, say, is a cytoplasmic protein and it has a lot of cysteines? Are those cysteines likely to be reduced or oxidized in the native form if it's a cytoplasmic protein? AUDIENCE: Reduced. ELIZABETH NOLAN: Yeah. Reduced, right? Because that's a reducing environment. And then you go into the periplasm or the ER, which is where you find proteins that have more disulfide bonds.
So let's say your protein likes to have a bunch of reduced cysteines in it. Chances are you have a reducing agent in the buffer you use for protein purification. And maybe you need to keep that reducing agent around during an experiment, or maybe you can work in an anaerobic chamber and get rid of it. But let's just say the reducing agent's present. Is that something we need to think about from the standpoint of a metal-protein interaction? So what are examples of these reducing agents? AUDIENCE: TCEP. ELIZABETH NOLAN: TCEP's one, yeah. And we'll come back to that one in a minute. What are some others? AUDIENCE: [INAUDIBLE] ELIZABETH NOLAN: Yep. And what else? Another thiol-based reducing agent commonly used in protein purification. AUDIENCE: DDT? ELIZABETH NOLAN: DTT. Yeah. AUDIENCE: Oh, DTT. ELIZABETH NOLAN: DTT, right. So let's just consider, say, DTT and BME together. Is there something we need to consider there? Yes, because depending on your metal, these reducing agents will have some affinity. And often, they're in very large excess over the concentration of protein. So it's a similar issue to the Tris buffer issue, in terms of how are these reducing agents affecting metal speciation and metal binding equilibria in the experiment. So TCEP-- this is tris(2-carboxyethyl)phosphine. So it's not as commonly used in protein purification, but it is a reducing agent that you commonly see used in certain metal binding titrations. And that's because it's thought to cause less interference. So the affinity-- that equilibrium constant-- is much weaker. So what is one of the pitfalls of using TCEP that people often run into? Do you know? So if you just have TCEP in aqueous solution-- AUDIENCE: It's going to start-- ELIZABETH NOLAN: What? AUDIENCE: Reducing, just if you leave it there. ELIZABETH NOLAN: Well, it needs something to reduce. So if you just have TCEP and water, is that neutral? Basic? Acidic? So it's acidic. And the manufacturer instructions say this pretty explicitly. But oftentimes they go unread, right? So if you end up working with quite a bit of TCEP in your experimental conditions, the first thing you need to ask is, is the buffer adequate to buffer the pH when TCEP's added? You don't want the TCEP acidifying your buffer, and then you're not working at the pH you think you're working at. So what does that mean? You may want to pH-adjust your TCEP solution before starting the experiment there. That's just something to keep in mind. I've seen that happen many, many times, in terms of the TCEP there. Temperature control-- the equilibrium constant is temperature dependent. So what is the temperature control throughout one experiment? And then also, if you're repeating this experiment over multiple days, because you want to get error analysis and show that it's reproducible, is the temperature consistent for that? So those are some key things. And then what do we need to think about in terms of using a competitor when setting up the experiment? So one, we need to know the Kd value of the competitor for the metal of interest. And hopefully, we know something about this system so we can make an appropriate choice, because as I said before, we want to see competition there. What could go wrong? And again, this isn't meant to be all gloom and doom. This is just, you know, you need to be aware of certain things that can happen in your experiments and know to look out for them, so you can fix things as necessary. So here, we have the protein, we have the competitor, we have the metal.
And as I've described it, we want the protein and the competitor to operate effectively, independent of one another. So they can both bind the metal, and somehow this metal is going to be distributed between the two based on the relative concentrations and the relative Kds. So what could muck that up? That's the ideal scenario. AUDIENCE: Could they both bind the metal? ELIZABETH NOLAN: Well, we definitely know they both can, right? AUDIENCE: Simultaneously. ELIZABETH NOLAN: Simultaneously. So what would that be called? So that this can be a major headache. What happens is that you get what's called a ternary complex. So you have the ligand, the competitor, and the metal as one. So imagine that your protein has a metal site that's not coordinatively saturated. And so as a result, maybe you have the metal in this site but then the competitor also binds. That's not good from the standpoint of setting up this competition, right? Because how do you parameterize for that? So that can be a big issue, and something that you need to watch out for when designing the experiments. Could something happen between the competitor and the protein itself in the absence of metal? AUDIENCE: Perhaps they could interact, and then, in their interactions, block the metal. ELIZABETH NOLAN: Yeah. It could block or perturb. So what might happen? I mean, we can just imagine a scenario where this protein has some hydrophobic patch. And maybe this competitor has a fluorophore for that's relatively hydrophobic. Or maybe part of the ligand is hydrophobic. And so you end up getting the competitor sticking to the protein. That doesn't necessarily mean the competitor won't bind the metal, but it will perturb how that competitor behaves. That could perturb the optical readout. It could perturb the metal affinity of the competitor. So that's something to also watch out for. So we talked about the buffer and contaminations in the buffer. What about the competitor here? So typically, these small molecules are coming from some commercial source. Right? And so you have similar issues, even though you're using a much smaller concentration. And so don't always assume what you're getting is as pure as they tell you. And that could be organic impurity, or it could be a metal contamination, because these competitors are ligands. And they could have picked up some metal along the way. So what can be problematic from the standpoint of, say, organic impurity here? One common example is that if you're using something that's fluorescent or brightly colored, to have an optical readout. Maybe there's an impurity that wasn't removed in the synthesis and purification that's also very bright. So you have something that's compromising the optical signal of the probe. And then there's also the possibility, since these are ligands, that there's a contaminant that can also bind a metal. So if there was some byproduct that wasn't fully removed during purification. If that's the case, it will influence speciation as well there. So what does one do in terms of gold standard and testing? You need to know what the primary literature is about this competitor molecule, and then effectively test your sample and make sure it has the expected optical properties and the expected behavior when binding the metal of interest. And if that all looks good, then can move forward. Also, just typical tests of purity, LCMS, HPLC. Even with many of these, if they're highly colored, a simple TLC will give you a lot of information there. 
So I'll close with that, and just would reiterate broadly that a lot of the topics discussed in the Wedd review and in the packet, although from the perspective of metals and proteins, it's more general to any type of binding problem. And if you need more resources in terms of binding problems related to metals, I highly recommend reviews by Wilcox and Giedroc, in addition to this review by Wedd there. So they talk a lot about aspects of experimental design and certain methodologies there.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: So that's where we're going in terms of Module Six. We introduced this the last time. And this is the required reading that's also been posted-- and we started this last time. This introductory lecture is talking about metals, in general-- the chemical properties of metals, and why we have these kinds of metals in our bodies. Why we're using these kinds of metals. And focusing then on metal homeostasis, in general. And then we will move over into complete focus on iron for the next three-- so, subsequent three lectures. OK. So here we are-- here we are with the periodic table. And we're going to be focusing on transition-- the transition metals, which are most of the places where you see the chemistry that you're familiar with. And this just sort of tells you the relative abundance of the metals in our bodies. OK? And we talked about last time-- sort of an introduction to the different kinds of chemistries that we can have. And we talked about iron transport, reversible oxygen binding, and then we were at the place for electron transfer with nitrogen fixation. Again, this is a cursory overview with signaling, where I introduced the fact that you have calcium. And also you can have zinc and copper signaling, which people didn't realize until recently. Zinc is worked on extensively by the Lippard Lab and copper is worked on by other people-- the Chang Lab at Berkeley We will see we're going to have regulation at the transcriptional and the translational level-- that's true for all metals-- and that many kinds of reactions can happen. I'm only going to focus on the reactions that we're going-- that are related to iron in the course of this module. We had gone through reversible oxygen binding, and at the end of the last lecture we were focused on the amazing diversity of metallocofactors. These are some of my favorite metallocofactors. Most of you, I think, are not exposed to this. You sort of know there's an interesting cofactor, but I haven't really thought about how these cofactors work. And if you look at this one-- at the end, we were talking about-- you have-- this is the active cofactor formed-- found in the enzyme nitrogenase, which does an eight-electron reduction of nitrogen to ammonia plus hydrogen. And there are many enzymatic systems that use multi electrons, OK? And that's an active area from the chemical point of view, as well. How do you control multi electron oxidation and reduction, and what is the multi electrons versus single electrons get you? That's a hot area of chemistry now in the bio-inorganic world. And I think most intriguing is this little carbon in the middle. That's a carbon minus 4. How does it get there? Where does it come from? That should intrigue you and-- actually, where is even the reactive species? Where does nitrogen bind to do the reduction? Hydrogenase is another active area of research, now-- in energy. People thinking about how do you do-- how do you use catalysts to do oxygen evolution, here? Or hydrogen reduction or oxidation? And people who have taken inspiration from enzymes called hydrogenases, and they come in many flavors. You have nickel iron, iron iron, iron only. And we know quite a bit about the actual chemistry. 
The rate constants for turnover are amazingly fast, and so people are trying to do that in little devices nowadays, using this as an inspiration to generate these kinds of catalysts. And what do you see unusual about this iron cluster? I'm digressing, but I think this is a good thing for you to know. I wouldn't expect you to remember the details, but what's unusual about this cluster? Anybody see anything from a chemical perspective that's unusual? AUDIENCE: You mean, like all of the ligands on-- JOANNE STUBBE: Yeah. Look at the ligands. What's unusual ? AUDIENCE: Is there a carbon with five bonds? JOANNE STUBBE: There a what? AUDIENCE: Carbon with five bonds? JOANNE STUBBE: I don't see any carbons with five bonds. OK, yeah. So, OK. That's not what I want you-- so again, it depends on what the bonding is. But what is unusual about the carbon with five bonds? AUDIENCE: It's attached to both. JOANNE STUBBE: Yeah. So that's not what I mean, though. What is unusual about the ligand? AUDIENCE: It's carbon monoxide. JOANNE STUBBE: Yeah, it's carbon monoxide. What do you know about carbon monoxide? It kills you, right? OK. So how do how the heck do we have organisms that have carbon monoxide ligands, right? And we all have carbon monoxide detectors in our house because it binds to our heme proteins and kills us. What's the other thing that's unusual about the ligand environment, here? What's the other ligand that's unusual? AUDIENCE: Cyanide. JOANNE STUBBE: Cyanide. That also kills you. So automatically as chemists, you ought to be intrigued by where the heck did these things come from, and how do you prevent it from killing the organism? At the same time, is this able to use these ligands to actually do chemistry? And if you think about transition metal chemistry-- we won't talk about this, but we will see-- what are the oxidation states of iron that you're most familiar with? AUDIENCE: Two, three-- JOANNE STUBBE: Two and three. OK? And then what we'll see is four happens transiently, but we also go to iron zero's. So we have a wide range of redox spanning chemistry by altering the ligands. And that's going to be one of the take home messages from the four lectures. OK? And this is, I think, totally amazing. We now have an atomic resolution structure without the metals being destroyed, which normally happens when you put a metal into an X-ray beam. The electrons reduce the metal and you don't end up with the cluster you think you're going to be getting. And what you hear see here is you have four manganeses and a calcium. And again, this is a multi electron process where, in order to go from water to oxygen uphill, you need to have light. And a lot of people are focused on that now, in terms of energy production and chemical catalysts that can mimic these kinds of reactions. But what we're going to be focusing on now in the case of the iron is the iron cofactors. And most of you have seen these iron cofactors before. This is just a few of the iron sulfur cofactors. Where have you seen them before? What part of biochemistry in your introductory courses have you seen these before? And then I'll tell you where we're going to see it again. Nobody has ever seen them before? No? AUDIENCE: A lot of single-- a lot of single electron transfer-- JOANNE STUBBE: Yeah. So it's single electron transfer. Where? In respiration. Hopefully you all did that as a basic introductory part. You have iron-sulfur clusters all over the place, and these are the clusters that were found in the prebiotic world. 
So they are incredibly interesting. What we're going to be focused on are four iron, four sulfur clusters. That's a key-- that's a key component that allows us to sense iron in humans. And so we'll come back to the four iron, four sulfur cluster later on. And all of these-- I just leave you to think about, where do these things come from? You just think you throw in iron and molybdenum and you have a cofactor that looks like that? The answer is no. The other thing that I just wanted to introduce you to because this is a very active area of research in our department-- the Drennan lab-- and a lot of my former students have been working on enzymes called radical SAM enzymes. What does SAM normally do? S-adenosylmethionine? What does that normally do in biology? AUDIENCE: Does it methylate something? JOANNE STUBBE: Yeah, it methylates. So, you know-- here you have something that's activated for nucleophilic attack-- that's what it normally does. We now know there are 130,000 reactions that involve SAM that don't do a methylation. You do reductive cleavage of a carbon-sulfur bond to generate a radical. And you do really complex free radical chemistry. For example, 50% of all the methane gas in the environment comes from a radical SAM enzyme that cleaves the phosphorus carbon bond. If you look at the antibiotic resistance problem we have now, there's methylation in the active site-- the A site of the ribosome that you guys talk about-- that prevents five different antibiotics from binding. And it doesn't involve methylation in the standard form, it involves complex free radical chemistry. So I'm not going to say any more than that, but radical chemistry is taking off. I mean, there's all kinds of unusual chemistry that chemists didn't think was possible before, and we're-- every time we study another system, we learn something new and exciting from a chemical perspective. OK. So now what I really want to do is sort of get more into the nitty gritty. And so one of those-- I showed you the periodic table. We have manganese, we have iron, we have copper. Why were those chosen? And in part, those are chosen because it reflects-- the metals reflect earth's history. And one of the-- so, one of the things in the geochemical-- what we know about the geochemical production of the earth over the eons since its first-- since the big bang, anyhow. Here's the earth's core. And this is taken from the article by Fry and Reed. And I think it sort of-- this and then the next slide I'm going to be showing you-- sort of, I think, places in perspective why we're using iron and copper and zinc in almost all the enzymes we see inside of ourselves. So the earth's core is here. And then we have-- so we have the inner core, and we have the outer core. These two cores are 80% iron. OK? We then have the mantle. And then we have the crust. And the crust has-- the fourth most abundant metal is iron. But you also have other things in the crust-- aluminum, calcium, silicon. Why are we using carbon and not silicon, if this is the most-- most abundant-- one of the most abundant elements in the earth's crust? And this article sort of goes in and discusses those kinds of issues. Making you think about what you learned in freshman chemistry about the periodic table. Iron. Iron is everywhere. The most abundant element in terms of mass is iron. OK? And so, iron, you might expect from this description, to be front and center. And in fact, it is front and center.
And so the other thing I think you can think about is solubilities and evolution of-- from the beginning, where we were in a completely anaerobic world. So here's the gaseous environment with oxygen. In the very beginning up to 2.4 billion years ago, it was completely anaerobic. OK? And so is that important? So if we go to 2.4 billion, it's anaerobic. So now, if you look at-- and this is-- where these data come from and where this model comes from, it-- obviously everything is a model. You can go back and read this in detail if you become interested-- maybe some of you might have had a geology course where you've discussed this before. But if you look over here, where do you see iron? OK, so iron is going to be the focus. Where is it compared to cobalt, nickel, manganese, all these other-- zinc, copper-- all these other transition metals? It's way up here. So it's most abundant under anaerobic conditions. What do you think the oxidation state is? So you just told me you had iron that you've commonly encounter is two and three. And that's correct. Everything-- you're going to encounter this over and over again. Hopefully you have encountered this before. What happens in an anaerobic world? What do you think the oxidation state for iron is? AUDIENCE: Maybe two? JOANNE STUBBE: Yeah. It's two. And I think this is incredibly important from a chemical perspective, because many enzymes we're going to see-- metals-- can catalyze reactions by polarizing carbonyls, for example. In an anaerobic world, you likely used iron two all the time. But what's going to happen when we get over here in an aerobic world? And so that's the key question. And do we see iron two used in that capacity? The answer is no, because in the presence of oxygen some other reaction out competes it. So that's why I'm introducing you to this. It's sort of-- I don't expect you to remember the details, but I think it's an interesting exercise to think about what happened when we transitioned from an anaerobic world-- and this is all in the ocean, and versus the atmosphere-- into an oxygen atmosphere. And this is 0.8 billion years later. And if you look at this, what happens-- and this is a period where they believe that you had a lot of H2S around. And remember, we just saw iron clusters with all these sulfides on them. Iron sulfur was in the prebiotic world. They can self assemble. They do all this kind of chemistry that-- until they knew about this radical SAM super family-- they thought was one electron, oxidation, and reduction. And nothing could be farther from the truth. Iron sulfur clusters play a key role, for example, in DNA replication. OK, so I think-- thinking about this and where these iron sulfur clusters came from, you provided some insight perhaps from looking at the geological record of what people think was occurring. So we went through a period where you had a lot of H2S. Concentrations of species have changed. And then we move into the aerobic world, and what happens here? So what happens to the iron? It's dramatically decreased. So when we go from the anaerobic to the aerobic, why does the iron-- what happens to the oxidation state of the iron? It gets oxidized to iron three. So we're changing in the oxidation state, and so we're going to have to deal with it. So I'm going to show you, this presents a major issue we face now, both as humans and as bacteria. If you look at this, what happens to copper and zinc-- if you believe this model? That the copper and zinc concentrations increase. 
And in fact, that becomes really important. Because if you look at the biological record, and you look at archae and bacteria that are much much, much older, what you see is-- you don't see that many copper catalyzed reactions in zinc, which has a really important role in humans, with zinc fingers. Doesn't play a role like that in bacterial systems. So I think this represents an interesting way to think about metal speciation, oxidation states, what ligands are going to be involved in what's happening-- and also what's happening in bacterial systems. The key thing that I want you to remember about this is that in the aerobic world-- so we now go from iron two to iron three. And what we'll see is the solubility properties of iron three are dramatically different, and that's something we're going to have to deal with. How do we get-- we talked about this last time-- how do we get iron out of a rock? OK, so that's an issue if you're a bacteria-- you have to figure that out. And bacteria have done some pretty cool things to figure that out. And so-- OK, so this, I think, also has-- we're going to be focusing on iron here-- important implications in terms of the chemistry. So in terms of being in an anaerobic world, we can use iron as a Lewis acid. OK? And so it can polarize a carbonyl. We'll come back to this in a minute. Nowadays, we almost never use iron two as a Lewis acid in biological systems. And why is that true? Because when we transitioned to the aerobic world, now we have this problem of-- that the iron three is what? It's insoluble. So that's one problem. And the second problem is that since it's insoluble, we can't use it. How do you get-- how do you get it to actually use it for chemistry? We're going to-- we're going to talk about that. How do you get it to look at chemistry. And then we have this issue of oxidation with oxygen, and this is going to lead us into module seven. So while in the very beginning, we used iron to do a lot of chemistry without oxygen around. We then moved into an oxygen-- oxyphilic world, and we have this issue of during this oxidation using oxygen as the oxidant-- what happens? You produce reactive oxygen species. OK. So, and then you have also the problem of insolubility. So you generated-- by making this transition into an oxyphilic world you're encountering two major problems that we're focused on. How do you deal with the insolubility problem and how do you deal with reactive oxygen species? And that's going to be-- following this module, we're going to talk about what happens with reactive oxygen species as a consequence of moving from an anaerobic to an aerobic world. I don't want to spend a lot of time on this, but I want to make sure that you understand there are some kinds of reactions that are really distinct from the reactions you meet in the organic world. And a lot of you-- we looked at the vitamin bottle, we learn a lot about flavins, we learn about pyridoxine, we learn about vitamin C-- all the vitamins we learn about. But we sort of ignore the metals on our bottle that's required for life. And so I don't want to spend a lot of time, but there are-- what are the general reactions? So I just want to say a little bit about general reactions. OK. And one of them is this idea of Lewis acid-- or Bronsted acid. And so what you can have is a carbonyl, and you can have a metal that can activate the carbonyl you for nucleophilic attack. OK? Where have we seen this before? 
We've seen this before in-- if you go back and you look in the glycolysis pathway, lots of times you use zinc to activate the carbonyl. Sometimes you use Schiff bases, maybe. You remember that? In aldehyde dehydrogenase or alcohol dehydrogenase-- which convert an aldehyde to an acid or reduce an aldehyde to an alcohol-- they use zinc. OK? In the completely anaerobic world, people thought-- most of the time they probably used iron. That was one of the most-- that was much, much more prevalent than zinc. But then things-- so if you go way back and you find bacteria that lived in that period, they still might be using iron in catalysis. But now we almost never use iron two in catalysis, because of the issue of the redox chemistry. So now, you know, you polarize this for a nucleophilic attack. You've seen this over and over again with the Claisen reaction, the aldol reactions, et cetera. I'm not going to go through the details. Another place you see it-- and where have we seen this one? Again we have a metal-- and I'll just leave it in the plus two oxidation state. But what happens to the pKa of the water bound to a metal? And what happens is the pKa is dramatically reduced. You have two positive charges here, depending on the interaction-- and that interaction's unfavorable. So the pKa becomes reduced on bonding to a metal. Where have we seen that before? We saw that in the cholesterol module. We didn't talk about the chemistry-- again, I come from the chemistry side of it so I find the chemistry the most interesting-- but it fits into the biology. Where have we seen this before? Anybody remember-- in cholesterol? Homeostasis? What happens in the Golgi when you want to go from the Golgi to the nucleus? AUDIENCE: A zinc-- JOANNE STUBBE: Yeah, we had a zinc protease. So that would be an example. An example of this would be in the cholesterol section. And I'm not going to talk about this in detail. I used to talk about this in a lot more detail, but you can see with different metals-- this is just an example of the first case I'm giving you-- the pKa's of the metal-bound waters are reduced. Again, these values are all with waters as the ligands. Every time you start changing the ligands or you change the oxidation state, these numbers change. OK? So you need to know a lot about the metal you're dealing with. So that's one place-- you've already seen all of this before. Whoops. So the second thing I want to very briefly talk about-- the second kind of reaction, which maybe many of you haven't seen before-- is electron transfer. OK. So this is basically oxidation reduction. And so clearly, you've seen oxidation reduction. So if we have some metal M in the n plus state, and we add an electron, it gets reduced. So to get to the reduced state, remember we need two half reactions-- something gets reduced, something else has to get oxidized. And what's different-- we've looked at redox cofactors-- and most of you have looked at a lot of redox cofactors in primary metabolism, like glycolysis or the pentose phosphate pathway or whatever-- what are the normal redox cofactors you encounter? The organic redox cofactors you encounter in biology? AUDIENCE: NAD. JOANNE STUBBE: Yeah, NAD-- NAD, flavins. OK, so this metal chemistry always involves one electron. So that's distinct. NAD, we've already talked about this, always involves hydride transfer-- two electrons and a proton. So this is one electron. OK. And so one electron.
And if you have other things-- we could have proton coupled electron transfer. So PC is proton coupled electron transfer. And remember, we just saw the example of nitrogen getting reduced to ammonia. OK? You're doing an eight-electron reduction, but you've got to have protons. That involves proton coupled electron transfer. If you're converting water into oxygen, again, you've got to take care of the electrons and the protons. And if I get that far in the last module, my lab works on ribonucelotide reductases-- that makes a precursor to DNA. You would never think about radicals, at all, but that chemistry involves proton coupled electron transfer. So here is some of the most important reactions in biology, and you really haven't been exposed to what's unique about the chemistry. So what do we know that's unique about the chemistry? What do what do we know about rate constants for electron transfer? Anybody know anything? Fast, slow. What's different about electron versus hydride transfer? AUDIENCE: With the hydride transfer, you have to transfer an entire proton, versus-- JOANNE STUBBE: So you're transferring the proton, which-- what's the difference in mass between an electron and a proton? AUDIENCE: A lot. JOANNE STUBBE: Huge. It's 2,000-- 2,000 fold. So you remember, probably from introductory chemistry, when you think about electrons, you think about-- you think about quantum mechanics and quantum tunneling, as well as-- it can be-- electrons can function as both particles and waves. So they can function as waves and particles. And while I'm not going to spend a lot of time talking about this, this is a central reaction in the inorganic part of biochemistry that occurs in humans that you need to take into account. When things behave as waves, they can function quantum mechanically. And we have an expression called the Marcus equation, which allows us to calculate the rate constants. So we have a rate constant for electron transfer. And if we have some acceptor her and some donor-- so, all we're doing is redox chemistry. The question is, what governs the rate constants for electron transfer? Well, it could be the electronic overlap, so that's part of it. This is part of the Marcus equation. What else governs the-- what else governs the redox chemistry if you have a donor and acceptor? The reduction potential of the donor and acceptor. So you need to think about the reduction potentials. And what other factor governs the chemistry? Does anybody know? Around the metals. So you have to think about, how much energy does it take to go from iron two to iron three, or copper two to copper one? What other factor? What else happens to the metal during a reduction or an oxidation? AUDIENCE: A reorganization. JOANNE STUBBE: Yes, a reorganization. So it can change its geometry. And so the other factor is called lambda, and this is reorganization chemistry. And furthermore-- and we'll see this is important a little bit with the iron systems-- it doesn't just have to be the immediate coordination sphere of the metals. It can be the second coordination sphere, as well. So the whole protein is important, I think as hopefully most of you know by now. And we're not going to spend a lot of time on this, but I think this is something you need to think about-- the rate constants for electron transfer. They could be 10 to the eighth, 10 to the 10th per second. How does that compare with the rate constant for chymotrypsin? What's the turnover number for a protease? Anybody remember? OK. 
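For reference, a standard non-adiabatic form of the Marcus expression mentioned above pulls together exactly those three factors-- the electronic coupling, the driving force set by the donor and acceptor reduction potentials, and the reorganization energy:

$$k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{AB}\rvert^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]$$

Here $H_{AB}$ is the donor-acceptor electronic coupling (the overlap term), $\Delta G^{\circ}$ follows from the two reduction potentials, and $\lambda$ is the reorganization energy-- which can include the second coordination sphere and the surrounding protein, not just the ligands directly on the metal.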
So a turnover number for a typical protease-- like the cholesterol one that hydrolyzes an amide bond-- might be anywhere from 10 to 50 per second. OK. So how does that compare to this? Slow. Very slow. So again, the chemistry of electron transfer is quite distinct from most of the chemistry you've encountered, and so you need to know it exists because it's everywhere in biology. We don't spend that much time on it in this class, but it's a unique part of the chemistry associated with metals. OK, so the third thing I wanted-- the third kind of chemistry I want to very briefly look at is substitution reactions. OK. Now, in organic chemistry, what kind of substitution reactions do you have? This is something hopefully you all remember from your organic, but what do you-- what do you have? What are the two basic reactions you learn about in the first semester of organic chemistry? AUDIENCE: SN1 and SN2. JOANNE STUBBE: Right. SN1, SN2. Associative or dissociative. Same thing in metals, OK? So you need to think about associative-- what does that mean? Dissociative. If you have something with four-- a metal with four ligands around it, you're going to add a ligand to get the reaction to go. That's associative. If you have something with four ligands around it, one of the ligands could dissociate, and you only have three ligands-- and that's the basis for getting that chemistry to go. And the reason-- the thing that I want to focus on and-- the thing I want to focus on is ligand exchange. So ligand exchange could occur by associative or dissociative mechanisms. Where have you seen ligand exchange in recitation? I think it was recitation four? You probably didn't think about it. I mean, we were doing something else. But the key to it working is ligand exchange rates. What about histidine tags? OK, so here you have a metal. What kind of a metal do you have on your column? A nickel. And the nickel is bound. But in order-- so you can hang-- how does your thing hang up? By ligand exchange. How does it come off? By ligand exchange. So an example of this is histidine tag chemistry. And another example that you've seen is magnesium. OK. What are the rate constants for ligand exchange with magnesium? Where do we see magnesium in biology? I'm spending too much time on this. But I actually think this is incredibly important. If you take home a few of these basic reactions, this is all you really sort of need to know to deal with metals and biological systems. Where's magnesium? Where do you find it? You find it on-- AUDIENCE: Phosphates. JOANNE STUBBE: On phosphates, yeah. So you have nucleotides-- like ATP would be an example. Whenever you have ATP, if you look at the charges of ATP-- we went through this in one of the recitations that I taught-- you never have these negative charges. It's always complexed with something to neutralize it. And the major thing-- since magnesium is 10, 15 millimolar inside the cell-- it's always bound. But if you try to isolate magnesium through some kind of a column, what happens? The magnesium-- because of the rate constants for exchange-- falls off. So if you have something else in there that can out compete it-- like protons or something-- it's gone. You never look at-- it depends on the rate constants for exchange-- but you never see the metal bound to those small molecules. So this is rapid exchange. And we'll see in the case of iron rapid exchange-- and I'm going to show you a table with this-- but rapid exchange is also important. And why is that important?
It's important because say you isolate a protein and you're putting it through a column. What-- if the ligands are coming off and on, what happens to the metal by the time you get it out the bottom of the column? There's no metal. So the issues with iron, which is everywhere, that catalyzes many, many, many kinds of reactions, is it's really hard to tell that there was a metal there inside the cell, because the iron dissociates during-- in the plus two state-- during protein purification. So what about-- what if I changed the oxidation from iron two to iron three? What do you think would happen to the exchange rate? AUDIENCE: Slow down a lot. JOANNE STUBBE: Yeah. So it would slow down a lot. Every metal-- every metal is different. Every set of ligands is different. But you need to think about exchange reactions, because they're all over the place in biology. Here's is an example that I took out of Lippard's book. I used to give a lot more data than this, but these give you the rate constants for exchange for iron. Here you can see iron two, iron three-- and these are all waters. OK? That you're never going to sight see inside the cell. You might have a few waters, but you have other ligands around. All of the exchange rates change with different ligands, so you need to think about that. And also magnesium-- 6 times 10 to the fifth per second. So it's exchanging really rapidly. And that really does govern-- you know, here we're doing protein purification here. We're trying to identify what the metal is. This is it made it really challenging to tell whether you ever had iron two bound to your protein. Sometimes you isolate zinc bound to your protein. And I'm going to show you-- because of the periodic table, zinc always out competes iron. So when you're purifying something and you have zinc contaminant in your buffers and stuff like that, you'll get the iron replaced with zinc and think you have a zinc protein. And you don't. You really had an iron protein, but because of ligand exchange, you don't know what the real active form of the protein is. This is something that's plagued this area for a long time, and it certainly plagues the area of the iron that we're going to be focused on. So let me see. I think I want to go up one more. All right. What do I want to say now? So the other thing I want to talk about is-- that's unique and distinct from what you see in solution-- all of this stuff happens in solution. That's where we learn. Just like with organic cofactors. We sort of study them, we learn how they work, then we take them into biological systems. We use that as a starting point for think about-- thinking about how the enzymes use these cofactors. And in fact, what you learned over here is exactly what you learn over here, except nature has figured out how to catalyze the reactions by a factor of 10 to the 12th faster. OK? So nature adds her-- adds her two cents worth on top of all the organic and inorganic chemistry we learn. And what is it that at this information? It's the protein environment. So the last thing that one really needs to think about is how do proteins tune metal properties? OK. So that's the big question. And we're going to spend a little bit of time talking about that. And to do that, I want to go back to the periodic table. OK, so again we're going to be focused on these metals. 
And what we see is there is a set of rules that inorganic chemists Irving and Williams-- many of you may have heard of the Irving-Williams series-- it sort of makes a prediction based on what you learned about transition metals in terms of ability to bind. If you compare all of these metals in the same oxidation state, in the same geometric environment. So one of the questions that we face is binding. And why is that important? Because inside the cell, we will see that copper binds much more tightly than manganese-- no matter what you do, that's true. And what's the basis of that? It's the atomic number, which changes the atomic radius-- it makes it smaller. It makes the ligands bind more tightly. So the problem is, when you're inside the cell-- if all these things were floating around inside the cell-- how do you control the metallation state inside the cell? So that's the key issue, and I'm going to give you-- I'm going to show you a little bit about how nature has figured out how to control all of this. It goes awry quite frequently, and that's-- how does it manifest itself? It manifests itself in disease. Just like we saw with cholesterol. So what we're going to see-- we are going to look at first row transition metals. In general, we'll see that manganese two binds less tightly-- if you look over there, you can see where we are in the periodic table. The atomic numbers increase less than nickel, less than copper. So here are-- here are our transition metals. And so what we see is the atomic numbers decrease-- increase. And the atomic radius decreases. And therefore what you see is over at this end, you have weak binding-- over at the manganese and iron end, we have weak binding. And over at this end, we have strong binding. So if you had a protein-- and I'm going to give you an example of this. There is a protein I'm going to show you that combined both copper and manganese. And you had equal amounts? Copper would always win-- by a lot. OK? So you need to study this, but you know-- you'd have to use 10,000 times more manganese to out compete the copper. OK? So this just shows you. So this is called the Irving-Williams series after the people who described this. And what they compared to get these numbers-- they are looking at all of these things in the plus two oxidation state. OK? And they're looking at it all in an octahedral environment, with six ligands around it. This is all plus two oxidation state, and all octahedral. Everybody remember octahedral? We have four equatorial ligands, and two axial ligands-- I'm not going to draw that out on the board. OK. So that's an issue. And the question then is, how do we deal with this issue? So here's our Irving-Williams series that I've given you here. But what do we do-- how do we deal with this inside the cell? The issue is that-- in vitro, you have an issue. And there's not much you can do about it, except control the relative concentration of the metals. Inside the cell, do you think it's easy to control the relative concentrations of metals? What do you think? Concentration is everything in biology, we just don't talk about it that much. Do you think it's easy to-- say I threw in the outside of a cell 15 millimolar copper. Do you think the cell could control that? Do you think it would all get taken in and then all of your enzymes would be loaded with copper? No. So you have to have a way to actually control all of that. There was a spectacular paper, I think, published a couple of years ago that sort of demonstrates this point. 
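That 10,000-fold number is just what a simple competition argument predicts if the two metals compete for the same 1:1 site-- a minimal statement of it, with the affinity ratio used purely as an illustration:

$$\frac{[\mathrm{Mn\text{-}protein}]}{[\mathrm{Cu\text{-}protein}]} \;=\; \frac{K_{\mathrm{A,Mn}}\,[\mathrm{Mn}^{2+}]_{\mathrm{free}}}{K_{\mathrm{A,Cu}}\,[\mathrm{Cu}^{2+}]_{\mathrm{free}}}$$

So if copper binds on the order of $10^{4}$ times more tightly, you need on the order of $10^{4}$ times more free manganese than free copper just to get equal amounts of the manganese-bound and copper-bound forms.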
And so I'm going to give you this example, because I think it really-- it was published in 2009. So in vivo. So this would be, over here, in vitro-- sorry. In vitro. And we can't get rid of the in vitro part. That's the chemical properties of the molecule, we're stuck with them. So it depends-- in vivo, metallation depends on abundance. Can we control abundance? Absolutely, we can control abundance. You've already seen with cholesterol, you control abundance with transcription factors. That's one way. There are many ways. We're going to see-- that's one of two general ways that iron is controlled. What about speciation? I've already told you-- and we're going to come back to this with iron later on-- you know, are the metals all bound to waters? But you have ATP inside the cell. Could iron two be bound to ATP? Absolutely. So it's a question of competition and what the binding constants are, which is what we are talking about in recitations this week. You know, if it's really weakly bound, then something else will out compete it. But you can purify-- you put ATP in a solution, you'll pick up iron all the time if you do atomic absorption on it. Because iron can easily bind to all the negative charges on ATP. And the other thing that you need to think about is location. So location is what we're going to be focused on in the example. And what do I mean by location? Even in a bacteria you have location, right? What are the two different compartments? You have a periplasm and you have the cytosol. We're going to be talking about periplasm, and you have cytosol. In us, we have much more complicated locations in metal homeostasis-- we'll see when we get to the second lecture. It has a lot of issues it has to deal with, OK? Let's look at this example. And I'm not going to spend a lot of time on it, but let me just show you what you need to think about. And these workers were interested in a cyanobacterium. And they wanted to find what was the protein that bound the most copper, and what was the protein that bound the most manganese. So we're looking at two extremes of the Irving-Williams series. So this group identified-- C-- I can never remember the acronym. CucA is the most abundant copper two binder. And they identified MncA. And the way they did it was pretty creative. If you're interested, you can go read the paper. It's the most abundant manganese binder. Both of these things are made in the cytosol-- both of the proteins are made in the cytosol of the cell. And what they found when they studied this system in more detail is the structures of the proteins and the ligands bound to the metals are exactly the same. So you have a beta barrel in both cases, and you have the same first coordination sphere-- the first coordination sphere being the ligands directly bound to the metal. If you took these two proteins, and you wanted to load MncA-- this is the manganese binder-- in the test tube, you would have to add 10,000 times more manganese than copper to get the manganese in there. So that, again, goes back to this-- I mean, it's going to be different for every system-- but it goes back to this question of controlling metallation inside the cell, which is extremely challenging to do. And we don't-- there's a major, in my opinion, unsolved problem in biology. They're going to do this by localization. So let me just walk you through the kinds of experiments they did. So, both these two proteins-- the one that binds copper and the one that binds manganese-- are produced in the cytosol.
It turns out that the one that binds manganese folds uniquely in the cytosol. And the cytosol, if you look at metal speciation, how much free-- you're going to learn about free today, or tomorrow in recitation-- how much free copper or zinc do you have-- do you think you have in the cells? Copper two and zinc two in the cell? Do you think you have a lot? A little. AUDIENCE: For copper, I know it's less than one percent. JOANNE STUBBE: Yeah, it's less-- yeah. It's tiny. Both copper and zinc bind extremely tightly to-- again, it's all about speciation, so it depends on what the ligands are inside the cell. And in fact, in the cytosol cyanobacteria, they have measured a micromolar of free manganese. And so again, this speaks to this question of is manganese readily oxidized? No. So you don't have to worry about reactive oxygen species with manganese. What happens is this protein folds in the cytosol of the cell. Comes off the ribosome. It picks up the manganese and folds. But its location is in the periplasm. How does it get to the periplasm? It gets-- there are two ways you can get proteins from the cytosol to the periplasm. One is through the Tat transporter. And Tat transfers-- it recognizes a couple of arginines. A little zip code-- we've seen zip codes over and over again-- which then takes it in the folded state into the periplasm. And the manganese, once it's in there, doesn't come out. Doesn't exchange. So there's something about the environment that does not allow exchange. So the manganese is placed into the protein in the cytosol. Now, what happens to the copper binding protein? In this case, as soon as it comes off the ribosome, it gets grabbed by a second kind of transporter. And this second kind of transporter transfers the unfolded protein through the plasma membrane. And it folds in the periplasm. And in the periplasm, I don't know what the ratio of copper to manganese is, but remember-- copper, by this model, out competes manganese by a lot. So even if you have manganese and copper in equal amounts, the copper will always win out. And so what happens here is the copper then binds. And so the copper is loaded in a different location than the manganese. So the way this organism-- this is just one solution-- I think a pretty creative solution-- to how you deal with the Irving-Williams series, which we're faced with all the time with the many, many metallocofactors we actually have inside the cell. I'm going to come back-- next time I'll talk about two more issues. I want you to be in tune with me when we move on in the iron world. And talk about sort of a big cartoon for metal homeostasis. Doesn't matter what the metal is-- any of these metals. And then we're going to move on and focus on iron.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: So last time, we were talking about these aminoacyl tRNA synthetases that are responsible for attaching amino acid monomers to the three prime end of tRNAs. And we were looking at the isoleucyl aminoacyl tRNA synthetase as an example, looking at experiments that were done to study mechanisms. So recall, we left off having discussed a two-step model, where there's an intermediate, an amino adenylate formed. And then, in the second step, there's transfer of that amino acid to the tRNA by the aaRS. And so we looked at some data from steady-state kinetic experiments. Recall that a C14 radiolabel was used to watch transfer, and then we closed discussing an ATP-PPi exchange assay which gave evidence for formation of that amino adenylate intermediate. Right? And then, lastly, we talked about use of a stopped-flow to do experiments that allow you to look at early points within a reaction. And so what we're going to do is to close these discussions of experiments and this aaRS mechanism is just look at one more experiment that was done to further probe the rate-determining step of this reaction using the stopped-flow. OK? And so this experiment pertains more to reaction kinetics, and the question is, let's monitor transfer of the amino acid to the tRNA by another method here. These experiments were set up in two different ways depending on what components were mixed. And if you just rewind to Monday and recall the ATP-PPi exchange assay and the steps in that assay, in that we showed that the amino adenylate intermediate remained bound to the enzyme there. Recall then only PPi was released in that assay. And so in these experiments, the fact that the amino adenylate can remain bound was taken advantage of. And the researchers were actually able to have a preformed complex there, so basically starting after step two. So in experiment one, how I'm going to show these is by drawing the two syringes and listing the components of each syringe. And this is a good way for setting up problems within the problem sets, thinking about stopped-flow experiments. So the question is what are we going to mix? So we have syringe one and syringe two, and recall that these go to some mixer. So the two solutions can be rapidly mixed, and that's where the chemistry is going to happen. So in experiment one, in syringe one, what we have is the purified complex. OK? So we have C-14 labeled isoleucine-AMP bound to the aminoacyl tRNA synthetase of a purified complex, here. And then in this other syringe two, what we have is the tRNA. OK? So imagine these are rapidly mixed. There'll be transfer of the radiolabeled isoleucine to the tRNA, and so formation of that aminoacyl tRNA can be monitored. OK? In the second experiment, we have just theme in variation, and if you're interested in more details, the reference is provided in the slides. So again, in syringe two, we have the tRNA, and in syringe one, what will be combined are the components here. OK? So then, the question is, in each case, what do we see? And those data are presented here from the paper, and there's some additional details about the experimental setup. So effectively, what we're looking at on the y-axis is the amount of tRNA that's been modified. 
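Before looking at the data, here is a minimal sketch of how a k_obs is typically pulled out of this kind of trace-- a single-exponential fit to the amount of acylated tRNA versus time. The data here are synthetic and the rate constant is a made-up illustrative value, not the published number for this enzyme.

```python
# Minimal sketch: extract k_obs from a stopped-flow acylation time course by
# fitting A(t) = A_max * (1 - exp(-k_obs * t)). Synthetic data, illustrative values.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a_max, k_obs):
    return a_max * (1.0 - np.exp(-k_obs * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)                                  # seconds after mixing
data = single_exp(t, 1.0, 6.0) + rng.normal(0, 0.02, t.size)   # hypothetical k_obs = 6 per second

(a_fit, k_fit), _ = curve_fit(single_exp, t, data, p0=[0.8, 3.0])
print(f"fitted k_obs = {k_fit:.2f} per second")                # to compare with the steady-state kcat
```

The comparison of that fitted k_obs with the steady-state kcat is what carries the mechanistic argument.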
So tRNA acylation measured by transfer of the radiolabel versus time. And in the black circles, we have the data from experiment one, shown here, and in the open circles, we have the data from experiment two. So what is the conclusion from these data? And this value here is now similar to something we've seen before in this system. Both experimental setups are giving the same result. Right? Effectively, these data are superimposable, and they can be fit the same. So what does that tell us about the rate-determining step? AUDIENCE: [INAUDIBLE] versus forming the intermediate. ELIZABETH NOLAN: Yeah. Right. Aminoacylation of tRNA is the rate-determining step. So some of you suggested that in class on Monday. Right? So that's the case here. OK? So formation of the intermediate is much more rapid than acylation of the tRNA here. So we've examined now the mechanism in terms of getting the amino acid onto the tRNA. What do we need to think about next here? So what we need to think about is fidelity. OK, and we've looked at the overall rate of error in protein biosynthesis, how often errors occur-- on the order of 1 in 10 to the 3. So how is the correct amino acid loaded onto the correct tRNA? Each tRNA has an anticodon that is a cognate pair with a codon. And so different tRNAs need to have different amino acids attached. OK, and what does that mean? That means, in general, there's a dedicated aminoacyl tRNA synthetase for each amino acid here. So how are amino acids with similar side chains differentiated by these enzymes? And is it possible for an incorrect amino acid to get loaded onto a tRNA? And if that happens, what are the consequences? So we're going to examine fidelity some here. And as background, an observation made, say from studies like that ATP-PPi exchange assay, is that some aminoacyl tRNA synthetases can activate multiple amino acids, so not only the one they're supposed to activate but also others. So what does that mean? That means that the enzyme can bind and activate effectively the wrong amino acid, and if we think about fidelity, we can think about this as being a problem here. So what happens? What happens is that these enzymes have an editing function, and they're able to sense if a wrong amino acid is activated. And then they have a way to deal with it, and this is by hydrolysis. OK? And so let's consider an example, for instance, just similar side chains. So if we just consider, for instance, valine, isoleucine, and threonine, these will be the players for our discussion. OK? They're different, but they're not too different. Right? Oops, sorry about this. We're missing a methyl. Valine and isoleucine, we have a difference of a methyl group. Threonine, we have this OH group. Right? And we can just ask the question, for instance, how is valine differentiated from isoleucine or threonine here? And so as an example, what's found is, if we consider our friend that we studied for the mechanism here, what we find is that this binds and activates isoleucine, as we saw, but it will also bind and activate valine here. And effectively, if this happens, we have a mismatch, because the end result will be isoleucine-RS with valine-AMP bound here. OK? And what's found is that the catalytic efficiency, or Kcat over Km, in this case, is about 150-fold less than for the native substrate. So that doesn't account for the 1 in 10 to the 3 error rate here. So we need more specificity. So what's going on?
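To see why 150-fold isn't enough, a rough back-of-the-envelope estimate, assuming for simplicity equal cellular concentrations of valine and isoleucine:

$$\text{misactivation frequency} \;\approx\; \frac{(k_{cat}/K_M)_{\mathrm{Val}}}{(k_{cat}/K_M)_{\mathrm{Ile}} + (k_{cat}/K_M)_{\mathrm{Val}}} \;\approx\; \frac{1}{151} \;\approx\; 7\times10^{-3},$$

which is already worse than the overall error rate of roughly 1 in $10^{3}$, before any later step gets a chance to make mistakes of its own. So the synthetase needs an additional proofreading step.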
So we're going to consider this editing function and a model that's often used to describe how these aaRS do editing is one of two sieves. These enzymes don't actually have a sieve. It's just a conceptual way to think about it. So this double-sieve editing model involves a first sieve which is considered to be a course one. So imagine if you have like a change sorter. It will let the quarters through as well as the and dimes and the pennies. There's some sort of discrimination of amino acids based on size, and then depending what gets through this first sieve or gate, there's a second sieve which is considered to be a fine one. And this one can differentiate perhaps on the basis of size or maybe on hydrophilicity or hydrophobic of the side chain. So effectively, if an incorrect amino acid passes through this first sieve-- so in other words, if it binds to the enzyme and becomes activated-- hydrolytic editing will occur. OK? So think about hydrolysis in terms of having breakdown of these species. So if the incorrect amino acid passes through and is adenylated, there'll be hydrolysis. So let's consider some examples so the first example here we can consider this guy and isoleucine and valine. So as I mentioned, this aaRS will activate both. So in this case, the first sieve can't differentiate isoleucine from valine. They have similar sizes according to this aaRS. But then what happens here in the second sieve, isoleucine is too big, and so there's no hydrolysis, and it moves on to form the desired charged tRNA. In contrast, valine's a bit smaller. It passes through the sieve, and it ends up being hydrolyzed. So these aaRS also have an editing domain, and this editing domain, as we'll see in a few slides in a structure, is responsible for this hydrolysis, so stated here. Right? Different sites, so there's an aminoacylation site and an editing site here. So valine can reach the editing site, but isoleucine cannot. So how do you predict? Just to keep in mind, every enzyme is different in terms of the model for discrimination and also when editing occurs. So you really need to look at the data when the data is presented to you to sort out how this works. Let's just look at another example with a cartoon depiction. So this is for the valine RS, and we're going to consider the three amino acids here-- valine, threonine, and isoleucine. So in green, we have the first sieve, and this is based on size. So what do we see in this cartoon? So threonine and valine make it through, but isoleucine does not. It's rejected right away, so it's never activated. So if threonine and valine pass through, what happens? We see each one is activated as the amino adenylate, and then what? Well, valine, we want to transfer the valine to the tRNA, so it can move on and help with protein synthesis. If threonine's activated, and here we see that threonine is transferred to the tRNA as well, this is hydrolyzed by the editing site, in this case. So the threonine is removed from the tRNA with the anticodon for valine. Right, so think about the ester bonds that we saw last time in terms of the three prime end of the tRNA being modified and the chemistry that will happen there to result in hydrolysis of and release of the amino acid here. So what that cartoon hints to is that the hydrolysis can occur at different steps. So we can have hydrolysis that is pre-transfer, which means the editing occurs before the tRNA is modified. 
Or we can have post-transfer editing, which is what we saw in the prior slide, where the editing and hydrolysis occurs after the amino acid monomer is transferred to the tRNA. OK? And this schematic here depicts that, so what do we have? We have the aaRS responsible for modifying the tRNA for isoleucine, and we combine that with valine, the wrong amino acid, and ATP. What happens? So E is for enzyme. We have formation of the amino adenylate intermediate. Here's the tRNA with the anticodon for isoleucine. What happens? So we have this complex formed in this depiction. Pre-transfer editing would occur at this stage, before the valine is transferred to the tRNA, and so what do we see? We see breakdown into these species. If the valine is transferred to the tRNA, we don't want this, because that would result in misreading of the genetic code. Post-transfer editing, this species here is hydrolyzed. So whether pre or post-transfer editing occurs is going to depend on the aminoacyl tRNA synthetase, and some can use both mechanisms. That's what we're seeing here. OK? Some only use one; for instance, the valine RS only uses a post-transfer editing mechanism. So when presented with the data, look at the data and see what species is being hydrolyzed. And if both are, how do the steady-state kinetics, for instance, compare? Just to take a look in the context of a structure of one of these aaRS. So the sites where aminoacylation and editing occur are separated by about 30 Angstroms, and that's shown here, where we have the aminoacylation site, and here we have the editing site. That's responsible for pre and/or post-transfer editing. So in thinking about this and thinking about how one could leverage this 30 Angstrom separation and these two distinct sites in terms of experiments, what does that allow one to do? So imagine if you want to ask, what are the consequences of having aaRS that have faulty editing function, and effectively mischarge tRNAs, or put the wrong amino acid on a tRNA? What does that mean for a cell? There's an opportunity to do that here. So you could imagine mutating residues that are critical for editing function in the editing site. Such that you have an aaRS variant that can activate amino acids and transfer them to the tRNA but cannot edit when a mistake happens. Right? So you can imagine a site-directed mutagenesis, purifying the enzyme and doing some in vitro characterization to see how it behaves. And then you could also imagine translating this into a cellular context and asking, say in cell culture, what happens here? So basically, what are the consequences of faulty editing? And these types of studies have been done. We're not going to look at them in detail. But just as an overview and some concepts that will come up within our folding section, what's been shown is that a single point mutation in an editing domain of one of these aminoacyl tRNA synthetases may have deleterious consequences. And we can imagine that these consequences could result from proteins or enzymes that gain a new function or don't do their correct function. Right? So just imagine that some mischarged tRNAs, where mischarged means the wrong amino acid is attached, are around because of some mutant aaRS. And these tRNAs that are mischarged can be delivered to the ribosome, which means that point mutations form within synthesized polypeptide chains. So there's some mixture where some of these proteins are native, and others are mutant, and what might happen here in terms of consequences?
So native protein will go on and do its job. Imagine there's some mutant protein here that's altered in some way, and these are just some examples of possible outcomes. So maybe there's a breakdown of some essential cellular process. Here, we have triggering of autoimmune-like responses, things that are not good. What if these mutant proteins misfold? So they can't form their correct fold, and fold is important for function. Maybe there's aggregation. Maybe there's stress on the proteasome, ER response, unfolded protein response, cell death. So fidelity's important. And just some things to think about as we close this section. We can consider error rates of various biological polymerizations, whether that be DNA replication, transcription, or translation, and they vary quite a bit here from this. And what the take-home can be by comparing these error rates is infrequent mistakes in decoding the mRNA are accepted as a source of infidelity. So they do occur, and they occur more frequently than, say, an error in replicating the DNA, and that makes sense. Right? If an error occurs in DNA replication, there's a huge problem likely compared to an error in translation. So some questions just to think about, answers aren't going to come up within the context of this course. But higher accuracy is important, but actually how much accuracy is enough? And there is a cost in terms of cellular energy for accuracy, and is it that the cell tunes its accuracy to some point that could be considered optimal, and are there benefits to translational infidelity? Right? So the prior slide showed negative consequences, but are there benefits? So that discussion, we'll close considering how the amino acids get attached to tRNAs, and so where we're moving to now is the elongation cycle. AUDIENCE: So is there a specific part of the cytoplasm where the tRNAs and the amino acids come together, or does this happen everywhere? ELIZABETH NOLAN: So I actually don't know, but I think of them as being everywhere in terms of the tRNAs. Because as we'll see in a few slides, EF-Tu, which is required for delivering the tRNAs to the ribosome, is highly abundant. At least, that's my thinking for prokaryotes. Do you have anything to say? The question was effectively are there certain regions of the cell where tRNAs get modified more than other regions? JOANNE STUBBE: I don't know. In mammalian cells, they have weirdo complexes with tRNA synthases that they've been around forever. and I still think we don't really understand what the function is. AUDIENCE: [INAUDIBLE] JOANNE STUBBE: Can you speak a little bit louder? ELIZABETH NOLAN: The question is, do we have information about say the distribution of tRNAs as being amino acid modified versus unmodified? AUDIENCE: I think maybe we could [INAUDIBLE] I don't know. ELIZABETH NOLAN: There's always a way, probably. Right? But I don't know what that distribution is either in terms of the percentage of tRNAs that are aminoacylated at any one given time. Yeah, just don't know. I think one key thing to think about as we come to the next part is that these tRNAs are bound by EF-Tu. So to think of them as in complex with a translation factor as opposed to tRNAs floating around in the cytoplasm, so I think that that's a key point of focus. So moving into elongation, what do we need to think about here? So we need to think about delivery of the amino acid tRNAs. How does the ribosome ensure that the correct aminoacyl tRNA is delivered? 
So we have the correct amino acid on the tRNA, but we also have to get the correct amino acid to the ribosome. How is peptide bond formation catalyzed? What is the method by which polypeptides leave the ribosome, and how is translation terminated here? So effectively, these are all questions we need to address in terms of thinking about how the ribosome translates the genetic code and synthesizes the polypeptide. So within the notes posted on Stellar, there's a number of pages of definitions, so terminology that comes up within these discussions of the ribosome to refer to. And in terms of our translation overview slide, where we are now is here, in elongation. So we have the mRNA, our 70S, and we're going to focus for the rest of today on thinking about EF-Tu, this elongation factor that's responsible for delivering the amino acid tRNAs to the ribosome here. So as an overview in terms of a cartoon, where are we going? Here, we have our ribosome, and in this depiction, it has been translating. So we have a nascent polypeptide emerging through the exit tunnel of the 50S. So we see this peptidyl tRNA in the P-site, and we have this deacylated tRNA in the E-site. So what happens? That A-site is empty, and for another round of elongation to occur, the aminoacyl tRNA needs to be delivered. And as we'll see today and in recitation this week, EF-Tu is responsible for that. So there's a ternary complex that forms between EF-Tu, which is a GTPase, GTP, and the aminoacyl tRNA. And this ternary complex delivers the aminoacyl tRNA to the A-site. OK? This allows for peptide bond formation to occur in the catalytic center. And then there's a process called translocation, in which the elongation factor G, in complex with GTP, comes in and helps to reset the ribosome such that another aminoacyl tRNA can come in. So where we're going to focus for the rest of today is on this process here, thinking about EF-Tu and how that delivers amino acids attached to tRNAs to the A-site. OK, so just in our cartoon, where we left off with the initiation process, we have that initiator tRNA in the P-site, and the A-site is empty. OK? And one other thing I'll just show here: I mentioned when describing ribosome structure that some ribosomal proteins have additional jobs. So it's not just that these proteins help with the overall structural integrity of the ribosome. And there are two ribosomal proteins, L7 and L12, and these are involved in recruitment of that ternary complex between EF-Tu, the GTP, and the aminoacyl tRNA. So now, we need to get the aminoacyl tRNA to the A-site, and this requires EF-Tu. And when we think about this, we always need to think about this ternary complex, which is EF-Tu bound to the aminoacyl tRNA bound to GTP. So a little bit about EF-Tu. So in E. coli, EF-Tu is the most abundant protein. So there's tons of EF-Tu. OK, approximately here, we have 100,000 copies per cell. So it's about 5% of total cellular protein. And so, as I just said in response to a question about these tRNAs in the cells, we can think about this entire tRNA pool, or aminoacylated tRNA pool, as being sequestered by EF-Tu. So EF-Tu binds the aminoacyl tRNA, and it binds GTP to form the ternary complex. And this allows EF-Tu to deliver these amino acids attached to the tRNAs to the A-site, and it's a GTPase. And we need to think a lot about how this activity relates to its function and fidelity. So here is a depiction of the structure of a ternary complex.
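As a quick aside on that abundance number: with roughly 100,000 copies per cell, you can estimate the cellular concentration of EF-Tu. A minimal calculation, assuming an E. coli volume of about 1 femtoliter (that volume is my assumption, not a number from the lecture):

    # Rough estimate of the cellular EF-Tu concentration from its copy number.
    AVOGADRO = 6.022e23            # molecules per mole
    copies_per_cell = 1.0e5        # ~100,000 copies of EF-Tu (from the lecture)
    cell_volume_liters = 1.0e-15   # ~1 fL for an E. coli cell (assumed)

    concentration_molar = copies_per_cell / (AVOGADRO * cell_volume_liters)
    print(f"EF-Tu is roughly {concentration_molar * 1e6:.0f} micromolar")

That works out to something on the order of 100 micromolar, which helps rationalize the picture from the lecture of the aminoacylated tRNA pool being sequestered by EF-Tu. Back to the ternary complex structure.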
So what we see is that we have a tRNA here, and here we have EF-Tu bound to the tRNA. So here is the anticodon loop, and if we consider this structure of the ternary complex bound to mRNA, what do we see? So we have an mRNA in green. OK, here's the tRNA, and the anticodon end, and here's EF-Tu. And as I said, EF-Tu is a GTPase. Where is the GTPase center? That's up here. So this GTPase center of EF-Tu is quite far from the tRNA anticodon, down here. This distance is about 70 Angstroms. And so this is something quite incredible to think about, because as we'll see, when there's codon recognition-- meaning this codon-anticodon interaction, that's a cognate pair-- GTP hydrolysis is stimulated. So how is that communicated over 70 Angstroms? If there's a recognition of that here between the mRNA and the tRNA anticodon, and GTP hydrolysis happens up here, how is that signaled over 70 Angstroms? Right? So clearly, there's going to be some conformational changes that occur that allow this GTPase activity to turn on. Just another view, so here, again, we have the structure of the ternary complex bound to the mRNA, and here, we can look at the ternary complex bound to a 70S ribosome. So we have the ribosome in this orangey-gold color, the 50S, the 30S. Here, we have the PTC and decoding site. The tRNA is in green, and EF-Tu is in this darker orange here, to place that in the perspective of the 70S ribosome here. So conformational change is required to signal codon recognition to the GTPase center, and this is something that will be spoken about in quite some detail this week in recitation. One other point of review before moving forward with delivery of the amino acid tRNA: we need to think about codon-anticodon interactions here for decoding. So we have cognate versus near-cognate versus non-cognate, and this is for the codon-anticodon interaction. OK, and so imagine we have some mRNA, and you need to think about the five prime and three prime ends with this. And then we have some tRNA, three prime, five prime, and we need to ask how do these match. So for instance here, if we have AAG, and we have positions one, two, three, from left to right of the mRNA, right here we have a cognate match. OK? So we have the AU match in positions one and two, and then wobble's allowed in position three, this GU here. So that GU is not a Watson-Crick interaction, but it's tolerated. OK, just as another example here, imagine we have GAG, here. What we see is that there's only one match, meaning Watson-Crick base pairing, in position two. OK. Here, this GU, that's not a match based on Watson-Crick base pairing, and as a result, the ribosome is going to want to reject this tRNA, if this is what's happening in the A-site here. And then, we can just imagine some situation where we have a tRNA and an mRNA where there's just no match. OK? No Watson-Crick base pairing here. So what we need to ask is, as EF-Tu is delivering these aminoacyl tRNAs, what happens if it's a cognate match versus a near-cognate versus a non-cognate? How does the ribosome deal with the wrong tRNA entering the A-site? Right? So again, this is something important for fidelity, and these both need to be rejected. So why are we reviewing this? We're reviewing this because it's important in terms of what happens during initial binding of aminoacyl tRNAs to the ribosome. So we're going to go over some of this in words and then look at a cartoon that explains this process. And what we're focused on is delivery of the aminoacyl tRNA to the A-site. So what happens first? OK.
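If it helps to see those decoding rules written out as an algorithm, here is a toy sketch in Python (my own illustration, not anything from the course materials): read the codon 5' to 3' against the anticodon written 3' to 5', require Watson-Crick pairs at the first two positions, tolerate a G-U wobble at the third, and classify the result. The "exactly one mismatch equals near-cognate" cutoff is a simplification.

    # Toy classifier for codon-anticodon pairing (illustrative only).
    WATSON_CRICK = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    WOBBLE = WATSON_CRICK | {("G", "U"), ("U", "G")}  # G-U tolerated at position 3

    def classify(codon, anticodon):
        """codon is 5'->3'; anticodon is written 3'->5' so the positions line up 1:1."""
        pairs = list(zip(codon, anticodon))
        ok = [
            pairs[0] in WATSON_CRICK,
            pairs[1] in WATSON_CRICK,
            pairs[2] in WOBBLE,   # the wobble position
        ]
        if all(ok):
            return "cognate"
        if sum(ok) == 2:
            return "near-cognate"
        return "non-cognate"

    print(classify("AAG", "UUU"))  # cognate: A-U, A-U, and a tolerated G-U wobble
    print(classify("GAG", "UUU"))  # near-cognate: G-U at position one is not Watson-Crick
    print(classify("CCC", "UUU"))  # non-cognate: no Watson-Crick pairs at all

OK, now back to the cartoon and what happens first.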
First, there needs to be an initial binding event, where the ternary complex binds to the ribosome. So initial binding, it binds to the 70S, and these ribosomal proteins are involved in the recruitment of the ternary complex. This initial binding event of the ternary complex to the ribosome is independent of the mRNA. What happens next is that there's codon recognition. So we need to think about that tRNA entering the A-site, and there's some sort of sampling that occurs in the decoding center, so sampling of codon-anticodon pairs in the A-site, and so what happens? What happens if there's a cognate event or a non-cognate event? So if a cognate anticodon recognition event occurs, there's a series of steps that then happen. So with a cognate codon-anticodon interaction, there will be a conformational change in EF-Tu, and this activates the GTPase center which allows for GTP hydrolysis. OK, and effectively this conformational change stabilizes the codon-anticodon interaction here, and that stabilization accelerates the GTP hydrolysis step. So this is all building towards a kinetic scheme. In terms of enhancements, what's found is that the rate of GTP hydrolysis by EF-Tu increases by about 5 times 10 to the 4th with cognate anticodon recognition in the A-site. So we have GTP hydrolysis, and then there's another conformational change. So we have EF-Tu in its GDP-bound form, and effectively, EF-Tu will dissociate from the aminoacyl tRNA, and the aminoacyl tRNA will fully enter the A-site. OK so this process is called accommodation, and once that happens, peptide bond formation can occur. So this is the good scenario. The polypeptide can keep being made. What if it's not a cognate? So what if a near-cognate tRNA is delivered to that A-site during this initial binding event which is independent of the mRNA? That's why this can occur. If it's a near-cognate anticodon, what we observe-- and this is all from experiments you'll be learning about this week-- the ternary complex rapidly dissociates from the ribosome. And what's found from kinetic measurements is that the dissociation of the ternary complex, when it's a near-cognate situation, is about 350-fold faster than cognate. So let's look at this stepwise within a cartoon format. You'll see another depiction of this scheme in the recitation notes and in problem set two. So here, we have multiple steps in this overall process. All of these steps have some rate that's been measured by multiple types of methods, and Joanne will be presenting this week on a lot of pre-steady-state kinetic analysis that were done to measure these rates here. And basically, the key point to keep in mind, and that I'd like to stress from what was just said on the prior slide, is that what you'll see throughout this is that conformational changes are coupled to these rapid chemical steps. And the chemical steps are irreversible, this GTP hydrolysis. So what do we see? We begin with initial selection. Here, we have our ribosome, and there's a polypeptide being synthesized. Here's the ternary complex-- EF-Tu, GTP, and the aminoacyl tRNA. So there's an initial binding step that's governed by k1 in the forward direction and k minus 1 in the back direction, and said before, this is independent of the mRNA. So what happens? The ternary complex binds the ribosome, there's sampling in the A-site of the anticodon, and then there is a step described as codon recognition with k2 and k minus 2. OK? 
In this scheme, if an arrow is colored, red arrow indicates the rate is greater for near-cognate than cognate. OK? Which means in the event here of a cognate pair, this is going to push forward in the forward direction. If it's near-cognate, this back step has a greater rate of about 350-fold. OK? So we're going to end up back here. With cognate recognition, next, we have GTPase activation, again, forward and reverse. Green indicates the rate is greater for a cognate match than near-cognate. So if it's the correct anticodon, it's going to plow through to here. We have GTPase activation. And then what happens down here? We have a GTP hydrolysis step. We have a conformational change in EF-Tu, and then what? We can have accommodation such that the tRNA was installed fully into the A-site and then rapid peptide bond formation or peptidyl transfer. The ribosome has one last chance to correct a mistake. So you can imagine that after GTP hydrolysis, after the conformational change in EF-Tu and its dissociation, there's a last chance at rejection here. Realize that step is occurring at the expense of GTP here. So in thinking about how to deconvolute this model or how to design experiments to test this model, there's a lot that needs to be done. Right? A lot of rates that need to be measured, a lot of different species along the way with the ribosome. Right? So how do you get a read out of each of these steps? That's what we'll be focused on in recitation this week and next here. So here are some more details on this initial binding process with some information related to the k1s and k minus 1s here. That's provided to help navigate the reading this week for recitation here. So what happens in the GTPase center of EF-Tu? What are some of these conformational changes? And effectively, there are conformational changes in the decoding center that are critical on one hand. So that's not at the GTPase center, but first asking what's happening when the mRNA and tRNA codon interact? And then what's happening in the GTPase center here? So just to note, not shown in the slide in terms of the decoding center. OK, what we need to be focusing on are changes in the 16S RNA, and effectively, I'll just point out three of the positions. So we have A1492, A1493, and G530 of the 16S, here. And what we find is that these bases effectively change conformation with a cognate match. And they effectively flip and interact with that cognate anticodon to help stabilize the codon-anticodon interaction. So this stabilizes the codon-anticodon interaction, and that stabilization accelerates the forward steps. So that results in this acceleration of GTP hydrolysis. So then the question is, what's happening in the GTPase center of EF-Tu? Because there has to be a change in conformation at that GTPase center 70 Angstroms away to allow for GTP hydrolysis to occur, and somehow, that all has to be signaled from here to there. So what we're looking at here is an excerpt of the structures looking at this GTPase center, and so what do we see? Effectively, two residues, so isoleucine-60 and valine-20 have been described as a hydrophobic gate in the GTPase center. OK, and the idea is that if this gate is closed, it prevents a certain histidine residue, histidine-84, from activating a water molecule which then allows for the GTP to be hydrolyzed. OK, but if there's a change in conformation, and this gate opens, that chemistry can occur. So what are we looking at here in these structures? 
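One way to see how those colored arrows turn into selectivity is to treat each branch point as a simple kinetic partition: at codon recognition the complex either goes forward to GTPase activation or falls off, with probability k_forward / (k_forward + k_off), and the same logic applies again at the proofreading step after GTP hydrolysis. Here is a minimal sketch; the roughly 350-fold faster dissociation for near-cognate is the number from the lecture, but the individual rate constants below are placeholders I made up for illustration, not measured values.

    # Toy kinetic-partitioning sketch; all rate constants are illustrative placeholders.

    def commit_probability(k_forward, k_off):
        """Probability of going forward at a branch point instead of dissociating."""
        return k_forward / (k_forward + k_off)

    # Initial selection: codon recognition either proceeds to GTPase activation or reverses.
    p1_cognate = commit_probability(k_forward=500.0, k_off=1.0)   # fast forward, slow off
    p1_near = commit_probability(k_forward=0.5, k_off=350.0)      # ~350-fold faster off

    # Proofreading: after GTP hydrolysis, the tRNA is either accommodated or rejected.
    p2_cognate = commit_probability(k_forward=10.0, k_off=0.5)
    p2_near = commit_probability(k_forward=0.1, k_off=5.0)

    s_initial = p1_cognate / p1_near
    s_proof = p2_cognate / p2_near
    print(f"initial selection favors cognate ~{s_initial:.0f}-fold")
    print(f"proofreading adds another ~{s_proof:.0f}-fold")
    print(f"overall ~{s_initial * s_proof:.0f}-fold, because the two stages multiply")

The point is just that two modest partitions multiply into a large overall discrimination, which is why the proofreading step is worth the GTP it costs. With that arithmetic in mind, back to what we're looking at in these structures of the GTPase center.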
Effectively here, we have the two hydrophobic residues of the gate, so valine-20, isoleucine-60, and here's that histidine-84 I told you about, and what is this, GTPCP? So what we have there is a nonhydrolyzable GTP analog. These types of molecules are very helpful in terms of getting structural information, in terms of doing certain types of biochemical experiments. OK? So effectively, we can have an analog bound that cannot be hydrolyzed. What are we looking at here? Here, we're looking at the, say, activated species, and what do we see? We see that this histidine has changed position. So here, it's flipped that way, here this way, and here, what we see is a view with EF-Tu in the GTP-bound form. So the idea is that overall conformational changes that occur 70 Angstroms away, because of codon-anticodon recognition, effectively signal conformational changes in the GTPase center that allow for GTP hydrolysis to occur and things to move in the forward direction there. So that's where we'll close for today. On Friday, we'll continue moving forward in this elongation cycle, and starting in recitation tomorrow, you'll look at experiments that allowed for this kinetic model to be analyzed and presented. You really need to come to recitation this week and read the paper. JOANNE STUBBE: And you need to read the paper more than once. It's a complicated paper. ELIZABETH NOLAN: That's on [INAUDIBLE]. It's a complicated paper, which is why we have two weeks of recitation for it. There's a lot of methods, and I'll also point out that problem set three has very similar types of experiments, but it's looking at EF-G instead of EF-Tu. So spending the time on this paper in the upcoming weeks is really important.
MIT_508J_Biological_Chemistry_II_Spring_2016
18_PK_and_NRP_Synthases_4.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ELIZABETH NOLAN: We're going to end the unit on synthesis today. And the focus of today's lecture will really be looking at one system in detail and the types of experiments that are done to elucidate a biosynthetic pathway for a non-ribosomal peptide. And so just to recap from last time, if we think about studying assembly lines in lab, and we're thinking about this for a non-ribosomal peptide synthetase, what needs to be done? So first, it's necessary to, typically, overexpress and purify domains, didomains or modules. And so on Monday, it came up that often, these proteins are enormous and it's not possible or feasible to express entire modules, or entire proteins that have multiple modules. So oftentimes, people will look at individual domains, or didomains, which are smaller and more amenable to overexpression in an organism like E. coli. Then it's necessary to assay for A domain activity. So what we call the A domains are the adenylation domains. And the question is, what monomer is selected and activated? And so the ATP-PPi exchange assay comes up here. There needs to be assays for loading of the T domain, or carrier protein, with the monomer. Assay for peptide bond formation, which is the condensation domain. And then often, some assay for chain release by the thioesterase domain. OK, so assay for TE activity. Chain release. And so in terms of thinking about these T domains, we learned that these T domains need to be post-translationally modified with the Ppant arm, which means we need an enzyme called a PPTase. And so in many cases, we don't know what the PPTase is for a given gene cluster. And what's done, often in the lab, is that a PPTase from B. subtilis, named Sfp, is used in order to post-translationally modify the T domains with the Ppant arm. So there is a serine residue in these T domains that gets modified. We looked over that in a prior lecture. So this one is very useful. And if you don't know the enzyme to use, people will use recombinant Sfp. And just recall, we have the T domain. There's a serine moiety. We have a PPTase. That's going to stick on the Ppant arm here. So we call this apo, holo, and then the amino acid, or aryl acid monomer, in the case of NRPS gets loaded here via a thioester. And so Sfp can be used to get us here. And even what people have done is make modified analogs, where there's some R group. So you can imagine using chemical synthesis to load a monomer, or even some other type of group that, for some reason, you might want to transfer here. And this Sfp is very promiscuous and it can do that. And so the take-home here is if you need a PPTase, overexpress, purify, and utilize Sfp. Here's just an example for review, where we have a carrier protein, so a T domain, and we have the PPTase activity here, Sfp, attaching this Ppant arm. And here, it's described with an R group. And just to give you an example of possibilities here, there have been many reports of CoA analogs being transferred to T domains by Sfp. And these can range from things like an isotope label to peptides to steroids to some non-ribosomal peptide derivative, or a fluorophore. So this has been used as a tool. And you might ask, why is this possible?
And if we just take a look at the structure of Sfp from B. subtilis with CoASH and magnesium bound, what we see is that this end of the CoA is extended out into the solvent. And at least in this structure here, it's not interacting with regions of the protein. So you can imagine that it's possible to attach some group, even a bulky group, here and be able to transfer it there. So where we're going to focus the rest of the lecture is on an assembly line responsible for the biosynthesis of a natural product called enterobactin, and this is a siderophore. And so in thinking about this, what I would just like to first note is that when we talk about these assembly lines, we can group them into two types, which are non-iterative and iterative assembly. And so what does this mean? So we've seen examples of non-iterative assembly last time on Monday with the ACV tripeptide and the vancomycin synthetase. So in these non-iterative assembly lines, effectively, each step has its own module. So each carrier protein, T domain, each condensation or catalytic domain, is used only once as the chain grows. And we see the chain passed along from module to module here. So also, the PKS we looked at for synthesis of DEB is one of these non-iterative assembly lines. So in contrast, the example we're going to look at today, the enterobactin synthetase, is an iterative assembly line, and this is similar to what we saw in fatty acid synthase. So in these iterative assembly lines, effectively, only one module is employed over and over again. So you can have the same carrier protein and same catalytic domain used for multiple cycles of chain elongation. And that's what we saw in fatty acid synthase, where there are multiple cycles of addition of a C2 unit via the same domains. And so what we're going to see today is this type of iterative assembly is responsible for the synthesis of this molecule here. So first, just an overview of building blocks. And then we'll talk about why organisms want to make this molecule, and then focus on the biosynthetic logic and experiments. So this molecule, enterobactin, is produced from two monomers. So we have 2,3-dihydroxybenzoic acid, or DHB, and we have serine here. And there is a two-module assembly line responsible for the synthesis of this natural product. And that assembly line is shown here. So we see that there's three proteins, EntE, EntB, and EntF. We have an initiation module, elongation module, and this TE domain for termination. So overall, three separate proteins, two modules, and seven domains. So this NRPS is quite small. And this is an example of a non-ribosomal peptide that's produced by E. coli. So E. coli makes this molecule, as well as some other gram-negative bacteria. So this is iterative. We have three of each of these monomers, yet only two T domains here, so imagine one responsible for each. So before we get more into this biosynthetic logic, let's just take a moment to think about why this molecule is produced. So this is a case where we actually have very good understanding about why an organism is producing a natural product. And this actually gives a segue into JoAnne's section on metal homeostasis, which will come up after cholesterol after spring break. So many bacteria use non-ribosomal peptide synthesis machinery to make chelators in order to acquire iron. And that's because iron is an essential nutrient and it's actually quite scarce. So if you imagine an organism in the soil, maybe it needs to obtain iron from a rock.
Somehow it needs to get iron from our pool, and concentrations are very tightly regulated, and most iron is tightly bound. And we can also think about this from a standpoint of solubility, so simple Ksp-type things. We all know that iron(III), which is the predominant oxidation state in aerobic conditions, is very insoluble. So our cars rust up here in the Northeast because they sit outside on the road in the winter, and that's no good. So we can think about 10 to the minus 18 molar. And then if we think about free iron in human serum, for instance, the concentration is even lower because there's inherent toxicity associated with free iron. And you'll hear about that from JoAnne in more detail later. So these organisms have a predicament because for metabolism, they need iron on the order of micromolar concentrations. So how does some organism obtain micromolar iron when in environments where, say, that's 10 to the minus 24 molar? And there's a number of strategies that come up, but one of the strategies is the biosynthesis of non-ribosomal peptides that act as metal scavengers and metal chelators. And so I just show you two examples here. And we have enterobactin, which we're going to focus on today. And this is really just a wonderful molecule. Yersiniabactin-- and I put this up here, in part, because there were some questions about those cyclization domains in the bleomycin gene cluster, whose assembly line we looked at on Monday. And this is another example where cyclization of cysteine residues occurs in order to give the final natural product via those modified condensation domains here. So if we think about enterobactin for a moment longer, what happens, effectively, is that this molecule can bind iron(III) with high affinity. And the iron-bound form is shown here. So these aryl acids, these catechol groups, provide six oxygen donors to the iron center to get a structure like this. So in terms of the organism and production, what happens when these organisms are confronted with iron limitation? So essentially, they're starved for essential nutrients. They'll turn on biosynthesis. So they'll express the enterobactin synthetase, which will allow for production of enterobactin. So this is happening in the cytoplasm. So we have those three proteins that comprise the assembly line that use the DHB and L-serine to produce the natural product. And then in addition to that biosynthetic machinery, the organism needs to also express and use a whole bunch of transport machinery. So what happens is that this natural product is exported into the extracellular space. So this is a gram-negative organism, so it has an inner membrane and an outer membrane. And it's in the extracellular space that enterobactin will scavenge iron(III). So there's formation of the coordination complex, shown in cartoon here. And then there's a dedicated receptor on the outer membrane that will recognize that iron-bound form and bring that into the cell. And then through transport and through breakdown of the natural product, this iron can be released and then used. So iron is a co-factor of many types of proteins and enzymes here. So a whole lot is going on. We're going to focus on the biosynthetic part. And so in thinking about this, from the standpoint of a non-ribosomal peptide synthetase, what's something interesting? So in the examples we saw last time, we had the ACV tripeptide, the vancomycin synthetase. These assembly lines are only forming peptide bonds, so we saw formation of amide bonds.
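Going back to the solubility point for a second: the 10 to the minus 18 molar figure comes straight out of a solubility-product estimate. A minimal sketch, assuming a Ksp for Fe(OH)3 of about 10^-39 (an approximate literature value, not a number given in the lecture):

    # Rough solubility estimate for Fe(III) at neutral pH.
    # Fe(OH)3(s) <=> Fe3+ + 3 OH-,  Ksp = [Fe3+][OH-]^3
    ksp_feoh3 = 1e-39   # approximate order-of-magnitude literature value (assumed)
    oh = 1e-7           # [OH-] at pH 7

    fe3_free = ksp_feoh3 / oh**3
    print(f"[Fe3+] is roughly {fe3_free:.0e} M at pH 7")  # ~1e-18 M, as quoted in the lecture

    # So a microbe that needs micromolar iron cannot rely on free, soluble Fe3+;
    # it has to chelate it, which is where a siderophore like enterobactin comes in.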
If we take a look at enterobactin and we think about where the monomers come from, what do we see? So this has some C3 symmetry. And we can see that it's comprised of three of these DHB-serine monomers, so one, two, three. And effectively, there's formation of amide bonds between DHB and serine. So that's shown throughout here. But there's also ester linkages formed. So this ring here is often called a trilactone, or a macrolactone, and somehow, these three esters need to be formed. So how is the enterobactin synthetase doing this? So if we look at an overview of the enterobactin synthetase gene cluster, what do we learn? So the first point to make is that there are actually six proteins required. So you've seen three so far, in terms of the assembly line. So we have an A, B, C, D, E, and F. And A, B, and C are required for the biosynthesis of this aryl acid building block here, this DHB. And then this is a case, rather unusual, where the PPTase was identified, and we're going to talk about that more as we go through the experiments. So I just told you about using Sfp if you don't know what to do. This was the case where the researchers were able to identify the dedicated PPTase for the assembly line. So that's EntD. And then we have B, E, and F that provide an iterative assembly line that yields the natural product, as shown here. OK, so also just note that B is coming up twice. We're seeing it here and we're seeing it here. So that should bring up a question, what's going on with this enzyme? And then we'll address that as we move forward. So in terms of thinking about this synthetase, we'll do an overview and then look at the experiments. So we have an A domain, which is EntE. We have a protein here, EntB, that has a T domain and an ICL domain that we'll get back to. And then here we have EntF. And then we have our PPTase. So effectively, here, we can have our initiation. Here, we have elongation. And here, termination. So what is the overview, in terms of what happens for A domain activity, loading of the T domains, and peptide bond formation? So for the overview, we'll first consider getting a monomer onto EntB. So EntB has a T domain. And that has a serine. The serine needs to be modified by the PPTase EntD to give holo EntB. We put the Ppant arm on. And then what we'll see is that EntE is the A domain that's responsible for activating DHB and transferring that monomer to EntB. So then in terms of EntF and getting the T domain of EntF loaded, it's going to be loaded with L-serine. And so here, you have EntF, again, focusing on the T domain. Again, we have that action of EntD to give us the holo form with the Ppant arm. And then we see that, in this case, the A domain is within the same protein. So the A domain of EntF is going to activate L-serine and transfer that to the T domain. So we have the EntF A domain to get us here. So then what about peptide bond formation? So we see the C domain, the condensation domain, in EntF. And so what we can imagine is that we have our EntB loaded with the aryl acid monomer plus EntF loaded with L-serine. And what's going to happen? The C domain of EntF is going to catalyze formation of the amide bond here to give us EntB plus EntF, effectively, with DHB-serine attached. So this gives us some insight, just this overview, in terms of how the amide bond is formed, and it pretty much follows what we saw for the ACV tripeptide and vancomycin biosynthesis for the heptapeptide that forms its backbone.
So a question we have at this stage is, well, we see in that structure, in addition to these amides, there's also esters. How are those formed? And then what assays are needed? And so first, we're going to think about formation of the ester linkages, and then we'll launch into the experiments. So let's take a look at this assembly line. So we have EntE, the A domain; EntB, this didomain that has the T domain; and here's EntF. And we see in this cartoon, the T domains are already modified with the Ppant arm. And here is the serine residue of the TE domain that, ultimately, accepts the chain. So what happens? If we take a look, so we saw this on the board, EntB becomes loaded with dihydroxybenzoic acid. EntF becomes loaded with serine. The condensation domain catalyzes this formation of an amide bond between the two monomers. And then what happens? We see transfer of this DHB-serine unit to the TE domain here. And then we can imagine these two T domains being loaded with monomers again. And what happens? What do we think about this? Effectively, formation of one amide bond, transfer to the TE domain, formation of another amide bond. And look. The second moiety here is transferred to the TE domain, to the initial monomer, via this ester linkage. This is really unusual behavior for a TE domain. And what happens again? We see this happen again, so we get this linear trimer of enterobactin, effectively. And then what happens? Chain release to form the macrolactone here. We have this group that can come around here. So what is the hypothesis? The hypothesis that was put forth by the researchers is that in this assembly line, effectively, this thioesterase is serving as a waiting room. And it's allowing these DHB-serine monomers to wait around and remain attached, such that these esters can be formed. And somehow, it senses this appropriate size, this linear trimer, and then catalyzes chain release, as shown here. AUDIENCE: Does it mess up? ELIZABETH NOLAN: Does it mess up? AUDIENCE: Yeah, does it always give you a 3 under [INAUDIBLE] circle? ELIZABETH NOLAN: Yes, to the best of my knowledge. What's very interesting is that-- so this was worked out by Chris Walsh's group. Recently, Alison Butler's lab at Santa Barbara has discovered an enterobactin analog that has an additional unit in it here. So it looks like there's other thioesterases around that serve as waiting rooms and can accommodate different ring sizes. But this one will just give this size. And that's a very interesting question, just in terms of, how is this thioesterase doing that? We need more structural understanding to get at that. In addition, these are just some overviews that I've put in the notes, other depictions of this process and the waiting room hypothesis from the literature. So we're going to look at the experiments that were done to study this. And I really, one, like enterobactin, so I got excited about this molecule as an undergraduate, actually. But beyond that, why I like to present on this system, in terms of experiments, is that many firsts came from it, and it really serves as a paradigm for many, many other studies. So if we just consider the various firsts that came from the studies of the enterobactin synthetase: one, it was the first siderophore synthetase to be studied, and there's hundreds of siderophores out there and many have been investigated since this one. It's the first example of a siderophore synthetase to be characterized in terms of the Ppant arms.
This was the first identification of a dedicated PPTase for one of these assembly lines. And the first identification of a thioesterase domain that has this behavior of forming this cyclo-oligomer. And the first identification of an aryl carrier protein, so this T domain that carries DHB here. And in terms of the experiments we'll go through, these experiments that were devised in this system have been generalized across many, many assembly lines, and the methods are still routinely used today. But a major difference I want to point out is that today, we have so many microbial genomes sequenced that a lot of work is driven by bioinformatics here, in terms of identifying that NRPS. So often, the gene cluster may be identified well before the natural product is ever isolated. And this is a case where the natural product isolation occurred first, so that was done well, well before here. And this is a case where we know how to get the organism to produce this natural product. You starve the organism of iron and it will start to make it. In many instances, for other natural products produced by these assembly lines, we don't know how to get the organism to actually make the molecule in a laboratory setting there. So there's some interesting work being done about that. There's some recent work out of Northeastern, actually trying to grow organisms in soil-like environments and seeing what they can be provoked to produce. If you're curious, I'm happy to give you references. OK, so where are we going to start, in terms of characterization of this synthetase here? We're, more or less, going to follow the logic outlined up here for this. So here's the cartoon of the players. And the first order of business is that it's necessary to characterize the adenylation domains. So we need to ask, what monomers are selected and activated? And we have two adenylation domains to consider, so EntE and the A domain of EntF. So what was done? For EntE, where we'll start, this protein was purified from E. coli and characterized here. And so how was it characterized? It was characterized by ATP-PPi exchange, like what we saw for the aminoacyl-tRNA synthetase characterization. And so what was observed is that when EntE was combined with dihydroxybenzoic acid, ATP, and radiolabeled PPi, there was incorporation of the 32P label into ATP. So that indicates formation of this adenylate intermediate. And that resulted in the conclusion that EntE is the A domain that activates this aryl acid monomer, so this chemistry, which should be very familiar at this stage based on our discussions in the translation unit. So what about EntF and its A domain? So again, we're working with E. coli proteins. EntF was purified from E. coli. And again, this ATP-PPi assay was performed. And so in this case, what was observed is that when EntF was incubated with L-serine, ATP, and radiolabeled PPi, there was incorporation of the radiolabel into ATP, which indicates that the A domain of EntF is responsible for the activation of L-serine. And so you can imagine in each set of experiments, the researchers also tried the other monomer, and in the case of EntF, would have seen no ATP-PPi exchange with DHB. And likewise, for EntE, if they tried with L-serine, there would be no exchange. You'd want to see that, in terms of making a robust conclusion here for that. So that's good. Now, the next question is, we need to get these monomers to these T domains here. And so that's the next step, is to study the T domains.
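Just to sketch what the readout of that exchange assay looks like in practice: the radioactivity that ends up in ATP is recovered (classically by adsorbing the nucleotide onto charcoal and washing away the free PPi) and counted, and the counts are converted back to an amount of ATP through the specific activity of the labeled PPi. A minimal example with invented numbers:

    # Illustrative conversion of counts into ATP formed in an ATP-[32P]PPi exchange assay.
    # All numbers here are invented for illustration, not data from the paper.
    cpm_in_atp = 12000.0        # counts/min recovered in the ATP fraction
    counting_efficiency = 0.9   # fraction of decays actually detected (assumed)
    specific_activity = 5000.0  # dpm per pmol of [32P]PPi in the assay (assumed)

    dpm_in_atp = cpm_in_atp / counting_efficiency
    pmol_atp_labeled = dpm_in_atp / specific_activity
    print(f"about {pmol_atp_labeled:.1f} pmol of ATP carries the 32P label")

    # Repeating this over time and substrate concentrations gives exchange rates;
    # exchange with DHB but not with L-serine is what assigns EntE as the DHB-activating A domain.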
And something you all need to appreciate about the time of this work: there wasn't a whole lot known about PPTases. There wasn't Sfp that you could borrow from your lab mate, or maybe you've expressed 100 milligrams for yourself, and you could get that Ppant arm on here. And so JoAnne may want to elaborate, but there was a lot of effort to try to figure out what was going on here. JOANNE STUBBE: And graduate students had no thesis. Because they couldn't get any activity of any of the enterobactin genes. ELIZABETH NOLAN: Yeah. JOANNE STUBBE: Until it was discovered what was going on. ELIZABETH NOLAN: Right. So this was a major, major effort, undertaking, and discovery here. And so they couldn't find activity, and that's because these two domains needed to be modified and they weren't modified. But some little detective work here. So in the analysis of purified EntF, what analysis of this purified protein had revealed, in some instances, was substoichiometric phosphopantetheine. And so is that a contamination or is that meaningful? In this case, it was a meaningful observation that, when chased, proved to be very helpful. It suggested that maybe there's a T domain here that's modified. That's something we can infer from this. So if this Ppant arm is attached to EntF, how does it get there? And if we rewind and think about what was going on at the time, it was only shortly before that the PPTase for the acyl carrier protein of fatty acid synthase was discovered. So that might beg the question, is it possible that this enzyme also modifies EntF, if you don't know much about its substrate scope? And so that hypothesis was tested and it didn't pan out. So if EntF was incubated with ACPS from fatty acid biosynthesis and CoASH, there was no product formation. There was no transfer of the Ppant arm to here. Yeah? AUDIENCE: Was it obvious the fatty acid synthesis-- was there [INAUDIBLE] synthesis at the time, or did it have a name? ELIZABETH NOLAN: I don't think it had a name, but I defer to JoAnne, who was on the thesis committee, because this is really the first one. AUDIENCE: Were the analogs of mercury obvious at the time? ELIZABETH NOLAN: No, it's really discovery work at this stage. The question is, is there a possible lead from somewhere? And if you try it, what will happen? And really, there was no clue as to what this modification was or the design involved. But if you see an enzyme with activity in one system, maybe it will be active with another. Maybe not. And in this case, it didn't work, but it was something certainly worthwhile to try. So then what was done? So there was a search for another PPTase, and this was done using BLAST. And what BLAST, this bioinformatics, revealed was the identification of that enzyme EntD. So here, we have this EntD, the PPTase here. And so EntD was overexpressed and purified. And so in this case, a His-tag was used, with affinity column purification. And the question is, what happens if we incubate EntD with EntF and CoASH? And so in these experiments, radiolabeled CoASH was used, and radiolabels are commonly used to look for transfer of either Ppant arms or monomers, as we'll see as we go forward, to these domains. And so the question is, will we see transfer of the radiolabel to EntF in the presence of EntD and CoASH? And so here are the results from the experiment. So we have formation of holo EntF, as monitored by the radiolabel, versus time. And so what's done, the reaction is run for a given time point.
The reaction is quenched with acid to precipitate the proteins. And then you can imagine measuring radioactivity in the pellet. CoASH will remain in the soluble fraction and the protein in the pellet here. You can imagine control assays without EntD included there. And so what do we see? So as I said before, we tried the acyl carrier protein synthase, ACPS, from fatty acid synthesis. There's no reaction. But look. When EntD is present, we see transfer of this Ppant arm to the protein here. So this was a really exciting result at the time. We have a new enzyme, a new activity, this post-translational modification there. And this opens the door to further studies. If you can get the Ppant arm on, then we can look at loading the monomers here. So what's the next step? We have EntF loaded. We're also going to want to try to load EntD-- EntB, excuse me-- here. But of course, you need to know some more about EntB. And so let's think about that. I'll also note, just noted here, the next step is to look at loading of L-serine onto this moiety here, as drawn. And you can think about how to do that experimentally. So what about EntB? This was another mystery, in terms of experimental work and exploration. And so initially, EntB was purified and characterized for its activity that led towards the biosynthesis of the DHB monomer. So this ICL domain is involved in the series of reactions that give DHB. On a historical note, it was thought there was another protein. And this protein was named EntG and was thought to be required for enterobactin biosynthesis. And EntG would be the T domain that is for the aryl acid. So effectively, it would be this T domain, or aryl carrier protein, for dihydroxybenzoic acid. But the problem was they couldn't find a gene for EntG. And so as it turned out, what more detective work showed is that this EntG is actually just the C-terminus of EntB here. So they realized that EntB has another role, another function, and that in addition to having this function in the synthesis of dihydroxybenzoic acid, because of this domain at the C-terminus, it's also the carrier protein for this monomer here. So how was this sorted out here? What we can do is just take a look at a sequence alignment. And so this is from one of the papers about all of these explorations. And effectively, what we're taking a look at are known [INAUDIBLE] phosphopantetheinylation sites in the proteins. So something is known about fatty acid synthesis and some other carrier proteins here from different organisms. And so effectively, if we just look at this region of the alignment, we see this serine residue with an [INAUDIBLE]. And this happens to be serine 245 of EntB. So this led to the hypothesis that maybe this serine 245, towards the C-terminus of EntB, is the site where the Ppant arm is attached here. And so this means that some experiments are needed to show that EntB has this carrier protein, or T domain, and that it can be modified with the Ppant arm. And it was predicted EntD would do this. And also, that once modified with the Ppant arm, the aryl acid can be transferred to it. So if we just think about EntB for a minute: so we have the N-terminal domain. Here's the C-terminal domain. Here's the ICL domain. Here's the T domain for an aryl carrier protein. So amino acids 1 to 285. This is 188. It's not quite drawn to scale. So we have serine 245 around here, which is the serine of interest for post-translational modification with the Ppant arm.
And so what was done is that assays were performed, where EntB was incubated with EntD and radiolabeled CoASH, like what we saw for the studies of EntF. But they made a few additional constructs. So they considered full-length EntB, as shown here. They considered an EntB variant where the C-terminal 25 amino acids were deleted. So you can imagine, they just put a stop codon in and leave off the last 25 amino acids. So the serine is still there, but a bunch of the C-terminal residues aren't there. And then they also considered a variant of EntB where they deleted this entire N-terminal domain. And so the question is, what are the requirements? Well, one, does this reaction work? Does EntD modify EntB with the Ppant arm? And then if yes, what are the requirements? So is the N-terminal domain needed? Are these C-terminal residues needed? And so these are the gels that come from this experiment. And so what we're looking at on top are total proteins, so Coomassie staining. And on the bottom, we're looking at radioactivity. And the idea here is that we want to track the radiolabels. So in lane 1, we have assays with full-length EntB. In lane 2, with the C-terminal truncation. And in lane 3, deletion of the N-terminal domain. OK, so the question is, what do we see from these data here? And so these give us a sense as to where the individual proteins run on the gel. And here, we're looking at the radioactivity. So what do we see? In lane 1, you see a huge blob of radioactivity. This isn't the most beautiful gel, actually. Nevertheless, there's much to learn. So what do we see? We see radioactivity associated with EntB. That's really good news. We see transfer of this radiolabeled Ppant arm. What about lane 3? So what do we see there? AUDIENCE: Also a lot of radioactivity. ELIZABETH NOLAN: Right. We have a lot of radioactivity. We're looking at the construct that only has this C-terminal domain. So what does that tell us? AUDIENCE: It's shorter. That's why it moved down the gel further. ELIZABETH NOLAN: Right. So that's why it has a different migration on the gel. But in terms of seeing the radioactivity, what did we learn? Is this region of the protein essential or dispensable? We don't need this N-terminal domain in order for EntD to modify EntB. So we're seeing that. What about in the middle? AUDIENCE: The deleted region is important [INAUDIBLE] ELIZABETH NOLAN: Right. We see very little radioactivity here. Basically, almost nothing, especially compared to these spots. So deletion of those C-terminal amino acids is detrimental, and so that region is important. So maybe there's protein-protein interaction going on, or something with conformation that's important. So from here, we learn that EntD transfers the Ppant arm to EntB. The ICL domain is not essential for this, but this C-terminal region is. So now what? We've got the Ppant arm on here via the action of EntD. Can we get attachment of the monomer? And so our hypothesis is that EntE, which we saw activates DHB to form the adenylate, will also transfer this moiety to EntB. So in this case, what was done, again, we're looking at use of a radiolabel. In this case, the radiolabel is on the DHB. And this is an important point. In order to see this, we cannot have a radiolabeled Ppant arm, in this case, because that would give you a big background. So they're going to prepare EntB with the Ppant arm unlabeled. We know that will work based on the prior study. And now, we repeat that with unlabeled CoASH.
And then ask, if we incubate holo EntB with EntE, ATP, and radiolabeled DHB, do we see transfer of the radiolabel to this protein here? JOANNE STUBBE: Let me ask a question. This will be a question during class. Can you do this experiment with tritiated CoA and C14-labeled serine, based on what you know about radioactivity? We actually discussed a similar situation. AUDIENCE: Would it last longer [INAUDIBLE] JOANNE STUBBE: Did you go back and look at the lifetimes? Is it infinite compared to any experiments? So it's not lifetimes. Do you have any ideas? AUDIENCE: I mean so tritium, the energy, the particle released, is much lower than the energy of C14. JOANNE STUBBE: Right. So the difference is in the energies. We talked about this. You can tune the scintillation counter. So you can measure tritium and C14. So people that do enzymology for a living often use tritium and C14 at the same time. And you can quantitate, if you do your experiments right. It's a very powerful tool together, actually. ELIZABETH NOLAN: So another option, the non-simplistic approach. So basically here, if we're looking at the four lanes, again, we're looking at total protein. We're looking at radioactivity, and can consider the overall reaction, and then a variety of controls. OK, and I want to move forward to get through the rest of the slides and we just have a few more minutes, but you should work through this gel and convince yourself that there is transfer of this radiolabel in the presence of EntE. And this was done with holo EntB carrying an unlabeled Ppant arm. OK, so what about peptide bond formation? We have the T domains loaded with the monomer. Can we see activity from the C domain, in terms of the formation of an amide bond? And so this experiment requires a lot of components. So what is the experiment? To look at whether or not EntF catalyzes condensation. Basically, we can incubate EntE, holo EntB, holo EntF, ATP, and these monomers. And what we want to do, in this case, is look at transfer of radiolabeled DHB to serine-loaded EntF. And I guess I got a little ahead of myself on the prior slide. So this is a case where if you add C14 labels in both of your monomers, you'd have a big problem. So key here is to use unlabeled serine and radiolabeled DHB so you're not getting a big background. And an important point to make here in these experiments is that we're looking for detection of a covalent intermediate. So effectively, having this guy here attached to EntF. And so the radiolabel is here. So that's what we're looking for, not the final product, and that's indicated by having the gels. So what do we see? We have the total protein and then the autoradiograph. And so we have holo EntF, EntE, holo EntB. And the question is, where do we see radiolabels transfer? And if we look at lane 3, where we have the proteins, ATP, serine, and radiolabeled DHB, what we observe is that there is some radioactivity here, which is indicative of a covalent intermediate. And again, you should work through these gels and work through the different conditions and make sure it makes sense to you what's seen in each one. So finally, at that stage, the activities for all of these different domains have been found. And so the question is, in the test tube, can we actually get enterobactin biosynthesized, which is going to rely on this TE domain. So the idea is if we incubate everything together, similar to what was done before, can we detect the actual small molecule, rather than this intermediate attached to EntF here?
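On JoAnne's dual-label point, the bookkeeping looks like this: because the tritium beta particle is much lower in energy than the C14 beta, you set two energy windows on the scintillation counter, calibrate each isotope's counting efficiency in each window with standards, and then solve two equations in two unknowns for the two dpm values. A minimal sketch, with efficiencies I've assumed for illustration:

    # Dual-label scintillation counting: recover 3H and 14C dpm from two energy windows.
    # The efficiency values are assumed for illustration; in practice they come from standards.
    import numpy as np

    # rows = counting windows (low energy, high energy); columns = isotopes (3H, 14C)
    efficiency = np.array([
        [0.35, 0.10],   # window A: counts 3H reasonably well, picks up some 14C spillover
        [0.00, 0.60],   # window B: essentially no 3H, counts 14C well
    ])
    observed_cpm = np.array([4200.0, 9000.0])   # cpm in windows A and B (invented numbers)

    dpm_3h, dpm_14c = np.linalg.solve(efficiency, observed_cpm)
    print(f"3H:  {dpm_3h:.0f} dpm")
    print(f"14C: {dpm_14c:.0f} dpm")

Back to the question of detecting enterobactin itself.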
And so the way this was done was by monitoring the reaction by HPLC using reverse-phase chromatography. And so here, we have all of the reaction components. Here, we see standards. So here is enterobactin; this is the linear trimer, the linear dimer, a monomer. Here's the DHB substrate. And the question is, over time, do we see enterobactin formed? So you can imagine quenching the reaction, taking the soluble component, which should have this small molecule, and looking at it by HPLC. And where you should just focus at the moment is here. So the enterobactin peak, what we see is that over time, there's growth in this peak. You can imagine doing something like LC-MS analysis to confirm it is the species you expect here. So this is full in vitro reconstitution of a non-ribosomal peptide synthetase in the test tube, and it really paved the way for many, many additional experiments related to these types of biosynthetic machines. And so with that, we're going to close this module, and I hope you all have a great spring break. And I leave you in the good hands of JoAnne starting the 28th here for lecture.
MIT_508J_Biological_Chemistry_II_Spring_2016
R9_Cholesterol_Homeostasis_and_Sensing.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JOANNE STUBBE: This is the second recitation on cholesterol, and it's really focused on this question of how do you sense cholesterol in a membrane? So that's really a tough problem. And they've developed new tools, and that's what we're going to be talking about-- what the tools are, and whether you would think they were adequate to be able to address this question about what kinds of changes in concentration of cholesterol. Number one, can you measure them? And number two, what effects do they have, in terms of whether you're going to turn on cholesterol biosynthesis and uptake, because you need more cholesterol, or you're going to turn the whole thing off? So we've been focusing, as we've described in the last few lectures, in the endoplasmic reticulum. And what would the cholesterol-- what kinds of changes in cholesterols did they see in the experiments they were doing in this paper? What were the range of changes that they saw? AUDIENCE: 3% to 10%? JOANNE STUBBE: Yeah, so see, something low. Say they were trying to do this same experiment in the plasma membrane-- how do we know it's the ER membrane that does this sensing? That's what the whole paper is focused on, that's what everything we've focused on in class. Say you wanted to do a similar kind of experiment in the plasma membrane, do you remember what I said about the levels of cholesterol? So they distributed throughout the cell, in all membranes. Where is the most cholesterol? So if you don't remember, it's the plasma membrane. So say, instead of having 7% or 8% of the lipids cholesterol, say you had 40%-- that's an over-exaggeration-- do you think this kind of an experiment would be hard to do, that they've talked about in this paper? So you would want to do this-- if you tried to do the same experiment with the plasma membrane? So the key issue that you need to think about, is go back and look at the changes-- they did a whole bunch of different experiments. The numbers are squishy, but they came up with numbers that reproduced themselves, I thought, in an amazing way. But now say you wanted to do this in the plasma membrane, where the levels of cholesterol are much higher. Do you think it would be easy to do? Using the same tech techniques that are described, that we're going to discuss, or not? And what would the issues be? Yeah? AUDIENCE: So they had to deplete the cholesterol from the membrane, and so that would probably be hard to deplete it to a level that's low enough, so that you don't get the activity. Right? JOANNE STUBBE: So, I don't know. So that's an interesting question. So you'd have to deplete-- so that's going to be it, we're going to have to control the cholesterol levels. But what change-- if you looked at the changes in levels of cholesterol in the ER, how much did they change? They change from what to what? From-- 2% to 7%. Say that you were in that same range of change that was going to turn on a switch in the plasma membrane. And say you could control the levels. Do you think it would be easy to see that? So you start with 40%, say, that's the norm. Say the change was very similar to what you see in the change in the ER-- do you think that would be easy to detect? 
No, because now you have two big numbers, and there's a huge amount of error in this method of analysis. So those are the kinds of things I'm trying to get you to think about. I don't know why it's the ER-- I mean, everybody's focused on the ER. Could cholesterol in other organelles have a different regulatory mechanism? Or somehow be connected, still, to what's going on in the ER? Could be-- I mean, you start out with the simplest model you can get and you test it, but then as you learn more, or we have more and more technology, we learn new things, you go back and you revisit and rethink about what's going on. So the key question is, it's really this switch of having cholesterol that keeps it in the membrane, or not having cholesterol. And the question is, what are the differences in the levels that allow turn-on of cholesterol biosynthesis and LDL receptor biosynthesis, which then allows uptake of cholesterol from the diet? OK, so that's the question. And what does this look like? And people hadn't measured this by any method, and this model I've gone through a number of times in class, so I'm not going to go through it again today. Hopefully you all know that in some form in your head, or you have the picture in front of you so you can remember it. So these are the questions I want to pose, and I want you guys to do the talking today. And what I'm going to do is, I have most of the figures on my PowerPoint, so we can bring them up and look at them. And you can tell me what you see. And then everybody might be seeing something different-- and so we're thinking about this differently, and maybe we come to some kind of consensus about whether these experiments were carried out well or not. So one of the first things-- so these will be the general things, and then we'll step through them. But they wanted to perturb the cellular cholesterol levels. And how did they end up doing that? Did that make sense? We talked a little bit about this already. I mean, what did they use as tools to do that? AUDIENCE: [INAUDIBLE] JOANNE STUBBE: So you need to speak louder, because I really am deaf. Sorry. AUDIENCE: So just, right here, they were careful of the amount of cholesterol present in this? JOANNE STUBBE: So that's one place, so they can deplete cholesterol from the media. But then what did they do? So the whole paper is about this-- how did they control the [INAUDIBLE]? Let's assume that they can do that, and they got good at that. I think a lot of people have used that method, and so they can deplete the media. So how did they deplete cholesterol? There were some unusual ways to deplete cholesterol in this paper. Did any of you pick up on that? AUDIENCE: A chemical that could bind to cholesterol. JOANNE STUBBE: So did you think that was unusual? Did any of you look up what that was? AUDIENCE: It was a kind of carbohydrate that can bind to cholesterol. JOANNE STUBBE: Yeah, but what was interesting about it, it was hydroxypropyl-- remember, HP-- cyclodextrin. We're going to look at this in a minute. But what do we know-- what was the other molecule they used to add cholesterol back? AUDIENCE: Another form of that molecule is-- JOANNE STUBBE: So methyl-cyclodextrin-- I'm going to show you the structure, but they aren't very different. So have any of you ever heard of cyclodextrin before? People won the Nobel Prize for that, Don Cram won it, Breslow spent his whole life studying host-guest interactions.
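To make the "two big numbers" point concrete with a little arithmetic (the baselines are roughly the ones being discussed; the assumption that the lipid analysis is good to about 10% of the measured value is mine, just for illustration):

    # Why the same absolute change in cholesterol is easy to see against an ER baseline
    # and hard to see against a plasma-membrane baseline.
    relative_error = 0.10   # assume the analysis is good to ~10% of the measured value

    for membrane, baseline, stimulated in [("ER", 2.0, 7.0), ("plasma membrane", 40.0, 45.0)]:
        change = stimulated - baseline                      # mol% cholesterol
        noise = relative_error * max(baseline, stimulated)  # uncertainty on the bigger number
        print(f"{membrane}: {baseline:.0f} -> {stimulated:.0f} mol%, "
              f"a {change / baseline * 100:.0f}% relative change, "
              f"signal-to-noise ~ {change / noise:.1f}")

With those assumptions, the five-point swing is a 250% relative change in the ER but barely clears the noise on a 40% plasma-membrane background.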
So you guys, I don't know what they teach you now anymore, but that used to be something that was taught a lot, host guest interactions, trying to understand weak non-covalent interactions as the basis for understanding catalysis. But to me, that was-- immediately when I saw this, what the heck's going on? So then I Googled it, and immediately-- and I don't know anything about hydroxypropyl-- you Google it, you look it up. And then you look at it, and if you were a chemist and you were really interested in the molecular interactions, you might make a model of it. And then see what the difference is between that one little group-- when you look at the structure, it's amazing. And that's the basis of most of the experiments. So you need to believe that they figured that out. And that's not in this paper, so if you really cared about it you would have to go back and read earlier papers, and see what are the experiments that led them to focus on these molecules? How else did they end up getting cholesterol levels back into the cell? Do you remember what the other method was? So we'll come back and we'll talk about this in a minute-- so that was one of the methods. AUDIENCE: They added two kinds of sterols. JOANNE STUBBE: OK, so they did add two kinds of sterols-- and they tried to figure out, this is another unknown, what was the difference between the sterols? Simply a hydroxyl group. OK, so if you looked at this, cholesterol is this guy. And then they had something like this guy-- 25, and remember where [INAUDIBLE] the side chain, hanging out of the little [? cheer ?] system you have. I don't think they learned very much from that. And in fact, in your problem set, you had all of these different cholesterol analogs. I mean, I think we still really don't get it. That's complicated-- we talked about this in class. You have these transmembrane helices-- what is it that's actually the signaling agent? So people are still asking that question, and we haven't quite gotten that far. But if you've read the reading, for HMG CoA reductase degradation, which is what we're going to be talking about in class, the signaler is not cholesterol, it's lanosterol. OK, and where have you seen lanosterol? The biosynthetic pathway has lanosterol sitting in the middle. It's not all that different, structurally, from cholesterol. You need to go back in-- they all have four fused rings, they have different extra methyl groups. So people are trying to sort that out. I don't think we really know. But how well? So you're right, they use sterols. They didn't use that, they didn't see very much difference with the sterols. What was the other way, which is sort of unusual, that they added cholesterol back into the system? So they could add it back with the methyl cyclodextrin-- they told you that that worked, and if you believe that-- and you look at the data-- it looked like that was happening. Nobody remembers? OK, well, we'll get to that in a little bit. OK, so the question we're focusing on is what are the changes in concentrations of cholesterol in the ER? So what method did they use to try to separate the ER membranes from all the other membranes? AUDIENCE: They first separated the [INAUDIBLE]-- JOANNE STUBBE: They separated the what? AUDIENCE: The sterols and the nucleus in the [INAUDIBLE]. JOANNE STUBBE: OK, so that's good. You can separate out the nucleus, and you could do that by ultracentrifugation-- we've seen that used in different kinds of ultracentrifugation.
We've seen the different particles, the lipoproteins in the diet, how do we separate those? We talked about that in class briefly, you haven't had any papers to read. But what was the method of separation? If you look at all those particles-- remember we had a little cartoon of all the particles, and we focused on LDL, which is the particle that has the most cholesterol. So that's why everybody is focusing on that. What was the basis of the separation? AUDIENCE: Was it a sucrose gradient? JOANNE STUBBE: Was the what? AUDIENCE: Was it a sucrose gradient-- the ultracentrifugation? JOANNE STUBBE: You need to-- AUDIENCE: Did they use a sucrose gradient, like ultracentrifugation? JOANNE STUBBE: Yeah, ultracentrifugation. But how did the-- AUDIENCE: For the sucrose gradient? JOANNE STUBBE: Yeah, OK, so they have different density gradients. OK, so that's going to be a key thing, and that's because if you look at the composition, they have different amounts of proteins, different amounts of fats. And they have different-- they float differently. So that's the method that they're going to use here. Is that a good method? Can you think of a better method? So in order to understand the switch for cholesterol, you've got to be able to measure the changes in cholesterol. Not an easy problem, because cholesterol is really insoluble in everything. And so how much is really in there, and how does it change under different sets of conditions? So is this a good method? What do you think? We'll look at the method in a little more detail, when I pull up the figures, but what did you think when you read the paper? AUDIENCE: Seems a pretty good method, other than that it relies on the properties of those membranes being slightly different from, say, the Golgi bodies and the ER. So it's like the only one I can think of. JOANNE STUBBE: Yeah, so the question is, could you separate them? Even separating the nucleus from the cytosol is not so trivial. But these methods are really gross methods, and during the centrifugation, things diffuse. So if you're having close separations, it's equilibrating down this thing. And so you're getting your proteins, or your lipids, are spreading out. Is there anything else-- do any of you have experience with insoluble-- this is what we're dealing with, is an insoluble mess, and how do you separate things in a way that you have control over it so that you can address the key questions in this paper? Nobody thought about anything else? Did you like this method? Were you convinced by the data? AUDIENCE: I mean, like I couldn't necessarily think of something better. I don't know, I guess the thing that sketches me out the most about it is just how-- I'm not really familiar with the method. I haven't done this myself, so I don't know how that process affects the membrane integrity. JOANNE STUBBE: So that's an incredibly important question, because lipids can fuse. They can mix. The question is, what are the rate constants for all of that? And we don't really teach very much in the introductory courses about lipids, and their partitioning between other membranes, and fusion, and all that stuff. But if you think about it, that's what the cell is, right? How do you get a plasma membrane, and all these membranes around all these little organelles-- that's an amazing observation. And we've seen in class already, what have we seen to get LDL receptor from here to the plasma membrane? How do we have to do that? We had to use these little vesicles.
So you're generating something over here, it goes through the Golgi stack. Again, another set of membranes has got to come out of the different levels of the Golgi stack. And then it's still got to get into the plasma membrane, and fuse, and dump its cargo. So I think it's an amazing process. And people interested in evolution, this is one of the major things people are focused on is, how can you make cells, little fake cells, artificial cells, that can replicate themselves. You can make it, and they're going to have to divide and fuse. And it's exactly the same problem here. And so this question of fluidity is an extremely important question. And a lot of people that focus on lipids-- which is not a popular thing to study, because it's so hard-- it's incredibly important. And people that look at membrane proteins, they almost always have lipids on them. And when you do them yourself, you have a detergent, which is not a real lipid-- does that change the property? So all of these questions, I think, are really central to what happens in the membranes, which is a lot of stuff inside the cell. So I think it's good to question what they did. I think their results turned out to be quite interesting. But we'll come back-- I think that was a hard problem. And so we'll come back and we'll look at this. And so then, let's say that we could end up separating things. Then the question is, what was the key type of measurement they made, where they could correlate the changes in cholesterol levels-- we talked about, you can control perhaps the cholesterol levels with the cyclodextrin. But then, how did they correlate the changes in the cholesterol levels in the membrane with this transcriptional regulation? Which, that is what happens with the sterol regulatory element-binding protein, the transcription factor. So what happens in that process? What are the changes in SREBP, dependent on the concentrations of the cholesterol? And how did they take advantage of that in answering this question about what the cholesterol levels were that allowed you to turn on transcription of the LDL receptor and HMG CoA reductase? So what's the major assay? We'll look at that, as well. So if you go back and you look at the model, what happens in this model? All right, here we go-- what happens in this model? What's happening to SREBP? AUDIENCE: It has completely changed and exposed [INAUDIBLE]. JOANNE STUBBE: No, that's SCAP-- SCAP, that's this guy. OK? So SCAP, that's a key player. That's what we talked about. I know the names are all confusing. You're going to need to write these down to remember. The names are very confusing. Yeah? AUDIENCE: So the SCAP-SREBP, whatever you call it, complex moves to the Golgi apparatus, then part of it's cleaved and moves to the nucleus? JOANNE STUBBE: Right, so how could you take advantage of that? This is the key observation that they're taking advantage of, to ask the question-- since this whole process is dependent on the concentration of cholesterol. If you have high cholesterol, there's no way you want this to happen-- you want to shut it off. If you have low cholesterol, you want to turn these guys on. So this movement is the key. And what do we see, if we look at what happens to this protein, SREBP, what happens to it during this process? It gets cleaved. And how could you monitor that cleavage? How do they do it in the paper? AUDIENCE: They used a-- was it a fluorophore-- or is that the homework? JOANNE STUBBE: They could use a fluorophore, they didn't do that.
They did a what? AUDIENCE: They were able to separate the [INAUDIBLE] gel? JOANNE STUBBE: So it can be operated by a gel. So to me, this is quite an easy assay. Because if you look at this-- I don't remember what the molecular weight is, but it's a lot smaller over here. And so, that turns out to be a great assay. So that part of their analysis, I think, was a really smart part of the analysis. And so then the question becomes, can you quantitate all of this? So if you have a lot of cholesterol, this doesn't happen. And so everything is bigger, and resides in the membrane. You could even probably look at that. Whereas, when the cholesterol is really lower, things go there. And it's everything in between. The question is, what is the concept-- can you measure if you have X% cholesterol in the ER, how much do you have to decrease it to see a change or a switch in where this protein goes? So I think the experimental design is actually amazingly creative. But then you see the data of the other side. And what I want to do now is focus on what the issues are. So we're going to come back and look at, how did they look at SREBP? So you could look at this a number of ways-- you could look at this by protein gel directly. How else do people look at proteins using westerns? What's a western? Anybody know what a western analysis is? Didn't I ask you that at the beginning of class? How else do you detect proteins? You've seen this in the first half of the semester a lot. Yeah, antibodies. So if you have antibodies to this-- and we'll talk about this, because the western analysis, which people use all the time, and there are so many issues with it, that I think I want you to think about what the issues are. And then you correlate the two-- changing the levels of cholesterol. Which they measure by mass spec after separation and purification of lipids, and the cleavage. And they plot the data, and that's where they got the analysis from. So the first thing that you want to do-- the first thing, and the key to everything, is separation of the membranes. And so, this is a cartoon of when you put something, you load something on the top, and you have a gradient, and the gradient could be made of a number of things. Have any of you ever run these kinds of gradients? OK, so you can make them out of glycerol, you can make them out of sucrose-- did anybody look at how these gradients were made? Did you read the experimental carefully enough to look at that? Yeah, how do you make a sucrose gradient? You have no idea? But yeah, so layering. So what you really like to do is have a continuous gradient, or something. But sucrose is incredibly viscous. So if you were trying to make a linear gradient, which you could do by mixing two things of different concentrations-- if you could get them to stir really well, and then add it in, and you could generate a gradient. But it's so hard to do, that what happens is they end up layering it. So they make X%, Y%, Z%, they put it down. And then they try to layer something on top of it. And then they put whatever the interest in at the top, and then they centrifuge it. So what are the issues? Do you think this is what the gradient would look like? So what are the issues when you're doing this, when you layer it? And this is why the data-- which we'll talk about in a minute-- is the data, or part of the issue is this method. That's why you need to think about the method. And there are better ways to do this. And it really depends on what you're trying to separate. 
So if this band-- say these were two bands, you wouldn't really get very much separation at all. If there were two separate things that sedimented under these conditions very close together. So what would happen when you're sedimenting this? Does anybody have any idea how long it takes? Do you think you'd do this in a centrifuge, you spin it for three minutes, and then-- so sometimes you sediment these things for 16, 20 hours. So what happens during the sedimentation? That might make this more challenging, in terms of separating what you want to separate? AUDIENCE: I'm not sure, but it [INAUDIBLE] diffusion. JOANNE STUBBE: Yeah, so exactly, you have diffusion. And even when you've layered things on top of each other like that, you start to have diffusion. And if you shake up the tube a little bit, it's all over. So how you prepare these things is not-- so people still use these methods, but I would like to see better methods. And so they tried one method with sucrose, and then that wasn't good enough. We'll look at the data. So they went to a second method. And where did they come up with this? I have no idea where they came up with this, but there was an MD-PhD student in our class who had seen this in one of his classes, and they use it in some blood test. So I think that's probably where these guys got it from, because Brown and Goldstein are both MDs. But again, it's just another way to make a gradient. And I'm not sure why this gradient works as effectively as it does. But the first gradient didn't work so great, and we'll look at that data. So then they added on a few more steps, because they weren't happy with the level of separation. So looking at membranes, I think this is going to be more and more looking at membranes, because membranes, you have two leaflets-- and the lipids in the two leaflets are different. Do you think that affects the biology? I guarantee you it affects the biology in ways that we would really like to understand, that I don't think we understand very well. When you isolate a membrane protein, have any of you ever isolated a membrane protein? So you have an insoluble-- it's in this lipid system. How do you think you get it out, so you can go through the steps, a protein purification that you've talked about, or you have probably done in an introductory lab course? What is the first thing you need to do? Yeah, solubilize it. And how do you solubilize it? AUDIENCE: With a detergent. JOANNE STUBBE: Yeah, with some kind of detergent. It's like what you saw with the chylomicrons, or the bile acids that we talked about. So you can use different-- and people have their own favorite detergents. But again, that changes things. But otherwise, you can't purify anything unless you happen to have a membrane where the only protein in the membrane is the one you're interested in, which, of course, doesn't exist. So anyhow, they went through that. And then what did they end up seeing? So they went through different steps, and they separate them into different-- the supernate, or the light and the heavy membrane fractions. And then they have to analyze it. And so the question is, how do they analyze to tell how well these separations actually worked? What was the method that they used to determine whether they separated the ER from the plasma membrane, from the Golgi stacks, from the lysosomes, from the peroxisomes? So we have all these little organelles in there. What did they do to test each one of these fractions?
Let me ask you this question-- how do you think they got the-- how do you how did they get the material out of these gradients to do the experiments that I was just talking about. So they want to analyze what's in each of these bands. How did they get it out of this tube? AUDIENCE: Would they use a Pasteur filter? JOANNE STUBBE: So what do you think? You just stick it down in and suck it out? Well, I mean, yes, so what do you think? You could do that-- you open the top, you stick it in, you carefully stick it in. If you can see it. Lots of times you can see these lipids, because they're opaque, or something. So you can see. Or, if you still hope your sucrose layers, lots of times they layer in between the different concentrations of the sucrose, and you see white stuff precipitating. So you could conceivably stick a pipe head from the top and suck it out. AUDIENCE: But that would perturb all the other layers. JOANNE STUBBE: Absolutely it would perturb all the other layers. So here you're doing something-- it's already a very hard experiment, because they're all being perturbed anyhow, because of diffusion. So is there any other way you could think about separating these things? And so, the hint is that they use plastic tubes. So these things are not glass. Most centrifuges-- AUDIENCE: Freeze it? Cut it? JOANNE STUBBE: Well, so you don't do that, that could be-- OK, so you could. But you then have to, if you were cutting it, you still have to get it out of the tube. Unless you had a saw that didn't have any vibrations when you were cutting it, of course, which would not happen. But if you look here in this cartoon, so I gave you this, what are they doing here? They're sticking a syringe in through the side of the tube. And that's still what people use. So you can suck out-- if you can see something. So you have to be able to see in some way to know where to suck it out, so you might have a way, actually, in doing ultracentrifugations. I think with the lipids you can see them by eyeball, but you might look at absorption. If they have proteins, you could monitor absorption through the gradient, and that might tell you how to fractionate things. But anyhow, that's also an issue. Because before they can do the next step in the analysis, they've got to get the material out. So they've got the material out in each of these steps, and then, how do they look at this? They can pull it out. So what are they looking for? To tell them how effective this method is. AUDIENCE: Maybe some specific markers for each protein. JOANNE STUBBE: Exactly. So what are they-- to do that, what they're going to have to do is, before we look at the details of the method, I want to go through a western blot. So what do we know about a western blot? AUDIENCE: I have a quick question about the method here. JOANNE STUBBE: About the which method? AUDIENCE: The lysis method [INAUDIBLE] ball bearing homogenizer. So they're literally putting these cells in something like a bunch of ball bearings? JOANNE STUBBE: Yeah, you could do that. There's a lot of ways to crack open cells. I don't know which one's the best-- mammalian cells are really easy to open. Sometimes what I like to do is freeze and thaw them-- sometimes you have like a little mortar and pestle, or something like that. But that's-- I mean, yeast cells, you roll them. You have to have enough cells so you can do something. If you only have a tiny amount of cells, it makes it really challenging with beads, because it covers the beads. 
AUDIENCE: Do you have any issues with any of the different types of membranes that-- JOANNE STUBBE: Sticking to that? Absolutely. I'm sure you have to look at all of that kind of stuff. So how you choose, that's an important thing to look at, how you choose to crack open the cells. And it's the same with bacterial cells-- there are three or four ways to crack open the cells. And I can tell you only one of them really works efficiently. And a lot of people, when they use some of the others, they do something and they assume it works, but they never check to see whether the cell walls have been cracked open. A lot of times they haven't, and so what you get out is very, very low levels of protein, because you haven't cracked open the cell. So figuring out-- mammalian cells are apparently, I haven't worked with those myself, but they're apparently much easier to disrupt than bacteria. Or if you look at fungi-- fungi are really hard to crack open, yeast. So anyhow, that's an important thing to look at. So every one of these things, again, the devil is in the details. But when you're doing your own research, it doesn't matter what method you're looking at. The first time around, you need to look at it in detail, and convince yourself that this is a good way to chase this down. And you look at it in detail the first time around. And when you convince yourself it's working really well, and doing what you want to do, then you just use it. And that's the end of it. You don't have to go back and keep thinking about this over and over again. So the method we're going to use is a western blot. So we've got this stuff out, and have you all run SDS-PAGE gels? OK, so SDS-PAGE gels separate proteins how? AUDIENCE: Based on size... JOANNE STUBBE: By the what? AUDIENCE: It separates into a charge gradient, and then-- not a charge gradient, but-- JOANNE STUBBE: Not charge. AUDIENCE: That's what drives the protein, but... JOANNE STUBBE: Right, but it's based on size, because it's coated-- every protein is coated, at a roughly constant ratio, with this detergent, sodium dodecyl sulfate, which makes them migrate pretty much according to molecular weight. But if you've done these, it's not exactly like the molecular weight. You can do standards where you know the molecular weight, you can do a standard curve, and then you see where your protein migrates. And sometimes they migrate a little faster, sometimes a little slower, but it's OK. So you run this, and then what do you do? Does anybody know what you do next, to do a western? AUDIENCE: You need to use the membrane to... JOANNE STUBBE: Right, so the next thing they did was they used-- I'm going to put all of these up-- so they transferred it to a membrane. And why did they have to transfer it to a membrane to do this analysis? This is an extra step. And it turns out-- we're going to look at an antibody interacting with a protein. Why don't we just look at the antibody interacting with the protein to start with? AUDIENCE: It doesn't have access to the protein. JOANNE STUBBE: Right, it doesn't have very good access. It's really not very efficient. So people found, pretty much by trial and error, that you needed to transfer this to a membrane. I mean, we have hundreds of kinds of membranes. How did they choose nitrocellulose? If any of you have ever run westerns, do you remember what kind of a membrane you used? Did you use nitrocellulose? You do this in undergraduate class, don't you? You don't do a western? We used to do-- AUDIENCE: Did it once in undergrad class.
JOANNE STUBBE: Yeah, in what kind of a membrane? Was it in biology? AUDIENCE: Yes, biology. JOANNE STUBBE: So what membrane? Do you remember what the membrane was? AUDIENCE: I think it was-- it was not nitrocellulose. JOANNE STUBBE: It's not nitrocellulose. So this PVDF, polyvinyl difluoride is the standard one that people use now. It works much better than nitrocellulose-- this paper is really old, and so they're looking at nitrocellulose. So then they do this. And then, what do they do next? They have an antibody-- we'll look at the details of this in a minute-- that can recognize the protein, that can find it on the membrane. And then what we're going to see is-- you still can't see anything really, because you don't have very much material there. And you can't observe-- you don't have enough to stain, oftentimes, by Coomassie, so you're going to have to amplify the signal. So then you're going to make an antibody to an antibody. And then you have to figure out how to, then, amplify the signal. And we'll look at that in a second. Is this what-- you ran a western, is this what westerns look like? AUDIENCE: I remember, we first [INAUDIBLE] non-specific proteins to occupy the sites. JOANNE STUBBE: Yeah, so that's good, you have to block everything, if you're using crude extract. So in this case, we would be using the crude mixture-- well, not a crude mixture, it's been fractured by the ultracentrifugation that's been fractionated. But you still have mixtures of proteins in there. Have any of you ever looked at westerns in a paper? Or even the papers you had to read? The paper on the PC-- go look at the PCK-- PCSK9 paper, that had westerns in it. What do you see? Do people show you something that looks like this? And if they did show you that, what would it look like? So you have an antibody that's specific for the protein of interest, whatever that is-- supposedly specific. What do you see? What do you think you see? Do you think antibodies are specific? I think I have an example of a typical western. AUDIENCE: I don't think they're as specific as [INAUDIBLE] JOANNE STUBBE: Yeah. Yeah. So when you look at a paper, you should pay attention to this when you read a paper, if you're doing anything in biology, what do you see? You never see a gel, ever. What you see is a slice of a gel where they cut off this-- the way they cut up all this stuff and all this stuff. The reason they do that is because it's a hell of a mess. So let me just show you a typical-- I don't care what kind of an antibody you're using, in crude extracts, it's a mess. Because you have non-specific interactions. We'll just look at that. So that would be something like you might see-- depending on how much antibody you have. So when you see this, the reason everybody reports data like that now. So it looks like it's really clean, but in reality-- I think if it is dirty as that, then in my opinion, I would make you publish the whole gel. But people don't do that. They just cut off the little band they're interested in-- they can see it change in concentration using this method. But you should be aware of the fact that antibodies in general aren't as specific as you think they're going to be. Yeah? AUDIENCE: Are they required to report the whole gel in supplementals? JOANNE STUBBE: I mean, I think, it probably depends on the journal, and it probably depends on the reviewer. But I would say, we're going away from data-- is something that is a pet peeve for me. 
And all the data, which I think is all right, is published in supplementary information, as opposed to the paper. I think if you have something really dirty, you should publish in the paper, in the main body of the paper. If you have something that's really clean, and it looks like that, it's fine with me. You don't even have to publish it, if you could believe what people were saying. Because people know what this looks like, a lot of people-- everybody uses westerns. But if it's a real mess, then you need to let your reader know that this is not such an easy experiment, and it's not so clear-cut. That's what your objective is, is to show people the data from which you drew your conclusions. And then they can draw their own conclusions, which may be different. So let's look at the apparatus to do this. So how do you get from here to here? So you have a gel, you run the gel, a polyacrylamide gel-- what do you do? AUDIENCE: Put the membrane on the gel. JOANNE STUBBE: So you put the membrane on the gel. And what do you do? AUDIENCE: [INAUDIBLE] applying charges to. JOANNE STUBBE: Yeah, so you're transferring it based on applying a voltage across this system. So here's your gel. And here's your membrane, nitrocellulose membrane. And then they have filter paper above the gel, and below the membrane. Why do you think they have the filter paper there? When you ran the gel, did you have filter paper? AUDIENCE: Yes. JOANNE STUBBE: Yeah. How do you think they decide how to do this transfer? Do you think is a straightforward? Do you run it for an hour, do you run it for five hours, do you run it for 15 minutes? What is the voltage you use to do the transfer? Do you think any of that is hard to figure out? So how do you figure that out? Somebody told you that this is a good way to do it? Yeah, so that might be a place you start. So you do it because somebody gave you a recipe. But then what do you need to do to make sure this recipe is correct? AUDIENCE: Find out what conditions that work for what you're working on. JOANNE STUBBE: Right, and then how do you do that? So that's true, every protein is going to be different. And if you have a protein-- if you have a clean protein, versus a mess of proteins, and you try to do this transfer, the transfer conditions will be different. So for example, if you really want to look at the concentration of something inside the cell, in the crude extracts, you never compare it to a standard with clean protein, because this transfer is different. So you need-- in the back of your mind, if you care about quantitating this, you need to understand the basis of the transfer. So why do you think they have these filter papers here? So this goes back to what controls you would do to see whether your transfer was working. So what would you look for? Did you do this? What did you do? What did you do with the filter papers in your-- AUDIENCE: You want to filter all to the SDS molecules... JOANNE STUBBE: You did what? AUDIENCE: You want to filter all-- JOANNE STUBBE: No, that's not what you do. I mean, you might want to do some of that, too, but in terms of thinking about whether your transfer is successful-- figuring out the conditions to blot from the gel to a piece of paper is not trivial. And there is a standard way that you do this, initially, to try. But then you have to make sure that that method is working. And lots of times it doesn't work. So it's something that's going to be experimentally determined. 
So the question is, what would you think would happen if you did this for six or seven hours? Whereas, a normal blot would take two hours? AUDIENCE: Would be transferred onto the filter paper? JOANNE STUBBE: Right, it would go right into the filter paper, or even off the filter paper. So what you do is you take the filter paper out, you look for protein being bound. What about the gel? What do you do with the gel after your experiment's over? AUDIENCE: Make sure a protein's not on it? JOANNE STUBBE: Right, make sure that the protein is not on it. So these are simple controls, but these are the controls you always do until you work out the conditions to make sure this works. And it's pretty critical to make sure you have good transfer. So then, so this is the antibody thing that they do. Has anybody thought about these kinds of assays? You've seen them, I think, already in class. But what's wrong with this picture? The target protein, what's wrong with this picture in the target? So here's your nitrocellulose filter paper. What's wrong with this cartoon? Should be unfolded, yeah. So you're doing SDS page, it's unfolded. So then we react it with an antibody. Presumably we have a good antibody, but you've already learned in the first half of this course that having really good antibodies is not so trivial-- you can get them, but most of the time they are not specific if you're looking at crude extracts. They have little epitopes they recognize, if you're using monoclonals that could be present in other proteins. And furthermore, how are you detecting something? An antibody as a protein, it has absorption of 280. Again, this is too low to see, so putting an antibody on it is still going to be too low to detect. So how do you detect your signal? So have you done this? I'm surprised they don't do this in your introductory class-- they don't do westerns, at all. So what you're looking at is an antibody to an antibody. So you put your antibody on, that's specific for your protein. And then you make an antibody in another organism that can specifically recognize antibodies in general. So if this is to a mouse, you make it to go and isolate that. And then what you do is derivatize the second antibody with what? A protein? That can function as a catalyst. AUDIENCE: Why can't you just derivatize the first antibody? JOANNE STUBBE: Well, what? What did you say? AUDIENCE: It's more expensive? JOANNE STUBBE: Well, no, I don't know whether it's more expensive or not. But-- AUDIENCE: Well, because you'd have to derivatize every primary antibody. JOANNE STUBBE: So you'd have the derivatize every primary antibody, and so this is a standard procedure. You could derivatize the primary antibody. So that's not a bad question. And so what you're doing now, you can buy these commercially, so they have rabbit, rabbit, mouse, whatever, antibodies. And the key is the amplification of the signal, and you use enzymes to amplify the signal. Does anybody know what the enzymes are, what the enzymes do to amplify the signal? AUDIENCE: You can covert the molecule to a blue molecule... JOANNE STUBBE: To something that's colored. So does anybody know what that horseradish peroxidase-- have you ever heard of horseradish peroxidase? So that's a heme iron-- we're going to be talking about heme irons pretty soon, and hydrogen peroxide. It makes a chemically very reactive iron oxide species, that can oxidize a dye that changes color. And it has extremely high extinction coefficients. 
So you can see it, and it does it catalytically and the lifetime of the dye is long enough. So it accumulates, and you can get really amplification of your signal. Or you can use a phosphatase that liberates something that's highly colored, again, and you can see it. So this is a standard method that everybody uses. And so, that's our gel. So now we're looking at sort of-- at the end already-- but we're looking at these gels, and what do you see through the different steps? So if we look through the first gradient, through the sucrose gradient, that gets us through DNE. And if you look, say, at lane E-- our goal is to separate proteins that are specifically localized in each one of these membranes. So you need to believe that's true, that people have selected the right group of proteins to look for. And you notice they do more than one. So they look at multiple proteins. Why do you think-- do you think it's easy to select the proteins to look for? And why or why not? So, they obviously have selected a group of proteins, and I think most people would agree that they've selected a good group of proteins. But what do we know now about proteins, do they stay in one place? No, they move around. But some might be present in very low amounts, sometimes in much higher amounts. And so you need to have more than one protein as a control to make sure you're looking in the right region. And what do you see in E? If you look over here, it tells you what the organelle is. And if you look at this protein, this is localized to the lisosomes-- we talked about that in class. If you looked at this protein, it's localized to the peroxisomes. So in addition to the ones we care about, the ER proteins, we're also getting proteins that are localized in other membranes. So that's when they went to the next method, and they added on another gradient to try to separate out, again, the lysosomal and the peroxisomal proteins. And you can see they were pretty successful at this. There's none of these proteins left in this gradient. So that's good. And they took it a step further. Do you remember what this is? What are they looking for down here, in this? AUDIENCE: Enzymatic activity. JOANNE STUBBE: Yeah, so enzymatic activity is localized in certain organelles. So they again did a second experiment to look at all of that. So they were very careful in this, they figured out how to separate. And that's the key thing for them to analyzing the concentration of cholesterol in these membranes. And what they looked at-- we're over time-- but is the concentration of cholesterol compared to the total amount of lipids. And how did they do that analysis? Gene Kennedy, who's at Harvard Medical School-- he's in his 90s, now-- really trained all the lipid chemists in the whole country. And they figured out many years ago how to separate lipid fractions with methanol, chloroform extract, something that you guys probably haven't though about at all. But we're really pretty good at separating things, and it's nothing more than an extraction like you do as organic chemist to purify and separate things. We've figured that out. And so then they use mass spec to allow them to quantitate the amount of glycerol. And then in the end, so they use mass spec, these western blots, and they can change the concentration of the cholesterol and do the experiments over and over again, to see what happens. And when they do that, this is the picture of cyclodextrin. So you can see the only difference is this group here versus that with a methyl. 
And one, so this is hydroxypropyl cyclodextrin-- so it's like a cavity like this. And the only other change here is a methyl group, removing that. And they have very different properties about binding and releasing cholesterol, which somebody had to do a lot of studying on to be able to ensure that they can use it to remove cholesterol, and then to add it back to the media. And so you have to think about the exchange kinetics, you have to think about a lot of things. This is not trivial to set up, to figure out how to control the levels of cholesterol. And then what they do is, this is like a typical assay, and this is the end. What you can do is this-- this removes cholesterol, and you can see it change. This reports on low levels of cholesterol, which is what's happening over here-- it allows the protein to move to the nucleus, where it's smaller. And that's how they do the correlation-- the correlation between the levels in the nucleus and the levels of cholesterol. So I thought this was a pretty cool paper. And these kinds of methods, I think, will be applicable to a wide range of things if people ever do biochemistry, looking at the function of membranes. So, OK, guys.
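A minimal sketch of the kind of quantitation being described: the fraction of SREBP cleaved is estimated from the intensities of the small (cleaved, nuclear) and full-length bands on the western, and then plotted against ER cholesterol measured as a percentage of total lipids. All numbers below are invented for illustration; they are not the paper's data.

# Hypothetical densitometry values, for illustration only.
er_cholesterol = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]     # mol% of total ER lipids
cleaved_band = [950, 900, 600, 200, 80, 40]          # intensity of the small, cleaved SREBP band
full_length_band = [50, 100, 400, 800, 920, 960]     # intensity of the full-length, membrane-bound band

for chol, cleaved, full in zip(er_cholesterol, cleaved_band, full_length_band):
    fraction_cleaved = cleaved / (cleaved + full)
    print(f"{chol:4.1f} mol% cholesterol -> {fraction_cleaved:.2f} of SREBP cleaved")

Plotting fraction cleaved against the measured ER cholesterol is what defines the switch point that the whole recitation has been circling around.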
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Conclusion_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
Hello. We hope you found this course to be enjoyable and rewarding. Now that we're wrapping up, your thoughts might be turning to how you can build on what you've learned to conduct original research, to develop new technologies, and so much more. The material should position you really well to do this. The models for word representations that we discussed are likely to be valuable components for any task you take on. Our relation extraction unit focused on powerful techniques for distant supervision, which is a really common mode for applied problems. And the natural language inference unit is representative of the kind of opportunities and challenges one faces when building deep learning systems with really large data sets. And of course, the other lectures highlight more diverse application areas and help reveal how even complex cutting edge models are actually usually made up of familiar modular components. And of course, by now you've done a lot of work with our notebooks, you've designed three original systems and entered them into bake-offs, and you've completed an original project. This is an unusually high level of hands-on work, and the practical skills you've acquired should serve you well in many domains. The field of NLU continues to progress rapidly, and you're now extremely well-positioned to follow those changes and even help to shape them. For our part, we'll continue to update and improve our NLU course materials, and share them widely. And we hope to see you in a future Stanford class soon.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Matrix_Designs_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to Part 2 of our series of screencasts on distributed word representations. The focus of this screencast will be on matrix designs. Let's start with the word-by-word design that we concentrated on in Part 1. So here again, we have a vocabulary along the rows. That same vocabulary is repeated along the columns, and the cell values captured a number of times that each row word co-occurred with each column word in some large collection of texts. This matrix will have two properties that I think make it noteworthy for developing semantic representations. The first, is it will be very dense. And as we bring in more data from ever-larger corpora, it will get denser and denser in virtue of the fact that more words will tend to co-occur with more other words, in this ever-larger collection of documents. The second is that it kind of has the nice property that its dimensionality will remain fixed, even as we bring in more data. As long as we decide on the vocabulary ahead of time, all we'll be doing is incrementing individual cell values. And so we could bring in as much data as we want, but without changing the fundamental design of the object. Both of those things are points of contrast with another common design that you see in the literature, especially in information retrieval, and that is the word-by-document design. For this design, again, I have words along the rows, but my columns are now individual documents, and the cell values capture the number of times that each word occurs in each one of those documents. As you can imagine, this is a very sparse matrix in contrast to the word-by-word one that we just looked at in virtue of the fact that most words don't appear in most documents. And we'll also have the property that as we bring in more data in the form of more documents, the shape of the matrix will change, we'll be adding column dimensions for each new document that we bring in to the space, and that could really affect the kind of computations that we can do. The only thing that balances against the ever-increasing size of this matrix is that because it is so sparse, we might have some easy and efficient ways of storing it efficiently, putting it on par with a much more compact but dense word-by-word matrix that I showed you before. Now, those are two very common designs that you see in the literature, but I want you to think creatively and align your matrix design with whatever problem you're trying to solve. So let me show you one that's really radically different. This is what I've called the word by discourse context matrix. I derived this from the Switchboard Dialog Act Corpus, which is the Switchboard corpus, where each dialog act has been annotated by an expert annotator with the sort of dialog act or speech act that was performed by that utterance. What that allows us to do is collect a matrix where the rows are, again, words, but the columns are those individual labels the annotators assigned. I think this is a really interesting matrix. I think if you appear even at this small fragment, you can see some interesting information emerging. So for example, "absolutely" occurs a lot in acceptance dialog acts, whereas more hedged words like "actually" and "anyway" are more common in things like rejecting part of a previous utterance. And I'm sure there are lots of other interesting patterns in this matrix. And of course, that's just a glimpse of the many other design choices that you could make. Again, think creatively. 
You could have something like adjective by modified noun. This would probably capture some very local syntactic information or collocational information. We could generalize that a bit to word by syntactic context to explicitly try to model how words associate with specific syntactic structures. It would be very different from our usual semantic goals for this course. Word by search query might be a design that you use in information retrieval. We don't even have to limit this to linguistic objects. Product by person can capture the number of times that each person purchased a specific product, and then we could cluster people or products on that basis. We could also mix linguistic and non-linguistic things, so word by person might capture different usage patterns for individual speakers and again, allow us to do some kind of interesting clustering of words or of people. We can also break out of two dimensions. We can have something like word by word by pattern or verb by subject by object. Many of the methods that we cover in this unit are easily generalized to more than two dimensions, so you could keep that in mind. And of course, as I said, think creatively, and think in particular about how your matrix design is aligned with whatever modeling goal you have or whatever hypothesis you're pursuing. Another connection that I want to make is that even though this feels like a kind of modern idea in NLP, vector representations of words or of objects are actually pervasive, not only throughout machine learning, but also throughout science, right? So think back to older modes of NLP where we would write a lot of feature functions. We'll be exploring such techniques later; they can be quite powerful. Even though they feel very different from the distributional hypotheses that we've been pursuing, in fact, they also represent individual data points as vectors. So for example, given a text like "the movie was horrible," I might reduce that with my feature functions to a vector that looks like this. And I might know as a human that 4 captures the number of words, 0 captures the number of proper names, and 1 over 4 captures the percentage of negative words according to some sentiment lexicon-- that's a human-level understanding of this. In fact, those dimensions will acquire a meaning to the extent that we assemble them into a vector space model and the column-wise elements are compared with each other. So even though the origins of the data are very different, in fact, this is just like vector representations of words in the way we've been discussing it. The same thing happens in experimental sciences where you might have an experimental subject come in and perform some act in the lab. They do a complicated physical and human thing, and you reduce it down to a couple of numbers, like a choice they made or a reaction time in a choice and so forth. We might model entire humans or entire organisms with a vector of numbers representing their physical characteristics, and perspectives, and outlooks and so forth. Again, we might know what these individual column dimensions mean, but they acquire a meaning when we're doing modeling only to the extent that they are embedded in a matrix and can be compared to each other across the columns and so forth. There are many other examples of this, where essentially, fundamentally, all of our representations are vector representations.
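Here is a minimal Python sketch of that feature-function style of vectorization, applied to "the movie was horrible." The tiny negative-word lexicon and the capitalization heuristic for proper names are stand-ins invented for the example, not real resources.

def hand_built_features(text, negative_lexicon):
    """Map a text to [word count, proper-name count, fraction of negative words]."""
    tokens = text.split()
    n_words = len(tokens)
    # Crude proper-name heuristic: capitalized tokens that are not sentence-initial.
    n_proper = sum(1 for tok in tokens[1:] if tok[0].isupper())
    frac_negative = sum(1 for tok in tokens if tok.lower() in negative_lexicon) / n_words
    return [n_words, n_proper, frac_negative]

negative_lexicon = {"horrible", "awful", "bad"}   # illustrative stand-in for a sentiment lexicon
print(hand_built_features("the movie was horrible", negative_lexicon))   # [4, 0, 0.25]

The output matches the vector described above: 4 words, 0 proper names, and one quarter of the tokens negative according to the lexicon.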
So maybe the far-out idea for this unit is just that we can gather interesting vector representations without all of the hand work that goes into the examples on the slide right now. A final technical point question that you should ask that's separate from your particular matrix design, what is going to count as co-occurrence? So I think there are at least two design choices that are really important when answering this question. To illustrate them, let's use this small example. So I have this text "from swerve of shore to bend of bay, brings." And imagine that our focus word at our particular point of analysis is this token of the word "to." And these indices here indicate going left and right, the distance by counts from that particular focus word. The first question that you want to decide is, what's your window of co-occurrences going to be? So for example, if you set your window to 3, then the things that are within 3 distance of your focus word will co-occur with that word, and everything falling outside of that window will not co-occur with that word, according to your analysis. If you make your window really big, it might encompass the entire document. If you make it very small, it might encompass only a very local kind of collocational information. So you can bet that that's going to be meaningful. There's a separate choice that you can make falling under the heading of scaling. I think a default choice for scaling is to just call it flat. So what you're saying there is something is going to co-occur once with your focus word if it's in the window that you've specified, and that would kind of equally weight all of the things that are in the window. You can also decide to scale, a common scaling pattern would be 1 over n, where n is the distance by word from your focus word. That would have the effect that things occur that occur close to the word of interest, co-occur with it more than things that are at the edges, that are near the end of the window. Those choices are going to have really profound effects on the kinds of representations that you develop. Here are some generalizations I can offer. Larger, flatter windows will capture more semantic information. As the window gets very large to encompass for example, the entire document, you'll be capturing, essentially, topical information. In contrast, if you make your window very small and scaled, you'll tend to capture more syntactic or collocational information. Independently of these choices, you could decide how text boundaries are going to be involved. So a text boundary at the level of a sentence or a paragraph or a document or a corpus could be a hard boundary that's independent of your window, or you could decide that you're going to allow your window to go across different notions of segment that you have. That's really up to you. And again, I think it will have major consequences for downstream tasks involving the representations that you've created. To help you begin exploring this space, the associated code released for this course, the associated notebooks, provide you with four word-by-word matrices, and they have a few things that allow you to do comparisons. First, there are two matrices that were developed from the Yelp Academic Dataset, which is a lot of reviews of products and services. And there are two matrices that come from Gigaword, which is news wire text. So there's fundamentally a real difference in the genre of text involved. 
In addition, for each of those corpora, we have two different designs-- window size of 5 and scaling of 1 over n, which ought to, by my hypotheses, deliver a lot of kind of collocational or syntactic information. And a window size of 20 and scaling of flat, a very large window, lots of things co-occurring with lots of other things, that might be a better basis for semantics. And you have those two points of variation, both for Yelp and for Gigaword, and I'm hoping that kind of gives you a sense for how these design choices affect the representations that you're able to develop with methods that we're going to cover in later parts of the screencast series.
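To make those window and scaling choices concrete, here is a minimal Python sketch of windowed co-occurrence counting for a word-by-word design. This is not the course's own vector-space code, just an illustration of flat versus 1-over-n weighting.

from collections import defaultdict

def cooccurrence_counts(tokens, window=5, scaling="1/n"):
    """Word-by-word co-occurrence counts with a fixed window and optional 1/n distance scaling."""
    counts = defaultdict(float)
    for i, focus in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            weight = 1.0 if scaling == "flat" else 1.0 / abs(i - j)
            counts[(focus, tokens[j])] += weight
    return counts

toy = "from swerve of shore to bend of bay".split()
local = cooccurrence_counts(toy, window=5, scaling="1/n")    # smaller, scaled: more collocational
broad = cooccurrence_counts(toy, window=20, scaling="flat")  # larger, flat: more topical
print(round(local[("to", "bay")], 3), broad[("to", "bay")])  # 0.333 vs. 1.0

The window=5, scaling="1/n" setting mirrors the first of the two provided designs, and window=20, scaling="flat" mirrors the second, for both the Yelp and Gigaword matrices.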
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
SNLI_MultiNLI_and_Adversarial_NLI_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone, to part 2 in our series on natural language inference. We're going to focus on the three data sets that we'll be concentrating on this unit, which are SNLI-- the Stanford Natural Language Inference Corpus-- MultiNLI, and Adversarial NLI. I think they're interestingly different, and they're all big benchmark tasks that can support the training of lots of diverse kinds of systems. So let's begin with SNLI, which is the first to appear of these three. The associated paper is Bowman, et al., 2015. Sam Bowman was a student in the NLP group, and I was his advisor, along with Chris Manning, and a bunch of us contributed to that paper. An important thing to know about SNLI is that the premises are all image captions from the image Flickr30K data set. So that's an important genre restriction that you should be aware of when you think about training systems on this data. All the hypotheses were written by crowdworkers. I'll show you the prompt in a little bit. But the idea is they were given this premise, which was an image caption, and then they wrote three different texts corresponding to the three NLI labels. Unfortunately, as is common with crowdsourced data sets, you should be aware that some of the sentences do reflect stereotypes. I think this traces to the fact that crowdworkers, trying to do a lot of work, are faced with a creative block. And the way they overcome that is by falling back on easy tricks, and some of those involve stereotypes. Completely understandable, and this is something that the field is trying to come to grips with as we think about data set creation. It's a big data set. It has over 550,000 training examples. And it has dev and test sets. Each have 10,000 examples balanced across the three classes. Here's a look at the mean token lengths. It's just sort of noteworthy that premises are a little bit longer than hypotheses. I guess that comes down to the fact that crowdworkers were writing these sentences. In terms of clause types, mostly, we talk about NLI as a sentence task. But in fact, only 74% of the examples are sentences that is S-rooted in their syntactic parses. It has a large vocabulary, but may be modest relative to the size of the data set, and that might come back to the fact that the genre is kind of restricted. We had about 60,000 examples that were additionally validated by four other annotators. And I'll show you the response distributions, which suggests some sources of variation. They had high interannotator agreement. So given that validation, about 60% examples had a unanimous gold label. And we rate the overall human level of agreement at about 91.2% for the gold labels. And that's the measure of human performance that's commonly used for SNLI. And the overall Fleiss kappa measured interannotator agreement was 0.7, which is a high rate of agreement. And then for the leaderboard, you can check out this link here. Sam has been good about curating all the systems that enter, and you can get a sense for which approaches are best. It's clear at this point, for example, that ensembles of deep learning methods are the best for this problem. I mentioned before, the crowdsourcing methods I think it's worth thinking about precisely what happened here. So here's the crowdsourcing interface. There's some instructions up here. Here's the caption-- that is, the premise sentence in our terms-- a little boy in an apron helps his mother. And then the crowdworker had to come up with three sentences. 
One definitely correct-- that's an entailment case. One may be correct-- that is our gloss on neutral. And one definitely incorrect, which is our gloss on contradiction. So you can see here that there's an attempt to use informal language connecting with informal reasoning, common sense reasoning, in the prompt here. And then those get translated into our three labels for the task. And here are some examples from the validated set. And I think they're sort of interesting, because you get high rates of agreement, but you do find some examples that have a lot of uncertainty about them, like this last one here. And I think that might be a hallmark, actually, of NLI problems. Now, one really fundamental thing that I mentioned in the overview screencast as definitely worth being aware of relates specifically to the contradiction relation. And there's discussion of this in the paper. It's a tricky point. What we say for SNLI, using these simple examples here, is that both of them are in the contradiction relation. The first one has premise "a boat sank in the Pacific Ocean" and hypothesis "a boat sank in the Atlantic Ocean." You might ask, of course, those could be true together. They should be neutral, not contradiction. The reason we call them contradiction is because we make an assumption of event coreference, that we're talking about the same boat in the same event. And therefore, the locations contradict each other in a common sense way. And the second example is an even more extreme case of this. "Ruth Bader Ginsburg was appointed to the Supreme Court" and "I had a sandwich for lunch today." We say those are in the contradiction relation. Of course, they could be true together. But they couldn't, in our terms, be true of the same event. They're describing very different events. And for that reason, they get the contradiction label. If a premise and hypothesis would probably describe different photos, then the label is contradiction. That's kind of anchoring back into our underlying domain that you might have in mind. We can mark progress on SNLI, because Sam has been curating that leaderboard. As I mentioned before, we estimate human performance up here at almost 92. And along this x-axis here, I've got time. And you can see that very quickly, the community has hill-climbed toward systems that are superhuman, according to our estimate. But down here at 78 is the original paper. That was from an era when deep learning systems were really not clearly the winners in this kind of competition, but SNLI helped change that by introducing a lot of new data. So a very rapid rise in system performance, and then basically monotonic increase until 2019, when we saw the first systems that were, in these restrictive terms, better than humans at the SNLI task. Let's move to MultiNLI, which was a kind of successor to SNLI. This was collected by Adina Williams and colleagues, including Sam Bowman. The train premises, in this case, are going to be much more diverse. They're drawn from five genres-- fiction; government reports, and letters and things; the Slate website; the Switchboard corpus, which is people interacting over the phone; and Berlitz travel guides. And then interestingly, they have additional genres just for dev and test. And this is what they call the mismatched condition. And those are the "9/11 Report," face-to-face conversations, fundraising letters, and nonfiction from Oxford University Press, as well as articles about linguistics.
So this is noteworthy because in the mismatched condition that MultiNLI sets up, you are forced to train on those training examples and then test on entirely new genres. And you can just see how different, for example, Berlitz travel guides might be from the "9/11 Report." I think this is an interesting early example of being adversarial and forcing our systems to grapple with new domains and new genres. And I think that's a really productive step in testing these systems for robustness. It's another large data set, slightly smaller than SNLI. But actually, the example lengths tend to be longer. They did the same kind of validation, and that gives us our estimates of human performance. And once again, I would say that we can have a lot of confidence. There was a high rate of agreement. 92.6% is the traditional measure of human performance here. For MultiNLI, the test set is available only as a Kaggle competition, and you can check out the project page here. I love the fact that MultiNLI was distributed with annotations that could help someone kind of do out-of-the-box error analysis. What they did is have linguists go through and label specific examples for whether or not they manifested specific linguistic phenomena, like: do the premise and hypothesis involve variation in active-passive morphology? That might be a clue that the sentences are synonymous or in an entailment relation, but nonetheless hard for systems to predict because of the change in word order. We also have things like whether there are belief statements, conditionals, whether coreference is involved in a nontrivial way, modality, negation, quantifiers-- things that you might think would be good probes for the true systematicity of the model you've trained. And you can use these annotations to kind of benchmark yourself there. I think that's incredibly productive. How are we doing on MultiNLI? So again, we're going to have our score over here and, on the x-axis, time. We have that human estimate at 92.6%. And since it's on Kaggle, we can look at lots more systems. For SNLI, we just have the published papers. But on Kaggle, lots of people enter and they try lots of different things. As a result, you get much more variance across this. It's much less monotonic. But nonetheless, you can see that the community is rapidly hill-climbing toward superhuman performance on this task, as well. And again, I would just want to reiterate, recalling themes from our introductory lecture, this does not necessarily mean that we have systems that are superhuman at the task of common sense reasoning, which is a very human and complex thing, but rather, systems that are just narrowly outperforming humans on this one particular, very machine-like metric, which gives us our estimate of human performance here. Still, startling progress. And then finally, Adversarial NLI, which is kind of a response to that dynamic that looks like we're making lots of progress. But we might worry that our systems are benefiting from idiosyncrasies and artifacts in the data sets, and that they're not actually good at the kind of human reasoning that we're truly trying to capture. And that gave rise to the Adversarial NLI project. The paper is Nie et al., which also involves some authors from the earlier data sets, SNLI and MultiNLI. It's another large data set. A little bit smaller, but you'll see why it's special in some respects. The premises come from very diverse sources. We don't have the genre overfitting you might get from SNLI. 
And the hypotheses were again, written by crowdworkers. But here, crucially, they were written not in the abstract, but rather with the goal of fooling state-of-the-art models. That's the adversarial part of this project. And this is a direct response to this feeling that results in findings for SNLI and MultiNLI, while impressive, might be overstating the extent to which we've made progress on the underlying task of common sense reasoning. So here's how the dataset collection worked in a little more detail. I think this is a fascinating dynamic. The annotator was presented with a premise sentence and one condition, which would just correspond to the label that they want to create. They write a hypothesis, and a state-of-the-art model makes a prediction about the premise-hypothesis pair, basically predicting one of these three condition labels. If the model's prediction matches the condition, the annotator returns to step 2 to try again with a new sentence. If the model was fooled, though, the premise hypothesis pair is independently validated. So in this way, we're kind of guaranteed to get a lot of examples that are very hard for whatever model we have in the loop in this process. Here are some more details. So it has three rounds, this data set, for its first release. Overall, that results in that large data set. And you can see that in subsequent rounds, the model is going to be expanded to include previous rounds of data, in addition, possibly, to other data resources. And so what we're hoping is that as we progress through these rounds, these examples are going to get harder and harder in virtue of the fact that the model is trained on more data and is getting better as a result of seeing all these adversarial examples. In terms of the splits, the train set is a mix of cases where the model's predictions were correct and where it was incorrect, because sometimes in that loop, the annotator was unable to fool the model after some specified number of attempts. And we keep those examples, because they're nonetheless interesting training data. However, in the dev and test sets, we have only examples that fooled the models. So with respect to the best model for each round, the test set is as adversarial as it could possibly get. The model has gotten every single example wrong. And Adversarial NLI is exciting because it's given rise to a whole movement around creating adversarial datasets. And that's represented by this open-source project, Dynabench. And we just recently published a paper that's on the Dynabench effort, reporting on a bunch of tasks that are going to use approximately adversarial NLI techniques to develop datasets that are adversarial in lots of domains. And you've actually seen one of these in the Dynasent dataset from our previous unit on sentiment analysis. And here's the Dynabench interface. And I guess I'm just exhorting you, if you would like to get involved in this effort, it's a community-wide thing to develop better benchmarks that are going to get us closer to assessing how much progress we're actually making. And then finally, there are a lot of other NLI data sets that I didn't mention. So let me just run through these. The GLUE benchmark has a lot of NLI tasks in it, as does SuperGLUE, which is its successor. I mentioned before, in the context of ANLI, this NLI-style FEVER dataset. FEVER is fact verification, and I've just translated the examples into NLI ones. Here's an NLI corpus for Chinese, and here's one for Turkish. 
The Chinese examples are all original, and the Turkish one is a translation with validation of SNLI and MultiNLI into Turkish. XNLI is a bunch of assessment data sets-- that is, dev-test splits-- for more than a dozen languages, drawing on the MultiNLI examples. Those are human-created translations that could be used to benchmark multilingual NLI systems. And then there are a few others down here kind of aimed at getting genre diversity, and then NLI for specialized domains. Here is medicine and science. And those could be interesting for seeing how well a model can grapple with variation that comes in very specific and maybe technical domains. So there's a wide world of tasks you can explore, and I think that makes NLI a really exciting space in which to develop original systems, and projects, and so forth.
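As a practical footnote to this screencast, here is a short sketch of one way to get hands-on with the three main data sets discussed above, using the Hugging Face datasets library. This is not part of the course code; the hub identifiers "snli", "multi_nli", and "anli" and their split names are assumptions based on the current hub, so check the dataset pages if anything has changed.

```python
# A minimal sketch (not from the course code distribution) for loading SNLI,
# MultiNLI, and Adversarial NLI via the Hugging Face `datasets` library.
from collections import Counter

from datasets import load_dataset

snli = load_dataset("snli")
# SNLI marks examples without a majority gold label with label == -1, so it is
# standard to filter those out before training or evaluating.
snli_train = snli["train"].filter(lambda ex: ex["label"] != -1)
print(len(snli_train), Counter(snli_train["label"]))

# MultiNLI ships matched and mismatched dev sets; the test sets live on Kaggle.
mnli = load_dataset("multi_nli")
print(mnli["validation_matched"][0]["premise"])
print(mnli["validation_mismatched"][0]["genre"])

# Adversarial NLI is organized by collection round (r1, r2, r3).
anli = load_dataset("anli")
print(anli["dev_r1"][0])
```

From there you can tokenize the premise/hypothesis pairs however your system requires; the label conventions differ slightly across the three resources, so it's worth printing a few examples from each before training.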
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Practical_Finetuning_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part 6 in our series on Contextual Word Representations. We're going to be talking about practical fine tuning. It's time to get hands-on with these parameters we've been talking about. So here's the guiding idea. Your existing architecture, say, for the current original system and bake off probably can benefit from contextual representation. We've seen that in many, many contexts and I know you know these things. The notebook fine tuning shows you how to bring in transformer representations in two ways. First, with simple featurization, and then with full-on fine tuning. And I'm going to talk about both of those in this screencast. The heart of this idea is that by extending existing PyTorch modules from the course code distribution, you can very easily create customized fine tuning models with just a few lines of code. And that should be really empowering in terms of exploring lots of different designs and seeing how best to use these parameters for your problem. I just want to mention that really and truly, this is only possible because of the amazing work that the Hugging Face team has done to make these parameters accessible to all of us. So let's start with simple featurization. And I actually want to rewind to our discussion of recurrent neural networks and think about how we represent examples for those models. In the standard mode, we have as our examples lists of tokens here. We convert those into lists of indices, and those indices help us look up vector representations of those words in some fixed embedding space. And the result of that is that each example is represented by a list of vectors. That's important to keep in mind. We tend to think of the model as taking as its inputs lists of tokens and having an embedding. But from the point of view of the model itself, it really wants to process as inputs lists of vectors. And that's the empowering idea because if we use it in fixed embedding, of course, then these two occurrences of A will be the same vector, and these two occurrences of B across examples will be the same vector. But the model doesn't really care that there's a same vector. We could, if we wanted to, convert directly from token sequences into lists of vectors using a device like a BERT model. And that would allow that A in the first position and A in the third could correspond to different vectors, or B across these two examples might correspond to different vectors. That would be the contextual representation part of these models. And again, from the point of view of the RNN, we can feed these indirectly. That's straight forward. This is a complete recipe for doing that using the SST code and the PyTorch modules from the course code distribution. So you can see that beyond the setup stuff which we've done a few times, the feature function is just going to use BERT functionality to look up the example's indices and then convert them into vector representations. And here, as a summary, we're going to use the representation above the class token. But lots of things are possible at that point. And then when we have our model wrapper here, we set up a Torch RNN Classifier. And there're just two things of note. First, we say use embedding equals false, because we're going to feed vectors in directly. There's no embedding involved here. And we also don't need to have a vocabulary. You could specify one, but it's not involved because fundamentally, again, the model deals directly with vectors. 
And then at SST experiment, you, again, say vectorized equals false. And that is a complete recipe for bringing in BERT representations with the standard RNN. This isn't quite fine tuning though, so let's think about how we might get added benefits from actually updating those BERT parameters as opposed to just using them as frozen representations inputs to another model. What I'd encourage you to do is think about subclassing the PyTorch modules that are included in our course code distribution. Because then, you will be able to write code, just oriented toward your model architecture, and a lot of the details of optimization and data processing will be handled for you. This is, I hope, a powerful example of that. It comes from the tutorial PyTorch models notebook. It's a Torch Softmax Classifier, and the only thing we have to do is rewrite this build_graph function to specify one single dense layer. We are using as our base class the Torch Shallow Neural Classifier which handles everything else about setting up this model and optimizing it. If we wanted to go in the other direction and instead fit a really deep model, we could, again, begin from Torch Shallow Neural Classifier and rewrite the build_graph function so that it just has more layers, essentially. And then what's happening in this init method is we're just giving the user access to the various hyperparameters that they could choose to set up this model. Finally, here's a more involved example. This one, we start with a PyTorch NN module, kind of all the way down at the base here. This is a Torch Linear Regression model. We set up the weight parameters here and then we have this single forward pass, which corresponds to the structure of a simple linear regression. Now, for the actual interface, we need to do a little bit more work here. So we set up the loss so that it's appropriate for our regression model. So most of the classifiers we've been looking at up until now. build_graph just uses the NN module that I showed you a second ago. We need to do a little bit of work in build_dataset and rewrite that so we process linear regression data correctly. And then we do need to rewrite the predict and score functions to be kind of good citizens of the code base and allow for hyperparameter optimization and cross validation and so forth. But that's, again, straightforward, and fundamentally for predict we're actually making use of the base class's _predict method for the heavy lifting there. And then score, of course, is just moving us out of the mode of evaluating classifiers and into the mode of evaluating regression models. That's all you need to do. And again, conspicuously absent from this is most of the aspects of data processing and all of the details of optimization. The base class Torch model base here has a very full featured fit method that you can use to optimize these models and do hyperparameter exploration. And that brings us to the star of the show, which would be BERT fine tuning with Hugging Face parameters. Here we'll start with a PyTorch nn.module. We load in a BERT module as we've done before, and make sure to set it to train so that it can be updated. And then the new parameters here are really just this classifier layer, a dense layer that's going to be oriented toward the classification structure that we want our model to have. The forward method calls the forward method of the BERT model, and you get a bunch of representations. There are a lot of options here. 
What I've decided to do is just use the Hugging Face Pooler Output, which is some parameters on top of the class token as the input to the classifier. When we optimize this model, with luck in a productive way, not only will these classifier parameters be updated, but also all the parameters of this BERT model that you loaded in in train mode. The interface is a little bit involved here. So what we do is provide the user with some flexibility about what choices to make. build_graph, again just loads in the module that I showed you just a second ago. And then build_dataset is a bit involved. But what we do fundamentally is use the BERT tokenizer to batch encode our data. And then we do a little bit of processing on the output labels to make sure PyTorch can make sense of them. That's really it. In the heart of this, it's just that we're, again, using Hugging Face functionality to represent our data to the BERT model. And then this is the really interesting part. Calling the forward method and then fitting the classifier on top is pretty much all you need to do. And of course, that opens up a world of options. Reps here has lots of other things that you could use as the input to this classifier layer. And many of them actually might be more productive than the simple approach that I've taken here.
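To make the shape of that fine-tuning module concrete, here is a stripped-down, standalone sketch using only Hugging Face and PyTorch. It deliberately does not reproduce the course's wrapper classes or their fit/predict interface; the model name "bert-base-uncased", the toy inputs, and the choice of pooler_output are illustrative assumptions rather than the only reasonable options.

```python
# A standalone sketch of a BERT fine-tuning module: a pretrained encoder plus a
# small classifier head, with gradients flowing into both during training.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertClassifierModule(nn.Module):
    def __init__(self, n_classes, weights_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(weights_name)
        self.bert.train()  # keep BERT's parameters trainable (not frozen)
        hidden_dim = self.bert.config.hidden_size  # 768 for bert-base
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, input_ids, attention_mask):
        reps = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Many other choices are possible here: the final hidden state of the
        # [CLS] token, a mean over reps.last_hidden_state, and so on.
        return self.classifier(reps.pooler_output)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["A great movie!", "Dull and far too long."],
                  padding=True, truncation=True, return_tensors="pt")
model = BertClassifierModule(n_classes=3)
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1]))
loss.backward()  # gradients reach both the classifier and the BERT encoder
```

In a real system you would wrap this in a training loop (or a course-style fit method) with an optimizer over model.parameters(), so that the BERT weights are updated along with the new classifier layer.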
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Feature_Representation_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 7 in our series on supervised sentiment analysis. The focus of this screencast is on feature representation of data. There are really two things I'd like to do. First, just explore some ideas for effective feature representation in the context of sentiment analysis. And, second, cover some of the core technical concepts that surround feature representation that you'd do well to have in mind as you write new feature functions and optimize models. Let's begin in a familiar place which is N-gram feature functions. To this point in the series of screencasts, I've been just focusing on unigram feature functions. That's also called the "bag-of-words" model, and we can easily generalize that idea to bigrams, and trigrams, and so forth. All of these schemes will be heavily dependent on the tokenizer that you've chosen because, of course in the end, for every example we represent, we are simply tokenizing that example and then counting the tokens in that example. This can be combined of course with preprocessing steps. In part 2 in this series, I covered the preprocessing idea of _NEG marking. Which is essentially to mark words as they appear in a heuristic way in the scope of negative morphemes as a way of indicating that, for example, "good" is positive in normal context but might become negative when it is in the scope of a negation like "not" or "never." We would handle that as a preprocessing step and that would just create more unigrams that our tokenizer would turn into tokens and then would be counted by these feature representation schemes. A hallmark of these feature approaches is that they create very large, very sparse feature representations. You are going to have a column in your feature representation for every single word that appears anywhere in your training data. And another important thing to keep in mind about this approach is that, by and large, they will fail to directly model relationships between features unless you make some special effort to effectively interact these features. All you'll be doing is studying their distribution with respect to the class labels that you have. And it's very unlikely that you'll recover, in any deep way, the kind of underlying synonymy of words like "couch" and "sofa," for example. And this is a shortcoming that we might want to address as we move into distributed representations of examples like deep learn. So for our first technical concept, I would like to just distinguish between feature functions and features. And to do this, I've just got a fully worked out example here using tools from scikit-learn that I think will make the importance of this distinction really clear and concrete. So in cell 1, I've just loaded a bunch of libraries. In cell 2, I've got my standard, kind of lazy unigrams feature function, which is taking in a stringed text, downcasing it, and then simply splitting on whitespace. And then the counter here is just turning that into a count dictionary mapping each token to the number of times that it appears in this example, according to our tokenizer. That would be fine for now. In cell 3, I have a tiny little corpus that has just two words, a and b. In cell 4, I create a list of dictionaries by calling unigrams_phi on each of the texts in my corpus here. So that gives me a list of count dictionaries. In 5, I use a DictVectorizer, as covered in a previous screencast. 
And what that's going to do is when I call fit_transform on my list of feature dictionaries, it will turn it into a matrix, which is the input that all of these scikit machine learning models expect for their training data. And in cell 7, I've just given you what I hope is a pretty intuitive view of that design matrix. Underlyingly, it's just an NumPy array. But if we use pandas, we can see that the columns here correspond to the names of each one of the features. Because we have just two word types in our corpus, there are two columns, a and b, and each of the rows corresponds to an example from our corpus. And so you can see that our first example has been reduced to a representation that has 3 in its first dimension and 0 in its second, corresponding to the fact that it has three a's and no b's. Example 2, a, a, b is represented as a 2 in the first column and a 1 in the second column, and so forth. So that's a first distinction. We have this feature function here, which is like a factory, and depending on the data that come in for our corpus, we're going to get very different features which correspond to each one of the columns in this feature representation matrix. Let's continue this a little bit and think about how this actually interacts with the optimization process. So in cell 7 here, I've just repeated that previous matrix for reference. In cell 8, I have the class labels for our four examples, and you can see there are three distinct classes-- C1, C2, and C3. I set up a logistic regression model, although that's not especially important, it's just a useful illustration. And I call fit on my pair x, y. That is my feature representations and my labels, and that's the optimization process. As part of that, and for a convention for scikit, the optimization process creates this new attribute coef_ and this new attribute classes_. coef_ here, these are the weights that we learned as part of the optimization process, and of course classes_ corresponds to the classes that inferred from the label y that we input. And here I'm just using a pandas data frame again to try to make this intuitive. It's really just a NumPy array, this coef_ object here. And you can see that the resulting matrix has a row for each one of our classes and a column for each one of our features. And that's a useful reminder that what the optimization process for models like this is actually doing is learning a weight that associates class feature name pairs with a weight, right? So it's not just that we learn individual weights for features, but rather we learn them with respect to each one of the classes. And that's a hallmark of optimization for multi-class models like this one. And then in cell 12, I've just shown you that you can actually use the coef_ and this other bias term, intercept_, to recreate the predictions of the model. All you're doing is multiplying examples by those coefficients and adding in the bias term. And this matrix here is identical to what you get in scikit, if you simply directly call predict_proba for predict probabilities on your examples. Let's turn back to what we're trying to do to create good models, having those ideas in mind. So let's just cover a few other ideas for hand-built feature functions that I think could be effective for sentiment. So, of course, we could have lexicon-derived features. I earlier showed you a bunch of different lexicons. And that could be used to group our unigrams. 
So we could have these feature functions work in conjunction with a "bag-of-words" or "bag-of-acronyms" model, or we could use them to replace that model and develop a sparser feature representation space. We could also do the negation marking that I mentioned before, and we could generalize that idea. So many things in language take scope in a way that will affect the semantics of words that are in their scope. So another classical example behind-- besides negation is these modal adverbs like "quite possibly" or "totally". We might have the idea that they are modulating the extent to which the speaker is committed to "masterpiece" or "amazing," in this case. And keeping track of that semantic association with simple underscore marking of some kind might be useful for giving our model a chance to see that these unigrams are different depending on their environment. We can also have length based features, and that's just a useful reminder that these don't all have to be count features. We can have real valued features of various kinds and they could signal something important about the class label. For example, I think neutral reviews, three star reviews tend to be longer than one and five star reviews, so might as well throw that in. And we could expand that idea of float valued features a little bit more. I like the idea of thwarted expectations which you might keep track of as the ratio of positive to negative words in a sentence. The idea being that very often if that ratio is exaggerated, it's telling you the opposite story that you might expect about the overall sentiment. Many, many positive words stacked up together might actually be preparing you for a negative assessment and the reverse. But the important thing about this feature is that it wouldn't decide for you what these ratios mean. You would just hope that it was a useful signal that your model might pick up on as part of optimization to figure out how to make use of the information. And then, finally, we could do things, various kinds of ad-hoc feature functions to try to capture the fact that many uses of language are non-literal and might be signaling exactly the opposite of what they seem to do on their surface. Like, "Not exactly a masterpiece." is probably a pretty negative review. It was "Like 50 hours long." is not saying that it was actually 50 hours long but rather, with hyperbole, indicating that it was much too long or something like that. And "The best movie in the history of the universe." could be a ringing endorsement, but it could just as easily be a bit of sarcasm. Capturing those kind of subtle distinctions is, of course, much more difficult. But the hand-built feature functions that you write where you could try to capture it, and if they have a positive effect, then maybe you've made some real progress. And that's a good transition point to this topic of assessing individual feature functions. As you can see, the philosophy in this mode of work is that you write lots of feature functions and kind of see how well they can do at improving your model overall. You might end up with a very large model with many correlated features and that might lead you to want to do some feature selection to weed out the ones that are not contributing in a positive way. Now, scikit-learn has a whole library for doing this called feature selection, and it offers lots of functions that will let you assess how much information your feature functions contain with respect to the labels for your classification problem. 
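Just to make that concrete before we discuss the caveats, here is a toy sketch (not taken from the course materials; the data are invented) of what per-feature assessment looks like with scikit-learn's feature_selection tools.

```python
# A toy sketch of scoring individual features against the labels with
# sklearn.feature_selection. Rows are examples, columns are non-negative
# feature values, and y holds the class labels.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

X = np.array([
    [3, 0, 1],
    [2, 1, 0],
    [0, 4, 1],
    [1, 3, 0]])
y = np.array([0, 0, 1, 1])

# Score every feature against the label individually...
scores, pvalues = chi2(X, y)
print(scores)

# ...or keep only the k highest-scoring features as a preprocessing step.
selector = SelectKBest(chi2, k=2)
X_reduced = selector.fit_transform(X, y)
print(selector.get_support())  # boolean mask over the original columns
```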
So this is very powerful. And I encourage you to use them, but you should be a little bit cautious. Take care when assessing feature functions individually because correlations between those features will make the assessments very hard to interpret. The problem here is that your model is holistically thinking about how all of these features relate to your class label and figuring out how to optimize weights on that basis, whereas the feature function methods, many of them, just look at individual features and how they relate to the class label. So you're losing all that correlational context. And to make that a little concrete, I just cooked up an example here, an idealized one, that shows how you could be misled. So I have three features, x1, x2, and x3 in a simple binary classification problem. And I use the chi-square test from feature selection to kind of assess how important each one of these features is with respect to this classification problem. And what I found is that, intuitively, it looks like x1 and x2 are really powerful features. And that might lead me to think, well, I'll drop the third feature and include just one and two in my model. So far, so good. However, if we thoroughly explore this space, what we find is that in truth a simple linear model performs best with just feature x1 and actually including x2 hurts the model, despite the fact that it has this positive feature importance value. So what you really ought to be doing is using just this single feature, but these methods can't tell us that. And even a positive feature selection value might actually be something that's at odds with what we're trying to do with our model, as this example shows. So, ideally, what you would do is consider more holistic assessment methods which scikit also offers. This would be things like systematically removing or perturbing feature values in the context of the full model that you're optimizing and comparing performance across those models. This is much more expensive because you're optimizing many, many models. So it might be prohibitive for some classes of models that you're exploring. But if you can do it, this will be more reliable. However, if this is impossible, it might still be productive to do some feature selection using simpler methods. You should just be aware that you might be doing something that's not optimal for the actual optimization problem that you've posed. OK, and the final section of this screencast is a kind of transition into the world of deep learning. I've called this distributed representations as features. This is a very different mode for thinking about representing examples. What we do in this case is take our token stream as before but instead of writing a lot of hand-built feature functions, we simply look up each one of those tokens in some embedding that we have. For example, it could be an embedding that you created in the first unit of this course. Or it could be a GloVe embedding or a static embedding that you derived from BERT representations, and so forth and so on. The important thing is that each token is now represented by a vector and that could be a powerful idea because in representing each of these words as vectors, we are now capturing the relationships between those tokens. We might now have a hope of seeing that "sofa" and "couch" are actually similar features in general and not just with respect to the class labels that we have. So that's the idea why this might be powerful. So we take all those vectors and look them up. 
However, for all these classifier models, we need a fixed dimensional representation to feed into the actual classifier unit. So we're going to have to combine those vectors in some way, and the simplest thing you could do is combine them via some function like sum or mean, right? So take all these things, for example, and take their average and that would give me another fixed dimensional representation, no matter how many tokens are in each one of the examples. And that average vector would be the input to the classifier. So if each one of these vectors has dimension 300, then so, too, does the feature representation of my entire example. And I now have a classifier which is processing feature representations that have 300 columns. Each dimension in the underlying embedding space now corresponds to a feature. And that's the basis for optimization. And I'd say an eye opening thing about this class of models is despite them being very compact-- 300 dimensions versus 20,000 that you might have from a "bag-of-words" model-- they turn out to be very powerful. And this final slide here just shows you how to implement those using tools and other utilities for our course. So I'm going to use GloVe, and I'm going to use the 300 dimensional GloVe space which is included in your data distribution. In 4 and 5 here, we just write simple feature functions and the hallmark of these is that they are simply looking up words in the embedding and then combining them via whatever function they use or specifies. So the output of this is directly a vector representation of each example. In cell 6, we set up a logistic regression, as before. Of course, it could be a much fancier model but logistic regression will do. And then we use sst_experiment almost exactly as before. The one change we need to remember to make in operating in this mode is to set the flag of vectorized equals false. We already have each example represented as a vector, so we do not need to pass it through that whole process of using a DictVectorizer to turn count dictionaries into vectors. And as I said before, these turn out to be quite good models despite their compactness. And the final thing I'll say is that this model is a nice transition into the recurrent neural networks that we'll study in the final screencast for this unit, which essentially generalize this idea by learning an interesting combination function for all the vectors for each of the individual tokens.
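As a small supplement to that recipe, here is a self-contained sketch of the averaging idea itself, independent of the course utilities. Here glove is just assumed to be a dict mapping words to fixed-dimensional numpy arrays, however you choose to load it; the back-off for unknown words is one simple choice among many.

```python
# A minimal "average the word vectors" feature function. Unknown words are
# skipped, and an all-unknown example backs off to the zero vector.
import numpy as np

def glove_mean_phi(text, glove, dim=300):
    toks = text.lower().split()
    vecs = [glove[w] for w in toks if w in glove]
    if not vecs:
        return np.zeros(dim)       # back-off for examples with no known words
    return np.mean(vecs, axis=0)   # one fixed-length vector per example

# Toy usage with a fake two-dimensional "embedding":
glove = {"good": np.array([1.0, 0.0]), "movie": np.array([0.0, 1.0])}
print(glove_mean_phi("A good movie", glove, dim=2))   # -> [0.5 0.5]
```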
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Varieties_of_contextual_grounding_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome to part 4 in our series on grounded language understanding. Our topic is varieties of contextual grounding. What I'd really like to do is make connections with additional tasks as a way of drawing out what I think is one of the central insights behind the work that we're doing, which is that speakers should try to be informative in context. Let me explain a bit more about what that means. So our task is this task of color reference in context. The speaker is given three color patches, one of them designated the target, and the speaker's task is to communicate which of the three is the target to the listener who's in the same context, but of course doesn't know what the target is. And what I think you see running through the human data here is that speakers are striving to be informative in context. In this first case, the speaker can just say "blue" because the contrasts are so clear. But in the second case, merely saying "blue" would be really unhelpful. It would be uninformative in the context because there are these two blues. And as a result, the speaker is pushed to do something more interesting, the darker blue one, making implicit reference to the context in an effort to communicate effectively with the listener. And that communication aspect, I think, can be so powerful and runs through lots of tasks, both ones that explicitly involve communication and ones that involve a more general setting. One case of the latter is I think discriminative image labeling, which is tackled in this lovely paper, Mau et al. 2016. The task here is given an image to label entities that are in those images. And for many, many contexts, it would be a shame if our goal was to label these two entities here and we simply called them both dog. It's uninformative in the sense that it doesn't distinguish the two entities in the context of this picture. What we might hope is that we would get fuller descriptions, like a little dog jumping and catching a Frisbee and a big dog running, fuller descriptions in the sense that they provide more detail that distinguishes the two dogs. And we could extend that to full image captioning as well. Again, given these three images, it would be a shame if our image captioning system just labeled them all dog. We might have the intuition that we would like the image captioning system to produce descriptions of these images that would help a listener figure out which image was being described. And we might have a further goal for this image captioning system that as we change the set of distractors, it's sensitive to that and produces different descriptions, trying to be informative relative to these new contexts that we're creating, amplifying some kinds of information and leaving out other kinds of information to the extent that they would help the listener achieve that task of figuring out which image was being described. Machine translation is another area that might benefit from this notion of informativity and context. This was explored in a lovely paper by Reuben Cohn-Gordon and Noah Goodman in 2019. So let's say our task is to go from English to French. Reuben and Noah just observed that at the time, these two English inputs, "she chopped up the tree" and "she chopped down the tree" were both mapped to the same French translation, which is a shame given how different those two English inputs are in terms of their meanings. What we would like is to have the English inputs map to different French sentences. 
And their intuition about how to achieve that would be to achieve some kind of invariance so that given the translation from English to French, we should be able to do the reverse, figure out from the French which underlying English state was being, quote, referred to in this context. So its language on both sides, but it's drawing on this idea that we want translations that are informative in the sense that they would help someone figure out what the original system input was. Same guiding idea, drawing on this metaphor of communication, but now to achieve good translations. And in other domains, it's just very intuitive to think about informativity in context. So Daniel Fried et al, we have a lovely paper exploring how to give navigational instructions drawing on pragmatic ideas like informativity and context. And for example, they have both speaker and listener agents, and they observed that the base speaker is true but uninformative, whereas their rational speaker, which brings in pragmatic ideas, is more sensitive to the kinds of information that a listener would need to follow an instruction. And the same thing is true on the listener side. The base listener is unsure how to proceed, but the rational listener was able to infer that since this instruction didn't mention this couch over here, it was probably not relevant to the instruction, and therefore, this listener stops at this point in interpreting the navigational instructions. And Stefanie Tellex and colleagues have explored this idea in the context of human-robot interaction. They've called their central mechanism inverse semantics. And this is, again, just the intuition that a robot producing language ought to produce language that reduces ambiguity for the human listener. In this context here, where the robot is trying to get help from the human, it shouldn't just say help me. The human won't know how to take action. But it also shouldn't do something simple like, hand me the leg. The robot should be sensitive to the fact that there are multiple table legs in this context, and the robot needs to ensure that the human listener is not faced with an insurmountable ambiguity. And that would, therefore, push this robot, in being aware of the listener state, to produce descriptions that were more like hand me the white leg on the table, fully disambiguating from the perspective of the listener. And I'd like to push this idea of informativity in context even further by connecting with one of the classic tasks in machine learning, which is optical character recognition. Even this task, I believe, can benefit from notions of contrast and informativity in context. On the left, I have four digits, and you can see that this is a speaker who puts little hooks at the top of their 1s and slashes through their 7s. And those two pieces of information would help us disambiguate the final digit and infer that it was a 1. On the right here, we're pushed in a very different direction. This is a speaker who does not put hooks on the top of their 1s or slashes through their 7s, and that would lead us to think that this final digit here is a 7. 
Notice that, in terms of what's actually on the page, these two digits are identical, but the context is what's leading us in very different directions, and we can assume that at some fundamental level, the speaker is going to be informative in the sense that they're going to write in ways that are consistent and draw an intended contrast between their digits, and that's what guides us toward what are ultimately the correct classification decisions, even for this apparently mechanical-seeming environment.
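To close, here is one toy way to make "be informative in context" concrete in code. The numbers and the simple normalization are invented purely for illustration; this is not the course's pragmatic-agent implementation, just a sketch of the intuition that a context-aware speaker prefers the utterance that best singles out the target relative to the distractors.

```python
# A toy numerical sketch of an informative speaker for the color task. Rows are
# utterances, columns are the three color patches, and each cell says how well
# the utterance fits that patch on its own (a made-up "literal" semantics).
import numpy as np

utterances = ["blue", "darker blue"]
fit = np.array([
    [0.9, 0.9, 0.1],   # "blue" fits both blue patches well
    [0.8, 0.2, 0.0]])  # "darker blue" fits only the darker patch

# Literal listener: given an utterance, distribute belief over the three patches.
listener = fit / fit.sum(axis=1, keepdims=True)

target = 0  # the darker blue patch
# Informative speaker: choose the utterance under which the listener is most
# likely to recover the target in this particular context.
best = max(range(len(utterances)), key=lambda u: listener[u, target])
print(utterances[best])   # -> "darker blue", even though plain "blue" also fits
```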
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Static_Representations_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone, welcome to our final screencast in our unit on distributed word representations. Our topic is going to be deriving static representations from contextual models. That might sound awfully specific, but as you'll see, I think this could be really empowering for you as you work on your original system for your assignment and the associated bake-off. So let's dive in. A question on your minds might be, how can I use BERT or related models like RoBERTa or XLNet or ELECTRA in the context of deriving good static representations of words? You probably have heard about these models and heard that they lift all boats and the question is, how can you take advantage of those benefits? But there's a tension here. We've been developing static representations, but these models like BERT are designed to deliver contextual representations of words. And I'll return to what that means in a second, but that is the central tension between static and contextual. So the question is, are there good methods for deriving static representations from the contextual ones that these models offer? And the answer from Bommasani et al is yes. They are effective methods for doing this and it's those methods that will be the focus of this screencast. I really want to do two things though for this lecture. I would like to get hands on a little bit with a high-level overview of models like BERT. We're going to look later in the quarter in much more detail at how these models work. So for now, we're just going to treat them as kind of black box. It's just like you might look up a GloVe representation of a word and just get back that representation and use it. So too here, we can think of these models as devices for feeding in sequences and getting back lots and lots of representations that we might use. And later in the quarter, we'll come to a deeper understanding of precisely where those representations come from. And in addition, of course, I want to give you an overview of these exciting methods from Bommasani et al in the hopes that they are useful to you in developing your original system. So let's start with the structure of BERT. BERT processes sequences, here I've got a sequence, the class token, the day broke, SEP, class and separate designated tokens, the class token typically starts the sequence and then SEP ends the sequence. It can be also used internally in sequences to mark boundaries within the sequence that you're processing. But the fundamental thing is that we have the short sentence, "the day broke." BERT processes those into an embedding layer and then a lot of additional layers. And here I depicted 4, but it could be 12 or even 24 layers. What we're seeing here, the rectangles represent vectors. They are the outputs of each layer in the network. A lot of computation goes into computing those output vector representations at each layer. We're going to set that computation aside for now so that we can just think of this as a grid of vector representations. Here is the crucial thing that makes BERT contextual. For different sequences that we process, we will get very different representations. In fact, individual tokens occurring in different sequences will get very different representations. I've tried to signal that with the colors here so like the two sequences both contain the word "the" and the word "broke." 
But in virtue of the fact that they have different surrounding material and different positions in the sequence, almost all of the representations will be different. The class and SEP tokens might have the same embedding, but through all of these layers because of the way all these tokens are going to interact with each other when we derive the representations, everything will be different. We do not get a static representation out of these models. And I've specified that even in the embedding layer, if the positions of the words vary, one and the same token will get different representations. The reason for that is that this embedding layer is actually hiding two components. We do at the very center of this model have a fixed static embedding, where we can look up individual word sequences. But for this thing that I've called the embedding layer, that static representation is combined with a separate positional encoding from a separate embedding space, and that delivers what I've called the embedding layer here. And that means that even at this first layer because, for example, "the" occurs in different points in the sequence, it will get different representations even in the embedding space. And from there, of course, as we travel through these layers, we expect even more things to change about the representations. A second important preliminary is to give some attention to how BERT and models like it tokenize sequences. And here I'm giving you a bit of code in the hopes that you can get hands-on and get a feel for how these tokenizers behave. I'm taking advantage of the Hugging Face library. I have loaded a BERT tokenizer and I load that from a pre-trained model. In cell 3, you can see that I've called the tokenize function on the sentence. This isn't too surprising and the result is a pretty normal looking sequence of tokens. You see some punctuation has been separated off, but you also see a lot of words. When you get down to cell 4, though, for the sequence "encode me" this is a bit surprising. The word encode in the input that's been broken apart into two subword tokens, "en" and then "code" with these boundary markers on it. BERT has broken that apart into two subword sequences. And if I feed in a sequence that has a really unfamiliar set of tokens in it, it will do a lot of breaking apart of that sequence, as you can see in cell 5 for the input "snuffleupagus" where a lot of these pieces have come out. This is the essential piece for why BERT is able to have such a small vocabulary, only about 30,000 words, compare that with the 400,000 words that are in the GloVe space. The reason it can get away with that is that it does a lot of breaking apart of words into subword tokens. And of course, because the model is contextual, we have an expectation that, for example, when it encounters code here in the context of "en" at some conceptual level, the model will recognize that it has processed the word "encode" even though there was two tokens underlining it. Let's flesh this out a bit by looking at the full interface for dealing with these models and, again, taking advantage of Hugging Face. I'm going to load a BERT model and a BERT tokenizer. It's important that they use the same pre-trained weights, which Hugging Face will download for you from the web. And so those are tied in and I set up the tokenizer and the model. If I call tokenizer.encode on a sequence, it will give me back a list of indices. 
And those indices will be used as a lookup to start the process of computing this entire sequence. In cell 6, I actually use the model to derive that grid of representations. Hugging Face is giving us an object that has a lot of attributes. If I call output_hidden_states equals true when I use the model here, then I can call .hidden_states and get that full grid of representations that I showed you before. This is a sequence with 13 layers. That's 1 embedding layer plus 12 of the additional layers. And if I key into the first layer, that will be the embedding. You can see that its shape is 1 by 5 by 768. This is the batch of one example. It has five tokens. The three that we can see here plus the class and SEP tokens. And each one of those tokens in the embedding layer is represented by a vector of dimension 768. And that remains consistent through all the layers in the model. So if I want the final output states, I again just index into .hidden_states here. The shape is the same and that will be consistent for all the layers. Those are the preliminaries. And let's think about how we could derive some static representations. The first approach that Bommasani et al considered is what they call the decontextualized approach and this is like the simplest thing possible. We are just going to process individual words as though they were sequences and see if we can make any sense of them. So we would start by feeding in a word like "kitten" and we would allow the model to break it apart into its subword pieces. And then we simply process that with the model, we get a full grid of representations. Now because we potentially have subword tokens here, we need some pooling function. So what we can do is just pool using something like mean to get a fixed static representation of dimension 768 for this individual word. And of course, we don't have to use the final layer, we can use lower down layers. And we don't have to use mean as the pooling function. You could consider something like max or min or even last, which would just disregard all of the representations except for the one corresponding to the final subword token. This is really simple. It's potentially unnatural, though. BERT is a contextual model. It was trained on full sequences. And especially if we leave off the class and SEP tokens, we might be feeding in sequences that BERT has really never seen before. And so it might be unknown how it's going to behave with these unusual inputs. Nonetheless, though, we could repeat this process for all the words in our vocabulary and derive a static embedding space and maybe it has some promise. However, to address this potential unnaturalness and potentially take more advantage of the virtues that BERT and related models have, Bommasani et al consider also the aggregated approach. So in this approach, you process lots of corpus examples that contain your target word. You've got that sort of glimpse of a corpus. Our target word is kitten, of course, and we allow it to be broken apart into subword tokens. The full sequences in these examples would also be broken apart into subword tokens. But the important thing is that our target word might have subword tokens. We pool those as we did before for the decontextualized approach and we're also going to pool across all of the different context examples that we processed. And the result of that should be a bunch of natural inputs to the model. 
But in the end, we derive a static representation that is some kind of average across all of the examples that we processed. This seems very natural. It's taking advantage of what BERT is best at. I will warn you, though, that this is very computationally demanding. We're going to want to process lots of examples, and BERT requires lots of resources because it develops really large representations as we've seen, but it might be worth it. Now, Bommasani et al offer lots of results that help us understand these approaches and how they perform. Let me give you a glimpse of them as a kind of summary. So what we've got here is results for the SimVerb 3,500 dataset, a word similarity data set that's very similar to the ones that you'll be working with on the homework and bake-off. Our metric is Spearman correlation and higher is better. That's along the y-axis. And along the x-axis, I have the layer in the model that we're keying into. And then, of course, what we should watch is that we have two pooling functions f and g. f is subword pooling and g is context pooling for models that have it, and it's decont for the decontextualized approach. Now we have a very clear result across these results, and I think across all the results in the paper. Lower layers are better. Lower layers are giving us good high fidelity representations of individual words. As we travel higher in the model we seem to lose a lot of that word level discrimination. In addition, your best choice is to do mean pooling for the context and subword pooling seems to matter less, right? All of these lines here are all for the context pooling model with mean as your context pooling function. The very best choice, though, I think consistently is mean for both of these pooling functions here. You can see that in this result, and I think that's consistent across all the results in the paper. But the overall takeaway here is that, as expected, the aggregated approach is better than the decontextualized approach. However, if you don't have the computational budget for that, then mean pooling and the decontextualized approach looks really competitive. That's not so evident in this spot, but if you look across all the results in the paper, I think that's a pretty clear finding. So that would be a good choice. And one thing is clear, that simple approach is better than some kinds of context pooling where you choose the wrong context pooling function like min or max. Despite all of the effort that went into this set of results and also these, they're all kind of down here entangled with the decontextualized approach. But mean as the pooling function there is really an outstanding choice, as you can see from these results.
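For reference, here is a minimal sketch of the decontextualized approach described above, written directly against the Hugging Face interface we walked through earlier. The model name, the choice of layer, and the decision to drop the class and SEP positions before pooling are all illustrative assumptions; Bommasani et al.'s results suggest experimenting with lower layers and with mean pooling in particular.

```python
# A minimal decontextualized static representation: run one word through BERT
# and mean-pool its subword vectors at a chosen layer.
import torch
from transformers import BertModel, BertTokenizer

weights_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(weights_name)
model = BertModel.from_pretrained(weights_name, output_hidden_states=True)
model.eval()

def decontextualized_rep(word, layer=1):
    enc = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**enc).hidden_states  # embeddings + 12 layers
    # Drop the [CLS] and [SEP] positions, then mean-pool the subword vectors.
    subword_vecs = hidden_states[layer][0, 1:-1]
    return subword_vecs.mean(dim=0)  # a single 768-dimensional static vector

vec = decontextualized_rep("kitten")
print(vec.shape)  # torch.Size([768])
```

The aggregated approach would wrap this same pooling step in a loop over corpus sentences containing the target word, and then average the per-occurrence vectors, which is why it is so much more computationally demanding.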
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Homework_3_Colors_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This screencast is an overview of the homework and bakeoff associated with our unit on grounded language understanding. More than any of the other assignments, what we're asking you to do here, is essentially develop a fully integrated system, that addresses our task. So the distinction between homework questions and original system questions is kind of getting blurred here in the interest of having you devote all your energy to developing a cool original system for this problem. So because of that I'm going to use some slides to give you an overview of the entire problem. And how we're thinking about evaluation and how the questions feed into these overall goals. So recall that our core task is the Stanford colors and context task. And we're going to take the speaker's perspective primarily. And what that means is that the inputs to our model are sequences of three color patches. One of them designated as the target, and the task is to generate a description of the target, in that particular context. The core model that will be using, which is in torch_color_describer is an Encoder/Decoder architecture. And the way it works is on the encoder side you have a sequence of three colors. And we always put the target color in the third position. So those are the inputs, and then the decoding step is essentially to describe the target in that context. So that's the natural language generation part and we've covered this core architecture in previous screencasts. And I'll return to some of the modifications that you see here in the context of question four. There's a separate notebook called Colors Overview that you should start with. It gives you a sense for what the dataset is like, and also what our modeling code is like. Here you can see that I've loaded in the corpus itself, it's got about 47,000 examples in it. And each one of those examples has a number of different attributes that you should be aware of. So here's a typical example, the first one in the corpus. Fundamentally, you have these three color patches. And you can see this display is marking out the target, as well as an utterance. Each one of these colors is encoded as a triple and HSV values. That is a sequence of three floats, and you can see here that you can also access the utterance. There are three conditions in the underlying corpus that vary in their difficulty. In the far condition, all three of the colors are quite different from each other. And so the task of identifying the target is typically pretty easy, here the person just had to say purple. In the split condition, two of the colors are highly confusable. So you can see here that we have two green colors. And that pushed the speaker to choose a kind of more specified form of green in saying lime. And the hardest condition is the close condition, and that's where all three of the colors are highly similar to each other. This tends to lead to the longest descriptions. You can see here that the speaker even took two turns, as indicated by this boundary marker. To try to give their full description; medium pink, the medium dark one. Because these colors are so confusable. So you should be aware of this difference in the conditions. And it might affect how you do different kinds of modeling based on what the color sequence is like. Now evaluation for natural language generation systems is always challenging. And there are some automatic metrics that we can use as guideposts. In fact we're going to use BLEU in various places. 
But our primary evaluation metric will be this task oriented one, which brings in a listener perspective. So at a mechanical level, here's how we'll make predictions. For a given context c consisting of three colors, capital C here is all the permutations of those three colors. Suppose that you have trained a speaker model, PS here; it's a probabilistic agent. We're going to think about how it makes predictions for all of those different permutations, and take as its prediction, at the level of a full sequence, the sequence that it assigns the highest probability to, given the message that your system produced. And then we say that a speaker is accurate in its prediction about some context c just in case the best sequence that it predicts, the highest probability one, has the target in the final position, as designated by our model structure. So in a little bit more detail, here's how this works with an example. Suppose that our context looks like this; it has these three color patches. The target is always in third position, and our message was blue. Here on the right we have all the permutations of these three colors. And we're going to say that your system was correct if its highest probability context given that message was one of these two, that is, one of the two that has the target in final position. And the system is inaccurate to the extent that it assigns higher probability to one of these other sequences. Essentially, we're saying that it's assigning higher probability to some other target. But we do operate at the level of these full sequences. All right, now let's move into the questions here, and we first start with the tokenizer. You're unconstrained in how you design your tokenizer. You should just make sure that you have a start symbol and an end symbol. The start symbol is important conditioning context for the model, and the end symbol is the crucial signal that your model will actually stop producing tokens. So don't forget those pieces, but in terms of what else you do in there, it's unconstrained. And I think you can see from the Monroe et al. work that making smart choices about tokenization might be really meaningful. Question two asks you to think about how you're representing colors. By default they're just going to be those three float values, but that's probably not optimal. In the Monroe et al. paper, we explored a Fourier transform as a way of embedding colors, and I've given you a little recipe for that in the context of the notebook in case you want to explore that. It is highly effective, but this is optional, and there might be other representation schemes that are even better and worth exploring. Question three asks you to think about rich initialization or pre-training for your model. We've worked a lot with pre-trained GloVe embeddings, and this is a chance for you to bring those into your model and see how well they do. You should be aware that this step is going to interact in non-trivial ways with choices you make for your tokenizer. And question four is the most involved; it involves some real PyTorch wrangling. Conceptually, what we're asking you to do is borrow a trick from the Monroe et al. paper. What we found in that work is that it helped to remind the model, during decoding, of which of the three colors was its target. And the way we did that, essentially, was by taking the color embedding for the target and appending it to the embedding of each one of the tokens that it was producing, as a kind of reminder.
In terms of how that works at the level of code, there is a decoder class, and you should modify it so that the input vector to the model at each timestep is not just the token embedding, but the concatenation of that embedding with the representation of the target color. Then you need to modify the Encoder/Decoder class to extract the target colors and feed them to that decoder class. And then finally here, this is the interface that you use. Modify that interface so that it uses your decoder and encoder, and that's a pretty mechanical step. When you're developing on this problem, use toy datasets, because you don't want to wait around as you process the entire colors corpus, only to find out that you have a low-level bug. And I also encourage you to lean on the tests that we have included in the notebook as a way of ensuring that you have exactly the right data structures. And assuming all those pieces fall into place, I think you'll find that the resulting models are substantially better for our task. That brings us to the original system. And here's just some expectations about how we think you might work on this problem. You could iteratively improve your answers to the assignment questions as part of the original system. Modify the tokenizer, think about your GloVe embeddings, think about how you're representing colors, and kind of how all those pieces are interacting. You might want to extend the modified Encoder/Decoder classes to do new and interesting things, and I have provided guidance on how to do that at a mechanical level in the Colors Overview notebook. Any data that you can find is fine to bring in for development and for training your original system. The bake-off involves a new test set that's never been released anywhere before; it's just used in this context. It's got the same kinds of color contexts as in the released corpus, but it was collected from one-off games rather than iterated games. And I do think that makes this test set a little bit easier than the training set. And all the items have been listener-validated, so I think all the descriptions are in principle good descriptions at a human level. And so it should be a good basis for evaluation.
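To make that listener-based evaluation concrete, here is a minimal Python sketch of the accuracy computation described above. The speaker_score interface is a hypothetical stand-in for however your trained model exposes sequence probabilities; it is not the actual utility from the course repository, so treat it as a template under those assumptions.

import itertools

def listener_accuracy(speaker_score, contexts, messages):
    """Fraction of examples where the model's top-scoring permutation
    has the true target in the final position.

    speaker_score(colors, message) -> a (log) probability of `message`
        given the three `colors`, target assumed last (hypothetical).
    contexts: list of three-color sequences with the target in final position.
    messages: the utterance your system produced for each context.
    """
    correct = 0
    for colors, message in zip(contexts, messages):
        target = colors[-1]
        # Score every ordering of the three colors under the message.
        scored = [
            (speaker_score(list(perm), message), perm)
            for perm in itertools.permutations(colors)
        ]
        best_score, best_perm = max(scored, key=lambda pair: pair[0])
        # Correct iff the top-scoring permutation puts the target last.
        correct += int(best_perm[-1] == target)
    return correct / len(contexts)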
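And to make the question four modification concrete at the level of code, here is a minimal PyTorch sketch of a decoder whose input at each timestep is the token embedding concatenated with the target color representation. The class and parameter names here are my own illustrative choices, not the actual decoder class from torch_color_describer, so adapt it to the real interfaces described in the notebook; in the actual assignment you would also thread the target colors through the Encoder/Decoder wrapper as discussed above.

import torch
import torch.nn as nn

class ColorContextDecoder(nn.Module):
    """Illustrative decoder: each RNN input is [token embedding ; target color]."""

    def __init__(self, vocab_size, embed_dim, color_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # The GRU input size accounts for the appended color vector.
        self.rnn = nn.GRU(embed_dim + color_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids, target_colors, hidden=None):
        # token_ids: (batch, seq_len); target_colors: (batch, color_dim)
        embeddings = self.embedding(token_ids)
        # Repeat the target color at every timestep as the "reminder".
        expanded = target_colors.unsqueeze(1).expand(-1, embeddings.shape[1], -1)
        inputs = torch.cat([embeddings, expanded], dim=-1)
        outputs, hidden = self.rnn(inputs, hidden)
        return self.output(outputs), hidden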
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Neural_RSA_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part six in our series on Grounded Language Understanding. We're going to be talking about neural RSA, which is our combination of the Rational Speech Acts model with the kind of machine learning models that we've been focused on for this unit. And I'm hoping that this draws together a bunch of themes from our earlier screencasts, and also sets you up, if you choose to, to apply these ideas in the context of an original system or a final project. I'm going to be talking in a general way about these ideas. They really emerge from the papers that are listed on this slide, and the full references are given at the end of the slide show. Now, what's our motivation here? Recall that in screencast four, I presented a bunch of tasks that I claimed would benefit from the back and forth reasoning that RSA offers grounded in specific contexts. And those tasks included discriminative image labeling, image captioning, machine translation, collaborative problem solving, interpreting complex descriptions, especially navigational instructions, and maybe even optical character recognition. And I think we can think of other tasks that we could put into the mold, like the colors in context task, and really benefit from the mechanisms that RSA offers. However, as we saw at the end of the RSA screencast, there are some obstacles to doing this. RSA is standardly presented as not especially scalable. It's also not especially sensitive to the kind of variation that we're likely to see in actual usage data in the large-scale corpora that would support these tasks. And relatedly, it just doesn't have any notion of bounded rationality, even though, of course, once humans interact, they're not perfectly rational, even in the pragmatic sense that RSA offers. And there's another dimension to this problem for motivation here, which is that RSA harbors a really powerful insight. And we might hope that we can achieve more impact for that model by bringing in new kinds of assessment for it. You know, taking it out of the psychology and linguistics lab and into the world of AI. And in turn, achieve more impact for RSA. And maybe show more of the scientific world that RSA has a really powerful insight behind it. But, of course, to realize all of this potential, we're going to have to overcome some of those core issues and scalability. And that's what I'll show you here. I think I can offer a simple recipe for doing that and testing out a lot of these ideas. To make this concrete, let's continue to ground our discussion in our core task, which is this colors in context task. Just recall that, if you're playing the speaker role, you're presented with three color patches, one of them privately designated as your target. And your task is to describe that target in that context for a listener. And then, in turn, the listener task is given the three patches, and no idea which one is the target, and a speaker utterance. Use that utterance to figure out which was the speaker's target. And so you can hear in that description that this is potentially a kind of communication game, and would support the back and forth reasoning that is the hallmark of the Rational Speech Acts model. So how are we going to take this task and RSA and combine them? Well, the first step is straightforward. We're going to start with a literal neural speaker. I've given that as S theta up here with literal indicating that it's a base agent. 
And for this, it's just going to be exactly the natural language generation system that we explored in the earliest parts of the screencast, right? Except now, we're going to consume three color patches with the target always given in the final position. And then the decoding task is to offer a description. And we can make a lot of different model choices here, but the fundamental insight is that we can now treat this agent as a kind of black box base listener. Instead of having to hand specify a semantic grammar, which would be impossible even for the task the size of the colors in context dataset, we now just train an agent, and use it to play the role of the base agent. And we can, of course, do the same thing for the neural literal listener, who will, again, have some parameters theta, which will represent this entire encoder/decoder architecture. This neural literal listener will process incoming messages as a sequence. And then given some context of colors and a scoring function, make a guess about which one of those three colors the message that it had as input was being referred to. And again, instead of hand specifying the lexicon, we just treat this agent as a black box. It serves the role of the literal listener. And from there, the RSA recursion, so to speak, is very easy to apply. Let's consider the base case of a pragmatic speaker. So you can see over here, we're going to use our trained literal listener. And this is the most basic form that the speaker can have. And we've just gotten now a pragmatic agent that is reasoning about states of the world as inputs and making message choices on that basis. And it's doing that not in terms of the raw data, but rather in terms of how the literal listener would reason about the raw data, so that core RSA insight. But we're just essentially using L0 here as the mechanism to derive the speaker distribution. Now, there is one catch here as we discussed. In principle for RSA, this would be a summation over all messages, which would be completely intractable for any realistically-sized language model. What we can do to overcome that obstacle is simply use our trained literal speaker, which I presented before, and sample utterances from it. And that small sample will serve as the basis for this normalization down here. So it's an approximation, but it's an easy one, given that we have this trained agent down here. And in practice, we've seen that it does quite well in serving as the normalization constant. And then the neural pragmatic listeners are even more straightforward, having defined that pragmatic speaker. To put a listener on top of that is really easy. Again, you essentially just apply Bayes' rule, and you get a listener out. And in the Monroe et al paper as you've seen, we actually found that weighted combinations of the literal listener and the pragmatic listener were the best at the colors in context task. But let me just close up by mentioning a few other related strands of work that you might think about bringing in. And what I just showed you is the most basic form of this, but many extensions have been explored. So Golland et al 2010 is a really early paper in the history of these ideas that is quite forward-thinking. They explore recursive speaker listener reasoning as part of interpreting complex utterances compositionally, with grounding in a simple visual world. And I love the connection with semantic composition. This Wang et al 2016 paper does even more of that. 
Pragmatic reasoning helps in online learning of semantic parsers. I mentioned before, work by Stefanie Tellex and colleagues on what they call inverse semantics, which is a simple RSA mechanism applied in the context of human-robot interaction to help humans and robots collaborate more efficiently. Khani et al extend this to more free form social interaction by showing that RSA has a role to play in collaborative games. I mentioned before this work by Reuben Cohn-Gordon and Noah Goodman on RSA for translation. Reuben Cohn-Gordon did a lot of innovative work as part of his PhD in the context of RSA. He also explored applying RSA at the word and character level, so removing the approximation that we sample from S0 speaker to create the denominator. Rather, instead, he applies RSA at every single timestep in a left-to-right sequential decoding step. And that timestep could be either the word level or surprisingly, it was very effective at the character level. And then these final two papers here just show that we could move out of the mode of pre-training the base agents and applying RSA on top. And instead, have a mechanism of end-to-end RSA learning, which is more ambitious in terms of learning and model set up. But provides more chances for us to be responsive to the nature of actual usage data, while still making good on the central insights of RSA, and with luck, seeing some empirical benefits from doing that.
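To make the sampling-based approximation from this screencast a bit more concrete, here is a minimal numpy sketch of deriving a pragmatic speaker distribution from a trained literal listener, normalizing only over a small set of candidate utterances sampled from the literal speaker. The sample_utterances and literal_listener_prob names are hypothetical placeholders for whatever interfaces your trained agents expose, and message costs and the alpha temperature are omitted for simplicity.

import numpy as np

def pragmatic_speaker_probs(context, target_index, sample_utterances,
                            literal_listener_prob, num_samples=50):
    """Approximate S1(message | target, context) using a trained L0.

    sample_utterances(context, n) -> n candidate messages drawn from the
        trained literal speaker (hypothetical interface).
    literal_listener_prob(message, context) -> array of probabilities over
        the colors in `context` (hypothetical interface).
    """
    candidates = sample_utterances(context, num_samples)
    # L0's probability of the target referent under each candidate message.
    scores = np.array([
        literal_listener_prob(msg, context)[target_index]
        for msg in candidates
    ])
    # Normalize over the sampled candidates only; this stands in for the
    # intractable sum over all possible messages.
    probs = scores / scores.sum()
    return list(zip(candidates, probs))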
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Adversarial_Training_and_Testing_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone to part 3 in our series on analysis methods in NLP. We're going to be talking about adversarial training as well as testing of systems. This is the second of the behavioral evaluation methods that we're considering. We've previously talked about adversarial testing. Adversarial training and testing, of course, implies that we have much larger datasets, that this is more difficult to do. But for selected tasks where we have such datasets, this can be very exciting and push you to address all sorts of interesting cutting edge questions. I'll start with SWAG. This is an early entry into the space of adversarial training sets. SWAG stands for Situations With Adversarial Generations. There's actually two data sets, SWAG and the colorfully named HellaSWAG. And you'll see why there are two in a second. This is fundamentally, again, another interesting story of very rapid progress in our field. Here's how SWAG examples work. We're given as a system input a context like, "he is throwing darts at a target" and another system input, which is the start of a sentence, here it's, "another man." And the task of the system is to figure out what the continuation should be. So the actual continuation that we predict might be, "throws a dart at the target board." And this is fundamentally a classification task, the system is given some distractors like, "comes running in and shoots an arrow at a target," or "is shown on the side of men," or "throws darts at a disk." And the system is tasked with figuring out which of the options is the actual continuation for the sentence, given the context. The data sources for this are ActivityNet and the Large Scale Movie Description Challenge. I think the idea here is that we're going to key in to all sorts of interesting notions of common sense reasoning. Now here's where the adversarial part of this comes in, we're going to do adversarial filtering for SWAG. For each of the examples in our corpus, and there are over 100,000 examples in SWAG, we're going to be given the system input like, "the mixture creams the butter. Sugar." And then we'll have a generator model- in the case of SWAG, this was an LSTM produce some distractors for the target. So let's suppose that the actual target continuation is added, we'd have a model produce "is sweet" and "is in many foods." And then we have the filtering model. If it guesses correctly for "is added," then we're going to drop out this entire example and we'll create some new distractors like "is sprinkled on top" or "is in many foods." And in this case, if the model guesses incorrectly like suppose it chooses b in this case, then we'll keep this example because relative to the current models for the thing we're using to generate these distractors and the thing that we're using to filter, this is a challenging example. And the idea is that we can repeat this for a bunch of iterations, continually retraining the filtering model so that it gets better and better and therefore, ending up with a dataset that is really, really difficult in terms of the current models that we had available to us. Here's a picture of test accuracy. This is interesting here. They actually did an ensemble of filtering models to try to key into different notions that might be indicating which is the correct continuation. So they start by using just a multi-layer perceptron for efficiency, and then they bring in all of these ensembles. 
And you can see the test accuracy as we do this iterative filtering very quickly goes down so that by iteration 140 we're at 10% accuracy. So that's the sense in which this is a very difficult dataset because given the generator model and the filtering model that we have available to us, we have a dataset that is very difficult in terms of a classification task. So that looks really exciting and challenging and I think the authors expected this dataset to last for a very long time. However, the BERT paper, the original BERT paper, did evaluations on SWAG and essentially solved the problem. BERT Large got 86.6 and 86.3% on the dev and test sets for SWAG respectively, a very unexpected result given that I just showed you that the SWAG authors got about 10% with their current models. And even closely related models to BERT like this ESIM model here, were really pretty low in their performance. So BERT looked like a real breakthrough and you can see that it's in some sense superhuman relative to the SWAG estimates. So wow. So, of course, we know what the response should be given that we're talking essentially about model-in-the-loop adversarial dataset creation. That leads us to HellaSWAG. They made some changes to the dataset that they use for HellaSWAG, but I would say the fundamental thing is that we do the same adversarial filtering with the generator, except now we have much more powerful filtering and generator models, thanks to developments related to transformers. So for HellaSWAG, we again have human performance that's really good. This is very reassuring because we are using much more powerful models at step 4. As you can expect, BERT is no longer easily able to solve this problem. Here's a further summary of the results with BERT Large before I remember that it's essentially solved SWAG. Now it's down around 50% which shows that it still gets traction but is nothing like the superhuman performance that we saw for SWAG. OK. Now let's move into a slightly different mode, and this is going to be a kind of human-in-the-loop adversarial dataset creation method. The first entry in this space was the adversarial NLI data set, I think this is a really visionary and exciting paper. Adversarial NLI is a direct response to the previous things that we've seen with the SNLI and multi-NLI datasets where models seem to do well on those benchmarks but are easily susceptible to simple adversaries. With adversarial NLI, we're going to hopefully push systems to be much more robust to those adversaries and explore a much wider range of the space of things you might see under the heading of natural language inference. So here's how it worked. There's a human in the loop, an annotator, and the annotator is presented with a premise sentence and a condition that they need to be in, which is just an NLI label- entailment, contradiction, or neutral. The annotator writes a hypothesis to go along with the premise and the condition and then a state-of-the-art model comes in and makes a prediction about the premise hypothesis pair. If the model's prediction matches the condition, that is, if the model was correct, then the annotator needs to return to step 2 and try again with a new hypothesis. And we could continue in that loop. If the model was fooled, the premise-hypothesis pair is independently validated by other annotators, of course. 
So what we get out of this is, we hope, a dataset that is intuitive for humans, because of the check in step 5, but, assuming we continue to loop around through 2, 3 and 4, an example that is really difficult for whatever model is in the loop. And the expectation is that as we put better and better models in the loop here, we're going to get even more challenging datasets as an outcome. Adversarial NLI examples tend to be impressively complex. You can see that this example has a very long premise. The hypothesis is relatively short. And an intriguing aspect of adversarial NLI is that annotators also constructed a reason or a rationale for their label holding between the premise and the hypothesis. To date, as far as I know, relatively little use has been made of these texts, but I think they could bring in other aspects of natural language inference reasoning, and that could be an exciting new direction. Adversarial NLI is a difficult dataset indeed. We have a similar sort of leaderboard to the ones we've seen throughout this adversarial regime, where across the different rounds of ANLI (there are three), or cumulatively for the dataset, even really excellent models that do really well on SNLI and multi-NLI are posting really low numbers for all of these variants of the dataset, and that shows you that this is truly a difficult problem. And as far as I know, not much progress has been made since this dataset was released on boosting these numbers. So it stands as an interesting challenge. Stepping back here, I'd just like to say that I think we find in this paper a real vision for future development, and you see this also in the SWAG and HellaSWAG papers, as those authors say this adversarial dataset creation is "a path for NLP progress going forward: toward benchmarks that adversarially co-evolve with evolving state-of-the-art models." Right, with SWAG and HellaSWAG, we saw this. SWAG got solved, but the response was clear: bring the best model in and use it to create the successor dataset that stands as a real challenge. You have the similar picture from the adversarial NLI paper. This process of having iterative rounds with humans in the loop yields a "moving post" dynamic target for natural language understanding systems, rather than the static benchmarks that eventually saturate. And we've seen repeatedly that our benchmarks saturate very quickly these days, so we need this kind of moving post to make sure we continue to make meaningful progress. The Nie et al project gave rise, I believe, to this Dynabench platform, an open source platform for human-and-model-in-the-loop dataset creation. As of this writing, there are four datasets available that have been created on Dynabench: an NLI dataset which is a successor to ANLI, a question-answering dataset, a sentiment dataset, and a hate speech dataset. So if you're working on problems of this form or you have a model that would fit into this mold for one of these tasks, I would encourage you to explore some training of the systems on these datasets to see whether you're making progress or whether they stand as true adversaries for whatever innovative thing you're doing. Finally, I want to close with a really important question for this area that kind of remains open: can adversarial training improve systems? There is of course a concern that as we construct ever harder datasets, we're pushing systems into stranger parts of the linguistic and conceptual space, which could actually degrade their real-world performance. We have to keep an eye on that.
And the evidence so far, I think, is pointing to yes as an answer to this question, but the evidence is a bit mixed. So I've mentioned that in the SQuAD adversarial paper from Jia and Liang, training on adversarial examples makes models more robust to those examples but not to simple variants, so it's hardly very much progress. In this paper, they found that adversarial training provided no additional robustness benefit in the experiments using the test set, despite the fact that the model achieved near 100% accuracy classifying adversarial examples included in the train set. So that's a more worrisome picture. But this is more hopeful: fine-tuning with a few adversarial examples improved systems in some cases, especially where you bring in inoculation. And this is hopeful yet again: adversarially generated paraphrases improve model robustness to syntactic variation. That's really the dream there: that as a result of doing this new kind of training, we get systems that are truly more robust. But I think we might need more evidence on this picture, which means more datasets of this form and more interesting use of the available resources, and I would just love to see what the emerging picture is over the next year or two.
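To make the SWAG-style adversarial filtering procedure from earlier in this screencast concrete, here is a minimal sketch of the loop. The generate_distractors function and the filter_model object, including its predict and retrain methods, are hypothetical stand-ins for whatever generator (an LSTM for SWAG, stronger transformer-based models for HellaSWAG) and classifier you have available; the real pipelines involve ensembles of filtering models and many more details than this sketch shows.

import random

def adversarial_filter(examples, generate_distractors, filter_model,
                       num_iterations=10, num_distractors=3):
    """Iteratively replace distractors that the filtering model sees through.

    Each example is a dict with 'context', 'true_continuation', and
    'distractors'. generate_distractors(context, n) proposes n candidate
    continuations; filter_model.predict(context, options) returns the index
    of the option it thinks is the true continuation. Both interfaces are
    hypothetical.
    """
    for _ in range(num_iterations):
        for ex in examples:
            options = [ex['true_continuation']] + ex['distractors']
            random.shuffle(options)
            guess = filter_model.predict(ex['context'], options)
            if options[guess] == ex['true_continuation']:
                # The filter was right, so these distractors are too easy:
                # throw them out and generate fresh candidates.
                ex['distractors'] = generate_distractors(
                    ex['context'], num_distractors)
            # Otherwise the example already fools the current filter; keep it.
        # In the real setup, the filtering model is retrained on the current
        # version of the dataset before the next pass (hypothetical call).
        filter_model.retrain(examples)
    return examples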
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Evaluation_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
BILL MACCARTNEY: Last time I introduced the task of relation extraction, I described the corpus and the KB that we're going to use. And I proposed a precise formulation of our prediction problem. So now, let's talk about how we're going to measure success on this problem. We need to define a quantitative evaluation that can drive a process of iterative development. In this section, I'm going to first make a connection to the software engineering principle of test-driven development. Then I'm going to explain how we'll split our data into training and evaluation data. I'll do a brief refresher on precision, recall, and F-measure. And I'll review the distinction between micro-averaging and macro-averaging. And by the end, we'll know exactly how we're going to measure success. When you start working on a new machine learning problem, it's very tempting to jump in and start building models right away. Because you're bursting with ideas, and you can't wait to get started. But whoa, Nelly. That's like driving cross-country without a map. There's going to be lots of forks in the road, and you won't know which way to go. There's a better way. In software engineering, we use test-driven development. First, write the tests, and then write code and iterate until it passes the tests. In model engineering, we can use a similar paradigm. First, implement a quantitative evaluation. Specify your evaluation dataset, choose your evaluation metric, build a test harness that takes a model and generates a score. Then when you start building models, you can hill-climb on this score. And at those forks in the road where you could do it this way or that way, your quantitative evaluation will tell you which way to go. Now, whenever we build a model from data, it's good practice to partition the data into multiple splits, minimally, a training split and a test split. Actually, here we'll go a bit further, and we'll define multiple splits. First, we'll have a tiny split with just 1% of the data. Having a tiny split is super useful, and I encourage you to adopt this practice whenever you take on our prediction problem. During the early stages of development, you can use the tiny split as training data or test data or both, and your experiments will run super fast. Of course, your quantitative evaluations will be pretty much meaningless, but it's a great way to quickly flush out any bugs in your setup. Then we'll have the train split with 74% of the data. This is the data that we'll usually use for model training. Then the dev split, with 25% of the data. We'll use this as test data for intermediate evaluations during the development. And for the bake-off, we're also going to have a separate test split, but you won't have access to it, so we won't talk about it here. There's one complication. We need to split both the corpus and the KB. We want each relation to appear in both the training data and the test data so that we can assess how well we've learned how each relation is expressed in natural language. But ideally, we'd like to have any given entity appear in only one split. Otherwise, we might be leaking information from the training data into the test data. In an ideal world, each split would have its own hermetically sealed universe of entities, and both the corpus and the KB, for that split, would refer only to those entities. 
So for example, you might have a new world corpus whose examples mention only new world entities like Elon Musk and Bill Gates and Steve Jobs, and a new world KB, which contains only triples about the same new world entities, and then an old world corpus that talks about Daniel Ek and Jack Ma and Pony Ma and a corresponding old world KB. If we had this, then we could achieve a really clean separation between train and test data with no overlap in entities. But in practice, the world is strongly entangled, and this ideal is hard to achieve. So instead, we're going to approximate the ideal. I think I won't dwell on the details, but we've written the code for you to achieve a good enough split. In particular, the dataset class provides a method called build_splits which lets you specify split names and proportions and a random seed. And it just returns a map from split names to datasets, each containing a corpus and a KB. So now that we have our splits, we need to choose an evaluation metric. We've formulated our problem as binary classification, and the standard metrics for binary classification are precision and recall. So here's an example where we have 100 problem instances. The rows of this table represent the actual labels: 88 are labeled false, and only 12 are labeled true. So this is a skewed distribution. The columns of this table represent the labels predicted by our model. So 95 are predicted to be false and 5 true. Now, there are 89 instances where the predicted label agrees with the actual label. So the accuracy of this model is 89%. But accuracy is not a great evaluation metric, especially when you have a skewed distribution like this, because even a model that ignores the data can get 88% accuracy just by always guessing false. So instead of accuracy, we look at precision, which says: of the instances that are predicted to be true, what proportion are actually true? And recall, which says: of the instances which are actually true, what proportion are predicted to be true? So that's great. Precision and recall are really useful. But having two evaluation metrics is often inconvenient. If we're considering a change to our model which improves precision but degrades recall, should we take it? In order to drive an iterative development process, it's useful to have a single metric on which to hill-climb. So for binary classification, the standard answer is the F1 score, which is the harmonic mean of precision and recall. The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals, and it's always less than the arithmetic mean. It's pessimistic in the sense that it's always closer to the lower number. So, for example, the harmonic mean of 60% and 25% is 35.3%. Now, the F1 score gives equal weight to precision and recall. But depending on the application, they might not be of equal importance. In relation extraction, we probably care more about precision than recall, and that's because adding an invalid triple to the KB is more harmful than failing to add a valid one. So instead, we could use the F-measure, which is a generalization of F1. It's a weighted harmonic mean of precision and recall, and this parameter beta controls how much more importance you place on recall than on precision. So let's say that in a particular evaluation, you have high precision, 80%, and low recall, 20%. The F1 score gives equal weight to precision and recall, so its value is 32%.
If we set beta equal to 0.5, we're giving more weight to precision, so the value is 50%. If we set beta equal to 2, we're giving more weight to recall, so the value is 23.5%. In relation extraction, precision is more important than recall, so let's go with F 0.5 as our evaluation metric. OK, another issue that comes up in evaluation scenarios like this is whether to use micro-averaging or macro-averaging. We're going to compute precision, recall, and F score separately for each relation. But in order to drive iterative development, we'd like to have summary metrics, which aggregate across all of the relations. And there are two possible ways to do this. Micro-averaging gives equal weight to each problem instance, which means that it gives more weight to relations with more instances. Macro-averaging just gives equal weight to each relation. So let me show you an illustration of this. This is an artificial example where I have just three relations, and the contains relation has 10 times as many instances as the other two relations. It also has the highest F score. When I compute the micro-average and the macro-average, well, the micro-average gives equal weight to each problem instance, so it gives a lot more weight to the contains relation, and the result is that the micro-average F score is very close to the F score for contains. Whereas the macro-average gives equal weight to each relation, and so it's just right in the middle of this range. The micro-averaged F score is probably not what we want because the number of instances per relation is kind of an accident of our data collection methodology. And it's not like we believe that the contains relation is more important than the other relations. It just happens to be more numerous in the data that we collected. So we're going to use macro-averaging so that we don't overweight large relations. So if you put it all together, the bottom line is that with every evaluation, we're going to report lots of metrics, but there's one metric that we're going to focus on. And this will be our figure of merit. It's the one number that we're going to be hill-climbing on, and we're choosing, as our figure of merit, the macro-averaged F 0.5 score.
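To tie these metrics together, here is a small Python sketch that reproduces the numbers from this screencast: the F-beta score for the precision 80%, recall 20% example, and the difference between macro-averaging and an instance-weighted average across relations. The helper functions are my own illustrations, not the course codebase's utilities, and the micro average here is the instance-weighted average of per-relation scores in the spirit of the lecture's illustration rather than the pooled-counts definition.

def precision_recall(pred, actual):
    """Precision and recall from parallel lists of boolean labels."""
    tp = sum(p and a for p, a in zip(pred, actual))
    fp = sum(p and not a for p, a in zip(pred, actual))
    fn = sum(a and not p for p, a in zip(pred, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean; beta > 1 favors recall, beta < 1 favors precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# The worked example from the lecture: precision 80%, recall 20%.
print(round(f_beta(0.80, 0.20, beta=1.0), 3))   # 0.32
print(round(f_beta(0.80, 0.20, beta=0.5), 3))   # 0.5
print(round(f_beta(0.80, 0.20, beta=2.0), 3))   # 0.235

# Averaging per-relation scores, where `relations` maps a relation name
# to a (score, number_of_instances) pair.
def macro_average(relations):
    return sum(score for score, _ in relations.values()) / len(relations)

def micro_average(relations):
    total = sum(n for _, n in relations.values())
    return sum(score * n for score, n in relations.values()) / total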
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
The_Rational_Speech_Acts_Model_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to part 5 in our series on grounded language understanding. We're going to be talking about the rational speech acts model or RSA. This is an exciting model that was developed by Stanford researchers Mike Frank and Noah Goodman. And it's a chance for us to connect ideas from cognitive and psychology and linguistics with large-scale problems in machine learning. Now what I'm going to do for this screencast is kind of queue up the high-level concepts and the core model structure as a way of leading into the next screencast, which is going to show you how to incorporate pieces of this model into standard machine learning models. If you would like a deeper dive on the conceptual origins of this model and how it works in a kind of mathematical way, I would encourage you to check out these resources here. So this first paper, Goodman and Frank, from the developers of RSA, is a nice overview that shows not only all the technical model details with real rigor, but also connects the ideas with decision theory, game theory, cognitive psychology, and Bayesian cognitive science, and also linguistics. From there, you could watch this technical screencast that I did. This is on YouTube, and here are the associated slides for that if you want to follow along. And from there, I have this Python reference implementation of the core RSA model, and that would be a great way to get hands-on with the model and begin to think about how you could incorporate it into your own project or original system. Without further ado, though, let's dive into the model. And I'm going to begin with what I've called pragmatic listeners. And we can also, as you'll see later, take a speaker perspective. So the model begins with what's called the literal listener. This is a probabilistic agent, and you can see that it conditions on a message. That is, it hears or observes a message and makes a guess about the state of the world on that basis. And the way it does that is by reasoning essentially entirely about the truth conditions of the language. Here I've got these double brackets indicating that we have a semantic lexicon mapping words and phrases to their truth values. This agent also takes the prior into account, but that's the only way in which it's pragmatic. Otherwise, it's kind of a fundamentally semantic agent. From there, we built the pragmatic speaker. Speakers in this model observe states of the world, things they want to communicate about, and then they choose messages on that basis. And the core thing to observe here is that the pragmatic speaker reasons not about the semantics of the language as the literal listener does, but rather about the literal listener who reasons about the semantics of the language. And for this pragmatic speaker here, it does that taking cost of messages into account. And it also has this temperature parameter, alpha, which will help us control how aggressively it reasons about this lower agent, the literal listener. Other than that, you can probably see that this model is a kind of softmax decision rule, where we're combining the literal listener with message costs. And then finally, we have the pragmatic listener, which has essentially the same form as the literal listener. It observes the message and makes a guess about the state of the world on that basis. 
And it has the same overall form as the literal listener, except it's reasoning not about the truth conditions, but rather about the pragmatic speaker, who is reasoning about the literal listener, who is finally reasoning about the semantic grammar. So you can see that there's a kind of recursive back and forth in this model. You might think of this as reasoning about other minds, and it's in that recursion that we get pragmatic language use. Here's a kind of shorthand for the core model components: the literal listener reasons about the lexicon and the prior over states. The pragmatic speaker reasons about the literal listener, taking message costs into account. And finally, the pragmatic listener reasons about the pragmatic speaker, taking the state prior into account. And then you can see nicely this point of indirection down to the semantic lexicon. And as I said, it's in that recursion that we get interesting pragmatic language use. Let me show you how that happens with a small example here. So along the rows in this, I have the messages. We're imagining a very simple language in which there are just three messages. You can think of them as shorthand for, like, the person I'm referring to has a beard, the person I'm referring to has glasses, and so forth. And we have just three referents. And I'll tell you that this is David Lewis, one of the originators of signaling systems, which is an important precursor to RSA. This is the philosopher and linguist Paul Grice, who did foundational work in pragmatics. And this is Claude Shannon, who, of course, is the developer of information theory. And in this table here, we have the semantic grammar, the truth conditions of the language. So you can see that Lewis has this wonderful beard, but neither Grice nor Shannon have beards. Glasses is true of Lewis and Grice, and tie is true of Grice and Shannon. The literal listener, assuming we have flat priors, simply row normalizes those truth conditions. So we go from all these ones to an even distribution, and you can see that already beard is unambiguous for this listener, but glasses and tie present what looks like an insurmountable ambiguity. On hearing glasses, this listener just has to guess about whether the referent was Lewis or Grice, and same thing for tie. When we move to the pragmatic speaker, we already see that the system starts to become more efficient. So we take the speaker perspective along the rows now. And because we're going to assume zero message cost, we can, again, just row normalize, in this case from the previous matrix, having transposed it. And now you can see that on trying to communicate about Lewis, the speaker should just choose beard; there's an overwhelming bias for that. And down here, on observing Shannon, or wanting to talk about Shannon, the speaker should say tie; that's completely unambiguous. But we still have a problem. If we want to refer to Grice, we have kind of no bias about whether we should choose glasses or a tie. But already, we have a more efficient system than we did for the literal listener. And then, finally, when we move to the pragmatic listener, we have what you might think of as a completely separating linguistic system. On hearing beard, infer Lewis. On hearing glasses, your best bet is Grice. And on hearing tie, your best bet is Shannon.
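Here is a small numpy sketch that reproduces this Lewis, Grice, and Shannon example end to end, under the simplifying assumptions just described: flat priors, zero message costs, and alpha equal to 1. It is meant only to mirror the row normalizations in the slides, not to stand in for the fuller reference implementation mentioned at the start of this screencast.

import numpy as np

# Rows are messages, columns are referents: Lewis, Grice, Shannon.
messages = ['beard', 'glasses', 'tie']
referents = ['Lewis', 'Grice', 'Shannon']
truth_conditions = np.array([
    [1.0, 0.0, 0.0],   # beard is true only of Lewis
    [1.0, 1.0, 0.0],   # glasses is true of Lewis and Grice
    [0.0, 1.0, 1.0],   # tie is true of Grice and Shannon
])

def rownorm(mat):
    """Normalize each row so it sums to 1."""
    return mat / mat.sum(axis=1, keepdims=True)

# Literal listener L0: row-normalize the truth conditions (flat prior).
L0 = rownorm(truth_conditions)

# Pragmatic speaker S1: transpose so referents index the rows, then
# row-normalize again (zero message costs, alpha = 1).
S1 = rownorm(L0.T)

# Pragmatic listener L1: transpose back to messages-by-referents and
# row-normalize once more (flat prior again).
L1 = rownorm(S1.T)

print(np.round(L1, 2))
# On hearing 'beard', L1 puts all its mass on Lewis; 'glasses' now favors
# Grice, and 'tie' favors Shannon, resolving the original ambiguity.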
And in this way, you can see that we started with a system that looked hopelessly ambiguous, and now in the back and forth RSA reasoning, we have arrived at a system that is probabilistically completely unambiguous. And that's the sense in which we can do pragmatic language use and end up with more efficient languages as a result of this reasoning. Now for natural language generation problems, it's often useful to take a speaker perspective, as we've discussed before. And I just want to point out to you that it's straightforward to formulate this model starting from a speaker. We would do that down here at the bottom. This has the same form as the previous speakers. We're going to subtract out message costs, and we have the softmax decision rule overall. But now, the speaker, of course, will reason directly about the truth conditions of the language. And we have our pragmatic listener. There's just one for this perspective, and it looks just like those other listeners, except it reasons not about the truth conditions, but rather about that literal speaker. And then finally, we have our pragmatic speaker, which is the one that you might focus on for generation tasks. It has the same form as before, except now we're reasoning about the pragmatic listener, who is reasoning about the literal speaker. So we have that same kind of indirection. And once again, here's a kind of shorthand way of thinking about the speaker perspective. So the literal speaker reasons about the lexicon, subtracting out costs; the pragmatic listener reasons about that literal speaker and the state prior. And then finally, the pragmatic speaker reasons about the pragmatic listener, taking message costs into account. And again, you see that recursion down into the lexicon. Now I've given you a glimpse of why this model might be powerful, but let's close with some limitations that we might address in the context of doing modern NLP and machine learning. So first, we had to hand-specify that lexicon. In cognitive psychology and linguistics, this is often fine. We're going to run a controlled experiment, and hand-specifying the lexicon is not really an obstacle. But if we would like to work in open domains with large corpora, this is probably a deal-breaker. A related problem arises if you look more closely at the way the speaker agents are formulated. In their denominator, they have this implicit summation over all possible messages, where we do this computation here. But in the context of a natural language, what does it mean to sum over all messages that might be an infinite set? And even if it's finite, because we make some approximations, it's still going to be so large as to make this calculation intractable. So for computational applications, we will have to address this potential shortcoming. RSA is also what you might think of as a very high-bias model. We have relatively few chances to learn from data. It hardwires in a particular reasoning mechanism and is inflexible about how that mechanism is applied. Relatedly, we might then run up against things like the fact that it's difficult to be a speaker, and speakers, even the pragmatic ones, are not always perfectly rational in the way the model might portray them to be, and we might want to capture that, if only to do well with actual usage data.
And relatedly, even setting aside the pressures on speakers to be rational, they just might have preferences for certain word choices and other things that the model is simply not even trying to capture, and we might hope in the context of a large scale machine learning model, that we would have mechanisms for bringing those in. And finally, it's just not scalable. And you can see that in the first two bullet points, and there are many other senses in which RSA, as I've presented it, just won't scale to the kind of big ambitious problems that we're trying to tackle in this class. The next screencast is going to attempt to address all of these limitations by bringing RSA into large-scale machine learning models.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Course_Overview_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: So here we go. It's a golden age for Natural Language Understanding. Let's start with a little bit of history. So way back when, John McCarthy had assembled a group of top scientists, and he said of this group, "We think that a significant advance can be made in artificial intelligence in one or more of these problems if a carefully selected group of scientists work on it together for a summer." He had, in fact, assembled a crack team, but, of course, there were so many unknown unknowns about working in artificial intelligence, that he wildly underestimated, and probably in those three months they just figured out how little they actually knew. But, of course, we've been working on those problems that they charted out at that point ever since. NLU has a kind of interesting relationship to this history because very early on in the history of all of AI, a lot of the research was focused on Natural Language Understanding. Originally, in the 60s, it was done with kind of pattern matching on simple rule sets and things like that. You've seen these things in the form of artifacts like ELIZA. It was oriented toward the things that we know and want to work on. In the 1970s and 80s, you get a real investment in what I've called linguistically rich, logic-driven, grounded systems or LLGs. This is like a lot of symbolic AI, again, oriented toward problems of Natural Language Understanding. Everybody wanted talking robots, and this was the path that they were going to take to achieve them. As we all know, in the mid-1990s in the field, you had this revolution of machine learning. Statistical NLP is on the rise, and that led to a decrease, a sharp decrease in Natural Language Understanding work. I think because of the way that people were understanding how to work with these tools and understanding the problems that language posed, the field ended up oriented around things that you might think of as like parsing problems, much more about structure, and much less about communication. And as a result, all of these really exciting problems from earlier eras kind of fell by the wayside as people worried about part-of-speech tagging and parsing and so forth. So that was like a low period for NLU. In the late 2000s, linguistically rich, logic-driven systems reemerged, but now with learning. And that was the golden era of kind of moving us back into problems of Natural Language Understanding, starting with some basic applications involving semantics. And then, of course, as you all know probably from the recent history or semi-recent history, in the 2010s, NLU took center stage in the field. That's very exciting, right? And it's sort of aligned with the rise of deep learning as one of the most prevalent set of techniques in the field, and as a result, logic-driven systems fell by the wayside. This is exciting for us because, of course, this is like the history of our course. When we first started this, our problems, the ones we focus on in this class, were really not central to the field. And now they're the problems that everyone is working on and where all the action is in the scientific aspects of the field, and also in industry. As a result of this, this is kind of provocative. The linguistically grounded, logic-driven systems have, again, kind of fallen by the wayside in favor of very large models that have almost no inductive biases of the sort that you see in these earlier systems. What's going to happen in the 2020s? I'm not sure. 
You might predict that we've seen the last of the linguistically rich, logic-driven systems, but people might have said similar things in the 1990s, and we saw them reemerge, so I think it's hard to predict where the future will go. But this is an exciting moment because you can all be part of making that history, as you work through problems for this course and on into your careers. Let's talk more about some of these really defining moments in this golden age. And, for me, a very important one was when Watson, IBM Watson, won Jeopardy! This was in 2011. It seems like a long time ago now. And it was a really eye-opening event that you would have a machine-- Watson in the middle here-- beat two Jeopardy! champions at what is nominally a kind of question-answering task. For me, an exciting thing about Watson was that it was an NLU system, but it was also a fully integrated system for doing Jeopardy! It was excellent at pushing the button and responding to the other things that structure the game of Jeopardy! But at its heart, it was a really outstanding question-answering system, and it was described as drawing on vast amounts of data and doing all sorts of clever things in terms of parsing and distributional analysis of data to become a really good Jeopardy! player, in this case, a world champion Jeopardy! player. And for me, it felt different from earlier successes in artificial intelligence, which were about much more structured domains like chess playing. This was something that seemed to be about communication and language, a very human thing, and we saw this system becoming a champion at it. Certainly, an important moment. And it's kind of really eye-opening to consider that that was in 2011, and by 2015, teams of academics, like here led by Jordan Boyd-Graber, who was, at the time, a professor at the University of Colorado, could beat Ken Jennings, that champion that you saw before, with a system that fit entirely on the laptop that Jordan has there. So in just a few years, we went from acquiring a supercomputer to beat the world champion to beating the world champion with something that could fit manageably on a laptop and that you all could do exploration on for a final project for this course. And that kind of ushered in the era. You know, Watson was 2011, and right about the same time you started to get things like Siri, and the Google Home device, and the Amazon Echo. The more trusting among you might have these devices living in your homes and listening to you all the time and responding to your requests. For me, one aspect of them that's so eye-opening is not necessarily an NLU piece, but rather just the fact that they do such outstanding speech-to-text work, so that they pretty reliably, for many dialects, do a good job of taking what you said and transcribing it. As we'll see a little bit later, the NLU part often falls down. But there's no doubt that these devices are going to become more and more ubiquitous in our lives, and that's very exciting. Here's the promise of this artificial intelligence, setting aside the problems that you all encounter if you do use them, right? The idea is that you could pose a task-oriented question like, "any good burger joints around here?" And it could say proactively, I found a number of burger restaurants near you. You could switch your goal. What about tacos?
And at this point, it would kind of remember what you were trying to do and it would look for Mexican restaurants and kind of anticipate your needs and in that way, collaborate with you to solve this problem. That's the dream of these things that would involve really lots of aspects of Natural Language Understanding, and sometimes it works. Another exciting and even more recent development is the kind of text generation that you see from models like GPT-3. Again, 15 years ago, the things that we see every day would have seemed like science fiction to me, even as a practitioner. This is an example of somebody having a product that on the back of GPT-3 can help you write advertising copy. And it does a plausible job of advertising products to specific segments given specific goals that you give in your prompt. And here's an example actually, from a Stanford professor, a company that he started, where you use GPT-3 to help you do writing in a particular style that you choose, and it's strikingly good at helping you hone in on a style and kind of say what you take yourself to want to say. Although the sense in which these are things that we alone want to say as opposed to saying them jointly with these devices that we're collaborating with is something that we're going to really have to think about over the next few years. Image captioning. This is another really exciting breakthrough area that, again, seemed like something way beyond what we could achieve 15 years ago and is now kind of routine where you have images like this, an image comes in, and the system does a plausible job of providing fluent Natural Language captions for those images. A person riding a motorcycle on a dirt road. A group of young people playing a game of Frisbee. A herd of elephants walking across a dry grass. These are really good captions for these images, things that you would have thought only a human could apply. But in this case, even relatively early in the history of these models, you have really fluent captions for the images. Search. We should remind ourselves that search has become an application of lots of techniques in Natural Language Understanding. When you do a search into Google, you're not just finding the most relevant documents, but rather the most relevant documents as interpreted with your query in the context of things you search for and other things that Google knows about, things people search for, so that if you search SARS here, you'll get a card that kind of anticipates that you're interested in various aspects of the disease SARS. If you search parasite, it will probably anticipate that you want to know about the movie and not about parasites, although depending on your search history, and your interests, and typical goals, and so forth, you might see similar, but you might see very different behavior. And we should remind ourselves that search at this point is, again, not just searching into a large collection of documents, but this kind of agglomeration of services, many of which depend on Natural Language Understanding as that kind of first-pass where they take your query and do their best to understand what the intent behind the query is and parse it and figure out whether it's a standard search, or a request for directions, or a request to send a message, and so forth and so on. 
In the background there, a lot of Natural Language Understanding is happening to figure out how to stitch these services together and anticipate your intentions, and essentially, collaborate with you on your goals. And we can also think beyond just what's happening in the technological space to what's happening internal to our field. So benchmarks are big tasks that we all collaborate to try to do really well on with models and innovative ideas and so forth. And I've got a few classic benchmarks here, MNIST is for digits. GLUE is a big Natural Language Understanding benchmark. ImageNet, of course, is finding things in images. SQuAD is question-answering. And Switchboard is typically, speech-to-text transcription in this context. In this spot here along the x-axis, I have the year from 2000-- or actually, the the mid-90s up through the present. And along the y-axis, I have our distance from this black line, which is human performance as measured by the people who developed the dataset. And the striking thing about this plot is that it used to take us a very long time to reach human-level performance according to this estimate, so for MNIST and for Switchboard it took more than 15 years. Whereas for more recent benchmarks like ImageNet, and SQuAD, and recently GLUE, we're reaching human performance within a year. And the striking thing about that is not only is this happening much faster, but you might have thought that benchmarks like GLUE were much more difficult than MNIST. MNIST is just recognizing digits that are written out as images, whereas GLUE is really solving a whole host of what looked like very difficult Natural Language Understanding problems. So the fact that we would go from way below human performance, to surpassing, to superhuman performance in just one year is surely eye-opening and an indication that something has changed. Let me give you a few examples of this, just dive in a little bit. So this is the Stanford Question Answering Dataset or SQuAD as you saw it here, I'll say a bit more about this task later, but you can think of it as just a question-answering task. And the striking thing about the current leaderboard is that you have to go all the way to place 13 to find a system that is worse than the human performance, which they've nicely kept at the top of this leaderboard. Many, many systems are superhuman according to this metric on SQuAD. The Stanford Natural Language Inference corpus is similar. Natural Language Inference is a kind of common sense reasoning task that we're going to study in detail later in the quarter. In this plot here, I have time along the x-axis, and the F1 score, or the performance along the y-axis, and the red line charts out what we take to be the human estimate of performance on this dataset. And if you just look at systems over time, according to the leaderboard, you can see the community very rapidly hill-climbing toward superhuman performance which happened in 2019. So superhuman systems, when it comes to common sense reasoning with language, really looks like a startling breakthrough in artificial intelligence quite generally. I mentioned GLUE is another benchmark. The GLUE paper is noteworthy because it says "solving GLUE is beyond the capability of current transfer learning methods." The reason they said that is that at the time, 2018, GLUE looked incredibly ambitious because the idea was to develop systems that could solve not just one task, but ten somewhat different tasks in the space of Natural Language Understanding. 
And so they thought they had set up a benchmark that would last a very long time, but it took only about a year for systems to surpass their estimate of human performance. In the current leaderboard, which you see here, you have to go all the way to place 15 to find the GLUE human baselines, with many systems vastly outperforming that estimate of what humans could do. SuperGLUE was announced as a successor to GLUE and meant to be even more difficult. It was launched in 2019, I believe. I'm missing the date, but it took less than a year for a team to beat the human baseline, which happened just a couple of months ago, and now we have two systems that are above the level of human performance in an even tighter window, I believe, than what happened with the GLUE benchmark. And remember, SuperGLUE was meant to have learned the lessons from GLUE and posed an even stronger benchmark for the field to try to hill-climb on, and very quickly we saw this superhuman performance. So what's the takeaway of all this? You might think, wow. Have a look at Nick Bostrom's book called Superintelligence, which tries to imagine, in a philosophical sense, a future in which we have many systems that are incredible at the tasks that we have designed them for, vastly outstripping what humans can achieve. And he imagines this kind of very different reality with lots of unintended side effects. And when you look back on the things that I've just highlighted, you might think that we're on the verge of seeing exactly that kind of superhuman performance that would be so radically transformative for our society and for our planet. That's the sense in which we live possibly scarily in this golden age for Natural Language Understanding. I mean this to be an optimistic perspective. We should be aware of the power that we might have. And keep in mind that I do think we live in a golden age, but at this point, I have to step back. I have to temper this message somewhat. We have to take a peek behind the curtain because although that's a striking number of successes, doing things that, again, I think would have looked like science fiction 20 years ago, we should be aware that progress seems to be much more limited than those initial results would have suggested. I mentioned Watson as one of these striking early successes, and it did, in fact, perform in a superhuman way at Jeopardy! for the time. But Watson also does all sorts of strange things that reveal that it does not deeply understand what it's doing, and here's a wonderful example of that. Remember that Jeopardy! does this kind of question-answer thing backwards. So the prompt from the host was, "grasshoppers eat it," and what Watson said was, "what is kosher?" And you might think that's not something that a human would do. "Grasshoppers eat it" and "what is kosher?" And it feels kind of mismatched. So what's the origin of this very strange response? Well, primarily, Watson was a device for extracting information from Wikipedia, and a few Wikipedia pages have very detailed descriptions of whether various animals, including grasshoppers, are kosher, in the sense of conforming to the kosher dietary laws. And Watson had simply mistaken this kind of distributional proximity for a real association and thought that kosher was a reasonable answer to "grasshoppers eat it." I think very unhuman, certainly, and revealing about the kinds of superficial techniques it was using.
Here's another example that's even more revealing of how superficial the techniques can be. So I painted this picture before of how we imagine Siri will behave, anticipating our needs and our goals, and responding accordingly. This is a very funny scene from the Colbert Show. This is Stephen Colbert, and the premise of this is that he's just gotten his first iPhone with Siri, and he's been playing with it all day and, therefore, has failed to write the show that he's now performing. And so he says "for the love of God, the cameras are on, give me something?" You know, give me something for the show. And Siri says "What kind of place are you looking for? Camera stores or churches?" Initially, very surprising, not something a human would do, and then you realize it has, again, just done some very superficial pattern matching. God goes with churches. Cameras goes with camera stores. And there is no sense in which it understands his intentions. It has just done some pattern matching in a way that would be very familiar to the designers of ELIZA way back in the 60s and 70s. And the dialogue continues. "I don't want to search for anything. I want to write the show." And true to form, Siri says "Searching the Web for "search for anything. I want to write the shuffle." Revealing its fallback when it has no idea what has happened in the discourse. It just tries to do a web search, a simple trick revealing that it doesn't deeply understand goals, or plans, or intentions, or even communicative acts. I showed you before that GPT-3 can do some striking things. If you've gotten to play around with it, you've seen that it can, indeed, be very surprising and delightful, but of course, it can go horribly wrong. This is a very funny text from Yoav Goldberg. He posted this on Twitter when he was experimenting with the prompts. I encourage you to read this one and be distracted. You don't need to worry too much about this one on the right. This is a case where someone tried to use GPT-3 to get medical advice, and the ultimate response from GPT-3 to the question "Should I kill myself?" was "I think you should." This is the really dangerous thing. The text on the left here is, again, more innocent and just revealing that although GPT-3 has a way of mimicking the kinds of things that we say in certain kinds of discourse, and it often has a strikingly good ear for the kinds of style that we use in these different contexts, it has no idea what it's talking about. So that if you ask it "Are cats liquid?" it gives a response that sounds quite erudite provided that you don't pay any attention to what it's actually saying. What it's actually saying is hilarious. [LAUGHS] I mentioned those image captions before, and I tricked you a little bit because I showed you, from this paper, the ones that they regarded as the best captions for those images, but to their credit, they provided a lot more examples. And as you travel to the right along this diagram, you get worse and worse captions. And the point, again, is that by the time you've gotten to the right column over here, you have really absurd captions like this one saying a refrigerator filled with lots of food and drinks, when this is, in fact, just a sign with a bunch of stickers on it. The striking thing, again, is that the kinds of mistakes it makes are not the kinds of mistakes that humans would make, and, to me, they reveal a serious lack of understanding about what the actual task is.
What you're seeing seep in here is that even the best of our systems are kind of doing a bunch of superficial pattern matching, and that leads them to do these very surprising and unhuman, hopefully, not inhuman, but unhuman things with their outputs. And then, of course, I've showed you before that Search can be quite sophisticated and really do a good job of anticipating our intentions and fleshing out what we said to help us achieve our goals, but it can go horribly wrong. And at this point, it doesn't take much searching around with Google to see some really surprising things as supposedly curated pieces of information like King of the United States. It has this nice box. It's making it look like it's some authoritative information, but, of course, it has badly misunderstood the true state of the world. The associations in its data are misleading it into giving us the wrong answer. There's another example, "What happened to the dinosaurs?" Again, a nicely curated box that looks like an authoritative response to that question, but it is, in fact, anything but an authoritative recounting of what happened to the dinosaurs. And then we have other charming stories that, again, reveal how superficial this can be, and this is from a headline from a few years ago-- "Does Anne Hathaway News Drive Berkshire Hathaway Stock?" This was just an article observing that every time Anne Hathaway has a movie come out and people like the movie, it causes a little bump in the Berkshire Hathaway stock revealing that the systems are just keying in on keywords and typically, not attending to the actual context of the mentions of these things and, therefore, they're building on what is essentially spurious information. This is a more extreme case here, the United Airlines bankruptcy. In 2008, when a newspaper accidentally republished the 2002 bankruptcy story, automated trading systems reacted in seconds, and $1 billion in market value evaporated within 12 minutes. You can see that sharp drop-off here. Luckily, people intervened and the market more or less recovered. But the important thing here, again, is just that in attending to superficial things about the text these systems are consuming, they miss context. They don't bring any kind of human-level understanding of what's likely to be true and false, and, therefore, they act in very surprising ways. And in the context of a large system with lots of moving pieces interacting with other artificial intelligence systems, you get these really surprising outcomes that we could help correct if we just did a better job designing systems that could attend to context and have a more human-like understanding of what the world is likely to be like. And we're all, of course, very worried about the way these systems, which are just trained on potentially biased data, might cause us to perpetuate biases, so that not only are we reflecting problematic aspects of our society, but also amplifying those biases. And in that way, far from achieving a social good, we would actually be contributing to some pernicious things that already exist in our society. And the field is really struggling to come to grips with that kind of dynamic. But I also wanted to just dive in a little bit and think about the low-level stuff, so kind of benchmarks that we've set for ourselves. And I pointed out that progress on these benchmarks seems to be faster than ever, right? We're getting to pass-- we're getting to superhuman performance more quickly than we ever have before. 
The speedup is remarkable. However, we should be very careful not to mistake those advances for any kind of claim about what these systems can do with respect to the very human capability of something like answering questions or reasoning in language. And one very powerful thing that's happened in the field that we're going to talk a lot about this quarter is so-called adversarial testing, where we try to probe our systems with examples that don't fool humans but cause these systems no end of grief. So let's look at one of those cases in a little bit of detail. This is from SQuAD. The way SQuAD is structured is that you're given a passage like this and a question about that passage, and the goal of the system is to come up with an answer where you have a guarantee that the answer is a literal string in that passage. So here you have a passage about football, and the question "What is the name of the quarterback who was 38 in Super Bowl XXXIII?" And the answer is "John Elway." What Jia and Liang-- our own Jia and Liang from Stanford-- observed is that you could very easily fool these systems if you simply appended to that original passage a misleading sentence like quarterback Leland Stanford had jersey number 37 in Champ Bowl XXXIV. Humans were not misled. They very easily read past the distracting information and continued to provide the correct answer. However, even the very best systems would reliably be distracted by that new information and respond with Leland Stanford, changing their predictions. And you might think, ah, well, this is straightforward. They've already charted a path to the solution because we should then just train our systems on data where they have these misleading sentences appended, and then they'll overcome this adversarial problem and be back up to doing what humans can do. But Jia and Liang anticipated that response. What happens if you prepend the sentence then? Even when they're trained on the augmented data with sentences appended to the end, systems get misled by the prepended examples in this case. And you can just go back and forth like this. Trained on the prepended examples, well, then an adversary can insert a sentence in the middle and again trick the system, and so forth and so on, right? So this is a worrisome fact, again, revealing that we might think we've got a system that truly understands, but actually, we have a system that is just benefiting from a lot of patterns in the data. There's another striking thing I want to point out about the way this adversarial testing played out, which we should have in mind as we think about results like this. So this is the original system on SQuAD and the results for the adversaries. And Percy Liang has this system called CodaLab where he hosts all the systems that enter into the SQuAD competition, which made it possible for him and his students to rerun all those systems and see how they did on this adversarial data set they had created. And you can see that all the systems really plummet in their performance. From a high of 81, you drop down to about 40. Maybe that's kind of expected, but another really eye-opening thing about the result they have is that the rank of the systems changed really dramatically, right? So the original top-ranked system went to 5, 2 to 10, 3 to 12.
As we did this adversarial thing, we didn't see a uniform drop with the best system still being the best, but a real shuffling of this leaderboard, again, revealing that, I think, the best systems were kind of overfit and benefiting from relatively low-level facts about the data set and not really transformatively different when it comes to being able to answer questions. The history of Natural Language Inference problems is very similar. As I said, we're going to look at this problem in a lot of detail later in the course. Here are just a few very simple NLI examples. You've got a premise like, "a turtle danced." Hypothesis, "a turtle moved." And you have one of three relations that can hold between those sentences. So "a turtle danced" entails "a turtle moved." "Every reptile danced" is neutral with respect to "a turtle ate." They can be true or false independently of each other. And "some turtles walk" contradicts "no turtles move." This is a typical kind of NLI data. The actual corpus sentences tend to be more complicated and involve more nuanced judgments, but that's a framing of the task. It's a three-way classification problem with these labels, and the inputs are pairs of sentences like this. And as I showed you before for one of the large benchmarks, the Stanford Natural Language Inference corpus, we reached superhuman performance in 2019. But those same systems really struggle with simple adversarial attacks. This is a lovely paper called Breaking NLI from Glockner et al. What they did is fix a premise like, "a little girl kneeling in the dirt crying." The original corpus example was that that entails "a little girl is very sad." And they just had an expectation, you know, sort of adversarially, but this is a very friendly adversary: if I just replace "sad" with "unhappy," I should continue to see the entailment relation predicted. After all, I've just substituted one word for its sort of near-synonym. But what they actually saw is that systems very reliably flip this to the contradiction relation, probably because they are keying into the fact that this is a negation, and they overfit on the idea that the presence of negation is a signal that you're in the contradiction relationship. So that's the sort of distressing thing again. Humans don't make these mistakes, but systems are very prone to them. Let me show you one more. This is a slightly different adversarial attack. In this case, we're going to modify the premise. So the original training example was a woman is pulling a child on a sled in the snow, that entails a child is sitting on a sled in the snow. I think that's pretty clear. For their adversarial attack, they just swapped the subject and the object. So the new premise is a child is pulling a woman on a sled in the snow. We would expect that to lead to the neutral label for this particular hypothesis. But what Nie et al. observed is that the systems are kind of invariant under this changing of the word order. They continue to predict entailment, revealing that they don't really know what the subject and the object were in the original example, and they kind of do something much fuzzier with the set of words that are in that premise. Remember, these are at the time the very best systems for solving these problems. These are very simple kind of friendly adversaries that they're stumbling with. So this could lead you to have two perspectives.
I showed you that Nick Bostrom one before, where we worry about superintelligent systems, but on the other hand, we might be living in a world that's more like the one presented in this lovely book from a roboticist and practitioner, Daniel H. Wilson, called How To Survive A Robot Uprising, where he gives all sorts of practical advice like wear clothing that will fool the vision system of the robot, or walk up some stairs, or drench everything in water, very simple adversarial attacks that reveal that these robots are not creatures which we should be fearful of. And I feel like I've just shown you a bunch of examples that are the analogs of wearing misleading clothing in the space of Natural Language Processing, revealing that our systems are not superhuman understanders, or communicators, or anything like that, but rather still, to this day, fairly superficial pattern matchers. Why is this all so difficult? It's hard to articulate precisely what is so challenging because this is probably deeply embedded in the whole human experience, but I think there are some pretty straightforward superficial things I can show you to just make alive for you how hard even the simplest tasks are. So here, I've got an imagined dialogue of the sort you would hope Siri would do well with. Where is Black Panther playing in Mountain View? Black Panther is playing at the Century 16 Theater. When is it playing there? It's playing at 2:00, 5:00, and 8:00. OK, I'd like one adult and two children for the first show. How much would that cost? It seems like the most mundane sort of interaction. You would not expect a human to have any problem with any of these utterances, but think about how much interesting stuff is happening in this little dialogue. We have domain knowledge that tells us that this is a place where movies might play and that this is the name of a movie. That's already very difficult. And we have anaphora from the third utterance to the first. When is it playing there? And I guess also into the second. With these pronouns, you need to figure out what they refer to in the discourse. Then you get this sequence of responses, again, with some anaphora back to earlier utterances. And then something really complicated happens here. "I'd like one adult and two children for the first show." "First show" refers back to the sequence of things that was mentioned here, very difficult. "One adult and two children" is not a request for human beings, although that's what the forms would look like, but rather is a request for tickets. So somehow in the context of this discourse, "one adult and two children" is referring to tickets and not to people. How much would that cost? That is a kind of complicated event description referring to a hypothetical event of buying some tickets for a particular show. That's the referent of this "that" here-- highly abstract, very difficult at the level of resolving it in the discourse, and then figuring out what its actual content is. And this is for the most mundane sort of interaction, to say nothing of the complicated things that, for example, you and I will do when we discuss this material in just a few minutes. So I think this is why we're actually quite far from the superintelligence that Bostrom was worried about. Here's our perspective. As I said, this is the most exciting moment ever in history for doing NLU. Why? Because there's incredible interest in the problems.
Because we are making incredibly fast progress and doing things and solving problems that we never could have even tackled 15 years ago. On the other hand, you do not have the misfortune of having joined the field at its very end. The big problems remain to be solved. So there's a resurgence of interest and explosion of products. The systems are impressive, but their weaknesses make themselves quickly apparent. And when we observe those weaknesses, it's an opportunity for us to figure out what the problem is, and that could lead to the really big breakthroughs. And you all are now joining us on this journey if you haven't begun it already. And for your projects, you'll make some progress along the path of helping us through these very difficult problems the field is confronting, even in the presence of all these exciting breakthroughs. NLU is far from solved. The big breakthroughs lie in the future. So I hope that's inspiring. Now, let me switch gears a little bit and talk about the things that we'll actually be doing in this course to help set you on this journey that we're all on. So we'll talk about the assignments, the bakeoffs, and the projects. The high-level summary here. Our topics are listed on the left. You can also see this reflected on the website. The one thing that I really do like about this particular plan is that it gives you exposure to a lot of different problems in the field, and also helps you with some tools and techniques that will be really useful no matter what problem you undertake for your final project. The same thing goes for the assignments. We're going to have three assignments, each with an associated bakeoff, which is a kind of competition around data. We're going to talk about word relatedness, cross-domain sentiment analysis, and generating color descriptions. This is a kind of grounded Language Understanding problem. Again, I think those are good choices because they expose you to a lot of different kinds of systems, techniques, model architectures, and so forth. And that should set you up really nicely to do a final project, which has three components, the literature review, an experimental protocol, and then the final paper itself. Our time for this quarter is somewhat compressed, so we'll have to make really good use of the time. But I think we have the schedule that will allow you to meaningfully invest in this preliminary work and still provide you with some space to do these final projects. Let's talk about the assignments and bakeoffs themselves. So there are three of them. Each assignment culminates in a bakeoff, which is an informal competition in which you enter an original model. The original model question is part of the assignment. You do something that you think will be fun or interesting, and then the bakeoff essentially involves using that system to make predictions on the held-out test set. The assignments ask you to build these baseline systems and then design your original system as I said. Practically speaking, the way it works is that the assignments earn you 9 of the 10 points, and then you earn your additional point by entering your system into the bakeoff. And the winning bakeoffs can receive some extra credit. The rationale for all of this, of course, is that we want to exemplify the best practices for doing research in this space and help you do things like incrementally build up a project with baselines, and then, finally, an original system. 
But I should say it should be possible, and it's actually pretty common, for people to take original systems that they developed as part of one of these assignments, and use them for their final project. Each one of the assignments is set up specifically to make that kind of thing possible, and productive, and rewarding. Let me show you briefly what the bakeoffs are going to be like. So the first one is word relatedness. The focus of that unit is on developing vector representations of words. You're going to start probably with big count matrices like the one you see here. This is a word-by-word matrix where all these cells give the number of times that these words, in this case, emoticons, co-occurred with each other in a very large corpus of text. The striking thing about this unit is that there is a lot of information about meaning embedded in these large spaces. You will bend, and twist, and massage these spaces and maybe bring in your own vector representations or representations you've downloaded from the web, and you will use them to solve a word-relatedness problem. So basically, you'll be given pairs of words like this with a score, and you will develop a system that can make predictions about new pairs. And the idea is to come up with scores that correlate with the held-out scores that we have not distributed, of course, as part of the test set. You'll upload your entry, and we'll give you a score, and then we'll look at what worked and what didn't across all of the systems. And the techniques that we'll explore are many, right? So we'll talk about reweighting, dimensionality reduction, vector comparison. You'll have an opportunity if you wish-- this is a brand new addition to the course-- to bring in BERT if you would like to. So there are lots of inspiring things to try building on the latest stuff that's happening in this space. The second bakeoff is called cross-domain sentiment analysis. This is a brand new bakeoff. I'm very excited to see how this goes. Here's how this is going to work. We want to be a little bit adversarial with your system. So there are two datasets involved, the Stanford Sentiment Treebank, which is movie review sentences. And we're going to deal with it in its ternary formulation, so it has positive, negative, and neutral labels. That's SST-3. Alongside that, I'm going to introduce a brand new dev/test split, previously unreleased, which is sentences from the Restaurant Review domain. It has the same kind of labels but, of course, it's very different from the Movie Review domain along many dimensions. So for the bakeoff, you'll have the SST-3 train set. We're going to give that to you, and you are welcome to introduce any other data you would like to introduce as part of this training. That is entirely up to you. We're also distributing, for this bakeoff, two dev sets, the SST-3 dev set, which is public already, and this brand new one of Restaurant Review Sentences. And you can introduce other development sets if you want as part of tuning your system. And then the bakeoff will be conducted as the best that people can do jointly on SST-3 and this new test set, which is, again, held-out. The idea here is that you'll not only be doing a really great kind of classification project involving sentiment, but also pushing your systems to adapt into new domains from the ones that they were trained on. Although, of course, part of that could be training in clever ways that do help you anticipate what's in this Restaurant Review data.
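To make the shape of that second bakeoff concrete, here is a minimal sketch of the kind of baseline you might start from: a bag-of-words classifier trained on SST-3-style examples and then checked on an in-domain dev set and an out-of-domain restaurant dev set. The toy sentences, the label strings, and the use of macro-F1 as the tracking score are illustrative assumptions on my part, not the official bakeoff code or data format.

```python
# A minimal cross-domain sentiment baseline (a sketch, not the official bakeoff code).
# The tiny example sentences and label strings below are hypothetical stand-ins for
# the SST-3 train set and the two dev sets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import f1_score

sst_train = [
    ("a gorgeous, witty film", "positive"),
    ("a tedious, overlong mess", "negative"),
    ("the film opens nationwide on friday", "neutral"),
]
sst_dev = [("sharp and consistently funny", "positive")]
restaurant_dev = [("the noodles were bland and overpriced", "negative")]

def split_xy(pairs):
    texts, labels = zip(*pairs)
    return list(texts), list(labels)

X_train, y_train = split_xy(sst_train)

# Bag-of-words features feeding a softmax (multinomial logistic regression) classifier.
model = Pipeline([
    ("feats", CountVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# The bakeoff rewards doing well on both dev sets jointly; macro-F1 per dev set is
# one reasonable way to track that during development.
for name, dev in [("SST-3 dev", sst_dev), ("Restaurant dev", restaurant_dev)]:
    X_dev, y_dev = split_xy(dev)
    print(name, f1_score(y_dev, model.predict(X_dev), average="macro"))
```

The interesting work, of course, is in what you swap in for the feature function and the classifier, and in whatever extra training data you decide to bring in to close the domain gap.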
And then the third bakeoff is a Natural Language Generation task, but it's a grounded Natural Language Generation task. So for this bakeoff, you'll be given a color context of three patches like this. And one of them is designated as the target. For the training data, we had people describe the target color in this context, and your task is to develop a system that can perform that same task, produce natural language utterances like this. I think this is a really cool problem because it's grounded in something outside of language, right? You're grounded in a color patch, which is just a numerical representation. And it's also highly context-dependent in that the choices people make for their utterances are dependent not only on the target-- the color they need to describe-- but the target in the context of these other two colors. And you can see that in these descriptions. This one is easy. So the person just said "blue." But in this case, since there are two blues, they said the darker blue one, kind of keying into the fact that the context contains two blue colors. This is an even more extreme case of that, dull pink, not the super bright one. The implicit reference is not only to the target but also to other colors that are distractors in that context. And then the final two are really extreme cases of this where for one and the same color in terms of its color representation, this person said purple and this person said blue in virtue of the fact that the distractors are different. So we have this kind of controlled Natural Language Generation task that we can see is highly dependent on the particular context that people are in here. And again, this will follow the same path here. I'm going to give you the whole model architecture as a kind of default. It's an encoder-decoder architecture where you'll have a machine learning system that consumes color representations and then transfers that into a decoding phase where it tries to produce an utterance. So here, you consumed a bunch of colors and produced the description light blue, but, of course, you'll be able to explore many variants of this architecture and really explore different things that are effective in terms of this representation and Natural Language Generation task. And again, as a bakeoff, it will work the same way. You'll do a bunch of development on training data that we give you, and then you'll be evaluated on the held-out test set that was produced in the same fashion but involves entirely new colors and utterances. Quick note on the original systems. As I said before, the original system is kind of a central piece of each one of the assignments. The homeworks really culminate in this original system, and that becomes your bakeoff entry. In terms of grading it, this is kind of hard, because we want you to be creative and try lots of things. So the way we're going to value these entries is that any system that performs extremely well on the bakeoff will be given full credit, even systems that are very simple, right? We can't argue with success according to the criteria that we've set up. So if the simplest possible approach to one of these bakeoffs turns out to be astoundingly good and you had to do almost no work to succeed, you, of course, get full credit. But that's not the only thing we value, right? So systems that are creative and well-motivated will be given full credit even if they don't perform well on the bakeoff data.
This is meant to be an explicit encouragement for you to try new things, to be bold, and be creative even if it doesn't numerically lead to the best results. In fact, some of the most inspiring and insightful things we've seen as part of these bakeoffs have been systems that didn't perform at the top of the heap, but harbored some really interesting insight that we could build on. And then, of course, systems that really are minimal, if you do very little and you don't do especially well at the bakeoff, will receive less credit. Specific criteria will depend on the nature of the assignment and so forth, and we'll try to justify that for you. This is the more subjective part. I think 1 and 2 really encode the positive part of the kind of values that we're trying to convey to you as part of these original system entries. And then you'll have project work. This occupies the entire second half of the course. At that point, the lectures, the notebooks, the readings, and so forth, are really focused on things like methods, metrics, best practices, error analysis, model introspection, and other things that will help you enrich the paper that you write. The assignments are all project-related-- the literature review, the experimental protocol, and the final paper. In many past years, we've had a video presentation, which has always been really rewarding. But unfortunately, I feel like, given the compressed time schedule that we're on, we just don't have time for even short videos, so we're going to focus on these three crucial components. And then, for exceptional final projects from past years that we've selected, you can follow this link. It's access-restricted, but if you're enrolled in the course, you should be able to follow the link and see some examples. And there's a lot more guidance on final projects in our course repository. I have a very long write-up of FAQs and other guidance about publishing in the field and writing for this class. And I have what is now a really inspiringly long list of published papers that have grown out of work people have done for this course, so you can check that all out here. Final words here by way of wrapping up. As I said, this is the most exciting moment ever in history for doing NLU. This course will give you hands-on experience with a wide range of challenging problems. I emphasize the hands-on thing. I think this is so important. If you want to acquire a new skill like this, it's all well and good to watch other people doing it, but the way you really acquire the skill is by having hands-on experiences yourself. So everything about the requirements and the materials is pushing you to have those hands-on experiences and become expert at that very fundamental level. For the final project, a mentor from the teaching team will guide you through those assignments. We'll be there to help you make choices, and set the scope for the project, and maybe push it towards something that you could one day publish. And there are many examples of successful publications deriving from this course. Our central goal, fundamentally though, is to make you the best, that is, the most insightful and responsible, NLU practitioner and researcher wherever you go next, into academia, or just into other classes, or on into industry to leverage these skills. We want to make you, as I said, the most insightful and responsible practitioner we can.
Vector_Comparison_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part 3 in our series on distributed word representations. We're going to be talking about vector comparison methods. To try to make this discussion pretty intuitive, I'm going to ground things in this running example. On the left, I have a very small vector space model. We have three words, A, B, and C. And you can imagine that we've measured two dimensions, dx and dy. You could think of them as documents if you wanted. There are two perspectives that you might take on this vector space model. The first is that, just at the level of raw frequency, B and C seem to be united. They are frequent in both the x and the y dimension. Whereas A is comparatively infrequent along both those dimensions. That's the first perspective. The second perspective, though, is more subtle. You might just observe that if we kind of correct for the overall frequency of the individual words, then it's actually A and B that are united. Because they both have a bias in some sense for the dy dimension. Whereas by comparison, C has a bias for the x dimension, again, thinking proportionally. Both of those are perspectives that we might want to capture, and different notions of distance will key into one or the other of them. One more preliminary: I think it's very intuitive to depict these vector spaces. And in only two dimensions, that's pretty easy. You can imagine that this is the dx dimension along the x-axis. And this is the dy dimension along the y-axis. And then I have placed these individual points in that plane. And then you can see graphically that B and C are pretty close together. And A is kind of lonely down here in the corner, the infrequent one. Let's start with Euclidean distance, a very common notion of distance in these spaces and quite intuitive. We can measure the Euclidean distance between vectors u and v if they share the same dimension n by just calculating the sum of the squared element-wise differences and then taking the square root of that. That's the math here. Let's look at that in terms of this space. So here we have our vector space depicted graphically, A, B, and C, and Euclidean distance is measuring the length of these lines. I've annotated them with the full calculations. But the intuition is just that we are measuring the length of these lines, the most direct path between these points in our high dimensional space. And you can see that Euclidean distance is capturing the first perspective that we took on the vector space, which unites the frequent items B and C as against the infrequent one A. As a stepping stone toward cosine distance, which will behave quite differently, let's talk about length normalization. Given the vector u of dimension n, the L2 length of u is the square root of the sum of the squared values in that vector. That's our normalization quantity there. And then the actual normalization of that original vector u involves taking each one of its elements and dividing it by that fixed quantity, the L2 length. Let's look at what happens to our little illustrative example. On the left here, I have the original count matrix. And in this column here, I've given the L2 length as a quantity. And then when we take that quantity and divide each one of the values in that vector by it to get its L2 norm, you can see that we've done something significant to the space. So they're all kind of united on the same scale here. And A and B are now close together. Whereas B and C are comparatively far apart.
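As a concrete illustration of those two calculations, here is a small NumPy sketch run on a toy matrix with the same qualitative profile as the running example (B and C frequent overall, A and B proportionally biased toward the dy dimension). The specific numbers are illustrative assumptions, not necessarily the values shown on the slides.

```python
import numpy as np

# Toy count matrix with the same qualitative profile as the running example:
# rows are the words A, B, C; columns are the dimensions dx and dy.
counts = {
    "A": np.array([2.0, 4.0]),
    "B": np.array([10.0, 15.0]),
    "C": np.array([14.0, 10.0]),
}

def euclidean(u, v):
    # Square root of the sum of the squared element-wise differences.
    return np.sqrt(np.sum((u - v) ** 2))

def length_norm(u):
    # Divide each element of u by the L2 length sqrt(sum_i u_i^2).
    return u / np.sqrt(np.sum(u ** 2))

# On raw counts, Euclidean unites the two frequent words B and C against A:
print(euclidean(counts["A"], counts["B"]))  # roughly 13.6
print(euclidean(counts["B"], counts["C"]))  # roughly 6.4

# After L2 norming, A and B (both biased toward dy) end up close together,
# while B and C move comparatively far apart:
normed = {w: length_norm(v) for w, v in counts.items()}
print(euclidean(normed["A"], normed["B"]))  # roughly 0.12
print(euclidean(normed["B"], normed["C"]))  # roughly 0.36
```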
So that is capturing the second perspective that we took on the matrix, where A and B have something in common as against C. And that has come entirely from the normalization step. And if we measured Euclidean distance in this space, just the length of the lines between these points, we would again be capturing that A and B are alike and B and C are comparatively different. Cosine kind of does that all in one step. So here is the cosine distance, or approximately a distance, as you'll see, between two vectors u and v of shared dimension n. This calculation has two parts. This is the similarity calculation, cosine similarity. And it is the dot product of the two vectors divided by the product of their L2 lengths. And then to get something like the distance, we just take 1 and subtract out that similarity. Again, let's ground this in our example. Here we have the original count vector space model. And what we do with cosine distance is essentially measure the angles between these lines that I've drawn from this origin point. And so you can see that cosine distance is capturing the fact that A and B are close together as measured by this angle. Whereas B and C are comparatively far apart. So again with cosine, we're abstracting away from frequency information and keying into that abstract notion of similarity that connects A and B as against C. Another perspective that you could take is just observe that if we first normalize the vectors via the L2 norm and then apply the cosine calculation, we changed the space as I showed you before. So they're all up here, kind of on the unit circle. And notice that the actual values that we get out are the same whether or not we did that L2 norming step. And that is because cosine is building the effects of L2 norming directly into this normalization here in the denominator. There are a few other methods that we could think about, or classes of methods. I think we don't need to get distracted by the details. But I thought I would mention them in case they come up in your reading or research. The first class are what I called matching-based methods. They're all kind of based on this matching coefficient. And then Jaccard, Dice, and Overlap are terms that you might see in the literature. These are often defined only for binary vectors. Here, I've given their generalizations to the real-valued vectors that we're talking about. And the other class of methods that you might see come up are probabilistic methods, which tend to be grounded in this notion of KL divergence. KL divergence is essentially a way of measuring the distance between two probability distributions. To be more precise, from a reference distribution p to some other probability distribution q. And it has symmetric notions, symmetric KL, and Jensen-Shannon distance, which is another symmetric notion that's based in KL divergence. Again, these are probably appropriate measures to choose if the quantities that you're thinking of are appropriately thought of as probability values. Now I've alluded to the fact that the cosine distance measure that I gave you before is not quite what's called a proper distance metric. Let me expand on that a little bit. To qualify as a proper distance metric, a vector comparison method has to have three properties. It needs to be symmetric. That is, it needs to give the same value for (x, y) as it does for (y, x). KL divergence actually fails that first rule. It needs to assign 0 to identical vectors.
And crucially, it needs to satisfy what's called the triangle inequality, which says that the distance between x and z is less than or equal to the sum of the distance between x and y and the distance from y to z. Cosine distance, as I showed it to you before, fails to satisfy the triangle inequality. And this is just a simple example that makes it intuitive. It just happens that this distance here is actually greater than the sum of these two values here, which is a failure of the statement of the triangle inequality. Now this is relatively easily corrected. But this is also kind of a useful framework. Of all the different choices that we could make, of all the options for vector comparison, suppose we decided to favor the ones that counted as true distance metrics. Then that would at least push us to favor Euclidean distance, Jaccard for binary vectors only, and Jensen-Shannon distance if we were talking about probabilistic spaces. And we would further amend the definition of cosine distance to the more careful one that I've given here, which satisfies the triangle inequality as well as the other two criteria. And by this kind of way of dividing the world, we would also reject matching, Jaccard, Dice, Overlap, KL divergence, and symmetric KL divergence as ones that fail to be proper distance metrics. And so that might be a useful framework for thinking about choices in this space. One other point in relation to this. This is obviously a more involved calculation than the one that I gave you before. And in truth, it is probably not worth the effort. Here's an example of just a bunch of vectors that I sampled from one of our vector space models. And I've compared the improper cosine distance that I showed you before on the x-axis with the proper cosine distance metric that I just showed you. And the correlation between the two is almost perfect. So there is essentially no difference between these two different ways of measuring cosine. And I think that they are probably essentially identical up to ranking, which is often the quantity that we care about when we're doing these comparisons. So my advice would be to stick with the simpler and less involved calculation. Let's close with some generalizations and relationships. First, Euclidean, as well as Jaccard and Dice with raw count vectors, will tend to favor raw frequency over other distributional patterns, like that more abstract one that I showed you with our illustrative example. Euclidean with L2-normed vectors is equivalent to cosine when it comes to ranking, which is just to say that if you want to use Euclidean and you first L2 norm your vectors, you're probably just doing something that might as well just be the cosine calculation. Jaccard and Dice are equivalent with regard to ranking. That's something to keep in mind. And then this is maybe a more fundamental point that you'll see recurring throughout this unit. Both L2 norming and also a related calculation which just creates probability distributions out of the rows can be useful steps, as we've seen. But they can obscure differences in the amount or strength of evidence that you have, which can in turn have an effect on the reliability of, for example, cosine, or Euclidean, or KL divergence. Right, these shortcomings might be addressed through weighting schemes though. But here's the bottom line. There is valuable information in raw frequency. If we abstract away from it, some other information might come to the surface.
But we also might lose that important frequency information in distorting the space in that way. And it can be difficult to balance these competing pressures. Finally, I'll just close with some code snippets. Our course repository has lots of handy utilities for doing these distance calculations and also length norming your vectors and so forth. And it also has this function called Neighbors in the VSM module. It allows you to pick a target word and supply a vector space model. And then it will give you a full ranking of the entire vocabulary in that vector space with respect to your target word, starting with the ones that are closest. So here are the results for "bad" using cosine distance in cell 12 and Jaccard distance in cell 13. And I would just like to say that these neighbors don't look especially intuitive to me. It does not look like this analysis is revealing really interesting semantic information. But don't worry, we're going to correct this. We're going to start to massage and stretch and bend our vector space models. And we will see much better results for these neighbor functions and everything else as we go through that material.
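To tie those snippets back to the calculations in this screencast, here is a minimal sketch of cosine distance and a neighbors-style ranking function. It follows the definitions described above, but it is a simplified stand-in rather than the course repository's exact API, and the toy matrix values are again illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Toy vector space model: rows are words, columns are context dimensions.
# (Illustrative values only; real spaces are far larger and sparser.)
space = pd.DataFrame(
    [[2.0, 4.0], [10.0, 15.0], [14.0, 10.0]],
    index=["A", "B", "C"],
    columns=["dx", "dy"])

def cosine_distance(u, v):
    # 1 minus the dot product of u and v over the product of their L2 lengths.
    sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - sim

def neighbors(word, df, distfunc=cosine_distance):
    # Rank the whole vocabulary by distance from `word`, closest first.
    # A simplified stand-in for the utility described above, not the repo's exact signature.
    target = df.loc[word].values
    dists = df.apply(lambda row: distfunc(target, row.values), axis=1)
    return dists.sort_values()

print(neighbors("A", space))
# A itself comes first (distance 0), then B, then C: cosine keys into the
# proportional similarity between A and B rather than their raw frequencies.
```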
Classifier_Metrics_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part 2 in our series on methods and metrics. We're going to be talking about classifier metrics. I'm sort of assuming that the metrics I'll be discussing are broadly familiar to us. I think that gives us a chance to step back and be reflective about what values these familiar metrics actually encode. Because that really is the name of the game here. No matter what kind of task you're working on or what the structure of your model is like, it's just fundamentally true that different evaluation metrics will encode different values, different goals you have for your system, and different kinds of hypotheses that you might be pursuing. You can hear in that that, really fundamentally, choosing a metric is a crucial aspect of any kind of experimental work. It's a fundamental step in how we operationalize hypotheses in terms of data, and models, and model comparisons. As a result, you should feel free, for whatever task you're working on, to motivate new metrics or specific uses of existing metrics, depending on what your goals for your experiments actually are. Relatedly, for established tasks, you'll probably feel some pressure to use specific well-established metrics. But you should always, as a scientist, feel empowered to push back if you feel that the accepted metrics are not reflective of your hypothesis or are distorting our notions of progress somehow. Because remember, areas of research can stagnate due to poor metrics. And so we have to be vigilant. We have to be on the lookout for cases in which the metrics we've accepted might be at odds with the actual goals we have for the research we're doing. Let's begin our discussion of classifier metrics by talking about confusion matrices, a pretty fundamental data structure for a lot of the calculations that we'll perform. So by convention, for my confusion matrices, I'll have the actual labels going across the rows here. And across the columns, I'll have the predictions from some classifier model. So you can see in this confusion matrix that there were 15 cases in which the model predicted positive and the actual label was positive, whereas there were 10 cases where the actual label was positive and the model predicted negative, and so forth for the other values in this table. I think that seems familiar. It's something we can take for granted. But we should remember that behind the scenes here, a threshold was imposed in order to create these categorical predictions. By and large, classifier models that we use today predict probability distributions over the labels. And so in order to create an actual categorical prediction, we decided, for example, that the label with the maximum probability would be the true one. And the result of that decision was used to aggregate this table. But, of course, different choices of that threshold might give very different results. And there might be contexts in which we want to explore the full range of probabilistic predictions. That's something I'll return to at the end of the screencast. Final note about this. It can be helpful in the context of confusion matrices to add a column for what's called support, which is simply the number of actual true instances that fall into each class. So there are 125 positive instances in this corpus, 35 negative, and over 1,000 that fall into the neutral category.
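Here is a small pandas sketch of that confusion matrix with its support column. Most of the cell values are numbers quoted in this screencast; the handful that are not stated explicitly are filled in so that the row sums match the quoted support counts, so treat the table as a reconstruction rather than the exact slide.

```python
import pandas as pd

# Rows are the actual labels, columns are the model's predictions. The cells not
# quoted directly in the lecture are reconstructed so the row sums match the
# stated support values (125, 35, and over 1,000).
cm = pd.DataFrame(
    [[15, 10, 100],
     [10, 15, 10],
     [10, 100, 1000]],
    index=["pos", "neg", "neutral"],
    columns=["pred_pos", "pred_neg", "pred_neutral"])

# The support column: the number of true instances in each class.
cm["support"] = cm.sum(axis=1)
print(cm)
#          pred_pos  pred_neg  pred_neutral  support
# pos            15        10           100      125
# neg            10        15            10       35
# neutral        10       100          1000     1110
```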
And that's already illuminating about how specific metrics might deal with that extremely imbalanced vector of support values. So let's start with accuracy-- by far, the most famous and familiar of all the classifier metrics. Accuracy is simply the number of correct predictions divided by the total number of examples. In terms of our confusion matrices, that is just the sum of all the values along the diagonal divided by the sum of all the values that are in this table. The bounds are 0 and 1, of course, with 0 the worst and 1 the best. In terms of the value encoded by accuracy, I would say it's an attempt to answer the question, how often is the system correct? And that kind of feeds into the weaknesses here. So the weaknesses are, first, there are no per class notions of accuracy, not directly. We just get a single holistic number. And relatedly, there is just a complete failure to control for class size. So you can see, for example, in this confusion matrix, that performance on the neutral class will completely dominate the accuracy values. And it's to the point, in this table, where no matter how much progress we make on the positive and negative classes, because they are so much smaller in terms of their support than the neutral class, that progress is unlikely to be reflected in our accuracy values. And that's why if you return to the value encoded, you can see that just at a raw fundamental level, it is simply answering how often is the system correct? Another thing to keep in mind is that for many classifier models, the loss for those models is what's called the cross-entropy loss. It's also called the negative log-loss in Scikit-learn. And that value is inversely related to accuracy. The takeaway there is that even as we might choose other metrics to compare models and evaluate models, we should keep in mind that our classifiers themselves are kind of engines for trying to maximize accuracy. And so they are likely to inherit whatever properties, and values, and strengths, and weaknesses are inherent in the accuracy calculation, which, as we'll see, could be at odds with our actual values for the system that we're developing. And that kind of feeds nicely into precision, recall, and F scores, which are attempts to make up for some of the weaknesses that you see in accuracy. We'll start with precision. This is a per class notion. For a class k, it's the correct predictions for k divided by the sum of all the guesses for k that were made by your model. So in terms of this confusion matrix, if we focus on the positive class here, the numerator is the number of correct predictions for that class, and we divide by the sum of all the values that are in this column. And for the negative class, we would repeat that. The numerator would be 15, and we would sum over the column. And finally, for neutral, the numerator would be 1,000. And we would again sum over this column. And that leads to this vector of precision values that you see along the bottom here. The bounds of precision are 0 and 1, approximately, with 0 the worst and 1 the best. There is an important caveat here though. Precision is technically undefined in situations where a model makes no predictions about a given class. Because in that situation, you're dividing by 0. And that's technically undefined. It is common practice to map those to 0, but we should keep in mind that we are making that extra decision. The value encoded is a kind of conservative one. We're going to penalize incorrect guesses for a certain class.
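As a quick illustration, here is a sketch of both of those calculations, accuracy and per-class precision, applied to the reconstructed confusion matrix from above; the expected values in the comments are approximate.

```python
import numpy as np

# The same reconstructed confusion matrix as above (rows: actual pos, neg, neutral;
# columns: predicted pos, neg, neutral).
counts = np.array([
    [15, 10, 100],
    [10, 15, 10],
    [10, 100, 1000]], dtype=float)

# Accuracy: correct predictions (the diagonal) over all examples.
accuracy = np.trace(counts) / counts.sum()
print(round(accuracy, 3))          # about 0.81, driven almost entirely by neutral

# Precision for each class k: the diagonal cell over its column sum,
# i.e. correct predictions for k over all the guesses of k.
precision = np.diag(counts) / counts.sum(axis=0)
print(np.round(precision, 2))      # about [0.43, 0.12, 0.90] for pos, neg, neutral
```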
So you can imagine that a failure mode there is to just rarely guess a certain class. That is the core weakness. You can achieve high precision for a class k simply by rarely guessing k. So we'll obviously need to offset that with some other pressure. And by and large, the offsetting pressure is recall. Recall is, again, a per class notion. For a class k, it's going to be the correct predictions for k divided by the sum of all the true members of k. So now we're going to operate rowwise. We focus on the positive class. Our numerator is 15, the number of true predictions for the positive class, divided by the sum of all the values along the row, that is, all the true members of the positive class. That gives us a recall value of 0.12. And we can repeat that for the other two rows. The bounds are 0 and 1, with 0 the worst and 1 the best. The value encoded is a permissive one. We want to penalize missed true cases. We would like to make a lot of predictions about a class in order to avoid leaving any out, so to speak. And that leads into the core weakness. We can achieve high recall for a class k simply by always guessing k. Never mind the mistakes. As long as we get all the actual cases into our predictions, we're doing well by recall. And you can hear in that that it's important to offset this pressure by something else. And that, standardly, is precision. And the way we offset these two pressures is typically with F scores. So F scores are a harmonic mean of the precision and recall scores. It's again a per class notion. And it has this weighting value, beta. If we want to evenly balance precision and recall, then we set beta to 1. So here's that confusion matrix again. And along this column here, I've given the per class F1 values here. The bounds are 0 and 1 as before, with 0 the worst and 1 the best. And you can count on the fact that the F1 score for a class will always fall between the precision and recall values, because it's a kind of average. It's the harmonic mean. What's the value encoded? The best way I can say this is that we're essentially trying to answer the question, for a given class k, how much do predictions for k align with the true instances of k? That is aligning with both precision and recall as pressures. And then we can use the beta value to control how much weight we place on precision versus recall. What are the weaknesses of F scores? Well, I can really think of two. The first is that there's no normalization for the size of the dataset because of the way we use the denominators for the row and column sums. And relatedly, for a given class that we decide to focus on, we actually ignore most of the data that's in the table. Consider the fact that if we decided to calculate the F1 score for the positive class, we pay attention to these column values and these row values, but we completely ignore these four values here. They're just not involved in the calculation at all. And as a result, the positive class F1 score might give a distorted picture of what the model's predictions are actually like in virtue of the fact that it leaves out so much of the data here, as you can see. Now because F scores are a per class notion, I think, that's useful in the sense that it gives us a perspective on each one of the classes separately. But for many kinds of model evaluations, we need a summary number, a single number that we can use to compare models and assess overall progress. So we're going to do some kind of averaging.
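And here is the matching sketch for recall and the F-beta calculation on the same reconstructed matrix; beta is the weighting value just described, with beta equal to 1 balancing precision and recall evenly.

```python
import numpy as np

# Same reconstructed confusion matrix again.
counts = np.array([
    [15, 10, 100],
    [10, 15, 10],
    [10, 100, 1000]], dtype=float)

precision = np.diag(counts) / counts.sum(axis=0)

# Recall for each class k: the diagonal cell over its row sum,
# i.e. correct predictions for k over all the true members of k.
recall = np.diag(counts) / counts.sum(axis=1)
print(np.round(recall, 2))         # about [0.12, 0.43, 0.90]

# F-beta: the weighted harmonic mean of precision and recall; beta=1 balances them evenly.
def f_beta(p, r, beta=1.0):
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(np.round(f_beta(precision, recall, beta=1.0), 2))  # per-class F1 scores
```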
And I'd like to offer you three ways that we might average these F scores. Macro-averaging, weighted averaging, and micro-averaging. And as you'll see, these encode quite different values about how we want to think about the F scores. Macro-averaging is a averaging that we've done at various points throughout the quarter. It is simply the arithmetic mean of all the per category F1 scores. So it's just the mean of the values along this column. Its bounds are 0 and 1 with 0 the worst and 1 the best. What value does encode? Well, it's the same values that we get from F scores, plus the additional and non-trivial assumption that all of the classes are equal regardless of size differences between them, right? And that kind of feeds into the weaknesses here. A classifier that does well only on the small classes might not actually do well in the real world. If you imagine counterfactually that for our given model here, we had really outstanding F1 scores for positive and negative and really low for neutral. That might really be at odds with how this classifier would behave in the world, assuming that most of the examples that are streaming in are in the neutral category. Relatedly, a classifier that does well only on large classes might do poorly on the small, but nonetheless vital classes that are in our data. And that just reflects the fact that very often in NLP, it's the small classes that are the most precious, the ones that we care about the most. And we're not reflecting that kind of asymmetry in our values by simply taking the average of all these F scores. Weighted average F scores will give a very different perspective on model performance. In this case, we are again just going to take an average of the F1 scores, but now weighted by the amount of support for each one of the classes. That, again, has bound 0 to 1 with 0 the worst and 1 the best. The value encoded is the same as the values that we get for the F scores, but now with the added assumption that the size of the class as the amount of support really does matter. And that's going to feed into the weaknesses. And the fundamental thing here is that large classes will dominate. Just as with accuracy, the larger our classes, the more it's going to contribute to our overall summary number. And that can lead to the kind of problematic situation, where the small classes are just not relevant for the evaluation metric. That could reflect your values because. If what you really care about is raw rate of correct predictions, you might want to weight the larger classes more heavily. But again, for many contexts in NLP we really care about how much progress we can make on the small, but nonetheless important classes. And so in those contexts, weighted averaging is probably not the right choice. The final averaging scheme that I would like to consider is micro-average F scores. This will be very similar to weighted averaging of F1 scores and is directly connected to accuracy. The way this works is a little bit involved. We start with this core confusion matrix. And we're going to break it down into three smaller confusion matrices, one per class. So you can see this one on the left here is for the positive class. The yes'es are 15 and the no's are the sum of these two values here along this row. The no's are the 20, which is the sum of these two values. And then there's no no categories, for all the remaining data in this quadrant here. We repeat that same procedure for the negative class and for the neutral class. 
And then we simply sum up those three smaller tables into one big yes-no confusion matrix and calculate the F1 scores per category. That gives us two scores here, one for yes and one for no. The bounds on this are 0 and 1, with 0 the worst and 1 the best. The value encoded is really easy to state: micro-averaged F1 scores for the yes category are numerically equivalent to accuracy scores. So this is identical in terms of that metric. And we have an additional problem now: we have the same kind of value reflected as we have for the weighted F scores or for accuracy, but we have also brought in an additional source of uncertainty, which is that we have a number for the yes category and a number for the no category, and hence no single summary number. The convention in the literature is to focus on the yes category, but that simply brings us back to accuracy with a more involved calculation. So that's obviously not very productive. And I would say as a result here, the two real choices that you want to make are between macro-averaging and weighted averaging of your F1 scores. And again, that will come down to what your fundamental values are and what hypotheses you're pursuing. The final point I want to make is that thus far, we have operated in terms of the confusion matrix, which involved imposing a threshold on probabilistic predictions in order to create categorical values that we could then compare with precision, and recall, and so forth. Precision-recall curves offer a fundamentally different perspective. In this case, instead of imposing one threshold, we take every possible value that's predicted by our classifier to be a potential threshold, and essentially create a bunch of confusion matrices based on that successive series of thresholds. And then we can plot the trade-offs between precision along the y-axis and recall along the x-axis for all those different thresholds. That can be really illuminating in terms of helping us see how our system trades precision and recall against each other, and it can help us find, based on the values that we have about our problem and our goals, what the optimal balance between precision and recall actually is. And then if you do need a single summary number for this entire curve, average precision, which is implemented in Scikit-learn, is the standard way of summarizing the curve with a single number without imposing the single threshold that so strongly shaped all of the previous metrics we discussed.
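To tie these pieces together, here is a hedged sketch of the three averaging schemes and of average precision in Scikit-learn. All of the labels and scores below are made up; the only claims carried over from the discussion are that macro averaging treats classes equally, weighted averaging lets large classes dominate, and micro averaging reproduces accuracy.

```python
# Minimal sketch of the averaging schemes and of average precision, using
# scikit-learn. All labels and scores below are invented for illustration.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_recall_curve, average_precision_score)

# A hypothetical, imbalanced multiclass problem.
y_true = ["pos"] * 10 + ["neg"] * 10 + ["neu"] * 80
y_pred = (["pos"] * 5 + ["neu"] * 5 +   # half the positives missed
          ["neg"] * 5 + ["neu"] * 5 +   # half the negatives missed
          ["neu"] * 80)                 # the big neutral class handled perfectly

print("macro   :", f1_score(y_true, y_pred, average="macro"))     # classes weighted equally
print("weighted:", f1_score(y_true, y_pred, average="weighted"))  # big classes dominate
print("micro   :", f1_score(y_true, y_pred, average="micro"))     # numerically the same as accuracy
print("accuracy:", accuracy_score(y_true, y_pred))

# Precision-recall curves and average precision apply to binary decisions with
# real-valued scores (for example, the probability assigned to one class).
y_true_bin = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
precision, recall, thresholds = precision_recall_curve(y_true_bin, scores)
print("average precision:", average_precision_score(y_true_bin, scores))
```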
DynaSent_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRIS POTTS: Hello, everyone. Welcome to part 4 of our series on supervised sentiment analysis. This is the second screencast in the series that is focused on a dataset for sentiment. And that dataset is DynaSent. This video could be considered an optional element in the series. I'm offering it for two reasons really. First, this is a new data set that I helped produce. And I would love it if people worked on it. It would be great to see some new models, new insights. All of that would help push this project forward in interesting ways. The second reason is more practical. I think that this data set could be useful to you as you work on the assignment and the associated bake-off. You could use the data set itself for supplementary training data. You could use it to evaluate your system. And as you'll see, there are a few points of conceptual connection between this data set and the brand new dev and test sets of restaurant sentences that are part of the bake-off this year. So let's dive in. Here's a project overview. First, all the data, code, and models are available on GitHub at this link. This dataset itself consists of about 122,000 sentences. They are across two rounds. And I'm going to cover what each round means. And each of the sentences has five gold labels in addition to an inferred majority label where there is one. And I'll return to that as well. I think that's an interesting aspect to this kind of data collection. The associated paper is Potts et al. 2020, which I encourage you to read if you want to learn even more about this dataset and how, in particular, it relates to the Stanford Sentiment Treebank, our other core dataset. And another ingredient here, as you'll see when we get to round two, is that this is partly an effort in model in the loop adversarial data set creation. For round two, crowd workers interacted with the model attempting to fool it and, thereby, creating sentences that are really difficult and are going to challenge our models in what we hope are exciting and productive ways. So here's a complete project overview. Let me walk through it quickly. And then we'll dive into the details. We begin with what we've called Model 0, which is a RoBERTa model that's fine-tuned on a bunch of very large, sentiment benchmark data sets. The primary utility of Model 0 is that we're going to use it as a device to find challenging, naturally occurring sentences out in a large corpus. And then we human validate those to get actual labels for them. The result of that process is what we hope is a really challenging round 1 dataset of naturally occurring sentences that are hard for a very good sentiment model like Model 0. On that basis, we then train a Model 1, which is similar to Model 0 but now extended with that round 1 training data. So we hope that, in bringing in that new data and combining it with the sentiment benchmarks, we get an even stronger model. That is the model that crowd workers interacted with on the Dynabench platform to try to create examples that are adversarial with respect to Model 1. So they ought to be really difficult. We feed those through exactly the same human validation pipeline. And that gives us our second round of data. So two rounds of data that can be thought of as separate problems are merged together into a larger data set. I think we're kind of still deciding how best to conceptualize these various data aspects. So let's look at round 1 in a little more detail. 
This is where we begin with Model 0, and try to harvest interesting naturally occurring sentences. We sort of run Model 0 as a RoBERTa-based classifier. And its training data are from customer reviews, which is small, the IMDB dataset, which I linked to in an earlier screencast, SST-3, which you saw in the previous screencast, and then these two very large external benchmarks of product and service reviews from Yelp and Amazon. You can see that they're very big indeed. And the performance of Model 0 on the datasets, these are our three external data sets. It's pretty good. They range from the low 70s, for SST-3, to the high 70s for Yelp and Amazon. So this is a solid model. And I will say, impressionistically, if you download Model 0 and play around with it, you will find that it is a very good sentiment model indeed. So we used Model 0 to harvest what we hope are challenging sentences. And for this, we used the Yelp academic dataset which is a very large collection of about 8 million reviews. And our heuristic is that we're going to favor in our sampling process harvesting sentences where the review was one star, so it's very low, and Model 0 predicted positive for a given sentence and, conversely, where the review is five stars, and Model 0 predicted negative. We are hoping that that at least creates a bias for sentences that are very challenging for Model 0, where it's actually making a wrong prediction. We're not going to depend on that assumption. Because we'll have a validation step. But we're hoping that this is as kind of as adversarial as we can be without actually having labels to begin with. This is a picture of the validation interface. You can see that there were some examples given and a little bit of training about how to use the labels. And then, fundamentally, what crowd workers did is they were prompted for a sentence, and they made one of four choices, positive, negative, no sentiment, which is our notion of neutral, and mixed sentiment, which is indicating a sentence that has a balance of positive and negative sentiments expressed in it. I think that's an important category to single out. We're not going to try to model those sentences. But we certainly want crowd workers to register that kind of mixing of emotions where it appears. So here's the resulting dataset. And because we got five gold labels for every sentence, there are two perspectives that you can take. The first one I've called distributional train. And this is where, essentially, we take each one of the examples and reproduce it five times for each of the labels that it got. So if an individual sentence got three positive labels, two negative, then we would have five examples, three labeled positive and three labeled negative, with the actual text of the example repeated five times. What that is doing is essentially simulating having a distribution over the labels. And for many classifier models, that is literally the same as training on a distribution of the labels as given by our crowd workers. I think this is an exciting way to bring in uncertainty and capture the fact that there might be kind of inherent disagreement among the crowd workers that we want our model to at least grapple with. And in the paper, as we discuss, this gives better models than training on just the majority labels. But you can take a more traditional view. So majority label here means that at least three of the five workers chose that label. That gives you 94,000 or 95,000 sentences for training. 
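Before looking at the dev and test splits, here is a minimal sketch of the difference between the distributional and majority-label views of the training data. The sentences and vote counts are invented rather than drawn from the released DynaSent files; the only assumptions carried over from the text are that each sentence has five votes and that a majority means at least three of the five.

```python
# Minimal sketch of the two training views described above, using made-up
# (sentence, label-count) pairs rather than the real DynaSent data.

rows = [
    ("The fries were amazing.",    {"positive": 4, "negative": 0, "neutral": 1}),
    ("Service was fine, I guess.", {"positive": 1, "negative": 1, "neutral": 3}),
    ("Loud room but great staff.", {"positive": 2, "negative": 2, "neutral": 1}),
]

# "Distributional" training data: repeat each sentence once per collected label,
# which simulates training on the full label distribution.
distributional = [(sent, label)
                  for sent, counts in rows
                  for label, n in counts.items()
                  for _ in range(n)]

# Majority-label training data: keep a sentence only if some label got at
# least 3 of the 5 votes.
majority = [(sent, max(counts, key=counts.get))
            for sent, counts in rows
            if max(counts.values()) >= 3]

print(len(distributional))  # 15 rows: every vote becomes a training instance
print(majority)             # only the sentences with a 3-of-5 majority
```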
And then these dev and test sets have 3,600 samples each. And presumably, we would predict just the majority label for them. What's more open is how we train these systems. And in the end, what we found is that 47% of these examples are adversarial with respect to Model 0. And as you'll see, the dev and test set are designed so that Model 0 performs at chance on them. Yeah, that's the Model 0 versus the human. So here's a summary of the performance. I showed you these categories before. And I'm just signaling that we have, by design, ensured that Model 0 performs at chance on round zero. We could compare that to our human baseline. For this, we kind of synthesized five annotators and did pairwise F1 scoring for them to get an estimate of human performance that is on the same scale as what we got from Model 0 up here. And we put that estimate of 88% for the dev and test sets. I think that's a good conservative number. I think if you got close to it, that would be a signal that we had kind of saturated this round. And we'd like to think about additional dataset creation. I do want to signal, though, that I think this is a conservative estimate of how humans do. And one indicator of that is that, actually, 614 of the roughly 1,200 people who worked on this task for validation never disagreed with the majority label, which sort of starts to suggest that there are humans who are performing perfectly at this task, putting the set at pretty low bound. And here are some example sentences. These are fully randomly sampled with the only bias being that I set a length restriction, so that the slide would be manageable. These are the same examples that appear in the paper, where we needed to fit them all into a pretty small table. I think this is illuminating though. So it's showing all the different ways that Model 0 could get confused with respect to the majority response. And I would like to highlight for you that there is a real discrepancy here on the neutral category. What we find is that, because Model 0 was trained on large external benchmarks, its notion of neutral actually mixes together things that are mixed sentiment and things that are highly uncertain about the sentiment that is expressed, for whatever reason. So you get a lot of borderline cases and a lot of cases where humans are kind of inherently having a hard time agreeing about what the fixed sentiment label would be. I think that DynaSent is doing a better job of capturing some notion of neutral in these labels over here. And we should be a little wary of treating three-star reviews and things like that as a true proxy for neutrality. This is a good point to signal that the validation and test sets for the bake off of the restaurant sentences were validated in the same way as DynaSent. So those sentences will have the same kind of neutrality that DynaSent has, which could be opposed to the sense of neutrality that you get from the Stanford Sentiment Treebank, which was, of course, underlying we kind of gathered in this setting of having a fixed five-star rating scale. So that's round 1. That's all naturally occurring sentences. Let's turn to round 2. So recall that we benefit from round 1 at this point by training a brand new model on all those external datasets plus the round 1 dataset. And then we have workers on Dynabench interact with this model to try to fool it. And we validate the resulting sentences to get our round 2 data set. So Model 1 is, again, a RoBERTa-based classifier. 
What we've done for our training here is, more or less, carry over what we did for the first round. Except, we have upsampled the SST to give it more weight. And we have dramatically upsampled the distributional labels from our round 1 dataset, effectively, trying to give it equal weight as all of these other datasets combined in the training procedure. So we're trying to get a model that, as a priority, does really well on our round one dataset. Here is a look at the performance of this model. And first, I would just note that it's doing well on round 1. We're at about 81%, which is a little below humans but certainly much better than the chance performance, by design, that we set up for Model 0. I do want to signal, though, that we have a kind of drop in performance for a few of these categories. You can see that especially for Yelp and Amazon, where Model 0 was at about, for example, 80 here. Model 1 dropped down to 73. And it's a similar picture for dev. And, more or less, that's repeated for Amazon with a drop from about 76 to 73 and 77 to 73, similarly. So we have a trade off in performance that I believe traces to the fact that we are performing some changes to the underlying semantics of the labels. But that's something to keep in mind. And you can see that there's a tension here as we try to do well at our dataset versus continuing to do well on these fixed external benchmarks. Here is the Dynabench interface. And there's one thing that I want to note about it. This is the stock interface. But we've actually concentrated on a condition that we call the prompt condition, where workers, instead of having to just write a sentence as a blank slate, sit down to an empty buffer and try to fool the model, they were given an inspirational prompt, which was an attested sentence from the Yelp academic data set, and invited to modify that sentence if they chose in order to achieve their goal of fooling the model in a particular way. And this proved to be vastly more productive. It led to more diverse and realistic sentences. I think we've essentially freed the crowd workers from the creative burden of having each time to come up with a completely new sentence. And we're hoping that this procedure leads to fewer artifacts, more diversity, and more realism for this adversarial dataset collection procedure. Our validation pipeline was exactly the same as round 1. And here is the resulting data set. It's a little bit smaller because this kind of adversarial dataset collection is hard. And you can see how good Model 1 is. It was actually pretty hard for crowd workers to fool this model. They did so only about 19% of the time. Here's the dataset for distributional training. You have about 93,000 sentences. And if you go for the majority-label training, you have about 19,000. And the dev and test sets are smaller. But again, the reason they're smaller is that they are designed to set Model 1 as having chance performance on this data set. And so that's what I'll flesh out here. You can see that this model chance performance, I showed you before that it's doing pretty well on round one. And we had that kind of tension with the external benchmarks. In terms of human performance, we're at about 90 using that procedure of synthesized, kind of averaged F1 values. And I would just note, again, that that's certainly conservative. In that, almost half of the workers never disagreed with the majority label. 
So it is certainly within the capacity of individual humans to perform essentially perfectly on this data set. But 90 is, nonetheless, a good signpost for us as we think about hill climbing and launching subsequent rounds of DynaSent. And here are some short examples. And I think they make the same point that our neutral category is more aligned with the semantics of what we mean when we identify neutral sentences and less heterogeneous than you get from naturally occurring, neutral sentences derived from star rating metadata and so forth. So I'm hopeful that this is a kind of positive step toward getting true ternary sentiment. But we should be aware that this label shift has happened in these data sets. And the final thing I want to say is just to reiterate that, if people do exciting work with this dataset and start to make real progress on the existing rounds, that would be our cue to launch new rounds. The Dyna in DynaSent is that we would like to have an evolving benchmark, not one that's static but rather responsive to progress that's made in the field and the evolving needs of people who are trying to develop practical sentiment analysis systems. So do let us know what kind of progress you make and what you discover.
General_Practical_Tips_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone, welcome to part 2 in our series on supervised sentiment analysis. The screencast is going to focus on some general practical tips for doing work in this space, especially focused on setting up a project and doing kind of pre-processing of your data. So first I just wanted to give you links to a whole bunch of benchmark data sets in the space. We're going to concentrate on the SST and DynaSent, but there are a lot of other choices you could make both for developing original systems and also supplementing training data that you've got for a particular application. Some of these data sets are really, really large, and they cover a diversity of domains. So these could be important assets for you. In a similar spirit, there are lots of sentiment lexicons out there. They cover different emotional dimensions and different aspects of the problem. And they too could be used to help you with powerful featurization. They could supplement features that you've created or help you group your vocabulary into interesting subcategories that would be powerful for making sentiment predictions. And these range from simple word lists up to highly structured multi-dimensional lexicons. Now for a first pre-processing step, I thought we would just talk a little bit about tokenization because I think that this can be a definitional choice that really affects downstream success. So just as a running example here, let's imagine we start with this raw text, which is a kind of imagined tweet. We have an @ mention here. And then you can see that some of the markup has gotten a little bit garbled. We have an emoticon that looks sort of obscured and a link at the end. I think as a very preliminary step, even before we tokenize it, we might want to isolate some of that markup and replace the HTML entities. It's a pretty easy thing that you can do that could really make a difference. Now we've got an apostrophe. We've got our emoticon intact. And we still have the link and other things in here. So even before you do that, you might check to see whether a simple replacement of the HTML entities would make a difference in your data. Now we begin the tokenization question. And I think a good baseline choice here would be simply whitespace tokenizing. I think we're going to split on whitespace and treat all the resulting strings as tokens. So that would take our raw text up here and split it up. As you can see on these independent lines, this looks OK to me. So we're going to have a problem with our @ mention, because it has this colon on the end. So we might miss the fact that this is the actual @ mention. The unigrams look OK, although the date has been split apart. We've preserved our hashtag. We've got this token that might appear only once even though there's a clear consistent signal there. We do have our emoticon. And our link is mostly intact although this period could be disruptive if we actually want to follow the link, because it could still go on onto the end of the URL. Treebank tokenizing is another very common scheme in NLP I would say at this point largely for historical reasons. The way treebank tokenizing works is it takes this raw text and splits it up into a whole lot of tokens, right? In comparison with whitespace, we have a lot of distinct pieces here. And this really looks kind of problematic, right? So we have destroyed our @ mention, we don't have that username anymore. It does this interesting thing with words like "can" that they get split apart into two tokens. 
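To make those two baselines concrete, here is a small sketch comparing whitespace splitting with NLTK's Treebank-style tokenizer on a made-up, tweet-like string in the spirit of the example above (not the exact one from the slide).

```python
# Minimal sketch comparing whitespace and Treebank-style tokenization on a
# made-up, tweet-like string. Requires: pip install nltk
from nltk.tokenize import TreebankWordTokenizer

raw = "@nluers: can't wait for the Jun 9 #projects! YAAAAAY!!! >:-D http://example.com/demo."

# Baseline: split on whitespace and treat the resulting strings as tokens.
print(raw.split())

# Treebank-style tokenization: splits contractions and punctuation aggressively,
# which tends to break apart @-mentions, emoticons, and URLs.
print(TreebankWordTokenizer().tokenize(raw))
```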
We've lost our date. We have lost our hashtag. This is possibly good. So YAAAAAAY has been split up according to its punctuation. So we have now four exclamation marks separated out from this word here. But our emoticon is completely lost and our link has been really destroyed. So this looks problematic from the point of view of accurate featurization and also doing things with social media. So that kind of brings me to what we might want from what I've called the sentiment-aware tokenizer. We would like to isolate emoticons clearly because they can be really sentimentally-- We want to probably respect Twitter and other domain specific markup because that's often the space in which-- our data come from, the kind of place we want to make predictions in. In a similar spirit, you might take advantage of underlying markup. Maybe don't filter off the HTML, because there could be an important signal there. You might be aware that the website or data producer might have done some pre-processing of their own that might disrupt things like curses which could of course carry a lot of important sentiment information. You might want to preserve capitalization because of course that could be used for emphasis. In a similar spirit, you might want to regularize emotive lengthening like YAAAAAY down to just three characters here to capture that it is an emotive lengthening but also regularize all those distinct tokens. And then there's a stretch goal. Although this might be less important in the era of contextual models, you might think about capturing multiword expressions that carry sentiment. Just think of an example like "out of this world," which is positive. But none of its component pieces are positive. So many models will miss that that is conveying clear sentiment, whereas with a clever tokenization scheme, you might capture that as one single token. So here's a simple example that meets a lot of those goals here for a sentiment-aware tokenizer. We begin from our usual raw text. We normalize and preserve the @ mention. We keep most of these words intact and we kind of capture that June 9 thing was a date. Preserve the hashtag, of course. We're treating all these potentially emotion-laden punctuation marks as separate unigrams. I think that could be good. Of course capture the emoticon and capture the link, right? And if you want something that meets more or less all these criteria, except I think the date normalization, you could just use the nltk TweetTokenizer. It's a good simple choice that you could make that I think will be useful for sentiment analysis. And to quantify that a little bit, here's some experimental evidence that I think is going to be relevant to the kind of work that you all are doing. So my data is OpenTable. That's restaurant review, short ones. I've got 6,000 reviews in my test set. And what I'm doing along the x-axis here is varying the amount of training data that these systems can see. It's simply a Softmax classifier and my primary manipulation is I have the sentiment-aware tokenizer in orange, treebank in green, and whitespace in gray. And the picture is pretty clear. Right along the x-axis, we have accuracy. It's a balanced problem. And what you can see is that the sentiment-aware tokenizer is the clear winner here, especially where training data are sparse. In the limit of adding lots of training data, I think we can make up for a lot of shortcomings of tokenizers because we see a lot of redundancy in the training data. 
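And here is the sentiment-aware option just mentioned, NLTK's TweetTokenizer, run on the same made-up string. The flags shown are the standard ones; treat the final comment as an expectation about typical behavior rather than a guarantee for every input.

```python
# Minimal sketch of a sentiment-aware choice: nltk's TweetTokenizer.
# Requires: pip install nltk
from nltk.tokenize import TweetTokenizer

raw = "@nluers: can't wait for the Jun 9 #projects! YAAAAAY!!! >:-D http://example.com/demo."

# preserve_case=True keeps capitalization (useful for emphasis); reduce_len=True
# shortens emotive lengthening like YAAAAAY to three repeated characters.
tokenizer = TweetTokenizer(preserve_case=True, reduce_len=True, strip_handles=False)
print(tokenizer.tokenize(raw))
# The @-mention, hashtag, emoticon, and URL should generally survive as single tokens.
```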
But where data are sparse, the sentiment-aware tokenizer is clearly a good choice. And another thing I would add is that because it produces more intuitive tokens, the sentiment-aware models might be more interpretable in some sense. And to really connect with the homework that you all are doing in the bakeoff, this is what happens when we go across domains. So here I'm training on OpenTable restaurant reviews, but I'm going to test on movie review sentences. Otherwise, this is the same experimental paradigm. Because of the cross-domain setting, the results are a little bit more chaotic. But I think, again, the sentiment-aware tokenizer is a clear winner, with the largest gains where training data are a little bit sparse. And that's the expected picture. So be thoughtful about tokenizing. As a counterpoint to that, I've called the section on stemming the dangers of stemming, because what I want to try to do is convince you not to stem your data. Stemming is a kind of pre-processing technique that collapses distinct word forms. There are three common, easy-to-use algorithms for this: the Porter stemmer, the Lancaster stemmer, and the WordNet stemmer. And my criticisms are largely leveled at Porter and Lancaster. Here is the bottom line. In doing this kind of stemming, you are apt to destroy many important sentiment distinctions, making this a counterproductive pre-processing step. On the other hand, the WordNet stemmer does not have this problem. It's much more conservative, but it also doesn't really do enough to make it worthwhile. It's costly to run and has some requirements that might make it simply not worth it. And I would say that the bottom line here for stemming is that, in an era where we have very large sentiment datasets, the function of stemming would be to collapse the size of your vocabulary and make learning easier in small domains, but we mostly don't confront that problem anymore. But just to drive home this point, here are some examples focused on the Porter stemmer, of cases where running the Porter stemmer actually collapses clear sentiment distinctions according to the Harvard Inquirer, which is one of those lexicons I mentioned before. We've got defense and defensive: they get collapsed down into the funny non-word defens. Extravagance and extravagant, which have different sentiment, get collapsed down into a word fragment, and so forth for these other examples. I think this is showing that in pre-processing your data, you might be removing some important sentiment signals. The Lancaster stemmer uses a very similar strategy and has arguably even more problems in this space. Here we've got the positive word compliment and the word complicate. According to the Harvard Inquirer again, they could both get collapsed down into what is a completely distinct word, comply. That should be concerning for many reasons. And the other examples make a very similar point. The WordNet stemmer I mentioned before, I think this actually has something going for it. There might be cases where you'd want to use it. It's high precision, and it requires word, part-of-speech pairs. And the general issue is just that it removes some comparative morphology. That's the only thing you might worry about for sentiment. But otherwise it's going to take forms like exclaims, exclaimed, and exclaiming and collapse them down to a common base. That could be a useful compression of your feature space. It will leave exclamation alone, which I think is good.
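For reference, here is a sketch of how the three options are exposed in NLTK. The word list is made up for illustration, and the WordNet lemmatizer additionally requires the WordNet data to be downloaded.

```python
# Minimal sketch of the three stemming/lemmatizing options discussed above.
# Requires: pip install nltk, plus nltk.download('wordnet') for the lemmatizer.
from nltk.stem import PorterStemmer, LancasterStemmer, WordNetLemmatizer

words = ["defensive", "extravagant", "exclamation", "happier"]

porter = PorterStemmer()
lancaster = LancasterStemmer()
wordnet = WordNetLemmatizer()

for w in words:
    print(w,
          porter.stem(w),                 # aggressive; can produce non-words
          lancaster.stem(w),              # even more aggressive
          wordnet.lemmatize(w, pos="a"))  # conservative; needs a part of speech
```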
Similarly for these things, they all get preserved across the two verb forms but we preserve the adjective is different. I think that could be good. And as I said, the only concern would be that happy, happier, and happiest all go down into their base form whereas I think these could encode different gradations of sentiment that you might want to preserve. That's worth some thought, but overall I think you probably want to avoid doing stemming. And to bring that home, let's return to my experimental paradigm using a Softmax classifier, OpenTable reviews, 6,000 of them in my test set, and here along the x-axis, I'm varying the amount of training data I have. And I think what you see is that the Porter and Lancaster stemmer in purple and black respectively are kind of forever behind, right, versus just simply sentiment-aware tokenizing. It gives you a lead. The lead is especially clear as you get out of this very sparse domain here with very few training instances. To close, just a few other pre-processing techniques that you might think about. So you could part-of-speech tag your data in the spirit of trying to capture more sentiment distinctions that you might capture otherwise. So just for example, arrest, like arresting as an adjective is positive, but arrest as a verb is typically negative. Fine as an adjective is positive, but to incur a fine as a noun is negative, and so forth. You can see that some sentiment distinctions actually do turn on the part of speech of the word. So treating all of your unigram features as based in word part-of-speech tag pairs could be useful for preserving some of these distinctions. Again as a pre-processing step to help your model be more attuned to these points of variation in comparison. But there are limits even to this, right? So there is just some cases on the slides where even within the same part of speech, we have an adjective that in one sense is positive and another negative. For example, the adjective mean can mean hateful, but it can also mean excellent as in they make a mean apple pie. Smart as an adjective could be both painful and also bright and brilliant, and so forth like that. And similarly for serious and fantastic and sneer, depending on the context and the intention of the speaker, they can kind of cut in different directions. So even part-of-speech tagging is going to be limiting when it comes to really recovering the underlying word sense, even for something as low dimensional as a sentiment distinction. Finally, this is another powerful technique that you might use and think about as you select and evaluate different models. This is what I've called simple negation mark. And the phenomenon is just that if I have a verb like enjoy, which sounds positive in isolation, of course its contribution to the overall sentiment will change depending on whether it's in the scope of a negation. "I didn't enjoy" is negative. A negation can be expressed in many ways as this modifier of auxiliaries, like not. But as an adverb like never, it could be in the subject like "No one." And it could even be really encoded for things like "I have yet to enjoy it," which is a kind of negation. And then of course the negation in 5 here is very far. "I don't think I will enjoy it," is probably negative, but the negation is way far away from the verb that we want to-- with sentiment we want to modulate with a negation. So here's a very simple method that I think was first explored by Das and Chen. It's also used in Pang et al. 
These are classic early sentiment analysis papers. And the idea is simply to append a _NEG suffix to every word in the sequence that appears between the negation and some clause level mark of punctuation, to sort of roughly indicate the semantic scope of the negation. This is a simple pre-processing step, highly heuristic. It would take a sentence like "No one enjoys it," and literally turn the unigrams one, enjoys, and it into variant forms of them where one has a NEG appended to it, and so does enjoys, and so does it. And the idea is that in doing this, we're giving our model the opportunity to discover that enjoys in this context is actually a different token in some sense than enjoys when it's not in the scope of negation. And for many of the linear models with handout features that we explore, simply making that initial distinction might create some space for your model to learn the interaction of negation with these other features. And just to quantify it a little bit by way of rounding this out, I think this slide shows the impact that this can have despite its simplicity. So similar, we have OpenTable as our test set. We're using a Softmax classifier. And the x-axis is again varying the amount of training data that we have. The whitespace tokenizer is in gray. It's the worst, followed by Treebank in green. Then we have that sentiment-aware tokenizer in orange. And then way above them, consistently for all parts of the data here are sentiment-aware plus that negation marking. That is obviously the superior model for all kinds of amounts of training data. And I think what that's showing is that the influence of negation is actually really real and severe in a lot of sentiment datasets. It's just very common to combine sentiment words, positive or negative, with negation and it has this predictable effect of kind of flipping the value. So in doing this sentiment, and doing this negation marking, we're giving our model a better chance at discovering exactly those distinctions. And here's a similar set of results for cross-domain where I'm starting on OpenTable and testing on IMDB. Again the results are a little bit more chaotic, but I think it's a clear win for the sentiment-aware plus negation marking model.
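Here is a minimal sketch of that _NEG marking idea, written as a small standalone function rather than the exact method from those papers; NLTK also ships a related utility, nltk.sentiment.util.mark_negation. The negation and punctuation lists are deliberately small and illustrative.

```python
# Minimal sketch of _NEG marking in the style described above.
import re

NEGATIONS = re.compile(r"^(?:not|no|never|n't|none|nobody|nothing|cannot)$", re.IGNORECASE)
CLAUSE_PUNCT = re.compile(r"^[.:;!?,]$")

def mark_negation(tokens):
    """Append _NEG to tokens between a negation and the next clause-level punctuation."""
    marked, in_scope = [], False
    for tok in tokens:
        if CLAUSE_PUNCT.match(tok):
            in_scope = False
            marked.append(tok)
        elif NEGATIONS.match(tok):
            in_scope = True
            marked.append(tok)
        else:
            marked.append(tok + "_NEG" if in_scope else tok)
    return marked

print(mark_negation("no one enjoys it , but I liked the fries".split()))
# ['no', 'one_NEG', 'enjoys_NEG', 'it_NEG', ',', 'but', 'I', 'liked', 'the', 'fries']
```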
Writing_NLP_papers_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to part 2 in our series on presenting your research. We're going to be talking about writing papers in our fields. To start, let's look at the outline of a typical NLP paper. By and large, these are either four or eight page papers in a two-column format that you get from the style sheets. Those links don't include the references. And there are a bunch of conventions for how the papers are typically organized. So you have your title in abstract on page one. Usually, you have an intro section that kind of fits on that first page, maybe a little bit onto the second. In place two, you discuss the related work or prior literature or background that's needed to kind of contextualize the work that you're doing. Then there's typically a data section, followed by a section on a model or this could be thought of as the kind of core proposal section of the paper. Then there'll be some methods related to the experiments. The reporting of the results of the experiments. And then some analysis of what the experimental results mean. And then possibly a short conclusion. It's not set in stone that you have to follow these conventions, but if you do follow them, I think it will be easier on your readers and also easier on you as a writer because you can kind of slot your ideas into this familiar format. Let's look at those sections in a little bit of detail. So starting with the intro, the ideal intro, to my mind, really tells the full story of your paper at a high level. We don't need all the details, but it is very helpful to know from beginning to end what the paper accomplishes. And good intros provide all of that information and really tell the reader precisely what they'll learn as they go through the rest of the paper. In place two as I said, is the discussion of background material or related work or prior literature. And this is an opportunity for you to contextualize your work, and provide insights into the major themes of the literature as a whole. What you should really be thinking about doing is using each paper or each theme that you identify as a chance to kind of contextualize your ideas, and especially articulate what's special about the contribution that you're making. So this kind of sets the stage for the reader. The data section, this could vary a lot. This could be very detailed if you're offering a new dataset, or using a dataset in some unfamiliar way the community is not used to. But of course, if you're just adopting some data off the shelf, then this section might be pretty short. Then you get to the heart of your proposal, your model. You want to flesh out your own approach and really help us understand your core contribution. Then we turn to supporting your ideas with some experimental evidence. You report the methods, your experimental approach, including descriptions of the metrics. And again, that will be long or short depending on whether the metrics are familiar or unfamiliar. You want to describe your baseline models and anything else that's relevant to kind of understanding precisely what's going to happen in your experiments. I will say that for details about hyperparameters and optimization choices and so forth, you can probably move those to an appendix, unless they're really central to the argument. What you want to offer here are kind of the crucial pieces that will help the reader understand precisely what you did for your experiments. Then we get our results. This could be a no-nonsense report of what happened. 
It's probably mainly going to be supported by figures and tables that report a summary of your core findings according to your data models and metrics. And then things open up a bit. You have an analysis section. I think this is really important. You should articulate for the reader what your results mean, what they don't mean, where they can be improved, where their limits are, and so forth, right? These sections vary a lot depending on the nature of the paper and the findings. But I think they're always important and they can be very rewarding. It is intimidating because this is awfully open ended. But I'm hoping that the previous unit on analysis methods in our field offer some really good general purpose tools and techniques for doing rich analyses of this sort and really helping us understand precisely what you accomplished. Now this is, as I said, is not set in stone, and different projects will call for different variants on it. And one really prominent variant that you see is that, if you have multiple experiments with multiple datasets you might want to repeat that method's results analysis rhythm across all of your experiments to give them kind of separate sections in your paper. But again, it really depends on what you think the most natural way to express your ideas is. These things aren't set in stone, they're just conventions that help us as readers and as authors. Then finally, you have a conclusion. This is probably a quick summary of what the paper did, and then an opportunity for you to chart out future directions that you or others might pursue. So it's a chance to be more outward looking and expansive. Let me close the screen cast with some general advice on scientific writing that I think can be helpful kind of in the background as you think about expressing your ideas. First, I just want to review this really nice piece from Stuart Shieber, where he advocates for what he calls the "rational reconstruction" approach to scientific writing. And to build up to that, he offers two contrasting styles that you might think about. The first is what he calls the continental style. This is in which one states the solution with as little introduction or motivation as possible, sometimes not even saying what the problem was. He says, "Readers will have no clue as to whether you're right or not without incredible efforts in close reading of the paper. But at least, they'll think you're a genius." At the other end of the extreme, you have what he calls the historical style. And this is a whole history of false starts, wrong attempts, near misses, redefinitions of the problem. This is a kind of genuine history of maybe the struggles that you endured as you built up to the final product for your paper. And Shieber says, "This is much better than the continental style, because a careful reader can probably follow the line of reasoning that the author went through and use this as motivation. But the reader will probably think you're a bit addle-headed." We don't need to hear about every dead end and every false start, what we would like, rather, is what Shieber calls the rational reconstruction. You don't present the actual history that you went through, but rather, an idealized history that perfectly motivates each step in the solution. You might selectively choose models that you abandoned as a way of helping the other reader understand how you built toward your actual core set of methods and findings and results. 
So it's going to be a kind of streamlined version of that historical style. The goal in pursuing the rational reconstruction style is not to convince the reader that you're brilliant, or addle-headed for that matter, but that your solution is trivial. Shieber says it takes a certain strength of character to take that as one's goal. Right, the goal of writing a really excellent paper is that the reader comes away thinking, that was clear and obvious, and even I could have done it. That's an act of genuine communication. And it does take a strength of character. But in the end, this is what we should all be striving for, this kind of really clear and open communication. This is also a nice document from David Goss. He has some hints on mathematical style. There's a bunch of low level details in there, especially related to presenting very formal work. The piece that I wanted to pull out is just "Have mercy on the reader." This is, again, recalling the rational reconstruction approach that Shieber advocated for, where you're really thinking about what it's like to be a reader encountering the ideas for the first time and genuinely trying to understand what you accomplished. You have to really think about what it's like to be in that position in order to have a successful and clear paper. I also really like this piece from the novelist Cormac McCarthy which was published in Nature. It's full of great advice. The one piece that I wanted to highlight is this. Quote-unquote, McCarthy says, "Decide on your paper's theme and two or three points you want every reader to remember. This theme and these points from the central thread that runs through your piece. The words, sentences, paragraphs, and sections are the needlework that holds it together. If something isn't needed to help the reader understand the main theme, omit it." This is helpful to me because I think it not only results in a better paper, but it will also be easier for you to write your paper. Because the themes you choose will determine what to include and exclude, and resolve a lot of low level questions about your narrative. And conversely, I've often found that when I'm really struggling to write a paper, it's because I haven't figured out what these core themes are and I'm kind of casting about, unsure of what's relevant and what's irrelevant. And if you step back and really figure out what you're trying to communicate, then the act of writing kind of all falls into place. And then the final bit of advice that I wanted to offer, which I'm going to return to when we talk about presenting work at conferences. This comes from Patrick Blackburn. It's about talks, but it really extends to any kind of communication and science, is fundamental insight, he asks. "Where do good talks, and I think good papers, where do they come from?" And he says, "Honesty." A good talk or a good paper should never stray far from simple, honest communication. And you can hear this in the way that we talk about evaluating your work. That fundamentally for us, we're looking for papers that offer open, clear, honest communication about what happened, and what it means. And that's really a fundamental value. And I think it's inspiring to think about this as your kind of guiding light when you report your scientific results to the community.
Overview_of_Methods_and_Metrics_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRIS POTTS: Welcome, everyone. This is the first screencast in our series on methods and metrics. Fundamentally, what we're trying to do with this unit is give you help with your projects, and specifically give you help with the experimental aspects of your projects. And so the kind of highlight topics for us will be around things like managing your dataset for the purposes of conducting experiments; establishing baseline systems; and in turn, doing model comparisons between baselines and an original system, or between an original system and published results in the literature, and so forth. And relatedly, we're going to give you some advice on how to optimize your models effectively. Those are kind of the highlight topics there. And I would say that all of this is kind of oriented toward the more abstract topic of helping you navigate tricky situations that arise as you conduct experiments in NLP. And as you'll see very often, these tricky situations arise because we encounter limitations in the data that's available to us, or we're just fundamentally constrained in terms of computing resources. And that leads us to have to make some compromises in the ideal experimental protocol that we would use. These things are inevitable, and the idea here is that we're going to equip you with some tools and techniques for thinking about the trade-offs in making your way through all of these tricky situations. There are a bunch of associated materials for these screencasts. We have a whole notebook that's on metrics. I'm going to offer some screencasts that just highlight a few of the metrics that are discussed in that notebook. But it's meant as a resource, the notebook itself, so that you could pursue other avenues. And overall, what I'm trying to do is give you a framework for thinking about what metrics encode in terms of their values, what bounds they have, and where they can be applied and misapplied. Scikit-learn implements essentially all of the metrics that we'll be discussing. And to their credit, they've done a wonderful job of offering rich documentation that will again, help you not only understand what the metrics do, but also where and how they can be effectively applied. And then there is an entire notebook that's on methods, especially experimental methods, and that covers a lot of the tricky situations that I just described in terms of setting up experiments and thinking about trade-offs, and then following through on model evaluation and so forth. And that notebook is nice as a supplement to these screencasts, because it embeds a bunch of code that can help you run hands-on experiments to get a feel for the core concepts. And we have two readings. Resnik and Lin 2010 is a wonderful overview of experimental evaluations in the context of NLP. And Smith 2011, Appendix B is a compendium of different metrics, so another good resource for you if you're unsure about how a metric works, or what its bounds are, or how it's calculated, and things like that. The final thing I want to say for this overview relates specifically to the projects that you'll be pursuing. And the bottom line for us is that we will never evaluate a project based on how good the results are. Now, we acknowledge that in the field and throughout science, publication venues do this because they have additional constraints on space nominally, and that leads them-- as a cultural fact about the way science works-- to favor positive evidence for new developments over negative results. 
I frankly think this is unfortunate and exerts a kind of distorting influence on the set of publications that we all get to study, but nonetheless, that's the way the world works at present. In the context of this course, we are not subject to that constraint. So we can do the right and good thing scientifically, evaluating positive results, negative results, and everything in between. So I repeat our core value here. We will never evaluate a project based on how good the results are. Instead, we're going to evaluate your project on the appropriateness of the metrics that you choose; the strength of your methods; and really fundamentally here, the extent to which your paper is open and clear-sighted about the limits of its findings. So you'll notice that, given this framework here, you could report state-of-the-art results, world record-breaking results on a task, but nonetheless, not succeed with a project if it fails on all of these things that we've listed under our true values. And conversely, you might have a hypothesis that turns out to be a miserable failure in terms of the performance metrics that you're able to report. But that could lead to an outstanding grade in the context of this course, provided that you do all of these things. And that would be under the heading of a negative result but nonetheless, teaches us something really fundamental and important about NLP, and therefore pushes the field forward.
NLP_Conference_Submissions_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRIS POTTS: Welcome, everyone. This is part three in our series on presenting your research. We're going to be talking about the sometimes thrilling and sometimes agonizing process of submitting your work for publication at an NLP conference. To start, I want to review what's known as the anonymity period for ACL conferences. All of the ACL conferences that have adopted a uniform policy that submitted papers cannot be uploaded to repositories like arXiv or made public in other ways starting one month from the submission deadline and extending through the time when decisions go out. So you should be aware of this, and for specific conferences, check their sites for the precise date when the embargo goes into effect so that you're sure you're playing by the rules. The rationale of the policy is an attempt to balance the benefits of free and fast distribution of new ideas against the benefits of double blind peer review. The idea is that we want to avoid corrupting reviewers just at the moment that they sit down to begin their work of reviewing by having them see on arXiv or via Twitter an announcement about the paper they're reviewing that reveals the authors, and the origins, and so forth, because we know that that public announcement would influence their decision-making process. So to preserve that balance against free dissemination of ideas, we have this embargo period around the period where we know reviewing will happen. For more on the policy and its rationale, you can follow this link here. But I would say that, fundamentally, it's an attempt to balance these two pressures, the kind of pragmatic approach to balancing them. OK, now let's dive into the actual process of submitting work for publication and having it go through review at one of these conferences. So to start, let's suppose you've submitted your paper. When you do that, you'll select some area keywords that will help determine which committee gets your paper. This is already a really important step. You'll be choosing from a bunch of keywords that signal different areas of NLP. And what you're doing at that point is probably routing your paper to certain sets of reviewers and certain area chairs and so forth. And in doing that, you're creating expectations about the kind of contribution that you're making. So if you're unsure about this process, I would encourage you to recruit an expert in the NLP reviewing process to help you make these keyword selections. Here's an example of how this could be important. Suppose that your paper is fundamentally a new machine learning contribution but it reports some experiments that involve kind of topics in computational social science, it might be really a mistake to choose the computational social science keyword at this point, because if your paper ends up with reviewers who have expectations that you'll be making some new and fundamental contribution to that area and what they're looking at is a machine learning contribution, that mismatch in their expectations might lead them to have a negative perception of the work. And the reverse of course, holds as well. If you have a fundamentally new computational social sciences contribution but it incidentally makes use of some machine learning apparatus, I think it would be a mistake to choose machine learning as a keyword, precisely because of that mismatch in expectations that would result in reviewers' minds when they sat down and started reviewing your paper. So think strategically about this stage. 
You can see why it's important at this next stage here. Reviewers when they begin their work, they're going to first scan a long list of titles and abstracts, and they'll make bids on which ones they want to do. At this point they signal yes or maybe or no, or maybe they indicate a conflict of interest, indicating that they can't review the paper at all. When they do this, the title is probably the primary factor in bidding decisions. They probably have access to the abstract at this stage, but they might be looking at a list of over 200 different contributions, and it's probably just too much to ask, that they would read all of these abstracts at this stage. So they're probably scanning the title and using that as an indication about what kind of bids they want to make. So again, you might think strategically about your title and the role it will play at this early stage in the process. After that, the program chairs assign reviewers their papers partly based on the bidding or maybe partly based on other considerations of workloads and so forth. We don't know precisely how that process will happen, but by some mechanism your paper will be assigned probably three different reviewers. Reviewers read the papers, write comments and supply ratings over the course of a few months. At the very end of that process, authors are typically allowed to respond briefly to the reviews, and then the program chair or the area chair might seek to stimulate some discussion among the reviewers about conflicts between their reviews or maybe places where the author responds, and says the reviewers are incorrect or misguided or something. We hope that that's a lively and rich discussion about the paper that's led by open-minded people who are just trying to arrive at the best possible recommendation for your paper. That's what we hope at this stage in the process. And finally, at the very end, the program committee is going to do some magic to arrive at the final program based on all of this input. Of course, their reviews and ratings will be a major factor, but there might be other considerations that they bring in at this stage in terms of constructing a diverse and interesting program for their conference. At this stage, you might get a meta review that provides some insight into the final decision making process, although those vary in the amount that they actually illuminate the behind-the-scenes process that led to the particular recommendation that was made by the reviewers and area chair. In terms of the work that the actual reviewers are doing, I would say that the current ACL set up is oriented around structured text for the reviews as opposed to providing a lot of metadata via ratings. So first, they'll probably be asked to just indicate what the paper is about, what contributions it makes, and what its main strengths and weaknesses are. This is a kind of check that they actually understand what's in the paper and can articulate what's in the paper, and it gives the first indication of their assessment. Next you have reasons to accept and reasons to reject. Then there could be a section for additional questions and feedback for the authors. Maybe the reviewers can indicate missing references and maybe also a catchall section for typos, grammar, style, and presentation improvements and so forth. But of course, you get two really important ratings-- the overall recommendation and maybe an assessment of the reviewer's confidence in their overall evaluation. 
And then finally, there could be a section for confidential information that the reviewers want to communicate directly to the program committee. That would be hidden from the authors as well as the other reviewers. So stepping back, the most important pieces of this reviewing form are obviously the overall recommendation must be balanced against reviewer confidence and the reasons to accept and reasons to reject. You can count on the two texts that they supply under 2 and 3 here as being really important to shaping the discussion that happens and the overall recommendation that gets made. There's an author response period for at least the major ACL conferences. This is a chance for authors to submit short responses to the reviews. This is a rather uncertain business along many dimensions, so let me just offer some thoughts. First, many people are cynical about author responses since they've observed that reviewers rarely change their scores afterwards, and I think that is an important consideration. However, it might just be bad signaling not to submit a response at all. It could incidentally indicate to the program committee that you've kind of silently opted out of the process, so its mere absence could reduce your chances of getting accepted I believe. More positively, for conferences that have area chairs, and I believe all of the current major ACL conferences do, the author response could be really important. An area chair is someone who's tasked with stimulating discussion and writing meta reviews for a small number of papers-- maybe 5 to 20 depending on the conference volume. And for those people, the author response might have a major impact. I played the role of area chair many times, and the author responses are always valuable to me. It's another text alongside the reviews that the reviewers provided, and it helps me understand places of conflict, places where the authors differ in their perspective from the reviewers and so forth. It is always extremely valuable evidence to me, and I think that holds for many area chairs. And for this reason alone, you might think about submitting a detailed author response with these area chairs in mind as your primary readers. NLP conferences, for better or worse, have very complex rules about what you can and can't say in these author responses. Sometimes you can't report any new results. Sometimes you have to be very circumspect about what kind of results you have and so forth. If you have questions about what you can do in a particular case, seek out an expert at Stanford for advice on how to interpret the precise rules and what kind of leeway you actually have in saying what you think is important to say. I think all of these restrictions are kind of unfortunate. When I play the role of area chair and as reviewer, I would simply like to have access to all the information that I can possibly obtain, and so I would like these author responses to be offering as much information as they feel is important, and then I can use that to balance all the evidence to make a final recommendation. So it's unfortunate from my perspective that these have restrictions at all, but they're often there, and it's important to figure out how to navigate them. When you construct an author response, always be polite. You can be firm and direct, but you'll want to do that strategically to signal what you feel most strongly about, but fundamentally, you should never say things like this-- "Your inattentiveness is embarrassing. 
Section 6 does what you say we didn't do." You might privately say that to your co-authors as a kind of cathartic act of venting about how bad your reviews were, but you should never put it in an author response. Rather, you should do things that are more like, "Thank you. The information you're requesting is in section 6. We will make this more prominent in our revision." Fundamentally here, I think it's just important to be polite in these professional contexts, and it's a way to remind yourself that these reviewers did make an investment of their own time and intellectual energy into your work, and we want to be respectful and aware of that investment that they made. Presentation types and venues. There are lots of them. So first, there's the fundamental distinction. You might have either oral presentations or poster presentations and crosscutting that, you might think about submitting to a workshop or to a main conference. And here's a whole bunch of venues, and what I've done here is kind of organized this in a soft way. So on the left here, I have NLP conferences and workshops. I've put what we consider the most prestigious three at the top here-- ACL, NAACL, and EMNLP, and then some other large ones just below that. Of course, the prestige order could really change here, so who knows what the next year's will bring. For example, the Asia ACL is brand new, and of course, the number of people doing outstanding work in our area throughout Asia is enormous, and so it's very easy for me to imagine that AACL becomes at least as prominent, maybe even more prominent, than some of these other ones up here over the next few years. Then we have some smaller and older conferences down here, and then of course, at the bottom, I've put workshops. All of these major conferences have workshop series attached to them, and workshops can be a great initial outlet for work you do, especially for a course like this. So I would encourage you for the major conferences to scan their program of workshops, and if you find one that's topically aligned with what you're working on, consider submitting to the workshop. It will probably be less competitive, so you have better chances of getting in, but I would say the more important thing is that it's a chance for you to actually connect with a community of people who are working on precisely the topic that you're working on, and that can be intellectually really exciting. In the middle here, I have some conferences that kind of run the spectrum from linguistics through the world wide web, through knowledge graphs and kind of more core topics in the artificial intelligence generally, and those can be really good outlets for work that has an NLP aspect to it. And then over here on the right, these are very prestigious machine learning conferences, and it's kind of the same story. All of these conferences here are welcoming a work that involves natural language processing, but you might just have to think about how you're going to precisely connect with these specific audiences and their specific concerns. Here's my personal assessment of NLP reviewing at present. First, I think the focus on conference papers as opposed to journals has been really good for NLP. It fits with and encourages the very rapid pace of our field, and I think we all benefit from that rapid pace overall. Before about 2010, the reviewing in the field was admirably good and rigorous in comparison with other fields. 
It really was impressive how many deep and insightful reviews you would get when you submitted to one of the main conferences in the field. Lately though, the explosive growth of the field has, I think by consensus, reduced the general quality of reviewing, and the field is still grappling with this. Of course, there was always a kind of lottery aspect to whether your paper would be accepted, and you should keep in mind that luck is a real element in publication and throughout science. But that lottery aspect has gotten more amplified as the field has grown, and that's affecting the main conferences in ways that we all have to figure out. But again, this is kind of useful. Keep in mind that a rejection does not necessarily mean that your work was of low quality. It could mean that you just had really bad luck throughout a kind of chaotic reviewing process. I also want to say that I think it's unhealthy to force every paper to be four or eight pages. The reason that happens is that there are two kinds of submissions typically-- a short paper which has max length four, and a long paper which has max length eight, and we all, as submitters to one or another of those tracks, feel a kind of signaling pressure to maximize the available space. So there are no six page papers in the field, and there is really no room to have papers that are longer than eight pages. And that's unfortunate, because sometimes you just need a different length than one of these to express your ideas in the best possible way. So that's been unhealthy. I will say, though, that this is alleviated somewhat by the increased use of appendices and supplementary materials to express a lot of details that don't need to be part of the main narrative, and that is in effect here a back door allowing papers to be of more variable length. The biggest failing to my mind of the conference reviewing process is that there's no revise and resubmit. That's the standard we get from journals. There's no chance for authors to appeal to an editor and interact with an editor in a way that you get from really top quality journals. And that just introduces inefficiency into the system, and it's a missed opportunity for intellectual engagement that could really benefit the world. I do want to say, though, there's hope. The Transactions of the ACL or TACL is a journal that follows the standard ACL conference model fairly closely but allows for journal-style interaction with an editor. I'm conflicted here, because I've been a TACL action editor for a very long time, but I do that work because I think TACL is wonderful. It's allowing for the best aspects of our field in terms of fast pace, while also introducing some healthy aspects of the journal reviewing process. TACL papers are a little bit longer at ten pages, so there's still some of this problematic influence under four here, but at least they are a bit longer. And overall, I think it's just a healthier rhythm for thinking about evaluating work in these scientific contexts, so think about TACL as an outlet for your work as well. Let's focus on two more specific topics here and starting with titles. And as I said before, a title is an important ingredient in the fate of your paper in conference reviewing. Jokey titles can be risky, and I've linked to some evidence for that down here at this paper. More importantly, you might think that it's important-- calibrate the scope of your contribution in your title. 
It's a very common complaint from reviewers that the title is an overreach-- that it claims a broader spectrum of the scientific space than the paper actually delivers on, and you want to avoid that by getting a tight alignment between title and paper. And then also consider the kinds of reviewers you're likely to attract during that bidding process. The choices you make in your title will definitely influence that process, and so you can again think strategically about who you want to pull in as a reviewer because of their interest and expertise and so forth. I would also say that it's worthwhile avoiding special fonts and formatting if possible, just because it makes it harder to consistently reproduce your title. That said, I am kind of charmed by the move to have emoji in titles recently. Jokey is risky, but this can be fun and make your paper a bit memorable, even if it is harder to copy and paste the title in various contexts. Abstracts-- abstracts are also incredibly important, because after the title, they will create this first and lasting impression. So here's a suggestion about how to think about writing abstracts-- the difficult form. First, have your opening be a broad overview-- a glimpse of the central problem in its context. Next, the middle takes concepts mentioned in the opening and elaborates upon them, probably by connecting with specific experiments and results from your paper. And then finally, at the very end, establish links between your proposal and broader theoretical concerns so that the reviewer on finishing this abstract has an answer to the question, "Does the abstract offer a substantial and original proposal?" So in a little more detail, here's a kind of meta abstract. "This opening sentence situates you, dear reader. Our approach seeks to address the following central issue." You spell it out. "The techniques we use are as follows." And you spell it out. "Our experiments are these--" Details. And then finally, "Overall, we find that our approach has the following properties, and the significance of this is--" And if you just fill in all those ellipsis dots, I think you're pretty close to an effective abstract, and that's at least a starting point for you when you think about maybe being more creative and fitting this better to the specifics of your ideas. Style sheets-- this is small but important. This is the way that you could avoid the dreaded desk reject. Pay close attention to the details of the style sheet for your conference and any other requirements included in the call for papers. It is worth your while to read those documents carefully and make sure you've checked all the boxes. In NLP, infractions are the most likely cause of the dreaded desk reject-- that is rejection without review, which is of course so disheartening, because you made a big push to get your paper submitted, and then shortly thereafter you get this desk reject in the mail, and there's no recourse. That's pretty much the end of the process for you, and you can avoid it by just attending to what they've said in their style sheets and other guidelines. And then finally, we have this charming notion of a camera-ready version. This refers to old-fashioned technology for publishing papers where you actually needed to take pictures of documents in order to publish them. Those days are long gone, but the terminology remains. For most NLP conferences, as part of this you get an additional page upon acceptance. This is presumably to respond to requests made by reviewers. 
Although in practice, you're free to use the space however you like. In general, we find that the extra page is mainly used for fixing passages that we made overly terse at reviewing time in order to get into the page limits, and now, we're kind of unpacking them to make them more readable. And this is obviously an inefficient process, but it's the reality at present. You could, of course, use your additional page to improve your results in all sorts of ways. It's entirely up to you. But a word of caution here, if you have very substantially new ideas and results, it might be better for you to save those for a follow up paper as opposed to trying to pack them into work that's already been accepted, right? Instead of just making more work for yourself as part of this first contribution, it might be that you're on the exciting path toward a follow-up contribution that's going to be its entirely own new paper. So think about that and balance these pressures. And then good luck with your camera-ready submissions.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Transformers_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part two in our series on contextual word representations. We're going to be talking about the transformer architecture, which is the central piece for all the models we'll be exploring in this unit. Let's dive into the model structure, we'll work through this using a simple example. At the bottom here I've got the input sequence, "the Rock rules" and I've indicated in red that we're going to be keeping track of the positions of each one of those tokens in the sequence. But the first step is a familiar one. We're going to look at both the words and the positions in separate embedding spaces. Those are fixed embedding spaces that we'll learn as part of learning all the parameters in this model. I've given the word embeddings in light gray and the positional embeddings in dark gray. To form what we think of as the actual embedding for this model, we do an elementwise addition of the word embedding with the positional embedding. And that gives us the representations that are in green here, and you can see that on the right side of the slide, I'm going to be keeping track of all of the calculations with regard to this C column here and they're completely parallel for columns A and B. So to form C input, we do elementwise addition of X34, the embedding for the word "rules" and P3 which is the embedding for position 3 in this model. The next layer is really the hallmark of this architecture and what gives the paper its title, Attention is All You Need. We're going to form a bunch of dense dot product connections between all of these representations. So you can think of those as forming these connections that look like there's a dense thicket of them. On the right here, I've given the core calculation and it should be familiar from part one in this unit. It's exactly the calculation I've presented there with just two small changes, but fundamentally if our target vector is the C input here, we're attending to inputs A and B, and we do that by forming the dot products here. And the one twist from before is that instead of just taking those dot products, we'll normalize them by the square root of the dimensionality of the model dk. dk is an important value here because of the way we combine representations in the transformer. All of the outputs and all the layers we look at have to have the same dimensionality as given by dk. And so what we're doing here is essentially scaling these dot products to kind of keep them within a sensible range. That gives us a score vector off the tilde, we softmax normalize that. And then the other twist is that instead of using mean here as we did before, we use summation. But the actual vector is the one we calculated before, we're going to take weighted versions of A input and B input according to this vector of weights that we created here. That gives us the representation C attention as given in orange here, and we do that of course for all the other positions in the model. The next step is kind of interesting, we're creating what's called a residual connection. So to get CA layer here in yellow, we add up C input and this attention representation that we just created and apply dropout as a regularization step there. And that gives us CA layer, the interesting thing there of course is this residual connection. Instead of simply feeding forward C attention, we feed forward actually a version of it that's combined with our initial positionally encoded embedding. 
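To keep those column-C calculations in one place, here they are in symbols. This is a reconstruction of the slide notation rather than a verbatim copy, so treat the exact subscripts as assumptions; a_input and b_input are the green column-A and column-B representations, x_34 is the word embedding for "rules", p_3 is the embedding for position 3, and d_k is the model dimensionality.

```latex
c_{\mathrm{input}} = x_{34} + p_{3}, \qquad
\tilde{\alpha} = \Big[ \tfrac{c_{\mathrm{input}} \cdot a_{\mathrm{input}}}{\sqrt{d_k}},\;
                       \tfrac{c_{\mathrm{input}} \cdot b_{\mathrm{input}}}{\sqrt{d_k}} \Big], \qquad
\alpha = \operatorname{softmax}(\tilde{\alpha})

c_{\mathrm{attention}} = \alpha_{1}\, a_{\mathrm{input}} + \alpha_{2}\, b_{\mathrm{input}}, \qquad
c_{\mathrm{a\text{-}layer}} = c_{\mathrm{input}} + \operatorname{Dropout}(c_{\mathrm{attention}})
```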
If we follow that with a step of layer normalization, which should help with optimization, it's going to kind of scale the weights in these representations. The next step is more meaningful. This is a series of two dense layers, so we'll take Ca norm here and feed it through this dense layer with a non-linearity followed by another linear layer to give us Cfx that's given in dark blue here. And that's followed by another one of these interesting residual connections, so we'll apply dropout to Cff. And then add that in with Ca norm, as given down here at the bottom, and that gives us the second yellow representation. And we follow that by one more step of layer normalization and that gives us the output for this block of transformer representations. And you can imagine of course as you'll see that we can stack up these transformer blocks, and the way that we do that is essentially by taking these dark green representations at the top here and using them as inputs and all the calculations are the same. So you might imagine that we could continue here by just doing a dense series of attention connections across these and then continuing on with the calculations I just presented. And in that way, we could stack up transformer blocks. And I'll return to that later on. There are a few other things that are worth pointing out that are kind of noteworthy about this model. It looks like a complicated series of calculations, but I would say that fundamentally what's happening here is we're doing positional encoding to get embeddings, so that they are position-sensitive representations of words. We follow that within an attention layer which creates that dense thicket of connections between all of the words as positionally encoded. Then we have these optimization things woven in, but fundamentally we're following that attention step with two series of feed-forward layer steps here, followed by the same process of dropout and layer normalization. So if you kind of elided the yellow and the purple, you would see that what we're really doing is attention followed by feed-forward. And then as we stack these things that would be attention feed-forward, attention feed-forward as we climbed up. And interwoven into there are some things that I would say help with optimization. Another noteworthy thing about this model is that the only sense in which we are keeping track of the linear order of the sequence is in those positional embeddings, if not for them the column order would be completely irrelevant. Because of course, we've created all of these symmetric connections at the attention layer. And there are no other connections across these columns. So the only sense in which column order, that is word order matters here, is via those positional embeddings. Here's a more detailed look at the attention calculations themselves. I just want to bring up how this actually works at a mechanical level. So this is the calculation as I presented it on the previous slide and in part one of this unit. In the paper, and now commonly it's presented in this matrix format. And if you're like me, it's not obvious right away that these are equivalent calculations. So what I've done for these next two slides is to show you via worked out examples how those calculations work and how they arrive at exactly the same values. 
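If you would like to verify that equivalence computationally rather than on paper, here is a minimal NumPy sketch. It uses made-up values, a single head, and no learned projections, and it lets every position attend to every position including itself, which is the standard simplification for matching the matrix form. It is meant only to show that the per-position loop and the matrix calculation softmax(XX^T / sqrt(d_k)) X produce identical outputs.

```python
# Minimal check that loop-style and matrix-style scaled dot-product attention agree.
import numpy as np

rng = np.random.default_rng(0)
n, d_k = 3, 4                      # three positions ("the", "Rock", "rules"), model dim 4
X = rng.normal(size=(n, d_k))      # positionally encoded input embeddings (rows = positions)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Loop formulation: for each target position, score every position, softmax, weighted sum.
loop_out = np.zeros_like(X)
for i in range(n):
    scores = np.array([X[i] @ X[j] / np.sqrt(d_k) for j in range(n)])
    alpha = softmax(scores)
    loop_out[i] = sum(alpha[j] * X[j] for j in range(n))

# Matrix formulation: softmax(X X^T / sqrt(d_k)) X, all positions at once.
matrix_out = softmax(X @ X.T / np.sqrt(d_k)) @ X

print(np.allclose(loop_out, matrix_out))  # True
```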
I'm not gonna spend too much time on this here, this is really just here for you if you would like to work through the calculations in detail, which I strongly encourage because this is really the fundamental step in this model. And here's all the details that you would need to get hands-on with these ideas. Now so far, I've presented attention in a kind of simplified way. A hallmark of attention in the transformer is that it is typically multi-headed attention. So let me unpack that idea a little bit concretely. We'll start with our input sequence from before and we'll be looking at these green representations here. And the idea behind the multi-headed attention mechanism is that we're gonna inject a bunch of learned parameters into this process to encourage diversity as part of the learning process and arrive at really diverse and interesting representations. So here's how that works. We're gonna form three representations here using that same dot product mechanism as before. And fundamentally it's the same calculation except now we're gonna have a bunch of learned weight parameters that are given in orange here. And those will help us with two things. First, injecting diversity into this process, and also smushing the dimensionality of the representations down to one third of the size that we're targeting as dk for our model dimensionality. And you'll see why that happens in a second. But fundamentally what we're doing is exactly the calculation we did before but now with these learned parameters injected into it. So if you squint, you can see that this is really the dot product of c input with a input as before. But now, it's transformed by these learned parameters that are given in orange. And that repeats for these other calculations. So we're going to do that for position a. And we do it also for position b. And then for the second head, it's the same calculation, but now with new learned parameters. And then for the third head, exactly the same calculation but new learned parameters up at the top here. So this is three-headed attention. And the way we actually form the representations that proceed with the rest of the calculation of the transformer architecture as presented before is by concatenating the three representations we created for each one of these units. So the A column is the first representation in each one of these heads. The B column is the second representation in each head, and similarly for the C column it's the third representation in each one of these heads. And that's why each one of these needs to have one-third the dimensionality of our full model, so that we can concatenate them and then feed those into the subsequent calculations. The idea here, of course, is that by injecting all of these learned parameters into all of these different heads, we're providing the model a chance to learn lots of diverse ways of relating the words in the sequence. The final point is one I've already mentioned before, which is that typically we don't have just one transformer block but rather a whole stack of them, we can repeat them N times. For models you're working with you might have 12 or 24 or even more blocks in the transformer architecture. And the way we do that, as I said, is simply by taking the dark green representations of the output layer here and using them as inputs to subsequent blocks so they get attended to, and we proceed with the subsequent regularization and feed-forward steps just as before. 
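Here is a minimal NumPy sketch of the three-headed attention just described. The learned projection matrices are randomly initialized stand-ins, each head works in a space of one third the model dimensionality, and the head outputs are concatenated back up to d_k. Note that the Vaswani et al. architecture also applies a final linear output projection after concatenation, which this sketch omits.

```python
# Three-headed scaled dot-product attention with per-head learned projections.
import numpy as np

rng = np.random.default_rng(0)
n, d_k, n_heads = 3, 12, 3
d_head = d_k // n_heads            # each head works in a d_k/3-dimensional space
X = rng.normal(size=(n, d_k))      # positionally encoded inputs (rows = positions)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

head_outputs = []
for h in range(n_heads):
    # Learned parameters (randomly initialized here) that inject diversity
    # and project the representations down to d_head dimensions.
    W_q = rng.normal(size=(d_k, d_head))
    W_k = rng.normal(size=(d_k, d_head))
    W_v = rng.normal(size=(d_k, d_head))
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    head_outputs.append(softmax(Q @ K.T / np.sqrt(d_head)) @ V)

# Concatenating the three d_head-dimensional outputs restores dimensionality d_k.
multi_head = np.concatenate(head_outputs, axis=-1)
print(multi_head.shape)            # (3, 12)
```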
And when you work with these models in Hugging Face, if you ask for all of the hidden states, what you're getting is a grid of representations corresponding to these output blocks in green here. And of course, just as a reminder, I'm not indicating it here but there is actually multi-headed attention in each one of these blocks, through each one of the layers. So there are a lot of learned parameters in this model, especially if you have 12 or 24 attention heads. At this point, I'm hoping that you can now fruitfully return to the original Vaswani et al. paper and look at their model diagram and get more out of it. For me, it's kind of hyper-compressed, but now that we've done a deep dive into all the pieces, I think this serves as a kind of useful shorthand for how all the pieces fit together. So let's just do that quickly. As before, we have positional encodings and input word embeddings, and those get added up to give us the intuitive notion of an embedding in this model. That's followed by the attention layer as we discussed, and it has a residual connection here into that layer normalization part. That's fed into the feed-forward blocks, and that's followed by that same process of dropout, addition, and layer normalization. And this is essentially saying each one of these is repeated for every step in the encoder process, every one of the columns that we looked at. And because in the paper they're working with an encoder/decoder model, each decoder state self-attends with all of the fellow decoder states and with all of the encoder states, right? Imagine this double sequence: we have a dense thicket of potential connections across both parts of the representation. On the right side, when we're doing decoding, again, we repeat this block for every decoder state. And if every state there has an output, as for machine translation or some kind of generation process, then we'll have something like this output stack at every one of those output states. If by contrast we're doing something like an NLI or sentiment classification problem, maybe just one of those states will have one of these outputs on it. And then the attention gets a little bit complicated if you are doing decoding for a model like natural language generation or machine translation: in the decoder, you can't attend into the future as you're doing generation. So there's a masking process that limits self-attention to the preceding words in the sequence that you're creating.
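Here is a small sketch of that masking step, under the usual implementation assumption that future positions are given a score of negative infinity before the softmax so that they receive zero attention weight. The shapes and values are illustrative only.

```python
# Causal (look-ahead) masking for decoder self-attention.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n, d_k = 4, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d_k))                 # decoder states generated so far

scores = X @ X.T / np.sqrt(d_k)               # (n, n) self-attention scores
causal_mask = np.triu(np.ones((n, n)), k=1)   # 1s strictly above the diagonal = future positions
scores = np.where(causal_mask == 1, -np.inf, scores)

weights = softmax(scores)
print(np.round(weights, 2))  # upper triangle is all 0: no attention to future tokens
```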
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Simple_Baselines_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
BILL MACCARTNEY: It's good methodological practice whenever you're starting to build new models to start by evaluating very simple models, which establish baselines to which you can then compare the more sophisticated models that you're going to build later on. So to do that, we're going to start by looking at three simple models. A random guesser, a very simple phrase matching strategy, and then, our first machine learning based approach, which will be a simple bag-of-words classifier. Just about the simplest possible model is one that doesn't even look at the input, but just flips a coin. And I strongly encourage you, whenever you're embarking on a model building adventure in your final project, wherever, you start by evaluating a random guesser. It's a snap to implement. It can help to work out the kinks in your test harness. And it's often very informative to put a floor under what good scores look like. Now, we've written an evaluation method for you. It's in the RelExt module and it's just called evaluate. You invoke it with your splits, your classifier, and the name of the split that you want to evaluate on, which defaults to dev. When we evaluate our random guesser, we have some interesting results, so we have results separated for each of the relations. And for each one, we have precision recall, F-score. Remember, that's F0.5, which gives more weight to precision than to recall. We have the support, which is the number of instances whose actual label is true. And we have size, which is just the total number of instances. We find that recall is generally right around 0.5. And this makes sense, because recall says of the instances which are actually true, what proportion do we predict true? Well, we predict true about half the time because we're just flipping a coin. Precision on the other hand, is generally quite poor, because precision says of the instances where we predict true which are basically a random sample, because we're just flipping a coin. How many are actually true? Well, relatively few. And actually, you can tell that by looking at the ratio between support and size. The ratio between support and size is how many of the instances are actually true. So when we're tossing a coin, the precision should be right around the ratio between support and size. Our F-score is also generally poor, it stays close to precision, for two reasons. Number one, because the harmonics-- the harmonic mean stays closer to the lower number, and number 2, because we're using F0.5, which gives more weight to precision than to recall. And the bottom line our macro-average F-score is 9.7%, so that's the number to beat. It's a pretty low bar, but this is a random guesser, after all. OK. So let's look at another approach, which is very simple, but smarter than random guessing. And it's a simple pattern matching strategy. And the idea is for each relation, let's go through the corpus and find the most common phrases that connect to entities that stand in that relation. The most common middles, in our terminology. So here's some code that does that. I won't go through it in detail, but one thing to note is that it counts separately the middles that connect subject with object. So here, it gets all the examples and counts the middles. It tallies up the middles. And it does that separately from the examples that connect object with subject. And it stores them in separate dictionaries under the keys forward and reverse. 
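As a rough sketch of what that counting code is doing, here is one way to write it. The attribute and method names (all_relations, get_triples_for_relation, get_examples_for_entities, kbt.sbj, kbt.obj, ex.middle) follow my recollection of the course's rel_ext module and should be treated as assumptions rather than the exact notebook code.

```python
from collections import Counter, defaultdict

def count_middles(kb, corpus):
    """For each relation, count the phrases ("middles") connecting entity pairs,
    keeping forward (sbj ... obj) and reverse (obj ... sbj) middles separate."""
    middles = {'forward': defaultdict(Counter), 'reverse': defaultdict(Counter)}
    for rel in kb.all_relations:
        for kbt in kb.get_triples_for_relation(rel):
            # Examples where the subject appears before the object:
            for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
                middles['forward'][rel][ex.middle] += 1
            # Examples where the object appears before the subject:
            for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
                middles['reverse'][rel][ex.middle] += 1
    return middles
```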
So we're going to have forward middles and reverse middles stored separately, stored and counted separately. If we run that code, here's what we get. I'm showing results. I'm only going to show results for 3 of the relations here, not all 16. All 16 are in the Python notebook if you want to take a look. But even from this sample, there's a few things that jump out. First, some of the most frequent middles are really natural and intuitive. For example, comma, "starring" indicates a reverse film performance relation. So that would be one where the film comes first and the actor comes second. And I think that makes perfect sense. Star Wars, comma, "starring" Mark Hamill. Similarly, comma, "son of" indicates a forward parents relation. So this would be one where the son comes first. The child comes first and the parent comes second. So those are extremely intuitive and it's reassuring to see them near the top of the list of most common middles. Another observation is that punctuation and stop words, like comma and "and" are extremely common. Unlike some other NLP applications, it's probably a bad idea to throw these away. They carry lots of useful information. On the other hand, punctuation and stop words tend to be highly ambiguous. For example, if you look across the full range of all 16 relations, you'll see that a bare comma is a likely middle for almost every relation in at least one direction. So that comma does very often indicate a relation, but it's a really ambiguous indicator. OK. Now that we've identified the most common middles for each relation, it's straightforward to build a classifier based on that information, a classifier that predicts true for a candidate KB triple, just in case the two entities in the triple appear in the corpus connected by one of the phrases that we just discovered. I don't show the code for that here, but it's in the Python notebook for this unit. And when we evaluate this approach, we see some really interesting results. First, recall is much worse across the board. And that makes sense because we're no longer just guessing randomly. Before, we were saying true half the time. Now, we're going to be a lot more selective about what we say true to. But precision and F-score have improved dramatically for several relations, especially for adjoins and author and has_sibling and has_spouse. Then again, there are many other relations where precision and F-score are still quite poor, including this one genre where we get straight zeros across the board. I'm not quite sure what happened there. But it indicates that although things have improved a lot in some places, they're still rather poor in others. And our macro-average F-score has improved only modestly, so it improved from 9.7% to 11.1%. We're heading in the right direction, but you'd have to say that's still pretty unimpressive. To make significant gains, we're going to need to apply machine learning. So let's get started on that. We're going to build a very simple classifier using an approach that should be familiar from our look at sentiment analysis last week. And we're going to start that by defining a very simple bag-of-words feature function. So here's the code for that, and let me briefly walk you through it. What we're going to do is to get the features for a KB triple. That's the kbt here. We're going to find all of the corpus examples containing the two entities in the KB triple, the subject and the object. 
And note that we do that in both directions, subject and object, and then, also object and subject. For each example, we look at the middle. We break it into words. And then, we count up all the words. So a couple of things to note here. One is that the feature representation for one KB triple can be derived from many corpus examples. And this is the point that I was trying to make last time, that we're using the corpus to generate features for a candidate KB triple. And the role of the corpus is to provide the feature representation. And the feature representation for a KB triple will be based on all of the examples in the corpus that contain those two entities. The other observation to make here is that we make no distinction between what you might call forward examples, which have subject first and then object, and reverse examples, which have object and then subject. We're lumping them all together. The words that come from the middles of examples in either direction all get lumped together into one feature counter. And you might have qualms about whether that's really the smartest thing to do. So let's get a sense of what this looks like in action. First, let's print out the very first KB triple in our KB. We actually looked at this last time. It's a KB triple that says that the contains relation holds between Brickfields and Kuala Lumpur Sentral railway station. And now, let's look up the first example containing these two entities. I'm just going to look them up in the forward direction, subject and object, and get all the examples. I look at the first one, and let me just point out the middle. The middle says, "It was just a quick 10-minute walk to." So I guess the full example probably said something like, "From Brickfields, it was just a quick 10-minute walk to Kuala Lumpur Sentral railway station." And maybe there was more. Now, let's run our featurizer on this KB triple and see what features we get. So we get a counter that contains "It was just a quick 10-minute walk to the--" So it looks like it's counted up the words in that middle, which is just what we expected. But if you look closely, there's something unexpected here because the word "to" has a count of 2, even though it appears only once in that middle. And also, the word "the" has a count of 1, even though it didn't appear in that middle at all. So where did those come from? Well, remember that the featurizer counts words from the middles of all examples containing those entities in either direction. And it turns out that the corpus contains another example containing those two entities and that other example has-- there's just one other example, but that other example has middle "to the." And so, that's where these counts come from. So all is well. We did the right thing. OK. We have our simple bag-of-words featurizer. Now, we need a way to train models, to make predictions, and to evaluate the results. The RelExt module contains functions for each of those. And so, I just want to give you a quick tour of what those functions are. But you'll definitely want to go read the code for this, so that you're more familiar with how it can be used. And a lot of this code appears in a file called RelExt, rel_ext.py. So we'll start with a function called train models. This takes as arguments the dictionary of data splits, a list of featurizers. And here, we have a list consisting of just our simple bag-of-words featurizer. The name of the split on which to train, which defaults to train. 
And a model factory, which is a function that returns a classifier and it's-- sorry, a function which initializes an Sk classifier. And by default, it's a logistic regression classifier, as shown here. But you could easily substitute this with some other Sklearn classifier. It returns this thing called train_result which is a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and most importantly, a dictionary holding the trained models, one per relation. So it's a dictionary which maps from relation names to models. So that's train models. Next comes predict, this is a function that takes his arguments a dictionary of data splits. The output of train models, that train results thing, and the name of the split on which to make predictions. And by default that's dev. And it returns two parallel dictionaries, one holds the predictions grouped by relation and the other holds the true labels grouped by relation. And our third building block is evaluate predictions. So this is a function that takes as arguments the two parallel dictionaries of predictions and true labels produced by predict, and it prints evaluation metrics for each relation, like we saw earlier. Now, before we dwell on these results, I want to show one more function, which is a function called experiment. And experiment simply chains together the three functions that I just showed you. It chains together training, prediction, and evaluation. So that's very convenient for running end to end experiments. I haven't shown all the parameters here, but if you go look at the source code, you'll see that it actually takes a lot of optional parameters. And those parameters let you specify everything about how to run the experiment. It lets you specify your featurizers, your model factory, which splits to train and test on, and more. So for example, earlier, I mentioned that the tiny split is really useful for running fast experiments to work out the kinks. If you wanted to do that, it's very easy using the experiment function just to set the training split and the test split to tiny to run a very quick experiment. Now, here are the results of evaluating our simple bag-of-words logistic regression classifier. And let's take a closer look, because this is quite stunning. Even though this is just about the simplest possible classifier, we've achieved huge gains over the phrase matching approach. The first thing that jumps out is that our macro-averaged F-score has jumped from 11.1 to 56.7. And we see big gains in precision for almost every single relation and correspondingly, big gains in F-score. On the other hand, there's still plenty of room for improvement. I mean this is much, much more impressive than where we were before, but we're very far from perfection. There's abundant headroom and opportunity to continue to improve.
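To tie the pieces together, here is a hedged sketch of the bag-of-words featurizer and an end-to-end run with the experiment function. The featurizer signature (kbt, corpus, feature_counter) and the exact keyword arguments to rel_ext.experiment are my best reconstruction of the course module and may differ in detail, so treat them as assumptions; splits here is the dictionary of data splits discussed earlier.

```python
from sklearn.linear_model import LogisticRegression
import rel_ext  # course module; the exact API below is an assumption

def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
    """Count the words in the middles of all corpus examples that contain the
    KB triple's two entities, in both directions, into one shared counter."""
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split():
            feature_counter[word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split():
            feature_counter[word] += 1
    return feature_counter

# `splits` is the dict of train/dev/test splits built earlier in the notebook.
# Train on 'train', predict and evaluate on 'dev', with a logistic regression
# model factory like the default one described above:
results = rel_ext.experiment(
    splits,
    train_split='train',
    test_split='dev',
    featurizers=[simple_bag_of_words_featurizer],
    model_factory=lambda: LogisticRegression(fit_intercept=True, solver='liblinear'))
```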
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Introduction_and_Welcome_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
Hi, I'm Chris Potts. I'm a professor in linguistics at Stanford with a courtesy appointment in computer science and I'm the director of Stanford Center for the Study of Language and Information, which is an interdisciplinary research center focused on logic, language, decision-making, human sentence processing, and computation. My undergraduate and graduate degrees are all in linguistics. The work I did for my PhD focused on topics in linguistic pragmatics, which is the study of how language use is shaped by the physical and social contexts that we're in. At a certain point I went looking for new ways to support those theories quantitatively, and that began my journey into the world of natural language processing. I now think of myself as being on a mission to help with the sharing of ideas back and forth between linguistics and NLP. I've taught natural-language understanding nine times at Stanford with my first year in 2012. In 2012, we were just beginning to see how NLU was gonna revolutionize the field and reshape the technology landscape. IBM's Watson had recently won Jeopardy, Apple's Siri was new, and the other tech giants were on the verge of launching their own intelligent assistants. So there was a widespread perception that NLU was poised to have a transformative impact on the world and that perception was certainly correct. Since then, NLU has only become more central to the field of NLP and to all of artificial intelligence more generally, and the progress in the field has been amazing. We have more large NLU datasets than ever before and the level of innovation in modeling and model analysis is just astounding. As a result, we can tackle more ambitious problems than ever and there are opportunities to find lots of creative new ways to apply NLU to technology development and scientific inquiry. So it's certainly an exciting moment to welcome you to this course. It's an adapted version of the course we teach on campus. The course begins by covering a wide range of models for distributed word representations. From there, we branch out into a series of important NLU topics, including relation extraction, natural language inference, and grounded language understanding. We've chosen these topics because they allow us to highlight many of the central concepts in NLU which you can then apply more widely. One of the special aspects of this course is that it's project-oriented and with luck, the project that you develop will become a professional asset for you. We aim to help you design and conduct a successful research project in the field and we have an accomplished teaching team to help you with this process. Even the regular assignments are oriented toward building original projects. Each assignment is grounded in a specific topic area and they all have a common rhythm in that they ask you to build up some baseline systems and then develop your own original system for solving the task at hand. You'll enter each one of these systems into what we call a bake-off, which is an informal competition around data and modeling, and the teaching team will reflect the insights we gain from these bake-off entries back to the whole class. It's also common for bake-off entries to grow into final projects. We hope that all this spurs you to think creatively and to hone your practical and theoretical skills in NLU. So without further ado, let's get started.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Natural_Language_Generation_Metrics_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRIS POTTS: Welcome, everyone. This is part three in our series on methods and metrics. We're going to be talking about metrics for assessing natural language generation systems. We previously talked about classifier metrics, and the issues seem relatively straightforward. As you'll see, assessment for NLG systems is considerably more difficult. Let's actually begin with those fundamental challenges. Maybe the most fundamental of all is that in natural language, there is more than one effective way to say most things. The datasets we have might have one or a few good examples of how something should be said. But that's just a sample of the many ways in which we could communicate effectively. And that leaves us with fundamental open questions about what comparisons we should make and how we should assess so-called mistakes. Relatedly, there's just an open question of what we're actually trying to measure. Is it fluency? Or truthfulness? Or communicative effectiveness? Or some blend of the three? As we think about different metrics, we might find that they capture one or a few of these and completely neglect others and that's sure to shape the trajectory of our project and the actual goals that we achieve. So we have to be really thoughtful about what we're actually trying to measure in this space. Let's begin with perplexity. I would say what perplexity has going for it is that it is at least very tightly knit to the structure of many of the models that we work with in NLG. So the core calculation is that given some sequence x of length n, and a probability distribution p, the perplexity of x relative to that distribution is the product of the inverse of all the assigned probabilities. And then we take an average here. There are many ways to express this calculation and many ways to connect with information theoretic measures. Let me defer those issues for just a second, and I'll try to build up an intuition just after getting through the core calculation. So that's perplexity. And then when we do token-level perplexity, right, we want to assign perplexity to individual examples, we need to normalize by the length of those examples. And we do that in log space in order to capture the kind of geometric mean, which is arguably more appropriate for comparing probability values. And then if we want the perplexity for an entire corpus, we again use a geometric mean of all the token-level perplexity predictions. And that gives us a single quantity over an entire batch of examples. What are the properties of perplexity? Well its bounds are 1 to infinity with 1 the best, so we would like to minimize perplexity. It is equivalent to the exponentiation of the cross-entropy loss. That's the tight connection with models that I wanted to call out. We often work with language models that use a cross-entropy loss. And you can see that they are directly optimizing for a quantity that is proportional to perplexity. Now that can be useful as a kind of getting a direct insight into the nature of your model's predictions. What value does it encode? Well I think it's simple. Does the model assign high probability to the input sequences? That is, does it assign low perplexity to the input sequences? The weaknesses, there are many actually. First, it's heavily dependent on the underlying vocabulary. To see that, imagine an edge case where we take every word in the vocabulary and map it to a single UNK token. In that case, we will absolutely minimize perplexity, but our system will be useless. 
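Written out compactly, and using the usual language-modeling convention that each token's probability is conditioned on the preceding tokens (a notational reconstruction, not the slide verbatim), the perplexity calculations just described are:

```latex
\mathrm{PP}(p, x) \;=\; \prod_{t=1}^{n} \frac{1}{p(x_t \mid x_{<t})}
\qquad \text{(sequence-level)}

\mathrm{PP}_{\mathrm{token}}(p, x) \;=\; \mathrm{PP}(p, x)^{1/n}
\;=\; \exp\!\Big(-\tfrac{1}{n}\sum_{t=1}^{n} \log p(x_t \mid x_{<t})\Big)
\qquad \text{(length-normalized)}

\mathrm{PP}(p, X) \;=\; \Big(\prod_{i=1}^{m} \mathrm{PP}_{\mathrm{token}}(p, x^{(i)})\Big)^{1/m}
\qquad \text{(corpus-level geometric mean)}
```

The middle form makes the connection to training explicit: it is just the exponentiation of the average cross-entropy loss.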
In that edge case, you can see that I could reduce perplexity simply by changing the size of my vocabulary. That's a way that you could kind of game this metric inadvertently. As a result of that, we can't really make comparisons across datasets because of course they could have different vocabularies and different intrinsic notions of perplexity. And it's also even tricky to make comparisons across models. You can see that in my first weakness there. If we do compare models, we need to fix the data set and make sure that the differences between the models are not inherently shaping the range of perplexity values that we're likely to see. Let's move on now into a family of what you might think of as n-gram based methods for assessing NLG systems, beginning with the word-error rate. So the fundamental thing here will be an edit distance measure. And therefore you can see word-error rate as a kind of family of measures depending on the choice of the edit distance function, which we would just plug in. The word error rate is the distance between the actual sequence x and some predicted sequence pred normalized by the length of the actual sequence. And if we would like the word-error rate for an entire corpus, it's easy to scale it up, but there's one twist here. The way that's standardly calculated is that the numerator is the sum of all the distances between the actual and predicted sequences. Not normalized as it was up here for the word error rate. The normalization that happens over the entire corpus, it's the sum of all the lengths of the actual strings in the corpus. So we have one average as opposed to taking an average of averages. The properties of the word error rate. Its bounds are 0 to infinity, and we would like to minimize it, so 0 is the best. The value encoded is similar to F scores. We would like to answer the question, how aligned is the predicted sequence with the actual sequence. And I've invoked F scores here because if our edit distance measure has notions of insertion and deletion, they play roles that are analogous to precision and recall. The weaknesses. Well first, we have just one reference text here. I called out before that there are often many good ways to say something, whereas here we can make only a single comparison. And it's also, maybe this is more fundamental, word error rate is a very syntactic notion. Just consider comparing text like, it was good, it was not good, and it was great. They're likely to have the identical word error rates, even though the first two differ dramatically in their meanings and the first and the third are actually rather similar in their meanings. That semantic notion of similarity is unlikely to be reflected in the word-error rate. Let's move now to BLEU scores. This is another n-gram based metric, but it's going to try to address the fact that we want to make comparisons against multiple human-created reference texts. It has a notion of precision in it, but it's called modified n-gram precision. Let me walk you through an example and hopefully that will motivate. Imagine we have the candidate that had just seven instances of the word the" in it. And we have two reference texts, presumably written by humans. The cat is on the mat. And there is a cat on the mat. The modified precision takes for the token "the." The maximum number of times that "the" occurs in any reference text, and that's 2 with reference 1 here. And it divides that by the number of times that "the" appears in the candidate, which is 7. 
That would give us 2 over 7 as the modified 1-gram (unigram) precision score for this candidate. There's also a brevity penalty, which will play the role of something like recall in the BLEU scoring. So we have a quantity r, which is the sum of all the minimal absolute length differences between candidates and references. We have c, which is the total length of all the candidates. And then we say that the brevity penalty is 1 if c is greater than r. Otherwise, it's an exponential decay off of the ratio of r and c. And again, that will play something like the role of recall. And then the BLEU score is simply the product of that brevity penalty with the sum of the weighted modified n-gram precision values for each n-gram value n considered. So we probably go 1 through 4. That's a standard set of n-grams to consider. We would sum up all of those notions of modified n-gram precision for each n, and possibly weight them differently depending on how we want to value 1-grams, 2-grams, 3-grams, and 4-grams. So that's the BLEU scoring. What are its properties? Its bounds are 0 and 1, and 1 is the best. But we have really no expectation that any system will actually achieve 1, because even comparisons among human translations or human-created text will not have a BLEU score of 1. The value encoded is an appropriate balance of modified precision and recall under the guise of that brevity penalty. It's very similar to the word-error rate in that sense, but it seeks to accommodate the fact that there are typically multiple suitable outputs for a given input. And that's a real strength of the BLEU score. The weaknesses. Well, this team has argued that BLEU scores just fail to correlate with human scores for translations. And that's kind of worrying, because BLEU scores were originally motivated in the context of machine translation. And the issues that they identify are that it's very sensitive to n-gram order in a way that human intuitions are not, and it's insensitive to the type of the n-grams. So again, just consider comparisons like that dog, the dog, and that toaster. Those will likely have very similar BLEU scores, but that dog and the dog are just inherently much more similar than that dog and that toaster, in virtue of the fact that that and the, it's just a difference at the level of functional vocabulary, versus dog and toaster, a really contentful change. And then as we move into topics that are more closely aligned with NLU, we could possibly have an even more worrying picture. So this team argues that BLEU is just a fundamentally incorrect measure for assessing dialogue systems. And that could be an indicator that it's not going to be appropriate for many kinds of NLG tasks in NLU. That's just a sample of two n-gram based metrics. I thought I'd mention a few more to give you a framework for making some comparisons. So I mentioned the word-error rate. That's fundamentally edit distance from a single reference text. BLEU, as we've seen, is modified precision plus a brevity penalty, a kind of recall notion, comparing against many reference texts. ROUGE is a recall-focused variant of BLEU that's focused on assessing summarization systems. METEOR is interestingly different, because it's trying to push past simple n-gram matching and capture some semantic notions. It's a unigram-based measure that does an alignment between not only exact matches of the unigrams, but also stemmed versions and synonyms, really trying to bring in some semantic aspects. And CIDEr is similar. 
This is going to be even a more semantic notion, because it's going to do its comparisons in vector space. It's kind of approximately a weighted cosine similarity between TF-IDF vectors created from the corpus. Finally, in closing, I just wanted to exhort you all to think about more communication based metrics in the context of NLU. For NLU, it's worth asking whether you can evaluate your system based on how well it actually communicates in the context of a real world goal as opposed to just comparing different strings that are inputs and reference texts. And we've actually seen an example of that in our assignment and bake off on color reference. We didn't really assess how well your system could reproduce the utterances that were in the corpus. Rather, our fundamental notion was listener accuracy, which was keying into a communication goal. How well is your system actually able to take messages and use them to figure out what the speaker was referring to in a simple color context? And for much more on that, and a perspective on a lot of these issues, I encourage you to check out this paper that was led by Ben Newman. It began as a course project for this class and grew into a really successful paper.
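As a small illustration of the kind of comparison CIDEr was described as being built on a moment ago, here is a TF-IDF cosine sketch using scikit-learn. This is not the official CIDEr implementation; the toy corpus and strings are stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Fit TF-IDF weights on a toy corpus of reference texts, then compare a
# candidate against a reference in that weighted vector space.
corpus = [
    "the cat is on the mat",
    "there is a cat on the mat",
    "a dog sleeps on the rug",
]
vectorizer = TfidfVectorizer().fit(corpus)

candidate = vectorizer.transform(["a cat sits on the mat"])
reference = vectorizer.transform(["the cat is on the mat"])
print(cosine_similarity(candidate, reference)[0][0])
```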
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
ELECTRA_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 5 in our series on contextual word representations. We're going to be talking about the ELECTRA model. ELECTRA stands for efficiently learning an encoder that classifies token replacements accurately, which is a helpfully descriptive breakdown of a colorfully named model. Recall that I finished the BERT screencast by identifying some known limitations of the BERT model. ELECTRA is really keying into 2 and 3 in that list. So the second one identified by the BERT authors is just that of the MLM objective, they say we're creating a mismatch between pre-training and fine-tuning since the MASK token that we use is never seen during fine-tuning. So ideally for the model we fine-tune, we would make no use of a MASK token. Devlin et al also observed of the MLM objective that it has a downside. We make predictions about only 15% of the tokens in each batch. We have an intuition that that's a pretty inefficient use of the data we have available to us. Ideally, we would make more predictions, and ELECTRA seeks to make good on that intuition as well. So let's dive into the core model structure. Here we'll use a simple example. We have an input token sequence x. The chef cooked the meal. And as usual with BERT, we can MASK out some of those tokens, and then have a BERT or BERT-like model try to reconstruct those MASK tokens. However, we're going to do that with a twist. Instead of always trying to learn the actual input token, we're going to sample tokens proportional to the generator probabilities so that sometimes the actual token will be input as with the case with "the" here. And sometimes it will be some other token, as the case with "cook" going to "ate" in this position. Now the job of ELECTRA, the discriminator here, is to figure out which of those tokens were in the original input sequence and which have been replaced. So that's a binary prediction task, and we can make it about all of the tokens in our input sequence if we choose to. Actual loss for ELECTRA is the sum of the generator loss and a weighted version of the ELECTRA, that is the discriminator loss. However, that's kind of masking an important asymmetry in this model here. Once we have trained the generator, we can let it fall away and do all of our fine-tuning on the discriminator. That is on the ELECTRA itself, which means that we'll be fine-tuning a model that never saw any of those MASK tokens. So we address that first limitation of BERT. And we're also going to make a prediction with ELECTRA about every single one of the input tokens, which means that we're making more use of the available data. One thing I really like about the ELECTRA paper is that it offers a really rich set of analyses of the efficiency of the model and of its optimal design. So I'm going to highlight some of those results here starting with this generator/discriminator relationship results. So the authors observe that where the generator and discriminator are the same size, they can share all their transformer parameters. They can kind of be one model in essence. And they find that more sharing is indeed better, which is encouraging. However, they also observed that the best results come from having a generator that is small compared to the discriminator. And this plot kind of summarizes the evidence there. So we have our GLUE score as the goal posts that we're going to use to assess these models. That's along the y-axis. Along the x-axis, we have the generator size. 
And then they've plotted out a few sizes for the discriminator. And I think what you can see quite clearly is that, in general, you get the best results on GLUE where the discriminator is two to three times larger than the generator. And that's true even for this very small model in green down here. The results are overall not very good, but we see that same relationship, where the optimal discriminator is at size 256 and the generator at size 64. That's where we reach our peak results. And it's kind of comparable to this very large model in blue, where our optimal size for the discriminator is 768 compared to 256 for the generator. They also do a bunch of really interesting efficiency analyses. One thing I like about the paper is that it's kind of oriented toward figuring out how we can train these models more efficiently, with fewer compute resources. And this is a kind of summary of the central evidence that they offer that ELECTRA can be an efficient model. So again we're going to use, along the y-axis, the GLUE score as our goal posts. Along the x-axis, here we have pre-training FLOPs. So this would be the number of compute operations that you need to pre-train the model. In blue, along the top here, is ELECTRA. It's the very best model. In orange, just below it, is adversarial ELECTRA, which is an interesting approach to ELECTRA where we essentially train the generator to try to fool the discriminator, as opposed to having the two cooperate as in core ELECTRA, and that turns out to be pretty good. And also these green lines are really interesting. So two-stage ELECTRA is where we start by training just against the BERT objective and, at a certain point, switch over to training the ELECTRA objective. And you can see that even that is better than just continuing on with BERT all the way up to the maximum for our compute budget here. The paper also explores a bunch of variations on the ELECTRA objective itself. So I presented to you full ELECTRA. And it's full ELECTRA in the sense that over here on the right we're making predictions about every single one of the tokens in the input. We could also explore something that was analogous to BERT. ELECTRA 15% would be the case where we make predictions only about tokens that were, way back here in the input x-masked, actually masked out. Another variant that the team considered actually relates to how we train BERT. So recall that for BERT we train both by masking and by replacing some tokens with other randomly chosen tokens. And we could try training the generator just with that approach, which would eliminate the MASK token entirely. So that's this area here, where we have no masking on x-masked but rather just randomly replaced tokens from the actual vocabulary. And then finally, all-tokens MLM would adopt some ideas from ELECTRA into the BERT model. So recall that for the MLM objective, we essentially turned it off for tokens that weren't masked. But there's no principled reason why we're doing that. We could, of course, have the loss apply to every single one of the tokens in the input stream. And that gives us all-tokens MLM on the generator side. And the central finding of the paper, I suppose, is that ELECTRA is the best of all of these models. You also have a really good model if you do all-tokens MLM, which is something that might inform development on the BERT side, in addition to BERT in the context of ELECTRA. Replace MLM is less good. And ELECTRA 15% is kind of down there at the bottom, along with BERT.
I think this is kind of showing us that we should make more predictions. That was a guiding intuition for ELECTRA. And it seems to be borne out by these results. And finally, as is common in the space, the ELECTRA team did some model releases of pre-trained parameters that you can make use of. They did ELECTRA base and ELECTRA large, which are kind of comparable to the corresponding BERT releases. I think an interesting thing they did is also release this ELECTRA small model, which is designed to be quickly trained on a single GPU, again tying into the idea that we ought to be thinking about how we can train models like this when we have highly constrained compute resources. ELECTRA was keyed into that idea from the very beginning. And I think the small model shows that it can be productive.
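For readers who want to see the shape of the training objective described earlier (generator MLM loss plus a weighted discriminator loss), here is a schematic PyTorch sketch. The tensor shapes, names, and the exact weighting are illustrative rather than taken from the released ELECTRA code.

```python
import torch
import torch.nn.functional as F

def electra_loss(gen_logits, target_ids, mask_positions,
                 disc_logits, replaced_labels, disc_weight=50.0):
    """Schematic combined objective: generator MLM loss + weighted discriminator loss.

    gen_logits:      [batch, seq_len, vocab]  generator predictions
    target_ids:      [n_masked]               original ids of the masked tokens
    mask_positions:  [batch, seq_len] bool    which positions were masked
    disc_logits:     [batch, seq_len]         replaced-vs-original scores
    replaced_labels: [batch, seq_len]         1 if the token was replaced, else 0
    """
    # Generator: ordinary MLM cross-entropy, computed only at the masked positions.
    gen_loss = F.cross_entropy(gen_logits[mask_positions], target_ids)
    # Discriminator (ELECTRA): binary replaced-vs-original prediction over
    # every input token, not just the masked ones.
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits,
                                                   replaced_labels.float())
    # The discriminator term is upweighted; treat the exact value as illustrative.
    return gen_loss + disc_weight * disc_loss
```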
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Directions_to_Explore_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
BILL MACCARTNEY: OK. We're underway. We have a simple model with reasonable performance. Where do we go from here? Well, to make further gains we need to stop treating the model as a black box. We need to open it up and get visibility into what it's learned and more importantly, where it still falls down. And then we can begin to look at some ideas for how to improve it. One important way to gain understanding of our trained models is to inspect the model weights. Which features are strong positive indicators for each relation, and what features are strong negative indicators? The rel_ext model contains a function called examine_model_weights that makes it easy to inspect. So here, I show results for just 4 of our 16 relations. And in general, I think the features with large positive weight are pretty intuitive. So for the author relation, the biggest weights are author, books, and by. For film performance, we have starring, alongside, and opposite. By the way, I was a little bit puzzled when I first saw alongside and opposite, because I thought that those are words that would naturally appear between the names of two actors, not between the name of a film and the name of an actor. What I did was I wrote a little bit of code to pull up the actual examples that caused these weights to wind up being large. And what I realized was there's a very common pattern which is like X appeared in Y alongside Z. So X and Z are actors. Y is a film. X appeared in Y alongside Z. So you have Y alongside Z. That indicates that Z is an actor that appeared in film Y. And I think something similar happens for opposite. So it does make sense that these are strong indicators of the film performance relation. For has spouse, we have wife, married, and husband. I think this makes perfect sense. The one that's a bit surprising is adjoins. So for adjoins we have Cordoba, Taluks and Valais. It's odd to see specific place names here. They certainly don't seem to express the adjoins relation. I wonder if anyone has a guess what's going on. I was really puzzled by this. And so again, I wrote a bit of code to find the specific examples that contributed to this result. I looked for examples where the two entity mentions, stand in the adjoins relation. And these terms, these specific terms, appear in the middle. And when I looked at the examples, I realized that what's going on is that it's very common to have lists of geographic locations. So A, B, C, D. And in such lists, it's not uncommon that just by chance A and C or A and D stand in the adjoins relation. Maybe it's a list of provinces in a country. And of course, some of those provinces are adjacent to each other. So if A adjoins C or A adjoins D, that will tend to make B appear as a positive indicator for the adjoins relation and especially if the corpus just happens to contain several such examples. So I think that's what contributed to this puzzling result. The features with large negative weights look a bit more haphazard. But I think that's not surprising. It's kind of what you expect for this kind of linear model. By the way, you can fiddle with the code that prints this out. And here it just prints the top three. But you can fiddle with the code, there's actually a parameter that tells you-- it tells it how many of the top of the list to print. And so you can print much longer lists. And for many of the relations, the top 20, even the top 50, features all look very plausible and intuitive. And it's quite satisfying to see those results come out. 
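The course's rel_ext code wraps this up for you, but the underlying idea is simple. Here is a minimal sketch for a single relation, assuming a scikit-learn LogisticRegression trained on features produced by a DictVectorizer; the helper name and variables are ours, not the module's.

```python
import numpy as np

def top_features(model, vectorizer, k=3):
    # Pair each feature name with its learned weight and sort by weight.
    names = vectorizer.get_feature_names_out()
    weights = model.coef_[0]          # binary classifier: one weight vector
    order = np.argsort(weights)
    top_positive = [(names[i], weights[i]) for i in order[-k:][::-1]]
    top_negative = [(names[i], weights[i]) for i in order[:k]]
    return top_positive, top_negative

# Usage sketch, with one trained model per relation (names hypothetical):
# pos, neg = top_features(model_for_relation, vectorizer)
```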
Another way to gain insight into our model is to use it to discover new relation instances that don't currently appear in the KB. In fact, as we discussed last time, this is the whole point of building a relation extraction system, to augment a KB with knowledge extracted from natural language text at scale. So the decisive question is, can our model do this effectively? We can't really evaluate this capability automatically, because we have no other source of ground truth than the KB itself. But we can evaluate it manually by examining KB triples that aren't in the KB but which our model really, really thinks should be in the KB. So we wrote a function to do this. It's called find_new_relation_instances. And you can go look at the code. Here's how it works. It starts from corpus examples containing pairs of entities that don't belong to any relation in the KB. So these are what we described last time as negative examples. We'll consider each such pair of entities as a candidate to join each relation. So we'll take the cross product of all of those entity pairs and relations. We'll apply our model to all of those candidate KB triples. And we'll just sort the results by the probability assigned by the model in order to find the most likely new instances of each relation. So we'll find the candidate KB triples that aren't currently in the KB, but which the model believes have really high probability of being valid. So let's see what we get when we run it. Here are the results for the adjoins relation. Notice that the model assigned a probability of 1.0 to each of these pairs. It is totally convinced that these pairs belong to the adjoins relation. But the results are, well let's be honest, the results are terrible. Almost all of these pairs belong to the contains relation, which by the way isn't actually one of our 16 relations. But intuitively, they should belong to a contains relation not the adjoins relation. You could make a case maybe for Mexico and Atlantic Ocean belonging to the adjoins relation. But I mean, to be honest, even that one is a stretch. One other thing worth noting, whenever the model predicts that X adjoins Y, it also predicts that Y adjoins X. You might for a moment think that this shows that the model has understood that adjoins is a symmetric relation. Unfortunately no, that's not what's going on. It's just an artifact of how we wrote the simple bag of words featurizer. That simple bag of words featurizer makes no distinction between forward and reverse examples. So it has no idea which one comes first and which one comes second. And that will be true for asymmetric relations just like for symmetric relations. So this is not a very promising start. It's-- I mean, we saw a pretty good quantitative evaluation for this model. So this is a little bit surprising. Let's see what we get for some other relations. So here are the results for the author relation. And these look a lot better. Once again, all of the probabilities are 1. But this time, every single one of these predictions is correct. Well not quite actually, because the book is supposed to appear first, like "Oliver Twist" and the author second, Charles Dickens. So this first one actually is correct. The second one is backwards. It has the author first and the book second. Our model is completely ignorant of order. So it's just as likely to put things in reverse. But if you ignore that, if you're willing to imagine that we could easily fix that, then the results look great. 
We could put all of these triples right into our KB, and we'd have a bigger and better KB because of it. This is relation extraction at its finest. This is what we wanted. Here are the results for the capital relation, and it's a similar picture. All of the probabilities are 1.0. The ordering is frequently reversed. It's very haphazard. But if you put that aside, the results look very good. You could quibble perhaps with Delhi here. I mean, the capital of India is really New Delhi. But New Delhi is part of Delhi, so you know, it's close. Still, overall, I think this looks really good. Let me show you one more. This is the last one I'll show. These are results for the worked_at relation. And here, the results are more mixed. So we have Stan Lee and Marvel Comics. Sure, if you can say that Elon Musk worked at Tesla Motors, then you can say that Stan Lee worked at Marvel Comics. And while we're at it, Genghis Khan worked at the Mongol Empire. Sure, why not? But the rest are nonsense. So why? What happened here? Well, when you encounter surprising and mysterious results in your model output, it's really good practice to go dig into the data and investigate. And this is called error analysis. And I want to show you a couple of examples of that now. So first let's see if we can figure out what happened with Louis Chevrolet and William C. Durant. First, let's look up the corpus examples containing these two entities. I'm only going to look up the examples that have them in this order. I should look them up in the other order as well. And as a matter of fact, I did. I'm just not going to put that on the slide. I'm just going to focus on what happens in this order. So I'm going to look up these examples and print out what they look like. And here's what we get. There are 12 examples. And they all look identical. Actually, I didn't print the full context here. If you look at the code closely, you'll see that I'm printing the suffix of the left and the prefix of the right. So there's more context further out on the left and right. And if you did see the full context, you would realize that the examples do differ slightly. But they're very, very similar. They're near duplicates. I mentioned this last time, that this is one of the warts of this dataset. It contains a lot of near-duplicate examples. And I think this is an unfortunate consequence of the way the sample was constructed, the way the web documents that this corpus was based on were sampled from the web. And it seems like that's bitten us here. But it still leaves the question, why? Why did that repetition lead the model to predict that this pair belongs to the worked_at relation? Because it doesn't look obvious that that's the right relation here. I suspect that it's because of the word "founder", because X being a founder of Y strongly implies that X worked at Y. And actually, we can check this. It's not that hard to write some code to inspect the weight that was assigned to the word "founder" in the model for the worked_at relation. So here's a little bit of code that does that. And sure enough, in the model for worked_at, the word "founder" gets a weight of 2.05, which is pretty large. If you look at the distribution of weights, it's a relatively large one. I forget exactly, but I think it's in the top 10. It's a relatively significant, quite significant, feature for this model. So that's what happened. We've got 12 examples. Each of them is contributing a sizable weight.
And the result is that the model is completely convinced that this is the right-- that the worked_at relation holds here. By the way, I didn't check. But I'm confident that the founder, the model for the founders relation, will also predict that the founder relation holds here. An understanding of what went wrong here could help to stimulate some ideas for how to fix it. I don't think I'll-- I think I won't. I mean, I have some ideas. I think I won't give them away. But I hope this underscores the value of error analysis. If you really want to understand-- when you see weird results, you really want to understand what's going on in your data that led to these weird results. Let me show you one more example that has a bit of a different flavor. Let's look at what's going on with Homer and the Iliad. I wrote a little bit of code to investigate this one too. And I'm not going to show the whole investigation but I'm just going to cherry pick the most informative results. So one thing that I notice is that there are a lot of examples for Homer and Iliad. In fact, there are 118 of them just in that direction. There's more in the reverse direction. That's impressive. But again, by itself, it doesn't explain why worked_at looked like a good prediction. By the way, I did check to see if it was the same explanation as last time, a lot of near duplicates. It's not. That's not what's going on here. But the next thing I did was to write some code to count up the most common middles that join Homer and Iliad across these 118 examples. And so that code looks like this. And here are the results. And there was one middle that strongly dominated. And it's apostrophe s, so as in "Homer's Iliad." So that makes sense because clearly the possessive can indicate the author relation. You expect to see Homer's Iliad and Jane Austen's Pride and Prejudice and many other similar formulations. But the apostrophe s can equally well indicate the worked_at relation, as in Tesla's Elon Musk or Microsoft's Bill Gates. So this apostrophe s is really highly ambiguous. Just to confirm that this is actually significant to the result that we saw let's check what weight was assigned to apostrophe s in the model for worked_at. So this code is similar to the code on the previous slide. But this time we're looking for the weight for apostrophe s. And it turns out that the weight was 0.58. OK. It's not a huge weight. But it's not small either. And this feature occurred 51 times across the corpus. So I think that's what happened. We had a non-trivial amount of weight that got added up 51 times. And we wound up with a really big contribution and the model feeling really confident about this relation. So again, thinking about this problem might suggest some strategies for how to reduce that ambiguity. The fundamental problem here is that apostrophe s is highly ambiguous. But a good question to ask yourself is, is there other information in the sentence that could help to distinguish the author relation from the worked_at relation? And I think there is. Again, I don't want to give too much away. But I think there is other evidence in these sentences that could help to tease apart these two relations. And this kind of error analysis is really indispensable to the model development process. Now for the homework and the bake off. We're going to turn you loose to find ways to improve this baseline model. And there are a lot of possibilities. One area for innovation is in the feature representation we pass to the learning algorithm. 
So far, we've just used a simple bag of words representation. But you can imagine lots of ways to enhance this. So you could use word embeddings, like the GloVe embeddings. You could use a bag of words representation that distinguishes between forward and reverse contexts. You could use bigrams or longer n-grams. You could leverage the part-of-speech tags that we have in the corpus, or information from WordNet. Much of the early work on relation extraction relied heavily on syntactic features. So maybe try that. And so far, we've used features based only on the middle phrase, the phrase between the two entity mentions. You could also try using information about the entity mentions themselves, for example, the entity types. Or you could try deriving features from the left and right context. There are a lot of possibilities for richer feature representations. There's also a lot of room for innovation with the model type. Our baseline model is a simple linear model optimized with logistic regression. That's a good place to start. But there are many other possibilities. If you want to stick with linear models, you could use an SVM. And sklearn makes that easy. Or you could experiment with neural networks. You could use a simple feed-forward neural network as a drop-in replacement for our linear model. Or since examples can be of variable length, you might consider a recurrent neural network, like an LSTM. If you go this way, you'll have to think carefully about how to encode the input. If the input is just the middle phrase, things are probably relatively straightforward. But if you want to include the entity mentions or the left and right context, you might need to think carefully about how to demarcate the segments. Or you could use a transformer-based architecture like BERT, although the quantity of training data that we have available here might be a bit small. I think all of these are potentially interesting and fruitful directions for exploration. And I think you can have a lot of fun with this.
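As one concrete instance of the feature ideas just listed, here is a sketch of a bag-of-words featurizer that distinguishes forward from reverse contexts. It assumes a rel_ext-style interface, where a featurizer receives a KB triple, the corpus, and a feature counter, and where each corpus example has a middle attribute; treat those names as assumptions rather than guaranteed API.

```python
def directional_bag_of_words_featurizer(kbt, corpus, feature_counter):
    # Forward direction: subject mention appears before the object mention.
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split():
            feature_counter[word + '_SO'] += 1
    # Reverse direction: object mention appears before the subject mention.
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split():
            feature_counter[word + '_OS'] += 1
    return feature_counter
```

Suffixing the tokens by direction means the learned weights can differ for "X's Y" versus "Y's X" patterns, which addresses the order-blindness seen in the author and capital examples above.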
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Problem_Formulation_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
BILL MACCARTNEY: So I now want to turn to the question of how to formulate our prediction problem precisely. I want to be precise about how we're defining the inputs and outputs of our predictions, and that, in turn, is going to have consequences for how we join the corpus and the KB, how we construct negative examples for our learning algorithms, and how we handle multi-label classification. So first, what is the input to our prediction problem? In the supervised learning paradigm, the input is a pair of entity mentions in the context of a specific sentence. We're trying to label a sentence just like we do in part-of-speech tagging or sentiment analysis. But in the distant supervision paradigm, we'll do things differently. The input will be a pair of entities, full stop, independent of any specific context. We're trying to determine the relation between this entity and that entity, and that's it. The other question I want to look at is, what's the output of the prediction problem? Are we trying to assign a pair of entities to a single relation? That's called multi-class classification. Or are we trying to assign a pair of entities to multiple relations? That's called multi-label classification, and it's a different beast. So over the next couple of slides, I want to explore the consequences of these choices. The difference between these two ways of thinking about the input becomes really important when we talk about how we're going to join the corpus and the KB. In order to leverage the distant supervision paradigm, we need to connect those two. We need to connect information in the corpus with information in the KB. And there are two different possibilities, depending on how we formulate the prediction problem, depending on how we define the input to the problem. If our problem is to classify a pair of entity mentions in a specific example in the corpus, in a specific sentence, then we can use the KB to provide the label, and this is what it looks like. We have a corpus example like this. We're trying to label this specific example. And to do it, we can check to see if these two entities are related in the KB. Yep, they are. And we can use that to generate a label for this example. Labeling specific examples is how the fully supervised paradigm works. So it's an obvious way to think about leveraging distant supervision as well. It can be made to work, but it's not actually the preferred approach. If we do it this way, we'll be doing things exactly as they're done in the supervised paradigm. It does work, but it's not the best way to take advantage of the opportunity that distant supervision creates. There's another way of doing things, and the other way is where, instead, we define our problem as classifying a pair of entities. Not entity mentions in a specific sentence, but just entities. Elon_Musk and Tesla, period, apart from any sentence. And if that's how we define the input to our problem, then we can use the corpus to provide a feature representation that will be the input to the prediction. So if we have an entity pair like Elon_Musk and SpaceX that we're considering adding to a relation in the KB, we can find all sentences in the corpus containing this pair of entities, and then we can use all of those sentences to generate a feature representation for this pair. So in this example, and I'm imagining-- it doesn't have to be this way, but I'm imagining that we're using a simple bag of words feature representation.
The bag of words has come from the middle, that is, the phrase between the two entity mentions, the blue phrases here. And all I've done is counted up the words in all of these blue phrases across all of the examples in the corpus where these two entities co-occur. Yeah. Well, you can see here in the token counts that they include tokens from the various examples. All of these examples together are used to generate a single feature representation. This is a feature representation for this pair, and it's this feature representation that my learned model will use to make a prediction about this pair. So this is a very interesting way of reversing things. Instead of using the KB to generate a label to make a prediction about a specific pair of entity mentions in a specific sentence, I'm turning things around. I'm using the corpus to generate a feature representation that I will use to make a prediction about an entity pair in abstraction, an entity pair considered just as a pair of entities. Just one more thought on this. This is still kind of about the topic of joining the corpus and the KB. We've created a data set class which does that, which combines a corpus and a KB. Just kind of staples them together and provides a variety of convenience methods for the dataset. And one of those convenience methods is this one, count_examples, which shows, for each relation, how many examples we have in a corpus, how many triples we have in a KB, and the ratio, so the total number of examples, the average number of examples per triple. For most relations, the total number of examples is fairly large, so we can be optimistic about learning which linguistic patterns express a given relation. I mean, even the smallest one has at least 1,500 examples. That's not really industrial grade data, but it's certainly enough for the kind of exploration that we're doing here. However, for individual entity pairs, the number of examples is often quite low. So some of these relations are betwee-- some of these ratios are between 1 and 2. Of course, more data would be better, much better, but more data could quickly become unwieldy to work with in a notebook like this, especially if you're running on an ordinary laptop. And this data is going to be enough to allow us to have a fruitful investigation. First, I want to talk about negative examples. So by joining the corpus to the KB, we can get lots of positive examples for each relation, but we can't train a classifier on positive examples alone. We're also going to need some negative examples, negative instances. So that is entity pairs that don't belong to any relation. We can find such pairs by searching the corpus for examples which contain two entities which don't belong to any relation in the KB. So we wrote some code to do this, and there's a method on the dataset class called find_unrelated_pairs. And when we run it, wow. It found almost 250,000 unrelated pairs, so 250,000 negative instances for our prediction problem. And that's way more than the number of positive instances. If you remember, the KB has 46,000 triples. Each of those is basically a positive instance. It's something that we know is definitely a positive example of the relation. Here, we have 250,000 negative examples. It's so many more that when we train models, we'll wind up downsampling the negative instances substantially so that we have a somewhat more balanced distribution. A reminder, though. Some of these supposedly negative instances may be false negatives. 
They may be entity pairs that don't appear to be related, but in the real world, actually are. Our KB is not complete. A pair of entities might be related in real life, even if they don't appear together in a KB. And as I said earlier, after all, that's the whole point. That's the whole reason we're doing relation extraction is to find things that are true in real life and true according to some text that somebody wrote, but aren't yet in our KB. OK, now I'm going to come to the question that was asked about pairs that belong to multiple relations. And this is related to the question of the outputs of our prediction problem. We wrote some code to check the KB for entity pairs that belong to more than one relation. So that's this method count_relation_combinations. And it turns out, this is a really common phenomenon in the KB. There are lots of pairs that belong to multiple relations. For example, I won't even mention the most common one, but there are 143 people in the KB whose place of birth is the same as their place of death. And actually, that's not that surprising. That makes perfect sense. It even turns out that there's no fewer than seven people who married a sibling. Well, since lots of entity pairs belong to more than one relation, we probably don't want to be forced to predict a single relation. So this suggests formulating our problem as multi-label classification. We want our models to be able to predict multiple relations for any given entity pair. There are a number of ways to approach multi-label classification, but the most obvious is the binary relevance method, which just factors multi-label classification over n labels into n independent binary classification problems, one for each label. So if you have a pair like Pericles and Athens, you want to be able to predict any combination of these labels. You just train a separate model, a separate binary classifier, for each of the labels independently. Each of them generates a prediction independently. And in this example, we've predicted that the place of birth relation applies, the place of death relation applies, but not the has_sibling relation. A disadvantage of this approach is that it fails to-- because it treats the binary classification problems as independent, it fails to exploit correlations between labels. For example, there may well be a correlation between the place of birth label and the place of death label. And if you already have evidence that the place of birth label applies, that might tilt you, at least a little bit, toward saying yes for place of death. This approach of factoring them into independent binary classification problems is not able to take advantage of that information. But it has the great virtue of simplicity. It's incredibly straightforward, incredibly easy to think about and to implement. And it'll suffice for our purposes. It's going to make the investigation move forward very smoothly. So I want to sum up a little bit. We set out to establish a precise formulation of our prediction problem. And when we put all the pieces together, here's the problem formulation we've arrived at. The input to the prediction will be an entity pair and a candidate relation. The output will be a Boolean indicating whether the entity pair belongs to the relation. Since a KB triple is precisely a relation and a pair of entities, we could say equivalently that our prediction problem amounts to binary classification of KB triples. 
Given a candidate KB triple like worked_at, Elon_Musk, SpaceX, do we predict that it's valid? This is really nice because it's a very simple way of thinking about what problem we're taking on. We have a bunch of positive examples, which come from our KB. We have a bunch of negative examples, which we synthesize from the corpus using pairs which co-occur in the corpus but don't occur in the KB. Now we have lots of data consisting of candidate KB triples, including positive examples and negative examples. We can use that data both for training and for evaluation. And once we've trained a model to do this binary classification, we can now consider novel KB triples which don't appear anywhere in our data, and ask whether the model will predict them to be true. And by doing that, we may discover new relations that are not currently part of the KB that could be candidates for adding.
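Here is a minimal sketch of the binary relevance approach described above, written with scikit-learn: one independent binary classifier per relation, with each classifier voting separately at prediction time. The data structures (a dict from relation names to feature matrices and 0/1 label vectors) are illustrative stand-ins, not the course's rel_ext interfaces.

```python
from sklearn.linear_model import LogisticRegression

def train_binary_relevance(featurized_data):
    # featurized_data: dict mapping relation name -> (X, y), where y is a
    # 0/1 vector saying whether each entity pair belongs to that relation.
    models = {}
    for relation, (X, y) in featurized_data.items():
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, y)
        models[relation] = clf
    return models

def predict_relations(models, x):
    # x: a single feature vector with shape (1, n_features).
    # Each classifier votes independently, so an entity pair can receive any
    # combination of relation labels, including none at all.
    return {rel for rel, clf in models.items() if clf.predict(x)[0] == 1}
```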
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Homework_2_Sentiment_Analysis_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
SPEAKER 1: Hello everyone. This video is an overview of homework 2, which is on supervised sentiment analysis. And I would actually think of it as an experiment in cross-domain sentiment analysis. Let's just walk through this notebook, and I'll try to give you a feel for the problem and our thinking behind it. So the plot is the usual one: we're going to introduce a task and associated data, and help you with setting up some baselines and doing error analysis. And that will all lead into these homework questions, which are meant to help you explore the data in meaningful ways, and also set up some additional baselines that might ultimately inform your original system, which you then enter into the bakeoff. By way of overview, we're doing ternary, that is positive, negative, neutral, sentiment analysis, and we're going to be dealing with two data sets: the Stanford Sentiment Treebank, and a brand new assessment data set that is a dev-test split of sentences drawn from restaurant reviews. We're giving you the SST train set for training, and asking you to evaluate on the SST dev and test sets, and also on the new dev and test sets with the restaurant reviews. And that's the cross-domain aspect of this. You're completely unconstrained about what you do, in terms of bringing in new data for training and doing things in development. The one constraint that we really need to firmly impose here is that, of course, the SST3 test set is a public test set. It's actually included in your data distribution, so that other notebooks can run some baseline systems and compare against the literature. But that test set is completely off limits during development. It's really important that you do all your development just on the dev splits, and completely ignore the fact that you have a labeled version of the SST3 test set. And as I say here, much of the scientific integrity of our field depends on people adhering to this honor code, that is, doing no development on what is test data. Because test data is our only chance to get a really clear look at how our systems are generalizing to new examples and new experiences. So please keep that in mind. The rationale behind this assignment, of course, is to help you get familiar or refamiliarize yourself with core concepts in supervised sentiment analysis, and the associated life cycle of developing systems in this space, which involves writing feature functions, trying out model architectures, hyperparameter tuning, and also possibly doing some comparisons of models using statistical tests to try to get a sense for how much meaningful progress you're making as you iterate on your system design. And we're also trying to push here, in this notebook, that error analysis can be a powerful way to help you find problems in your system and then address them. One more methodological note: as you'll see from this notebook, I'm encouraging you to use functionality in this sst.py module, which is part of our course code distribution. You're not required to use it. Really, the only contract we need to have with you is that your original system have a predict_one method that maps strings to predictions very directly. But other than that, you're unconstrained. I do want to say, though, that I think sst.experiment is a flexible framework for doing lots of experiments without writing a lot of boilerplate code. So it should, if you get used to it, be a powerful basis for doing a lot of experiments, which I think is crucial to success here.
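To make that predict_one contract concrete, here is a minimal sketch of an original system exposing such a method, built around scikit-learn. Everything here, the class name, the featurizer, the choice of model, is a placeholder rather than part of the assignment code.

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

class OriginalSystem:
    def __init__(self):
        self.vectorizer = DictVectorizer(sparse=True)
        self.model = LogisticRegression(max_iter=1000)

    def unigrams_phi(self, text):
        # Whitespace tokenization with token counts, as in the baseline.
        return Counter(text.split())

    def fit(self, texts, labels):
        X = self.vectorizer.fit_transform([self.unigrams_phi(t) for t in texts])
        self.model.fit(X, labels)
        return self

    def predict_one(self, text):
        # The one required contract: a string in, a label out.
        X = self.vectorizer.transform([self.unigrams_phi(text)])
        return self.model.predict(X)[0]
```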
We do some setup by loading a bunch of libraries and getting a pointer to the data. And that brings us to the training set here. So this is going to load in a pandas data frame. You can see that we've got about 8,500 examples. Do review the notebook covering this data set here. There are a bunch of other options for this train reader. In particular, you can decide whether to keep or remove duplicates, and you can also decide whether you want to train on the labeled subtrees that the SST contains, which vastly increases the amount of training data you have. That will be very compute intensive, but it could be very productive. It's also worth saying again that you are free to bring in other training sets. And in fact, it might be very productive to bring in the DynaSent data set, which is covered in our screencasts for this unit. That data set has a lot of sentences from restaurant reviews. And it was also labeled in exactly the same way, using the same protocols as were used for creating the development set of restaurant reviews for this unit, which is importantly different, I think, from the protocols that were used for the SST. So bringing in more training data could help you not only with the cross-domain problem, but also with the kind of label shift that has probably happened between SST and these new development data sets that we're introducing. That does bring me to the dev sets here. So we have SST dev, that's also a pandas data frame, as well as this new bakeoff dev set of restaurant reviews, also a pandas data frame, and here you can see just three randomly chosen examples: an example ID, the text of the sentence, and a label, which is either positive, negative, or neutral. And is_subtree is always zero, because these assessment data sets have only full examples, no labeled subtrees the way the SST train set does. We can get a look at the label distribution. And I'll just mention that the label distribution for the test set is very similar. It has one noteworthy property, which is that it's highly skewed. A lot of neutral examples, which I think is realistic for actual data, even review data, and then there is a skew toward positivity, with negative the smallest. And this kind of label imbalance, I think, is severe enough that it might impact optimization choices that you make. This next section here just sets up a softmax baseline. We use a unigrams feature function. This couldn't be simpler: we're just splitting on whitespace and counting the resulting tokens. And then we have this very thin wrapper around logistic regression. And those are the two pieces that come together to run here, and that's sst.experiment. A lot of information about your experiment is stored in this variable, and what's being printed out is just a summary classification report. We have SST dev and bakeoff dev as our two assessment data frames. The results for each one of those are printed separately here. And then our bakeoff metric is this mean of the macro-average F1 scores across the two data sets. Exactly these two, but of course, at bakeoff time we'll be using the test sets. So you might be guided to sort of hill-climb on this number here, while also attending to these two numbers, which are contributing to it. So for example, you can see here that, as expected, since we trained on the SST, we're doing better on the SST dev by far than we are on the new bakeoff data. The next section here just shows you another kind of baseline, and this is a deep learning baseline, an RNNClassifier.
Our feature function is very simple here, because we just split on whitespace, and we rely on the RNN itself to do all the featurization, which is like an embedding lookup and then processing the example. So that's very simple. And then the wrapper is also very simple here. We're going to set the vocabulary for the model, with a min count of two, that seems productive, and then finally run the experiment. And the one thing that's important here, the one change, is that you set vectorize equals False here. Unlike in the previous baseline, we are not using scikit-learn's DictVectorizer to process count dictionaries to get us from features to feature matrices. Here we are feeding our examples directly through into the model. Our model expects token streams with no messing about, and so vectorize equals False gives them a pass-through all the way to the model. So remember that, otherwise this will all fall apart. But other than that, it's exactly the same setup. Let's run it here. I've got some timing information; we're going to fast-forward through this because it takes a little bit of time. But you'll see a report, and I'm currently on just a very old CPU-based Mac. So this will give you a sense for the cost of development for deep learning in this space. All right, our model's stopping criterion was met after 49 epochs, and here's our look at the results, which are kind of comparable to what we saw with the softmax baseline. All right, and that brings us to error analysis, which can be an important step in improving your system. I've written a few functions that make use of all the information that is encoded in the return values for sst.experiment, which I hope package together everything you need to do error analysis, reproduce your results, and make use of your model in downstream experiments. Here we're going to use this function find_errors. I've done a little bit of pre-processing of the errors that were found and packaged them together. And then this cell here is just an example of the kind of thing that you might do. Here we're looking at cases where the softmax model is correct, the RNN was incorrect, and the correct label is positive. You could of course fiddle with those parameters here. We've got 168 examples going into that class, and then we can look at a sample of the actual texts that fall into that group, as a way of figuring out how these models differ and maybe improving one or both of them. And that brings us to the homework questions. And again, these are meant to help you explore the data and set up some additional baselines that inform original system development. We're going to start with one that's data oriented. I've called this token-level differences. What I'm trying to do is raise to your awareness the fact that the SST data and the new restaurant review data are just encoded in different ways at the level of tokenization. This is mainly the result of the SST being kind of the product of a historical process, beginning with Pang and Lee 2005 and going on through the SST project itself. So there are some funny things about it that I think could certainly affect any kind of transfer from one domain to the other. And since you are training on SST data, it's important to be aware of how it might be idiosyncratic. So that happens here: you write this function get_token_counts, and as usual you have a test; you pass the test, you're in good shape. The next question relates to the cross-domain nature of our problem: training on some of the bakeoff data.
In the standard paradigm, you are training on SST, evaluating on SST, and also on this new bakeoff data set of restaurant review sentences. What would happen if you augmented your training set with a little bit of data from the development set of restaurant review sentences? You might have a hunch that that's going to improve system performance. And this question here simply asks you to run such an experiment; as usual, you have a test. I think you will find that this is very productive in helping your system get traction on the new data. And that should be a clue as to how to do a really good job in the bakeoff with your original system. This next question here is about feature representation: a more powerful vector averaging baseline. This is a step toward deep learning. It builds on this section of the notebook here, where essentially we average together vector representations of words to represent each example. And those are the input to a simple logistic regression classifier. So those are nice low-dimensional models that tend to be quite powerful. This question is asking you to replace the logistic regression with a shallow neural classifier, so that's maybe the more powerful part here, and also to explore a wide range of hyperparameters for that model, to get a sense for which settings are best for our problem. And that brings us to BERT encoding. And this is like one step further down the line toward deep learning and fine-tuning. This question is simply asking you to encode your examples using BERT, in particular, taking the summary representation above the class token, the final output there, as your summary representation of the entire example. And those become, presumably, the inputs to some downstream classifier or potentially a fine-tuning process. The idea is that this is like one step better than the vector averaging that we just looked at. You do not need to conduct an experiment with SST; you're simply implementing this feature function here. But since sst.experiment does make it really easy to run experiments once you've implemented the feature function, I would encourage you to choose some classifier model and see how well this does. But as usual, you have a test, and the test is just about the feature function. And it will make sure you're using all of these values correctly. And that brings us to the original system. And I just want to remind you that you are unconstrained except for the fact that you cannot make any use of the SST test set during development. The labels for that are off limits. But everything else is fair game. Bring in new training data, try new model architectures, and so forth and so on. We've given a few ideas here, but this is by no means meant to be restrictive. It's just meant to get the creative juices flowing. Other than that, this is the same procedure as homework 1: we want a description of your system to inform the teaching team about what worked and what didn't. And it would be great if you reported your score, which is the mean of the two macro-F1 scores for our two data sets, but on the development sets there. And that brings us to the bakeoff. And again, the bakeoff procedure is familiar. The one piece here, the crucial piece, is that you write a predict_one function that maps the text directly to a prediction using your original system. I've given two examples here; yours might be simpler depending on whether or not you use the sst.experiment framework.
But that all comes together here with create_bakeoff_submission, where you input that function. You won't need to change this output file. And you can see that this function here loads in our two test sets, which are unlabeled, and uses your predict_one function on all of those examples here, and then writes a file, which you then upload to the autograder on Gradescope. That happens here. So I just want to reiterate that, in all senses, the test data labels are completely off limits to us. All the development, conceptual and otherwise, should happen on the development data.
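The submission helper is provided for you, but schematically it does something like the following. The file names and column names in this sketch are placeholders, not the ones actually used in the notebook.

```python
import pandas as pd

def write_predictions(predict_one, test_files,
                      output_filename="bakeoff-entry.csv"):
    frames = []
    for filename in test_files:
        df = pd.read_csv(filename)
        # Apply the user-supplied predict_one to every unlabeled sentence.
        df["prediction"] = [predict_one(text) for text in df["sentence"]]
        frames.append(df)
    # Write one combined file for upload to the autograder.
    pd.concat(frames).to_csv(output_filename, index=False)
```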
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Feature_Attribution_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 5 in our series on analysis methods in NLP. We're going to be talking about feature attribution methods. This is fundamentally a powerful tool kit for helping you understand how the features in your model contribute to its output predictions. Our fundamental question here is, kind of, why does your model make the predictions that it makes? There are many motivations for asking this question, here are just a few. To start, you might just want to understand whether your model is systematic with regard to some specific linguistic phenomenon. Has it actually captured that phenomenon? You might also want to know whether it's robust to minor perturbations in its input. You might use these techniques to diagnose unwanted biases in your model. And relatedly, you might use them to find weaknesses in your model that an adversary could exploit to lead your model to do really problematic things. Fundamentally, I think that this is a tool kit that will help you write really excellent analysis sections for your paper. To that end, I'm going to try to show you a bunch of code that will help you get hands on with these techniques. I'll do it at a kind of high level in the screen cast. And I've just contributed this new notebook, feature attribution, to the course code repository. And that should be flexible and adaptable, and help you take these techniques and apply them to whatever models and ideas you're exploring for your projects. The star of our show, really the only reason that I can do this, is this amazing Captum.ai library. It implements a wide range of feature attribution techniques. We're going to talk extensively about the integrated gradients method and use the gradient-based method as a kind of simple baseline for that method. But as you can see here, Captum implements a wide range of different algorithms, some very particular to specific model designs and others completely agnostic about what kind of model you're exploring. So it's a very exciting tool kit. The Sundararajan et al 2017 paper introduced the integrated gradients method. It's also a lovely contribution because it gives us a kind of framework for thinking about feature attribution methods in general. And as part of that, they offer two axioms that I'm going to use to guide this discussion. The first, and the more important one, is sensitivity. If two inputs x and x prime differ only at dimension i and lead to different predictions, then the feature associated with that dimension must have non-zero attribution. And with my simple example here, you can get a sense for why sensitivity is such a fundamental axiom. If for some model m and three dimensional input 1, 0, 1, we get a prediction of positive. And if for that same model the input 1, 1, 1, leads to the prediction negative, then we really ought to expect that the feature associated with the second position must have non-zero attribution. Because it must be decisive in leading the model to make these two different predictions. The second axiom is going to be less important to our discussion, but it's nonetheless worth having in mind. It is implementation invariance. If two models m and m prime have identical input/output behavior, then the attributions for m and m prime are identical. This is really just saying that the attributions we give should be separate from any incidental differences in model implementation that don't affect the input/output behavior of that model. 
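Stated slightly more formally (this is a paraphrase of the two axioms, with A_i(M, x) standing for the attribution assigned to feature i; the notation is ours, not the paper's):

```latex
% Sensitivity: a feature that is decisive for the prediction must get credit.
\text{If } x \text{ and } x' \text{ differ only in dimension } i
\text{ and } M(x) \neq M(x'), \text{ then } A_i(M, x) \neq 0.

% Implementation invariance: attributions depend only on input/output behavior.
\text{If } M(x) = M'(x) \text{ for all } x,
\text{ then } A(M, x) = A(M', x) \text{ for all } x.
```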
To start our discussion, let's begin with this simple baseline, which is simply multiplying the gradients by the inputs. This is implemented in Captum as input-by-gradient, and I'm showing it with respect to some particular feature i, given model M and input x. We simply get the gradients for that feature and then multiply them by the actual value of that feature. It's as simple as that. Here are two implementations. The first one, in cell 2, does this kind of using raw PyTorch, just to show you how we can use PyTorch's autograd functionality to implement this method. And the second implementation is from Captum. And it's probably more flexible. And it uses this input-by-gradient class. To give you a full illustration here, I've just set up a simple synthetic classification problem using scikit tools. My model will be a TorchShallowNeuralClassifier, which I fit on that data. And then in cells 9 and 10, I use those two implementations of this method. And you can see in 11 and 12 that they give identical outputs. Another thing worth noting here is that I have used the method by taking gradients with respect to the actual labels in our dataset. You can often get a different picture if you take gradients with respect to the predictions of the model. And that might give you a better sense for why the model is making the predictions that it makes. In this case, since the model is very good, the attributions are only slightly different. That's our kind of baseline. I want to show you now that the input-by-gradients method fails the sensitivity test. And this is an example from the Sundararajan et al paper. They give this simple model M here, which is effectively just ReLU applied to 1 minus the input. And then you take 1 minus that ReLU calculation there. And that's the model. It's got one-dimensional inputs and outputs. If you calculate for input 0, you get an output of 0. And if you give the model input 2, you get an output of 1. Since we have differing output predictions, sensitivity tells us that we have to have differing attributions for these two cases here, these two one-dimensional inputs. But unfortunately, when you calculate through with this method, you get 0 attribution in both cases. That's a failure of sensitivity, and it points to a weakness of this method. Let's move now to integrated gradients. And let me start by giving you the intuition for how this method is going to work. Imagine we have a simple two-dimensional feature space, features x1 and x2. So here's the actual point, represented here. The idea behind integrated gradients is that we're going to compare that with respect to some baseline. The typical baseline for us will be the all-0s vector. And then to do the comparison, what we'll actually do is interpolate a bunch of points between that baseline and our actual input, take gradients with respect to each one of them, and average all of those gradient results. And that will give us some measure of feature importance. Here's the calculation of the method in full detail. I've taken this presentation from this really excellent tutorial from TensorFlow on integrated gradients. It does all these annotations that I find quite helpful. Here's fundamentally how this works. The core thing is in purple. We're going to interpolate a bunch of different inputs between that baseline of all 0s and our actual input. That's what's happening here. And we'll take the gradients with respect to each one of those, with respect to each one of the features. And we're going to sum those up and average them.
And that gives us the core calculation here. And then in 5, we just kind of scale that resulting average with respect to the original input to put it back on the same scale. And as I showed here, integrated gradients obeys the sensitivity axiom. Let's go back to that original example of that simple ReLU-based model presented here. I showed you that the input by gradients method failed sensitivity for this model. Integrated gradients of course is sensitive in the relevant sense. And you can kind of see why that's happening, because our core calculation now is not with respect to a single input, in the case of the input 2, but rather with respect to all of those interpolated feature representations. Although some of those interpolated feature representations give a gradient of 0 not all of them do. And the result in effect is that you'll get a feature attribution of approximately 1 for this case of an input 2. The desired result showing sensitivity, because of course the input of 0 in this case would give an attribution of 0. Now let me walk you through a few examples that show you how you can use Captum to get hands on with the integrated gradients method. And I'm going to do that for two classes of model. The first one is just a simple feed-forward network. And what I'm doing is reconnecting with the Stanford Sentiment Treebank that we used during our sentiment unit. So on this slide, I've just set up an SST experiment using sst.experiment from that SST module. My feature representations are going to be essentially a bag of words. And I've filtered off stop words to make this a little more interpretable. And our classifier is a TorchShallowNeuralClassifier. I run the experiment and a lot of information about that experiment, you'll recall, is stored in this variable, experiment. Here I extract the model from that experiment report. And here, we get a bunch of other metadata that we're going to use to run the IntegratedGradients method. The feature representations of our test examples, the actual labels, and the predictive labels, along with the feature names. And the one thing to note here is that for the sake of Captum, we need to turn the string class names into their corresponding indices. And that's what's happening in cell 9 here. Then we set up the integrated gradients using the forward method for our model. And we set up the baseline, which is that all 0s vector. And then finally use the attribute method. And here I'm taking attributions with respect to the predictions of the model. I think this can be a powerful device for doing some simple error analysis. And that's what I've set up on this slide here. I've offered two functions, error analysis and create attribution lookup that will help you understand how features in this model are relating to its output predictions. You can see in cell 14 here, I'm looking for cases where the actual label is neutral and the model predicted positive. We can find those attributions. And this is actually an informative picture here. It looks like the model has overfit to features like the period and the comma. This ought to be indicative of the neutral category. But here it's using them in ways that lead to a positive prediction. So that's something that we might want to address. And we can go one level further if we choose and look at individual examples. So here I have pulled out an individual example. "No one goes unindicted here, which is probably for the best." This is a case where the correct label is neutral. 
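Here is a minimal sketch of that calculation for a PyTorch classifier, following the interpolate / average-gradients / rescale recipe just described. The toy model, the all-zeros baseline, and the 50-step approximation of the integral are illustrative assumptions.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate integrated gradients for a single 1-D input x."""
    if baseline is None:
        baseline = torch.zeros_like(x)              # the all-0s baseline
    # Interpolation coefficients alpha in (0, 1].
    alphas = torch.linspace(1.0 / steps, 1.0, steps).unsqueeze(1)
    # Points on the straight line from the baseline to the actual input.
    interpolated = baseline + alphas * (x - baseline)
    interpolated.requires_grad_(True)
    logits = model(interpolated)
    # Gradient of the target logit at every interpolated point.
    logits[:, target].sum().backward()
    avg_grads = interpolated.grad.mean(dim=0)
    # Scale back with respect to the original input, as in step 5 on the slide.
    return (x - baseline) * avg_grads

x = torch.randn(4)
print(integrated_gradients(model, x, target=2))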
And our model predicted positive. And I think the attributions again help us understand why because by far the feature with the highest attribution is this "best" one. And this is revealing that the model just does not understand the context in which the word "best" is used in this example. That might point to a fundamental weakness of the bag of words approach. For my second example let's connect with transformer models, since I assume that a lot of you will be working with these models. And these present exciting new opportunities for feature attribution. Because in these models, we have so many representations that we could think about doing attributions for. Here's a kind of general picture of a BERT-like model, where I have the outputs up here. You have many layers of transformer block outputs. Those are given in purple and probably an embedding layer in green. And that embedding layer might be itself composed of like a word-embedding layer and a positional embedding layer, and maybe others. All of these layers are potential targets for integrated gradients. And Captum again makes that relatively easy. So to start this off, I just downloaded from HuggingFace a RoBERTa-based Twitter sentiment model that seemed really interesting. And I wrote a predict_one_proba method that will help us with the error analysis that we want to do. This next step here does the encodings of both the actual example, using the model's tokenizer as well as the baseline of all 0s that we'll use for comparisons. In cell 7, I've just designed a small custom forward method to help Captum out, because this model has slightly different output structure than is expected. Here in cell 8, we set up the layer that we want to target. And as you can see I'm targeting the embedding layer. But many other layers could be targeted. Captum makes that easy. For our example, we use, "This is illuminating!" which I'll take to have true class positive. We do our encodings in cell 11 of both the actual example and the baseline. And then that's the basis for our attribution of this single example. Now for BERT, because we have high dimensional representations for each one of the tokens that we're looking at, we need to perform another layer of compression that we didn't have to for the feed-forward example. As you can see here, the attributions have for one example dimensionality 6 by 768. This is one vector per word token. To summarize those at the level of individual word tokens, we'll just sum them up and then z-score normalize them to kind of put them on a consistent scale. So that will reduce the attributions down to one per sub-word token. And that feeds into our final kind of cumulative analysis. So we'll do the probabilistic predictions, look at the actual class, convert the input to something that Captum can digest, and then use this visualization data recorder method to bring this all together into a nice tabular visualization. And that's what's happening here. You can see, for example, we have the True Label, the Predicted Label with the associated probability. And then the really interesting part, per word token, we have a summary of its attributions. And you can see that green is associated with positive, white with neutral, and red with negative. And this is giving us a reassuring picture about the systematicity of these predictions. It's a positive prediction. And most of that is the result of the word "illuminating" and the exclamation mark. 
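Here is a hedged sketch of that layer-attribution workflow with Captum's LayerIntegratedGradients. The checkpoint name and the positive-class index are assumptions, and the embeddings attribute path varies by architecture; the all-zeros baseline and the sum-then-normalize token summary follow the walkthrough above.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

MODEL_NAME = "cardiffnlp/twitter-roberta-base-sentiment"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def forward_func(input_ids):
    # Captum wants a plain tensor of scores, not the HuggingFace output object.
    return model(input_ids).logits

# Target the embedding layer; many other layers could be targeted instead.
lig = LayerIntegratedGradients(forward_func, model.roberta.embeddings)

enc = tokenizer("This is illuminating!", return_tensors="pt")
input_ids = enc["input_ids"]
baseline_ids = torch.zeros_like(input_ids)  # simple all-zeros baseline
target = 2                                  # assumed index of the positive class

attrs = lig.attribute(input_ids, baselines=baseline_ids, target=target)

# One hidden-size vector per token; sum and normalize to get one score per token.
token_scores = attrs.sum(dim=-1).squeeze(0)
token_scores = token_scores / torch.norm(token_scores)
for tok, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0]), token_scores):
    print(f"{tok:>15} {score.item():+.3f}")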
And that kind of feeds into a nice kind of error analysis/challenge analysis that you can do with models like this using Captum. For this slide here, I've posed a little challenge or adversarial test to see how deeply my model understands sentences like, "They said it would be great and they were right." You can see it makes the correct prediction in that case. And when I change it to "They said it would be great and they were wrong," it predicts negative. That's reassuring, and so are the feature attributions. It seems to be keying into exactly the pieces of information that I would hope and even doing it in a context-sensitive way. For the next two examples, I just change up the syntax to see whether it's kind of overfit to the position of these words in the string. And it again looks robust. "They were right to say that it would be great." Prediction of positive. "They were wrong to say that it would be great." Prediction of negative. Very reassuring. As is the second-to-last example, "They said it would be stellar and they were correct." The only disappointing thing in this challenge problem is that for this final example it predicts neutral for, "They said it would be stellar and they were incorrect." And the attributions are also a little bit worrisome about the extent to which the model has truly understood this example. Maybe we can think about how to address that problem. But the fundamental takeaway for now is simply that you can see how you can use feature attribution together with challenge examples to kind of home in on exactly how systematic a model's predictions are for an interesting class of cases.
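A small follow-on sketch of that kind of challenge set, reusing the tokenizer and model loaded in the previous block: run each minimal-pair sentence through the classifier and print the prediction. The label ordering is again an assumption about the checkpoint.

import torch

challenge = [
    "They said it would be great and they were right.",
    "They said it would be great and they were wrong.",
    "They were right to say that it would be great.",
    "They were wrong to say that it would be great.",
    "They said it would be stellar and they were correct.",
    "They said it would be stellar and they were incorrect.",
]
label_names = ["negative", "neutral", "positive"]  # assumed label order

for text in challenge:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**enc).logits, dim=-1).squeeze(0)
    pred = label_names[int(probs.argmax())]
    print(f"{pred:>8} ({probs.max().item():.2f})  {text}")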
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Neural_IR_part_2_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
OMAR KHATTAB: Hello, everyone. Welcome to part 4 of our series on NLU and IR. This screencast will be the second among three of our videos on neural information retrieval. Just to recap, this is the functional view of neural IR that we left in the previous screencast. Our model will take a query and a document and will then output a score that estimates the relevance of this document to the query. We will sort the documents by decreasing score to get the top K results. Let's begin with a very effective paradigm for building neural IR models, namely query-document interaction. So given a query and a document we'll tokenize them. Then we'll embed the tokens of each into a static vector representation. So these could be GloVe vectors, for example, or the initial representations of BERT. We'll then build what is called a query-document interaction matrix. This is typically nothing but a matrix of cosine similarities between each pair of words across the query and the document. Now that we have this matrix, we just need to reduce it to a single score that estimates the relevance of our document to this query. To do this, we'll just learn a bunch of neural layers like convolutional or linear layers with pooling until we end up with a single score for this query-document pair. Many IR models out there fall in this category, especially ones that were introduced between 2016 and 2018 or 2019. With enough training data, query-document interaction models can achieve considerably better quality than Bag-of-Words models like BM25. And they can actually do that at a reasonable, moderate increase in computational cost. So as discussed in the previous screencasts, these models are typically used as the last stage of a ranking pipeline. And in particular, in this figure here, they're used to re-rank the top 1,000 passages retrieved by BM25. And this is done to make sure that the latency is acceptable while still improving the MRR and the quality over BM25 retrieval. More recently, in 2019, the IR community discovered the power of BERT for ranking. Functionally, this is very similar to the paradigm that we just saw with query-document interactions. So here, we're going to feed BERT the query and the document as one sequence with two segments. One segment for the query and one segment for the document as shown. We'll run this through all the layers of BERT. And we'll finally extract the class token embedding from BERT and reduce it to a single score through a final linear head on top of BERT. As you can probably tell, this is nothing but a standard BERT classifier, where we're going to take the scores or the confidence that's the output of the classifier and use it for ranking our passages. And like any other task with BERT, we should first fine-tune this BERT model with appropriate training data before we use it for our task. We've discussed how to train our models in the previous screencasts, so refer to those if you'd like. So this really simple model on top of BERT was the foundation for tremendous progress in search over the past two years. And in particular, it's worth mentioning the first public instance of this, which was in January of 2019 on the MS MARCO Passage Ranking task. Here, Nogueira and Cho made a simple BERT-based submission to the leaderboard of MS MARCO that demonstrated dramatic gains over the previous state of the art submitted just a few days prior.
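Here is a minimal sketch of the query-document interaction idea described earlier: a matrix of cosine similarities between query-token and document-token embeddings, reduced to a single relevance score. The random embedding table stands in for GloVe vectors, and the tiny pooling-plus-linear scorer stands in for the learned convolutional or linear layers; both would be trained in practice.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 1000, 50
embedding = nn.Embedding(vocab_size, embed_dim)  # stand-in for static GloVe vectors

def interaction_matrix(query_ids, doc_ids):
    q = F.normalize(embedding(query_ids), dim=-1)  # (query_len, dim)
    d = F.normalize(embedding(doc_ids), dim=-1)    # (doc_len, dim)
    return q @ d.T                                 # (query_len, doc_len) cosine matrix

class TinyScorer(nn.Module):
    """Reduce the interaction matrix to one score via pooling and a linear layer."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 1)

    def forward(self, sim):
        # Max-pool and mean-pool each query term's similarities over the document.
        pooled = torch.stack([sim.max(dim=1).values, sim.mean(dim=1)], dim=-1)
        return self.linear(pooled).sum()  # single relevance score

query_ids = torch.randint(0, vocab_size, (4,))
doc_ids = torch.randint(0, vocab_size, (60,))
print(TinyScorer()(interaction_matrix(query_ids, doc_ids)).item())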
By October of 2019, almost exactly one year after BERT originally came out, Google had publicly discussed the use of BERT in search. And Bing followed soon after in November of the same year. But the story is actually a bit more complicated. These very large gains in quality came at a drastic increase in computational cost, which dictates latency, and which is very important for user experience in search tasks as we have discussed before. So over simple query-document interaction models like Duet or ConvKNRM, Nogueira and Cho's BERT models increase MRR by over 8 points but also increase latency to multiple seconds per query. And so here it is natural for us to ask ourselves whether we could achieve high MRR and low latency at once. And it turns out that the answer is yes, but it will take a lot of progress to get there. And we'll try to cover that in the rest of this screencast and in the next one. So let's get started with that. So to seek better trade-offs between quality and latency, which is our goal, let's think about why BERT rankers are so slow. Our first observation here will be that BERT rankers are quite redundant in their computations. If you think about what BERT rankers do, they need to compute a contextualized representation of the query for each document that we rank. So that's 1,000 times for 1,000 documents. And they also must encode each document for every single query that comes along that needs a score for that document. Of course, we have the documents in our collections in advance, and we can do as much preprocessing as we want on them offline before we get any queries. So the question becomes, can we somehow precompute some form of document representations in advance once and for all using these powerful models that we have like BERT, and store these representations or cache them somewhere so we can just use them quickly every time we have a query to answer? This will be our guiding question for the remainder of this and the next screencasts. Of course, it is not actually obvious yet, at least, whether we can pre-compute such representations in advance without much loss in quality. For all we know so far, there might be a lot of empirical value in jointly representing queries and documents at once. But we'll put this hypothesis to the test. The first approach to tame the computational latency of BERT for IR is learning term weights. The key observation here is that Bag-of-Words models like BM25 decompose the score of every document into a summation of term-document weights, and maybe we can do the same. So can we learn these term weights with BERT in particular? A simple way to do this would be to tokenize the query and the document, feed BERT only the document, and use a linear layer to project each token in the document into a single numeric score. The idea here is that we can save these document term weights to the inverted index just like we did with BM25 in classical IR and quickly look up these term weights when answering a query. This makes sure we do not need to use BERT at all when answering a query as we just shifted all of our BERT work offline to the indexing stage. So this can be really great. We now get to use BERT to learn much stronger term weights than BM25. And DeepCT and doc2query are two major models under this efficient paradigm. As the figure shows at the bottom, they indeed greatly outperform BM25 in MRR, but they actually have comparable latency because we're still using an inverted index to do that retrieval.
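Here is a hedged sketch of that term-weighting idea: run BERT over the document only, project each token representation to a single scalar, and store those weights offline in the inverted index. The bert-base-uncased checkpoint and the untrained linear head are assumptions; a model like DeepCT trains that head on supervision derived from relevance data.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
term_weight_head = nn.Linear(encoder.config.hidden_size, 1)  # would be trained

doc = "The Manhattan Project produced the first nuclear weapons during World War II."
enc = tokenizer(doc, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**enc).last_hidden_state       # (1, doc_len, hidden_size)
    weights = term_weight_head(hidden).squeeze(-1)  # (1, doc_len): one weight per token

# At indexing time these per-token weights would be written to the inverted index,
# so no BERT computation is needed at query time.
for tok, w in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), weights[0]):
    print(f"{tok:>12} {w.item():+.3f}")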
However, the downside is that our query is back to being a Bag-of-Words, and we lose any deeper understanding of our queries beyond that. So our central question remains whether we can jointly achieve high MRR and low computational cost. And as we said before, the answer is yes. And to do this, we'll discuss in the next screencast two very exciting paradigms of neural IR models that get us close to this goal.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
RoBERTa_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part four in our series on Contextual Word Representations. We are going to be talking about a robustly optimized BERT approach a.k.a. RoBERTa. So recall that I finished the BERT screencast by listing out some known limitations of the BERT model, most of which were identified by the original BERT authors themselves. And top of the list was simply that, although the original BERT paper does a good job of exploring ablations of their system and different optimization choices. There's a very large landscape of ideas here, and most of it was left unexplored in the original paper. Essentially, what the RoBERTa team did is explore more widely in this space. That is the robustly optimized part of RoBERTa. So what I've done for this slide here is list out what I take to be the central differences between BERT and RoBERTa. And I'll follow this up with some evidence from the RoBERTa paper in a second. But first let's go through the central differences beginning with this question of static versus dynamic masking. So for the original BERT paper what they did is create four copies of their dataset, each with different masking. And then those four copies were used repeatedly through epochs of training. The RoBERTa team had the intuition that it would be useful to inject some diversity into this training process. So they went to the other extreme, dynamic masking. Every single example when it's presented to the model is masked a potentially different way, via some random function. There are also differences in how examples are presented to the models. So BERT presented two concatenated document segments. This was crucial to its next sentence prediction task. Whereas for RoBERTa we're just going to have sentence sequences, that is pairs, that may even span document boundaries. Relatedly whereas BERT had, as one of its central pieces, this next sentence prediction task. RoBERTa simply drops that as part of the objective here. That simplifies the presentation of examples and also simplifies the modeling objective. Now, RoBERTa is simply using a masked language modeling objective. There are also changes to the size of the training batches. So for BERT that batch size was 256 examples. RoBERTa cranked out all the way up to 2000. There are differences when it comes to tokenization. So as we've seen BERT used this very interesting word piece tokenization approach which mixes some sub-word pieces with some whole words. RoBERTa simplified that down to just character-level byte-pair encoding which I think leads to many more word pieces intuitively. There are also differences in how the model was trained. So BERT trained on a substantial corpus, the BooksCorpus plus English Wikipedia, is a lot of data indeed. RoBERTa again cranked that up even further, they trained on the BooksCorpus, the CC-News corpus, the OpenWebText corpus and the Stories corpus, a substantial increase in the amount of training data. There are also differences in the number of training steps, and there's a subtlety here. So for the BERT model, it was originally trained on 1 million steps. The RoBERTa model was trained on 500,000 steps. Which sounds like fewer steps, but overall this is substantially more training, in virtue of the fact that the training batch sizes are so much larger for RoBERTa than they are for BERT. And finally, the original BERT authors had an intuition that would be useful in getting the optimization process going, to train just on short sequences first. 
The RoBERTa team dropped that idea, and they train on full-length sequences throughout the life cycle of optimization. There are some additional differences related to the optimizer and the data presentation. I'm going to set those aside; if you want the details, I refer you to section 3.1 of the RoBERTa paper. So let's look at a little bit of evidence for these various choices, starting with that question of dynamic versus static masking. So this is the primary evidence; they're using three benchmarks, SQuAD, MNLI, and SST-2, and you can see that more or less across the board dynamic masking is better. Not by a lot, but dynamic masking also has going for it this intuition that BERT is kind of data-inefficient, since we can only mask out a small number of tokens. And it seems like it ought to be useful to inject a lot of diversity into that, so that a lot of different tokens get masked as we go through the training process. But the choice is, of course, supported numerically here, I think pretty substantially. This table here summarizes the choice about how to present the examples to the model. And this is also a little bit subtle, so numerically the DOC-SENTENCES approach was best. And this was an approach where they just took contiguous sentences from within documents and treated a document boundary as a kind of hard boundary. That's numerically better, according to the benchmark results. But they actually decided to go with the FULL-SENTENCES approach. And the reason for that is that, in not respecting document boundaries, it is easier to create lots of batches of exactly the same size, which leads to all sorts of gains when you think about optimizing a large model like this. So basically, they decided that those gains offset the slightly lower performance of FULL-SENTENCES as compared to DOC-SENTENCES, and that's why this became their central approach. Here's the summary of evidence for choosing 2K as the batch size. You can see that they compared 256, which was the BERT original, with 2K and 8K, and 2K looks like the sweet spot according to MNLI, SST-2, and this kind of pseudo-perplexity value that you get out of bidirectional models like BERT and RoBERTa. So that's a clear argument. And then finally, when we come to just the amount of training that we do, the lesson here apparently is that more is better. On the top of this table here we have some comparisons within the RoBERTa model, pointing to 500K as the best. And I would just remind you that that is overall substantially more training than was done in 1 million steps with BERT, in virtue of the fact that our batch sizes for RoBERTa are so much larger. In closing, I just want to say that RoBERTa too only explored a small part of the potential design choices that we could make in this large landscape. If you would like to hear even more about what we know and what we think we know about models like BERT and RoBERTa, I highly recommend this paper called A Primer in BERTology, which has lots of additional wisdom, insights, and ideas about these models.
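As a small illustration of the static-versus-dynamic masking contrast discussed above, here is a sketch that assumes HuggingFace's DataCollatorForLanguageModeling: with mlm=True, the masking is re-sampled every time a batch is built, which is the RoBERTa-style dynamic scheme, whereas BERT's original recipe fixed the masks ahead of time over copies of the data.

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

enc = tokenizer(["RoBERTa re-masks every example on every pass through the data."],
                return_tensors="pt")
features = [{"input_ids": enc["input_ids"][0]}]

# Calling the collator twice on the same example typically yields different masks.
print(collator(features)["input_ids"])
print(collator(features)["input_ids"])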
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Grounded_Language_Understanding_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 1 in our series on grounded language understanding. I'm just going to give an overview. With grounding, I feel like we're really getting at the heart of what makes NLU so special for NLP and also for artificial intelligence more broadly. So this is exciting. Let's dive in. Now, grounding is a very large topic. And so to ground it so to speak we're going to be focused on a particular task, which is color reference in context. I'll be saying much more about that later on. This notebook, Colors Overview, provides an overview of the data set, and that data set is the centerpiece for the homework and associated bake-off. The Core Reading is the paper that introduced that dataset, Monroe et al., 2017. And I think that paper is noteworthy also for introducing some interesting modeling ideas that are worthy of further exploration, possibly in final projects. And then I also just want to recommend a whole bunch of auxiliary readings. Not required but exciting extensions that you might make. I think grounding is a wonderful chance to do interdisciplinary work. You can connect NLP with robotics, and computer vision, and human language acquisition, and probably lots of other topics. So I'm going to be pushing papers, and dataset throughout this series of screencasts in the hopes that you can pick up those ideas, and run with them for your own projects. Now to start, I thought we could just reflect a little bit on the heart of this, which is why grounding is so important and why natural language understanding is so hard. And sort of to kick that off, I've taken a slide idea from Andrew McCallum. Andrew just asks us to reflect a little bit on the 1967 Stanley Kubrick movie, 2001, A Space Odyssey. In that movie, the spaceship's computer, which is called HAL, can do three things that are noteworthy. It can display computer graphics. It can play chess, and it can conduct natural open-domain conversations with humans. So this is a chance to ask, how well did the filmmakers do at predicting what computers would be capable of in the actual year 2001? Which is, of course, ancient history for us at this point. So let's start with the graphics. On the left, you have some of the graphics that HAL able to display in the movie, and you can see that they are extremely primitive. The filmmakers seem to have wildly underestimated just how much progress would happen in computer graphics. By 1993, which is much earlier than 2001, of course, we had the movie Jurassic Park, which had these incredible graphics for lifelike moving dinosaurs. So let's say that this is a kind of failure to imagine a future. For chess, it seems like they've got the prediction just about right. So in the movie, HAL is an excellent chess player. And just a few years before the actual 2001 in 1997, Deep Blue was the first supercomputer to beat world champion chess players. What about dialogue and natural language use? So on the left here, you have a sample dialogue from the movie. Dave Bowman is the human. He says, "Open the pod bay doors, HAL." And HAL replies, "I'm sorry, Dave. I'm afraid I can't do that." "What are you talking about, HAL?" Then HAL replies, "I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen." Very interesting. Not only is it fluent English, of course, but it's also displaying really rich reasoning about plans and goals. And it's fully grounded in what's happening in the ship. Just incredibly realistic. 
To give the filmmakers even a fighting chance here, let's move forward to the year 2014, which is about when Siri hit the market. And we talked about Siri earlier. Here, you can see Siri doing a much more mundane version of what we just saw HAL doing, which is kind of proactively recognizing plans and goals, and helping a human user solve a problem using fluent English. In this case, it's just about where to buy food, but the vision is very similar. What was life actually like in 2014 or for that matter, in the present day? Well, I also showed you this dialogue from Stephen Colbert from his show, where he pretends that he has been playing with this phone all day, and therefore, has failed to produce material for the show. The cameras are on him, and he's desperate. And he asks Siri for help. And you can see here that Siri does not have a deep understanding of what he's trying to achieve. I've bolded God and cameras in Stephen's utterance because you can see Siri just picks up on those as kind of keywords, and says churches, and camera stores. It's not even topically relevant. It's just a complete failure to recognize what he's trying to do. And then later things get even worse, Siri really doesn't understand what Stephen is saying. And so it does that standard escape valve, which is it searches the web for the speech-to-text transcription of the thing that he said in hopes that that will be helpful. A far cry from anything like a helpful, useful, human-like interaction with language. Now, why is this so difficult? I think another angle on that question is usefully brought to the fore with this analogy that Stephen Levinson offers. So he asks us to look at this Rembrandt sketch here, and just reflect on the fact that you can make out people and structures in the background. But really, it's incredible that you can do any of that. So he says, "We interpret this sketch instantly and effortlessly as a gathering of people before a structure, probably a gateway; the people are listening to a single declaiming figure in the center." And then he says, "But all of this is a miracle for there is little detailed information in the lines or shading, such as there is. Every line is a mere suggestion. So here is the miracle: from a merest, sketchiest, squiggle of lines, you and I converge to find adumbration of a coherent scene." That is indeed a visual miracle and a cognitive miracle. And it's also a glimpse into why computer vision is so challenging. To make the connection with language, Levinson continues. "The problem of utterance interpretation is not dissimilar to this visual miracle. An utterance is not as it were a veridical model or snapshot of the scene it describes. Rather an utterance is just as sketchy as the Rembrandt drawing." So much of what we communicate as speakers is left implicit. And so much of what listeners are able to extract from our utterances is stuff that they're able to extract only by reasoning in a general way about the context, plans and goals, world knowledge, and so forth. If our utterances actually fully encoded in their semantics everything we intended to communicate, I think we would have talking robots at this point. But the truth is that so much of communication in natural language is left up to the context in a very general sense, and that's exactly what makes this problem so challenging. In a way, though, all of this grounding into the context and all this reasoning, if you build it into your system, can make things easier.
It might make some intractable problems tractable. And one glimpse of that is just this topic of what linguists and philosophers call indexicality. Indexicals are phrases like "I" as in "I am speaking." That obviously makes reference to the speaker, and that reference is going to vary depending on who's speaking. That's a case where you can't possibly understand the statement unless you know something about who's speaking, which is a very simple kind of grounding. "We won" shows this, a similar kind of grounding, but it's more complicated. So now we have this phrase, we, which probably by default is expected to include the speaker. But it kind of needs to include others, and figuring out who else it includes can be difficult. And you also get more challenging uses, where you say things like "we" as in we, the sports team that I follow, or something like that. So we have grounding plus a whole bunch of contextual reasoning in order to figure out what "We won" would mean. "I am here." Of course, "I" for the speaker, that's one kind of grounding, but "here" is an indexical expression referring to a location, and it does that in a very complicated way. When I say I am here, I could be in my office, or Stanford, and I suppose all the way up to planet Earth. Although, that's unlikely because it's not so informative in 2021 to say I'm on planet Earth. "We want to go here" is another use. It has "we" for one kind of grounding. And in this case, "here," if I'm pointing to a map, would be an even more complicated kind of displaced indexical reference, but the map is doing some iconic duty for some actual place in the world that we are aiming to go to. So another kind of complicated reasoning, but again, grounded in something about the utterance context. "We went to a local bar after work." Here, the indexical is the word "local," and it just shows that indexicality can sneak into other parts of speech. "Local" here is going to refer to things that are somehow in the immediate vicinity of the location of the utterance and again, in a very complicated way. And then "three days ago," "tomorrow," and "now" are temporal indexicals. And they just show that the meaning of an utterance can vary depending on when it's spoken. And all of these expressions are kind of anchored to that time of utterance. And there are other kinds of context dependence that really require us to understand utterances in their full grounded context. Let's start with a simple example. "Where are you from?" This can be a vexing question when people ask it because it can often be difficult to know what their true goals and intentions are with the question. They could mean your birthplace. I would say Connecticut. It could mean your nationality. I might say the US. Affiliation. For me, that would be Stanford, and again, maybe one day it will be informative to say planet Earth if there are intergalactic meetings. That one is typically ruled out because it's not so helpful in 2021. But for the rest of them, we kind of have to guess often about what the speaker is asking of us in order to figure out how to answer. Here are some other examples. "I didn't see any." That's one particular sentence. Its meaning is underspecified. In the context of the question, "Are there typos in my slides," "I didn't see any" will take on one sense. In the context, "Are there bookstores downtown," "I didn't see any" will take on a very different sense. With "Are there cookies in the cupboard," "I didn't see any" takes on yet again another kind of sense.
And of course, there is no end to the number of different contexts we can place the sentence in. And each one is likely to modulate the meaning of "I didn't see any" in some complicated and subtle way. We hardly reflect on this, but it's an incredible process. So just to round this out, here's an example. Routine pragmatic enrichment. I've got this simple sentence in the middle here, "Many students met with me yesterday." It's not a very complicated sentence cognitively or linguistically. I think we can easily understand it, but reflect for a second on just how many hooks this utterance has into the context. We need to know what the time of utterance is to understand "yesterday" and in turn, to understand the whole sentence. We need to ask how big the contextually restricted domain of students is here, in order to figure out how many "many" is. Is it false that most students met with me? Did I avoid saying "most" or "all" because that would be false and instead chose a weaker form, "many"? That would be a kind of reasoning that many listeners will undergo. What's the additional contextual restriction to students, just students in our course? Students I advise, students at Stanford, students in the world. Again, the context will tell us. Who is the speaker? Of course, that's a straightforward indexical. And then there are other kinds of inferences that we might make based on the restrictive modifiers that the speaker chose. Again, we don't reflect on it, but all of this stuff is happening kind of effortlessly and automatically. This, in Levinson's terms, is the merest, sketchiest squiggle of what actually gets communicated. And that is what's so hard about so many aspects of NLU. Now, I want to go back into history at least once more, to Terry Winograd's system SHRDLU, because this just shows that at the start of the field of AI and Natural Language Processing, the focus was entirely on these grounded understanding problems. So SHRDLU was a fully grounded system that parsed the user's input, mapped it to a logical form, and interpreted that logical form in a very particular world. And then it would try to take some action and generate responses. It's incredible, and I love this characterization from this YouTube clip. "One project did succeed. Terry Winograd's program SHRDLU could use English intelligently, but there was a catch. The only subject you could discuss was a micro-world of simulated blocks." This is wonderful in the sense that it achieves the goal of grounding, but it was very far from being scalable in any sense that would make it practical. But here's a kind of simple dialogue from SHRDLU. And the thing I just want to point out is that there is so much implicit grounding into the context. "The box" is restricted to the domain and therefore, has reference. Of course, there isn't a unique box in the universe. So "the box" in the general context might be very confusing, but in the blocks world it made sense, and you can see that person leveraging that. And the computer can understand it because it too is grounded in this particular context, and therefore, can make use of all of that implicit information in forming its utterances, and interpreting the human's utterances. And you see that pervasively throughout the sample dialogues with SHRDLU. It's a compelling vision about the kinds of things that we need to have, and all of it turns on this very rich notion of grounding in the blocks world. Finally, another connection I want to make.
Let's just think the very best devices in the universe, as far as we know, for acquiring natural languages are humans. What do humans do? Well, first language acquirers children learn language with incredible speed. That's noteworthy. Just a few years. Despite relatively few inputs-- I mean, they get a lot of language data in the ideal situation, but it's nothing compared to what currently language models get to see. And they use cues from contrast inherent in the forms they hear. That's a distributional idea that we're familiar with but also social cues and assumptions about the speaker's goals. It just feel like the very richness of this picture and its multi-modal aspects are really important guiding clues for us. So what are the consequences of all this for NLU? Well, as I said, since human children are the best agents in the universe at learning language and they depend on grounding, it seems like our systems ought to be grounded as well. Problems that are intractable without grounding are solvable with the right kinds of grounding. That's important to keep in mind. Grounded problems can seem hard, but the other aspect of that is that some problems might be completely intractable unless you have some notion of grounding. Indexicals come to mind. Thinking about current modeling. Deep learning is a flexible toolkit for reasoning about different kinds of information in a single model. You can bring in language data, image data, video data, audio data, and so forth. And therefore, it has led to conceptual improvements. The ungrounded language models of today get a lot of publicity, but there are also many exciting systems that are fluently reasoning about images, and video, and language together. And I think that's a really nice step forward into the world of true grounding. So we should seek out and develop dataset that include the right kind of grounding because the central thesis here is that that can lead to progress by leaps and bounds. So again, to round this out, let me encourage you to think about this for final projects. We're going to be working with the Stanford English Colors in Context Corpus. There is also a Chinese version, and we've explored exciting ideas involving monolingual Chinese, and English speakers, as well as bilingual models for this dataset. If you want to do a little bit more in terms of grounding, slightly more complicated context, I would recommend the OneCommon dataset. The Edinburgh Map Corpus is an early task-oriented grounded corpus that could be exciting, especially if you want to do some interesting initial steps involving language and reinforcement learning. The Cards Corpus would be much more ambitious along those same lines. It's a very open-ended, difficult, task-oriented dialogue corpus. Deal or No Deal is a forward thinking negotiation corpus. Negotiation is a very interesting kind of slightly adversarial, social grounding. CraigsListBargain is another dataset that you might use in the context of negotiation agents. And then ALFRED, CrossTalk, and Room-to-Room are all data sets that would allow you to combine grounded language understanding with problems relating to computer vision in various ways. And again, that kind of interdisciplinary connection could be crucial to making progress on truly grounded systems.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Homework_1_Word_Relatedness_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
SPEAKER 1: Hello, everyone. This screencast is going to be a brief playthrough of Homework 1 on word relatedness. I hope to give you a sense for the problem that you're tackling, and also our expectations around the homework questions and the bake-off. So let's dive in here. The overview is just explaining the character of this problem, which is essentially that we're going to give you a development set of word pairs with scores, and the scores reflect relatedness. The scores were produced by humans, and we've just scaled them into 0 to 1, where a larger score means more related in this human sense. And your task is essentially to develop a system that will predict scores that are highly correlated with those human scores according to the Spearman Correlation coefficient, which is the traditional metric in this space. So this is just some setup stuff for the environment. And then we introduce the development set itself, which is a Pandas data frame. It's loaded in from your data folder, and it looks like this. It's got a bunch of word pairs, each with scores. As I said before, these are the human-provided scores. This is a development data set in the sense that you can make whatever use you want of it. You can train systems. You can explore your results and so forth because as you'll see, for the actual bake-off, we have a fresh test set that you'll make predictions on. There are about 5,000 words in this development set. You can train on any subset, and you can expand the data set to other things if you want to include them as well. It's really up to you to decide what you want to do because this is all about making predictions on that brand new test set, as you'll see later. And I will just say that the test set has 1,500 word pairs with scores of the same type. And in terms of the overlap, I will also tell you no word pair in this development set is in the test set. So it's disjoint at the level of these pairs. But some of the individual words are repeated in the test set. So you do have some vocabulary overlap. In this code here, we load the full vocabulary for this thing, which is all the words appearing in all the pairs. The vocabulary for the bake-off test is different. It's partially overlapping with the above, as I said. Now if you wanted to make sure ahead of time that your system has a representation for every word in both the dev and the test sets, then you can check against the vocabularies in any of the vector space models that we've distributed with this unit. So for example, if you ran this code, you get the full test vocabulary. And if you have a representation for every word in there, then you're in good shape when it comes to the test set. It's also useful to look at the score distribution. This will give you a sense for what kind of space you're making predictions into. And I'll give you the hint that the test distribution looks an awful lot like this dev set distribution. It's also worth being aware that there are some repeated pairs in the training set, some words that have different scores associated with them. And they're repeated. Therefore, what I've done here is just provide you with some code that will allow you to rank pairs of words by the variance in their scores. And so you could decide for yourself what you want to do about these minor inconsistencies. You could filter the data set or keep all of these examples in. It's entirely up to you. I will just say that the test set does not force you to confront this issue. It has no repeated pairs in it. All right. 
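To make the evaluation target concrete, here is a minimal sketch of the Spearman rank correlation computation described above. The tiny DataFrame is made up for illustration; the homework's real dev set has the same basic shape.

import pandas as pd
from scipy import stats

dev_df = pd.DataFrame({
    "word1": ["cup", "car", "king", "cat"],
    "word2": ["mug", "automobile", "queen", "galaxy"],
    "score": [0.91, 0.88, 0.86, 0.05],   # human relatedness, scaled to [0, 1]
})

predictions = [0.80, 0.95, 0.70, 0.10]   # whatever your system outputs
rho, _ = stats.spearmanr(dev_df["score"], predictions)
print(f"Spearman rho: {rho:.3f}")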
And then we come to the evaluation topic here. So there's a central function you'll be using a lot in the homework and the bake-off, word relatedness evaluation. And so there are some instructions about how the interface works. Let me just give a brief illustration. In this cell, I'm loading in one of our count matrices. It's the giga5 matrix. And I'm going to evaluate that directly. So you can see in the next cell that word relatedness evaluation takes in our development data (or our test data) and whatever vector space model you've developed as its two arguments. And it returns a new version of this input here with a column for the predictions you make, as well as this value here, which is the Spearman Rank Correlation Coefficient. That's our primary metric for this unit. Right here is the score that I achieved. Not so good. I'm sure you'll be able to do better. And here's a look at the new count data frame with that new column of predictions inserted into it. And this is just another baseline here, a truly random system that just predicts a random score. And that's even worse than the simple count baseline. Again, you'll be able to do much better without much effort. Error analysis. I've provided you with some functions that will allow you to look at what your system is doing in terms of the best predictions, comparing against the human scores, and the worst predictions. And I am imagining that this might help you figure out where you're doing well and where you're doing poorly. And then you can iterate on that basis. And that brings us to the homework questions. So what we're trying to do here is help you establish some baseline systems, get used to the code, and also think in new and creative ways about the underlying problem. Our first one is positive pointwise mutual information as a baseline. As you've seen in the materials for this unit, pointwise mutual information is a very strong baseline for lots of different applications. And it also embodies a kind of core insight that we see running through a lot of the methods that we've covered. So it's a natural and pretty strong baseline. And what we're asking you to do here is simply establish that baseline. Here, and throughout all of the work for this course, we're going to ask you to implement things. And in general, we will provide you with test functions that will help you make sure you have iterated toward the solution that we're looking for. So you can rest assured that if you have meaningfully passed this test, then you'll do well in terms of the overall evaluation and your code is functioning as expected. The next question is similar. So again now, we're exploring latent semantic analysis. And in particular, we're asking you to build up some code that will allow you to test different dimensionalities for a given vector space input and try to get a feel for which one is best. So again, you have to implement a function. And then there's a test that will help you make sure that you've implemented the correct function in case there's any uncertainty in the instructions here. Next question. As I mentioned in the lectures, t-test reweighting is a very powerful reweighting scheme. It has some affinities with pointwise mutual information, but it is different. And this question is just asking you to implement that reweighting function. We've given the instructions here.
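Here is a hedged sketch of positive PMI reweighting for a word-by-context count DataFrame. The course's vsm module provides its own implementation; this is just the textbook formula, written out so you can see the shape of the computation.

import numpy as np
import pandas as pd

def ppmi(count_df):
    X = count_df.to_numpy(dtype=float)
    p_xy = X / X.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # row marginals
    p_y = p_xy.sum(axis=0, keepdims=True)   # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_xy / (p_x * p_y))
    pmi[~np.isfinite(pmi)] = 0.0             # zero out log(0) and 0/0 cells
    return pd.DataFrame(np.maximum(pmi, 0.0),
                        index=count_df.index, columns=count_df.columns)

counts = pd.DataFrame([[10, 0, 3], [2, 8, 1], [0, 1, 12]],
                      index=["cat", "dog", "car"],
                      columns=["pet", "bark", "road"])
print(ppmi(counts))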
You might also look in vsm.py, the module, at the implementation of pointwise mutual information, because you could adopt some of the same techniques. I want to emphasize that you don't need the fastest possible implementation. Any working implementation will get full credit. But the code in vsm.py is really nicely optimized in terms of its implementation. So you might want to push yourself to do something similarly efficient. But again, as long as your t-test function here passes this test, you're in good shape. And you don't need to evaluate this function. We're just asking you to implement it. But we're assuming, since I've said that this is a good reweighting scheme, that you'll be curious about how it performs in the context of the system you're developing. All right. For the final two questions, we're asking you to think further afield. Pooled BERT representations is drawing on the material in this notebook here, which is just an exploration of the ideas from Bommasani et al., 2020, on how to derive static representations from models like BERT. And so what we've got here is some starter code for you and a kind of skeleton for implementing your own version of that solution. Again, we're hoping that this is a foundation for further exploration for you. We've got the implementation to do here, and then a test that you can pass to make sure that you've implemented things according to the design specification. The final question is also really exploratory. It's called learned distance functions. The idea here is that much of the code in this notebook pushes you to think about distance in terms of things like cosine distance or Euclidean distance. But we should have in mind that the only formal requirement is that you have some function that will map a pair of vectors into a real-valued score. As soon as you see things from that perspective, you realize that a whole world of options opens up to you. And what this question is asking you to do is train a k-nearest neighbors model on the development data that will learn to predict scores. And then you can use that in place of cosine or Euclidean. We've walked you through how to implement that. There's a bunch of guidance here and a few tests for the sub-components if you follow our design. Again, if the tests pass, you should be in good shape. We're not asking you to evaluate this directly. But we're hoping that this is a foundation for exploring what could be a quite productive avenue of solutions in this space. And then finally, the original system. This is worth three points. This is a big deal here. You can piece together any part of what you've done previously. All that stuff is fair game. You can think in entirely original and new ways. You can do something simple. You can do something complex. What we'd like you to do here is not only provide the implementation in the scope of this conditional so that it doesn't cause the autograder to fail if you have special requirements and so forth. But we're also looking for a textual description and a report on what your highest development set score was. The idea here is that at the end of the bake-off, the teaching team will create a report that kind of analyzes across all the different submissions and reflects back to you all what worked and what didn't. And as part of that effort, these system descriptions and development scores can really help us understand how things played out. And that brings us to the bake-off.
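For orientation on the t-test question above, here is a sketch of t-test reweighting in its usual formulation, where each cell gets (P(x, y) - P(x)P(y)) / sqrt(P(x)P(y)). Treat it as an illustration rather than the assignment's reference solution, which may differ in details.

import numpy as np
import pandas as pd

def ttest_reweight(count_df):
    X = count_df.to_numpy(dtype=float)
    p_xy = X / X.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # row marginals
    p_y = p_xy.sum(axis=0, keepdims=True)   # column marginals
    expected = p_x * p_y
    return pd.DataFrame((p_xy - expected) / np.sqrt(expected),
                        index=count_df.index, columns=count_df.columns)

counts = pd.DataFrame([[10, 0, 3], [2, 8, 1], [0, 1, 12]],
                      index=["cat", "dog", "car"],
                      columns=["pet", "bark", "road"])
print(ttest_reweight(counts))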
So for the bake-off, what you really need to do is just run this function, Create Bake-off Submission, on your vector space model. Here it's my simple one, count_df that I loaded before. And as a reminder that this is an important piece, you also need to specify some distance function. So the idea is that here, my bake-off would be the simple submission, where I'm just doing a count data frame and Euclidean as my distance. And when I run this function, it creates a file, CS224U Word Relatedness Bake-off Entry. And you'll just upload that to Gradescope. We'll give some instructions about that later on. And that will be evaluated by an automatic system. And that's it.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Adversarial_Testing_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part two in our series on analysis methods in NLP. We're going to be talking about adversarial testing. This is an exciting mode because as you'll see with a few dozen carefully created examples, you can learn something really interesting about the systems that you're developing. To start, let's remind ourselves of how evaluation standardly work in our field. At step one you create a dataset from some single homogeneous process. It could be that you've scraped data from the web or crowd-sourced a dataset, or labelled examples yourself, but the important thing is that we, typically, do this as one single process. And then in step two we divide that dataset into disjoint train and test sets, and we set the test set aside. And you do all your development on the train set and, only after all development is complete, you do an evaluation of your system usually based on accuracy or some similar metric on that held out test set. And then finally, and this is the important part, you report the results of that test set evaluation as providing an estimate of the system's capacity to generalize to new experiences. I hope you can hear in that that we're being awfully generous to our systems. At step one we create a single data set from a single process. We hold out the test set, and we use that test set as our device for measuring how well the system is going to do if deployed out in the real world. But, of course, we know that the real world is not created from some single homogeneous process. We know in our heart of hearts that if we deploy the system, it will encounter examples that are entirely unlike those that it saw in training and assessment in the standard mode, and that might worry us, that we're overstating the capacity of our systems to actually deal with the complexity of the world. And adversarial evaluations are one way that we can begin to close this gap here. So in adversarial evaluations we create a dataset by whatever means you like, at step one, and you develop and assess the system using that dataset according to whatever protocols you choose. It could be the standard evaluation mode, if you like. Here's the new part. At step three you develop a new test set of examples that you suspect, or know, will be challenging given your system and the original data set that it was trained on. And then, as usual, after all your system development is complete you evaluate the system, usually based on accuracy, again, on that new test dataset. And that's the result that you report as your system's capacity to generalize to new experiences, at least of the sort that you carved out in your adversarial test set. And the idea here is that in having a distinction between the data that we develop our system on and the challenge problems that we pose, we'll get a better estimate of how our systems perform on examples that are presumably important examples that it's likely to encounter when it's deployed in the real world. A brief historical note, I think it's fair to say that the idea of adversarial testing traces all the way back to the Turing test, which is introduced by Turing in this classic paper, Computing Machinery and intelligence. The Turing test has an inherently adversarial flavor to it because, of course, a computer is trying to trick a human into thinking that the computer is human. We also hear echoes of adversarial testing in this classic article slash book from Terry Winograd called Understanding Natural Language. 
There are discussions in that book of the idea of constructing examples that we know will stress test our systems by probing to see whether they have knowledge of what the world is like and the true complexity of language. And then I think Hector Levesque really elevated those Winograd ideas into a full-fledged testing mode in his paper, On Our Best Behavior. Let's look briefly at those Winograd sentences, or Winograd schema, because they're kind of interesting. The idea is that they will key into whether a system has deep knowledge of the world and of language. So we start with an example like, the trophy doesn't fit into the brown suitcase because it's too small. What's too small? And the human answer is, the suitcase. There's a minimally contrasting example, the trophy doesn't fit into the brown suitcase because it's too large. What's too large? Here we answer the trophy. And presumably, we do this because we can do a kind of mental simulation involving suitcases and trophies and figure out how to answer these questions on that basis. This next pair of examples is similar, but it keys more into normative social roles. The council refused the demonstrators a permit because they feared violence. Who feared violence? And our standard answer, drawing on standard social roles, is the council. Again, we have a minimally contrasting example. The council refused the demonstrators a permit because they advocated violence. Who advocated violence? And here, again drawing on default social roles, we're inclined to answer, the demonstrators. The intuition, as I said, is that to resolve these questions, given how minimally different these examples are, you need to have a deep understanding of the questions and the context and also a deep understanding of what our world is like. And then, as I said, Levesque kind of continues this and begins to systematize it. So he says, we should pose questions to our systems like, could a crocodile run a steeplechase? And the intuition is clear. The question can be answered by thinking it through. A crocodile has short legs. The hedges in a steeplechase would be too tall for the crocodile to jump over. So no. A crocodile cannot run a steeplechase. Again, a mental simulation that leads us to an answer to this surprising question. And the idea is that questions like this will not be susceptible to cheap tricks. As Levesque says, can we find questions where cheap tricks like this will not be sufficient to produce the desired behavior? This, unfortunately, has no easy answer. The best we can do, perhaps, is to come up with a suite of multiple choice questions carefully, and then study the sorts of computer programs that might be able to answer them. I think you can hear in that what we now call the adversarial testing mode. Now, I'm encouraging you to pose adversarial tests to your systems. And as I said, you can do that by constructing just a few novel examples, but we should be aware of what we're doing. Primary question, what can adversarial testing tell us, and what can't it tell us about the systems that we're developing? Here are just a few considerations that should guide your work in this area. First, you don't need to be too adversarial. It could just be that you're posing a challenge problem to assess whether your system has an understanding of a particular set of phenomena. Has my system learned anything about numerical terms? Does my system understand how negation works? Does my system work with a new style or genre? 
These are challenge problems that you could pose open-mindedly, and you might find that your system is surprisingly good at them. Second consideration, we should be thoughtful about the metrics we use. As I signaled to you before, the limitations of accuracy-based metrics, like F1 and so forth, are generally left unaddressed by the adversarial paradigm. And that's because we want a minimal contrast with our standard evaluation modes. But I think you can hear in the mission statements, especially from Levesque, that we might at some point want to break free of that very restrictive mode and pose more open-ended complex evaluations for our systems that would involve requiring them to offer their evidence and interact with us to resolve uncertainty about what they're supposed to be doing. All of that is left aside in the standard adversarial mode, and we should be aware that it's a limitation. This next question is really fundamentally important. In adversarial testing, if you see a model failure, is it actually a failure of the model or of the dataset that the model was trained on? Liu, et al. posed this nicely. What should we conclude when a system fails on a challenge dataset? In some cases, the challenge might exploit blind spots in the design of the original dataset. That would be a dataset weakness. In others, the challenge might expose an inherent inability of a particular model family to handle certain natural language phenomenon. That would be a model weakness. Now, these are interestingly different from the point of view of development in our field. We're apt to hope, I think, that we find model weaknesses because those are really fundamental discoveries, but we should be aware that it might be that the system could have done well on our adversarial test if it had just been trained on the right kind of examples. And that would be a dataset weakness. Data set weaknesses are presumably relatively easy for us to address. We can just supplement the training data with examples of the relevant kind, whereas model weaknesses are forcing us to confront something that might be an inherent limitation of the set of approaches that we're taking, a much more fundamental insight. Atticus Geiger, et al., in this paper offers a similar insight in the context of being fair to our models. For any evaluation method, we should ask whether it's fair. Has the model been shown data sufficient to support the kind of generalization we're asking of it? Unless we can say yes with complete certainty, we can't be sure whether a failed evaluation traces to a model limitation or a data limitation that no model could overcome. And I'm emphasizing this because it's surprisingly easy to fall into the trap of thinking you have imposed an unambiguous learning target when, in fact, you have not. Just think about the simple example here. Human to human, suppose I begin the sequence 3, 3, 5, 4, and I say to you, what comes next in the sequence? Now, I might have in mind the number 7 But the evidence that I have offered you wildly underdetermines how to continue the sequence. And so I think it's fair to say that no learning agent, without a lot of ambiguity, could figure out what my intended continuation is. And sometimes our adversarial tests have this quality that the available data and experiences of these systems just don't fully disambiguate what our intended learning targets are. So we should be aware of that. Those are dataset failings, rather than model failings. 
Now, I can offer you a constructive set of techniques to figure out whether you're dealing with a dataset weakness or a model weakness, and that falls under the heading of inoculation by fine tuning from this wonderful paper, Liu, et al., that I quoted from before. So Liu, et al., just to remind us that in the standard challenge evaluation mode we train our system on some original dataset and then we test it on both the original test set and our challenge test set. And our expectation is that we'll see outcomes like this where the system does really well on the original data, the original test set, and really poorly on the challenge test set. But when we see this outcome we should ask, why this is happening? In particular, is it a model weakness or a dataset weakness? And their proposed method, this inoculation method, works as follows. We're going to fine-tune our system on a few of our challenge examples, and then retest on both the original test set and the held out parts of the challenge test set. When we do this, there are three classes of outcome that you might see. The first would point to a dataset weakness. If via this little bit of fine tuning on the challenge dataset we can get good performance on the original and the challenge data set, that shows us that in this original evaluation mode the system just didn't see enough of the relevant kinds of examples from your adversarial tests to have any hope of succeeding, but a modest amount of training on those examples leads it to do fine. That's a dataset weakness. A model weakness is what we might have in the back of our minds for our adversarial testing, and this would be the case where, even though we have fine-tuned our system on some of these challenge examples, its performance remains really low, even though the system can maintain good performance on the original dataset. And this is just like there is something special about these new examples and the model simply cannot get traction. And there's a third outcome that might be really worrisome, and they would trace this to kind of like annotation artifacts or label shift or something like that. And that's where, in doing this fine-tuning on some challenge examples, we see degraded performance on both the original dataset and the challenge dataset. And that would show that there is something fundamentally confusing about these adversarial testing examples that are causing a lot of problems for the system that we've developed, because even a modest amount of fine tuning causes kind of consequences to ripple through the system that are impacting even performance on the original dataset. All right to close out this screen cast, let me offer you two examples of interesting adversarial tests in our field beginning with the SQUaD question answering data set. I showed you this leaderboard from SQUaD 2.0 at the start of the quarter, and the funny thing, of course, is that you have to go all the way to place 13 on the leaderboard to find a system that is worse than our estimate of human performance. So we have superhuman performance on SQUaD, but what does that really mean? SQUaD was also the site of one of the first really systematic adversarial testing efforts in our field. This is from Jia and Liang 2017. What they did is quite simple. We begin with SQUaD examples, where we have passages and questions as inputs, and the system task is to answer the question, and we can count on the answer being a literal substring in the passage that the system was given. 
The adversarial thing that Jia and Liang did was simply to append misleading sentences to the ends of those passages. And what they found is that systems were systematically misled by those final sentences. Whereas humans could easily ignore them, systems were now inclined to answer drawing on information from those misleading new sentences. And this is kind of an interesting dynamic because you might think, well, we'll just train our system now on passages that have these augmented misleading sentences, and then surely our systems will be more robust. That might be true in some sense, but, of course, we could then just append sentences to the start of the passages, and Jia and Liang found that systems were now confused by the appended initial sentences, and they started to give wrong answers in that mode, as well. And you could kind of go back and forth in this adversarial mode showing that systems were worrisomely easy to trick based on these simple appending of misleading sentences. Now, that's very interesting about adversarial evaluation mode. What I think is more interesting about the outcomes is that they begin to show us just how different adversarial testing can be. So here's the SQUaD leaderboard, the original, at the time of the paper, as well as the results of this adversarial test. And you can see, first of all, that system performance has really plummeted. So this turns out to be highly adversarial to these systems, for whatever reason. I think it's more interesting to note that the system ranking has really been mixed up. So it's not like the systems uniformly drop in performance. Now, as we move from the original rank to the adversarial rank, we have the first place system is now in place five. The second has dropped all the way to place 10, but the seventh place system is now in first place. It looks kind of chaotic. Here's a scatterplot where we have the original system performance along the x-axis and the adversarial system performance along the y-axis. And you can see that it's kind of chaotic. There's no way that one predicts the other. So something very interesting has happened, and that's noteworthy because it looks like that's meaningfully different from what we do when we do standard evaluations. I don't have direct evidence of this from SQUaD, but here's a case where people took two classic image datasets and simply created new test sets according to the same protocols that were used for the original dataset and test sets. And what you find is a very strong correlation. Even though the examples are new, because it's the same protocol, system performance is highly predictive, in the sense that the original test accuracy is perfectly correlated with accuracy on these new testsets. Very different from this adversarial mode where something much more chaotic happened. So adversarial testing is meaningfully different it seems from standard evaluations. Let's move to NLI now. This will tell us two different lessons about how adversarial testing can be informative. So we saw at the start of the course that we now have superhuman performance on the SNLI data set. That's certainly noteworthy. And we're reaching superhuman performance on the MultiNLI testset. We're sure to be there, if we're not already, at the time of this screencast. But we've also seen that systems that perform really well on these datasets are often susceptible to adversaries. 
In the first screen cast I showed you these examples from Glockner, et al.'s, Breaking NLI paper, where they make simple modifications to these hypotheses and find that systems do not behave systematically with respect to human intuitions about the modified examples. And that's the worrisome part, when they quantify that, of course, and that you can see that the best systems at the time up here were doing pretty well on SNLI, and their performance plummeted on these new test sets, showing that this was, for whatever reason, truly adversarial when it comes to those systems. But I also presented this as a story of progress, right, because I showed you that in simply downloading RoBERTA-multi-NLI, that is, RoBERTA fine tuned on the MultiNLI data set, you now have a device where, with no work, you can essentially solve this adversarial test. I think that's really striking, and it points to the fact that RoBERTA, unlike those earlier models, might truly have systematic understanding of the relevant kinds of lexical relationships that you need to solve this adversarial test set. So that's an exciting outcome. Here's a second outcome that you might see, and this is from the Naik,et al. paper that does a wide battery of different adversarial tests on multi-NLI data. They did a bunch of things like antonyms-- "I love the Cinderella story" contradicts "I hate the Cinderella story" just drawing on lexical knowledge. They asked about numerical reasoning across these two premises. Word overlap, and this is a little bit different, in that, you're doing something like just inserting material that you might think is going to be distracting in the mode of the SQUaD adversary, and seeing what effects that has, and the same thing for negation. Adding on to the end information that's going to be misleading for a system and also includes a lot of negation elements. There are a few other modes that I didn't have space for. It's a very rich paper with a very fine-grained breakdown of how systems do on these different adversarial problems. Here's a picture of the dataset and here's a breakdown, and I think the overall takeaway is that the numbers across the board are very low on these adversaries. And so that's interesting, and it looks like even top performing systems from multi-NLI are stumbling with these problems that we surely want our systems to be able to solve, if we're going to call them true common sense reasoners. However, this was actually the basis for a number of the experiments in that Inoculation By Fine Tuning paper that I quoted from before. Here's a kind of rough picture of the performance results that they report on different subsets of that adversarial test on MultiNLI. And you can see that it shows all the different outcomes that we discussed under the heading of Inoculation By Fine Tuning. To simplify things, just focus in on the green lines. The ones with dots are the original system performance, and the ones with crosses are on the challenge dataset. So this first column here is identifying dataset weaknesses. What you're seeing is that, as we fine-tune on more examples from the challenge dataset going along the x-axis here, we very quickly get a system that's actually good at this challenge problem, right. After just about 50 examples, the system is, basically, learning to solve the word overlap and negation problem. So the negation takes a little bit longer at 400 examples. 
So that's a case where the original systems were failing, not because of any intrinsic property of the models being used, but rather because the data clearly just didn't have enough information to resolve these learning targets. We also see outcome two, which is a model weakness. Again, follow the green lines, and you can see here that, for spelling errors and length mismatch, no amount of fine tuning on challenge examples helps these systems get traction on these problems. That's these flat lines here. And that's showing that there's something fundamentally wrong, possibly, with these models. They are just unable to solve these two challenge problems. And then we also see the third artifact. For numerical reasoning, you'll notice that system performance is kind of really chaotic here, and that's suggesting that there is something importantly and problematically different about these challenge examples because fine tuning on them causes the system to become really chaotic in its predictions, and we get degraded performance, not only on the challenge set, but also on the original dataset. showing that we've done something quite disruptive. So this is really interesting that in this case, from this battery of adversarial tests, we see all these different outcomes pointing us to all sorts of different lessons about what action we should take to make these systems more robust.
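To make the inoculation-by-fine-tuning protocol concrete, here is a minimal sketch in Python. It uses scikit-learn's SGDClassifier with partial_fit as a stand-in for a real NLU model, and the toy texts and labels are purely hypothetical; the point is only the shape of the procedure: train on the original data, fine-tune briefly on a handful of challenge examples, and then re-score both test sets.

```python
# Minimal sketch of inoculation by fine-tuning (Liu et al.), with a
# scikit-learn model standing in for a real NLU system. Toy data throughout.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier

def accuracy(vec, clf, dataset):
    X = vec.transform([text for text, _ in dataset])
    y = [label for _, label in dataset]
    return clf.score(X, y)

def inoculate(vec, clf, challenge_train, orig_test, challenge_test, epochs=5):
    """Fine-tune an already-trained classifier on a few challenge examples,
    then re-test on the original test set and the held-out challenge set."""
    X_fine = vec.transform([text for text, _ in challenge_train])
    y_fine = np.array([label for _, label in challenge_train])
    for _ in range(epochs):
        clf.partial_fit(X_fine, y_fine)  # continue training from current weights
    return accuracy(vec, clf, orig_test), accuracy(vec, clf, challenge_test)

# Hypothetical sentiment data; the "challenge" examples involve negation.
train = [("great movie", 1), ("terrible film", 0), ("loved it", 1), ("awful plot", 0)]
orig_test = [("really great film", 1), ("truly awful movie", 0)]
challenge_train = [("not terrible at all", 1), ("not great at all", 0)]
challenge_test = [("not awful", 1), ("not good", 0)]

vec = CountVectorizer(ngram_range=(1, 2)).fit([text for text, _ in train])
clf = SGDClassifier(random_state=0)
clf.fit(vec.transform([text for text, _ in train]), [label for _, label in train])

print("Before inoculation:", accuracy(vec, clf, orig_test), accuracy(vec, clf, challenge_test))
print("After inoculation: ", *inoculate(vec, clf, challenge_train, orig_test, challenge_test))
```

In a real study, the fine-tuning sample sizes would be varied (50, 100, 400 examples, and so on), and the resulting curves on the two test sets are what let you read off dataset weakness, model weakness, or the third, artifact-like outcome.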
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Basic_Reweighting_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part 4 on our series on distributed word representations. We're going to be talking about basical reweighting schemes. Essentially, I feel like we've been faithful to the underlying counts of our matrices for too long. It's time to start messing with them. Here are some high-level goals that we have for reweighting. And we would like in these matrices to amplify the associations that are important and trustworthy and unusual while correspondingly deemphasizing the things that are mundane or quirky or reflect errors or idiosyncrasies in the data that we use. Now, of course, absent the defined objective function of the machine-learning sense. This is going to remain a fuzzy goal, but we do have some quantitative hooks, I think. We have this guiding intuition that we would like to move away from raw counts because frequency alone is generally a poor proxy for the kind of semantic information that we hope to extract. So we can ask for each of the reweighting schemes that we consider, first, how does it compare to the underlying raw count values? If the scheme is just rescaling the underlying counts, it's probably not worth the effort. On the other hand, if it gives us a very different distribution, then at least we know that we're cooking with fire when it comes to moving away from raw frequency. There's a related question that I would like us to have in mind, what is the overall distribution of values that the reweighting scheme delivers? Count distributions are very skewed in a way that can make them difficult to deal with for lots of analytic and machine-learning methods. So we might hope that in reweighting, in addition to capturing things that are important and deemphasizing things that are mundane, would also give us an overall distribution of values that was more tractable for these downstream applications. And then finally I personally have a goal that we would like to do, no feature selection based on counts or outside resources like stopword dictionaries. I don't want to be filtering off parts of the vocabulary a priori, because for all I know, something that's a boring stopword for one genre is actually an important content word for another. We would like the method to sort of make that decision. So let's start with the most basic scheme. And this is a scheme that will pay attention only to the row context. This is normalization. So this is actually a repeat from the lecture on vector comparison L2 norming. We've calculated the L2 length as a fixed quantity for each row vector. And then the length normalization of that row vector is just taking each value in the original vector and dividing it by that fixed quantity of the L2 lengths. There's a related, and perhaps more familiar notion, which I've called probability distribution where we follow the same logic. We just replace that normalizing constant, the L2 length with the sum of all the elements in the vectors. But again we do this element-wise division by that fixed quantity to normalize the vector into a probability distribution. I think both of these methods can be powerful, but the shame of them is that they are paying attention only to the row context. For a given cell, IJ, we're looking just across the row I. We're not considering the context that could come from the column J. So let's begin to correct that omission. Here is kind of the star of our show in the quiet sense. This is the first scheme we'll look at that pays attention to both row and column context. 
This is observed over expected. Let's just go through this notation here. We have the row sum. I think that's intuitive. Correspondingly, the column sum, the sum of all values along the column. And then the sum for some matrix x is just the sum of all the cell values in that matrix. Those are the raw materials for calculating what's called the expected value. The expected value given a matrix x for cell i, j is the rowsum times the columnsum as the numerator, divided by the sum of all the values in the matrix. This is an expected quasi-count. It is giving us the number we would expect if the row and column were independent of each other in the statistical sense. And that's the sense in which this is an expectation. The observed over expected value simply compares the observed value in the numerator to that expected value. So in a bit more detail, here's how the calculations work. We've got this tiny little count matrix here. Let's look at cell x, a. It's got a count of 34. That's our observed count over here in the numerator. The denominator is the product of the rowsum and the columnsum, 45 times 81, divided by the sum of all the values in this matrix, which is 99. We repeat that calculation for all the other cells, making the corresponding adjustments, and that gives us a completely reweighted matrix. Here's the intuition. That was the calculation. Let's think about why we might want to do this. So I've got here a highly idealized little count matrix. And the conceit of this example is that "keep tabs" in English is an idiom. And otherwise, the word "tabs" alone doesn't appear with many other words. It's kind of constrained to this idiomatic context. So we get a really high count for "keep tabs" and a relatively low count for "enjoy tabs," again because "tabs" doesn't really associate with the word "enjoy." On the right here, I've got the expected calculation. And it comes out just like we would hope. The expected count for "keep tabs" is a mere 12.48; compare that with the observed count of 20. "Keep tabs" is overrepresented relative to our expectations in virtue of the fact that the independence assumption built into the expected calculation is just not met here because of the collocational effect. Similarly, the expected count for "enjoy tabs" is 8.5. That's much larger than our observation, again because these are kind of disassociated with each other in virtue of the restricted distribution of "tabs." And that brings us to really the star of our show, and in fact the star of a lot of the remainder of this unit. This is pointwise mutual information, or PMI. PMI is simply observed over expected in log-space, where we stipulate that the log of 0 is 0. In a bit more detail, for matrix x, given cell i, j, the PMI value is the log of the observed count over the expected count. And that's it. Many people find it more intuitive to think of this in probabilistic terms. That's what I've done over here on the right. It's equivalent numerically, but for this kind of calculation, we first form a joint probability table by just dividing all the cell values by the sum of all the cells. That gives us the joint probability table, and then the row probability and the column probability are just sums across the row and the column respectively. And again we multiply them. And that's kind of nice, because then you can see we really are testing an independence assumption. It's as though we say we can multiply these probabilities because they're independent. 
If the distribution is truly independent, that ought to match what we observed. And of course discrepancies are the things that these matrices will highlight. Let's look at an example. And there's one thing that I want to track because we work through this example. And that's the cell down here, this lonely little 1. So this is a count matrix. I've got this as a word by document matrix. This is a very flexible method, and we apply to lots of matrix designs. Over here, I form the joint probability table. And I've got here the columnsum and the rowsum corresponding to the column and row probability. These are the raw ingredients for the PMI matrix, which is derived down here by applying this calculation to all of these values. Notice what's happened, that lonely 1 down here because it's in a very infrequent row and a relatively infrequent column. It has the largest PMI value in the resulting matrix. Now that could be good, because this could be a very important event, in which case we want to amplify it. On the other hand, NLP being what it is, this could be just a mistake in the data or something. And then this exaggerated value here could turn out to be problematic. It's difficult to strike this balance. But it's worth keeping in mind as you work with this method that it could amplify not only important things but also idiosyncratic things. Positive PMI is an important variant of PMI, so important, in fact, that I would like to think of it as the kind of default view that we take on PMI for the following reason. PMI is actually undefined where the count is 0, because we need to take the log of 0. So we had to stipulate that the log of 0 was 0 for this calculation. However, that's arguably not coherent if you think about what the underlying matrix represents. What we're saying with PMI is that larger than expected values get a large PMI. Smaller than expected values get a smaller PMI. That's good. But when we encounter a 0, we place it right in the middle. And that's just strange because a 0 isn't evidence of anything larger or smaller. It doesn't deserve to be in the middle of this. If anything, we just don't know what to do with the 0 values. So this is arguably sort of incoherent, and the standard response to it is to simply turn all of the negative values into 0. And that's positive PMI that's defined here. So we simply lop off all the negative values by mapping them to 0. And that at least restores the overall coherence of the claims where all we're doing is reflecting the fact that larger than expected counts have large positive PMI and the rest are put in 0. Let's look briefly at a few other reweighting schemes, starting with the t-test. The t-test is something that you work with on the first assignment to implement it. It turns out to be a very good reweighting scheme. And I like it because it obviously reflects many of the same intuitions that guide the PMI and observed over expected calculations. TF-IDF is quite different. So this is typically performed on word by document matrices in the context of information retrieval. Given some corpus of documents D, we're going to say that the term frequency for a given cell is that value divided by the sum of all the values in the column, giving us the kind of probability of the word given the document that we're in. And then the IDF value is the log of this quantity here. This is the number of documents in our corpus that is the column dimensionality divided by the number of documents that contain the target word. 
And again we map the log of 0 to 0. The TF-IDF value is the product of those two values. I think this can be an outstanding method for very large sparse matrices, like the word-by-document one. Conversely, it is typically not well-behaved for very dense matrices, like the word-by-word ones that we are favoring in this course. The reason is this IDF value. It's very unlikely that you would have a word that appeared literally in every document. However, in the context of very dense word-by-word matrices, it is possible for some words to co-occur with every single other word, in which case you'll get an IDF value of 0, which is probably not the intended outcome for something that's high frequency but might nonetheless be important in the context of individual documents. So I'd probably steer away from TF-IDF unless you're working with a sparse matrix design. And then even further afield from the things we've discussed, you might explore using, for example, pairwise distance matrices, where I calculate the cosine distance between every pair of words along the rows and form a matrix on that basis. Really different in its approach and probably in its outcomes, but it could be very interesting. Let's return to our essential questions. Remember, for each one of these reweighting schemes we want to ask, how does it compare to the raw count values, and what overall distribution of values does it deliver? So let's do a bit of an assessment of that. I'm working with the giga5 matrix that you can load as part of the course materials. That's Gigaword with a window of 5 and a scaling of 1 over n. Up here on the left, I have the raw counts, with the cell value along the x-axis and the number of things that have that value along the y-axis. And you can see that raw counts give a very difficult distribution. First of all, this goes all the way up to about 100 million, starting from 0. Most things have quantities that are close to 0. And then you have this very long thin tail of things that are very high frequency. This highly skewed distribution is difficult for many machine-learning methods, both in terms of the skew towards 0 and very low values and also in terms of the range of these x-axis values. So we would like to move away from it. That's one motivating reason. When we look at L2 norming and probability distributions, they do kind of the same thing. They're constraining the cell values to be between 0 and 1, or roughly so. But they still have a heavy skew toward things that are very small in their adjusted, reweighted values. Observed over expected is more extreme in that regard, as is TF-IDF. So again, the observed over expected values range quite high, up to almost 50,000, which is somewhat better than the raw counts, but it's still very large in terms of its spread. And we still have that heavy skew towards 0. TF-IDF solves the range problem down here, because it's highly constrained in the set of values, but it still has a very heavy skew, looking a lot like the raw count distribution. From this perspective, it looks like PMI and positive PMI are really steps forward. First of all, for PMI the distribution of cell values has this nice sort of normal shape. And the values themselves are pretty constrained, to about negative 10 to 10. And then for positive PMI, we simply lop off all the negative values and map them to 0. So it's more skewed towards 0, but not nearly as skewed as all these other methods that we're looking at. 
So it's looking like PMI and PPMI are good choices here, just from the point of view of departing from the raw counts and giving us a tractable distribution. There's another perspective, where we directly compare in these matrices the co-occurrence count on a log scale, so it's tractable to plot, with the resulting reweighted cell value. What we're looking for here, presumably, is an overall lack of correlation. I think we find that L2 norming and probabilities are pretty good on this score. They have kind of low correlations. And they make good use of a large part of the scale that they operate on. Observed over expected has a low correlation with the cell counts, which looks initially good, but its biggest problem is that the cell values are strangely distributed, and this correlation value might not even be especially meaningful, given that we have a few outliers and then a whole lot of things that are close to 0. And TF-IDF is frankly similar: low correlation, but maybe not so trustworthy in terms of that correlation value. Fundamentally, again, these look like difficult distributions of values to work with. Again, PMI and positive PMI look really good. Relatively low correlations, so we've done something meaningful. And both of these are making meaningful use of a substantial part of the overall space that they operate in. And we have lots of different combinations of cell values and underlying co-occurrence counts. Something of a correlation, but that could be good. But we're not locked into that correlation. So we've done something meaningful. To wrap up, let's do some relationships and generalizations, just some reminders here. So a theme running through nearly all of these schemes is that we want to reweight the cell value relative to the values we expect, given the row and the column. And we would like to make use of both of those notions of context. The magnitude of the counts might be important: just think about how 1 out of 10 as a bit of evidence and 1,000 out of 10,000 as a bit of evidence might be very different situations in terms of the evidence that you have gathered. Creating probability distributions and length normalizing will obscure that difference. And that might be something that you want to dwell on. PMI and its variants will amplify the values of counts that are tiny relative to their rows and their columns. That could be good, because that might be what you want to do, find the things that are really important and unusual. Unfortunately, with language data, we have to watch out that they might be noise. And finally, TF-IDF severely punishes words that appear in many documents. It behaves oddly for dense matrices, which can include the word-by-word matrices that we're working with. So you might proceed with caution with that particular reweighting scheme in the context of this course. Finally, some code snippets. I'm just showing off that our VSM module in the course repository makes it really easy to do these reweighting schemes, all the ones that we've talked about and more, in fact. And returning to the end of our vector comparison lecture, you might recall that I looked at the neighbors of "bad" in this yelp5 matrix. And it really didn't look good. This does not look especially semantically coherent. When I take those underlying counts and I just adjust them by positive PMI, I start to see something that looks quite semantically coherent. And I think we're starting to see the promise of these methods. 
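The code snippets I have in mind look something like the following. This is a sketch from memory, so treat the exact file name and the vsm function signatures (vsm.pmi, vsm.neighbors, vsm.cosine) as assumptions to check against the course repository rather than as the definitive interface.

```python
# Hypothetical sketch of using the course vsm module for reweighting; the file
# name and function signatures are assumptions and may differ in the repo.
import os
import pandas as pd
import vsm  # course repository module

DATA_HOME = os.path.join("data", "vsmdata")
yelp5 = pd.read_csv(os.path.join(DATA_HOME, "yelp_window5-scaled.csv.gz"), index_col=0)

yelp5_ppmi = vsm.pmi(yelp5, positive=True)          # positive PMI reweighting
print(vsm.neighbors("bad", yelp5_ppmi, distfunc=vsm.cosine).head())
```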
And this is really just the beginning in terms of surfacing semantically coherent and interesting information from these underlying counts.
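If you want to see the arithmetic laid bare rather than rely on the vsm module, here is a compact from-scratch sketch of the main schemes from this screencast, written with numpy and pandas. The tiny matrix at the bottom is the one from the observed-over-expected example above (row sum 45, column sum 81, total 99); everything else is illustrative rather than the course's reference implementation.

```python
# Minimal from-scratch versions of the reweighting schemes described above,
# operating on a pandas DataFrame of co-occurrence counts.
import numpy as np
import pandas as pd

def length_norm(df):
    return df.div(np.sqrt((df ** 2).sum(axis=1)), axis=0)   # L2-normalize each row

def prob_norm(df):
    return df.div(df.sum(axis=1), axis=0)                   # rows become distributions

def observed_over_expected(df):
    total = df.values.sum()
    expected = np.outer(df.sum(axis=1), df.sum(axis=0)) / total
    return df / expected

def pmi(df, positive=True):
    oe = observed_over_expected(df)
    with np.errstate(divide="ignore"):
        vals = np.log(oe)
    vals[np.isinf(vals)] = 0.0          # stipulate log(0) = 0
    if positive:
        vals[vals < 0] = 0.0            # positive PMI: clamp negatives to 0
    return vals

def tfidf(df):
    # Assumes a word-by-document design: rows are words, columns are documents.
    tf = df / df.sum(axis=0)
    doc_freq = (df > 0).sum(axis=1)
    with np.errstate(divide="ignore"):
        idf = np.log(df.shape[1] / doc_freq)
    idf[np.isinf(idf)] = 0.0            # again map degenerate log cases to 0
    return tf.mul(idf, axis=0)

# The small count matrix from the observed/expected example above.
toy = pd.DataFrame([[34.0, 11.0], [47.0, 7.0]], index=["x", "y"], columns=["a", "b"])
print(observed_over_expected(toy))
print(pmi(toy, positive=True))
```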
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Data_Resources_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
BILL MACCARTNEY: So that's the end of the introduction. Let's now begin to drill down on the data resources that we'll need to launch our investigation. And there are two different kinds of data we need to talk about, the corpus and the KB. Just like any other NLP problem, we need to start with a corpus, a large collection of natural language text. And for relation extraction, we need sentences containing two or more entities. And because our goal is to do relation extraction with distant supervision, we need to be able to connect the entities to a KB. So we need a corpus in which the entity mentions are annotated with entity resolutions, which map them to unique, unambiguous identifiers, the same identifiers that are used in the KB. So in this representation, I've got the string Elon Musk, which is just an English language string. That's what we call an entity mention. And then I've got Elon_Musk, which is an entity ID. It's a unique, unambiguous identifier for this entity in some predefined dictionary of entity IDs. And it's very common for this purpose to use something like Wikipedia, which has one Wikipedia page for almost any entity that you can think of. For our investigation, we're going to use an adaptation of the Wikilinks corpus, which was produced by Google and UMass in 2013. The full corpus contains 40 million entity mentions from 10 million web pages. And each entity mention is annotated with a Wikipedia URL. But we're going to use just a subset of the full corpus in order to make things manageable. So let's start to look at some of the code we'll use in the Python notebooks for this topic. The data assets that we'll use live in a subdirectory of our data directory called rel_ext_data, and we've defined a class called Corpus, which holds the examples and which lets you quickly look up examples containing specific entities. So if we load our corpus, we find that it contains more than 330,000 examples. Pretty good size. It's small enough that we can work with it easily on an ordinary laptop, but it's big enough to support effective machine learning. And we can print out a representative example from the corpus. Actually, this is a bit hard to read. So let me give you a different view of the same example. We represent examples using the Example class, which is a namedtuple with 12 fields listed here. The first two fields, entity_1 and entity_2, contain unique identifiers for the two entities mentioned. We name entities using Wiki IDs, which you can think of as the last portion of a Wikipedia URL. The next five fields represent the text surrounding the two mentions, divided into five chunks. So left contains the text before the first mention. mention_1 is the first mention itself. middle contains the text between the two mentions. mention_2 is the second mention, and right contains the text after the second mention. And the last five fields contain the same five chunks of text, but this time annotated with part-of-speech tags, which may turn out to be useful when we start building models for relation extraction. Now, whenever you start to work with a new dataset, it's good practice to do some data exploration to get familiar with the data. A big part of this is getting a sense of the high-level characteristics of the data, summary statistics, distributions, and so on. For example, how many entities are there, and what are the most common ones? Here's some code that computes that, and here are the results. 
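That exploration code looks roughly like the sketch below. The rel_ext module, the Corpus constructor arguments, and the .examples attribute are assumptions about the course materials (check the notebook for the exact names); the counting logic is the part that matters.

```python
# Hedged sketch of the entity-counting exploration described above; module,
# file, and attribute names are assumptions about the course materials.
import os
from collections import Counter
import rel_ext

rel_ext_data_home = os.path.join("data", "rel_ext_data")
corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, "corpus.tsv.gz"))
print("Examples:", len(corpus.examples))

counts = Counter()
for ex in corpus.examples:
    counts[ex.entity_1] += 1
    counts[ex.entity_2] += 1
print("Unique entities:", len(counts))
print("Most common:", counts.most_common(10))
```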
So there are more than 95,000 unique entities, and it looks like the most common entities are dominated by geographic locations. Now, the main benefit we get from the Corpus class is the ability to retrieve the examples containing specific entities. So let's find examples containing Elon Musk and Tesla Motors. There are five such examples, and here's the first one. Actually, this might not be all of the examples containing Elon Musk and Tesla Motors. It's only the examples where Elon Musk was mentioned first and Tesla Motors was mentioned second. There may be additional examples that have them in the reverse order. So let's check, look for Tesla Motors, Elon Musk. Sure enough, two more examples in reverse order. So going forward, we'll have to remember to check both directions when we're looking for examples containing a specific pair of entities. OK, a few last observations on the corpus. First, this corpus is not without flaws. As you get more familiar with it, you'll probably discover that it contains many examples that are nearly, but not exactly, duplicates. This seems to be an artifact of the web document sampling methodology that was used in the construction of the Wikilinks dataset. And it winds up creating a few distortions, and we may see some examples of this later. But even though the corpus has a few warts, it will serve our purposes just fine. One thing that this corpus does not include is any annotation about relations. So it could not be used for the fully supervised approach to relation extraction, because that requires a relation label on each pair of entity mentions. And we don't have any such annotation here. The only annotations that we have in this corpus are entity resolutions, mapping an entity mention to an entity ID. That means that in order to make headway, we'll need to connect the corpus with an external source of knowledge about relations. We need a KB. Happily, our data distribution does include a KB, which is derived from Freebase. Freebase has an interesting history. It was created in the late 2000s by a company called Metaweb led by John Giannandrea, who later became my boss. Google acquired Metaweb in 2010, and Freebase became the foundation of Google's knowledge graph. Unfortunately, Google shut Freebase down in 2016, which was tragic. But the Freebase data is still available from various sources. So our KB is a collection of relational triples, each consisting of a relation, a subject, and an object. So for example, place_of_birth, Barack_Obama, Honolulu. has_spouse, Barack_Obama, Michelle_Obama. author, The Audacity of Hope, Barack_Obama. So as you might guess, the relation is one of a handful of predefined constants like place_of_birth or has_spouse. The subject and the object are entities represented by Wiki IDs. It's the same ID space used in the corpus. Wiki IDs are basically the last part of a Wikipedia URL. Now, just like we did for the corpus, we've created a KB class to store the KB triples and some associated indexes. This class makes it easy and efficient to look up KB triples both by relation and by entities. So here, we're just loading the data and printing a count of the KB triples. There are 45,000 KB triples, so this is quite a bit smaller than the corpus. If you remember, the corpus has 330,000 examples. And we can print out the first KB triple. So this is a KB triple that says that the contains relation holds between Brickfields and Kuala Lumpur Sentral railway station, which I did not know. 
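Loading and probing the KB looks similar; again, the constructor and the kb_triples attribute are assumptions, while all_relations, get_triples_for_relation, and get_triples_for_entities are the methods discussed here and in what follows.

```python
# Hedged sketch of loading the KB and the lookups discussed below; exact
# constructor arguments and attribute names are assumptions.
import os
import rel_ext

rel_ext_data_home = os.path.join("data", "rel_ext_data")
kb = rel_ext.KB(os.path.join(rel_ext_data_home, "kb.tsv.gz"))

print("KB triples:", len(kb.kb_triples))
print("Relations:", kb.all_relations)
print("contains size:", len(kb.get_triples_for_relation("contains")))
print(kb.get_triples_for_entities("France", "Germany"))
```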
Just like we did with the corpus, let's do some data exploration to get a sense of the high-level characteristics of the KB. So first, how many relations are there? The all_relations attribute of the KB contains a list of its relations. And it seems that there are 16 of them. Well, what are the relations and how big are they? This code prints out a list with sizes. Note the get_triples_for_relation method, which returns a list of the KB triples for a given relation. You begin to get a sense of what kind of stuff is in this KB. It looks like the contains relation is really big, with more than 18,000 triples. And there are a few relations that are pretty small, with fewer than 1,000 triples. Here's some code that prints one example from each relation so that we can form a better sense of what they mean. Some of these are familiar facts, like adjoins, France, Spain. Others might refer to unfamiliar entities. So for example, I've never heard of Sheridan Le Fanu. But I think you can quickly form an intuitive sense of what each relation is about. Now, one of the most important methods in the KB class is get_triples_for_entities, which lets us look up triples by the entities they contain. So let's use it to see what triples contain France and Germany. OK, sure, they belong to the adjoins relation. That makes sense. Now, relations like adjoins are intuitively symmetric. So we'd expect to find the inverse triple in the KB as well, and yep, it's there. But note that there's no guarantee that such inverse triples actually appear in the KB. There's no guarantee that the KB is complete. And you could easily write some code to find missing inverses. Now, that relation adjoins is symmetric, but most relations are intuitively asymmetric. So let's see what triples we have for Tesla_Motors and Elon_Musk. OK, they belong to the founders relation. Good. That's expected. That's an asymmetric relation. What about the inverse, Elon Musk and Tesla Motors? OK, they belong to the worked_at relation. Seems like a funny way to describe Elon's role at Tesla, but OK. So this shows that you can have one relation between x and y and a different relation that holds between y and x. One more observation. There may be more than one relation that holds between a given pair of entities, even in one direction. So for example, let's see what triples contain Cleopatra and Ptolemy_XIII_Theos_Philopator. Oh, my goodness. This pair belongs to both the has_sibling relation and the has_spouse relation, to which I can only say, oh. Moving right along, let's look at the distribution of entities in the KB. How many entities are there and what are the most common ones? Well, here's some code that computes that. There are 40,000 entities in the KB. So that's fewer than half as many entities as in the corpus. If you remember, the corpus has 95,000 unique entities. So there are lots of entities in the corpus that don't appear in the KB at all. But just like the corpus, the most common entities are dominated by geographic locations, England, India, Italy, and so on. Note that there's no promise or expectation that this KB is complete. For one thing, the KB doesn't even contain many of the entities from the corpus. And even for the entities it does include, there may be possible triples which are true in the world but are missing from the KB. So as an example, these triples are in the KB: founders, Tesla_Motors, Elon_Musk; worked_at, Elon_Musk, Tesla_Motors; and founders, SpaceX, Elon_Musk. 
You might expect to find worked_at, Elon_Musk, SpaceX. But nope, that triple is not in the KB. That's weird. Well, in fact, the whole point of relation extraction is to identify new relational triples from natural language text so that we can add them to a KB. If our KBs were complete, we wouldn't have anything to do. Now actually, in this case, you might object that we don't need to do relation extraction to make that completion. We could write some logic that recognizes that founders(x, y) entails worked_at(y, x), apply that rule systematically across the KB, and use that to fill in the missing triple in this case. But the general point still stands that there may be lots of triples that are true in the world but missing from the KB, where that strategy is not going to allow us to add the missing information.
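Here is what that founders-to-worked_at rule might look like in code. It assumes the KB interface sketched earlier and that each triple exposes rel, sbj, and obj fields; those attribute names are an assumption about the course materials.

```python
# Sketch of rule-based KB completion: founders(x, y) entails worked_at(y, x).
# Assumes triples with .rel, .sbj, .obj fields and the KB lookup methods above.
def missing_worked_at_triples(kb):
    """Return worked_at triples implied by founders triples but absent from the KB."""
    missing = []
    for t in kb.get_triples_for_relation("founders"):
        # founders(company, person) should imply worked_at(person, company)
        inverses = kb.get_triples_for_entities(t.obj, t.sbj)
        if not any(inv.rel == "worked_at" for inv in inverses):
            missing.append(("worked_at", t.obj, t.sbj))
    return missing
```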
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
NLU_and_Information_Retrieval_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
Welcome, everyone, to the first screencast in our NLU and information retrieval series. The goal of this introductory screencast is twofold. I will first introduce the IR area. Then I will discuss ways in which NLU and IR can interact productively and focus on how retrieval can be an effective component in defining our NLU tasks and building our NLU systems. So what is information retrieval, or IR? To a first approximation, this is the field concerned with search. The first example that typically comes to mind is web search. But as we'll see today, this field extends richly beyond web search and has strong connections to our work with NLU. Let's now attempt to define IR more formally. Here is a simplified version of the definition used by the Introduction to IR book by Manning et al. They define IR as the process of finding material that fulfills an information need from within a large collection of unstructured documents. Let's unpack this definition. Starting on the left-hand side here, the definition says that we're concerned with finding material from a large collection. In other words, large-scale search is at the essence of IR. On the right-hand side, the definition restricts this to unstructured documents, basically items like text, media, and products, ones that lack the clear-cut structure of things like database tables or graphs. Structure-based search or structure-based traversal of graphs and databases are not typically considered IR problems for our purposes, although, of course, they're interesting in their own right. This leads us to the term that was at the center of our definition, namely the information need. It is difficult to think of IR without thinking of the user at the center of the system. And the information need is what the user has in mind to solve a task or otherwise learn or reach the material that they are looking for. The goal of a search system is first to identify and then fulfill the user's information need. So whatever we retrieve is only going to be considered relevant to the extent that it advances this goal. In most IR tasks, the user would explicitly provide us with a query that summarizes and expresses their information need. It is very important to note that this query may contain ambiguity, may miss some important details, or might even sometimes ask the wrong question. And that's completely normal. The user may not even be sure what precisely they're looking for. That's why they're searching for something. And so, we must rely on our knowledge of the task and whatever we know about the user within the constraints of our application in order to solve IR problems. The second thing is that typical information needs vary by task. How to best interpret and deal with the information need are factors that vary greatly by task and by collection type in IR. I'll take it you've already made the connection between IR and searching the web, searching your email, and also finding files on your desktop buried in deep folders or something. But there are plenty of other IR tasks where search is crucial. For instance, you might want to find recent papers related to the BERT paper by Devlin et al. Of course, this is not the best example because there are many, many papers related to BERT these days. But in this case, your query might be the full text of the BERT paper, and the system might try to search the ACL Anthology and the computation and language section of arXiv for papers similar to BERT. 
Recommendation is another key IR topic. In recommendation, we still seek relevant material from a large collection of unstructured items, but, in this case, the user has no explicit query. And instead, the previous interactions enabled the recommendation system to suggest the matches. Patent search is yet another IR task. And unlike the others we've mentioned so far, it's often used by experts, not by average users, and it has very strong emphasis on hierarchy. So unlike the average web query where you might be completely content with one very good match at the top, patent search may need to find every relevant patent to a query or something that approximates that. Lastly, even buying a new laptop can be an IR problem. And in particular, a conversation of an IR problem. Here, the system may go in a back and forth style between searching for relevant products and asking the user for their preferences about cost, screen quality, and storage, and other factors on online e-commerce platforms. Having looked at all of these IR tasks, it's important to keep in mind that each of those tasks poses its own unique challenges. So even though we're always interested in relevance and in finding relevant items, each of those tasks has its own challenges and its own components. To underscore this, let's use web search as a frame of reference. While standard web search might pose considerable challenge when it comes to the massive scale involved in terms of documents and also queries, even something as seemingly mundane as searching for conversations on your Slack workspace often lacks key features that makes web search tractable in the first place and makes it work the way it does in the first place. For one, so many web searches ask frequently searched or head queries. The sheer popularity of head queries makes them an easy target for large search engines. Of course, there's always a long tail of various search queries that still pose considerable challenge, especially in highly technical domains, but it still stands that, in a domain like web search, solving the head queries gets you a very big part, a very big share of answering most user queries. There's another factor. Web search enjoys highly redundant documents out there that address common topics, where each document is written in a slightly different way. This often shifts the search problem into a precision one-- basically finding some documents, at least one, that definitely match the query, as opposed to a recall one-- finding every document that matches the query because there's already too many of them. Clearly, this is not always the case if you're looking for a very specific item in your Slack conversation and history. Yet another factor in web search is the rich link structure that links between existing related web pages, which again introduces more hierarchy and might make this task more tractable in practice. The idea here is definitely not that web search is easy because it's not easy. But the different tasks pose different challenges for our IR systems. So that is IR. Where does our work on NLU fit in IR? Well, of course, queries and documents are often expressed in natural language, a decent part. So we naturally want to understand a query's meaning and its intent and understand the document's contents and their topics to be able to effectively match queries to documents. This form of understanding is critical, although you can go pretty far for many IR tasks with intelligently matching terms at a lexical level. 
The vocabulary mismatch problem makes this quite unattractive in practice. To explain, vocabulary mismatch happens when queries and documents use different terms to refer to the same thing. So I have here on the slide an example query that shows this happening in practice. So the question is, or the query is what compounds protect the digestive system against viruses? And the snippet that we are interested in finding says in the stomach gastric acid and proteases serve as powerful chemical defenses against ingested pathogens. You can see that the passage that we found here uses pathogens instead of viruses, which is a bit more general, stomach instead of the digestive system, which is a bit more specific, and chemical instead of compound, defenses instead of protect. But it's pretty clear that it still answers the same question and answers it very well, in fact. So, where does NLU fit in IR? I guess a nice quote here is Jimmy Lin's statement. Jimmy Lin is an IR researcher who says IR makes NLP useful and NLP makes IR interesting. Of course, we do think NLP is useful anyway, and also IR is interesting anyway. So I added more between brackets here, but we do get Jimmy's point. OK. Onto our more central question, where does IR fit into our study of NLU, and how can IR serve us? In thinking about this, I believe it's helpful to appreciate that as our models become more advanced in NLU, they too, like humans, start to have complete information needs in solving their tasks. More concretely, retrieval can contribute to our NLU tasks and systems in three exciting ways. First, retrieval provides a rich source for creating challenging and realistic NLU tasks, ones where finding information from a large corpus is essential. We will look closely at this bullet in the remainder of these slides. Second, retrieval offers a powerful tool to make NLU models for existing tasks more accurate and more effective. We'll touch upon this today, but we'll discuss it in more depth later. Third, retrieval can often lend us a nice framework for evaluating NLU systems whenever the output domain is large, just like in search or whenever low latency is important, which are key characteristics in IR. We will expand on this in a later screencast as well. In the remainder of the screencast, we'll explore how retrieval allows us to pose very challenging and very realistic open-domain NLU tasks. Chris has briefly introduced SQuAD before in the overview lecture. To remind you of this question answering task, the input that we are given in SquAD is a context message which was obtained from Wikipedia and a question that tests our model's understanding of this one passage. This is an interesting task on its own right, one that has enjoyed tons of work and lots of recent progress due to pre-trained language models. But with retrieval in mind, we can move from standard QA like SQuAD to open-domain question answering. Specifically, in open-domain question answering, we can ask what if we want to answer the same kinds of factoid questions as SQuAD or other types of questions but without the perhaps unrealistic hint of receiving the particular passage in Wikipedia that already contains the answer? In this case, you can take all of the English Wikipedia, just as an example, as our context. And then again, pose the same question as SQuAD over all of Wikipedia and build models that can answer these open questions over large corpora. So how would we answer such questions? 
The literature, in particular a nice EMNLP 2020 paper by Roberts et al., introduces a helpful analogy for how we might attempt to tackle this task and how we could think about it. The first, perhaps more familiar and perhaps simpler solution is to pose the question to one of our usual transformers, specifically a generative sequence-to-sequence model, something like T5, GPT-2, or GPT-3. In this case, we're relying on the knowledge stored internally and implicitly in the model parameters. So the model memorizes these facts just like you would do when you enter a closed book exam. Often, this knowledge is memorized the same way language is learned, as a result of language model pre-training or other similar tasks. Closed book approaches to these characteristically open-domain problems offer a particularly consistent way of improving quality and coverage. Well, just take a model, train a larger version of it on more data, and hope that it includes more knowledge and gives you more accurate results. As an alternative to this, you could think about open book approaches to open-domain question answering. So there's the analogy of doing an open book exam, which tests not really your memory but your awareness of where to look for answers and how to use them quickly and productively. In this case, we will build what are typically called retrieve-and-read architectures. As shown at the bottom of the slide, we take the question and first feed it to a retrieval model. The retriever searches our collection of facts, in this case Wikipedia as an example, and extracts a bunch of passages or other contexts that seem useful in trying to answer the original question. These passages are then fed to a downstream reader. So that could just be a small BERT-like model, which studies these passages to answer the original question. In this pipeline, we've essentially relied on this new retrieval component to reduce the original open-domain question answering problem to a much smaller scale standard question answering task, where the downstream model sees a question and the relevant passages, or a few passages, before extracting a short answer. Importantly, you could say that the reader in this architecture is a user that has an information need. And it's the retriever's task to satisfy this need accurately and efficiently. We will study various methods for building retrievers in the subsequent screencasts and look at how these retrievers interact with downstream readers. But for now, let's just explore some of the higher-level differences between open book and closed book solutions to open-domain problems. Our open book solutions often get to be much smaller while still being very accurate. The reason is that we have decoupled knowledge from reasoning and stored the knowledge outside the model. Thus the model itself does not need to store all of these facts inside its parameters, and it gets to be much smaller as a result. As we will see later, this has great implications for efficiency. Moreover, the knowledge can be easily updated by modifying the collection as the facts in Wikipedia, for example, evolve over time. Or alternatively, suppose that you want to switch from answering questions over Wikipedia to posing questions over the NLP literature or perhaps posing questions over the documentation of your favorite software library. You can often do that by simply swapping the collection with a new one and keeping the question-answering model as is to answer questions in this new domain. 
Lastly, because we can see the actual documents that are retrieved and the documents that are read by the reader to extract answers, we are often better positioned to explain how these models know some facts or why they make particular mistakes. On the downside, though, all of a sudden, we now need to worry about the interactions between two components, the retriever and the reader. But I hope that the subsequent set of screencasts will convince you that working with retrievers in NLU is very rewarding. All of this discussion so far has been in the context of open-domain question answering. But there are many other NLU tasks that either inherently subsume retrieval or at least can directly benefit from interacting with a large collection of relevant facts. One of those is claim verification or fact-checking. Here, the model receives as input, a disputed claim, and its goal is to verify or refute this claim and to return documents that justify its decision. Two other tasks are query focused summarization and informative dialogue, where we might also work with a large collection of facts, and given a topic or in the context of a conversation, generate a useful summary of the resources about that topic, perhaps as part of a conversation with the user interested to learn about the new topic. Lastly, entity linking is the task that can be posed over a large textual knowledge base as well. Given an utterance that refers to any number of ambiguous entities or events, we should resolve this ambiguity and map dimensions of these entities to their descriptions in a large knowledge base like Wikipedia. So that would be a form of entity linking. KILT or Knowledge Intensive Language Tasks is a recent effort aimed at collecting a number of different datasets for retrieval based NLP. Incidentally, all of these tasks in KILT explicitly have a knowledge component, like answering a question or verifying a claim. An open question in this exciting area is whether retrieval can improve performance for standard NLU tasks as well, ones where the knowledge challenge is less explicit. Think, for example, sentiment analysis, natural language inference, or any of the other tasks we've studied so far. Well, this remains an open question. But I think that accurate knowledge matters for most, if not all, of our language tasks. And that converting many of these tasks to an open book format or bring your own book approach may be a promising way to tackle these tasks in practice. In the remainder of this unit, we will dig deeper into traditional methods and metrics for information retrieval and then explore recent advances in neural IR, which will make a lot of use of our NLU models like BERT in new and creative ways. And then, we will finally discuss open-domain question answering in more depth as one of the most mature applications of NLU plus IR.
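To make the retrieve-and-read idea from these screencasts a bit more concrete, here is a minimal Python sketch. It is not the pipeline from any particular paper: the tiny corpus, the question, and the toy_retriever function are all hypothetical stand-ins (the retriever is just word overlap in place of BM25 or a neural retriever), and the reader is an off-the-shelf extractive question-answering model loaded through the Hugging Face transformers pipeline.

```python
# A minimal retrieve-and-read sketch over a tiny in-memory corpus.
# The retriever is a toy word-overlap scorer standing in for BM25 or a
# neural retriever; the reader is an off-the-shelf extractive QA model.
from collections import Counter
from transformers import pipeline

corpus = [
    "The Transformers cartoon premiered in the United States in September 1984.",
    "Gastric acid and proteases in the stomach act as chemical defenses against pathogens.",
    "Wikipedia is a free online encyclopedia maintained by volunteer editors.",
]

def toy_retriever(question, documents, k=2):
    """Score each document by word overlap with the question (a stand-in
    for a real retriever) and return the top-k passages."""
    q_terms = Counter(question.lower().split())
    scored = []
    for doc in documents:
        d_terms = Counter(doc.lower().split())
        overlap = sum((q_terms & d_terms).values())
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

# The reader: a small extractive QA model (any SQuAD-style checkpoint works).
reader = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

question = "When did the Transformers cartoon first air?"
passages = toy_retriever(question, corpus)

# Read each retrieved passage and keep the highest-scoring answer span.
candidates = [reader(question=question, context=p) for p in passages]
best = max(candidates, key=lambda a: a["score"])
print(best["answer"])
```

The point of the sketch is only the division of labor: the retriever narrows the whole collection down to a few passages, and the reader extracts a short answer from them, so swapping in a better retriever or a different collection leaves the reader untouched.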
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Neural_IR_part_3_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
OMAR KHATTAB: Hello, everyone. Welcome to part five of our series on NLU and IR. This screencast will be the third among three of our videos on neural IR. In the previous screencast we discussed learning term weights as a paradigm for building neural IR models that are both efficient and effective. We mentioned two such models from the IR literature, DeepCT and doc2query, both of which, despite outperforming BM25 in MRR, still left a very large gap to the quality that we see with BERT. We asked ourselves: can we achieve high MRR and low computational cost at the same time? Can we do better? To answer this question, let us begin exploring more expressive paradigms for efficient neural IR. The next paradigm here is the representation similarity paradigm. In the representation similarity paradigm we begin by tokenizing the query and the document. And we feed each of them independently through an encoder, like BERT, for example. This encoder is then used to produce a single-vector representation for the query and for the document, separately. So for BERT we could take the output embedding of the CLS token, for example, or we could average all the final-layer outputs. Once we have those, we finally calculate the relevance score of this document to our query as a single dot product between two vectors. This paradigm is very efficient for retrieval. First, each document can be represented as a vector offline, and this precomputed representation can be stored on disk before we even start conducting search. Moreover, the similarity computation between a query and a document here is very cheap and thus very efficient, as it's just a single dot product between two vectors. A very large number of IR models are representation similarity models. Many of those actually precede BERT, like DSSM and SNRM. But in the last year and a half, we've seen numerous similarity models based on BERT for IR tasks, including SBERT, ORQA, DPR, DE-BERT and ANCE, among others. Many of these models were actually proposed concurrently with each other, and their primary differences lie in the specific tasks that each one targets and the supervision approach each one suggests. So let us delve deeper into a representative, one of the earlier and most popular models among those. This is the Dense Passage Retriever, or DPR, by Karpukhin et al., which appeared at EMNLP just a few months ago. DPR encodes each passage, or document, as a 768-dimensional vector, and similarly for each query. During training, DPR produces a similarity score between the query and the positive passage-- so that's the relevant passage we wanted to retrieve-- as well as between the query and a few negatives. Some of them are sampled from the BM25 top-100 and others are in-batch negatives, which are actually positives, but for other queries in the same training batch. Once DPR has all of those scores during training, it then optimizes a classification loss, namely an N-way classification loss-- cross-entropy with softmax over the scores of the one positive and all of these negatives, with the target of selecting the positive passage, of course. DPR was not tested on the MS MARCO dataset by the original authors, but subsequent work by Xiong et al. tests a DPR-like retriever on MS MARCO and achieves 31% MRR. They also then suggest more sophisticated approaches for supervision which can increase this MRR by a couple of points. 
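As a rough illustration of the in-batch negative training just described, here is a minimal PyTorch sketch. It is not the actual DPR code: the two linear layers stand in for the query and passage BERT encoders, the input features are random tensors in place of tokenized text, and the BM25 hard negatives are omitted.

```python
# A minimal sketch of DPR-style training with in-batch negatives, assuming
# we already have a query encoder and a passage encoder (here, stand-in
# linear layers rather than the two BERT encoders used in the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_dim, embed_dim, batch_size = 256, 768, 8

query_encoder = nn.Linear(hidden_dim, embed_dim)    # stand-in for the query BERT
passage_encoder = nn.Linear(hidden_dim, embed_dim)  # stand-in for the passage BERT

# Pretend these are pooled features for a batch of queries and their gold
# (positive) passages; in a real system they come from tokenized text.
query_feats = torch.randn(batch_size, hidden_dim)
passage_feats = torch.randn(batch_size, hidden_dim)

q = query_encoder(query_feats)        # (batch, 768) query vectors
p = passage_encoder(passage_feats)    # (batch, 768) passage vectors

# Score every query against every passage in the batch with a dot product.
# The diagonal holds each query's positive; the off-diagonal entries act as
# in-batch negatives (positives for the *other* queries in the batch).
scores = q @ p.T                      # (batch, batch)
targets = torch.arange(batch_size)    # gold index for each query

# N-way classification: cross-entropy with softmax over the scores.
loss = F.cross_entropy(scores, targets)
loss.backward()
print(float(loss))
```

At search time, the passage vectors produced by the passage encoder are precomputed offline and indexed, so answering a query only requires encoding that one query and running a fast similarity search over the stored vectors.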
So both of these demonstrate considerable progress over the learned term weight models that we looked at before, like DeepCT or doc2query. But they still substantially trail behind BERT's much higher effectiveness. So why is that? As it turns out, representation similarity models suffer from two major downsides when it comes to IR tasks. First are their single-vector representations, which involve cramming each query and each document into one rather low-dimensional vector. Second is their lack of fine-grained interactions during matching. Representation similarity models estimate relevance as one dot product between two vectors, and as a result they lose the term-level interactions between the query terms and the document terms that we had in query-document interaction models like BERT-- in fact, even simple term weighting models like BM25 or DeepCT had, by design, some element of term-level matching that we lose here. So our next natural question then becomes: can we obtain the efficiency benefits of precomputation that we get from representation similarity models while still keeping the fine-grained term-level interactions that we used to have before with a model like BERT or DeepCT? Toward answering that question, I think it helps to review the neural IR paradigms we've seen so far. On the left-hand side, we looked at the learned term weights paradigm. These models offered independent encoding of queries and documents, which was great for efficiency, but they forced us to work with a bag-of-words query that loses all context, and thus they were not as competitive as we wanted them to be. We then explored the representation similarity models, which also allowed us to compute independent encodings of queries and documents, which again was really useful for efficiency. But this time we were forced to work with single-vector representations and we lost our fine-grained term-level interactions, which we intuitively believe to be very useful for matching in IR tasks. On the right-hand side we looked initially, actually, at the query-document interaction models like standard BERT classifiers. These offered very high accuracy but were extremely expensive to use because the entire computation for one document depended on both the query and the document. We simply couldn't do any precomputation in this case offline in advance. So can we somehow combine the advantages of all three paradigms at once? Before we answer that question there's actually one final feature, one final capability of the first two paradigms that we should discuss. Query-document interaction models, which are quite expensive, forced us to use a re-ranking pipeline. This is a pipeline where we re-scored the top 1,000 documents that we already retrieved with BM25. Sometimes that's OK, but in many cases this can be a problem because it ties our recall to the recall of BM25, which is ultimately a model that relies on finding terms that match exactly across queries and documents, and so it can be quite restrictive in many cases. When recall is an important consideration, we often want the neural model that we trained to do end-to-end retrieval. That is, to search quickly over all the documents in our collection directly, without the re-ranking pipeline. The learned term weight and representation similarity models that we've looked at so far alleviate this constraint, and this is a big advantage for them. 
So, specifically, when we learn term weights, we can save these weights in the inverted index just like with BM25, and that allows us to obtain fast retrieval. When we learn vector representations, it also turns out that we can index these vectors using libraries for fast vector similarity search, like FAISS, f-a-i- double s. This relies on efficient data structures that support pruning, which is basically finding the top-K matches, say the top 10 or the top 100 matches, without having to exhaustively enumerate all possible candidates. The details of search with these pruning data structures is beyond our scope, but it's really useful to be aware of this important capability for end to end retrieval. OK, so let's go back to our last main question. Can we obtain the efficiency benefits of precomputation while still having the fine-grained term level interactions that we used to have? The neural IR paradigm that will allow us to do this is called Late Interaction, and this is something that I've worked on here at Stanford. So let's build late interaction from the ground up. We'll start, as usual with tokenization of the query and the document. We'll seek to independently encode the query and the document but into fine-grained representations this time. So, as you can see on the left hand side, this is actually not hard. As it's shown, we can feed two copies of BERT, the query, and the document separately, and keep all the output embeddings for responding to all the tokens as our fine-grained representation for the query and for the document. OK. So, we're only going to be done here once we actually close this loop. Right? We still need to estimate relevance between this query and that document. Essentially we have two matrices and we need a notion of similarity between these two matrices or these two bags of vectors. However, not every approach will suffice. We insist that we get a scalable mechanism that allows us to use vector similarity search with pruning to conduct end to end retrieval in a scalable fashion across the entire collection. In doing this, it turns out that a very simple interaction mechanism offers both scaling and high quality. So here's what we'll do. For each query embedding, as I show here, we compute a maximum similarity score across all of the document embeddings. So this is just going to be a cosine similarity giving us a single partial score for this query term, which is the maximum cosine similarity across all of the blue embeddings in this case. We'll repeat this here for all the query embeddings and we'll simply sum all of these maximum similarity scores to get our final score for this document. So, we will refer to this general paradigm here as late interaction, and to this specific model shown here on top of BERT as ColBERT. And the intuition is simple. For every term in the query, we're just trying to softly and contextually locate that term in the document, assigning a score to how successful this matching was. Let me illustrate this with a real example from the MS MARCO ranking development set, and I hope it will be quite intuitive once you see it. At the top is a query and at the bottom is a portion of the correct passage that ColBERT retrieves at position one. Because we have the simple late interaction mechanism, we can actually explore the behavior. 
And we can see in this particular example that ColBERT matches, through maximum similarity operators, the word "when" in the question with the word "on" in the phrase "on August 8," which is a date as we might expect. It matches the word "transformers" with the same word in the document. It matches "cartoon" with "animated." And it matches the individual words, "come" and "out," with the term "released" in the phrase "it was released on August 8th" in the document, as we might intuitively expect. So, we're basically just trying to contextually match these query terms in the document and assign some matching score for each of these terms. So notice here and remember that ColBERT represents each document as a dense matrix of many vectors and in particular one vector per token. And this differs from the representation similarity models we looked at before, which try to cram each document into one vector. And what makes this possible is the maximum similarity operators that we have on top of these matrix representations. So how well does ColBERT do? And how does it do with this gap that we have here between efficient models and highly effective ones? Well, by redesigning the model architecture and offering a late interaction paradigm, ColBERT allows us to achieve quality comparative with BERT at a small fraction of the costs. Perhaps more importantly, ColBERT can scale through the entire collection due to pruning with end to end retrieval. All 9 million passages here in this case, while maintaining subsecond latencies, And thus it allows much higher recall than traditional re-ranking pipelines permit. All right. So far we've looked at in-domain effectiveness evaluations. Basically cases where we had training and evaluation data for the IR task at hand, which was MS MARCO, so far. But we often want to use retrieval in new out-of-domain settings. We just want to throw our search engine at a difficult problem without training data, without validation data, and see it perform well. We briefly discussed BEIR before, which is a recent effort to test IR models in a zero-shot setting where the models are trained on one IR task and then they're fixed, and then they are tested on a completely different set of tasks. BEIR includes 17 IR data sets and there are nine different IR tasks or scenarios, and the authors, Nandan et al., compared a lot of the IR models that we discussed today in a zero-shot manner against each other across all of these tasks. So let's take a look. Here we have BM25 results for an interaction model, which is in this case ELECTRA, which tends to perform slightly better than BERT for ranking. We have two representation similarity models, DPR and SBERT. And we have a late interaction model, which is ColBERT. The best in each row-- in each IR task is shown in bold. And we see that across all tasks the strongest model at NDCG@10 is always one of the three models that involve term level interactions, which are ELECTRA, ColBERT, and BM25. Interestingly, the single vector approaches, which seemed quite promising so far, failed to generalize robustly according to these results. Whereas ColBERT, which is also a fast model, almost matches the quality of the expensive ELECTRA ranker. The results, so far, were on the metric NDCG@10, which is a precision oriented metric-- looks at the top results. But here I have the author's results after the task level aggregation considering recall at 100. 
And here, although the results are rather similar when we consider recall, one major difference is that ColBERT's late interaction mechanism, which allows it to conduct end-to-end retrieval with high quality, lets it achieve the strongest recall in this case. And so we can conclude, basically, that scalable fine-grained interaction is key to robustly high recall. Of course, notice that the BM25 and ELECTRA recall here is the same, since ELECTRA just re-scores the top 100, in this case, from BM25. So this concludes our neural IR section of the NLU plus IR series. In the next screencast, we will discuss how scalability with these retriever models can actually drive large gains in quality, not just speed-- which we haven't seen so far except in the recall case-- and how tuning a neural IR model fits into a larger downstream open-domain NLU task.
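To wrap up this screencast, here is a minimal PyTorch sketch of the MaxSim late-interaction scoring described above. The random, normalized matrices stand in for the per-token output embeddings that ColBERT would produce with BERT; this sketches the scoring step only, not indexing or pruning.

```python
# A minimal sketch of the late interaction (MaxSim) scoring used by ColBERT,
# with random matrices standing in for the per-token BERT output embeddings.
import torch
import torch.nn.functional as F

num_query_tokens, dim = 8, 128

# One embedding per token, L2-normalized so dot products are cosine similarities.
Q = F.normalize(torch.randn(num_query_tokens, dim), dim=-1)
D = F.normalize(torch.randn(120, dim), dim=-1)   # a document with 120 tokens

def late_interaction_score(Q, D):
    """Sum, over query tokens, of the maximum cosine similarity that each
    query token achieves against any document token."""
    sim = Q @ D.T                                # (|q| x |d|) token-level similarities
    max_per_query_token = sim.max(dim=1).values  # best document match per query token
    return max_per_query_token.sum().item()

print(late_interaction_score(Q, D))

# Ranking a small set of documents is then just scoring and sorting.
docs = [F.normalize(torch.randn(torch.randint(50, 200, ()).item(), dim), dim=-1)
        for _ in range(3)]
ranking = sorted(range(len(docs)),
                 key=lambda i: late_interaction_score(Q, docs[i]), reverse=True)
print(ranking)
```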
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Neural_IR_part_1_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
OMAR KHATTAB: Hello, everyone. Welcome to part 3 of the series. This screencast will be the first of two or three on neural IR. And in it, we'll be exploring the inputs, outputs, training and inference in the context of neural IR. Let's quickly start with a reminder of our setup from the previous screencast. Offline, we're given a large corpus of text documents. We will pre-process and index this corpus for fast retrieval. Online, we're giving a query that we want to answer. Our output will be a list of the Top-K most relevant documents for this query. In the classical IR screencast, we discussed BM25 as a strong term-matching retrieval model. So should we just use BM25? The short answer is that we could. But if our interest is getting the highest quality that we can, then we should probably be using neural IR. As we will see, neural IR makes a lot of use of our NLU work in creative and interesting ways. The long answer to whether we should be using BM25 is that it depends. Among other factors, it depends on our budget. Each IR model poses a different efficiency-effectiveness tradeoff. In many cases, we're interested in maximizing effectiveness, maximizing quality, as long as efficiency is acceptable. Let's begin to explore this on the MS MARCO collection that we introduced in the previous screencast. Here we'll be measuring effectiveness using the mean reciprocal rank at cut-off 10. And we will measure efficiency, and in particular, latency using milliseconds. This figure here shows BM25 retrieval using the popular toolkit called Ansereni as one data point within a wide range of MRR values and latency possibilities. Just as a reminder, lower latency is better. And the latency here is shown on a logarithmic scale. And higher MRR is also better. The higher our MRR is, the better the model's quality. So what else could exist in this large empty space for now? We're going to see this space fill up with many different neural IR models over the next couple of screencasts. And the central question now and then will generally be, how can we improve our MRR@10 or whatever effectiveness metric we choose to work with, possibly at the expense of increasing latency a bit? OK, so let's actually take a look at how neural IR models will work, specifically at their input and output behavior. For the purposes of this short screencast, we'll treat the neural ranker as a black box. We will consider various implementations for this black box function in the next screencast. We will feed this neural IR black box a query and a document. And the model will do its thing and return to us a single score that estimates the relevance of this query to that document. For the same query, we will repeat this process for every document that we want to score. And we will finally sort all of these documents by decreasing relevance score. And that will give us the Top-K list of results. So far, this sounds simple enough. But how should we train this neural model for ranking? This might not be super obvious, but one pretty effective choice is simply two-way classification, pair-wise classification. Here, each training example will be a triple. Specifically, each training instance will contain a query, a relevant or positive document, and an irrelevant document, or a negative. In the forward pass during training, we'll feed the model the query and the positive document. And separately, we'll feed the query and the negative document to the neural ranker. 
And we optimize the entire neural network end to end with gradient descent, using a simple classification loss-- in this case, cross-entropy loss with softmax. The goal here is to maximize the score of the positive document and to minimize the score assigned to the negative document. Recall that we can get positives for each query from our relevance assessments, and that every document that was not labeled as positive can often be treated as an implicit negative. So we could use this to generate triples for 2-way classification training for our neural ranker. Once our neural ranker is trained, inference, or actually conducting the ranking, is very easy. Given a query, we'll just pick each document, pass the query and the document through the neural network, get a score, and then we'll sort all the documents by score. And this will give us the Top-K list of documents. However, there is one very major problem. Collections often have many millions, if not billions, of documents. Even if our model is so fast that it processes each document in one microsecond, one millionth of a second, it would still require nine seconds per query for a dataset like MS MARCO with nine million passages, which is way too slow for most practical applications. To deal with this in practice, neural IR models are often used as re-rankers-- models that rescore only the Top-K documents obtained by another model to improve the final ranking. One of the most common pipeline designs is to re-rank the top 1,000 documents obtained by BM25. This can be great because it cuts down the work for a collection with 10 million passages by a factor of 10,000, because we only need to rank 1,000 documents with the neural model. But it also introduces an artificial ceiling on recall-- it limits recall in an artificial way, since now all of the relevant documents that BM25, our first-stage ranker, fails to retrieve cannot possibly be ranked highly by our shiny new neural re-ranker. So, can we do better? It turns out that the answer is yes. We'll discuss the notion of end-to-end retrieval later, where our neural model will be able to quickly conduct the search by itself over the entire collection without a re-ranking pipeline. But first, we'll discuss a number of neural re-rankers in detail in the next screencast.
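Here is a minimal PyTorch sketch of the two ideas in this screencast: pairwise (triple-based) training of a ranker and re-ranking of a first-stage candidate list. The tiny feed-forward scorer and the random feature tensors are stand-ins for a real BERT-based ranker operating on tokenized query-document pairs.

```python
# A minimal sketch of pairwise training and top-k re-ranking, with a tiny
# stand-in scoring network in place of a real BERT-based ranker.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 64

# Stand-in neural ranker: maps a (query, document) feature vector to a score.
ranker = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(ranker.parameters(), lr=1e-3)

def train_step(q_pos_feats, q_neg_feats):
    """One step of 2-way (pairwise) classification on a batch of triples:
    push the positive document's score above the negative's."""
    pos_scores = ranker(q_pos_feats)                       # (batch, 1)
    neg_scores = ranker(q_neg_feats)                       # (batch, 1)
    logits = torch.cat([pos_scores, neg_scores], dim=1)    # (batch, 2)
    targets = torch.zeros(len(logits), dtype=torch.long)   # positive is class 0
    loss = F.cross_entropy(logits, targets)                # softmax cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Fake (query, positive) and (query, negative) features for 16 triples.
print(train_step(torch.randn(16, feat_dim), torch.randn(16, feat_dim)))

def rerank(candidate_feats, k=10):
    """Re-rank the candidates returned by a first-stage retriever (e.g. the
    BM25 top 1,000): score each one and sort by decreasing score."""
    with torch.no_grad():
        scores = ranker(candidate_feats).squeeze(1)
    order = torch.argsort(scores, descending=True)
    return order[:k].tolist()

print(rerank(torch.randn(1000, feat_dim)))   # indices of the re-ranked top 10
```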
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Data_Organization_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 4 in our series on methods and metrics. We're going to be talking about how we organize datasets for the purposes of conducting evaluations in NLP. Let's begin with the classic train/dev/test split. This is a very common format for datasets in our field, especially for the very large publicly available ones. And it's really good in the sense that in releasing datasets with these splits predefined, we do ensure some consistency across the different evaluations that people run. It does presuppose that you have a fairly large datasets because after all, right from the get go, you are setting aside a whole lot of examples in the dev and test splits that you can't use at all to train your systems. So even though your system might benefit from those examples, you can't use them in that context. They can be used only for evaluations. You're just giving up a lot of potentially useful examples. As we've discussed many times, we're all on the honor system when it comes to that test set. It's distributed as part of the data set, but it has a privileged status. The test set can be used only once all of system development is complete, and then you do a single evaluation on the test set and report that number completely hands off. This is vital for our field because it's the only way that we can even hope to get a true picture of how our systems are truly generalizing to new examples. That said, the downside of having predefined train/dev/test splits is that inevitably everyone is using those same dev and test sets. And what that means is that over time as we see consistent progress on a benchmark test, we're taking that same measurement on that same test set. And it can be hard to be sure whether we're seeing true progress on the underlying task or the result of a lot of implicit lessons that people have learned about what works and what doesn't for that particular test set. And that's true even if everyone is obeying that honor code and using the test set only for truly final evaluations. Nonetheless, information can leak out, and we might start to mistake true progress when we're actually just seeing progress on that particular test set. And I think the only way that we can really combat this is by continually setting new benchmark tests for ourselves with new test sets so that we see how systems perform in truly unseen environments. As you leave NLP, it's common to find datasets that don't come with that predefined train/dev/test split, and that poses some methodological questions for you. This is especially true for small public data sets that you see out there. And this poses a challenge for assessment. For robust comparisons, you really have to run all your models using your same assessment regime. That is, the same splits. And that's especially important if the dataset is small because, of course, in a small dataset, you're probably going to get more variance across different runs. And this can make it really hard to compare outside of the experimental work that you're doing. If someone has published the results of some random 70/30 train-test split, unless you can reconstruct exactly the splits that they used, it might be unclear whether you're doing a true apples-to-apples comparison. So that's something to keep in mind. And it does mean that if you can for your own experiments, you might impose a split right at the start of your project. This is probably feasible if the data set is large. 
And what it will mean is that you have a simplified experimental setup, and you have to do less hyperparameter optimization just because there are fewer moving parts in your underlying experimental setup. It does presuppose that you have a pretty large data set because, as I said before, you have to give up a whole bunch of examples to dev and test. But it will simplify other aspects of your project if it's feasible. For small datasets though, imposing a split might leave too little data, leading to highly variable performance. And in that context, if that's the kind of behavior that you observe, you might want to move into the mode of cross-validation. So cross-validation, in this context we take a set of examples, say our entire data set, and we partition them into two or more train-test splits. And you might do that repeatedly and then average over the results of evaluations on those splits in some way to give a holistic summary of system performance. And in that way, even as those numbers vary-- they might have a lot of variance-- we're still getting in the average, we hope, a pretty reliable measure of how the system performs in general on the available data. And I'm going to talk about two ways to do cross-validation, each with its own strengths and weaknesses. Let's begin with what I call random splits here. So under the random splits regime, you take your dataset. And let's say k times, you shuffle it, and you split it. And you have t% for train then probably the rest left out for test. And on each one of those splits, you conduct some kind of evaluation, get back your metrics. And then at the end of all these k evaluations, you probably average those metrics in some way to give a single summary number for system performance. In general, but not always, when we do these splits, we want them to be stratified in the sense that the train and test splits should have approximately the same distribution over the classes in the underlying data. But I have been careful to say that this is not always true. There could, for example, be contexts in which you would like your test set to stress test your system by having a very different distribution, maybe an even distribution, or one that's heavily skewed towards some of the smaller but more important classes. And that will pose a challenge for train-test regimes because the system's experiences at train time will be different in this high level distributional sense from what it sees at test time. But that, of course, might be part of what you're trying to pursue as part of your overall hypothesis. The trade-offs for this kind of cross-validation-- the good is that you can create as many splits as you want without having this impact the ratio of training to testing examples, right, because k times-- we're just going to do a random split. And it can be consistent that we do independent of k 70% train, 30% test or 50-50, or whatever we decide we want that's independent of the number of splits that we set. The bad of this, of course, is that there's no guarantee that every example will be used the same number of times for training and for testing. And for small data sets, this could, of course, be a concern because you might be introducing unwanted correlations across the splits in, for example, never having certain hard examples be part of your test set just as a matter of chance. So that's something to keep in mind. 
But of course, for very large data sets, it's very unlikely that you'll be susceptible to the bad part of this, and then you do get a lot of the benefits of the freedom of being able to run lots of experiments with a fixed train-test ratio. And of course, as usual, scikit-learn has a lot of tools to help you with this. So I've just given some classic examples down here from the model_selection package. You might import ShuffleSplit or StratifiedShuffleSplit. And of course, train_test_split is a useful utility for very quickly and flexibly creating splits of your data. And I make heavy use of these throughout my own code. The second regime for cross-validation that I'd like to discuss is K-fold cross-validation. Here, the method is slightly different. We're going to take our dataset and split it into three folds, in this case for three-fold cross-validation. You could, of course, pick any fold number that you wanted. And then, given that it's three-fold cross-validation, we're going to conduct three experiments-- one where fold 1 is used for testing and folds 2 and 3 are merged together for training, a second experiment where we hold out fold 2 for testing and the union of folds 1 and 3 is used for training, and then finally a third experiment where fold 3 is used for testing and folds 1 and 2 are concatenated for the train set. The trade-offs here are slightly different from the trade-offs for random splits. The good of this is that every example appears in a train set exactly k minus 1 times and in a test set exactly once. We have that guarantee in virtue of the fact that we use a single partition into folds to conduct our three experiments. The bad of this is that the number of folds k determines the size of the train and test sets. Just consider that for three-fold cross-validation, we're going to use 67% of the data for training and 33% for testing. But if three experiments is not enough, if we want 10 folds, the result of that will be that we use 90% of our data for training and 10% for testing. And the bottom line is that those are very different experimental scenarios from the point of view of the amount of training data that your system has, and probably the variance that you see in testing, because of the way you're changing the size of the test set. Whereas for the random splits that we just discussed, we have independence between the number of splits and the percentage of train and test examples that we're going to have. And that can be very freeing, especially for large data sets, where the value up here in the good is really less pressing. And, again, scikit-learn has lots of tools for this. I've actually just given a sample of them here. You have KFold, StratifiedKFold, and then cross_val_score is a nice wrapper utility that will, again, give you flexible access to lots of different ways of conceptualizing K-fold cross-validation.
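Here is a small scikit-learn sketch of the two regimes discussed in this screencast, using a synthetic dataset and a logistic regression classifier purely for illustration; the metric and split sizes are arbitrary choices.

```python
# A small sketch of the two cross-validation regimes discussed above, using
# a synthetic dataset and a simple classifier purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import (StratifiedKFold, StratifiedShuffleSplit,
                                     cross_val_score)

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
model = LogisticRegression(max_iter=1000)

# Regime 1: random (shuffle) splits -- five independent 70/30 splits.
splitter = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=42)
scores = []
for train_idx, test_idx in splitter.split(X, y):
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], preds, average="macro"))
print("random splits, mean macro-F1:", np.mean(scores))

# Regime 2: k-fold cross-validation -- every example is tested exactly once.
kfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
cv_scores = cross_val_score(model, X, y, cv=kfold, scoring="f1_macro")
print("3-fold CV, mean macro-F1:", cv_scores.mean())
```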
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Classical_IR_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
OMAR KHATTAB: Hello, everyone. Welcome to part 2 of our series on NLU and IR. The screen cast will be a crash course in classical IR, as well as evaluation methods and information retrieval. Let us first define the simplest form of our task, namely ranked retrieval. We will be given a large collection of text documents. This could be all of the passages in Wikipedia, perhaps a crawl of parts of the web or maybe all of the documentation of Hugging Face or other software libraries. This corpus will be provided to us offline, that is before we interact with any users. And we will be able to spend a one time effort at organizing or otherwise understanding the content of these documents in the corpus before we start searching. Online though, we will receive a query from the users, which could be a natural language question written in English, for example. The goal of our ranked retrieval system will be to output a Top-K list of documents, sorted in decreasing order of relevance to the information need that the user expressed in the query. So this might be the top 10 or the top 100 results. So how do we conduct this task of ranked retrieval? As it turns out, we've already looked at the way for doing this before, when discussing matrix designs. In particular, we know that we can build Term-Document Occurrence Matrices. And in such a matrix like the one shown, each term-document pair has a corresponding cell in which the matrix will store the number of times that the term appears in the document, in our corpus. Of course, we will probably want to apply some sort of reweighting here, because we don't want to work with these row counts. But once we've done that, we can already answer queries that contain just a single term pretty well. And to do that, we would basically just return the K-documents with the largest weight after normalization or other processes for the single term. And again, as it turns out, this is precisely what is done in classical IR if we have just a single query-- just a single term in our query. When we have multiple terms in the same query, classical IR tends to treat them independently. So we would basically add the weights up across all of the terms in the query per document. And then, that's the score for the document. This is precisely the computation that is shown here, where we compute the relevance score between a query and a document. We would go over all the terms in the query and simply add the corresponding document-term weights for all of these terms for that document. This gives us a score for the document and we can then return the K-documents with the largest total scores. Interestingly, this reduces much of classical IR, of course, not all of it, to thinking about how do we best weigh each term-document pair, which has an undeniable similarity to our first task this quarter in Homework 1. Except of course, that here in IR, we look at the term to document relevance, and not word to word relatedness. So thinking about term-document weighting, here are some intuitions that might be useful, as we think about what makes a strong term weighting model in IR. Of course, later, in the next screencast in particular, we'll be looking at neural models that go beyond this. But for now, perhaps the two most prominent intuitions for term-document weighting are connected to our first unit's discussion of frequency and normalization. In particular, if a term t occurs frequently in document d. 
The document is more likely to be relevant for queries that include the term t or so is one of our intuitions. And in terms of normalization, if that term t is quite rare, so if it occurs in only a few documents overall, we take that as a stronger signal that document d is even more likely to be relevant for queries including t. Lastly, if document d is rather short, we take that as also yet another signal that might increase our confidence that the term t was included in that rather short document for a reason. Taking a step back and thinking more broadly, we're still functioning under the same statement from the first unit. Our goal is ultimately to amplify the important signals, trustworthy and the unusual and to deemphasize the mundane and the quirky. There are so many different term-weighting functions in IR, but most of them are directly inspired by TF-IDF and take a very similar computational form. For TF-IDF, this is a slightly different version to the one used in Unit 1. What I have here is slightly different. This is the more popular version in the context of IR applications. But TF-IDF is overloaded frequently, and you will see multiple implementations, if you go look for them. So we'll define N to be the size of the collection. And DF or document frequency of a term, DF of term to be the number of documents that contain that term in the collection. Then TF or term frequency of a term-document pair will be defined as the logarithm of the frequency of this term in this document, with 1 just for mathematical reasons. IDF or inverse document frequency is defined as the logarithm of N divided by the document frequency of the term. TF-IDF is then nothing but the product of these two values for each query term summed up at the end to assign a single overall score to each document by summing up across all query terms, as we've discussed before. Of course, higher scores are better and the Top-K scoring documents are those that we would return to the searcher if we were to use TF-IDF. Notice how both TF and IDF grow sub-linearly, in particular, logarithmically with frequency and 1 over DF, respectively. A much stronger term weighting model in practice is BM25 or best match number 25. And as you might imagine, it took many attempts until BM25 was developed. For our purposes, unlike TF-IDF, term frequency in BM25 saturates towards the constant value for each term, and also, it penalizes longer documents when counting frequencies, since a longer document will naturally contain more occurrences of its terms. These are the main differences, and it really helps BM25 in practice be a much stronger term weighting model. Now that we've decided the behavior of these weighting functions or at least a couple of them, how would we actually implement this as an actual system that we could use for search? So let's think about this, whereas the raw collection, the actual text, supports fast access from documents to terms, so basically, [INAUDIBLE] gives us the terms of each document. The term-document matrix that we've studied so far, allows fast access from a term to the documents. So it's a bit of the reverse process. Unfortunately, the term-document matrix is way too sparse and contains too many zeros to be useful, since the average term does not occur in the vast majority of documents, if you think about it. For the inverted index, that's where it comes in. This is a data structure that solves this problem. 
It's essentially just a sparse representation of our matrix here, which maps each unique term in the collection. So each unique term in our vocabulary is what we call a posting list. The posting list of a term t simply enumerates all of the actual occurrences of the term t in the documents, recording both the ID of each document in which the term t appears. And also, its frequency in each of these documents So beyond term weighting models, IR, of course, contains lots of models for other things. So they're models for expanding queries and documents. This basically entails adding new terms to queries or to documents, or to both, to help with the vocabulary mismatch problem that we discussed in the first screencast of the series, basically, when queries and documents use different terms to express the same thing. There's also plenty of work on term dependence and phrase search. Notice that so far, we've assumed the terms in each query and in each document are independent, and we function in a bag-of-words fashion. But work on term dependence and phrase search relaxes these assumptions that each query is a bag of independent terms. Lastly, there is also lots of work on learning to rank with various features, like how to estimate relevance when documents have multiple fields, like maybe a title, a body, some headings, a footer and also, anchor text, which is a very strong signal when you have it, like in web search. So this is basically the text from links in other pages to your page. The text in those links or around those links tends to be very useful as a relevant signal. And of course, also things like PageRank with link analysis and lots of other features for IR, like recency and other stuff. But I think it's worth mentioning that until recently, if you just had a collection that you want to search and you didn't want to do a lot of tuning, BM25 was a very strong baseline on the best that you could do ad hoc, so without lots of tuning and without lots of training data, et cetera. And this only changed a year or two ago with the advent of BERT-based ranking, which we'll discuss in detail in the next screencast of this set of the series. OK, so we just built an IR system. How do we evaluate our work? What is success like? Well, a search system, as you can imagine, must be both efficient and effective. If we had infinite time and infinite resources, we would just hire experts to look through all the documents one by one to conduct the search, but clearly, we don't have that sort of ability. So efficiency in IR is paramount, after all, we want our retrieval models to work with subsecond latencies for collections that may have hundreds of millions of documents, if not even larger than that. The most common measure of efficiency in IR is latency, which is simply the time it takes to run one query through the system. Say, on average or perhaps, at the tail, like the 95th percentile, for example. But you can also measure throughput in queries per second. Space, how much, maybe the inverted index takes on disk versus say a term-document matrix. How well do you scale to different collection sizes in terms of the number of documents or the size of the documents? And how do you perform on the different query loads? Many queries, few queries, short queries, long queries. And lastly, of course, what sort of hardware do you require? Is it just one CPU core? Many cores? A bunch of GPUs? But latency tends to be kind of, once you've determined the other ones, it's the go-to metric in most cases. 
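Before moving on to effectiveness, here is a minimal sketch of the inverted index and TF-IDF scoring described above, over a toy three-document collection. It follows the log-scaled TF and IDF variant given in this screencast; the documents and query are made up, and a real system would of course also handle tokenization, stemming, stopwords, and so on.

```python
# A minimal sketch of an inverted index plus TF-IDF ranking over a toy
# collection, following the log-scaled TF and IDF variant described above.
import math
from collections import defaultdict

docs = {
    0: "gastric acid protects the stomach against pathogens",
    1: "the stomach produces gastric acid and proteases",
    2: "wikipedia is a free online encyclopedia",
}

# Inverted index: term -> posting list of (doc_id, term frequency in doc).
index = defaultdict(list)
for doc_id, text in docs.items():
    counts = defaultdict(int)
    for term in text.split():
        counts[term] += 1
    for term, freq in counts.items():
        index[term].append((doc_id, freq))

N = len(docs)

def tfidf_scores(query, k=2):
    """Score documents for the query as the sum over query terms of
    (1 + log tf) * log(N / df), then return the top-k (doc_id, score) pairs."""
    scores = defaultdict(float)
    for term in query.split():
        postings = index.get(term, [])
        if not postings:
            continue
        idf = math.log(N / len(postings))      # len(postings) is the term's df
        for doc_id, tf in postings:
            scores[doc_id] += (1 + math.log(tf)) * idf
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

print(tfidf_scores("gastric acid stomach"))
```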
More central to our discussion today, and we'll focus on this for the rest of the screencast is IR effectiveness or the quality, basically of an IR system. And here we ask, do our top-k rankings for a query satisfy the users' information need? Answering this question tends to be harder than evaluation for typical machine learning tasks, like classification or regression because we're not really just taking an item and assigning it a class. We're trying to rank all of the items in our corpus with respect to a query. In practice, if you have lots of users, you could run online experiments where you basically give different versions of your system to different users and compare some metrics of satisfaction or conversion, basically in terms of purchases or otherwise. But for research purposes, we're typically interested in reusable test collections. That's collections that allow us to evaluate IR models offline and then, compare them against each other. Building a test collection entails three things. First, we need to decide on a document collection for our corpus, a set of test queries, and we need to find or get or produce relevant assessments for each query. If resources permit, a collection could also include the train dev split of queries, but given the high annotation cost, it's actually not uncommon in IR to find or create only a test set. The key component of a test collection is the relevance assessments. These are basically human annotated labels for each query that enumerate for us, whether specific documents are relevant or not to that query. These query document assessments can either be binary or they could take on a more fine grained graded nature. An example of that is grading a query document pair as -1, 0, 1 or 2, with meanings of hey, this is a junk document, -1. You should not retrieve it for any query or this document is irrelevant, but it might be useful for other queries or this document is quite relevant for this query, but it's not a perfect match. Or here is a really, really good match for our query, which would be a score of 2 or 3, depending on the grades that you're using for the assessments. As you might imagine, because we work with potentially many millions of documents, it's usually infeasible to judge every single document for every single query. So instead, we're often forced to make the assumption that unjudged documents are not relevant or at least to ignore them in some metrics of IR. Though for most purposes, they are treated as not relevant. Some test collections take this further and only label one or two key documents per query as relevant and assume everything else is not relevant. So this tends to be useful when you work with particular data sets. And you want to keep it in mind as you do evaluation. So many of the test collections out there in IR are annotated by TREC or the Text Retrieval Conference, which includes annual tracks for competing and comparing IR systems. For instance, the 2021 TREC Conference has tracks for search in the context of conversational assistance, health misinformation, fair ranking, and has a very popular deep learning track, as well, which we'll discuss in more detail. Each TREC campaign emphasizes careful evaluation with a very small set of queries. So just 50 queries is a very typical size, actually. But TREC extensively judges many, many documents, possibly hundreds of documents or even more for each query here. So you can imagine an alternative, which we'll look at next where you have lots of queries. 
But you only judge a very small number of key documents for those queries with the intention that the performance that you get will average out over a large enough set of queries. And this is exactly what happens in MS MARCO Ranking Tasks, which is a collection of really popular IR benchmarks by Microsoft. And this MARCO contains more than half a million Bing search queries. And this is the largest public IR benchmark. Each query here is assessed with one or two relevant documents, and we assume everything else is not relevant, and having this sparse annotation is often not a problem at all for training because we have so many training instances. And so, MS MARCO provides a tremendous resource for us when it comes to building and training IR models, especially in the neural domain. It also turns out that sparse labels are not too bad for evaluation, either. Especially because of the size of the test queries, we can use many thousands of test queries, an average of results across all of them to get a pretty reliable signal about how different systems compare. There are multiple test collections out there on top of MS MARCO. And so, there's the original passage ranking task and newer document ranking task where the documents are much longer, but there's fewer of them. And then, there is also a track, the deep learning track, which we've mentioned before, which is happening every year since 2019 and which uses the MS MARCO data, especially for training, mostly. But has far fewer queries for testing with lots more labels for evaluation, a lot more extensive assessments and judgments for evaluation, so these are much denser labels. There are also plenty of other rather domain specific IR benchmarks, many of which are collected in this table by Nandan et al. in a very recent preprint. As you can see, these benchmarks vary greatly in terms of the training size, if there is any training at all. The test set size, the average query length, the average document length, and many other factors. BEIR, or benchmarking for IR is a recent effort by Nandan et al. here to use all of these different data sets for zero shot or out of domain testing of IR models. Specifically, in BEIR, we take already trained IR models that do not have access to any validation or training data on these downstream IR tasks and test them out of the box to observe their out of domain retrieval quality, so that is without training on these new domains. OK, so we now have a test collection with queries, documents, and assessments. How do we compare IR systems in this collection? First, we will ask each IR system to produce its Top-K ranking, say its top 10 results. And we'll use an IR metric to compare all of these systems at that cutoff K. The choice of IR metric and the cutoff K will depend entirely on the task, so I will briefly motivate each metric as we go through them. All of the metrics we will go through are simply averaged across all queries. And so, to keep things simple, I will show the computation of the metric for just one query in each case, but you want to keep in mind that this is averaged across queries. Let us start with two of the simplest IR metrics, which are Success and MRR. For a given query, let rank be the position of the first relevant document that we can see in the Top-K list of results. Success@K will just be 1, if there is a relevant result in the Top-K list, and 0 otherwise. 
This is a very simple metric, as you can see, that can be useful in cases where we assume that the user just needs one relevant result anywhere in the Top-K. And in particular, it can be useful if our retrieval is fed to a downstream model that looks at the Top-K results and then does something with them. Such a model would read all of them anyway, so we're just interested in whether a relevant result shows up somewhere in there. Mean reciprocal rank, or MRR, also assumes that the user only needs one relevant result in the Top-K, but it assumes that the user does care about the position of that relevant document in the ranking. So a relevant document at the second position, for example, is only given half of the weight of a relevant document in the top position. You're probably already familiar with precision and recall, but let's define them here in the context of Top-K ranked retrieval. For a given query, let Ret(K) be the set of Top-K retrieved documents, and let Rel be the set of all documents that were judged as relevant as part of our assessments. In this case, Precision@K is just the fraction of the retrieved items that are actually relevant, and Recall@K is the fraction of all the relevant items that are actually retrieved. A pretty popular metric is also MAP, or mean average precision-- just average precision for one query-- which essentially brings together notions from both precision and recall. To compute average precision for one query, we will add up the Precision@i for every position i from 1 through K where the i-th document is relevant. We will divide this whole quantity by the total number of documents that were judged as relevant for this query. All of the metrics that we've considered so far only interact with binary relevance, that is, they just care whether each document that is retrieved is considered relevant or not relevant. DCG, or discounted cumulative gain, works with graded relevance-- so for instance, 0, 1, 2, and 3. For each position in the ranking from 1 through K, we will divide the graded relevance of the retrieved document at that position by the logarithm of the position, which essentially discounts the value of a relevant document if it appears late in the ranking. Unlike the other metrics, the maximum DCG is often not equal to 1. So we can also compute normalized DCG, or NDCG, by dividing, for each query, by the ideal DCG. This is obtained, basically, if all of the relevant documents are at the top of our Top-K ranking and they are sorted by decreasing relevance-- so all of the 2's before all of the 1's, before all of the 0's that are not relevant in this case. All right, having discussed classical IR and evaluation in this screencast, we will focus on neural IR, and in particular state-of-the-art IR models that use what we've learned so far in NLU, in the next screencast.
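To make these metrics concrete, here are minimal Python implementations for a single query; in practice each one is averaged over all the queries in the test collection. The toy ranking and judgments are made up, and the nDCG discount below uses the common log2(rank + 1) formulation of "the logarithm of the position."

```python
# Minimal implementations of the ranking metrics described above, computed
# for a single query; in practice each metric is averaged over all queries.
import math

def success_at_k(ranking, relevant, k):
    """1 if any relevant document appears in the top-k, else 0."""
    return int(any(doc in relevant for doc in ranking[:k]))

def mrr_at_k(ranking, relevant, k):
    """Reciprocal rank of the first relevant document in the top-k (0 if none)."""
    for i, doc in enumerate(ranking[:k], start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0

def precision_at_k(ranking, relevant, k):
    return sum(doc in relevant for doc in ranking[:k]) / k

def recall_at_k(ranking, relevant, k):
    return sum(doc in relevant for doc in ranking[:k]) / len(relevant)

def average_precision(ranking, relevant, k):
    """Average of Precision@i over the relevant positions i in the top-k."""
    hits, total = 0, 0.0
    for i, doc in enumerate(ranking[:k], start=1):
        if doc in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant)

def ndcg_at_k(ranking, grades, k):
    """Graded relevance: DCG of the ranking divided by the ideal DCG."""
    def dcg(rels):
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(rels, start=1))
    gains = [grades.get(doc, 0) for doc in ranking[:k]]
    ideal = sorted(grades.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if ideal else 0.0

# Toy example: a ranking of document ids, with binary and graded judgments.
ranking = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d1", "d9"}
grades = {"d1": 3, "d9": 2, "d4": 1}
print(success_at_k(ranking, relevant, 5), mrr_at_k(ranking, relevant, 5))
print(precision_at_k(ranking, relevant, 5), recall_at_k(ranking, relevant, 5))
print(average_precision(ranking, relevant, 5), ndcg_at_k(ranking, grades, 5))
```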
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Retrofitting_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello everyone. Welcome to part 6 in our series on distributed word representations. This can be considered an optional part but it's on the irresistibly cool idea of retrofitting vectors to knowledge graphs. Here are the central goals. On the one hand, as we've seen, distributional representations are powerful and also easy to obtain, but they tend to reflect only relatively primitive semantic notions like similarity, or synonymy, or connotation, or relatedness, so that might feel limiting. On the other hand, structured resources like knowledge graphs, while sparse and kind of hard to obtain, support really rich learning of very diverse semantic distinctions. So the question naturally arises, can we have the best aspects of both of these? And the inspiring answer given by retrofitting is yes. We can combine them. The original method for doing this is due to this lovely paper Faruqui et al., 2015, which I'm going to be giving a brief summary of in the screen cast. So here is the retrofitting model. It consists of two sums and they constitute kind of opposing forces. Imagine that we have an existing embedding space, like GloVe, or some embedding space that you built yourself-- that's q-hat. And we're learning these qi's and qj's. The term on the left is basically saying, remain faithful to those original vectors as you learn these new vectors qi. Try not to be too dissimilar from where you started. That pressure is balanced against the pressure on the right, which is saying, make representations that look more like the neighbors for the current node in the knowledge graph, which is defined by this set of relations E. So two opposing pressures, on the one hand, we're saying to be faithful to the original, on the other hand, we're saying look more like your neighbors in the knowledge graph. If we set alpha to 1 and beta to 1 over the out-degree for the node that we're targeting, then we have basically balanced these two pressures. If we set alpha really large, we'll mostly want to stay faithful to the original vectors. If we set beta comparatively very large, then we'll mostly want to look like the neighbors in the knowledge graph, and we won't remain so tethered to the original embedding space that we started with. This illustration kind of nicely depicts what happens in the model-- the gray vectors of the original embedding space. We have these knowledge graphs that connect the associated nodes. And because they're connected in the retrofitting space which is given in white, these nodes are kind of pulled together and look more similar. There's a bunch of code for doing retrofitting in the course repository, and I'll just show you a few quick illustrations using that code. Let's start with a simple case, we have a very simple knowledge graph where node 0 is connected to node 1 and node 0 is connected to node 2, just directionally. What happens when we run the retrofitting model is that 0 is called equally close to 1 and to 2, kind of equidistant between them and closer to both than it was in the original embedded space. Here's a situation in which every node is connected to every other node that's represented on the left here. That's where we start, and as a result of running the retrofitting model with alpha and beta set in their default parameters, what happens is that triangle just gets smaller in kind of fully symmetric way as the nodes become more similar to each other because of the graph structure. Here's a kind of degenerate solution. 
If I set alpha to 0, I have no pressure to be faithful to the original vectors. All I care about is looking like my neighbors from the term on the right. And as a result, all these vectors shrink down to be the same point after the model runs for a few iterations. If instead I had done the opposite and made alpha really large compared to beta, then basically nothing would have happened in the learning of the triangle. It would remain its original size. It's worth considering some extensions. So I think the fundamental limitation of this model is that it is kind of assuming, right there in its objective, that to have an edge between nodes is to say that they are similar. But of course, the whole point might be that your knowledge graph has very rich edge relations corresponding to different linguistic notions like antonymy. And we certainly wouldn't want to treat synonymy and antonymy as the same relation and just assume that it meant similarity in our model. So there are various extensions. I think the most general extension that I've seen is from a paper that I was involved with led by Ben Lengerich, which is called functional retrofitting, which allows you to very flexibly learn different retrofitting modes for different edge semantics. And once you start down that road, you have a really natural connection with the literature on graph embedding, that is, learning distributional representations for nodes in knowledge graphs. And this paper led by Will Hamilton is an outstanding overview of methods in that space. And then you have this nice synergy between NLP methods and methods that are more associated with work on knowledge graphs and social networks and so forth. And finally, here are some code snippets just showing some simple illustrations of the sort that I showed you earlier in the screen cast. And I would just mention at the end here, if you would like to apply these methods to WordNet, which could be a powerful ingredient for the first assignment and bake off, I would encourage you to check out this notebook vsm_03_retrofitting, because it walks through all the steps for doing that.
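To make those two opposing pressures concrete, here is a minimal sketch of the iterative retrofitting update, in the spirit of Faruqui et al. 2015. This is not the course repository's implementation, just a toy numpy version, and the example graph and numbers are made up.

```python
import numpy as np

def retrofit(Q_hat, edges, alpha=1.0, n_iters=10):
    """Toy retrofitting sketch (in the spirit of Faruqui et al. 2015).

    Q_hat : (n, d) array of original embeddings (e.g., GloVe rows).
    edges : dict mapping node index -> list of neighbor indices.
    alpha : pressure to stay close to the original vectors.
    """
    Q = Q_hat.copy()
    for _ in range(n_iters):
        for i, neighbors in edges.items():
            if not neighbors:
                continue
            beta = 1.0 / len(neighbors)  # balances the two pressures
            # New q_i: weighted average of the original vector and its neighbors.
            numerator = alpha * Q_hat[i] + beta * Q[neighbors].sum(axis=0)
            denominator = alpha + beta * len(neighbors)
            Q[i] = numerator / denominator
    return Q

# Node 0 is connected to nodes 1 and 2, as in the first illustration above.
Q_hat = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = {0: [1, 2], 1: [], 2: []}
print(retrofit(Q_hat, edges))  # node 0 is pulled equally toward 1 and 2
```

In the fully connected triangle case, where every node is listed as a neighbor of every other, setting alpha to 0 in this sketch makes all three vectors collapse toward a single point, just as described above, while a very large alpha leaves the original space essentially untouched.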
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Sentiment_Analysis_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
Hello, everyone. This video kicks off our series of screencasts on supervised sentiment analysis. I just want to provide you with an overview of the problem and of the kind of work we'll be doing, and also a rationale for why we'll be doing it. So here's an overview of the entire unit. I want to, in this screencast, motivate for you the idea that sentiment analysis is a deep problem and an important problem for NLU, not only scientifically but also for industrial applications. In the next screencast, I'll give you some practical general tips for doing sentiment analysis. Following that, we'll have two short screencasts that introduce our core data sets, the Stanford Sentiment Treebank and a new data set called DynaSent. After that, I'll introduce the code base that we'll be working with on the assignment and the bake-off. That's sst.py, which is included in the course code distribution. And I'm going to use that code to illustrate some important methodological issues surrounding supervised learning in general, which would be hyperparameter tuning and comparison of different classifiers to see whether they are different in some significant statistical sense. Then we'll talk about feature representation, both for large sparse linear models with handbuilt features, and also with more deep learning oriented distributional representations. And that will be a nice segue into the final unit, which is on using recurrent neural networks as classifiers for supervised sentiment analysis. My hope is that this unit can provide a refresher on core concepts in supervised learning, introduce you to the problem of sentiment analysis, which I think is, as I said, a central problem for natural language understanding, and also set you on your way toward doing the assignment and the bake-off and possibly building projects in this space. For the associated materials, as I said, we've got a bunch of code. sst.py is the core module. And then we have a notebook introducing the Stanford Sentiment Treebank as a dataset. We have a second notebook that's on what I've called handbuilt features and mostly linear models, and then a third notebook that's on using neural networks, which more or less pushes you to using distributional representations instead of handbuilt features, although, as you'll see, the notebooks explore various combinations of these ideas. The homework and the bake-off are in the notebook homework sentiment. And I'm going to introduce that probably in a separate screencast. The core readings are the two papers that are oriented around our data sets, the Stanford Sentiment Treebank and DynaSent. And as supplementary readings, you might enjoy this compendium from Pang and Lee. It's a kind of overview of the whole field of sentiment analysis. And it poses challenges and questions that are still relevant to this day. And then Goldberg 2015 is an excellent overview of using neural networks in NLP, very generally. But with lots of helpful notation and so forth that we're aligned with. And that might help you get a feel for the landscape of modeling choices that you might make in this space and in subsequent units for this course. So I want to start by just motivating the idea that sentiment analysis is an interesting problem. Because you often hear people say things like sentiment analysis is solved or it's overly simplistic or just too easy. And I think none of those things are true. And to motivate that, I just want to do a little data driven exercise with you.
So for these examples, you should ask yourself, which of these sentences expresses sentiment at all? And for the ones that you think do express sentiment, what is that sentiment? Is it positive or negative or maybe neutral or something else? So you might think those are straightforward questions but this is going to get difficult really fast. Consider the first example. There was an earthquake in California. This is probably going to sound like bad news to you. And many sentiment analysis systems will assign this negative sentiment. But we should ask ourselves, is this actually a sentiment-laden sentence? It is on the face of it merely stating a fact. And we might hold that for sentiment to be expressed, we need some kind of subjective evaluative perspective to be included in here, like it was bad that there was an earthquake in California. And absent the "it was bad" clause, this might just be a neutral statement of something that had happened. But the important point here is that unless we settle these questions, we'll have continued indeterminacy about what we're actually doing. The team failed to complete the challenge. Is that positive or negative? We might agree that it's more than just a statement of fact, although it's a borderline case even for that question. But if we did decide it was sentiment-laden, we would need to figure out the perspective of the speaker. Is the speaker advocating for this team or advocating for a different team, right? We win. We lose. It's really going to depend on how the speaker is involved. And that, of course, is going to have to become part of our definition of what we're doing when we assign sentiment labels. They said it would be great. On the face of it, this expresses no speaker perspective at all. This is merely reporting what somebody else said. And we need to decide for those obviously different perspectives what we're going to do in terms of sentiment analysis. Because after all, this could continue. They said it would be great and they were right, which is straightforwardly positive. But it could also continue, they said it would be great and they were wrong. And I think that reveals that sentence 3 is not so obviously encoding a particular speaker perspective, whereas these clauses are what really tell the story for us as sentiment analysts. And then we get into things that you might call non-literal use of language. The party fat cats are sipping their expensive imported wines. This has a lot of positive language in it. Maybe only fat cats is the thing that sounds like a direct smear. But I think we could agree that overall, this is probably negative in its valence. And that will be a challenge for our systems and also a challenge for us in just characterizing precisely what was done here in terms of sentiment. Here's a similar example, oh, you're terrible. This might be a criticism. And it might therefore be straightforwardly negative. On the other hand, it could be a kind of teasing form of social bonding that overall has a positive effect on the discourse. How are we going to resolve that kind of context dependence? Here's another one. "Here's to ya, ya bastard." It's got some negative language, even something that's kind of like a swear. But this could be a friendly, jocular phrase of some kind. And we'll have to sort out whether it's friendly and fun because of its negativity, or whether this is straightforwardly just a positive sentence. And then here's a case that's just going to be a challenge for our systems. 
This is about the movie 2001. This is from an actual review. "Many consider the masterpiece bewildering, boring, slow moving, or annoying." There is a lot of negative language there. In fact, there's very little that's positive except "masterpiece." But I think we can all anticipate that overall this is probably going to be a positive review of that movie. So that just shows you that even if we're clear about what we're doing in terms of sentiment, the linguistic challenge here is significant. And we could also extend that to sentiment expressions like long-suffering fans, bittersweet memories, hilariously embarrassing moments. These are things that are going to blend positivity and negativity and all sorts of other emotional dimensions in ways that just make sentiment analysis very difficult to do reliably. And that's a nice segue into this topic of sentiment analysis in industry. Because of course, sentiment analysis is one of the first tasks that was really transformed by data-driven approaches. And it was the first task to really make an impact in industry. There are lots of startups and companies that offer sentiment analysis tools. And it has obvious import for things like marketing, and customer experience, and so forth. And the first thing I would say is that to this day, the sentiment from industry, so to speak, is that sentiment analysis tools still fall short. This is from an article from 2013. And the gist of it is, anyone who says they're getting better than 70% today is lying, generally speaking. From whatever notion of 70% we have here, I think we can agree that that's too low and that we as a field ought to be offering tools that are better. This is another kind of equivocal headline. Emotion AI technology has great promise when used responsibly. Affective computing knows how you feel, sort of. The "sort of" is kind of like the equivalent of 70% here. I think it shows that there's a lot of work to be done if we're going to have the kind of impact we want to have in the technological sphere. And then there's another dimension to this which we're not going to really get to capture but is worth planting in your minds because this could become projects, right? We're going to do classification of sentiment into positive, negative, and neutral. And that's often the starting point for these industry tools. Many business leaders think they want these pie charts that point out like 30% negative, 70% positive. And then in Q2, the negativity is slightly up. And that's surely a leading indicator of something. It looks like negativity is on the rise. But the issue is, what do you do? How does this help with decision making? Merely classifying these texts and showing change over time is not enough for any business leader to take action. We need to know why this is happening, what the underlying factors are, basically what the customers are saying beyond these gross classifications into positive, negative, and neutral. And we should be pushing ourselves to design tools that can offer that next layer of insight. Affective computing, this is a kind of transition into the wider world here. We're going to focus on just sentiment analysis. But you could think about emotional analysis and all other kinds of context-dependent expression in language, and put that under the heading of affective computing. This is a diagram from a paper I did a few years ago with Moritz Sudhof. It's a diagram of emotions and other kinds of moods that people feel. The arcs give you a transition.
So they show that people tend to transition systematically from one emotional state to another. So what we're seeing here is basically just that this is a very high dimensional space. It's not just positive, negative, neutral. We have a wide range of feelings, and moods and emotions and states that we go into. And there's a lot of structure to how we experience those moods in our lives. And it would be great to break out of the simple, positive, negative mode and tackle all of these dimensions. And in that spirit, what I've done on this slide is just list out a whole bunch of other tasks that you might consider adjacent to sentiment analysis. But they are meaningfully different from sentiment analysis, things like subjectivity, bias, stance taking, hate speech, microaggressions, condescension, sarcasm, deception and betrayal, online trolls, polarization, politeness, and linguistic alignment. These are all deeply social things that are influenced by and shape our language. And I've selected these papers in particular because all of them have really nice, crisp statements of the problem and/or really great public data sets that you could use for experiments in this wide world. I think that's a very exciting space to explore as a kind of next step from what we're doing in this unit. But back down to Earth here. Our primary data sets, as I said, are going to be the ternary formulation of the Stanford Sentiment Treebank, which is just positive, negative, neutral. And also the DynaSent data set, which has that same ternary formulation. The SST is movie reviews. DynaSent is mostly reviews of products and services, I think heavily biased toward restaurants because the underlying data is from Yelp. And then for the bake-off, we're going to have a new dev/test split. We'll use SST-3 as well as this new one, a corpus of sentences from restaurant reviews. So you can see that DynaSent might be an asset here. They all have this ternary formulation. And I'm hoping that the combination of these data sets gives us a really interesting perspective, not only on sentiment analysis but also on kind of how we design systems that effectively transfer across domains and maybe learn simultaneously in multiple domains.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Model_Evaluation_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is Part 5 in our series on methods and metrics. We're going to be talking about essential selected topics in model evaluation in our field. Here's our overview. I'd like to start by talking about baselines and their role in experimental comparisons. Then we'll discuss hyperparameter optimization, both the process and the motivations, as well as compromises that you might have to make due to resource constraints and other constraints. We'll touch briefly on classifier comparison, which is a topic we covered in the sentiment analysis unit. And then we'll close with two topics that are really pressing for deep learning models, which are assessing models without convergence and the role of random parameter initialization in shaping experimental results. So let's begin with baselines. The fundamental insight here is that in our field evaluation numbers can never be understood properly in isolation. Let's consider two extreme cases. Suppose your system gets 0.95 F1, then you might feel like you can declare victory at that point. But it will be natural for people who are consuming your results to ask, well, is the task too easy? Is it really an achievement that you've got 0.95? Or would even simpler systems have achieved something similar? At the other end of the spectrum, suppose your system gets 0.6 F1, you might think that means you haven't gotten traction. But we should ask two questions. First, what do humans get as a kind of upper bound? And also, what would a random classifier get? And if your 0.6 is really different from the random classifier and human performance is kind of low, we might then see that this 0.6 F1 is a real achievement. That kind of shows you that baselines are just crucial for strong experiments in our field. So defining baselines should not be an afterthought for you, but rather central to how you define your overall hypothesis. Baselines are really important for building a persuasive case. And they can be used to illuminate specific aspects of the problem that you're tackling and specific virtues of your proposed system. What this really comes down to is that, right from the start, you might be saying, for example, here's a baseline model, here's my proposed modification of it. And the way we test your hypothesis is by comparing the performance of those two systems. In that context, you can see that the baseline is playing a crucial role in quantifying the extent to which your hypothesis is true. And therefore, careful model comparisons at that level are going to be really fundamental to successful pursuit of the hypothesis. When in doubt, you could include random baselines in your results table. They are very easy to set up and can illuminate what it's like if we're just making random predictions. And here I'm showing you that scikit-learn kind of has you covered on this point. They have two classes, DummyClassifier and DummyRegressor, each with a wide range of different ways that they can make random guesses based on the data. And I would encourage you to use these classes because it will make it easy for you to fit the random baselines into your overall experimental pipeline, which will reduce the amount of code that you have to write and possibly avoid bugs that might come from implementing these baselines yourself. So strongly encouraged. And kind of at the other end of the spectrum, you might think about whether, for your task, there are task-specific baselines that you should be considering.
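As a quick illustration of how cheaply random baselines can be slotted into a pipeline, here is a minimal sketch with scikit-learn's DummyClassifier. The toy features and labels are made up; in a real experiment these would be your featurized train and dev splits.

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

# Toy supervised data standing in for real featurized splits.
X_train = [[0], [1], [2], [3], [4], [5]]
y_train = ["pos", "pos", "pos", "neg", "neg", "neu"]
X_dev = [[0], [3], [5]]
y_dev = ["pos", "neg", "neu"]

for strategy in ("most_frequent", "stratified", "uniform"):
    baseline = DummyClassifier(strategy=strategy, random_state=42)
    baseline.fit(X_train, y_train)
    preds = baseline.predict(X_dev)
    # Macro-F1 so that all classes count equally, as elsewhere in the course.
    print(strategy, f1_score(y_dev, preds, average="macro"))
```

Task-specific baselines deserve the same kind of attention.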
Because they might reveal something about the dataset, or the problem, or the way people are going about modeling the problem. We saw an example of this before in the context of natural language inference. We saw that hypothesis-only baselines tended to make predictions that were as good as 0.65 to 0.70 F1, which is substantially better than the baseline random chance which would be at about 0.33. And that's revealing to us that when we measure performance, we should really be thinking about gains above that hypothesis-only baseline. Comparisons against random chance are going to vastly overstate the extent to which we have made meaningful progress on those datasets. The story of the Story Cloze task is somewhat similar. Here the task is to distinguish between a coherent and incoherent ending for a story. And people observed that systems that looked only at the ending options were able to do really well. There is some bias in coherent and incoherent continuations that leads them to be pretty good evidence for making this classification decision. Again, you might think that reveals that there's a fundamental problem with the data set. And that might be true. But another perspective is simply that when we do comparisons and think about the model's performance, it should be with this as the baseline and not random guessing. OK, let's talk about hyperparameter optimization. We discussed this in our unit on sentiment analysis. And we walked through some of the rationale. Let me quickly reiterate the full case for this. First, hyperparameter optimization might be crucial for obtaining the best version of your model that you can, which might be your fundamental goal. Probably for any modern model that you're looking at there is a wide range of hyperparameters. And we might know that different settings of them lead to very different outcomes. So it's in your best interest to do hyperparameter search to put your model in the very best light. We also talked at length about how this is a crucial step in conducting fair comparisons between models. It's really important that when you conduct a comparison you not put one model in its best light with its best hyperparameter settings and have all the other models be kind of randomly chosen or even poorly chosen hyperparameter settings, because that would lead to unfair comparisons and exaggerate differences between the models. What we want to do is compare the models all with their best possible hyperparameter settings. And that implies doing extensive search to find those settings. And the third motivation you might have is just to understand the stability of your architecture. We might want to know for some large space of hyperparameters which ones really matter for final performance, maybe which ones lead to really degenerate solutions, and which space of hyperparameters overall perform the best. So that we have more than just a single set of parameters that work well, but maybe real insights into the overall settings of the models that are really good. There's one more rule that I need to reiterate here. All hyperparameter tuning must be done only on train and development data. It is a sin in our field to do any kind of hyperparameter tuning on a test set. All of that tuning should happen outside of the test set. And then as usual, you get one run on the test set with your chosen parameters and that is the number that you report as performance on the test data. 
That's the only way that we can really get a look at how these systems behave on completely unseen data. So this is really crucial for understanding progress in our field. Now hyperparameter optimization, as you can imagine, can get very expensive. And let's review that and then talk about some compromises, right? The ideal for hyperparameter optimization is that you identify a large set of hyperparameter values for your model. You create a list of all the combinations of those values. This will be the cross product of all the values of the hyperparameters that you identified. And for each of the settings, you should cross-validate it on the available training data. And then, choosing the settings that were best at step three, train on all the training data using those settings. And then finally, evaluate on the test data. That is the ideal here. And let's just think about how that's actually going to work. Suppose for our example that we have one hyperparameter and it has five values. And we have a second hyperparameter with ten values. Then the cross product is going to lead us to have 50 total settings for those hyperparameters. Suppose we add a third hyperparameter with two values. Now the number of settings that we have has jumped up to 100. If we want to do 5-fold cross-validation to select those optimal parameters, then we are talking about doing 500 different experiments. That's probably perfectly fine if you're dealing with a small linear model with some hand-built features. But if you are fitting a large transformer-based model where each experiment takes you up to one day, this is going to be prohibitively expensive in terms of time or compute resources. And that's going to compel us to make some compromises. This is the bottom line here. The above picture, that ideal, is untenable as a set of laws for our scientific community. If we adopted it, then complex models trained on large data sets would end up disfavored and only the very wealthy would be able to participate. And just to give you a glimpse of just how expensive this could get, here's a quotation from this nice paper on NLP and machine learning for health care. In their supplementary materials, they report that performance on all of the above neural networks was tuned automatically using Google Vizier with a total of over 200,000 GPU hours. For me as a private citizen, that could easily cost a half a million dollars just for the process of hyperparameter optimization. And that's what I mean by this being kind of fundamentally untenable for us. So what should we do in response? We need a pragmatic response here. Here are some steps that you can take to alleviate the problem, in what I view as kind of descending order of attractiveness. So starting with the best option, you could do some random sampling and maybe guided sampling to explore a large space of hyperparameters on a fixed computational budget. You could do search based on just a few epochs of training, right, rather than allowing your model to run for many epochs, which could take a whole day. You might select hyperparameters based on one or two epochs, on the assumption that settings that are good at the start will remain good and settings that are bad at the start will remain bad. That's a heuristic assumption. But it seems reasonable. You could possibly bolster it with some learning curves and so forth. And that could vastly cut down on the amount that you have to spend in this search process. You can also search based on subsets of the data. This would be another kind of compromise.
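To make the cross-product arithmetic above concrete, here is a tiny sketch; the hyperparameter names and values are made up and simply stand in for whatever your model actually exposes.

```python
from itertools import product

# Hypothetical grid mirroring the 5 x 10 x 2 example above.
grid = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4, 1e-5],                 # 5 values
    "hidden_dim": [25, 50, 75, 100, 125, 150, 175, 200, 225, 250],   # 10 values
    "activation": ["relu", "tanh"],                                   # 2 values
}

settings = list(product(*grid.values()))
print(len(settings))      # 5 * 10 * 2 = 100 combinations
print(len(settings) * 5)  # 500 runs if each is 5-fold cross-validated
```

In practice, the compromises above amount to sampling or pruning this list rather than exhausting it.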
One caveat on searching over subsets of the data: because a lot of hyperparameters are dependent on dataset size (think of regularization terms), this might be riskier than the version in 2 there, where we just trained for a few epochs. You also might do some heuristic search, maybe by defining which hyperparameters matter less and then setting those by hand. And then you might just describe that process in the paper, explaining that via a few observations you made some guesses about parameters that you could fix and therefore explored a smaller subset of the space than you might have liked to explore. Again, I think if you make the case and you're clear about this, readers will be receptive because we're aware of the costs. You can also find optimal hyperparameters via a single split of your data and use them for all subsequent splits. That would be justified if the splits are very similar and your model performance is very stable. And that would reduce all that cross-validation that did cause the number of experiments we had to run to jump up by a large amount. And finally, you might adopt others' choices. Now the skeptic will complain that these findings don't translate to new data sets, but it could be your only option: if you observe, for example, that for some very large model the original authors used settings X, Y, and Z, then you might simply adopt them, even knowing that your dataset or your test might call for different optimal settings. It isn't the best. But if it's the only thing that you can afford, it's certainly a reasonable case to make. Finally, some tools for hyperparameter search. As usual, scikit-learn has a bunch of great tools for this: GridSearchCV, RandomizedSearchCV, and HalvingGridSearchCV. GridSearchCV will be the most expensive, RandomizedSearchCV the least expensive, and HalvingGridSearchCV will help you kind of strategically navigate through the space of hyperparameters. And if you want to go even further in that direction, the scikit-optimize package offers a bunch of tools for doing model-based, performance-driven exploration of a space of hyperparameters. And that could be very effective indeed. All right, let's talk briefly about classifier comparison. We've-- it's a topic we've reviewed before, but I'll just briefly recap. The scenario is this. Suppose you have assessed two classifier models. Their performance is probably different to some degree numerically. What can be done to establish whether those models are different in some meaningful sense? As we've discussed, I think guidance from the literature is that first we could consider practical differences. If you just observed that one model makes 10,000 more highly important predictions than another, then that might be sufficient to make the case that it's the better model. For differences that are narrower, again the guidance is that we might use confidence intervals on repeated runs. Or the Wilcoxon signed-rank test to give a single summary statistic of whether or not the different runs are truly different in their means and variance. You could use McNemar's test if you can only afford to run one experiment, whereas the Wilcoxon and confidence intervals will require you to run 10 to 20 different experiments, which again, could be prohibitively expensive. And in those situations, you might fall back to McNemar's test, because it's less expensive and arguably better than nothing, especially in scenarios where it's hard to tell whether there are practical differences between the systems.
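Here is a minimal sketch of that paired comparison with the Wilcoxon signed-rank test from scipy; the scores below are invented stand-ins for the 10 to 20 paired runs you would actually collect.

```python
from scipy import stats

# Hypothetical macro-F1 scores from 10 paired runs (e.g., the same 10 random
# splits evaluated with two different systems).
model_a = [0.512, 0.530, 0.521, 0.518, 0.540, 0.525, 0.533, 0.519, 0.528, 0.522]
model_b = [0.505, 0.534, 0.516, 0.511, 0.538, 0.520, 0.527, 0.510, 0.524, 0.515]

stat, p = stats.wilcoxon(model_a, model_b)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p:.3f}")
# A small p-value suggests the paired differences are systematic rather than
# run-to-run noise; it says nothing about whether they matter practically.
```

If only a single run per model is affordable, statsmodels has an implementation of McNemar's test that can be applied to the two models' per-example right/wrong outcomes on a shared assessment set.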
Finally, let's talk about two topics that seem especially pressing in the context of large scale deep learning models. And the first is assessing models without convergence, right? When working with linear models, convergence issues rarely arise, because the models seem to converge quickly based on whatever threshold you've set. And convergence implies kind of maximized performance in a wide range of cases. With neural networks however, convergence issues really take center stage. The models rarely converge even based on liberal thresholds that you might set. They converge at different rates between runs, so it's hard to predict. And their performance on the test data is often heavily dependent on these differences, right? Sometimes a model with a low final error turns out to be great. And sometimes it turns out to be worse than one that finished with a higher error. Who really knows what's going on? Our only fallback in these situations is just to do experiments and observe what seems to work the best. So I think a very natural and easy to implement response to this that proves highly effective is what I'm calling incremental dev-set testing. This is just the idea that as training proceeds, we will regularly collect information about performance on some held out dev-set as part of the training process. For example, at every 100th iteration, you could make predictions on that dev-set and store those predictions for some kind of assessment. All the PyTorch models for our course have an early stopping parameter that will allow you to conduct experiments in this way and keep hold of what seemed to be the best model, performance-wise, and then report that based on the stopping criteria that you've set up. And with luck, heuristically, that will give you the best model in the fewest epochs. And the early_stopping parameter has a bunch of different other settings that you can use to control exactly how it behaves, which might be important for particular model structures or data sets. Here's a bit of our motivation for early stopping. You might be thinking, why not just let my model run to convergence if I possibly can? In the context of these large, very difficult optimization processes, that could lead you really far astray, right? So here is a picture of a deep learning model. And you can see its error going down very quickly over many iterations. And it looks like you might want to iterate out even to 80 epochs of training. However, if you look at performance on that held-out dev set, you see that this model actually very quickly reached its peak of performance. And then all that remaining training was just either wasting time or eroding the performance that you saw early on in the process. And this is exactly why, since held-out performance is our real goal here, you might want to do some kind of dev-set testing with early stopping. The final thing I'd want to say here is that all of this might lead us to get out of the mode of assuming that we should always be recording one number to summarize our models. We're dealing with very powerful models. In the limit, they might be able to learn very complicated things. And we might want to ask different questions like how quickly can they learn, and how effectively, and how robustly. And that might imply that what we really want to do is not report summary tables of statistics, but rather full learning curves with confidence intervals. This is a picture from a paper that I was involved with.
And I think it's illuminating to see a by-category breakdown of how the model is performing in addition to the overall average. Because you can see that, while this red model is arguably much better than the yellow and the gray overall, it's kind of hard to distinguish it globally from this blue model. But for various of the subcategories, you do see some differences. Whereas for others, you do see that they're kind of indistinguishable. It's a very rich picture. You can also see that early on for some of these categories some of these models are really differentiated. They learn more efficiently. Whereas by the time you've run out to 100,000 epochs, many of the model distinctions have disappeared. That's the kind of rich picture that is already giving us a sense for how different values and different goals we have might guide different choices about which model to use and different choices about how to optimize. And I would just love it if our field got into the habit of reporting this very full picture, as opposed to reducing everything to a single number. The final topic is the role of random parameter initialization. This is kind of yet another hyperparameter that's in the background that's much more difficult to think about. Most deep learning models have their parameters initialized randomly, or many of those parameters are initialized randomly. This is clearly meaningful for these non-convex optimization problems that we're posing. But even simple models can also be impacted if you're dealing with very small data sets with very large feature spaces. In this classic paper here, these authors just observed that different initializations for neural sequence models that were doing named entity recognition led to statistically significantly different results. That is, one and the same model with a different random seed was performing in ways that looked significantly different on these data sets. And a number of recent systems actually turned out to be indistinguishable in terms of their raw performance once this source of variation was taken into account. That's just a powerful example of how much a random seed could shape final performance in the context of models like this. Relatedly, at the other end of the spectrum, you can see catastrophic failure as a result of unlucky initialization. Some settings are great and some can be miserable failures. We don't really know ahead of time which will be which. And that means that we just have to be really attentive to how we're initializing these systems in a wide range of settings. And you'll notice that in the evaluation methods notebook that I've distributed as a companion to this lecture, I fit a simple feed-forward network, a very small one, on the classic XOR problem, which is one of the original motivating problems for using deep learning models at all. And what you see is that it succeeds about 8 out of 10 times, where the only thing that we're changing across these models is the way they are randomly initialized. And that again just shows you that this can be powerfully shaping final performance for our systems. And probably what we need to do is be thinking about this as yet another hyperparameter that we need to tune and optimize along with all the rest.
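The evaluation methods notebook runs this demonstration with a small network of its own; here is a rough stand-in using scikit-learn's MLPClassifier, just to show the seed effect. The architecture is arbitrary and the exact success count you see will vary with solver settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# The classic XOR problem: not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

successes = 0
for seed in range(10):
    # Identical data and architecture on every run; only the seed changes.
    net = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                        max_iter=5000, random_state=seed)
    net.fit(X, y)
    successes += int(net.score(X, y) == 1.0)

print(f"{successes}/10 initializations solved XOR")
```

Some seeds give a perfect solution and others leave the network stuck, which is exactly the kind of variation that argues for treating initialization as one more quantity to explore and report.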
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Highlevel_Goals_Guiding_Hypotheses_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to the very first screencast of the very first unit of our course. We're going to be talking about distributed word representations or vector representations of words. And for this screencast, I'm just going to cover some high-level goals we have for this unit as well as discuss the guiding hypotheses, not only for this unit, but also hypotheses that will be with us throughout the quarter. What I've depicted on this slide here is our starting point, both conceptually and computationally. This is a small fragment of a very large word by word co-occurrence matrix. So along the rows here, you have a large vocabulary of words. The first few are emoticons, at least word-like objects. Exactly that same vocabulary is repeated across the columns. And the cell values here give the number of times that each row word appeared with each column word in a very large text corpus. I think the big idea that you want to start getting used to is that there could be meaning latent in such co-occurrence patterns. It's not obvious to mere mortals that we could extract anything about meaning from such an abstract space, but we're going to see time and time again, this is actually a very powerful basis for developing meaning representations. To start building intuition, let's do a small thought experiment. So imagine that I give you a small lexicon of words, each one of them labeled as either negative or positive in the sense of sentiment analysis. Now, that might be a useful resource, but I've called this a hopeless learning scenario because if I give you four new anonymous words to make predictions on, this resource over here is not useful at all for making predictions. In fact, you have essentially no information to go on about what these anonymous words should be labeled. Contrast that with a situation in which I give you that label lexicon, but in addition, I give you the number of times that each lexicon word co-occurs in some large text corpus with the two words excellent and terrible. I think with that information with those columns from the word by word matrix, you can see that you have a lot of predictive power. In fact, a really simple classifier or even decision rule will be able to do really well at predicting these labels. If a word co-occurs more often with terrible than excellent, call it negative. If a word co-occurs with excellent more often than terrible, call it positive. That's a good predictive model. And now, if I give you four new anonymous words, and in addition, you're allowed to collect some co-occurrence information about them with respect to excellent and terrible, then your same rule will be able to make really good predictions about these new anonymous words. And that's the sense in which we move to a very promising learning scenario, and it's just a glimpse of how we could extract latent information about meaning from these co-occurrence patterns. And now just play it forward and think the vector space models that we'll be building will have not just two dimensions, but hundreds or even thousands of dimensions, and there's no telling how much information we'll find latent in such a high-dimensional space. So that brings me to these high-level goals here. First, we want to begin thinking about how these vectors could encode meanings of linguistic units. Get more used to the idea that I just introduced you to. 
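Here is that excellent/terrible thought experiment written out as a tiny decision rule. The words and counts are invented for illustration; only the logic matters.

```python
# Hypothetical co-occurrence counts with "excellent" and "terrible".
cooccurrence = {
    "superb":   {"excellent": 120, "terrible": 10},
    "dreadful": {"excellent": 8,   "terrible": 95},
    "mediocre": {"excellent": 30,  "terrible": 33},
}

def predict_sentiment(word):
    """Label a word by comparing its two co-occurrence counts."""
    counts = cooccurrence[word]
    if counts["excellent"] > counts["terrible"]:
        return "positive"
    if counts["excellent"] < counts["terrible"]:
        return "negative"
    return "unknown"

for w in cooccurrence:
    print(w, predict_sentiment(w))
```

Real vector space models apply these same distributional ideas, just with hundreds or thousands of columns rather than two hand-picked ones.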
These are foundational concepts that we'll be discussing not only for our unit on vector space models, which are also called embeddings in modern parlance, but in fact, these are foundational concepts for all of the more sophisticated deep learning models that we'll be discussing later on in the quarter. And of course, I'm really hoping that this material is valuable to you throughout the assignments that you do and also valuable for the original project work that you do in the second half of the course. Some guiding hypotheses. Let's start with the literature. I would be remiss in a lecture like this if I didn't quote JR Firth. "You shall know a word by the company it keeps." This is a glimpse at the kind of nominalist position that Firth took about how to do linguistic analysis. He's really saying that we should trust distributional information. Zellig Harris, a linguist working at around the same time, has an even purer statement of this hypothesis. Harris said, "distributional statements can cover all of the material of a language without requiring support from other types of information." Zellig Harris really only trusted usage information. I think we don't need to be so extreme in our position, but we can certainly align with Harris in thinking that there could be a lot about language latent in these distributional statements, that is, in co-occurrence patterns. We might as well quote Wittgenstein, "the meaning of a word is its use in the language." I think that's a nice connection that Wittgenstein might have in mind. Might be a point of alignment for him with Firth and Harris. I'm not sure. But finally, here is a kind of direct operationalization of our high-level hypothesis. This is from one of the recommended readings by Turney and Pantel, and they say, "if units of text have similar vectors in a text frequency matrix," like the co-occurrence matrix I showed you before, "then they tend to have similar meanings." If we buy that hypothesis, then we're kind of licensed to build these co-occurrence matrices and then make inferences about at least similarity of meaning on the basis of those objects we've constructed. To finish here, under the heading of great power, a great many design choices. I think one of the difficult things about working in this space is that there are a lot of moving pieces. The first choice you'll have to make is your matrix design. I've talked about the word by word matrix, but of course word by document, word by search proximity, adjective by modified noun, these are all different ways that you could construct your rows and your columns in one of these matrices. And that's going to be really fundamental. You'll capture very different distributional facts depending on what kind of matrix design you choose. And in a way, that's not even the first choice that you need to make because in constructing this matrix, you'll make a lot of choices about how to tokenize, whether to annotate, whether to do part of speech tagging for further distinctions, parsing, feature selection, and so forth and so on. You also have to decide how you're going to group your text. Is your notion of co-occurrence going to be based on the sentence or the document or maybe documents clustered by date or author or discourse context? All of those things would give you very different notions of what it means to co-occur, and that will feed into your matrix design. Having made all of those difficult choices, you're probably going to want to take your count matrix, and as we'll say, reweight it.
That is, adjust the values by stretching and bending the space in order to find more latent information about meaning. We're going to talk about a lot of methods for doing that. And then you might furthermore want to do some kind of dimensionality reduction, which is a step you could take to capture even higher-order notions of co-occurrence beyond the simple co-occurrences that you see evident in the original matrix. That's a powerful step. There are a lot of choices you could make there. And then finally, what's your notion of similarity going to be? For us, we'll operationalize that as a vector comparison method, like Euclidean distance, cosine distance, Jaccard distance and so forth. Depending on previous choices that you've made, the choice of vector comparison method might have a real impact on what you regard as similar and different in your vector space. So this is a kind of dizzying array of choices that you might have to make. There is a glimmer of hope, though. So models like GloVe and word2vec purport to offer package solutions, at least to the design, weighting, and reduction steps here. So they'll tell you, for instance, if you use GloVe, that it needs to be word by word, and then GloVe will simultaneously perform these two steps. And furthermore, for these methods, since they tend to deliver vectors that are pretty well-scaled in terms of their individual values, the choice of vector comparison might not matter so much. So models like GloVe and word2vec are a real step forward in terms of taming this space here. And we can add further that more recent contextual embedding models dictate even more of the design choices, possibly all the way back to how you tokenize. And so they could be thought of as even more unified solutions to the great many design choices that you have here. So that's kind of conceptually a real breakthrough. I will say, though, that baseline models constructed from the simple things that I have in these tables here are often competitive with these more advanced models. But of course, which combination works best is something that you'll probably have to discover empirically.
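To see why the choice of vector comparison can matter, here is a toy pair of count vectors; the numbers are made up, but they show how Euclidean distance is dominated by raw frequency while cosine distance cares only about the distribution.

```python
import numpy as np
from scipy.spatial.distance import cosine, euclidean

# Two made-up count vectors for words A and B over the same three columns.
a = np.array([10.0, 0.0, 2.0])
b = np.array([1000.0, 0.0, 200.0])  # same direction as a, far more frequent

print(euclidean(a, b))  # large: raw frequency dominates
print(cosine(a, b))     # ~0: the distributional shape is identical
```

Reweighting schemes have a similar motivation: they push comparisons toward distributional shape rather than sheer frequency.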
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Attention_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 5 in our series on natural language inference. We're going to be talking about attention mechanisms. Attention was an important source of innovation in the NLI literature, and, of course, it's only grown in prominence since then. Let's begin with some guiding ideas. In the context of the NLI problem, we might have an intuition that we just need more connections for a lot of our architectures between the premise and hypothesis, right? Possibly, in processing the hypothesis, we just need the model to have some reminders about what the premise actually contained. And whatever summary representation we have of that premise might just not be enough from the point of view of processing the hypothesis and feeding the representation into the classifier layer. Relatedly, there's a persistent intuition in the NLI literature that it's useful to softly align the premise and the hypothesis to find corresponding words and phrases between those two texts. It can be difficult to do that at a mechanical level, but attention mechanisms might allow us, via our data-driven learning process, to find soft connections in the weights for these attention layers between the premise and hypothesis and achieve some of the effects that we would get from a real alignment process. So let's begin with global attention. This is the simplest attention mechanism that you see in the NLI literature, but it's already quite powerful. And as you'll see, it has deep connections with the attention mechanisms in the transformer. So to make this concrete, let's start with a simple example. We have "every dog danced" as our premise, "some poodle danced" as our hypothesis, and they're fit together into this chained RNN model for NLI. Now, standardly, what we would do is take this final representation, hC, as the summary representation for the entire sequence and feed that directly into the classifier. What we're going to do when we add attention mechanisms is instead offer some connections back from this state into the premise states. The way that process gets started is via a series of dot products. So we're going to take our target vector, hC, and take its dot product with each one of the hidden representations corresponding to tokens in the premise. And that gives us this vector of unnormalized scores, just the dot products. And it's common, then, to softmax normalize those scores into our attention weights, alpha. What we do with alpha is then create our context vector, and the way that happens is that we're going to get a weighted view of all those premise states. Each one-- h1, h2, and h3-- is weighted by its corresponding attention weight, which is capturing its kind of unnormalized notion of similarity with our target vector hC. And then to get a fixed dimensional version of that, we take the mean-- or it could be the sum-- of all of those weighted views of the premise. Next, we get our attention combination layer, and there are various ways to do this. One simple one would be to simply concatenate our context vector with our original context-- target vector hC and feed those through a kind of dense layer of learned parameters. Another perspective, kind of similar, is to give the context vector and our target vector hC, each one their own weights, and have an additive combination of those two, and, again, feed it through some kind of non-linearity. And you could think of various other designs for this. And that gives us this attention combination, h tilde. 
And then, finally, the classifier layer is a simple dense layer, just as before, except instead of using just hC, we now use this h tilde representation, which incorporates both hC and that kind of weighted mixture of premise states. It might be useful to go through this with some specific numerical values here. So what I've done is, just imagine that we have two-dimensional representations for all of these vectors. And you can see, what I've done here is kind of ensure that, proportionally, "every" is a lot like this final representation here. And then that kind of similarity drops off as we move through the premise states, and you'll see what happens when we take the dot products here. So the first step gives us the unnormalized scores, and you can see that the highest unnormalized similarity is with the first token, followed by the second, and then the third. The softmax normalization step kind of just flattens out those dot products a little bit, but we get the same proportional ranking with respect to hC. Here's that context vector, and you can see it's just a mean of the weighted values of all of these vectors. That gives us k. And that k is then fed into this attention combination layer. And you can see, in orange here, this is the context vector, two dimensions. Down here, we have hC, just faithfully repeated. And then this matrix of weights Wk is going to give us, in the end, after this non-linearity, h tilde. And then the classifier is as before. So that's a simple worked example of how these attention mechanisms work. And the idea is that we are kind of fundamentally weighting this target representation hC by its similarity with the previous premise states. But all of them are mixed in, and the influence is kind of proportional to that unnormalized similarity. There are other scoring functions that you could use, of course. We've just done a simple dot product up here, but you can also imagine having learned parameters in there or doing concatenation of the learned parameters. This does a kind of bilinear form, and this is just a concatenation of those two states fed through these learned weights. And once you see this kind of design space, you can imagine there are a lot of other ways in which you could mix in parameters and have different views of this global attention mechanism. We could go one step further here. That was global attention. In word-by-word attention, we're going to have a lot more learned parameters and a lot more connections between the hypothesis back into the premise. So to make this kind of tractable, I've picked one pretty simple view of how this could work. And the way we should track these computations is focus on this vector B here, because we're going to move through time. But let's imagine that we've already processed the A state, and we will subsequently process the C state. So we're focused on B. And the way we establish these connections is by taking the previous context vector that we've created. That's kA here. We're going to multiply that by repeated copies of the B state, and that's simply so that we get the same dimensionality as we have in the premise over here, where I've simply copied over into a matrix all three of those states. And we have a matrix of learned parameters here and an additive combination of the two, followed by a non-linearity. That's going to give us this M here, which kind of corresponds to the attention weights in the previous global attention mechanisms. 
We're going to softmax normalize those, and that literally gives us the weights. And you can see that there are some additional parameters in here to create the right dimensionalities. And then, finally, we have the context at B. So that's going to be a repeated view of all these premises, weighted by our context vector, as before, and then fed through some additional parameters Wa here. And that gives us, as you can see here, the context representation for the state B. When we move to state C, of course, that will be used in the place of A here. And C will go in for all these purple values, and the computation will proceed as before. And in that way, because we have all of these additional learned parameters, we can meaningfully move through the entire sequence, updating our parameters and learning connections from each hypothesis token back into the premise. So it's much more powerful than the previous view, where we had relatively few learned parameters in our attention mechanisms, and, therefore, we could only really meaningfully connect that from the state that we're going to feed into the classifier. So this is much more expressive, right? And then once we have done the entire sequence processing, finally, we get the representation for C here, as fed through these mechanisms, and that becomes the input to the classifier that we ultimately use. The connection with the transformer should be apparent. This is going to return us back to the global attention mechanism. Recall that for the transformer, we have these sequences of tokens with their positional encodings. That gives us an embedding here. And at that point, we establish a lot of dot product connections. And I showed you in the lecture on the transformer that the mechanisms here are identical to the mechanisms that we used for dot product attention. It's just that in the context of the transformer, we do it from every state to every other state. And then, of course, the computations proceed through subsequent steps in the transformer layer and on through multiple transformer layers, potentially. And there are some other variants, right? This is just the beginning of a very large design space for attention mechanisms. Let me just mention a few. We could have local attention. This was actually an early contribution in the context of machine translation. And this would build connections between selected points and the premise and hypothesis, based on some possibly a priori notion we have of which things are likely to be important for our problem. Word-by-word attention, as I've said, can be set up in many ways, with many more learned parameters. And the classic paper is the one that I'm recommending for reading for this unit, Rocktaschel et al., where they do a really pioneering view of this in using even more complex attention mechanisms than I presented under word-by-word attention but following a lot of the same intuitions, I would say. The attention representation at a time t could be appended to the hidden representation at time t plus 1. This would give us another way of moving sequentially through the sequence, having meaningful attention at each one of those points, as opposed to the global attention, which would just be for that final state. And then there are other connections even further afield. For example, memory networks can be used to address similar issues, and they have similar intuitions behind them as attention mechanisms as applied to the NLI problem. 
And that's kind of more explicitly drawing on this idea that we might, in late states in processing, need a bit of a reminder about what was in the previous context that we processed.
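As a wrap-up, here is a minimal numpy sketch of the global (dot-product) attention computation described above. The two-dimensional states are made up in the spirit of the worked example, and the learned combination and classifier layers are only indicated in comments.

```python
import numpy as np

def global_attention(premise_states, h_c):
    """Toy dot-product ('global') attention sketch.

    premise_states : (n, d) array of premise hidden states h1..hn.
    h_c            : (d,) final hidden state for the hypothesis.
    """
    scores = premise_states @ h_c                              # unnormalized scores
    alpha = np.exp(scores) / np.exp(scores).sum()              # softmax weights
    context = (alpha[:, None] * premise_states).mean(axis=0)   # context vector k
    return alpha, context

# Two-dimensional toy states, roughly in the spirit of the worked example.
premise_states = np.array([[1.0, 1.0], [0.8, 0.6], [0.3, 0.2]])
h_c = np.array([1.0, 1.0])
alpha, context = global_attention(premise_states, h_c)
print(alpha)    # highest weight on the premise state most similar to h_c
print(context)

# An attention combination layer would then be something like
#   h_tilde = tanh(W @ concatenate([context, h_c]))
# for a learned weight matrix W, with h_tilde feeding the classifier.
```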
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
sstpy_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to part 5 in our series on supervised sentiment analysis. The focus of this screencast is on the module sst.py, which is included in the course code distribution. It contains a bunch of tools that will let you work fluidly, I hope, with the Stanford Sentiment Treebank and conduct a lot of experiments in service of completing the homework and also doing an original system entry for the bake-off. Let's say that my goals for the screencast are two-fold. First, I do just want to get you acquainted with this code so that you can work with it on the assignment and the bake-off. And in addition, I guess I'd like to convey to you some best practices around setting up a code infrastructure for a project, say, that will let you run a lot of experiments and really explore the space of ideas that you have without introducing a lot of bugs or writing a lot of extra code. So let's begin. We'll start with these reader functions. At the top in the first cell here, I just load in not only os so that we can find our files, but also sst, which is the module of interest. We set up this variable here that's a pointer to where the data set itself lives. And then this function, sst.train_reader, will let you load in a Pandas data frame that contains the train set for the SST. You'll notice that there are two optional keywords, include_subtrees and dedup. Dedup will remove repeated examples, and include_subtrees is a flag that will let you include or exclude all of the subtrees that the SST contains. By default, we'll include just the full examples. But if you set include_subtrees=True, you get a much larger data set, as we discussed in the screencast on the SST itself. In cell 4 here, I'm just giving you a look at one random record from this. So remember, it is a Pandas data frame. But we can get it as a dictionary for a little bit of an easier look. We've got an example ID. We have the text of the sentence, the label, which is either negative, positive, or neutral. And then is_subtree is a flag on whether or not it's a full root level example or a subconstituent of such an example. Since we have loaded this in with include_subtrees=False, we get this distribution of labels here. This is just a distribution of labels on the full examples. But of course, as we change these flags, we get very different counts down here. And then something comparable happens with the dev reader-- dev_df from sst.dev_reader, with a pointer to the home directory for the data as before. And here, the subtree distinction and the dedup distinction, those are much less important because these data sets consist just of root level examples. And there are very few, if any, duplicate examples in those data sets. Now let's turn to feature functions. We'll begin to build up a framework for doing supervised sentiment analysis. And the starting point here is what I've called the feature function. It's given in cell 2 here: unigrams_phi. It takes in a text that is a string. And what it does is return a dictionary that is essentially a count dictionary over the unigrams in that string as given by this very simple tokenization scheme, which just downcases all of the tokens and then splits on whitespace. So as an example text, if I have "NLU is enlightening," space, and then an exclamation mark, and I call the feature function on that string, I get this count dictionary here, which is just giving the number of times each token appears in that string according to the feature function.
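A feature function consistent with that description would look something like the sketch below; the version in the course notebook may differ in details, but the shape is the same.

```python
from collections import Counter

def unigrams_phi(text):
    """Count dictionary over downcased, whitespace-tokenized unigrams."""
    return Counter(text.lower().split())

print(unigrams_phi("NLU is enlightening !"))
# Counter({'nlu': 1, 'is': 1, 'enlightening': 1, '!': 1})
```

Counter is a dict subclass, so downstream code can treat the output as an ordinary dictionary of counts.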
I'd say it's really important when you're working with the standard version of this framework, doing handbuilt feature functions, that you just abide by the contract that all of these feature functions take in strings and return dictionaries mapping strings to their counts, or, if you want, to bools or floats or something else that we can make use of when we're doing featurization. The next step here is what I've called a model wrapper. And this is going to look a little bit trivial here. But as you'll see as we move through more advanced methods in this unit, especially the next screencast, it's really nice to have these wrappers around what is essentially the fit function down here. So I'm going to make use of a scikit-learn linear model called LogisticRegression, a very standard sort of cross-entropy classifier. I've called my function fit_softmax_classifier. And it takes in a supervised data set, so a feature matrix and a list of labels. And I set up my model down here. And I've used some of the keyword parameters. There are many more for the scikit model. And then the crucial thing is that I call fit and return the model, which is now a trained model, trained on this data set X, y. It might look like all I've done is called fit on a model that I set up. But as you'll see, it's nice to have a wrapper function so that we can potentially do a lot more as part of this particular step in our experimental workflow. So now let's just bring all those things together into what is called sst.experiment, which is like one-stop shopping for a complete experiment in supervised sentiment analysis. So we load in these two libraries. We get a pointer to our dataset, and then call sst.experiment. The first argument is the dataset that it will be trained on. So that's like train_df from before. We have a feature function and a model wrapper. And then these other things are optional. So if I leave assess_dataframes as None, then it will do a random split on this train reader according to train_size. If you do specify some data frames here, a list of them, then each one will be used as a separate evaluation against the model that you train on this original data. You can set the score function if you want. Our default is macro F1. And then we'll return to these two options later. Verbose is just whether you want to print some information. And vectorize is an option that you can turn on and off. And you'll probably turn it off when you do deep learning experiments, which we'll talk about later in the unit. The result of all that is a bunch of information about your experiments stored in this variable. And because we had verbose equal to True, you get a report printed here. And this is just a first chance to call out that throughout this course, essentially, when we do classifier experiments, our primary metric is going to be the macro average F1 score. This is useful for us because it gives equal weight to all the classes in our data, regardless of their size, which is typically reflecting our value that we care even about small classes. We want to do well even on the rare events in our space. And it's also perfectly balancing precision and recall, which is like a good null hypothesis if we're not told ahead of time based on some other goal whether we should favor precision or recall. So that all leads us to kind of favor as a default this macro average F1 score as an assessment of how the model fit. And here we've gotten 51.3. The return value of sst.experiment, as I said, is a dictionary.
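Before we look inside that dictionary, here is a sketch of the wrapper-plus-experiment pattern I just walked through. The wrapper uses scikit-learn's LogisticRegression; the sst.experiment call is written with the keyword names as I described them, so double-check sst.py for the authoritative signature:

```python
# Model wrapper: set up, fit, and return a softmax-style classifier.
from sklearn.linear_model import LogisticRegression

def fit_softmax_classifier(X, y):
    """Fit a multiclass logistic regression model on a feature matrix and labels."""
    mod = LogisticRegression(max_iter=1000)
    mod.fit(X, y)
    return mod

# One-stop shopping for an experiment (assumes the course's sst module and data):
# experiment = sst.experiment(
#     train_df,                      # data to train on
#     unigrams_phi,                  # feature function
#     fit_softmax_classifier,        # model wrapper
#     assess_dataframes=[dev_df],    # optional; otherwise a random split is used
#     vectorize=True,
#     verbose=True)
```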
And it should package up for you all the objects and information you would need to test the model, assess the model, and do all kinds of deep error analysis. That is the philosophy here that you should, if possible, capture as much information as you can about the experiment that you ran in the service of being able to do subsequent downstream analysis of what happened. And so here I'm just giving an example that we've got the model, the feature function, the train dataset, whenever our assess datasets were used. And if that was a random split of the train data, that will be reflected in these two variables. The set of predictions that you made about each one of the assess datasets, the metrics you chose, and the scores that you got. And then if you do dive in, like if you look at train set, it's a standard data set. x is your feature space. y is your labels. Vectorizer is something that I'll return to. That's an important part about how the internal workings of sst.experiment function. And then you have the raw examples in case you need to do some really serious human level error analysis of the examples as distinct from how they're represented in this feature space. So here is just a slide that brings all of those pieces together. This is one-stop shopping for an entire experiment. We loaded all our libraries. We have our pointer to the data. And then the ingredients are really a feature function and a model wrapper. And that's all you need in our default setting. Point it to the train data and it will do its job and record all you would want for this experiment, I hope, in this experiment variable here. There's a final piece. I want to return to that vectorizer variable that you saw in the return values for sst.experiment. And that is making use of what in scikit-learn is called the DictVectorizer. And this is really nice convenience function for translating from human representations of your data into representations that machine learning models like to consume. So let me just walk through this example here. I've loaded the DictVectorizer, and I've got my train features here in the mode that I just showed you where here we have two examples and each one is represented by our feature function as a dictionary that maps like words into their counts. It could be more flexible than that, but that's like the most basic case that we consider. And I set up my vectorizer in 3, and then I call fit_transform on this list of dictionaries. And the result here, x_train, is a matrix, where each of the columns corresponds to the keys in the dictionary representing a unique feature. And the values are, of course, stored in that column. So this feature space here has been turned into a matrix that has two examples, 0 and 1. There are a total of three features represented across our two examples, A, B, and C. And you can see that the counts are stored here. So example 0 has 1 for A and 1 for B and 0 for C. And example 1 has 0 for A, 1 for B, and 2 for C. So that's recorded in the columns here. You can, of course, undertake this step by hand. But it's a kind of error-prone step. And I'm just encouraging you to use DictVectorizer to handle it all and essentially map you from this, which is pretty human interpretable, into this, which is something your models like to consume. There's a second advantage here, which is that if you use a DictVectorizer and you need to now do something at test time, you can easily use your vectorizer to create feature spaces that are harmonized with what you saw in training. 
So as an example, if my test features are another pair of examples with a different character, then I can call transform on the original trained vectorizer from up here. And it will translate that list of features into a matrix. Now, the important thing about what's happening here is that it's going to package the test features into the original training space because, of course, those are the features that your model recognizes. Those are the features that you have weights for if you've trained a model. So it's important to call transform at this stage. As an indication of one of the things that's going to happen here, notice that in the test features, my second example has a brand new feature D. But D is not represented in the training space. We have no weights for it. It's simply not part of that original training data set. And so the result is that when we call transform, that feature is simply elided, which is the desired behavior as we're translating from training into testing. And notice that the DictVectorizer has simply handled that seamlessly for you, provided that you remember at the second stage not to call fit_transform. That's the number one gotcha for this interface: if you call fit_transform a second time, it will simply change the feature space into the one that is represented in your test features. And then everything will fall apart. And your model, as trained from before, will be unable to consume these new matrices that you've created. But provided you remember that the rhythm is fit_transform and then transform, this should really be a nice set of interfaces. And of course, this is what sst.experiment is doing by default under the hood for you.
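Here is a compact sketch of that fit_transform-then-transform rhythm, with toy feature dictionaries standing in for real featurized examples:

```python
# DictVectorizer turns dicts of features into aligned matrices.
from sklearn.feature_extraction import DictVectorizer

train_feats = [{"a": 1, "b": 1}, {"b": 1, "c": 2}]
test_feats = [{"a": 2}, {"c": 1, "d": 1}]   # 'd' never appeared in training

vec = DictVectorizer(sparse=False)
X_train = vec.fit_transform(train_feats)    # learns the feature space: a, b, c
X_test = vec.transform(test_feats)          # maps into that same space; 'd' is dropped

print(X_train)   # [[1. 1. 0.]
                 #  [0. 1. 2.]]
print(X_test)    # [[2. 0. 0.]
                 #  [0. 0. 1.]]
```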
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Giving_Talks_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 4 of our series on presenting your research. We're going to be talking about the possibly thrilling and possibly nerve-wracking process of giving a conference talk in our field. Let's start with the basic structure of a talk. This is pretty easy. It's probably going to mirror the structure of papers in our field. But the thing to keep in mind is that the top narrative has to be dramatically simpler. The beginning should start just like a paper does and just like an abstract does. You need to answer questions that involve context. What problem are you solving? Why is the problem important? What's been tried before and why wasn't it a full solution? These things will contextualize your results and set your audience up to be prepared for the contribution that you're about to offer. That will happen in the middle. You'll give concrete details about what data you used, and then crucially what approach you took, and what its details are up to the level of detail you can supply in a talk like this. And also information about your metrics. How do you evaluate success? These concrete details are what constitute your proposal, and they're really crucial. And then in the final part of the talk, you'll offer results in the form of tables, and graphs, and so forth. And you might review some aspects of the analysis from your paper-- what worked and what didn't. What work still needs to be done, and things like that. And then crucially at the very end, you should be sure to articulate what you achieved in the work and why it's significant. The idea is to leave the audience with information that will compel them to take the time and energy to read your actual paper for the full details on your contribution. And then that way, the talk will serve as an effective advertisement for the underlying project. Now you can read lots of advice on the internet about how to give effective talks in various contexts. And I would encourage you to seek it out, because some of it might really align with your style. You should keep in mind though that styles differ and contexts differ. And so this will be a process of figuring out what advice is really suitable for you. The one thing that I can say with confidence is that Patrick Blackburn's fundamental insight will apply no matter what the context and what the style. Patrick Blackburn asks, where do good talks come from? And his answer is honesty. He says, a good talk should never stray far from simple, honest communication. And if you abide by that, and you're introspective about where you've achieved open, honest communication and where you fell short in various talks that you give, and you're willing to learn from that process, you will definitely find your style and become an effective communicator about scientific ideas, according to what works best for you. A note about PowerPoint. You can find lots of think pieces about how PowerPoint and related slide technologies are inherently kind of evil. I think they can be used to confuse and deceive, but they also have lots of good aspects to them. And it's not an accident that slides are pretty pervasive when giving talks in our field. And it's just a matter, again, of finding your style and thinking about how to use these slides for open, honest communication. In that vein, and again, this is a matter of personal style, I thought I would mention two kind of schools of thought when it comes to slide design. The minimalist and the comparative. 
So the minimalist would probably just have two words on the slide, the minimalist and comparative. And the rest would be delivered via a talk track. The idea behind the minimalist approach is a slide should be as spare as possible. The audience should spend most of their time listening and-- listening to you and looking at you. And individual slides don't stay up for very long or get used in more than one way. They're kind of punctuation for your narrative talk track. By contrast, the comparative approach would be to have lots of details on your slides. Slides should be as full as possible without sacrificing clarity. Your talk should make it easy for people to spend time studying your slides. You have to think about how your narrative is going to align with the very detailed slides. And individual slides might stay up for a long time and get used to make multiple comparisons and establish numerous connections. I want to emphasize again that this is really a personal matter. The minimalist view seems right for telling a story. It's often the best when time is of the essence, and the audience is mainly there to learn about what your paper contains. Whereas the comparative view seems right for teaching, right? It's the closest that the slides can come to full, well-organized chalkboards and things like that, where a lot of information might stay up for a very long time. Fundamentally, though, this is a matter of style. Find the version that works for you for the context you're in. And I'll just say again, as long as you think long and hard about what it would be like to listen to your talk, that is that open communication part, and you adjust accordingly, I'm sure that you'll shine no matter what approach you choose. I really like slides when it comes to using them to guide audience attention and help people follow the narrative of your talk track. One fundamental thing that you can do for that is make heavy use of overlays. Overlays might allow you to fill a slide with information in that comparative mode whilst still keeping the audience with you as you make individual points. You can also use color systematically on the slide to create distinctions and highlight different pieces of information. If you use it consistently, then people will figure out that you're using color for one concept, and that will really help them key into the structure of your ideas. Take an accessible color palette, and then this can really be your friend when it comes to communicating with an audience. You can also use size to draw attention to things, and boxes, and arrows, and other devices to help people navigate, especially complex information displayed on your slides. This is incredibly useful when you are, for example, displaying a figure of results, to have boxes as overlays on the individual comparisons and results that you want to highlight. Same thing for a model diagram. You could show the whole model diagram and then use boxes to highlight different pieces of information in the diagram as you talk about them in your narrative talk track. And that can be incredibly valuable when it comes to helping people navigate what would otherwise be a very complicated looking slide. Of course, you could offer the information that I just delivered in the more minimalist version. This would be like overlays, color, size, boxes, and arrows, and so forth. For particular styles, that might be exactly the right moment to talk about guiding audience attention. 
You can probably see that it's not really my style, but it's certainly a valid style that can be very effective. Some more mundane things. Turn off any notifications that might appear on your screen. If you're on up in front of an audience of hundreds of people and we see a notification about a friend's email, well, it will certainly be an entertaining thing for your audience to see, but it might not be something that you wanted to be part of your talk. Make sure your computer is out of power-save mode so that the screen doesn't shut off while you're talking. Projectors can be finicky. And even one time, losing your screen could cause you to lose the projector and burn through a bunch of the time that you have allotted for your talk. And that could be really sad. Shut down running applications that might tax your computer or otherwise get in your way, again with notifications and things like that. Make sure your desktop is clear of files and notes that you wouldn't want the world to see. In this day and age, your desktop might flash for a second before the slides come up. And for all you know, this talk is going to end up on YouTube for the whole world to see. So think about the privacy aspects of this. If you're using PowerPoint, or Keynote, or Google Slides, or something like that, create a PDF as a backup. If your program fails or the internet fails, you might not have access to your primary version. Having a PDF backup will certainly be helpful. And be prepared for the worst case. What if the projector fails? You might really be glad that you're prepared to give the talk without any slides. Imagine that scenario. The audience will be on your side in the presence of such a failure of technology, and it could be really a chance for you to shine. And the one thing I'll say is that if you're prepared, genuinely prepared to give your talk without slides, the resulting talk will be better because the narrative part of your talk will be so much stronger. Finally, the discussion period, which could be an exciting discussion period or the most dreaded part of this whole process. It's an important part of your presentation though. It should be a chance for the audience to gain a deeper understanding of your ideas. And when the discussion period actually has that aim, it's really a joy and it feels like you and your audience are moving forward together. Sometimes other things happen, though. You could get a hostile questioner, or a confused questioner, or something even more chaotic. Be ready for this, and just try to remain on an even keel no matter what happens. When you get questions, after each one, take a pause for a second before answering. This will serve two functions. First, you'll make sure that the person is actually finished asking their question, which I think is socially useful. And second, it will just make you appear more deliberative, which is good. So even if you know exactly what answer you want to give, taking the pause will create a good impression on your audience. Avoid where possible saying, I have no idea in response to things, and leaving it at that. If you truly are floored, you might, say I have no idea, but let's think about the following other considerations. There might be cases where you actually just want to say, I have no idea and leave it at that. But I think that that should be used very sparingly. Most questions won't make total sense to you. You have to remember that your questioner doesn't know the work as well as you do. 
They might have lost track of some of the details or gotten distracted for a moment. The question might not completely make sense. You'll feel victorious though if you can work every question you get into one that makes sense and leaves everyone with the impression that the questioner raised an important issue. Again, that's another way that you can create a collective feeling that the discussion was productive. And as a group, move the ideas forward as part of this discussion period. That's when this is really exciting. And you should do everything you can to strive for such moments knowing, though, that things could go really awry. And don't try to internalize those too much either. This is a messy process. But fundamentally, I think, it can be quite rewarding in the end.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Probing_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part 4 in our series on analysis methods in NLP. We're going to be talking about probing. This is the first of the two structural evaluation methods that we're going to consider. It's time to get really introspective about what our models are doing. Here's an overview of the probing idea. The core thing is that we're going to use supervised models, those are the probe models, to determine what's latently encoded in the hidden representations of our target models. This is often applied in the context of BERTology, which is something like: I have BERT as a pre-trained artifact and I would like to understand the nature of its hidden representations. What do they latently encode? And for that you might use probe models. Probing, as you will see, can be a source of really valuable and interesting insights, but we do need to proceed with caution on two major issues here. First, a very powerful probe model, since it is a supervised model, might lead you to see things that aren't really in your target model, but rather just things that your probe model has learned. And you might, therefore, overdiagnose latent information in your target model when, in fact, it's all being stored in the probe. And I'm going to offer you a technique for navigating around that issue. And the second one is that probes cannot tell us about whether the information that we identify has any causal relationship with the target model's behavior. It will be very tempting for you to say, Oh, I have discovered that this representation layer includes part of speech information. And you might therefore conclude that part of speech information is important for whatever task you have set, but we can't actually make that inference. It could be that the part of speech information is simply latently encoded, but not actually especially relevant to your model's input/output behavior. In the final section of the slide show, I'm going to just talk briefly about unsupervised probes, which seek to address this first problem here that the probe model might actually be the thing that's encoding all of this information that we claimed to have discovered. And then when we talk about feature attribution methods, we'll get closer to being able to address some of these causal questions. Let's begin with the core method for probing. And just because this is a typical framing of these ideas, I've got depicted here what you might think of as a, kind of, generic transformer-based model, where we have three layers with all of these blocks. These maybe are the output representations from each of the transformer blocks. And you can see that I've got an input sequence coming in here. And the idea would be that we could pick some hidden representation in this model, like this middle one, h here. And decide that we're going to fit a small linear model presumably on that hidden representation, and see whether we can figure out whether that representation encodes some information about some task that we care about. So for example, if you wanted to figure out whether sentiment or lexical entailment was encoded at that point, you'd need a labeled data set for sentiment or entailment. And then you would fit the probe model on this representation and use that to determine the extent to which that information is encoded there. This depiction is a little bit poetical. So it's worth just walking through mechanically what you'd actually be doing.
You would use this BERT model and process different examples, like the sequence here, and get an output representation, which would be paired with some task label. And you would repeatedly do that for different inputs. You're essentially using this BERT model as an engine for creating representations that will become your feature representation matrix X paired with your labels y. And it is this data set that will be the basis for your linear probe model, this small linear model that I've identified here. So you're kind of using BERT as an engine to create a data set that is then the input to a supervised learning problem. I'll give a small code sketch of this basic recipe at the end of this screencast. Another perspective would be that you're kind of using frozen BERT parameters in this case, and fitting a model on top of them. It's just that instead of picking an output point, you've picked possibly one of the internal representations. And this is very general. And in fact, most often, when you read about probes in the literature, the probed tasks are actually sequence problems like part of speech tagging or named entity recognition. And therefore, you might use an entire layer or even a set of layers as the basis for your probe model. Now you can hear in my description there, that there is an interesting judgment call that you're making about whether you are probing or simply learning a new model, right? Probes, in the sense that I just presented them, are supervised models whose inputs are frozen parameters of the models that we're probing, our target models, right? This is hard to distinguish from simply fitting a supervised model as usual with some particular choice of featurization. As a result of this, it is essentially a foregone conclusion that at least some of the information that we identify with our probe is actually stored in the probe model parameters and it's just that we provided useful input features that allow this probe to be successful. And that's the sense in which the inputs are latently encoding this information. But with the probe, we have not determined that it is truly latently there, but rather that it's a stepping stone toward a model that could be successful at this when conceived of as a supervised learning task. So those are important distinctions to keep in mind. As a result of this, more powerful probes, like deep neural networks, might find more information than simple linear models. But that's not because they're able to tease out more information from the representations themselves, but rather because the probe model now has so much more capacity for storing information about the task that you're probing for. So there are a bunch of different judgment calls here. And that's difficult. But a very productive entry into this space is this really lovely paper from Hewitt and Liang 2019, where they introduce the notion of a control task and the corresponding metric of probe selectivity. So here's the idea. A control task will be some random task with the same input/output structure as the target task that we want to use for our probing. Like for example, for word-sense classification, you might have words assigned random fixed word senses independent of their context. Or for part of speech tagging, instead of using the actual part of speech tags, you might randomly assign words to fixed tags from the same tag space. Or for parsing, it gets a little bit more nuanced, but you might have some edge assignment strategies that you use semirandomly to link different pairs of words into a kind of pseudo-parse.
And that would serve as a control task for trying to surface actual latent parsing information. So those are control tasks. And then selectivity is simply the difference between your probe's performance on the task and the performance of an identical probe model structure on these control tasks. And Hewitt and Liang used this to tease out what, I think, is a pretty clear intuition, which is that as you get more powerful probes, they simply become less selective. So along the x-axis here, we have MLP hidden units. So we have model complexity from left to right, where we have very complicated, powerful models at the right-hand side. And here we have accuracy. And we're measuring our control task in red and our actual probe task in this light blue here. And selectivity is the difference between those two. So you can see, for example, that the very weak models, the ones with two hidden units, have very high selectivity. Whereas by the time I have this very powerful MLP with lots of hidden units, selectivity has gone almost to 0. And it's very hard to say that you've uncovered any latent information. Because even the control task is fully solvable with a model that has this much capacity. So I think what this is pushing us toward is always having control tasks as part of the picture and always reporting selectivity so that we can control for the complexity of the probe model itself. That's an important and easy, practical step that will give you a clearer picture of what you've actually surfaced with your probe. That's the first issue. The second issue is just something that you should keep in mind as a theoretical fact about probing, which is that it is fundamentally limited in the sense that it cannot tell you that the information you discover has any causal impact on the model's input/output behavior. To illustrate that, I'm just going to show you a simple example that kind of proves this. So imagine over here on the left, I have a simple model that's going to take in three integers and sum them. So the output here will be the sum of three integers. Like if I put in 1, 2, 3, it will output 6. And it does that by representing each one of those integers as the single-dimensional vector that just is that integer. And then we have a whole bunch of transformer-like model parameters, dense connections here that will lead us finally to the output layer. So you can easily imagine that you probe this position L1 here and you find that it computes x plus y, which might be starting to reveal for you that there's some kind of tree structure to this model even though it was densely connected. It has learned a structured solution to the problem. And you might probe L2 and find that it computes z. And that would really lead you to think that you've got a, kind of, interesting tree structure with constituents for this addition problem. And that's certainly suggestive. However, here is an example of a model that shows that neither L1 nor L2 has anything to do with the model's output predictions. It is entirely that middle state that tells the complete story about the output. I'll leave you to work through the details if you choose to, but a shortcut way to see that is that the final parameters that take us from these output representations to the predictions have zeroed out the first and third positions, leaving only the second one as having any kind of causal efficacy. Even though in this model, if you probe, you do indeed find that it looks like those representations perfectly encode these two pieces of information.
That's a dramatic and clear, simple illustration of how a probe could get divorced from the actual causal behavior of the model. Again, something that's worth keeping in mind. And finally, to close this out, of course, for that first problem about distinguishing between probe capacity and actually latently encoded information, one response to that that's developing in the literature now is to develop unsupervised probes. These would be models, like the ones cited here, that seek to find the latent information we hope is there using only actual facts about the model, with no additional supervision. And this would come from simply doing linear transformations of the parameters and measuring distance between parameters as a way of getting a sense for what's actually there without the complications that come from having this additional supervised probe model. And finally, for much more information about probes, and what we think we've learned from them, and what they can tell us, I encourage you to check out this paper by Rogers et al, A Primer in BERTology. It has a large and interesting subsection entirely devoted to what probes have told us. Certainly worth a look, and a great overview of the space.
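To make the basic recipe from earlier in this screencast concrete, here is a minimal sketch: use a frozen pretrained encoder to build a dataset of hidden representations, fit a small supervised probe on them, and compare against a shuffled-label control as a crude stand-in for a control task. The model name, layer choice, and probe here are just illustrative assumptions, not the setup from any particular paper:

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def layer_reps(texts, layer=6):
    """Mean-pooled hidden states from one layer; the target model stays frozen."""
    reps = []
    with torch.no_grad():
        for text in texts:
            toks = tokenizer(text, return_tensors="pt", truncation=True)
            out = encoder(**toks, output_hidden_states=True)
            reps.append(out.hidden_states[layer].mean(dim=1).squeeze(0).numpy())
    return np.vstack(reps)

def probe_score(X, y):
    """Fit a linear probe and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, probe.predict(X_te))

# texts, labels = ...  # a labeled probing dataset of your choosing
# X = layer_reps(texts)
# real = probe_score(X, labels)
# control = probe_score(X, np.random.permutation(labels))  # randomized control
# selectivity = real - control
```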
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Stanford_Sentiment_Treebank_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part 3 in our series on supervised sentiment analysis. This screencast is going to focus on the Stanford Sentiment Treebank. Let me start with a quick project overview. The associated paper is Socher et al. 2013. I think this paper is a kind of model of open science. At this website here, you can see the full code, all the data of course, as well as an API that will let you try out new examples and kind of interact with the core models that are motivated in the paper. It's a sentence-level corpus. It's got about 11,000 sentences in total. And all of those sentences are originally from Rotten Tomatoes. So they are sentences from movie reviews. The sentences themselves were originally released by Pang and Lee in 2005. It's a kind of classic data set. And what the SST did was expand the data set by labeling not only the full sentences but all of the sub-constituents according to a kind of traditional parse, syntactic parse, of each of the examples. And those are all crowdsourced labels. So what this means is that we have vastly more supervision signals all throughout the structure of these examples than we would get from the original, where we just had a single sentiment label for the entire sentence. The labels themselves and the underlying corpus are five-way labels that are extracted from workers' slider responses. So there's kind of an initial layer of aggregation. They made a slider choice. They were all grouped together into five labels. And then we are going to work with a formulation that is even more collapsed down to ternary sentiment. I'll return to that a bit later. The fully labeled tree thing is one of the really exciting aspects of this corpus that we will be able to take advantage of, especially during training. So the way that worked is, there were parses. This is a simple constituent parse of a sentence, NLU is enlightening. And as I've indicated here, we have labels in that space, 0 through 4, on all of the lexical items, NLU, is, and enlightening, as well as all the subconstituents in this phrase. And so you can see that "is" is neutral. But since "enlightening" is positive, the whole verb phrase, "is enlightening" is positive. We can say that NLU is neutral. But in the context of this sentence, the overall contribution is a highly positive one. So label 4 on the root. In the first screencast for this unit, I motivated sentiment analysis with some cases that I thought were kind of difficult from a syntactic point of view. This is one of them, they said it would be great. I love how this is being handled. We can see that down here, "be great" is kind of clearly positive. But by the time we have filtered that through this report, they said just kind of displacing the sentiment onto another agent. The speaker is not necessarily endorsing the claim of greatness. What we get in the end is more like a neutral sentiment. I think that's interesting. And we can extend that even further, right? These are actual predictions from the model that's motivated in the underlying paper. If we take that constituent that I just showed you and conjoin it with "they were wrong," which is clearly negative, strikingly the model is able to figure out that the overall sentiment is determined by this second clause here, and assigned negatives to the entire thing, despite the fact that there are obviously subconstituents in here that are positive. 
And that's exactly the kind of mixing that I think is correct for how language works in the domain of sentiment. And it's kind of encouraging to see that this model is able to capture at least some aspects of it. Here's a similar case that I think is pretty good as well, although maybe not as strikingly positive in the end here. I've just changed from the previous example, "they were wrong," to "they were right." It knows that right is correct. And it seems to get that this is middle of the scale. And I'd hope this was a 3 or a 4. But I think that still we're seeing some interesting interactions between what's happening in subconstituents in these examples and the prediction that's made at the root level. So it's very encouraging. There are a bunch of ways that you can formulate the SST task. Kind of the raw one that comes from the paper would be a 5-way classification problem where we have these numerical labels here with the meaning of kind of 0 is very negative, 1 is negative, 2 is neutral, 3 is positive, and 4 is very positive. I think this is fine, but there are two gotchas underlying this kind of scheme. First, it's not really a fully ordered scale in the sense that 4 is stronger than 3, but 0 is stronger than 1, because we have kind of this polarity split with neutral in the center. So that's a kind of conceptual difficulty. And then the other part is that, by and large, classifier models that you pick will not give you partial credit for being close. We might hope that a model that predicted a 1, negative, was kind of right or certainly more right if the true label is 0 than a model that had predicted 4. But of course, if these are all treated as independent classification bins, then you're just equally wrong no matter which prediction you made relative to the gold label. And that seems unfair to our models. We are going to work with what I've called the ternary problem. I think this is the minimal problem that really makes sense conceptually. For this one, we grouped 0 and 1 into a negative category, 3 and 4 into a positive category, and reserved 2, as before, for what we're calling neutral. And this kind of avoids the false presupposition that every sentence is either negative or positive, because it does allow us to make predictions into this neutral or non-sentiment-laden space. It's very common, and you see this in the paper as well as in a lot of work on the SST, to formulate this as a binary problem. For the binary problem, we simply remove the middle of the scale and treat 0 and 1 as negative and 3 and 4 as positive, as before. I think that has two drawbacks. First, we have to throw away some data. And second, then we're making this false presupposition that every sentence is either classified as negative or as positive when for a wide range of cases in the world, that might be inappropriate. Now, I focused here on the root level problem. You can see that the numbers here for train and dev are small, and the test set numbers are a little bit larger than for dev. So they're comparable. But we can also think of this as the all nodes task, because recall that a hallmark feature of the SST is that every single subconstituent in these examples has been labeled by crowd workers. So we could treat each one of those as a kind of independent classification problem. We have the same range for all of the values that they can take on. So we can do a similar kind of collapsing down into the ternary problem or the binary problem. And of course, here, we have a much larger dataset.
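Here is a small sketch of the ternary collapsing just described, assuming you are starting from the raw 0 through 4 labels:

```python
def ternary_label(y):
    """Collapse a 5-way SST label (0-4) into negative / neutral / positive."""
    if y in (0, 1):
        return "negative"
    if y == 2:
        return "neutral"
    return "positive"   # 3 or 4

print([ternary_label(y) for y in range(5)])
# ['negative', 'negative', 'neutral', 'positive', 'positive']
```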
For us, we're going to, by and large, work with the data in one particular way, which I think is common in the literature. As I said, we're going to have the ternary formulations. Our labels will be positive, negative, and neutral. When we do the dev and test step, we are going to test only on full examples. So for them, we will not make predictions into the subconstituent space. And then as a default for the code, as you'll see, it is set up to train only on full examples. So for these two cases, "NLU is enlightening" and "not enlightening" here, if those were two independent sentences in the corpus, we would train just on those two independent examples, one labeled positive and the other labeled negative. However, you might imagine that you'll get a lot more strength in training if you also trained on all the subconstituents, which would mean essentially expanding this example into its full root version, NLU is enlightening, but also all of the sub-pieces that are captured and labeled in the corpus. So that would give you many more examples and much more diversity. And then of course, "not enlightening" would be split apart as well. And then you could decide for yourself in addition whether you want to treat this as two instances of "enlightening" or one. And the code facilitates all this. So you can formulate it as a root-only training scenario or as a subconstituent training scenario. And you can keep or remove duplicates. This is going to impact the amount of computational resources that you need for training models. But of course, bigger could be better in this space, because you're just seeing much more gold-labeled information. So that's the overview. And for much more on this, and how to work with our distribution of the corpus, and so forth, I would encourage you to work through this notebook that I've linked at the bottom here.
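As a sketch of how those training scenarios look in code, here are the reader calls with the flags discussed in this unit. This assumes you have the course's sst.py and the SST data on disk; SST_HOME is a placeholder path, and the exact keyword defaults may differ in your copy of the module:

```python
import os
import sst

SST_HOME = os.path.join("data", "sentiment")

roots_only = sst.train_reader(SST_HOME, include_subtrees=False, dedup=False)
with_subtrees = sst.train_reader(SST_HOME, include_subtrees=True, dedup=False)
deduped = sst.train_reader(SST_HOME, include_subtrees=True, dedup=True)

# The subtree versions are much larger; dedup shrinks the subtree version by
# removing repeated examples like duplicate instances of "enlightening".
print(len(roots_only), len(with_subtrees), len(deduped))
```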
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Listeners_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone, to part 3 in our series on grounded language understanding. Recall that in part 2, we focused on speakers. Speakers, in our sense, take in non-linguistic representations of the world as inputs and generate language on that basis. Listeners are the converse of that. They accept linguistic inputs and try to make a guess about the state of the world on the basis of that linguistic input. For this unit, in terms of modeling, our focus is going to be on speakers. But I think it's helpful to have the listener perspective in mind as you create speakers. And you might even bring in the listener perspective as part of your original system. And I'll cover some techniques for doing that in the context of the Rational Speech Acts model a bit later in this series. Now, to make the listener task meaningful, we need to complicate our previous task a little bit. So in part 2, we had, for the speaker, just a single color as input. And their task was to produce a description on that basis. For listeners, we're going to move to a more complicated task. And this is the task that's our focus for the entire unit. It comes from the Stanford Colors in Context corpus. And for that corpus, the context is not just a single color representation, but now three colors. And the idea is that the speaker is privately told which of those three is their target. And they produce a description that will hopefully communicate to a listener, who's looking at those same three colors, which one was the speaker's target. You can see that gets really interesting and grounded very quickly. So in this first case, the three colors are very different. And the speaker simply said blue. And that seems to get the job done. And I think a listener receiving blue as input would know which of these three colors was the speaker's private target. When we move to the second context, we have two competing blues. They're very similar. And as a result, the speaker said the darker blue one. And the idea is that this comparative here, darker blue, is making implicit reference not only to the target but also to at least one of the two distractors. Third example is similar. Teal, not the two that are more green. That's really grounded in the full context here. The speaker is not only identifying properties of the target but also properties of the distractor in order to draw out contrasts. And I think the final two examples here are interesting in different ways. So here we have the target on the left. In the first example, the speaker said purple. And in the second example, the speaker said blue even though these are identical colors here for the targets. The reason we see this variation is that the distractors are so different. And that just shows you that even though this is a simple task, it is meaningfully grounded in the full context that we're talking about. Now what we'll do for our listeners is essentially give them these utterances as inputs and have them function as classifiers, making a guess about which of the three colors the speaker was most likely trying to refer to. So in a little more detail. Here's the neural listener model. It's, again, an encoder-decoder architecture. For the encoder side, we can imagine some recurrent neural network or something that is going to consume a sequence of tokens, look them up in an embedding space and then have some sequence of hidden states. For the decoder, the handoff happens at the final encoder state, presumably.
And what we're going to do here is extract some statistics, in this case, a mean and covariance matrix, and use those for scoring. So in a little more detail. We have those three colors that are given to the listener. Those are represented down here. When we embed those in some color space, we could use the Fourier transform, just like we did for the speakers at the end of the previous screencast. And then we'll use those extracted statistics from the encoder to create a scoring function. And then we just need to define a softmax classifier on top of those scores. And it will be that module that makes its guess, based on this encoder representation, about which of the three colors the speaker was referring to, so fundamentally, a kind of classification decision in this continuous space of colors and encoder representations. I'll put a small sketch of this scoring step at the end of this screencast. Now once we start thinking in this mode, I think a lot of other tasks can be thought of as listener-based communication tasks. So even the simplest classifiers are listeners in our sense. They consume language. And they make an inference about the world, usually in a very structured space, right? So even in the simple case of our sentiment analysis, you receive a linguistic input, and you make a guess about whether the state is positive, negative, or neutral, as is standard for classifiers. But thinking of it as a communication task might bring new dimensions to the problem. Semantic parsers are also complex listeners. They consume language. They create rich latent representations in the form of logical forms. And then they predict into some structured prediction space like a database or something like that. Scene generation is clearly a kind of listener task. In this task, you map from language to structured representations of visual scenes. So it's a very complicated version of our simple color reference problem. Young et al explored the idea that we might learn visual denotations for linguistic expressions, mapping from language into some highly structured space similar to scene description. Mei et al, 2015, developed a sequence-to-sequence model that's very much like the above. But the idea is that instead of having simple output spaces, we have entire navigational instructions that we want to execute. So that's going from a linguistic input into some kind of action sequence. And finally, the CerealBar data set is an interesting one to explore in our context. That was a task of learning to execute full instructions. So that's, again, mapping some pretty complicated utterances into some embedded action that you want to take in a game world. And that could be a very exciting extension of what we've just been covering.
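Before closing, here is one way the scoring step from earlier in this screencast could look in code. This is my own interpretation of "a mean and covariance matrix used for scoring," not the course's exact implementation: the final encoder state is mapped to a mean vector and a matrix, each candidate color gets a quadratic-form score, and a softmax over the three scores yields the listener's guess:

```python
import torch
import torch.nn as nn

class ListenerScorer(nn.Module):
    def __init__(self, hidden_dim, color_dim):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, color_dim)
        self.to_sigma = nn.Linear(hidden_dim, color_dim * color_dim)
        self.color_dim = color_dim

    def forward(self, h_final, colors):
        # h_final: (batch, hidden_dim) final encoder state for the utterance
        # colors:  (batch, 3, color_dim) embedded context colors
        mu = self.to_mu(h_final)
        sigma = self.to_sigma(h_final).view(-1, self.color_dim, self.color_dim)
        diff = colors - mu.unsqueeze(1)
        # Quadratic-form score per candidate color; higher means more likely target.
        scores = -torch.einsum("bnd,bde,bne->bn", diff, sigma, diff)
        return torch.log_softmax(scores, dim=1)   # distribution over the 3 colors

# scorer = ListenerScorer(hidden_dim=50, color_dim=54)
# log_probs = scorer(h_final, colors)   # h_final, colors from your encoder and embedding
```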
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Dimensionality_Reduction_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome back. This is part 5 in our series on distributed word representations. We're going to be talking about dimensionality reduction techniques. We saw in the previous screencast that reweighting is a powerful tool for finding latent semantic information in count matrices. We're going to push that even further. The promise of dimensionality reduction techniques is that they can capture higher order notions of co-occurrence corresponding to even deeper sorts of semantic relatedness. There's a wide world of these dimensionality reduction techniques. I've chosen three that we're going to focus on as interesting representatives of a much larger space. We'll look at latent semantic analysis, which is a classic linear method. Then we'll talk about autoencoders and newer, powerful deep learning mode for learning reduced dimensional representations. And then finally, GloVe, which is a simple yet very powerful method that, as you'll see, has a deep connection to pointwise mutual information. And then I'm going to close by talking briefly about visualization, which is another kind of dimensionality reduction technique that we might use for very different purposes. So let's begin with Latent Semantic Analysis, a classic method. The paper is due to Deerwester et al, 1990. That's a classic paper that really made a splash. It's one of the-- LSA is now one of the oldest, most widely used dimensionality reduction techniques, not only in scientific research but also an industry. I think it was really eye opening for people at the time of the paper's appearance to see just how powerful this technique could be, especially in contexts involving information retrieval. The method is also known as Truncated Singular Value Decomposition. And I'll explain why that is in a second. The final thing I want to say at this high level is just that LSA remains a very powerful baseline, especially when part of a pipeline of other reweighting methods. So it should probably be in your results table. And it's often very difficult to beat. Now I think we can't, in the time allotted to us, cover all of the technical details surrounding latent semantic analysis. In my experience, this would be kind of the culmination of a full course in linear algebra. But I do think I can convey the guiding intuitions. And that will help you with responsible use of the method. So let's imagine that we have this simple two-dimensional vector space model. I've got four points, A, B, C, and D arrayed out in this two-dimensional space. I think we're all familiar with fitting linear models, which capture the largest source of variation in the data. That's this orange line here. And the perspective I would encourage you to take is that we can think of that linear regression model as performing dimensionality reduction in that it encourages us to project points like B and C down onto that line. And then projecting them down onto that line, essentially in abstracting away from their point of variation along the y-axis, we can see the sense in which they are abstractly similar. They're close together in this reduced dimensional space. Now with a linear model, we captured the source of greatest variation in this little data set. In the high dimensional space, we could continue fitting lines to other sources of variation in the data, other axes of variation. So here's a blue line here that captures the next dimension. And we could, again, project points like A and C down onto that line. 
And that would capture the abstract sense in which A and C, although very spread out along the x dimension, are very close together along the y dimension. And of course, if we had more dimensions in this vector space model, we could continue to perform these cuts and dimensionality reductions, capturing ever more abstract notions of similarity along these different axes. And that is, in essence, what LSA is going to do for us in our really large matrices. The fundamental method, as I said, is singular value decomposition. This is a theorem from linear algebra that says any matrix A can be decomposed into the product of three matrices, T, S, and D, with the dimensions given. Here's a more concrete example. Start with this matrix of dimension 3 by 4. We learn the term matrix, which is full of length-normalized orthogonal vectors. We have this matrix of singular values along the diagonal. They are organized from largest to smallest, corresponding to the greatest to least source of variation in the data. And then we have the document, or column, matrix, which is also length-normalized and orthogonal in its space. And the theorem here is that we can reconstruct A from these three matrices. Of course, we don't want to precisely reconstruct A. That probably wouldn't accomplish very much for us. But what we can do is use this to learn reduced dimensional representations of A by being selective about which term and singular value dimensions we include in the model. Let me walk you through an example of how that happens. And first, let me motivate this a little bit with an idealized linguistic case. So I've got up here a word-by-document matrix. Its vocabulary is gnarly, wicked, awesome, lame, and terrible. And the conceit of my example is that both gnarly and wicked are positive terms. So they tend to co-occur with awesome and not co-occur with lame and terrible. However, gnarly and wicked never occur in the same document here. The idea is that gnarly is a slang positive term associated with the West Coast of the United States. And wicked is a slang term associated with the East Coast of the United States. In virtue of that idealized dialect split, they never occur in the same document. But nonetheless, they have similar neighbors in this vector space. And that's the kind of abstract notion of co-occurrence that we want to capture. If we simply use our standard distance measures and reweighting techniques and so forth, we will not capture that more abstract notion of co-occurrence. Here, if we look at distances in this raw vector space, the ordering from gnarly is gnarly, awesome, terrible, and then wicked: wicked is farther away from gnarly even than terrible is. So we've got a sentiment confusion and really just not the result we were shooting for. So we perform singular value decomposition into these three matrices. And then the truncated part is that we're going to consider just the first two dimensions of the term matrix corresponding to these two singular values capturing the top two sources of variation in the data. So we multiply those together and we get this reduced dimensional matrix down here, 2 by the size of the vocabulary. And if we do distance measures in that space, just as we were hoping, gnarly and wicked are now neighbors. The method has captured that more abstract notion of having the same neighbors as the other word. In the previous lecture, I encouraged you to think about what you're doing to a matrix when you perform some kind of reweighting scheme.
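Here is a tiny NumPy sketch in the spirit of that example. The counts are made up, but they tell the same story: gnarly and wicked never share a document, yet after truncating to the top two dimensions they end up as neighbors:

```python
import numpy as np

# Toy word-by-document counts: gnarly and wicked never co-occur, but both
# appear in the same "positive" documents as awesome; lame and terrible
# live in a separate set of documents.
vocab = ["gnarly", "wicked", "awesome", "lame", "terrible"]
A = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1]], dtype=float)

T, s, D = np.linalg.svd(A, full_matrices=False)
k = 2
A_reduced = T[:, :k] * s[:k]          # terms in the top-k latent dimensions

def cosine_dist(u, v):
    return 1 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

i, j = vocab.index("gnarly"), vocab.index("wicked")
print(cosine_dist(A[i], A[j]))                   # 1.0: no shared documents
print(cosine_dist(A_reduced[i], A_reduced[j]))   # close to 0 after truncation
```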
Let's extend that to these dimensionality reduction techniques. So here's a picture of what LSA does starting with a raw count distribution over here. If I just run LSA on that raw count distribution, I get what looks also like a very difficult distribution of values. The cell values are very spread out. And they have a lot of the mass centered around 0 here, corresponding to the peak in the raw counts over here near 0 as well. So that doesn't look like we've done very much in terms of taming the kind of intractable, skewed distribution we started with. However, if instead we take the raw counts and first feed them through PMI, which as we saw before gives us this nice distribution of values, highly constrained along the x-axis, and then we run LSA, we retain a lot of those good properties. The values are somewhat more spread out but still nicely distributed. This looks like a much happier input to downstream analytic methods than the top version here. And I think this is beginning to show that it can be powerful to pipeline reweighting and dimensionality reduction techniques. Another note I want to make: how do you choose the dimensionality? LSA has this variable k corresponding to the number of dimensions that you keep. If you read the literature on LSA, they often imagine this kind of what I've called the dream scenario where you plot the singular values and you see that a lot of them are very large. And then there's a sudden drop off. And if you do see this, then it's obvious that you should pick the point of the sudden drop off as your k. So here, you would pick k as 20. And you'd be confident that you had captured almost all the variation in your data in the reduced dimensional space you were creating. Unfortunately, for the kinds of matrices and problems that we're looking at, I really never see the dream scenario. What I see looks something much more like this, where you have kind of a sudden drop off early and then a long decline and maybe a sudden drop off at the end. And it's basically totally unclear where in the space you should pick k. And the result is that k is often chosen kind of empirically as a hyperparameter tuned against whatever problem you're actually trying to solve. If, in doing this work, you do see the dream scenario, please do write to me. It would be very exciting to see that happen. LSA is just one of a large family of matrix decomposition methods. Here's a list of a few of them. And a lot of them are implemented in scikit-learn in its decomposition library. And I would encourage you to try them out and just see how they perform on problems that you're trying to solve. And finally, here's a little bit of code. vsm.lsa with k set to 100 gives me back a reduced dimensional version of that matrix, keeping the same vocabulary, of course, but now with only 100 column dimensions. Let's move to autoencoders. This will be a point of contrast with LSA, because this is a much more powerful method. So here's the overview. Autoencoders are a flexible class of deep learning architectures for learning reduced dimensional representations. If you want to hear much more about this class of models, I would encourage you to read chapter 14 of the Goodfellow et al book, Deep Learning. It has a lot of details and a lot of variations on this theme. Here is the basic autoencoder model. The input would be, say, the vectors from the rows in our matrices. So this could be the counts or something that you've done to the counts.
Those are fed through a hidden layer of representation. And then the goal of this model is to try to literally reconstruct the input. Now, that might be trivial if h had the same dimensionality as x. But the whole idea here is that you're going to feed the input through a very narrow pipe and then try to reconstruct the input. Given that you're feeding it through a potentially very narrow pipe, it's unlikely that you'll be able to fully reconstruct the inputs. But the idea is that the model will learn to reconstruct the important sources of variation in performing this autoencoding step. And then when we use these models for representation learning in the mode that we've been in for this unit, the representation that we choose is this hidden unit here. We typically don't care about what was reconstructed on the output, but rather only about the hidden reduced dimensional representation that the model learned. This slide has a bunch of other annotations on it. And the reason I included them is that the course repository includes a reference implementation of an autoencoder and all the other deep learning models that we cover in pure NumPy. And so if you want to understand all of the technical details of how the model is constructed and optimized, you could use this as a kind of cheat sheet to understand how the code works. I think the fundamental idea that you want to have is simply that the model is trying to reconstruct its inputs. The error signal that we get is the difference between the reconstructed and actual input. And that error signal is what we use to update the parameters of the model. Final thing I would mention here is that it could be very difficult for this model if you feed in the raw count vectors down here. They have very high dimensionality. And their distribution is highly skewed as we've seen. So it can be very productive to do a little bit of reweighting and maybe even dimensionality reduction with LSA before you start feeding inputs into this model. Of course, it could still be meaningful, even if you've done LSA as a pre-processing step, to learn a hidden dimensional representation, because this model is presumably capable of learning even more abstract notions than LSA is in virtue of its non-linearity at this hidden layer. And here's a bit of code just showing how this works, using both the reference implementation that I mentioned as well as a faster and more flexible Torch autoencoder which is also included in the course repository. I think the only interface thing to mention here is that these models have a fit method, like all the other machine learning models for this course. But the fit method returns that hidden dimensional representation, the target for our learning in this context, which is a bit non-standard. But it's the intended application for this kind of representation learning. The other thing I would mention: let's see how well the autoencoder is performing. These are the raw distances in the giga5 matrix for finance. This is the count matrix. It doesn't look great. If we run the autoencoder directly on the count matrix, it looks a little better. But it's still not excellent. If we think of this as part of a pipeline where we've first done positive pointwise mutual information, and then LSA at dimension 100, and then done the autoencoding step, it starts to look like a really good and interesting semantic space. And I think that's pointing out the power of including the autoencoder in a larger pipeline of preprocessing on the count matrices.
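Here is a hedged sketch of that pipeline idea in code: counts, then positive PMI, then LSA, then a small autoencoder. It assumes a NumPy word-by-context count matrix called counts, and the TinyAutoencoder class is a minimal stand-in for illustration, not the course repository's TorchAutoencoder.

```python
# Hedged sketch of the pipeline idea: counts -> PPMI -> LSA -> autoencoder.
# `counts` is assumed to be a NumPy word-by-context count matrix; the
# TinyAutoencoder below is a minimal stand-in, not the course's TorchAutoencoder.
import numpy as np
import torch
import torch.nn as nn

def ppmi(X):
    """Positive pointwise mutual information reweighting of a count matrix."""
    total = X.sum()
    row = X.sum(axis=1, keepdims=True)
    col = X.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((X * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0        # cells with zero counts
    return np.maximum(pmi, 0.0)

def lsa(X, k=100):
    """Truncated SVD; assumes k <= min(X.shape)."""
    T, S, _ = np.linalg.svd(X, full_matrices=False)
    return T[:, :k] * S[:k]

class TinyAutoencoder(nn.Module):
    def __init__(self, d_in, d_hidden=50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
        self.decoder = nn.Linear(d_hidden, d_in)
    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def fit_autoencoder(X, d_hidden=50, epochs=100, lr=0.01):
    X = torch.tensor(X, dtype=torch.float32)
    model = TinyAutoencoder(X.shape[1], d_hidden)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        x_hat, _ = model(X)
        loss = loss_fn(x_hat, X)        # reconstruction error drives the updates
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, H = model(X)
    return H.numpy()                    # the hidden representations are what we keep

# Usage, assuming `counts` is a word-by-context count matrix (NumPy array):
# reweighted = ppmi(counts)
# reduced    = lsa(reweighted, k=100)
# hidden     = fit_autoencoder(reduced, d_hidden=50)
```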
Let's turn to GloVe, for Global Vectors, for the final major unit of this screencast. Here's a brief overview. GloVe was introduced by Jeffrey Pennington, Richard Socher, and Chris Manning, a Stanford team, in 2014. Roughly speaking, the guiding idea here is that we want to learn vectors for words such that the dot product of those vectors is proportional to the log probability of co-occurrence for those words. And I'll elaborate on that in a second. For doing computational work, we can rely on the implementation torch_glove.py which is in the course repo. I'll mention that there's also a reference implementation in vsm.py. It's very slow but it kind of transparently implements the core GloVe algorithm so it could be interesting to inspect. And then if you're doing practical work with really large corpora and really large vocabularies, I would encourage you to use the GloVe team's C implementation. It's an outstanding software artifact that will allow you to learn lots of good representations quickly. And that kind of brings me to my last point. I just want to mention that the GloVe team was among the first teams in NLP to release not just data and code but pre-trained model parameters. Everyone does that these days. But it was rare at the time. And I think this team was kind of really forward thinking in seeing the value of releasing these centralized resources. And a lot of really interesting work happened with GloVe vectors as a foundation. All right, so let's think about the technical aspects of this model. This is the GloVe objective. And you're going to see pointwise mutual information kind of creep into this picture in an interesting way. So this is equation 6 from the paper. It's kind of an idealized objective for the GloVe model. And it says what I said before. We have a row vector and a column vector, wi and wk. We're going to get their dot product. And the goal is to learn to have that dot product be proportional to the log of the probability of co-occurrence of word i and word k. Where the probability of co-occurrence is defined in the way that we defined it before when we were talking about row normalization. It's just done in log space. This is the co-occurrence count. This is the sum of all the counts along that row. And basically in log space, we're just dividing this value by this value. So keep that in mind. Now the reason they have only the row represented is that in the paper they're assuming that the rows and columns in the underlying count matrix are identical. And so we don't need to include both. However, if we did allow that the row and context could be different, we would just elaborate equation 6 to have a slightly different denominator, right? We would have the product of the row sum and the column sum, and take the log of that and subtract that out. And that would be kind of our goal for learning these dot products here. But aha, this is where PMI sneaks in. Because that simply is the PMI objective, right? Where we stated it as the log of the probability of co-occurrence divided by the product of the row and the column probabilities, here they've just stated exactly that calculation in log space. And these are numerically equivalent by the equivalence of log of x over y being the same as log of x minus the log of y. So that's the deep connection that I was highlighting between GloVe and PMI. And I think that's really interesting because it shows that fundamentally we're testing a very similar hypothesis using very similar notions of row and column context.
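In notation, the connection just described looks roughly like this. It is a paraphrase of equation 6 of Pennington et al. 2014 and of the PMI comparison above, with symbols following the paper's conventions; the generalized two-sided version is my restatement of the argument, not a formula from the paper itself.

```latex
% Idealized GloVe objective (eq. 6, Pennington et al. 2014), as described above;
% X_{ik} is a co-occurrence count, X_i = \sum_j X_{ij} a row sum, N the total count.
\[
  w_i^\top \tilde{w}_k \;=\; \log P(k \mid i) \;=\; \log \frac{X_{ik}}{X_i}
\]
% Allowing the row and context marginals to differ, the same idea becomes exactly PMI,
% using \log(x/y) = \log x - \log y:
\[
  w_i^\top \tilde{w}_k
    \;=\; \log \frac{X_{ik}\,N}{X_i\,X_k}
    \;=\; \log P(i,k) - \log P(i) - \log P(k)
    \;=\; \mathrm{PMI}(i,k)
\]
```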
Now the GloVe team doesn't just stop there. The GloVe objective is actually much more interesting as an elaboration of that core PMI idea. But it's worth having PMI in mind because it's there throughout this presentation. In the paper, they state this as a kind of idealized objective where we're going to have the dot product, as I said before, and two bias terms. And the goal will be to make that equivalent to the log of the co-occurrence count. That has some undesirable properties from the point of view of machine learning. So they propose, in the end, a weighted version of that objective. You can see we still have the product of the row and the column vectors and two bias terms. We're going to subtract out the log of the co-occurrence count and take the square of that. And that's going to be weighted by f of the co-occurrence count, where f is a function that you can define by hand. And what they do in the paper is define f with two parameters, x max and alpha. For any count that is above x max, we're going to set it to 1, kind of flattening out all the really large counts. Everything that's below x max, we will take as a proportion of x max with some exponential scaling as specified by alpha. That's the function there. And typically, alpha is set to 0.75 and x max to 100. But I encourage you to be critical in thinking about both those choices and how they relate to your data. I'll return to that in a second. So GloVe really has these three hyperparameters: the dimensionality of the learned representations, x max which is going to have this flattening effect, and alpha which is going to scale the values, right? And so here's an example of how those are interacting, x max and alpha. If I start with this vector, 100, 99, 75, 10, and 1, the function f, as we specified it, is going to flatten that out into 1, 0.99, 0.81, 0.18, and 0.03. You should just be aware that that kind of flattening is happening. So, GloVe learning: it's kind of interesting to think analytically about how GloVe manages to learn interesting representations. And one thing that might be on your mind is the question, can it actually learn higher order notions of co-occurrence? That's been the major selling point of this lecture. I gave that example involving "gnarly" and "wicked" with LSA. Is GloVe going to be able to do that, right? We could just pose that as a question. So let's start that and see what happens, see how this works. The loss calculations for GloVe, this is a kind of simplified version of the derivative of the model. And we're going to show how GloVe manages to pull "gnarly" and "wicked" toward "awesome" in that little idealized space that I used before. I'm going to leave out the bias terms for simplicity. But we could bring those in. And so here's how this is going to proceed. What I've done, just from this idealized example, is begin in a GloVe space where wicked and gnarly are as far apart as I could make them, so as different as I could possibly make them. But I've got awesome and terrible. And awesome is kind of close to gnarly already. What you'll see is that after just one iteration of the model, what has happened is that wicked and gnarly have been pulled toward awesome. And that's just the kind of effect that we wanted. That's the sense in which GloVe can capture these higher order notions of co-occurrence. Just in a little more detail, you might want to study this on your own. But the high level overview of exactly how that learning happens proceeds as follows. We start from these counts up here.
And the crucial assumption I'm making is that wicked and gnarly never co-occur. But they occur a lot with awesome. And awesome will be kind of the gravitational pull that makes gnarly and wicked look similar. Keep in mind that because of that function f, by and large with GloVe we're dealing not with the raw counts but rather with the reweighted matrix. And that preserves this property that may never co-occur. It gives differently scaled values for the rest of the co-occurrence or pseudo co-occurrence probabilities. Right, and here's what we're going to track. This is the "gnarly" vector in 0. And you can see I've made them as far apart as I could. They're kind of opposed to each other. But we're going to see how they get pulled toward "awesome" in the context vector. So this is that loss calculation. I have just plugged in all the values here. And you can see that we get this initial set of losses. That's after one iteration, and we update the weight matrices. And we perform one more round of learning. And you can see that both of these models, the values here are getting larger corresponding to getting pulled closer and closer toward awesome. And you can see that graphically happening over here in these plots on the left. And as I do more iterations of the GloVe model, this effect is just going to strengthen corresponding to wicked and gnarly getting pulled toward awesome and away from terrible as a result of these underlying counts. And I take this as good evidence that GloVe, like the other methods we've discussed, is capable of capturing those higher order notions of co-occurrence that we're so interested in pursuing with these methods. Let's close the loop also. We have those central questions. What is GloVe doing to our underlying spaces? With GloVe, because of the design of the matrix, we have to begin from word by word co-occurrence matrices accounts. So we begin with these raw count values. And GloVe is one-stop shopping. It's going to take us all the way to these reduced dimensional representations. And boy, by the criteria we've set up, does GloVe do an outstanding job. This is the result of running GloVe at dimension 50. And you can see that the values are extremely well scaled between negative 2 and 2 and nicely, normally distributed. This is an outstanding input to modern machine learning models. And I think this is probably a non-trivial aspect of why GloVe has been so successful as a kind of pre-trained basis for a lot of subsequent machine learning architectures. And then here's a little bit of code, just showing you how you can work with these interfaces using our code base. The one thing I wanted to call out is that I'm trying to be careful. I have defined this function percentage non-zero values above. And you can set x max here and just feed in a matrix and study what percentage of the values in that matrix are going to get flattened out to 1 as a result of the x max that you've chosen. And this value really varies by the design of the matrix. If I feed in yelp5, only about 5% of the values are getting flattened out. But if I feed in yelp20, which is much denser and has much higher counts, 20% of the values are getting flattened out to 1. If this number gets too high, the matrix might become completely homogeneous. And so we should really be aware of how the setting of x max is affecting the kind of learning that we could even be performing with GloVe. 
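Here is a hedged sketch of the weighting function f described above, along with a small helper in the spirit of the "percentage flattened" check just mentioned. The function names and the exact definition of the helper are mine, not the course repository's.

```python
# Hedged sketch of the GloVe weighting function f described above, plus a helper
# in the spirit of the "percentage flattened" check from the lecture. Names and
# exact definitions here are mine, not the course repository's.
import numpy as np

def glove_f(X, x_max=100.0, alpha=0.75):
    """GloVe weighting: (x / x_max)**alpha below x_max, and 1.0 at or above it."""
    X = np.asarray(X, dtype=float)
    weights = (X / x_max) ** alpha
    return np.minimum(weights, 1.0)

def percent_flattened(X, x_max=100.0):
    """Rough check: share of nonzero cells that f will flatten to exactly 1."""
    X = np.asarray(X, dtype=float)
    nonzero = X > 0
    return 100.0 * (X >= x_max)[nonzero].mean()

print(glove_f([100, 99, 75, 10, 1]))
# -> approximately [1.0, 0.99, 0.81, 0.18, 0.03]
```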
And it might turn out that this is even more important than the number of iterations or the dimensionality of the representations that you learn. Once you've made those choices, though, the interface is pretty clear. And the fit method, as with the autoencoder, returns the matrix of learned representations that we want to use for the current purposes. And then finally, I've included a score method. And the score method is literally just testing to see how well the vectors that you've learned correspond to the GloVe objective of having the dot products be proportional to the log of the co-occurrence probabilities. And since you can get a score for that, you can see we're doing pretty well here, at least for a large empirical matrix. Final section: let's just say a bit about visualization. And this is a dimensionality reduction technique in the sense that the whole point is to try to flatten out a very high dimensional space into possibly two or three dimensions. You have to recognize that inevitably this will involve a lot of compromises. It's just impossible to capture all the sources of variation in your underlying matrix in just a few dimensions. But nonetheless, this can be productive. I think it's especially valuable if you pair it with some kind of hands-on qualitative exploration using something like the vsm neighbors function to understand at a low level what your matrix encodes. And then the high level visualizations can be a kind of counterpart to that. There are many visualization techniques and a lot of them are implemented in the scikit-learn manifold package. So I encourage you to use them. I'm going to show you some results from t-SNE, which stands for t-distributed stochastic neighbor embedding. There are lots of user guides there that you can study for more details. Let me just give you the high level. This is t-SNE run on our giga20 matrix. I think this is typical of pretty good output from t-SNE. So what we're seeing here is some pockets of high density. Those are areas of local coherence. Globally, we should be careful not to over interpret this entire diagram, because as you rerun the model with different random seeds, you'll see that it gets kind of reoriented, with different parts ending up close to different other parts. But what you can count on pretty reliably is that these local pockets of coherence correspond to coherent parts of the space that you've defined. And if you zoom in on them, you can assess what the model has uncovered. So for this giga20, for example, I think we see prominent clusters corresponding to things like cooking and conflict. If we do the same thing for our Yelp matrix, again, this looks pretty good in terms of having some substructure that we can analyze. And if we zoom in, we do see clusters like positive terms and negative terms corresponding to the evaluative setting of these Yelp reviews. So this is all very encouraging and suggests that the underlying spaces have some really interesting structure that might be useful for subsequent analysis. And here are some code snippets. We have this simple wrapper around the scikit-learn t-SNE implementation that will allow you to flexibly work with this stuff using the count matrices from our unit. And I'm just mentioning here that it's pretty easy if you want to color-code the words in your vocabulary, say, according to a sentiment lexicon or some other kind of lexicon. That could be a way for you to reveal exactly what structure your model has been able to uncover with respect to those underlying labels.
And that can be useful.
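For reference, here is a minimal sketch of the t-SNE-plus-lexicon idea described above, using scikit-learn directly rather than the course wrapper. The matrix, the vocabulary, and the tiny lexicon are placeholders.

```python
# Hedged sketch of the t-SNE visualization step described above, using
# scikit-learn directly. `X` and `vocab` are placeholders for whatever word
# representation matrix you have built; the tiny lexicon is hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_plot(X, vocab, lexicon=None, perplexity=30, random_state=42):
    """Project rows of X to 2D and plot them, optionally colored by a lexicon."""
    X2 = TSNE(n_components=2, perplexity=perplexity,
              random_state=random_state, init="random").fit_transform(np.asarray(X))
    colors = [lexicon.get(w, "gray") if lexicon else "gray" for w in vocab]
    plt.figure(figsize=(8, 8))
    plt.scatter(X2[:, 0], X2[:, 1], s=4, c=colors)
    for (x, y), w in zip(X2, vocab):
        plt.annotate(w, (x, y), fontsize=6)
    plt.show()

# Example: color-code by a (hypothetical) sentiment lexicon.
# lexicon = {"awesome": "blue", "gnarly": "blue", "terrible": "red", "lame": "red"}
# tsne_plot(word_matrix, vocab, lexicon)
```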
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Modeling_Strategies_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRIS POTTS: Welcome, everyone. This is part 4 in our series on Natural Language Inference. We're going to be talking about different modeling strategies. You might think of this screencast as a companion to the associated notebook that explores a lot of different modeling ideas and suggests some architectural variations that you might explore yourself. I thought we would start by just considering hand-built feature functions because they can still be quite powerful for the NLI problem. So some standard hand-built feature ideas include, as a kind of baseline, word overlap between the premise and hypothesis. This is giving you a kind of pretty small feature space, essentially measuring how the premise and hypothesis are alike. If you want to step it up a level, you could consider the word cross-product features, which take as their feature space every pairing of a word in the premise and a word in the hypothesis. This will give you a massive feature space. Very large. Very sparse, but the intuition behind it might be that you're allowing your model a chance to discover points of alignment and disalignment between the premise and the hypothesis. And so that could be very powerful. You might also consider additional WordNet relations, bringing those in. These would be things like entailment, and contradiction, antonymy, synonymy. And those, of course, could be nicely keyed into the underlying logic of the NLI problem. Edit distance is another common feature, just a raw float value relating premise and hypothesis as a kind of high level way of comparing those two texts. Word differences might be a nice juxtaposition with word overlap. You could be considering ways in which the premise and the hypothesis contrast with each other in that feature space. And we might also move to alignment-based features. I mentioned that word cross-product is kind of an attempt to have the model learn points of alignment between the premise and hypothesis. But of course, we can also do some things kind of even before we begin to learn feature weights, trying to figure out which pieces in the premise correspond to which pieces in the hypothesis. We could consider negation. We've seen of course that that's an important indicator function in a lot of NLI data sets. There's a powerful intuition behind that, especially as it pertains to contradiction. And so maybe, we would write some feature functions that were explicitly keyed in to the presence or absence of negation in various spots in the premise and hypothesis. And we can also step that up a level as well and consider, more generally, all kinds of different interesting quantifier relationships that would hold possibly at the level of an alignment, as in item 6 here, between the premise and hypothesis. And this is kind of keyed into the underlying logic of the NLI problem. And then finally, named entity recognition. We've seen that these features might be important in figuring out which entities co-refer across the premise and hypothesis. And so having some devices for figuring that out could be useful as a kind of low level grounding for your system. I'll give a small code sketch of a couple of these feature functions just below. Now let's move into a mode that's more like the deep learning mode, because as we saw earlier in this screencast series, these models have proven, at this point, to be the most powerful models for the NLI problem. So it's productive to think also about different deep learning architectures. And I'd like to start with what I've called here sentence encoding models.
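Before moving on to the deep learning models, here is a minimal sketch of two of the hand-built feature ideas just listed, word overlap and the word cross-product. These are illustrations, not the course notebook's exact implementations, and the premise and hypothesis are assumed to be pre-tokenized lists.

```python
# Hedged sketch of two of the hand-built feature ideas above: word overlap and
# the word cross-product. These are illustrations, not the course notebook's
# exact implementations; premise and hypothesis are assumed pre-tokenized.
from collections import Counter

def word_overlap_features(premise, hypothesis):
    """Counts of words shared by the premise and hypothesis."""
    overlap = set(premise) & set(hypothesis)
    return Counter(f"overlap={w}" for w in overlap)

def word_cross_product_features(premise, hypothesis):
    """Counts of every (premise word, hypothesis word) pair -- large and sparse."""
    return Counter(f"{p}~{h}" for p in premise for h in hypothesis)

premise = ["every", "dog", "danced"]
hypothesis = ["every", "poodle", "moved"]
print(word_overlap_features(premise, hypothesis))
print(word_cross_product_features(premise, hypothesis))

# These count dictionaries can be fed to sklearn.feature_extraction.DictVectorizer
# and then to a linear classifier such as LogisticRegression.
```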
And the most basic form of that would return to the idea of distributed representations as features. So the idea here is that we have in this diagram the premise and the hypothesis. So the premise is, "Every dog danced," then the hypothesis is, "Every poodle moved." And our approach using distributed representations, the simplest one, would be that we're going to simply look up all of those words in some fixed embedding space, which would be like a GloVe embedding space, for example. And then we're going to separately encode the premise and hypothesis by, for example, doing the sum or average of the vectors in each of those two texts. And so that gives us a vector xp and xh. And then we might concatenate those two or do something, some other kind of comparison like difference, or max, or mean, to get a single fixed dimensional representation x that is then the input to a kind of, could be a simple softmax classifier. So all we've done here is take our old approach using distributed representations as features and move it into the NLI problem, where we have both the premise and hypothesis. And I've called this a sentence encoding model because we are separately encoding the two sentences. And then the model is going to learn, we hope, something about how those two representations interact. On this slide and the next, I've given a complete recipe for doing exactly what I just described. I'm not going to linger over it here because it's also in the notebooks. And it just shows you how using our course code-- it can be relatively easy to set up models like this. Most of the code is devoted to doing the low level processing of the words into their embedding space. Here's the rationale for sentence-encoding models. I think this is kind of interesting, right? We might want to encode the premise and hypothesis separately in order to give the model a chance to find rich abstract relationships between them. The sentence encoding approach might also facilitate transfer to other kinds of tasks, right, to the extent that we are separately encoding the two sentences. We might have sentence level representations that are useful even for problems that don't fit into the specific NLI mode of having a single premise and a single hypothesis for the sake of classification. And that can be an important part of that vision from Dagan et al that NLI is a kind of source of effective pretraining for more general problems. Let's move to a more complex model. We'll follow the same narrative that we've used before. We just had that simple fixed model that was going to combine the premise and hypothesis via some fixed function, like sum, or average, or max. Here, we're going to have functions that learn about how those interactions should happen, but we're going to follow the sentence encoding mode. So I have our same example. "Every dog danced," and, "Every poodle moved." And the idea is that each one of those is processed by its own, separate recurrent neural network. And I've indicated in green that although these two models would have the same structure, these are different parameters for the premise and hypothesis. So they function separately and then in the simplest approach, we would take the final hidden representation from each of those and combine them somehow, probably would be a concatenation. And that would be the input to the final classifier layer or layers that actually learn the NLI problem. 
So it's a sentence-encoding approach in the sense that h3 and hd are taken to be kind of separate summary representations of the premise and hypothesis respectively. And we have a vision that those representations might be independently useful even outside of the NLI context. Now, in the associated notebook, nli_02_models, there are a bunch of different implementations including a full PyTorch implementation, using our PyTorch-based classes, of the sentence-encoding RNN approach that I just described to you. And I thought I would just briefly give you a high level overview of how that modeling approach works, because there are actually just a few moving pieces. And the rest is kind of low level implementation details. So the first thing that you need to do is modify the dataset class so that conceptually, it is going to create lists of pairs of examples with their lengths and their associated labels. By default, the underlying code that we're using expects one sequence of tokens, one length, and one label. And here, we just need to raise that up so that we have two, as you can see here. Every dog danced. Every poodle moved. Both happen to have length 3, and their label is entailment. So we make some changes to the dataset class to accommodate that change in format, essentially. Then the core model for this is conceptually just two RNNs. And the forward method is just essentially bringing those two pieces together and feeding them to the subsequent classifier layers. And so that's very conceptually natural, and it's just down to having two separate RNNs that you implement using the raw materials that are already there in the code. And then finally for the actual interface, the TorchRNNSentenceEncoderClassifier, this is basically unchanged, with the one modification that you need to change the method that handles prediction, because it too needs to deal with this different dataset format that we've established up here. And that is, again, a kind of low level change. And so what I hope you're seeing is that the first and third steps are kind of managing the data. And the middle step is the one that actually modifies the computation graph. But that step is very intuitive, because we're basically just reflecting, in code, our idea that we have separate RNNs for the premise and hypothesis. And then finally, I just want to mention that a common approach you see, especially in the early literature, is a sentence-encoding TreeNN that has exactly the same intuition behind it as the RNNs that we just looked at, except that the premise and hypothesis are processed by tree structured recursive neural networks. And since the underlying datasets often have full parse representations, this is an avenue that you could explore. It can be tricky to implement these efficiently, but conceptually, it's a kind of very natural thing, where you just repeatedly have a dense layer at every one of the constituent nodes, on up to a final representation here, pb and pd. That is then fed into the classifier layer in just the way we've done for the previous models. So those are the sentence-encoding RNNs; now let's move to a different vision. And I've called these chain models because they're going to just mush together the premise and hypothesis, as opposed to separately encoding the two.
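Before turning to the chain models, here is a hedged PyTorch sketch of the two-RNN sentence-encoding idea just described: separate LSTMs for the premise and hypothesis, final states concatenated and fed to a classifier. It is an illustration only, not the course's TorchRNNSentenceEncoderClassifier, and it assumes padded integer-id tensors as inputs.

```python
# Hedged sketch of the sentence-encoding RNN idea described above: two separate
# LSTMs for premise and hypothesis, final states concatenated and classified.
# This is an illustration, not the course's TorchRNNSentenceEncoderClassifier.
import torch
import torch.nn as nn

class SentenceEncodingRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=50, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Same structure, different parameters, for premise and hypothesis:
        self.premise_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hypothesis_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes))

    def forward(self, premise_ids, hypothesis_ids):
        _, (h_p, _) = self.premise_rnn(self.embedding(premise_ids))
        _, (h_h, _) = self.hypothesis_rnn(self.embedding(hypothesis_ids))
        # h_p and h_h have shape (1, batch, hidden); take the last layer and concatenate.
        combined = torch.cat([h_p[-1], h_h[-1]], dim=1)
        return self.classifier(combined)

# Tiny usage example with fake token ids (batch of 2, sequences of length 3):
model = SentenceEncodingRNN(vocab_size=100)
premise = torch.randint(1, 100, (2, 3))
hypothesis = torch.randint(1, 100, (2, 3))
logits = model(premise, hypothesis)   # shape (2, 3): one score per NLI label
```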
Of course, the simplest thing we could do in the chain mode would be to essentially ignore the fact that we have two texts, the premise and hypothesis, and just feed them in as one long sequence into a standard recurrent neural network. And since that involves no changes to any of the code we've been using for RNN classifiers, it seems like a pretty natural baseline. And so that's depicted here, and actually this can be surprisingly effective, all right? The rationale for doing this? In this context here, you could say that the premise is simply establishing context for processing the hypothesis. And that seems like a very natural notion of conditioning on one text as you process the second one. And correspondingly, at the level of human language processing, this might actually correspond to something that we do as we read through a premise hypothesis text, and figure out what the relationship is. And here's a simple recipe for doing this. The one change from the diagram that you might think about is what I did when representing the examples here: flatten them out, of course, but also insert this boundary marker. That would at least give the model a chance to learn that there was a separation happening, some kind of transition between the premise and hypothesis. But that's just at the level of feature creation, and in terms of modeling, you hardly need to make any changes in order to run this kind of experiment. We can also think about a modification that would bring together sentence encoding with chains. And this would be where we have two sets of RNN parameters-- one for the premise and one for the hypothesis. But we nonetheless chain them together instead of separately encoding them. So as before, I have a premise RNN in green. I have a hypothesis RNN in purple. They have the same structure but different learned parameters. And the handoff is, essentially, that the initial hidden state for the hypothesis is the final output state for the premise. And then that way, you get a seamless transition between these two models. And this would allow the model to learn that premise tokens and premise sequences have a different status than those that appear in the hypothesis. And let me just close by mentioning a few other strategies, because this is by no means exhaustive, but it's kind of interesting at the high level of architecture thinking about sentence encoding versus these chained models. So first, the TorchRNNClassifier feeds its final hidden state directly to the classifier layer. But we have options like bidirectional equals true, which would use as the summary representation both the final and the initial hidden states, essentially, and feed those into the classifier. So it's a different notion of sentence encoding or of sequence encoding. And other ideas here, right? So we could, instead of restricting just to one or a few of the final states, do some kind of pooling with max or mean across all of the output states. And different pooling options can be combined with different versions of these models, either sentence encoding or chained. We could also, of course, have additional hidden layers between the classifier layer and the embedding. I've shown you just one for the sake of simplicity, but deeper might be better, especially for the very large NLI datasets that we have. And finally, an important source of innovation in this and many other spaces is the idea of adding attention mechanisms to these models.
And that's such an important idea that I'm going to save it for the next screencast in the series.
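To round out this section, here is a hedged sketch of the chained variant described above, in which the premise and hypothesis have separate RNN parameters but the hypothesis RNN is initialized with the premise RNN's final state. It is an illustration only, not the course implementation; the simpler flattened-sequence baseline needs no new model code at all beyond concatenating the two token lists with a boundary marker.

```python
# Hedged sketch of the chained variant described above: separate parameters for
# premise and hypothesis, but the hypothesis LSTM starts from the premise LSTM's
# final state. Illustration only, not the course implementation.
import torch
import torch.nn as nn

class ChainedRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=50, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.premise_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.hypothesis_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, premise_ids, hypothesis_ids):
        _, premise_state = self.premise_rnn(self.embedding(premise_ids))
        # Handoff: the hypothesis RNN is initialized with the premise RNN's final state.
        _, (h_h, _) = self.hypothesis_rnn(self.embedding(hypothesis_ids),
                                          premise_state)
        return self.classifier(h_h[-1])

model = ChainedRNN(vocab_size=100)
logits = model(torch.randint(1, 100, (2, 3)), torch.randint(1, 100, (2, 4)))
```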
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Overview_of_Analysis_Methods_in_NLP_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is the first screencast in our series on Analysis Methods in NLP. This is one of my favorite units in the course because it's directly oriented toward helping you do even better final projects. Now there's a lot we could discuss under the rubric of analysis methods in NLP. I've chosen four things. The first two, fall under the heading of behavioral evaluations. We'll talk about adversarial testing, which is a very flexible way for you to expose that your system might have some weaknesses or fail to capture some linguistic phenomenon in a very systematic way. And then at this point, we also have the opportunity for a number of tasks to do adversarial training and testing, and these would be large dataset that are full of examples that we know are difficult for present day architectures. So for every architecture you're exploring, this would be a chance to really stress test that architecture. And then we're going to move beyond behavioral evaluations to talk about what I've called structural evaluation methods. And these include probing and feature attribution. And these are techniques that you could use to peer inside your system and gain an understanding of what its hidden representations are like, and how those representations are impacting the model's predictions. The motivations for this are many. Here are just a few high-level ones that are kind of oriented toward projects. The first, is just that we might want to find the limits of the system that you're developing. All our systems have limitations and finding them is always scientifically useful. We might just also want to understand your system's behavior better. What are its internal representations like, and how are they feeding into its final predictions and its overall behaviors? That's also just incredibly rewarding. And both of these things might feed into just achieving more robust systems. To the extent that we can find weaknesses and understand behaviors, we can possibly take steps toward building even more robust systems. And as I said, all of this is oriented toward your final projects. The techniques that we're discussing are powerful and easy ways to improve the analysis section of a paper. Analysis sections are important, but it can be difficult to write them. It feels very open ended and often very unstructured. People talk in general ways about doing error analysis and so forth, but it can be hard to pinpoint exactly what would be productive. I think the methods that we're talking about here are very generally applicable and can lead to really productive and rich analysis sections. Let's begin with adversarial testing, this is a mode that we've talked about before. The examples on this slide are from this now classic paper Glockner et al., 2018, called "Breaking NLI." And what they did is only really mildly adversarial. It's just kind of a challenge. And it exposes some lack of systematicity in certain NLI models. So here's what they did. They began from SNLI examples like, "a little girl is kneeling in the dirt crying" entails "a little girl is very sad." And they simply use lexical resources to change the hypothesis by one word. So that it now reads, "a little girl is very unhappy." We would expect a system that truly understood the reasoning involved in these examples to continue to predict entail in the second case because these examples are roughly synonymous. 
But what they found is that systems would often start to predict contradiction, possibly because of the negation that occurs here. The second example is similar. We begin from the SNLI example, "an elderly couple are sitting outside a restaurant enjoying wine" entails "a couple drinking wine." And here they just changed wine to champagne. What we would expect is that a system that knew about these lexical items and their relations would flip to predicting neutral in this case. But as you might imagine, systems continue to predict entails because they have only a very fuzzy understanding of how wine and champagne are related to each other. Here is the results table, and recall this is a 2018 paper. And what they're mainly testing here are models that we might regard as precursors to the transformers that we've been so focused on. And the picture is very clear. These models do well on the SNLI test set, mid-to-high 80s, but their performance plummets on this new adversarial test set. There are two exceptions down here, this WordNet baseline and the KIM architecture. But it's important to note that these models effectively had access, directly in the case of WordNet and indirectly in the case of KIM, to a lexical resource that was used to create the adversarial test. And so they don't see such a large performance drop here. But even still, all of these numbers are kind of modest at this point. And I told you that this was an interesting story. Here's the interesting twist. At this point in 2021, you can simply download RoBERTa-MNLI-- that's the RoBERTa parameters fine-tuned on the MultiNLI data set-- and run this adversarial test. And what you find is that model does astoundingly well on the Breaking NLI data set. I would focus on these two f1-scores here for the two classes where we have a lot of support, contradiction and entailment. The numbers are above 90, as is the accuracy here, which is directly comparable to the numbers that Glockner et al. reported. An amazing accomplishment-- recall that the original examples from the adversarial test are from SNLI, and this is MultiNLI. It was not developed specifically to solve this adversarial test. And nonetheless, it looks like RoBERTa has systematic knowledge of the lexical relations involved and required to solve this adversarial test-- so possibly a mark of real progress. As I said, you can also, for select tasks, move into the mode of doing adversarial training and testing. Here are the cases I know where the dataset is large enough to support training and testing on examples that were created via some adversarial dynamic-- common sense reasoning, natural language inference, question answering, sentiment and hate speech. And as I said, this is a really exciting opportunity to see just how robust your system is when exposed to examples that we know are difficult for modern architectures, because that's how these datasets were designed. Now let's move beyond the behavioral mode. We'll start with probing of internal representations. Probes are, typically, little supervised models that you fit on the internal representations of your model of interest to expose what those hidden representations latently encode. This is from a classic paper by Ian Tenney et al., 2019. And what we have along the x-axis is the BERT layers, starting from the embedding layer and going to 24. This is BERT large, so there are 24 layers. And the picture is quite striking.
As you start from the top here and move down, you can see that as we move from more syntactic things up into more discourse-y semantic content, like co-ref and relation extraction, you find that the higher layers of the BERT model are encoding that information latently. That's what these probing results reveal in this picture. It's quite a striking look at what the pretraining process, in this case of BERT, is learning latently about the structures of language. And then we'll finally talk about feature attribution, which is one step further in this more introspective mode, because here, as you'll see, I think we can get a really deep picture of how individual features and representations are directly related to the model's predictions. And what I've done here is use the Integrated Gradients method, which is the method that we'll focus on. I ran it on a sentiment model. And you can see here, we have the true label, the predicted label with the probability, and then we have word level importances, as measured by Integrated Gradients, where blue means it's a bias toward positive predictions, and red means it's a bias toward negative predictions. And I've picked an example that I think kind of stress tests the model. It's a little bit adversarial because all these examples involve "mean" in the sense of good, as in "a mean apple pie," meaning a delicious or a good one. And you can see that, by and large, this model's predictions are pretty systematic. It's mostly predicting positive for variants like "they sell," "they make," "he makes," although this last one, "he sells," might worry us a little bit because it has flipped to negative, despite the changes to the example being truly incidental. And this might point to a way in which the model does or doesn't have knowledge of how the individual components of these examples should be feeding into the final predictions that the model makes. I think that's a wonderful opportunity to get a sense for how robust the model is actually going to be to variations, like the one that you see here.
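Here is a hedged sketch of the probing idea described above: freeze a pretrained model, extract hidden states from one layer with the Hugging Face Transformers library, and fit a small supervised probe on top. The sentences, labels, and layer choice are placeholders; real probing studies like Tenney et al. 2019 use proper task datasets and more careful controls.

```python
# Hedged sketch of the probing idea: freeze a pretrained model, pull out hidden
# states from one layer, and fit a small supervised probe on them. The
# sentences, labels, and probing task below are placeholders.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def layer_reps(sentences, layer=8):
    """[CLS] representation from the given layer for each sentence."""
    with torch.no_grad():
        enc = tokenizer(sentences, padding=True, truncation=True,
                        return_tensors="pt")
        out = model(**enc, output_hidden_states=True)
    return out.hidden_states[layer][:, 0].numpy()   # (n_sentences, 768)

# Placeholder probing task: does the sentence contain negation?
sentences = ["I did not like it", "I liked it", "It was not bad", "It was good"]
labels = [1, 0, 1, 0]

probe = LogisticRegression().fit(layer_reps(sentences), labels)
# Comparing held-out probe accuracy across layers (with many more examples)
# gives a picture of which layers latently encode the property of interest.
```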
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Speakers_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part two in our series on grounded language understanding. Our task for this unit is essentially a natural language generation task. And I've called those speakers, the idea is that speakers go from the world, that is some non-linguistic thing that they're trying to communicate about, into language. Those are really the central agents that will be explored. To ground all this, we're going to have a simple task. I'm going to start with the most basic version of the task that we'll ultimately tackle in our assignment and bake-off. And that is color reference. So these are examples taken from a corpus that was originally collected by Randall Munroe of XKCD fame, and processed into an NLP task by McMahan and Stone 2015. And it's a simple formulation in that the state of the world we want to communicate about is a color patch. And the task is simply to produce descriptions of those color patches. Now I've given some examples here. And you can see that they range from simple one word descriptions, all the way up to things that are kind of complicated both cognitively and linguistically, and I think point to the idea that even though this is a simple and constrained domain, it's a pretty cognitively and linguistically interesting one. So our speakers, at least our baseline speakers, are standard versions of encoder decoder models. We're going to have for this initial formulation a very simple encoder. The task of the encoder is simply to take a color representation, which is going to be a list of floats embedded in some embedding space, and then learn some hidden representation for that color. And that's all that needs to happen. So it's just one step. The decoder is where the speaking part happens. So the initial token produced by the decoder, by the speaker, is always the start token which is looked up in an embedding space. And then we get our first decoder hidden state, which is created from the color representation of the initial hidden state in the sequence we're going to build, together with an embedding. And both of those have weight transformations. And it's an additive combination of them that delivers this value h1 here. Then we use some softmax parameters to make a prediction about the next token. Here we've predicted "dark." And we get our error signal by comparing that prediction with the actual token that occurred in our training data. In this case, it was the word "light." So since we made a wrong prediction, we're going to get a substantive error signal that will then, we hope, update the weight parameters throughout this model in a way that leads them to produce better generations the next time. In a little more detail, just as a reminder. So we have an embedding for that start token, and indeed for all tokens. The hidden state is derived from the embedding via a weight transformation, and the color representation, which is state h0 in the recurrence that we're building. And that too has a transformation applied to it, to travel through the hidden layer. That gives us the state H1. And then we have softmax parameters on top of that h1 that make a prediction. The prediction that they make is a prediction over the entire vocabulary. And the probability of the actual token gives us our error signals. So the probability of "light" is the error signal that we'll use here to update the model parameters. And then we begin with the next timestep. 
I've called this teacher forcing, because in the standard mode, which is the teacher forcing mode, even though we predicted "dark" at timestep 1, we're going to have, as our second token, the token "light," which is the actual token in the underlying training data. And we'll proceed as though we did not make a mistake. So again, we do an embedding look up, we get our second hidden state for the decoder, that's a combination of that embedding and the previous hidden state, and we make another prediction. And in this case, our prediction is "blue" and that's the actual token. And life is good for a little bit. And then we proceed with a third timestep. The actual token is "blue," giving us h3. We predict "green." And in this case, we should have predicted the stop token, which would cause us to stop processing the sequence. We're just going to get an error signal as we standardly would and propagate that back down through the model, in hopes that the next time, when we want to stop, we'll actually produce this stop token that I've given up here. At prediction time, of course, the sequence is not given. That doesn't change the encoder, because the color representation is part of the model inputs. So then we have to decode and just describe without any feedback. So we proceed as we did before and we predict "dark" here. And then "dark" has to become the token at the next timestep, because we don't know what the ground truth is. And we proceed as before and say "blue." And then that becomes the third time step, and with luck, there in that third position, we predict the stop token. And the decoding process is completed. That is the fundamental model. Even though it's simple, it admits of many interesting modifications. Let me just mention a few of them. First, the encoder and the decoder, of course, could have many more hidden layers. Mine just had one. But they could be very deep networks. We would expect that the layer counts for the encoder and the decoder match, so that you have this even handoff from encoder to decoder across all the hidden layers. But even that's not a hard constraint. I can imagine that some pooling or copying could accommodate different numbers of layers in these two components. It's very common at present for researchers to tie the embedding and classifier parameters, right? The embedding gives us a representation for every vocabulary item. And the transpose of that can serve as the set of parameters for a softmax classifier when we predict tokens. And tying those weights seems to be very productive in terms of optimization effectiveness. So you might consider that. And finally, during training, we might drop that teacher forcing assumption, which would mean that in a small percentage of cases, we would allow the model to just proceed as though its predicted token was the correct token for the next time step, even if that was a faulty assumption, on the idea that that might help the model explore a wider range of the space and inject generations with some helpful diversity. And then there's one other modification that I want to mention, because you'll see this as part of the homework and the system that you're developing. So we found, in Monroe et al. 2016, that it was helpful to remind the decoder at each one of its timesteps about what it was trying to describe. So in more detail, we had HSV color representations as our inputs. We did a Fourier transform to get an embedding. And that was processed into a hidden state.
And then during decoding, we appended to each one of the embeddings the Fourier transformation representation of the color, as a kind of informal reminder at each timestep about what the input was actually like, on the assumption that for long sequences, where we get all the way down to the end, the model might have a hazy memory of what it's trying to describe. And this functions as a kind of reminder at that point. And that proved to be very effective, and I encourage you to explore that in the homework. And then I hope you can see that even though this task formulation is simple, it's an instance of a wide range of tasks that we might explore under the heading of grounding. After all, for grounding in this sense, we just need some non-linguistic representation coming in, and the idea is that we'll generate language in response to that input. So image captioning is an instance of this. Scene description, of course, is another instance. Visual question answering is a slight modification where the input is not just an image, but also a question text, and the idea is that you want to produce an answer to that question relative to the image input. And then instruction giving would be a more general form where the input is some kind of state description and the idea is that we want to offer a complicated instruction on that basis. And I think we can think of many others that would fit into this mold and benefit not only from the encoder decoder architecture, but also from conceptualization explicitly as grounded natural language generation tasks.
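Here is a minimal PyTorch sketch of the color-conditioned decoder with teacher forcing described in this screencast. The color featurization, vocabulary, and class name are placeholders rather than the assignment's code; the Monroe et al. 2016 trick of re-appending the color features at each decoder step could be added by concatenating them to each input embedding (and widening the GRU's input size accordingly).

```python
# Hedged sketch of the encoder-decoder speaker described above: the color
# representation initializes the decoder's hidden state, and training uses
# teacher forcing (the gold previous token is fed at each step). The color
# featurization and vocabulary here are placeholders, not the assignment code.
import torch
import torch.nn as nn

class ColorSpeaker(nn.Module):
    def __init__(self, vocab_size, color_dim=3, embed_dim=50, hidden_dim=50):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.Linear(color_dim, hidden_dim)   # one-step "encoder"
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, color, tokens):
        # h0 comes from the color; shape (1, batch, hidden).
        h0 = torch.tanh(self.encoder(color)).unsqueeze(0)
        # Teacher forcing: feed the gold tokens (minus the final one) as inputs.
        embeds = self.embedding(tokens[:, :-1])
        states, _ = self.decoder(embeds, h0)
        return self.output(states)        # logits for predicting tokens[:, 1:]

# Toy usage: batch of 2 colors (e.g., RGB or HSV floats) and gold sequences
# "<s> light blue </s>" encoded as integer ids (1 = <s>, 2 = </s>).
model = ColorSpeaker(vocab_size=20)
colors = torch.rand(2, 3)
gold = torch.tensor([[1, 5, 7, 2], [1, 6, 7, 2]])
logits = model(colors, gold)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 20), gold[:, 1:].reshape(-1))
```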
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Contextual_Representation_Models_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRIS POTTS: Welcome, everyone, to our first screencast on contextual word representations. My goal here is to give you an overview for this unit and also give you a sense for the conceptual landscape. Let's start with the associated materials. You might think that the name of the game for this unit is to get you to the point where you can work productively with this notebook called finetuning, which shows you how to fine-tune contextual word representations for classification problems. I think that could be a very powerful mode for you as you work on the current assignment and bakeoff. For background and intuitions, I highly recommend this paper by Noah Smith. The beating heart of this unit is really the Transformer architecture, which was introduced by Vaswani, et al., 2017 in a paper called "Attention is All You Need." It's a highly readable paper. But I recommend that if you want to read it, you instead read Sasha Rush's outstanding contribution, "The Annotated Transformer." What this does is literally reproduce the text of Vaswani, et al., 2017 with PyTorch code woven in, culminating in a complete implementation of the Transformer as applied to problems in machine translation. This is a wonderful contribution in the sense that to the extent that there are points of unclarity or uncertainty in the original text, they are fully resolved by Sasha's code. And of course, this can give you a really good example of how to do efficient and effective implementation of model architectures like this using PyTorch. In practical terms, we're going to make extensive use of the Hugging Face Transformers library, which has really opened up access to a wide range of pretrained transformer models. It's very exciting. And it's enabled lots of new things. For us, the central architecture will be BERT. We'll have a separate screencast on that. And we're also going to have a screencast on RoBERTa, which is Robustly Optimized BERT. I think it's an interesting perspective in the sense that they explored more deeply some of the open questions from the original BERT paper. And they also released very powerful pretrained parameters that you could, again, use in the context of your own fine-tuning. And then for a slightly different perspective on these transformers, we're going to look at the ELECTRA architecture, which came from Kevin Clark and colleagues at Stanford and Google. I really like this as a new perspective. There are, of course, many different modes of using the transformer at this point. I'm going to mention a few at the end of the screencast. And just for the sake of time, I've decided to focus on ELECTRA. And you can explore the others in your own research. Let's begin with some intuitions. And I'd like to begin with a linguistic intuition, which has to do with word representations and how they should be shaped by the context. Let's focus on the English verb break. We have a simple example, the vase broke, which means it shattered to pieces. Here's a superficially similar sentence, dawn broke. Now the sense of break is something more like begin. The news broke. Again, a simple intransitive sentence. But now the verb break means something more like publish, or up here, or become known. Sandy broke the world record. This is a transitive use of the verb break. And it means, surpass the previous level. Sandy broke the law is another transitive use. But now it means, Sandy transgressed. The burglar broke into the house is a physical act of transgression on its face. 
The newscaster broke into the movie broadcast is a sense that it's more like interrupt. And we have idioms like break even, which means we neither gained nor lost money. And this is just a few of the many ways that the verb break can be used in English. How many senses are at work here? It's very hard to say. It could be one. It could be two. It could be ten. It's very hard to delimit word senses. But it is very clear from this data that our sense for the verb break is being shaped by the immediate linguistic context. Here are a few additional examples, things like flat tire, flat beer, flat note, flat surface. It's clear that there is a conceptual core running through all of these uses. But it's also true that a flat tire is a very different sense for flat than we get from flat note or flat surface. I have something similar for throw a party, throw a fight, throw a ball, throw a fit. We have a mixture of what you might call literal and metaphorical here. But again, a kind of common core that we're drawing on. But the bottom line is that the sense for throw, in this case, is very different depending on what kind of linguistic context it's in. And we can extend this to things that seem to turn more on world knowledge. So if you have something like, a crane caught a fish, we have a sense that the crane here is a bird. Whereas, if we have a crane picked up the steel beam, we have a sense that it's a piece of equipment. This seems like something that's guided by our understanding of birds, and equipment, and fish, and beams. And when we have relatively unbiased sentences like, I saw a crane, we're kind of left guessing about which object is involved, the bird or the machine. And we can extend this past world knowledge into things that are more like discourse understanding. So if you have a sentence like, "are there typos? I didn't see any." The sense of "any" here, we have a feeling that something is elided but probably localized on "any." And "any" here means any typos as a result of the preceding linguistic context. "Are there any bookstores downtown? I didn't see any." Same second sentence. But now the sense of "any" is probably going to be something more like bookstores, as a result of the discourse context that it appears in. So all of this is just showing how much individual linguistic units can be shaped by context. Linguists know this deeply. This is a primary thing that linguists try to get a grip on. And I think it's a wonderful point of connection between what linguists do and the way we're representing examples in NLP using contextual models. This is a very exciting development for me as a linguist, as well as an NLPer. Here's another set of intuitions that's related more to things like model architecture and what you might call inductive biases for different model designs. Let's start off here in the left. This is a high-bias model in the sense that it makes a lot of a priori decisions about how we will represent our examples. The idea is that we have three tokens, which we look up in a fixed embedding space. And then we have decided to summarize those embeddings by simply summing them together to get a representation for the entire example. Very few of these components are learned as part of our problem. We've made most of the decisions ahead of time. As we've seen, as we move to a recurrent neural network, we relax some of those assumptions. We're still going to look upwards in a fixed embedding space. 
But now instead of deciding that we know the proper way to combine them is with summation, we're going to learn from our data a very complicated function for combining them. And that will presumably allow us to be more responsive, that is, less biased about what the data are likely to look like. The tree structured architecture down here is an interesting mixture of these ideas. It's like the recurrent neural network. Instead of assuming that I can process the data left-to-right, the data get processed into constituents, like "the Rock" is a constituent, as excluded from "rules" over here. Now this is probably going to be very powerful if we're correct that the language data are structured according to these constituents. Because it will give us a boost in terms of learning. It could be counterproductive, though, to the extent that constituent structure is wrong. And I think that's showing that biases that we impose at the level of our architectures can be helpful, as well as a hindrance, depending on how they align with the data-driven problem that we're trying to solve. In the bottom right here, I have the least biased model in all these sentences, of all the ones depicted here. I've got a recurrent neural network like this one, except now I'm assuming that information can flow bidirectionally, so no longer a presumption of left-to-right. And in addition, I've added these attention mechanisms, which we'll talk a lot about in this unit. But essentially, think of them as ways of creating special connections between all of the hidden units. And the idea here is that we would let the data tell us how to weight all of these various connections and, in turn, represent our examples. We're making very few decisions ahead of time about what connections could be made, and instead of just listening to the data and the learning process. And we are, at this point, on the road toward the Transformer, which is kind of an extreme case of connecting everything with everything else and then allowing the data to tell us how to weight all of those various connections. And that does bring us to this notion of attention, which we've not discussed before. But I think I can introduce the concepts and a bit of the math. And then we'll see them again throughout this unit. So let's start with the simple sentiment example and imagine we're dealing with a recurrent neural network classifier. Our example is really not so good. And we're going to fit the classifier, traditionally, on top of this final hidden state over here. But we might worry about doing that. That by the time we've gotten to this final state, the contribution of these earlier words, which are clearly important linguistically, might be sort of forgotten or overly diffuse. So attention mechanisms would be a way for us to bring that information back in and infuse this representation, hc, with some of those previous important connections. So here's how we do that. We're going to first have some attention scores, which are simply dot products of our target vector with all the preceding hidden states. So that gives us a vector of scores which are traditionally softmax normalized. And then what we do is create a context vector by weighting each of the previous hidden states by its attention weight, and then taking the average of those to give us the context vector K. 
And then we're going to have this special layer here, which concatenates K with our previous final hidden state, and feeds that through this layer of learned parameters and a nonlinearity to give us this new hidden representation, h tilde. And it is h tilde that is finally the input to our softmax classifier. Whereas before, we would have simply directly input hc up here, we now input this more refined version that is drawing on all of these attention connections that we created with these mechanisms. And again, as you'll see, the transformer does this all over the place with all of its representations at various points in its computations. Here's another guiding idea that really shapes how these models work. I've called this word pieces. And we've seen this before. These models typically do not tokenize data in the way that we might expect. I've loaded in a BERT tokenizer. And you can see that for a sentence, like "this isn't too surprising" the result is some pretty familiar tokens, by and large. But when I feed in something like "encode me" the intuitive word "encode" is split apart into two-word pieces. And clearly, we're implicitly assuming that the model, because it's contextual, can figure out that these pieces are in some conceptual sense one word. And you might extend that up to idioms like "out of this world" where we treat them as a bunch of distinct tokens. But we might hope the model can learn that there's an idiomatic unity to that phrase. And this also has the side advantage that for unknown tokens like "Snuffleupagus" it can break them apart into familiar pieces. And we at least have a hope of getting a sensible representation for that out-of-vocabulary item. The result of all this is that these models can get away with having very small vocabularies, precisely because we are relying on them implicitly to be truly contextual. Here's another inspiring idea that we've not encountered before. This is called positional encoding. And it's another way in which we can capture sensitivity of words to their contexts. So as you'll see, when you go all the way down inside the transformer architecture, you do have a traditional static embedding of the sort we discussed in the first unit for this course. Those are in light gray here, fixed representations for the words. However, in the context of a model like BERT, what we traditionally think of as its embedding representation is actually a combination of that fixed embedding and a separate embedding space called the positional embedding, where we have learned representations for each position in a sequence. This has the intriguing property that one and the same word, like the, will have a different embedding in the sense of the green representation here, depending on where it appears in your sequence. So right from the get-go, we have a notion of context sensitivity even before we've started to connect things in all sorts of interesting ways. Now let's move to some current issues and efforts, some high-level things that you might think about as you work through this unit. This is a really nice graph from the ELECTRA paper from Clark, et al. Along the x-axis here, we have floating point operations, which you could think of as a kind of basic measure of compute resources needed to create these representations. And along the y-axis, we have GLUE score. So that's like a standard NLU benchmark. And the point of this plot here is that we're reaching kind of diminishing returns. 
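Here is a rough sketch of that attention computation, the scores, the softmax weights, the context vector, and the h-tilde layer, written as plain NumPy with made-up dimensions. It is not any particular library's API, just the arithmetic described above.

    import numpy as np

    def attention_readout(hidden_states, W):
        # hidden_states: RNN hidden vectors h_1 ... h_c; the target is the final state h_c.
        h_c = hidden_states[-1]
        scores = np.array([h_c @ h_t for h_t in hidden_states[:-1]])  # dot-product scores
        weights = np.exp(scores) / np.exp(scores).sum()               # softmax normalization
        # Context vector: attention-weighted combination of the earlier hidden states.
        k = sum(w * h_t for w, h_t in zip(weights, hidden_states[:-1]))
        # Concatenate k with h_c and pass through learned parameters W and a nonlinearity
        # to get the refined representation h-tilde that feeds the softmax classifier.
        return np.tanh(W @ np.concatenate([k, h_c]))

    d = 4
    states = [np.random.randn(d) for _ in range(5)]
    h_tilde = attention_readout(states, W=np.random.randn(d, 2 * d))  # W is learned in a real model

With that sketch in mind, back to the compute-versus-accuracy picture in that ELECTRA plot.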
So we had rapid increases from GloVe, GPT, and up through to BERT, where we're really doing much better on these GLUE scores. We're increasing the floating point operations. But it seems to be commensurate with how we're doing on the benchmark. But now with these larger models like XLNet and RoBERTa, it's arguably the case that we're reaching diminishing returns. RoBERTa involves more than 3,000 times the floating point operations of GloVe. But it's not that much better along this y-axis than some of its simpler variants like BERT-Base. And so this is something we should think about when we think about the costs in terms of money, and the environment, and energy, and so forth when we think about developing these large models. And here's a really extreme case. Who knows how long we can train these things or how much benefit we'll get when we do so. But at a certain point, we're likely to incur costs that are larger than any gains that we can justify on the problems we're trying to solve. And that leads us to this lovely paper, which talks about the environmental footprint of training these really big models. And it just shows that training a big transformer from scratch really incurs a large environmental cost. That's certainly something we should have in mind as we think about using these models. For me, it's a complicated question though. Because it's offset by the fact that, by and large, all of us aren't training these from scratch, but rather, benefiting from publicly available pretrained representations. So while the pretraining for that one version had a large environmental cost, it feels like it's kind of offset by the fact that a lot of us are benefiting from it. And it might be that in aggregate, this is less environmentally costly than the old days when all of us always trained all of our models literally from scratch. I just don't know how to do the calculations here. But I do know that increased access has been empowering and is likely offsetting some of the costs. And a lot of that is due to the contributions of the Hugging Face library. There are a lot of efforts along these same lines to make BERT smaller by compressing it into literally fewer dimensions, other kinds of simplifications of the training process, BERT distillation, and so forth. Here are two outstanding contributions, kind of compendiums of lots of different ideas in this space. And I also highly recommend this lovely paper called "A Primer in BERTology," which explores a lot of different aspects of what we know about BERT and how it works, various variations people have tried, various things that people have done to probe these models, and understand their learning dynamics, and so forth and so on. It's a very rich contribution, certainly can be a resource for you as you think about these models. And just because we don't have time to cover them all, there are a bunch of interesting Transformer variants that we will not be able to discuss in detail. I thought I'd mention them here. SBERT is an attempt to develop sentence-level representations from BERT that are particularly good at finding which sentences are similar to which other sentences according to cosine similarity. And I think that could be a powerful mode of thinking about these representations and also of practical utility if you need to find which sentences are similar to which others. You have probably heard of GPT, the Generative Pre-trained Transformer, in various of its forms. 
You can get GPT-2 from Hugging Face. And you have unfettered access to it. And of course, there's more restrictive access at this point to GPT-3. These are conditional language models, so quite different from BERT. And they might be better than BERT for things like truly conditional language generation. XLNet is an attempt to bring much more context into these models; it builds on the Transformer-XL, which stands for extra long Transformer. So if you need to process long sequences, this might be a good choice. And this is also an attempt to bring in some of the benefits of conditional language models into a mode that is more bidirectional, the way BERT is. T5 is another conditional language model, as is BART. These models might be better choices for you if you need to actually generate language. I think the standard wisdom is that models like BERT and RoBERTa are better if you simply need good representations for fine-tuning on a classification problem, for example. More models will appear every day. And I think it's worth trying to stay up to speed on the various developments in this space. Because this is probably just the tip of the iceberg here.
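As one concrete illustration of how accessible these models have become, here is a hedged sketch of loading GPT-2 through the Hugging Face transformers library for generation. The checkpoint name "gpt2" is the standard public release; the prompt and settings are just placeholders.

    # Requires: pip install transformers
    from transformers import pipeline

    # Load the publicly released GPT-2 checkpoint as a text-generation pipeline.
    generator = pipeline("text-generation", model="gpt2")

    # Conditional generation: the model continues the prompt left to right.
    out = generator("Natural language understanding is", max_length=20, num_return_sequences=1)
    print(out[0]["generated_text"])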
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Hyperparameter_Search_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to part 6 in our series on supervised sentiment analysis. This screencast is going to cover two important methods in this space, hyperparameter search and classifier comparisons. So let's begin with hyperparameter search. And first, I'll just offer the rationale. Let's say that the parameters of a model are those whose values are learned as part of optimizing the model itself. So for the classifiers we've been studying, the parameters are really just the weights that you learn on each of the individual features. And those are the things that are directly targeted by the optimization process. The parameters of a model are typically pretty crisply defined because they kind of follow from the structure mathematically of the model under investigation. Much more diffuse are the hyperparameters. We can say the hyperparameters of a model are any settings that are outside of the optimization process mentioned in 1. So examples from models we've seen, our GloVe and LSA have that dimensionality setting. The model itself gives you no guidance about what to choose for the dimensionality. And the dimensionality is not selected as part of the optimization of the model itself. You have to choose it via some external mechanism, making it a hyperparameter. And GloVe actually has two other additional prominent hyperparameters, xmax and alpha. Again, those are not optimized by the model. You have to select them via some external mechanism. And for the classifiers that we've been studying, we have regularization terms. Those are classic hyperparameters. If you have a deep classifier, then the hidden dimensionalities in the model could also be considered hyperparameters. Learning rates, any core feature of the optimization method itself could be considered hyperparameters. And even things that might be considered kind of architectural, like the activation function and the deep classifier, you might think of it as kind of an intrinsic part of the model that you're evaluating. But since it's an easy choice point for us at this point, you'll be tempted to explore a few different options for that particular architectural choice. And in that way, it could become a hyperparameter. And at this point, even the optimization methods could also emerge as a hyperparameter that you would like to do search over. And so forth and so on, you should probably take a fairly expansive view of what the hyperparameters of your model are if you can. Now here's the crux of the argument. Hyperparameter optimization is crucial to building a persuasive argument. Fundamentally, for any kind of comparison we make, we want to put every model in its very best light. We could take it for granted that for any sufficiently complicated model there's some setting of its hyperparameters that's kind of degenerate and would make the model look very bad. And so you certainly wouldn't want to do any comparisons against that really problematic set of choices. Rather, what we want to do is say, let's put all the models in their best light by choosing optimal hyperparameters for them to the best of our ability. And then we can say that one model is better than the other if it emerges victorious in that very rigorous setting. And the final thing I'll say about this methodologically is that, of course, all hyperparameter tuning must be done only on train and development data. You can consider that all fair game in terms of using it however you want to choose the optimal hyperparameters. 
But once that choice is set, it is fixed. And those are the parameters that you use at test time. And that is the fundamental evaluation that you would use for any kind of model comparison. And at no point should you be tuning these hyperparameters on the test data itself. That would be completely illegitimate. I hope we've made it really easy to do this kind of hyperparameter search in the context of the work you're doing for supervised sentiment analysis. Here are some code snippets that show how that can happen. In loading my libraries, I have a pointer to our sentiment data. And here I have a fixed feature function, which is just a unigram feature function. The change happens inside the model wrapper. Whereas before, essentially, all we did was set up a logistic regression model and then call its fit method, here we set up that model but also established a grid of hyperparameters. These are different choice points for this logistic regression model like whether or not I have a bias term, the value of the regularization parameter, and even the algorithm used for regularization itself, L1 or L2. The model will explore the full grid of these options. It's going to do five-fold cross validation, so test one each five times on different splits of the data. And in that very long search process, it will find what it takes to be the best setting of all of these hyperparameters, of all the combinations that can logically be set. And that is the model that we finally return here, right? So now you can see the value of having a wrapper around these fit methods. Because then I could do all of this extra work without changing the interface to sst.experiment at all. The experiments look just as they did in the previous mode. It's just that they will take a lot longer because you are running dozens and dozens of experiments as part of this exhaustive search of all the possible settings. OK, part 2 is classifier comparison. Once again, begin with the rationale. Suppose you've assessed a baseline Model B and your favorite model M. And your chosen assessment metric favors M. And this seems like a little victory for you. But you should still ask yourself, is M really better, right? Now if the difference between B and M is clearly of practical significance, then you might not need to do anything beyond presenting the numbers, right? If each one of your classification decisions corresponds to something really important in the world and your classifier makes thousands more good predictions than the other model, that might be enough for the argument. But even in that situation, you might ask whether there's variation in how these two models B and M perform. Did you just get lucky when you saw what looked like a practical difference and with minor changes to the initialization or something, you would see very different outcomes? If the answer is possibly yes, then you might still want to do some kind of classifier comparison. Now there's this nice paper by Demsar 2006 that advises using the Wilcoxon signed-rank test for situations in which you can afford to repeatedly assess your two models B and M on different train test splits, right? And we'll talk later in the term about the precise rationale for this. But the idea is just that you would do a lot of experiments on slightly different views of your data and then kind of average across them to get a sense for how the two models compare with each other. In situations where you can't repeatedly assess B and M, McNemar's test is a reasonable alternative. 
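Before getting into the details of McNemar's test, here is a hedged sketch of that kind of hyperparameter grid search, written directly against scikit-learn rather than the course's sst.experiment wrapper; the grid values are just illustrative.

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    def fit_maxent_with_hyperparameter_search(X, y):
        # Cross-validated search over a grid of logistic regression hyperparameters:
        # the bias term, the regularization strength C, and the penalty type.
        basemod = LogisticRegression(solver="liblinear")
        param_grid = {
            "fit_intercept": [True, False],
            "C": [0.01, 0.1, 1.0, 10.0],
            "penalty": ["l1", "l2"]}
        crossvalidator = GridSearchCV(basemod, param_grid, cv=5, scoring="f1_macro")
        crossvalidator.fit(X, y)
        print("Best params:", crossvalidator.best_params_)
        # Return the refit model with the best settings, ready for use at test time.
        return crossvalidator.best_estimator_

Now, back to McNemar's test.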
It operates on the confusion matrices produced by the two models, testing the null hypothesis that the two models have the same error rate. The reason you might opt for McNemar's test is, for example, if you're doing a deep learning experiment where all the models take a few weeks to optimize. Then of course, you can't probably afford to do dozens and dozens of experiments with each one. So you might be compelled to use McNemar's based on one single run of the two models. It's a much weaker argument because, of course, precisely the point is that we might see variation across different runs. And McNemar's is not really going to grapple with that in the way that the Wilcoxon signed-rank test will. But this is arguably better than nothing in most situations. So you might default to McNemar's if the Wilcoxon is too expensive. And let me just show you how easy this can be in the context of our code base. So by way of illustration, what we're essentially going to do is compare logistic regression and naive Bayes. I encourage you, when you're doing these comparisons, to have only one point of variation. So we're going to fix the data and we're going to fix the feature function and compare only the model architectures. You could separately say I'm going to have a single fixed model like logistic regression and explore a few different feature functions. But I would advise against exploring two different feature functions as combined with two different models because when you observe differences in the end, you won't be sure whether that was caused by the model choice or by the feature functions. We want to isolate these things and do systematic comparisons. So here, I'm going to do a systematic comparison of logistic regression and naive Bayes on the SST using the Wilcoxon test. And here's the setup. The function is sst_compare_models. I point it to my training data. You can have two feature functions. But in that case, you should have just one model wrapper. Here, I've got one feature function used for both models. And I'll have these two different wrappers corresponding to the evaluation that I want to do of those two model classes. I'm going to use the Wilcoxon as advised. I'll do 10 trials of each on the train size of 70% of the data. And as always, in this setting, I'll use the macro F1 as my score. So what this will do internally is run 10 experiments on different train test splits for each one of these models. That gives us a score vector, 10 numbers for each model. And then what the Wilcoxon is doing is comparing whether the-- or assessing whether the means of those two score vectors are statistically significantly different. And here it looks like we have some evidence that we can reject the null hypothesis that these models are identical, which is presumably the argument that we were trying to build. Now, of course, that's very expensive because we had to run 20 experiments in this situation. And of course, you could run many more if you were also doing hyperparameter tuning as part of your experimental workflow. So in situations where you can't afford to do something that involves so many experiments, as I said, you couldn't default to McNemar's. That is included in utils.mcnemar. And the return values of SST experiment will give you all the information you need. Essentially from McNemar's you need the actual goal vector of labels and then the two vectors of predictions for each one of your experiments. 
So that's a simple alternative in the situation in which Wilcoxon's was just too expensive.
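For completeness, here is a hedged sketch of the repeated-splits comparison itself, written against generic scikit-learn and scipy rather than the course's sst_compare_models utility. The two models and the count-style features are stand-ins.

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB

    def compare_models(X, y, n_trials=10, train_size=0.7):
        scores_a, scores_b = [], []
        for trial in range(n_trials):
            # A fresh random train/test split for every trial.
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, train_size=train_size, random_state=trial)
            for model, scores in ((LogisticRegression(max_iter=1000), scores_a),
                                  (MultinomialNB(), scores_b)):  # NB assumes non-negative counts
                preds = model.fit(X_tr, y_tr).predict(X_te)
                scores.append(f1_score(y_te, preds, average="macro"))
        # Wilcoxon signed-rank test on the paired vectors of macro-F1 scores.
        stat, pval = stats.wilcoxon(scores_a, scores_b)
        return np.mean(scores_a), np.mean(scores_b), pval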
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
RNN_Classifiers_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome back, everyone. This is part 8 in our series on supervised sentiment analysis, the final screencast in the series. We're going to be talking about recurrent neural network or RNN classifiers. I suppose this is officially our first step into the world of deep learning for sentiment analysis. This slide gives an overview of the model. Let's work through it in some detail. So we've got a single example with three tokens, "the Rock rules." These models are prepared for variable length sequences. But this example happens to have three. And the first step to get this model started is a familiar one. We're going to look up each one of those tokens in what is presumably a fixed embedding space here. So for each token, we'll get a vector representation. The next step is that we have some learned parameters, a weight matrix, wxh. And the subscript indicates that we're going from the inputs x into the hidden layer h. So that's a first transformation and that weight matrix is used at each one of these time steps. There is a second learned weight matrix which I've called whh to indicate that we are now traveling through the hidden layer. And so we start at some initial state h0, which could be an all-0 vector or a randomly initialized vector, or a vector coming from some other component in the model. And that representation is combined with the representation that we derive going vertically up from the embedding, usually in some additive fashion to create this hidden state here h1. And those parameters whh are used again at each one of these time steps. So that we have two learned weight matrices as part of the core structure of this model, the one that takes us from embeddings into the hidden layer and the one that travels us across the hidden layer. And again, those are typically combined in some additive fashion to create these internal hidden representations. Now we can do anything we want with those internal hidden representations. When we use RNNs as classifiers, we do what is arguably the simplest thing, which is take the final representation and use that as the input to a standard softmax classifier. So from the point of view of h3 going to y here, we just have a learned weight matrix for the classifier, maybe also a bias term. But from this point here, this is really just a classifier of the sort we've been studying up until this point in the unit. But of course, we could elaborate this model in all sorts of ways. It could run bidirectionally. We could make more full use of the different hidden representations here. But in the simplest mode, our RNN classifiers will just derive hidden representations at each time step and use the final one as the input to a classifier. Couple of things I would say about this. First, if you would like a further layer of detail on how these models are structured and optimized, I encourage you to look at this pure NumPy reference implementation of an RNN classifier that is included in our course code distribution. I think that's a great way to get a feel for the recursive process of computing through full sequences and then having the error signals back propagate through to update the weight matrices. But for now, I think just understanding the core structure of this model is sufficient. I just want to remind you from the previous screencast, that we're very close to the idea of distributed representations of features that I introduced before. 
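If it helps to see that core recurrence written out, here is a rough NumPy sketch of the forward pass just described. This is not the course's reference implementation; the dimensions are made up and a tanh nonlinearity is assumed for combining the two contributions.

    import numpy as np

    def rnn_forward(token_vectors, W_xh, W_hh, h0):
        # token_vectors: embedding lookups x_1 ... x_n for one example.
        # W_xh maps inputs into the hidden layer; W_hh travels across the hidden layer.
        h = h0
        for x in token_vectors:
            # Additive combination of the input contribution and the previous hidden state.
            h = np.tanh(W_xh @ x + W_hh @ h)
        # The final hidden state is what the softmax classifier sees.
        return h

    rng = np.random.RandomState(0)
    xs = [rng.randn(50) for _ in range(3)]  # stand-ins for "the", "Rock", "rules"
    h_final = rnn_forward(xs, 0.1 * rng.randn(50, 50), 0.1 * rng.randn(50, 50), np.zeros(50))

Everything above that final state is just a standard softmax classifier. With that picture in place, back to the comparison with the distributed representations of features idea.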
Recall that for this mode, what we do is look up each token in an embedding space, just as we do for the RNN. But instead of learning some complicated combination function with a bunch of learned parameters, we simply combine them via sum or average. And that's the basis. That's the input to the classifier here. The RNN can be considered an elaboration of that because instead of assuming that these vectors here will be combined in some simple way like sum or mean, we now have really vast capacity to learn a much more complicated way of combining them that is optimal with respect to the classifier that we're trying to fit. But fundamentally, these are very similar ideas. And if it happened that sum or mean as in this picture was exactly the right function to learn for your data, then the RNN would certainly have the capacity to do that. We just tend to favor the RNN because it can learn, of course, a much wider range of complicated custom functions that are particular to the problem that you've posed. Now so far, we've been operating in a mode which I've called standard RNN data set preparation. Let's linger over that in a little bit of detail. Suppose that we have two examples containing the tokens a, b, a and b, c. Those are our two raw inputs. The first step in the standard mode is to look up each one of those in some list of indices. And then those indices are keyed into an embedding space. And those finally give us the vector representations of each examples. So that really and truly, the inputs of the RNN is a list of vectors. It's just that we typically obtain those vectors by looking them up in a fixed embedding space. And so for example, since a occurs twice in this first example, it is literally repeated as the first and third vectors here. Now I think you can see latent in this picture the possibility that we might drop the embedding space and instead just directly input lists of vectors. And that is one way that we will explore later on in the quarter of using contextual models like BERT. We would simply look up entire token streams and get back lists of vectors and use those as fixed inputs to a model like an RNN. And that's a first step toward fine-tuning models like BERT on problems like the ones we've posed in this unit. So have that idea in mind as we talk next about fine-tuning strategies. Now another practical note. What I've shown you so far is what you would call a simple vanilla RNN. LSTMs, Long Short Term Memory networks are much more powerful models. And we'll kind of default to them when we do experiments. The fundamental issue is that plain RNNs tend to perform poorly with very long sequences. You get that error signal from the classifier there at the final token. But now information has to flow all the way back down through the network. Could be a very long sequence. And the result is that the information coming from that error signal is often lost or distorted. Now LSTM cells are a prominent response to this problem. They introduce mechanisms that control the flow of information and help you avoid the problems of optimization that arise for regular RNNs. Now I'm not going to take the time here to review this mechanism in detail. I would instead recommend these two excellent blog posts that have great diagrams and really detailed discussions. They can do a much better job than I can at really conveying the intuitions visually and also with math. 
And I think you could pick one or both and really pretty quickly gain a deep understanding of precisely how LSTM cells are functioning. The final thing here is just a code snippet to show you how easy it is to use our course code repository to fit models like this. In the context of sentiment analysis, you can again make use of this sst library. And what I've done here is a kind of complicated version showing you a bunch of different features. So in cell 2, you can see that I'm going to have a pointer to GloVe. And I'm going to create a GloVe lookup using the 50 dimensional vectors just to keep things simple. The feature function for this model is not one that returns count dictionaries. It's important for the structure of the model we're going to use that you input raw sequences of tokens. So all we're doing here is down casing the sequence and then splitting on whitespace. Of course, you could do something more sophisticated. The idea, though, is that you want to align with the GloVe vocabulary. Our model wrapper is doing a few things. It's creating a vocabulary and loading it and embedding using this GloVe space. That'll be the initial embedding for our model. And if you leave this step out, you'll have a randomly initialized embedding space which might be fine as well. But presumably GloVe will give us a step up. And then we set up the Torch RNN classifier. And what I've done here is expose a lot of the different keyword arguments, not all of them. There are lots of knobs that you can fiddle with, as is typical for deep learning models. Maybe the one I would call out is that we are using that fixed embedding that we got from GloVe. And I have set early_stopping equals true, which might help you efficiently optimize these models. Otherwise, you'll have to figure out how many iterations you actually want it to run for. And you might run it for much too long or much less time than is needed to get an optimal model. The early stopping options, and there are a few other parameters involved in that, might help you optimize these models efficiently and effectively. In the end though, having set up all that stuff, you call fit as usual and return the trained model. And in that context, you can simply use sst.experiment with these previous components to conduct experiments with RNNs, just as you did for simpler linear models as in previous screencasts. The one change which will be familiar from the previous screencast, is that you need to set vectorize equals false. And that is important because, again, we're going to let the model process these examples. We don't want to pipe everything through some kind of DictVectorizer. That's strictly for handbuilt feature functions and sparse linear models. Here in the land of deep learning, vectorize equals false. And we'll use the components of the model to represent each example as I discussed before.
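To connect this with raw PyTorch, here is a minimal sketch of the same kind of model: an embedding layer, an LSTM, and a classifier on the final hidden state. It is deliberately stripped down compared with the course's TorchRNNClassifier, and all the names and dimensions are placeholders.

    import torch
    import torch.nn as nn

    class TinyRNNClassifier(nn.Module):
        def __init__(self, vocab_size, embed_dim=50, hidden_dim=50, n_classes=3):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, n_classes)

        def forward(self, token_ids):
            # token_ids: (batch_size, seq_len) indices into the embedding space.
            embs = self.embedding(token_ids)
            _, (h_final, _) = self.rnn(embs)      # final hidden state for each sequence
            return self.classifier(h_final[-1])   # logits for the softmax classifier

    model = TinyRNNClassifier(vocab_size=10000)
    logits = model(torch.randint(0, 10000, (4, 7)))  # a fake batch of 4 length-7 examples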
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Relation_Extraction_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
BILL MACCARTNEY: Our topic for today and Wednesday is relation extraction. And this is an exciting topic both because it's a great arena to explore a variety of NLU and machine learning techniques. And because it has so many real world applications as we'll see in a moment. So here's an overview of the next two lectures. I'm going to start by describing the task of relation extraction, what it is, why it matters, and how we might approach it. Then I'll describe the data resources we'll need to make headway on the problem. Next, I'll provide a more precise formulation of the prediction problem that we're taking on and I'll propose a strategy for quantitative evaluation. Then we'll establish some lower bounds on performance by evaluating some very simple approaches to this problem. And finally, will point toward some directions for future exploration. In the first section, I'll start by defining the task of relation extraction, then I'll try to provide some motivation for why it's an important and exciting problem. I'll describe both the vision that originally inspired research in this area and a range of current practical applications for relation extraction. Then I'll describe three paradigms that correspond to three stages in the evolution of approaches to relation extraction. Hand-built patterns, which were dominant in the '80s and '90s. Supervised learning, which became dominant in the 2000s. And distant supervision, which has dominated since about 2010 and will be our main focus. So let's dive in. So the task of relation extraction is about extracting structured knowledge from natural language text. We want to be able to start from a document like this. This could be a new story or a web page. And extract relational triples like founders, PayPal, Elon Musk, and founders, SpaceX, Elon Musk. Next, we find this document and we want to be able to extract, has_spouse, Elon Musk, Talulah Riley. We keep reading another document and we want to extract worked_at, Elon Musk, Tesla Motors. If we can accumulate a large knowledge base of relational triples, we can use it to power question answering and other applications. Building a knowledge base like this manually is slow and expensive. But much of the knowledge that we'd like to capture is already expressed in abundant text on the web. So the aim of relation extraction is to accelerate knowledge base construction by extracting relational triples from natural language text. Here's a nice articulation of the vision for relation extraction. This is from Tom Mitchell, who is the former chair of the machine learning department at CMU. He's also the author of one of the first textbooks on machine learning. By the way, he was also the PhD advisor of Sebastian Thrun and Oren Etzioni. He wrote this piece in 2005 describing a vision for machine reading. And he offered to bet a lobster dinner that by 2015, we will have a computer program capable of automatically reading at least 80% of the factual content on the web, and placing those facts in a structured knowledge base. I think we've come pretty close to achieving that goal. And this is exactly the goal that relation extraction aims at to extract structure knowledge from unstructured text. One of the things that makes relation extraction an exciting topic is the abundance of real world applications. For example, nowadays intelligent assistants like Siri or Google can answer lots of factual questions like who sang "Love Train?" 
To do this, they rely on knowledge bases or KB's containing thousands of relations, millions of entities and billions of individual facts. There are many different strategies for building and maintaining and extending these KB's, but considering how enormous they are and how quickly the world is creating new facts. It's a process that you want to automate as much as possible. So more and more relation extraction from the web is hugely strategic for Apple and Google and other companies. In fact in 2017, Apple spent $200 million to acquire a startup called Lattice, which was co-founded by Stanford Professor Chris Ré, whom some of you may know, specifically for this purpose. Another example is building ontologies. If you're running an app store, you're going to need a taxonomy of categories of apps, and which apps belong to which categories. One category of apps is video games. But if you're a gamer, you know that there are subcategories and sub, subcategories and sub, sub subcategories of video games. And new ones keep appearing and new games appear every day. How are you going to keep your ontology up to date? Well, there's a lot of people writing about video games on the web. So maybe relation extraction can help. The relation between a category and a subcategory, or between a category and an instance of the category can be a target for relation extraction. And similarly, you can imagine using relation extraction to help build or maintain ontologies of car parts or companies or viruses. Another example comes from bioinformatics. So every year, there are thousands of new research articles describing gene regulatory networks. If we can apply relation extraction to these articles to populate a database of gene regulation relationships, then we can begin to apply existing, well understood data mining techniques. We can look for statistical correlations or apply clever graph algorithms to activation networks. The sky's the limit. We've turned something that a machine can't understand into something that a machine can understand. So let's turn to the question of how you'd actually solve this problem. The most obvious way to start is to write down a few patterns which express each relation. So for example, if we want to find new instances of the founders relation. So we can use patterns like X is the founder of Y or X, who founded Y or Y was founded by X. And then if we search a large corpus, we may find sentences like these that match these patterns and allow us to extract the fact that Elon Musk founded SpaceX. So this seems promising and, in fact, this was the dominant paradigm in relation extraction in the early days. But this approach is really limited. The central challenge of relation extraction is the fantastic diversity of language. The multitude of possible ways to express a given relation. For example, each of these sentences also expresses the fact that Elon Musk founded SpaceX. But in these examples, the patterns which connect Elon Musk with SpaceX are not ones that we could have easily anticipated. They might be ones that will never recur again. So to do relation extraction effectively, we need to go beyond hand-built patterns. So around the turn of the century, the machine learning revolution came to the field of NLP and people began to try a new approach, namely supervised learning. So you start by labeling your examples. So these three examples are positive instances of the founders relation. So these are the positive examples. And these two are negative examples. 
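Just to make the pattern-based paradigm concrete, here is a rough sketch with a couple of hypothetical regular expressions. Real systems of that era had many more patterns plus proper entity recognition, but the spirit, and the brittleness, is the same.

    import re

    # Hypothetical hand-built patterns for the founders relation.
    FOUNDER_PATTERNS = [
        re.compile(r"(?P<founder>[A-Z][\w .]+?) is the founder of (?P<org>[A-Z][\w .]+)"),
        re.compile(r"(?P<org>[A-Z][\w .]+?) was founded by (?P<founder>[A-Z][\w .]+)"),
    ]

    def extract_founders(sentence):
        triples = []
        for pattern in FOUNDER_PATTERNS:
            for m in pattern.finditer(sentence):
                triples.append(("founders", m.group("org"), m.group("founder")))
        return triples

    extract_founders("SpaceX was founded by Elon Musk in 2002.")
    # [('founders', 'SpaceX', 'Elon Musk in 2002.')]  <- the sloppy span already shows the brittleness

So much for hand-built patterns; back to the supervised alternative, where we have just labeled some positive and negative examples.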
Now that we have labeled training data, we can train a model. And it could be a simple linear model that uses a bag of words representation and assigns higher weights to words like founder and established that are likely to indicate the founder's relation, or it could be something more complicated. In any case, this was a hugely successful idea. Even simple machine learning models are far better at generalizing to new data than static patterns. But there's a big problem, manually labeling training examples is laborious and time consuming and expensive. And as a consequence, the largest labeled data sets that were produced had only tens of thousands of examples, which by modern standards seems puny. If we want to apply modern machine learning techniques, we need a lot more data. We need a way to leverage vastly greater quantities of training data. The answer appeared around 2010 with an idea called distant supervision, and this is a really big idea. Instead of manually labeling individual examples, we're going to automatically derive the labels from an existing knowledge base. So let's say we already have a KB that contains many examples of the founders relation. So we've got SpaceX and Elon Musk, Apple, and Steve Jobs and so on. And let's say we have a large corpus of text. It can be unlabeled text, raw text, which means that it can be truly enormous. It can be the whole web. What we're going to do is, we're going to simply assume that every sentence which contains a pair of entities which are related in the KB like Elon Musk and SpaceX is a positive example for that relation. And we're going to assume that every sentence which contains a pair of entities that are unrelated in a KB like Elon Musk and Apple is a negative example. Genius. This gives us a way to generate massive quantities of training data practically free. However, you might have some doubts about the validity of those assumptions. So hold that thought. Distant supervision is a really powerful idea, but it has two important limitations. The first is a consequence of making the unreliable assumption that all sentences where related entities co-occur actually express that relation. Inevitably, some of them don't. Like this example, we labeled it as a positive example for the founder's relation, but it doesn't express that relation at all. This doesn't say that Elon Musk is a founder of SpaceX. So this label is a lie, a dirty, dirty lie. Making this assumption blindly has the effect of introducing noise into our training data. Distant supervision is effective in spite of this problem, because it makes it possible to leverage vastly greater quantities of training data. And the benefit of more data outweighs the harm of noisier data. By the way, I feel like I've waited my whole life for the right opportunity to use the Pinocchio emoji. The day finally came and it feels good. The second limitation is that we need an existing KB to start from. We can only train a model to extract new instances of the founder's relation, if we already have many instances of the founder's relation. So while distant supervision is a great way to extend an existing KB, it's not useful for creating a KB containing new relations from scratch.
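Here is a hedged sketch of that core distant supervision labeling step, with a toy knowledge base and corpus standing in for the real resources.

    # Toy knowledge base: known (entity1, entity2) pairs for the founders relation.
    KB_FOUNDERS = {("SpaceX", "Elon Musk"), ("Apple", "Steve Jobs")}

    # Toy corpus: sentences paired with the two entities mentioned in them.
    corpus = [
        ("Elon Musk founded SpaceX in 2002.", ("SpaceX", "Elon Musk")),
        ("Elon Musk toured the Apple campus.", ("Apple", "Elon Musk")),
    ]

    def distantly_label(corpus, kb):
        labeled = []
        for sentence, pair in corpus:
            # The core assumption: a sentence mentioning a related pair is a positive
            # example, and a sentence mentioning an unrelated pair is a negative one.
            labeled.append((sentence, pair, pair in kb))
        return labeled

    training_data = distantly_label(corpus, KB_FOUNDERS)

Note that nothing in this step checks whether a sentence actually expresses the relation, which is exactly the source of the noise just discussed.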
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
Presenting_Your_Work_Final_Papers_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Hello, everyone. Welcome to the first screencast in our series on presenting your research. The purpose of this series is really to help you do outstanding scholarship in the field of NLP. And I'm also going to try to demystify publishing in the field of NLP. To kick it off, I'd like to focus on the final papers that you're writing specifically for this course. Here are some practical details. These links take you to essential things about writing final papers, specifically for CS224U. The first link is possibly the most important. That just enumerates the requirements for the final paper. Please do review it to make sure that you don't get points off for something small that could easily have been corrected just because you didn't conform to our requirements. The next two links are much more about helping you with the substance of this. So the projects file has lots of frequently asked questions and other information about writing final papers for this course. And it also expands out to publishing in the field of NLP. So it has a lot of useful resources when it comes to trying to report out your research to the community. And then this third link here links to some excellent past final projects. Many of them became publications typically after a bunch of additional work after the end of the quarter, but it is exciting that a lot of these really excellent published papers began in this course. It's very inspiring. For your projects, I just want to review a really important point for me that concerns how we'll evaluate your work. This is from the methods unit for the course, but I want to repeat it here just to emphasize it. We will never evaluate a project based on how good the results are. We do recognize that in our field, as in all scientific fields, publications tend to do this because they have additional constraints on space. And that leads them, as a cultural fact, to favor positive evidence for new developments over negative results. But we, of course, are not subject to such constraints. So we can do the right and good thing scientifically of valuing positive results, negative results, and everything in between. This has real consequences for how we do evaluation, right? We're going to evaluate your project based on the appropriateness of the metrics that you chose, the strength of the methods that you used, and maybe most importantly, the extent to which your paper is open and clear sighted about the limitations of its findings. Those are the things that really matter for us scientifically. And it is a consequence of this policy that you could have a paper that reported state of the art results. But if it's just not a clear and substantive paper, it might not get very good marks from us. And conversely, if all of your hypotheses fell apart and it turned out that all your evidence pointed away from them as being true, but you nonetheless wrote a paper that was clear about those findings and helped push the field forward by steering us away from those hypotheses, that, of course, could earn top marks. And we'd be very happy to help you report those results out to the rest of the field. Because it is important for us to know about these negative findings so that we know where to invest our energy as scholars. Here's a detail that's from the requirements for the final paper. This is the authorship statement. This is just a section where you explain how the individual authors from your team and anyone else who helped out contributed to the final project. 
You're free to say whatever you like in these sections. If you would like a model, here's a link to the PNAS guidelines, which give some details and kind of typical statements. The rationale is really just that we think this is an important aspect of such scholarship in general. And it's not yet pervasive in the field of NLP, whereas it is in other fields. But we would like it to be more widespread, because it just seems like a healthy form of disclosure. That's the real rationale. I want to emphasize that only in really extreme cases and after discussion with all the team members would we consider giving separate grades to the team based on what was in the statement. This is really not the intent. The intent rather is this rationale of just disclosing who did what as part of the project. It's really not about evaluation. I also want to emphasize that we have a policy on multiple submissions for this course. It's nuanced and subjective. Here's a link to it. And here are some notes, kind of rationale for this. So first, the policy mirrors the policy on multiple submissions to conferences, right? You can't take the same paper and submit it to two different venues with minor modifications and expect to get two publications out of it. And the same thing holds for us when we think about requirements for final papers for this course. This is designed to ensure that your project is a substantial new effort. This does mean that you can't merely submit an incremental advancement over another project that you did. We are trying to push back against the pattern where people would take final projects from previous courses, add a couple of new models, and submit those as entirely new papers. That's just unfair to the people who are starting from scratch. And it's really not the sort of work that we would say is up to the level of a final project for a course like this. Other courses might have different policies at Stanford, but that fact alone is not going to lead us to change our policy. Because we do think that this is equitable and also reflecting values that are pervasive in our field as you can see from the policies on submission to conferences. If any of these policies seem relevant to your work, for example, if you are taking a previous course project and developing it in lots of fresh and new ways, start the discussion with your mentor as early as possible to make sure that they're in the loop about what you're doing. We don't want any surprises when you submit your final paper or after that when it comes to this policy. So just make sure everyone is in the know, and I predict that things will go fine. To close, a brief note about impact statements. For now, an impact statement is an optional section for your final paper. Absolutely not required. But this has been on my mind a lot lately, and I think it's really healthy that the field is moving toward having authors include impact statements. And so I thought I would exhort you all to consider having that as part of your final paper as well. It does not count against your length limits, and it's up to you exactly what you would disclose as part of the statement. Here are some examples of things that might include. You could try to enumerate both the benefits and the risks of your research to individuals, to society, to the world. 
Specifically for the risks, you could talk about costs, again, to the participants, to society, to the planet, where, for example, participant costs would be if you had human annotators doing a really difficult or kind of negative annotation project. You might mention that they paid a certain cost and think about whether the costs were worthwhile. You could also think about costs to society, and that would really probably turn on sort of misapplication of your ideas in ways that might have more harm than good. And, of course, if you trained a really large language model or did really a lot of experiments, you could think about the cost to the planet in terms of energy expenditures, and so forth. Just by way of getting us all to think about the fact that our research does have costs and that we should all the time be thinking about the cost benefit analysis when it comes to the work that we do. And these disclosures are part of helping us all have that in mind. And finally, I think it might be really inspiring for you to think about responsible use of your data, models, and findings. Never mind really evil actors. There are likely to be people out there who are well-meaning and would like to apply your ideas, but they might be unsure of the limits. Unsure of precisely how to do that responsibly. So guidance that you could offer about where your ideas work and where they don't, or where your data are relevant and where they're irrelevant could really help someone who is trying to just make responsible use of your ideas. You could think about them as part of crafting this impact statement. For other resources, I think it's really great to go through the exercise of doing a data sheet and a model card. A data sheet is a disclosure about a data set that you created or used. And a model card is a similar structured document for models that you've developed and released out into the world. They're both pretty long documents. So it's a lot of work to do one in full, but it's very rewarding in the sense that it helps you confront some hard truths about the work that you did and articulate the limits of the work that you did. All these things are really helpful for your scholarship. And, of course, these things are helpful when it comes to other people consuming your ideas. So that's highly encouraged. And you could take bits and pieces from those structured documents and include them in maybe a shorter impact statement that you write. And for even more guidance on this, you could check out this survey of NeurIPS impact statements. It has a lot of information about the kinds of things people are disclosing in these statements. And that too could help you kind of figure out what you want to say and what might be relevant to your audience. So again, this is entirely optional, but I hope this is inspiring and interesting for you as a new dimension when it comes to reporting on the work that you did.
Stanford_CS224U_Natural_Language_Understanding_Spring_2021
BERT_Stanford_CS224U_Natural_Language_Understanding_Spring_2021.txt
CHRISTOPHER POTTS: Welcome, everyone. This is part three in our series on contextual word representations. We're going to be talking about the BERT model, which is an innovative and powerful application of the transformer architecture that we covered in the previous screencast. Let's dive into the core model structure of BERT. We'll begin with the inputs. As usual we'll work with a simple example, "the Rock rules," that has three tokens. But you'll notice that for BERT, we begin every sequence with the designated class token. And sequences end with a designated SEP token. And the SEP token can also be used as a boundary marker between sub-parts of an input sequence. We'll learn positional embeddings for each one of those tokens. And in addition, as you can see in maroon here, we're going to learn a second notion of position. It's sentence A for all of these tokens, so it won't contribute in this particular case. But if we had a problem like natural language inference where examples consist of a premise sentence and a hypothesis sentence, we could learn separate embeddings for the premise and the hypothesis. And thereby hope that we can capture that second notion of position within our input. We're going to have learned embeddings for each one of these notions, for the word and the two notions of position. And as in the transformer, the actual embeddings given here in light green will be additive combinations of those three learned embedding representations. From there we just do a lot of work with the transformer. We have repeated transformer blocks. It could be 12, it could be 24, it could be even more. And the output of all of those transformer blocks in the end is a sequence of output representations. These are vectors, and I've given that in dark green. That is the core model structure for BERT. And that brings us to how this model is trained. The masked language modeling objective is the fundamental training objective for this model. Fundamentally, what the model is trying to do is work as an auto-encoder and reproduce the entire input sequence. To make that problem non-trivial though, we're going to employ this masked language modeling idea. And what that means is that we're going to go through these input sequences, and randomly replace some small percentage, 10% to 15% of the input tokens, with this designated mask token. And then the job of the model is to learn, for those masked inputs, to reconstruct what was the actual input, right? So in this case, we masked out rules and the job of the model is to use this bidirectional context, that's flowing in from all those attention mechanisms to figure out that "rules" was actually the token that belonged to that initial position. The BERT team did a variant of this as well, which is masking by a random word. So in this case, we might replace "rules" with a word like "every" picked randomly from our vocabulary. But there again, the fundamental job of the model is to learn to figure out that "rules" was the actual token in that position. So it's going to make some prediction in this case. If it's different from "rules," then the error signal will flow back down through all the parameters of this model, affecting, we hope, all the representations because of that dense thicket of attention connections that exists across these timesteps. And in that way the model will learn to update itself, effectively learning how to reconstruct the missing pieces from these inputs that we created during training. 
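Here is a rough sketch of that corruption step, just the random masking itself, not BERT's full recipe, which also sometimes keeps the original token or swaps in a random one with certain probabilities.

    import random

    MASK = "[MASK]"
    SPECIAL = {"[CLS]", "[SEP]"}

    def mask_tokens(tokens, mask_prob=0.15, seed=0):
        # Returns the corrupted sequence plus the positions whose original tokens
        # the model must reconstruct (the only positions where the MLM loss is on).
        rng = random.Random(seed)
        corrupted, targets = [], {}
        for i, tok in enumerate(tokens):
            if tok not in SPECIAL and rng.random() < mask_prob:
                corrupted.append(MASK)
                targets[i] = tok
            else:
                corrupted.append(tok)
        return corrupted, targets

    mask_tokens(["[CLS]", "the", "rock", "rules", "[SEP]"])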
So let's dive into that masked language modeling objective, a little more deeply. For transformer parameters H theta, and some sequence of tokens x, with its corresponding masked version x hat. This is the objective here. Well, let's zoom in on some timestep t. The fundamental scoring thing is that we're going to look up the vector representation and the embedding for that timestep t. And we'll take the dot product of that with the output representation at time t, from the entire transformer model. That much there, that scoring procedure looks a lot like what you get from conditional language models. You just have to remember that because of all those attention mechanisms, connecting every token to every other token. This is not just the preceding context before timestep t, but rather the entire surrounding context for this position. And then as usual we normalize that by considering all the alternative tokens x prime, that could be in this position. Now, you'll notice over here there's an indicator variable mt, mt is 1 if token t was masked, else 0. So that's like saying we're going to turn on this loss, only for the tokens that we have masked out. And then the final thing is kind of not a definitional choice about this model, but something worth noting. You'll see that we're using the embedding for this token, effectively as the softmax parameters. There could be separate parameters here that we learn for the classifier part, that's learning to be a language model. But I think people have found over time, that by tying these parameters, by using the transpose of these parameters to create the output space, we get some statistical strength and more efficient learning. There is a second objective in the BERT model and it is the binary next sentence prediction task. And I think this was an attempt to find some coherence beyond just the simple sentence or sequence level. So this is pretty straightforward. For positive instances for this class, we're going to take actual sequences of sentences in the corpus that we're using for training. So here you can see these actually occur together, and they are labeled as next. And for negative examples we just randomly choose the second sentence, and label that as not next. And I think the aspiration here was that this would help the model learn some notion of discourse coherence beyond the local coherence of individual sequences. Now what you probably want to do with the BERT model is not train it from scratch, but rather fine tune it on a particular task that you have. There are many modes that you can think about for doing this. Kind of default choice, the standard and simple choice, would be to use the class token. More specifically, its output in the final layer of the BERT model, as the basis for setting some task specific parameters. And then using that, whatever labels you have for supervision up here. And that could be effective because the class token appears in this position in every single one of these sequences. And so you might think of it as a good summary representation of the entire sequence. And then when you do the fine tuning, you'll of course be updating these task parameters. And then you could, if you wanted to, also update some or all of the actual parameters from the pre-trained model. And that would be a true notion of fine tuning. Now you might worry that the class token is an insufficient summary of the entire sequence. And so you could of course think about pooling all the output states in the sequence. 
Via some function like sum, or mean, or max, and using those as the input to whatever task-specific parameters you have up here at the top. I just want to remind us that tokenization in BERT is a little unusual. We've covered this a few times before, but just remember that we're getting effectively not full words, but word pieces. So for cases like "encode me", you can see that the word "encode" has been split apart into two word pieces. And we're hoping, implicitly, that the model can learn that that is in some deep sense still a word. Even though it has been split apart and that should draw on the truly contextual nature of these models. The BERT team did two initial model releases. BERT base consists of 12 transformer layers and has representations of dimension 768 with 12 attention heads for a total of 110 million parameters. That's of course, a very large model, but this is manageable for you to do local work. Especially if you just want to do some simple fine tuning, or use this model for inference. The BERT large release is much larger, it has 24 layers. Twice the dimensionality for its representations, and 16 attention heads for a total of 340 million parameters. This is large enough that it might be difficult to do local work with. But of course you might get much more representational power from using it. For both of these models, we have a limitation to 512 tokens. And that is because that is the size of the positional embedding space that they learned. There are many new releases of course. You can find those at the project site and Hugging Face has made it very easy to access these models, and that's been very empowering. To close this let me just mention a few known limitations of BERT that we're going to return to as we go through some subsequent models for this unit. So first in the original paper, there is a large, but still partial number of ablation studies and optimization studies. There's a huge landscape that's in play here, and only small parts of it are explored in the original paper. So we might worry that there are better choices we could be making within this space. The original paper also points out that there's some unnaturalness about this MASK token. They say, the first downside of the MLM objective is that we are creating a mismatch between pre-training and fine tuning, because the MASK token is never seen during fine tuning. So that's something we might want to address. They also point out that there's a downside to using the MLM objective which is to say that it's kind of data inefficient. We can only mask out a small percentage of the tokens. Because we need the surrounding context that would ostensibly reproduce those tokens. And that means that it's kind of data inefficient. And finally this is from the XLNet paper, I think this is quite perceptive. BERT assumes that the predicted tokens are independent of each other given the unmasked tokens, which is oversimplified as high-order, long-range dependency is prevalent in natural language. What they have in mind here, is essentially, if you have an idiom like "Out of this world" and it happens that both the first and the last words in that idiom are masked out, then BERT is going to try to reproduce them as though they were independent of each other. And in fact, we know that there is a statistical dependency between them coming from the fact that they are participating in this idiom. So there's some notion of representational coherence that BERT is simply not capturing with its MLM objective.
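To tie the fine-tuning discussion to actual code, here is a hedged sketch of the standard recipe, taking the final-layer output above the class token and putting task-specific parameters on top, using the Hugging Face interface. The checkpoint name and the 768 dimensionality match bert-base-uncased; the classifier head itself is just an illustration.

    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased")

    class BertClassifier(nn.Module):
        def __init__(self, bert, n_classes=3, hidden_dim=768):
            super().__init__()
            self.bert = bert                              # can be frozen or fine-tuned
            self.head = nn.Linear(hidden_dim, n_classes)  # task-specific parameters

        def forward(self, texts):
            enc = tokenizer(texts, padding=True, return_tensors="pt")
            outputs = self.bert(**enc)
            cls = outputs.last_hidden_state[:, 0]  # final-layer output above [CLS]
            return self.head(cls)

    model = BertClassifier(bert)
    logits = model(["the Rock rules", "encode me"])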
Stanford CS224U Natural Language Understanding, Spring 2021: Natural Language Inference
CHRISTOPHER POTTS: Welcome, everyone. This is the first screencast in our series on natural language inference, or NLI. This is one of my favorite problems. What I'd like to do is give you a sense for how the task is formulated and then situate the task within the broader landscape of ideas for NLU. As usual, we have a bunch of materials that would allow you to get hands-on with this problem. The core module is nli.py, and then there are two notebooks. The first introduces the three datasets that we'll be exploring in detail: SNLI, MultiNLI, and Adversarial NLI. And the second notebook offers a bunch of different modeling approaches. It really covers core approaches that people have taken to NLI in the past, and I hope it points to some avenues for modifying those architectures, possibly in the service of developing an original system for a final project. There's also an associated homework and bakeoff. I emphasize that this is not required for us this quarter. I'm mentioning it because I think the problem is an interesting one. It's a word entailment task, an interesting small-scale problem that I believe could be used to stress-test an NLI system in interesting ways. The core readings cover the three datasets that will be in focus for us. And then the final reading listed here was, I think, the paper that introduced attention mechanisms into the study of NLI, and it had an impact that went well beyond this task. And then, for additional readings, I'm suggesting a range of things. Some of these readings cover core fundamentals for deep learning that I think will be useful in the context of studying NLI. Some of them help you with a foundational understanding of the NLI task and how you might think about it. And then some of them are meant to push us to stress-test our systems, think adversarially, and maybe find artifacts in our datasets. Those are going to be themes of later screencasts in the series. To begin getting a sense for how the task is formulated, let's start with some simple examples. In NLI, we have as our inputs a premise sentence and a hypothesis sentence, and the task is a classification one. So in this simple example here, the premise sentence is "a turtle danced" and the hypothesis sentence is "a turtle moved." Both of those are system inputs, and our task is to assign one of three labels. In this case, the correct label would be "entails." The second example looks simple, but it actually begins to suggest how the task is really formulated. We have as our premise, turtle, and as our hypothesis, linguists. And what we would like to do in the context of NLI is assign the contradicts label. Now, you might pause there and think: it is not a logical fact that turtles cannot be linguists, so surely contradicts is too strong. But it is a commonsense, natural inference about the world we live in that no turtles are linguists, and it's for that reason that we would choose the contradicts label. That begins to key into the fact that fundamentally NLI is not a logical reasoning task, but a more general commonsense reasoning task. "Every reptile danced" is neutral with respect to "a turtle ate," which is just to say that these two sentences can be true or false independently of each other. And now, with entails, contradicts, and neutral, we have the three labels that are standardly used for NLI datasets at this point. Let's look at some additional examples. "Some turtles walk" contradicts "no turtles move."
I think that's straightforward. Here's one that shows how intricate this could get. So the premise is "James Byron Dean refused to move without blue jeans." Entails "James Dean didn't dance without pants." This highlights two aspects of the problem. First, you might have to do some complex named entity recognition on James Byron Dean and James Dean to figure out that these are coreferring expressions. And you also might encounter real linguistic complexity, in this case, emphasizing things involving how negations interact with each other. This next example begins to show how much common sense reasoning could be brought into the task. So the premise is Mitsubishi's new vehicle sales in the US fell 46% in June, and the hypothesis is Mitsubishi's sales rose 46%. And we would standardly say that that is in the contradiction relation. Now again, you might pause and think it is certainly possible even in our world that Mitsubishi could see a 46% rise and fall in the same month. So surely these should be labeled neutral. But I think what you'll find in NLI datasets is that these are called contradiction on the informal assumption that the premise and hypothesis are talking about the same event. And in that context, we would say that these are common sense contradictions. Here's another example that highlights how much pragmatics could be brought into the problem. The premise is Acme reported that its CEO resigned, and the hypothesis is that Acme's CEO resigned. We would probably say entailment there. Even though, in a strict logical sense, the premise does not entail the hypothesis because, of course, the company could be reporting things that are false. But here, we kind of make an assumption that the company is an authority and will likely report true things about facts like this, and therefore we allow that this would be in the entailment relation. Again not logical but much more like common sense. So just to emphasize this, here is kind of the fundamental question that we confront. Does the premise justify an inference to the hypothesis? Commonsense reasoning, rather than strict logic. Two other characteristics of this task in the modern era are, first, there's a focus on local inference steps. That is just one premise and one hypothesis, rather than long deductive chains. And the second is that the emphasis is really on the variability of linguistic expressions, so when people have created the large benchmark tasks in this space, they have largely focused on just collecting a lot of data and not placed any special emphasis on collecting examples that have a lot of negations or quantifiers or something that would really shine a spotlight on linguistic and semantic complexity. So that's worth keeping in mind about how we're thinking about the task in the present day. If you would like additional perspectives on this, including some disputes about exactly how to think about the problem, and what would be the most productive, I would encourage you to check out of these three papers by a lot of Stanford researchers. I think the fundamental outcome of this is that we do want to focus on common sense reasoning, even though that's a kind of amorphous and difficult to define concept. It's nonetheless arguably the useful one for us when we think about developing practical systems. Now, in a visionary paper that really set the agenda for NLI, Dagan et al, 2006, they make a lot of connections between NLI and the broader landscape of NLU. So let me just read this opening statement here. 
It seems that major inferences, as needed by multiple applications, can indeed be cast in terms of textual entailment. Consequently, we hypothesize that textual entailment recognition is a suitable generic task for evaluating and comparing applied semantic inference models. Eventually, such efforts can promote the development of entailment recognition engines which may provide useful generic modules across applications. So wonderful vision and a spin we might put on in the present day is that what we might hope from NLI is that since reasoning about entailment and contradiction are truly fundamental to our use of language, that pre-training on the NLI task might give us representations that are useful in lots of different contexts. And Dagan et al actually continue by showing that we can formulate a lot of traditional tasks as NLI tasks. And here are just a few examples of that. If our task is paraphrased, we might say that in the NLI context, that means we want equality or mutual entailment between the text and the paraphrase. That is premise and the hypothesis. For summarization, we would do something weaker. We would hope just that the text, the original text, entailed the summaries allowing that the summary might be weaker or more general. For information retrieval, we kind of do the reverse. Here we want to find documents that entail the query. And then for question answering, it's kind of similar. We could formulate that as an entailment task by saying that what we want is to find answers that entail the question, and the way we might think about entailment for questions is kind of illustrated here, where we would informally convert a question like, who left? Into someone left, to give us a statement. And then we could say that Sandy left is an answer to, who left? In the sense that it entails someone left. And I think there are many other tasks that we could formulate in this way. And it does show you just how fundamental entailment and contradiction are to reasoning and language. And finally, let me give you a sense for the model landscape and how it has changed. NLI is a pretty old problem in the field, and as a result, we've seen a wide spectrum of different approaches. In the earliest days, you had a lot of systems that were kind of focused on logic and theorem proving. And I've characterized those systems here as offering really deep representations, but they weren't especially effective in the sense that they worked only for the domains and examples that the system designers had been able to anticipate. So they're kind of brittle. Following that, you have a kind of exploration of what Bill MacCartney called natural logic approaches. Bill was one of the early innovators in this space. I think he actually coined the term natural language inference. And he explored natural logic, which has some of the aspects of logic and theorem proving, but it's kind of more open and easily amenable to tackling a lot of data. And so those systems were consequently a little less steep but also more effective, and a similar thing happened with these semantic graphs, which is providing rich, conceptual representations of the underlying domain that we want to reason about. Another interesting thing here is that, until recently, it was the case that clever hand-built features, which I'll show you some a bit later in the screencast series. They were really in the lead and simple n-gram variations, traditional models with hand-built features, they were the best models. 
There was a kind of faith early on in the deep learning revolution that eventually, those models would prove to be the best at this task. But at the time, we just didn't have the data sets that would support-- that would provide evidence for that kind of claim. And so as a result, for a while, deep learning systems really lagged behind more traditional approaches. And I would say that it was really in about 2017 that deep learning pulled ahead. And that's a result of modeling innovations, and also the arrival of some really large benchmark datasets that would allow us to train systems that were effective for the task. And it's at that point you see that deep learning kind of took over, and as a result, in subsequent screencasts, we too will be focused on deep learning architectures for the NLI problem.
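As a small coda to this screencast, here is a minimal sketch of the task formulation and the Dagan-style reductions discussed above. The NLIExample container and the trivial entails stub are illustrative placeholders, not code from the course's nli.py module; a real system would replace the stub with a trained classifier.

```python
# Premise/hypothesis/label triples, plus the task reductions (paraphrase,
# summarization, QA) expressed on top of a single entails() predicate.
from typing import NamedTuple

LABELS = ("entailment", "contradiction", "neutral")

class NLIExample(NamedTuple):
    premise: str
    hypothesis: str
    label: str

examples = [
    NLIExample("A turtle danced.", "A turtle moved.", "entailment"),
    NLIExample("Some turtles walk.", "No turtles move.", "contradiction"),
    NLIExample("Every reptile danced.", "A turtle ate.", "neutral"),
]

def entails(premise: str, hypothesis: str) -> bool:
    """Placeholder stand-in: swap in a trained NLI model here."""
    return hypothesis.lower().rstrip(".") in premise.lower()

def is_paraphrase(a: str, b: str) -> bool:                  # paraphrase = mutual entailment
    return entails(a, b) and entails(b, a)

def is_faithful_summary(text: str, summary: str) -> bool:   # text entails the summary
    return entails(text, summary)

def answers(candidate: str, question_as_statement: str) -> bool:
    # "Sandy left" answers "Who left?" because it entails "Someone left."
    return entails(candidate, question_as_statement)

for ex in examples:
    print(ex.label, "-", ex.premise, "/", ex.hypothesis)
print(is_paraphrase("Sandy left.", "Sandy left."))          # True, even for the trivial stub
```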
MIT HST512 Genomic Medicine, Spring 2004: Lecture 11, The Importance of Data Representation
ALVIN THONG-JUAK KHO: Transforming data, as well as modeling and genomics. It's a very broad category. And I'm sure that in the other, lectures the people who are giving them have touched on several aspects of these same topics, as well. The lecture outline is as follows. The first thing we'll do is to go through two very prototypical study designs, one of which is a two-way comparison which occurs all the time in studies. The second one is the time series, or a dosage study. The idea being that there is a one parameter in the study that moves on, progresses, and you are making measurements alongside that one parameter. And finally, we get to the topic of data representation itself, properly. We'll talking about what it is, what it means to go from a measurement device to the spreadsheet, because in order to do analysis, we must somehow end up in the world of numbers. So how does that mapping actually happen? We'll touch on it very little. Then we talk about the numbers themselves. Do they have a dimensionality attributed to them? Are there scales attributed to them? These things actually matter. Because depending on whether or not it's dimensioned, certain, fundamental theorems of mathematics actually do apply in guiding you on how to formulate, say, a null hypothesis for your experiments. What would the outcome be had it been random? Things like that. And then we talk about how, once you have the data down, you transform it so as to uncover internal or relational structures within the data. And if we get through to this and we have time left, we will go through background concepts which crops up very often when we talk about genomics or microarray analysis, namely, things or words like noise, what is a replicate? What does it mean to be reproducible? Do you have to normalize your data, and what is the notion of a fold in the context of, say, microarray data? I mean, a fold is a very natural measure of change when it comes to PCR. But is it that natural, after all, when you are dealing with microarray data? And finally, because by going through some miscellaneous rules of thumb. So two prototypical study designs-- the first is a two-way comparison. You see this all the time, in fact. For instance, you want to find molecular gene protein differences in the white blood cells of type 2 diabetes human patients versus normal human patients. For instance, you have 27 diabetes patients and 11 normal ones, and you microarray them. Actually, there's a little typo here. It should be the word "micro." So these studies are often carried out. And there are very obvious things to keep in mind when you do these studies. Namely, before you do them, you should check for stratification, say, gender, age, things like that-- other clinical parameters that could enter later on and bedevil your subsequence analysis. Now, a partial mathematical formulation of this problem would be, say, if you were going to microarray every person, every patient goes onto a chip, then the j-th patients, j-th diabetic patient, you could represent by this symbol, d sub j. And j normal patient, you could reproduce by this symbol. And this quantity here is actually a vector or matrix. It's multi-factorial. We are measuring, say, 10,000 genes, or proteins, or RNA of one person. So it's a multi-dimensional quantity. And you go on to do the subsequence analysis. So this is the first prototypical study design. The second one is equally common. It has to do with a time series or dosage study. 
For instance, you are observing a developing organ, and you are assaying that organ for RNA or protein at different stages of its development as it unfolds. Again, there are reality checks that you should do before you begin, such as: the system which you are assaying, how heterogeneous is it? Is it one cell type or several cell types? How is that going to affect your interpretation of the signals later on? And again, a partial math formulation of this design would be: let T sub j represent the chip data of the j-th developmental stage. Again, it is a vector. And you can get very fancy pictures like this of the expression profile as a function of time-- of one gene and another gene, or RNA or protein. Now, what actually is data representation? It's a very commonly used term, and I don't think anyone can really agree on what it means, actually. But it could go anywhere from a mathematical formulation of a scientific problem, or it could be mapping observations and measurements into a set of symbols. Well, it's not just any old set of symbols. Typically, they are symbols which you can do an algebra on-- say, numbers, integers, things like that. It's pointless to map it into a set of symbols on which you can't actually do higher-order operations, that being the point. If you go on a website and type in data representation, the governmental agency for atmospheric science gives you this definition here, which you can read for yourself in this slide printout later. Another notion of data representation has to do with database annotation and standards. You've heard of MIAME, presumably, in one of the lectures before or to come. I'm not going to be talking about that at all. And thirdly, it could be multimedia-- how you present your data graphically, in charts, plots, et cetera, things like that. What I shall be concentrating on is actually the first two points. It is arguable that these two points actually have to do with data representation. If anything, the second point certainly does-- mapping observations into a set of symbols. Now, so you make an observation, and a device takes a picture of it. But eventually, it has to get translated into the set of symbols. So it's a very obvious thing that one has to actually understand at least the basic principles of how the measurement device works if you are going to be using it in your undertaking of any biological endeavor. You should know the relevance of the scanner settings, things like that. I'm not going to go through all of this. It's quite obvious, a lot of these things. And you also have to be sure about the journey from an image to the actual number itself-- does it go through analytical or statistical processing software? How far removed are you from the reality of the observation? One should check for these things, in fact. Because the subsequent things you're going to be doing on these numbers will depend on these factors. So you get, in the case of microarrays, basically a bright spot. And somehow, you have to translate this brightness. And the grid here is imaginary. You won't see just very discrete grids lighting up. Typically, there is a diffusion of light from one feature to another. So you have to basically translate this whole mess here down to a simplistic-looking table of numbers. So you have genes measured across different samples and their, quote, unquote, "intensity."
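To make that "simplistic-looking table of numbers" concrete, here is one way the genes-by-samples intensity matrix might be laid out, reusing the 27-versus-11 patient counts from the two-way design earlier; the values are random placeholders.

```python
# The "spreadsheet" produced after image quantification: one row per gene,
# one column per sample. Values here are random stand-ins for intensities.
import numpy as np

n_genes = 10_000
diabetic = np.random.rand(n_genes, 27)      # d_j columns: 27 diabetic patients
normal = np.random.rand(n_genes, 11)        # n_j columns: 11 normal patients

intensities = np.hstack([diabetic, normal])             # 10,000 x 38 matrix
group = np.array(["diabetic"] * 27 + ["normal"] * 11)   # column annotations
print(intensities.shape, group[:3], group[-3:])
```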
And next thing we'll do is to understand if these intensities themselves have a dimension associated with them, say, centimeters, Fahrenheit, things like that, or they have no dimensions at all. So dimensionality and scales. And I guess we can concentrate for a while on microarray data. In the case of two-channel or competitive hybridization microarrays where you have two dyes, a size 3 and size 5, competing-- being hybridized onto different tissue types and put onto one array, the readout is arguably dimensionless because it has to do with a ratio. And arithmetic tells you that the ratio operation nullifies or cancels out any dimension there is in the data. Second type species of microarrays are, say, oligonucleotide microarrays where there is no competition. You have, essentially, one tissue type. You tag it with some fluorescence, and then you hybridize it. There is no competition whatsoever. And the registered intensities have some kind of units associated with them. If anything, you don't have to know the actual units. It may be sufficient just to if units exist or not-- dimensions or dimensionless. Why? Because different math techniques actually will apply depending on whether the quantity is a dimension quantity or dimensionless one. For example, if you are having intense-- you are dealing with a set of numbers which have to do with radiation measurements-- clearly phosphor images or radiation-- then physics tells you that the underlying, or the most sensible distribution to this background to be studying this system would be the gamma distribution. Secondly, there are certain, what you've probably heard as power or scaling laws, that could be useful in detecting errors in this set of numbers if they don't have a dimension or they have a dimension-- things, laws such as Zipf's law or Benford's law. For instance, the third digit of every telephone number in this country, things like that. There is a certain underlying distribution that could be very useful in informing you what actually is random and what actually isn't. Because if you get a result after some analysis that you do, whatever it is, you would like, at least, to be sure that that result is not arising completely out of chance. And that is the whole point of this, that knowing the dimension-ness of the numbers could help you or guide you in formulating a null hypothesis for this data set. So that's dimensionality. Why do you-- so let's say you already have the numbers in front of you. Why would you want to transform it or rewrite it in a different format? There are actually several very good reasons for doing it, and I'm sure it comes very obvious to a lot of people's heads. Number one-- it could simplify mathematical manipulation. And secondly, rewriting it in a certain way could uncover certain structures in the data. We will see examples of these very shortly. So number one-- simplifying mathematical manipulation. It can be argued that any spreadsheet is, essentially, a matrix. Of course, the entries of the matrices themselves are not homogeneous. They can be very different, and they could affect what kinds of operations you can sensibly perform on them. But essentially, it is a matrix if there are numbers in these entries. And as such, if there are matrices and the entries are homogeneous, they are subject to formal and very basic linear algebraic roles. You could do a matrix addition, subtraction, et cetera. 
And you could investigate the eigenvalues and eigenvectors-- basically, invariant structures within the data. And of course, the hope in any scientific endeavor is that these invariant structures are somehow related to the physical phenomenon from which these numbers come. That is the hope of all scientists who are trying to study a system. And this warning is obvious: if your matrix-- if you are measuring, say, temperature, and height, and weight, clearly, you can't just blindly perform linear algebraic manipulations on this data and hope to get something sensible out of it. But that's obvious. OK, why transform data, number two-- second reason: revealing intrinsic geometries. What is meant by intrinsic geometry? It could be very unstructured data. You may not notice, upon looking at the numbers themselves in a table, that there is a group of variables which are acting a certain way. But it could turn out that if you rewrite it or transform it, this feature would stand out. And we actually will see graphical examples of when that happens. Let's see, so there are actually-- I'm not sure if you need to know if there are clues to the existence of these structures. But oftentimes, it's not obvious, and you could use some help by transforming the data. Now, these internal structures may be explicit or implicit. In what sense? The explicit structures are clearly the notation you give, like gene 1, gene 2, gene 3, or condition 1, 2, and 3-- very obvious things like clinical phenotypes. The implicit ones are the relations between genes-- how they interact with one another, at least as captured by these numbers. Now, that, you don't know for sure. You actually have to go in and look for them. But it's just good to keep in mind this aspect-- the explicitness and implicitness of these structures. Well, the whole idea of-- it's a bit of a jump from data transformation to modeling. This is the real-world system, and here is what exists in our heads, or at least in our computers. You're trying to understand a system-- a physical phenomenon, say. And there clearly has to be some physical quantity one is measuring. And this physical quantity is subject to some kind of perturbation. You make your measurement here. This is where the data representation occurs: you map it into a set of numbers or symbols on which there is an algebra. And then any perturbation of this physical system will manifest itself as numeric fluctuations. And oftentimes, when you get a lot of data, it's not clear if you're actually seeing a fluctuation or you're just seeing, quote, unquote, "noise." And then you form a model of the system. And hopefully, your model somehow will corroborate the physical phenomenon. I mean, these are very obvious ideas of modeling. Now, I give an example of uncovering internal structures. Let's say that you have two different patient populations, X and O. Say the X's are cancer patients, and the O's are normal patients. And they're being controlled for age, gender, et cetera, things like that. And for each patient, you make two gene measurements-- RNA measurements, protein, doesn't really matter. This is just an illustration. And let's call the measurements G1 and G2. OK, so for each patient, you measure two of these quantities. Now, let's graphically represent this data. After all, we're talking about data representation, right? And let's suppose that the measurements came out this way. Now, what is the point here that I'm trying to make?
Well, if you were simply using measurement 1, somehow, to discriminate between the crosses and the zeros, it would not work, right? Do you see why it does not work? Basically, you simply project the crosses and the zeros onto the G1 axis, here. If you project it, you notice that there is just an alternation of crosses and zeros, right? So G1 by itself certainly does not discriminate the cancer from the normal. The same goes for quantity G2. If you project this onto the G2 axis, you don't see that G2 actually segregates the two different population samples. However, now, when you perform something called principal component analysis-- which we will briefly touch on, but it is very standard in basically any textbook on multivariate analysis-- the data is simply rotated. It's an affine transformation: a rotation and a translation. And now, in these new coordinates, principal component 1 and 2-- this is how it looks. It's the same picture. And what do you see? It's the same thing. You're just rotating. But the simple act of rotation itself is highly useful. Why? Well, you notice, then, that while PC1 does not distinguish the two populations, PC2 certainly does. And the discriminating quantity is G1 minus G2. If G1 minus G2 is positive or negative, you are either cancer or you're normal. This is the beauty of this technique. And why is this? So this is a very kindergarten example, but there is something to be learnt from it. Because as human beings, we can't visualize beyond three or four dimensions. And when you're making multiple measurements-- so imagine that you are a person who lives in one dimension, and you are making these two measurements. Your vision of this whole process is only somehow captured in these projections onto G1 and G2. So in your one-dimensional world, or in mine, I would not have realized that G1 and G2 actually do anything. However, this transformation helps me. Because then, using this one dimension alone-- remember, I am a one-dimensional beast-- I can tell that this quantity here, a linear combination, actually distinguishes one from the other. And imagine, now, you have 10,000 variables, not just three. And the power of this method immediately comes to the fore. Another example-- say you have two different-- so each dot is a gene. And let's just suppose that you have two patients, or two persons, or two animals-- red and blue animals. And for each animal, you measure 5,000 genes. And these 5,000 genes are measured under three conditions. So you could take a brown mouse and a white mouse-- microarray 5,000 protein levels or something-- under three different conditions, say, heat shock, starvation, exercise. And then you plot it. I hope you're not colorblind. But let's suppose that the red and blue are not there. All you see is one color. And you did this measurement. And let's say you have a three-dimensional vision of this whole process. Look at the projections here, here, and here-- two-dimensional projections. You don't actually see anything-- the blue and red are just mixed together, right? Now, if you do the same principal component analysis-- simply rotating the data, which, remember, is simply an affine transformation-- in the first projection, you see nothing. But this is where the power comes. Again, imagine removing the color-- the blue and the red-- and you can still see it clearly: the two populations reveal themselves. In a lower dimension, that's the whole point. So we are cutting down from three to two-- basically just one dimension.
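Here is a toy numerical version of that two-gene picture, with simulated data: neither G1 nor G2 alone separates the two groups, but the second principal component, essentially G1 minus G2, does. How the two groups are generated is an assumption made purely for illustration.

```python
# Simulated version of the two-gene example: shared variation along G1 = G2 hides
# the group difference in either single measurement; PCA recovers it in PC2.
import numpy as np

rng = np.random.default_rng(0)
n = 100
shared = rng.normal(0, 3, size=n)                      # big variation common to G1 and G2
group = np.where(np.arange(n) < n // 2, 1.0, -1.0)     # +1 "cancer", -1 "normal"
G1 = shared + group + rng.normal(0, 0.2, n)
G2 = shared - group + rng.normal(0, 0.2, n)
X = np.column_stack([G1, G2])

Xc = X - X.mean(axis=0)                                # center, then PCA via SVD
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                                     # coordinates in (PC1, PC2)

print(np.round(Vt[1], 2))        # PC2 direction is proportional to (1, -1), i.e. G1 - G2, up to sign
print((scores[:50, 1] > 0).mean(), (scores[50:, 1] > 0).mean())
# one group sits entirely on the positive side of PC2 and the other on the negative side
```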
Clearly, one dimension actually distinguishes the two different populations. The power is you could have 10,000 dimensions. It cuts it down all the way to the first five or 10. It depends on the system. It's just a demonstration. But this is simulated data, by the way. It's not real. Data transformation example two-- I'm not quite sure now why I give you all this principal component analysis examples. But this is actually real data-- pancreatic development time series. And at-- I think it was 11 time points versus something like 10,000 genes, I believe. I'm not quite sure. So it's a matrix. Now, there are two ways to look at the system. It's basically, say, 10,000 genes by 11 conditions. You can look at it as time points in a gene space-- 10,000 dimensions. That's one vision of this experiment. The other vision is the transpose vision, which is that the objects, the graphical objects number are genes, and they live in 11-dimensional sample space or temporal space. I hope you see that. So there are two ways to look at this study. And the two ways are, when you do further analysis on them, will actually bring out different aspects of the experiment. So you can look at the system sample-wise, meaning that dots, genes in time-space, or you can look at it gene-wise-- actually, their names could have been switched-- which is each point is a time, is a whole pancreas, and the space it's sitting on is actually genes. So it's like 10,000 dimensions. And CLT is, basically-- Central Limit Theorem-- scaling. It's a fancy term for saying, you normalize the data to mean 0 variance 1. There are reasons that actually do it, but we come to that later. So the vision of each dot being a gene sitting in time-space-- I'm only showing you 3 times 11, OK? Well, when you do that-- when you don't do anything at all, you just plot it, you get this cloud. It's not very informative, really. Maybe there is some information, but I didn't go deeper into see what these things did. Now, when you do a principal component analysis of the time axis and re-plot this, what you see is a circular object. And the reason it's actually circular or looks like an egg is because of the scaling. This scaling, if you know linear algebra, actually maps everything to the unit hypersphere, which is why it's not a surprise you get an egg-shape like this. But why is this more informative than the previous one, the previous slide? There is a reason. So I will claim, actually, that the density of the-- oh, and the first principal component captures 45% of the variance. The second principal component captures 15%. So the idea of principal component analysis is that as you-- the first principal component captures the greatest amount of variance. The second captures the second greatest, third, et cetera. They are all orthogonal, one to the other. So the utility of doing this is now you can, I claim, that you can actually represent 10,000 different profiles in a convenient egg-shape like this. How so? Well, the first component captures a lot of variance in the system. And so what is the profile of a gene? If you pick any gene from here, say, and you plot its profile, how does it look like? It turns out that the profile looks like that. So in English, essentially, it is a gene of a protein which is highly-expressed early on and it goes down later on. What about if you pick something here from the complete opposite end of PC1? That's the shape that you get. It goes the opposite direction. 
In fact, if you picked samples going all the way from here to there, you will notice that there is a gradual shifting of this shape to that. It morphs one into the other. And so this is an example of something picked out of the 90 degrees, counting from 12:00. That's how it looks like. So it's a very convenient way to display everything. And another utility is a density of dots. You notice that there is a huge absence of anything here, right? So basically, you could state-- you could claim that there is a family of profiles that is actually missing from all these genes. No genes at all express a theoretical profile here. I think it is something which looks like that but with some variation. But there is a density here. I'm not sure how that looks like. So it's a convenient way to display everything as opposed to what? As opposed to seeing 10,000 of these things. So it's like a dictionary. So that was looking at genes sitting in time-space. The other transpose way of looking at the same system, the same data is to view it as samples, time points. I've labeled them 1 through-- oh, actually, it's 13-- 1 through 13 sitting in a genomic space of 10,000 dimensions. If you just picked any three genes at random and plotted the samples-- the numbers 1 through 13 are consecutive. I didn't just randomly assign them. They're consecutive with time. So time 13 is going to be greater than time 12, greater than time 11, et cetera, et cetera. So you've picked any two genes, and what do you notice? Well, you notice nothing. It's just a mess. You pick any three, and I'm just picking three, for example. I mean, you'd be very lucky if we pick a set of three that actually reveals some beautiful structure in here. But then you have to wonder, what is our noise here? That goes back to understanding, what is the underlying null hypothesis of the system? So now you perform principal component analysis. What happens? The first principal component, second, and third. The most salient thing that jumps out at you is that PC1 looks-- looks-- to be correlated with time. You get 1, 2, 3, 4, 5, 6, 7-- well, 12 is an anomaly. I don't know why it landed there-- and 13. So I'm not sure what the other principal components mean. Maybe there is some biologic import to them. It's unclear to me. But certainly, one captures the progression of time, it looks to me. And you are sort of immune from the possibility that this is actually due to noise. Why? Because a principal component, now, it's not just a single gene or two. It's actually a linear combination of something like 10,000. So that's the power of the method. Had you just picked any tree randomly and found a configuration like this, you have to wonder at the randomness of this thing happening. But this is a linear combination of all of them. This is the power of this methodology. I give you another example, but this is very kindergarten. So Fourier decomposition-- Fourier transforms-- that is another way of transforming data to reveal structures within the data. And the point of doing Fourier analysis is, you want to reveal-- the objects that you're looking for are basically frequencies. So let's say that you have a-- is not real-world data, of course. It was completely cooked out of a machine. If you have this red sinusoid here of period 1, clearly, the frequency is just one frequency. I don't know if sequence is something 1 over 2 pi or whatnot. 
When you apply a Fourier transform to it-- well, a discrete, fast Fourier transform-- you will find you enter the realm of complex numbers, actually. But suffice it to say that you get a point in frequency space. That's the point. And to give you some bearing, let's take another waveform which has twice the frequency. It shouldn't be surprising that the frequency is actually twice as large-- 5 versus 2.5. And now you take yet another waveform which is even faster. So the mapping is that the entire waveform maps to just one point, one point, one point. So the objects of interest here are frequencies. It's not localized in time. For localization, there are transformation techniques such as wavelets. Now, of course, the world does not give you data so nicely, in the form of these uniform, band-limited signals, right? So let's suppose that the world makes it more complicated and adds up these three sinusoids. When you add up these three sinusoids, what happens? This is what you would see. Now, let's say the world presents you with this. There are many things you can do. You can actually do principal component analysis, but the question is, what are you trying to look for? If you are looking for the predominant frequencies embedded in this waveform, the most natural thing to do is Fourier analysis. And when you do Fourier analysis, it should not shock you that the answer, when you map it to frequency space, is three dots. They happen to be the same three dots up there. So that's the beauty of it. And there are applications, very real applications, actually, when you enter into sequential genomics. Because the alphabet A, T, C, G can easily be mapped into 0, 1, 2, 3. Of course, the ordering-- I'm not quite sure if the ordering or the ordinality actually matters, but I don't think it does. And if you are interested in repeating structures in the genome, it could be helpful. I'm sure people have done this, in fact. So the summary of data transformation is, basically: somebody gives you a vector x, data x, and you simply rewrite it in a different form based on a set of basis elements that are different from the original ones. How do I say it? Typically, when somebody gives you data, it's going to be real numbers, let's say a matrix. And the standard basis is the basis they're thinking of-- 1, 0, 0, 0; 0, 1, 0, 0; et cetera. What you can do is actually transform it by principal component analysis, or Fourier transforms, or even wavelets. All of these techniques are simply names to describe these basis elements-- the new representational forms that come out. You can actually read about this in several textbooks-- say, Duda and Hart on pattern recognition-- which I shall provide in the handouts. And clearly, not all these transformation techniques are equal. They are going to reveal for you very different things, very different internal structures in the data. That should be obvious. And I claim that there is almost always a geometric interpretation of any given data set. Secondary uses would be denoising, and feature reduction, which we have seen, actually, in the case of PCA. Denoising-- say, with Fourier transforms, if you believe that noise lives at the higher frequencies, you could band-limit, throwing away the components which appear much higher in that frequency space. No, I don't think I'll talk about this. Do we actually have time for the next part? We do?
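Before moving on to the next part, here is the three-sinusoid example reconstructed numerically; the particular frequencies (1, 2, and 4 Hz) and the sampling rate are arbitrary illustrative choices rather than the values on the lecturer's slide.

```python
# Sum three sine waves, take the FFT, and the three underlying frequencies
# pop out as isolated peaks in frequency space.
import numpy as np

fs = 64                                    # samples per second
t = np.arange(0, 4, 1 / fs)                # 4 seconds of "time"
signal = (np.sin(2 * np.pi * 1 * t)
          + np.sin(2 * np.pi * 2 * t)
          + np.sin(2 * np.pi * 4 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(peaks)                               # approximately [1. 2. 4.]
```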
Now that we are done with data representation, I'm going to try to cover some common terms that occurs over and over again in the area of microarray analysis or genomics. You hear it all the time, but you wonder what they mean. I sometimes wonder what they mean, but this is my understanding, at least, of what may mean. A very important thing I believe, as do a lot of people, I'm sure, is that nature makes no leaps, that physical phenomenon, at least at the microscopic level, what you observe-- microscopic atoms bumping around-- at least at a microscopic level, it cannot occur abruptly. There has to be a continuity to these processes. And this is a very important guiding principle, at least, in definition of noise. So I'll give an example. I don't know why I called it example four. I make 100 separate measurements of the room temperature in this room in a 1-minute interval at different locations. Depending on the accuracy of the device I'm using, it is not very likely that all these measurements are going to be the same. So the question is-- and it is an ill-posed question-- what is the temperature in this room? On average, or what is the temperature distribution in this room? Questions like this. Now, this is my working definition of noise. And I'm sure that there are going to be-- it can be argued. In a narrow sense, noise is any measurable divergence from axiom 1-- this idea-- or more generally, any applicable axiom in a studied system. So if you believe that-- if you are very, very sure that the room temperature-- example, if you believe that the room temperature in this room cannot be so different from me from here to there, then you make 100 measurements. And you believe that there is an idealized temperature or some belief that there should be a static quantity. Any fluctuation, any fluctuating observations you made away from this idealized temperature-- which you don't know, anyways. The best you can do is estimate with the average-- any fluctuation is noise. So it's practical. I'm not sure if it is useful at all. And in ideal situations, math theorems will apply-- things like the central limit theorem and law of large numbers. They are very, very robust if you have a lot of-- the question is, or the problem is that you need to have a lot of observations in order for these to kick in and help you. Now, what is a replicate and repeated measurement? And what I'm going to say here is not going to be new to you-- how people define replicate. It, in a way, also depends on-- in fact, replicates and reproducibility go hand in hand. I'll give you three examples of a replicate measurement. When you talk about replicate, it always involves two things. You have something, and you need something to compare it with. So let's say that you want to do replicate assays of mice pancreas-- normal mice pancreas, whole pancreas, RNA analysis. And they had been controlled for weight, for gender, et cetera. Now, there are three different situations, or three different ways of defining replicates, right? Number one-- you take the pancreas from each of these mice-- they are, say, the same litter-- and hybridize it, et cetera. This could be a definition one of a replicate. The other is you take the pancreas from one mouse and you split it and hybridize the other two. That's the other one. Notice, in this case, the biological variation. There is none. The call comes to one. Here, clearly, a biological variation is going to be very important. 
The third one is to somehow homogenize the biological variation at the very top level: you just mix them all up and split them. And there are arguments for using this way, this way, or this way-- it depends on what you want to control. It can also be argued that you can remove biological variation, but only at a later stage, when you have the numbers; whereas you are pooling them here, you can pool them later on. There is no better or worse definition of a replicate. But when you are reading papers, you should be aware, at least, of what they mean by replicate. Because, in a way, you will also notice that the notion of a replicate will actually guide or affect how you define noise. Because if you believe that these two mice should give you identical readings, then any deviation of this mouse from that one is going to be noise. These seem like very obvious things. Now, yeah, actually, here it is: the definition of replicate will have implications for how you define noise. And there is-- we talked about this-- biological versus measurement variation. This being biological variation, this being measurement variation. And arguably, this could be measurement variation, too. However, if you're too restrictive in your definition of replicate, it could actually hinder the generalizability of your study. Your study is only applicable to mice with a certain genetic background, very, very restricted, et cetera. One has to consider these things. But despite taking all precautions, it is not very likely that your, quote, unquote, "replicate assay" will, down the line, give you numerically identical results. And there is this old saying, I think from the Greek philosopher Heraclitus, that you never step into the same river twice. It's very true here. And as a result, people often will try to massage the data later on, after the numbers come out of the test tube-- well, it has to go through certain scanners and machines first. But when the numbers come out, then they massage them-- they normalize, in other words-- to somehow account for biological variation or measurement variation. There are arguments for and against normalization. And there is no blanket principle on whether to do this-- to normalize or not. It depends on your experiment design. OK, I'll give an example with that pancreatic development data-- embryonic day 12, 14, 18, postnatal day 2, and the adult pancreas. And I'm simply plotting those 10,000 genes against themselves, here-- embryonic day 12 against itself. And E12 against its aliquot. What is an aliquot? I think, actually, the aliquot, in this case, was this. It's measurement variation. Though, in a way, it's not very wise in this case-- clearly, you cannot remove the whole pancreas from the same mouse twice. So it may have been wiser to have done this, come to think of it now, to account for biological variation. Because in this case, then, it was measurement variation. Then, clearly, there are two confounding factors here: number one is measurement variation, number two, biological variation. So it would have been nice, in a way, if we had biological variation captured here. So for E12, you see it start to spread like a comet. And as you progress with development, you can see that it spreads even worse. So it looks less and less alike. The most alike is here. They should line up, right-- you're supposed to be taking the same reading of the same thing. Of course, by the time it reaches adulthood, anything can happen.
We don't know-- what your a priori assumptions about the system are guides you on whether or not to normalize. If you believe-- if you have done 10 whole microarrays for some system and you believe that your system should remain consistent despite the biological variation, et cetera, then you could claim that the average of these array readings has to be the same, or the variance should be a certain quantity. But that's actually putting your own assumptions of how the system behaves into this actual phenomenon that's unfolding. So one has to be careful. A common normalization technique that people use is the one which we have actually talked about, central limit theorem scaling, meaning that this is a vector, actually-- vector x against the reference r. So you simply subtract the average from each component of the vector x and you divide it by the standard deviation. The end result, actually, is a quantity which has mean 0 and variance 1. Some people have a reason to do this. Some people don't have a reason to do it, but they do it anyway. You just have to be very careful. So what happens when-- well, we just saw what happens. By so doing, you have mapped, actually, this entire vector into an element of the unit hypersphere, it turns out. So you do lose information. Maybe you lose all notion of absoluteness, of absolute intensities. The second common method of normalization is that you have a reference data set and you regress against that reference data set. What do I mean? Well, if you-- back to this thing. If you take the E12 to be the reference data set and you're trying to regress everything to it, or normalize everything to it, then, for example, the E12-- the second aliquot against the first E12. In an ideal world, you should get no scatter-- that's clear. However, even with the scatter, some people believe that the linear regression line has to have slope 1 and go through the origin. If it isn't, make it so. So it's a linear transformation of the second aliquot-- basically, you subtract and you divide. It's just a shift and rescaling of the data so that the regression of one against the other now goes through the origin and has slope 1. You do it for the entire time series, in this case, for example. There are reasons to do it. There are reasons not to do it. What actually happens after you do it? Well, clearly then, all the samples are now going to have regression intercept 0, slope 1 against the reference. A second thing that comes out of this is that all the vectors, the newly-normalized vectors, will have the same average. You can show that for yourself. You can do the arithmetic. You'll see that come out for free. Then, there is the notion of a fold that you hear all the time, which is very natural when it comes to PCR and blots. But the question is, does a fold actually make sense in the context of Affymetrix chips? For example, you often encounter this. You have a sample population, A, and you have three readings for that sample population A, versus B, which has another three readings. And people will ask, what is the fold change from here to there? And notice that there is a negative here, too. Yes, many ways have been proposed to solve this problem. You could take the arithmetic average here and then divide it by the arithmetic average there. There's a negative there, though-- negative 1. So does the fold actually make sense in this setting? Alternatively, some people have used the geometric average rather than the arithmetic average. Or they have logged it, somehow.
But logs are only well-posed on a set of strictly positive numbers-- not even zero. One argument against using the fold, actually, is that it is not stable. It is incredibly unstable and highly sensitive to its denominator. For instance, 20 over 10, 50 over 25, and 200 over 100 are all equal to 2, right? Let's actually perturb them by the same quantity, epsilon. This is a perturbation epsilon; it can be positive or negative. Now, 20 over 10-- when you perturb it by epsilon, it swings all the way from 3-fold to something much smaller-- 1.8-fold there. But this quantity here is more stable. So does it mean that folds actually only make sense if the absolute numbers involved, or the absolute averages involved, are large, so that it's much flatter here? You can say it's more robust. So these are things that one actually should be aware of when doing analysis, or when reading the papers, about what they mean by folds. Again, you recall that we have covered, actually, all of this: the prototypical study designs, data representation, the background concepts, and the last slide, essentially, which is just miscellaneous. So our discussion so far, if you notice, does not require biology. It just makes nominal reference to biology. The approaches are very general, if you think about it. They would apply to any data set, whether or not it came from biology. But one should be aware that the math or the statistics will only provide a tool for biological discovery. And the key thing we are all here to do is to understand the biology. Because that, actually, will dictate for you the experiment design, the appropriate measure or similarity space in which to formulate your problem, and also how to read and make sense of what your model gives you after you have done this. And a very, very important thing is: no study is ever hypothesis-free. You are going to read things into your study, no matter what. It is best to be very explicit in the beginning about what your hypotheses are. And a lot of studies, I noticed, have completely deluded themselves into thinking that they are unsupervised or hypothesis-free. There is some hypothesis, I'm sorry to say. Even principal component analysis has been claimed to be unsupervised. That's not true. Well, it is unsupervised, in the sense that you just let things fall. But the underlying assumption is that the measure of similarity there is Euclidean distance. That matters, actually. And I end early with a quote from an evolutionary biologist, a French one, from the 18th century: that the discoveries one can make with a microscope amount to very little, for you can see with your mind's eye, and without the microscope, the real existence of these little beings. I think he was referring to microbes or very small features. Thank you.
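As a coda to the fold-change discussion above, the instability point can be made concrete: the same nominal 2-fold ratio reacts very differently to a small perturbation depending on how large the underlying numbers are. The choice of epsilon = 2 applied to the denominator is arbitrary and not the lecturer's exact numbers.

```python
# Sensitivity of a fold change to a small perturbation of the denominator:
# ratios built from small absolute values swing far more than those built
# from large ones, even though all are nominally 2-fold.
pairs = [(20, 10), (50, 25), (200, 100)]
eps = 2.0

for num, den in pairs:
    low, high = num / (den + eps), num / (den - eps)
    print(f"{num}/{den}: nominal fold = {num / den:.2f}, "
          f"range under +/-{eps:g} in the denominator = {low:.2f} to {high:.2f}")
```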
MIT HST512 Genomic Medicine, Spring 2004: Lecture 12, Pharmacogenomics
JEFFREY MARK DRAZEN: So this is how I view pharmacogenomics. It's the use of genetic information to determine who will respond favorably or unfavorably. And I think, actually, unfavorably may be a better phenotype to a given type of treatment. And the reason I say that is that, especially the condition that I study, asthma, there are a lot of folks that have a placebo effect from treatment. And so if I'm studying a drug and I give everybody a placebo-- that's my trial. I compare placebo A to the pink placebo, to the yellow placebo. And I know this is just a sugar pill and shouldn't have any biological impact on asthma. But I know that 25% of people are going to get better from this. And if I'm stratifying my outcome by genotypes, you'll see that I could be misled. But if I'm looking for a phenotype which is less likely to be contaminated by a placebo effect-- for example, people who get worse with treatment-- then I'm looking at the natural history of the disease. And I'm less likely to be confounded by that. Now if you're studying something like cancer or heart disease with a hard end point, which is death-- heart disease with a soft end point, which is chest pain-- it's hard to know. But heart disease with the heart end point, something that's easy to define, you don't have this big a problem. And you can begin to use favorable and unfavorable responses. So I think it's a little more robust phenotype. What it means is that whenever I do a study, I have to think about the contaminating effect of this. Looking at my data. I know that some of the people whom I think are better because of treatment or, in this case, because of a treatment by genotype interaction are just better by chance, and that there are more likely to be people better by chance in the placebo group than there would be if there was a no treatment group, just because of the nature of that effect. So it's a little bit of a contamination but not a big problem. So the idea of pharmacogenetics is that there's variability in treatment response. So if I treat a bunch of people-- that some people are going to get better. Some people are going to stay the same. Some people can get a little worse. Since we don't think the medicine makes them worse, usually we think that it's the disease getting worse because the medicine is ineffective. So the first thing that you want to do if you want to study the pharmacogenetics of anything, you want to look to see how variable the treatment response is among members of a population. And there needs to be substantial variance in treatment response among members of a population. So I'm going to give you an example now, from the asthma world, of a study in which there was variation in the treatment response among members of a population. So this is a clinical trial. It's easy to understand. Here are patients that are enrolled with mild to moderate asthma. And this is asthma which is more of a nuisance than a real medical problem. So if patients are not treated, it doesn't represent an ethical issue. So that's how we can get away with a two-week placebo run, in theory, because it's OK not to treat people. And are then randomized to receive beclomethasone, which is an inhaled steroid and is one of the standard asthma treatments compared to montelukast, which is an antagonist to the action of the cysteinyl leukotrienes at the CysLT1 receptor. And the primary outcome in this and a lot of the data I'm going to show you is the forced expiratory volume in one second to 12 weeks. 
So as an aside by way of pulmonary physiology, I don't think we recognize any of you from suggestive pulmonary class. No, none of you are taking that. If you breathe in all the way to the top and then breathe out as hard and fast as you possibly can-- something like you might go on your birthday. If you have a lot of candles to blow out-- how fast you can breathe out beyond a certain level of effort is an intrinsic property of your lungs and airways and not how hard you're trying. So you have to try hard enough to reach that plateau. And that is not a superhuman effort. Almost everybody with a little bit of effort can reach that level where how fast they can breathe out is not related to how hard they're trying to breathe. So that if we measure the amount of air that comes out in one second , we're measuring an intrinsic property in the lungs and not the motivation of the subject. And it's beyond a certain level of effort. And the way you determine whether your subjects are motivated is usually do two or three maneuvers. We have somebody coaching them through it. Deep breath in and out. Blast it out so you get to see whether they're actually trying. And if this test is done correctly, the variability among tests done in a row is 3% or 4%, even among you all who've never probably done this test before. Maybe one of you have asthma. Statistics would be that one or two of you would have asthma, have done tests like this. Without any coaching or with minimal coaching, can get tested and reproducible within 3% or 4% off the street. So it's a very useful test because it's easy to do. And we use it as an outcome indicator in a lot of breathing trials. And just to help get you straight. A bigger FEV1, the more air you can get out in one second, the better off are. So that's kind of this outcome indicator. So here are the data. And this is the trial outcome. This is what was published in this journal. And you can see that the patients assigned to the beclomethasone arm improved their FEV1 about 11% to 13% on average. While the people in the montelukast group improve it 7% to 9%. Now we know from studies done in patients with asthma that if you improve your FEV1 about 10%, that you can perceive it. It's something that you say, hey, I'm better from whatever you just did. Below that, it's hard to distinguish. Above that, if you improve it twice as much, you really don't distinguish it that much, unless you happen to be engaging in really heavy exercise. So it's almost a dichotomous kind of variable, people who got better, people who didn't get better. You'd look at these data and you'd say, gee, it looks like beclomethasone is almost twice as good as montelukast. So here are the data now shown in histogram format where on the vertical axis, we have the percentage of patients. And on the horizontal axis, we have changes in FEV1. So the patients that will be over on this end had a big response. Patients over here actually got worse with treatment. And the null is right in the middle. And this 10% to 20% bar, anything over this direction, is an improvement. I don't know how to get that to go away. Usually these little boxes go away if you leave them alone long enough. Let's see if I do it again. No, cancel. So here we are with montelukast. And you see that there were about, altogether, 5% of the population who really did great. There were about 7% of the population that actually got a lot worse. And that it turns out that it's about 42% of the population that improved a significant amount. 
Now compare this to beclomethasone, which, remember, on average did twice as well. Well, here's the difference. There were twice as many people that did spectacularly well-- but on the average not a lot. It works out to be 10% versus 5% in sum. And rather than having 7% of people who did poorly, it was just 4%. So it turns out that if you trim these-- like they do sometimes in the Olympics, where they get rid of your best and your worst scores-- if you trim at this end and that end, the groups are almost identical. And so the difference in the means reflects poorly what the difference would actually be in the population. Because if you're a patient sitting in my waiting room and your FEV1's improved by 60% and you're sitting next to somebody whose FEV1's improved 0%, the average improvement between you is 30%. But both of you didn't get better, even though the inference would be that you are [INAUDIBLE] better. So here we see a big population variability in response to montelukast, an inhibitor of the actions of leukotrienes at the leukotriene receptor, and to beclomethasone. So I'm going to explore the basis of these two and, depending on time, one more treatment in asthma. And there's big population variability. And the second thing that you need to know-- before I get to this, which I didn't bring you the data to show-- is that if I study you once, and you're over here, and I'm looking at a genetic event, that is something that you inherited, then whenever I study you, you ought to get better with this medicine. Because if it's a genetic event, it ought to be something that is imprinted on you and stays that way. While if it's variable from time to time, then it's not going to be useful. So you need to know how repeatable the treatment response is. If the basis of the variance is genetic, it ought to be highly repeatable. While if it's due to random noise-- like the reason you improved so much here was that during the two-week run-in period, you'd spent time with your mother-in-law who has six cats, and you're allergic to cats, so your lung function was low before you started, and you then got better during the treatment period because you got away from the cats-- if I then restudied you, you wouldn't get this kind of response. AUDIENCE: [INAUDIBLE] JEFFREY MARK DRAZEN: Well, so actually I can show you those data, depending on how much time we have. I'll show it to you right now. It's out of the context of this talk. Just pardon me one second. I'll see if I can find-- I have these data actually in another talk. No, no, it won't take too long. Hopefully, it won't take me long to find it. Let's see. No, it's not that one. I'll give it one more try here. Just look in a different place. I think it's here. No. One more try. Thought it was there. The nice thing about having a big hard drive is you can carry all sorts of stuff around. See if it's in this one. Yeah, here they are. OK, so here are data from a trial. And I was lucky. Because this is a different drug. But it acts on the 5-lipoxygenase pathway. And the drug company that I worked with gave me the data. They sent it to me on a CD. And a lot of drug companies won't do that, because they're afraid that you're going to do something that will hurt them with respect to their market position. They tend to be really paranoid. So to get on this graph, you had to improve your FEV1 by 12%. The consensus definition of what you can perceive varies between 10% and 12%. So when I did this graph, it was four or five years ago. I drew the 12% line.
There were 240 people in the trial, 97 of which improved their FEV1 by 12% on the eighth day of therapy. They were then studied on days 36, 64, and 92. And this graph-- when I graphed the data on my computer, it had all these lines lying on top of each other. And they didn't get any thicker. So I constrained the data through this point here. So this isn't a data point. It's simply to allow you to see how many people are in this group versus that group. It's a graphical representation thing. I mean, if I had clinical data that looked like that, I wouldn't believe it. So anyhow. What you can see is that of this group of people who improved, some actually got worse and stayed worse. A lot of them got better and stayed better. And there were some that kind of moved around. If you follow the dots carefully, you'll find that three fourths of the people are above this 12% line on two of the three subsequent occasions. So that's active treatment with zileuton, which is the active drug. Here are the same entry criteria, same trial, treated with placebo. And you can see that, as I told you, there are placebo responders. These people got better and stayed better. In this trial, however, of the people who were above on day eight, only about 40% are persistently above the line. And, in fact, you can see there are a bunch of people here who flunked out. And then they kind of went up and down. Now look at the people who don't get better. These are nonresponders to treatment, which I defined as 5% or less. So I left myself a zone of middle response. So there are a bunch of people who don't get better. And, in general, if you don't get better, you stay not better. Although there were a few people who were kind of late bloomers, it's a relatively small proportion. And when you look at placebo, it looks about the same. So you can see why the nonresponse phenotype's a little better than the response phenotype in terms of the noise. Because the people who don't get better and the placebo people who don't get better look pretty much alike, while the active treatment responders are contaminated by these people who you would say were better when, in fact, they weren't. So there you go. That should answer the question there. So I can close this one. I guess I closed the one I was still working on. Here you are. And so we're talking about repeatability. Have you heard of Sewall Wright? He was a population geneticist. He did most of his work at the University of Chicago. And he died about 15, 20 years ago. He wrote a four-volume series on human population genetics before we had any markers or anything. And he kind of forecast and did a lot of the primary mathematics which are the basis of population epidemiology. And he actually derived an equation for what he calls repeatability. Now what I would really like to know is the heritability of the asthma treatment response. But to do that, I'd have to study multigenerational families. And there are three problems with that. One is that asthma treatment is changing. Second is that we know that an asthmatic at age 10, at age 25, and at age 55 responds differently to treatment. Their disease tends to get ingrained and unresponsive to treatment. And so you'd have to be able to study, let's say, all the asthmatics within a family at age 20. But the asthma treatment that you gave the patient's father is no longer available. There's something better. And so you aren't going to be able to get that treatment. And, of course, the patient's father is now 50.
And you can't treat him at age 20, because the asthma treatment response varies with age. So it's going to be almost impossible, I think, to get the data you need to determine the heritability of the asthma treatment response. So we look at the repeatability. And the calculated repeatability for beclomethasone and montelukast from the clinical trials we've done is on the order of 80%, which is good but not awesome-- but repeatable enough that we think there's a pharmacogenetic signal. Yeah. AUDIENCE: I have just a more basic question about what [INAUDIBLE] heritability [INAUDIBLE].. JEFFREY MARK DRAZEN: So now we're not talking about the heritability of the asthmatic treatment response. We're talking about the heritability of asthma. And the asthmatic phenotype has a heritability that, in twin studies, varies but is on the order of 65% or so, comparing monozygotic and dizygotic twins. AUDIENCE: Is there any reason to connect those two [INAUDIBLE] treatment? JEFFREY MARK DRAZEN: Well, one of the real problems with asthma is that it's difficult to define-- it's like an impressionistic picture. If I'm looking at it from 30 feet, it's easy to see what it's about. As I get really close up, I lose the picture. And it's very difficult to define because there's no biochemical test I can use. Quite often people look at the serum immunoglobulin E levels, but that's related to allergy, not asthma-- although it is a very quantitative phenotype. And so the definition of asthma is airway obstruction which varies spontaneously and as a result of treatment. The problem is, you can take two people that have the same genetic background, but one of them is exposed to a lot more allergen than the other. Even if you have monozygotic twins, and one of them plays basketball, and changes his clothes, and wears gym clothes that have been sitting around in a ratty locker, and gets that exposure every day, while the other one's a swimmer and doesn't have the same kind of exposure, one of them may develop asthma and one may not. And so you'll say it's not heritable when, in fact, it requires both a genetic background and an environmental exposure to manifest the phenotype. The classic examples are that there are a lot of people from Hong Kong who have latent ragweed allergy. And they're OK in Hong Kong-- there's no ragweed, or very little, in Hong Kong. They move to the States, where there's a lot of ragweed, and they get really asthmatic in the fall. Well, they didn't have a problem back home because there wasn't the allergen. They come here, and there's allergen, and they get exposed. So it's a combination of environment and genetics that appears to bring the disease on. The research I'm going to tell you about represents the work of a whole pile of people. This is most of the pile, shown here. There are people at the Channing lab-- you may have heard from some of these folks, I don't know. [? Zach ?] works with some of them. The Pharmacogenomics Research Network and the Asthma Clinical Research Network-- these are both NIH-sponsored consortia that I've been part of. And the Whitehead Institute-- I'm sure you've all heard of those folks over at MIT. So I'm going to talk about these three asthma treatments. These are three of the major asthma treatments, and what we know about their pharmacogenomics. And we'll see how things go. So with this first one, we used a candidate gene strategy. You're familiar with candidate gene strategies? So this is work I started about a decade ago. I had a post-doc from Canada who was interested in the problem.
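To make the repeatability figure mentioned a moment ago more concrete, here is a minimal sketch (not from the lecture, with made-up FEV1 numbers) of repeatability in Sewall Wright's sense: the fraction of total variance in a repeated response measurement that sits between subjects rather than within them.

```python
# Minimal sketch (not the lecture's actual calculation): repeatability as an
# intraclass correlation -- between-subject variance over total variance.
# The repeated FEV1 % changes below are invented for illustration only.
import numpy as np

# rows = subjects, columns = repeated % changes in FEV1 on the same drug
fev1_change = np.array([
    [22.0, 19.0, 25.0],   # a consistent responder
    [ 2.0,  4.0, -1.0],   # a consistent non-responder
    [15.0, 12.0, 17.0],
    [-5.0, -3.0, -6.0],
    [30.0, 26.0, 28.0],
])

n_subjects, n_repeats = fev1_change.shape
subject_means = fev1_change.mean(axis=1)
grand_mean = fev1_change.mean()

# One-way ANOVA decomposition: between-subject and within-subject mean squares
ms_between = n_repeats * np.sum((subject_means - grand_mean) ** 2) / (n_subjects - 1)
ms_within = np.sum((fev1_change - subject_means[:, None]) ** 2) / (n_subjects * (n_repeats - 1))

# Repeatability: between-subject variance as a fraction of total variance
var_between = (ms_between - ms_within) / n_repeats
repeatability = var_between / (var_between + ms_within)
print(f"repeatability ~ {repeatability:.2f}")  # near 1 when responses are stable across visits
```

With stable responders and non-responders like these, the estimate comes out high; values around 0.8, as quoted for beclomethasone and montelukast, leave room for a genetic signal plus some visit-to-visit noise.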
And I said, what we want to do is to look at the 5-lipoxygenase pathway. That's the leukotriene pathway. Because we knew the biochemistry of the drugs. And I showed you the data-- the red and green lines were from this drug zileuton, which inhibits the action of the enzyme 5-lipoxygenase. In fact, I've been involved in the development of these four drugs worldwide. And so we knew the structure of this enzyme. We knew its genomic sequence. And our question was: when you have patients with asthma who show up in your office, they all have phenotypically similar asthma. But we think that some of them may have asthma because they have an excess of 5-lipoxygenase products and stimulation at this receptor. But a clinically indistinguishable phenotype can be someone that has asthma due to an excess of substance P, a neuropeptide having nothing to do with this. Or you can have somebody that has an excess of histamine, or someone that has an excess of neurokinin A, or endothelin. So there are probably half a dozen endogenous bronchoconstrictors, with totally different biochemical pathways, that can lead to the same clinical phenotype. And so you recognize that as asthma. And they probably represent people that have different genetically programmed mast cell and neural responses. So our argument went that if you could look at the variability of the treatment response here, related to the enzymes in this pathway, you'd be able to begin to pick out people whose asthma was associated with leukotrienes, and specifically whether there would be variability here. So what we started with is the human 5-lipoxygenase gene. We knew its intron and exon structure. When I started the work, this was hot, new information. And this guy from Canada worked on the problem for a month and quit. He said it was too hard. And I was really blessed. I had a post-doctoral fellow from Korea-- actually he was a visiting scientist. And he's probably the most patient person in the entire world, I think. And he plowed his way through this gene, exon by exon, doing old-fashioned sequencing, SSCP. And this took him almost three years. This kind of work now takes three weeks or maybe three days, depending on what kind of genotyping outfit you have. And what he found, which was quite discouraging, was that there were no common-- that is, alleles with frequency greater than 0.15-- DNA sequence variants leading to a modified protein sequence in the entire 5-lipoxygenase gene. You got a question, or are you just stretching? Wouldn't want to have a pulmonary embolism over there. There were, however, variations in the transcription factor binding region, which is just upstream. It's the 5-LO gene promoter. Now what was known about the gene promoter at that time, which has actually hung in there to be true, is that just ahead of the translation start site, there are a series of SP1 and EGR1 binding motifs. In fact, there are five of them in tandem. And if you search GenBank, this is the only gene where that occurs. So the sequence GGGCGG is repeated over and over and over again. And that's right in the core promoter. And that's where the sequence variants were found. In fact, the sequence in GenBank has five of these in a row. And we've identified individuals with 3, 4, and 6, and 7, and, less commonly, we've, in fact, identified people with 2. And so this is a variable nucleotide tandem repeat. People could use it for genotyping. And we showed that it bred true. So it's not one that tends to expand.
It's not like it is in Huntington's disease. If your mother and father have different alleles and are homozygous for them, you're going to be a heterozygote. So we did enough family studies to know that this was a stable VNTR. And then what we did was we took human cells-- these are HeLa cells-- and transfected them with the wild type promoter, which has five repeats, and with promoters with 3, 4, and 6 repeats. And we studied them in a CAT assay-- this is a chloramphenicol acetyltransferase assay. So the higher the bar, the greater the promoter activity. And you can see that all the mutants are less effective at driving gene transcription than the wild type. Now this is one of those moments that you never forget in clinical investigation. I was going over these data with Tucker Collins, who's a professor of pathology at the Brigham now. And he looked at these data and said, you know, Drazen, you're absolutely right. But you've got to find something else to do, because these variations are too small. No one wants to study a 35% or 40% decrease in CAT activity-- too small. But what I knew from clinical trials was that if I inhibited 5-lipoxygenase 35% or 40%, I got a very substantial clinical benefit. So the argument that we had in our head was that if patients with variations in the 5-lipoxygenase, the ALOX5, promoter had downregulated ALOX5 production, then they wouldn't respond to antileukotriene treatment. The idea is that patients with this funny form of the core promoter would have some other cause for their asthma. Their asthma may be due to histamine, or to substance P, or some other mediator, but leukotrienes wouldn't be important. Because these are all downregulatory mutations. So I was lucky, at the time, because Abbott Pharmaceuticals had developed the 5-lipoxygenase inhibitor zileuton. And it had two big problems they were trying to overcome. One is that it has to be given to patients four times a day. And that's a pain. If you give patients a pill to take once a day, they'll do it. Twice a day is harder. Three times a day requires a saint. And four times a day requires God. It's almost impossible to do it. Now I did actually have patients that had pretty bad asthma that took it four times a day. So Abbott developed a compound you could take twice a day. The second problem for them was that about 3% of people that took this drug developed abnormalities in liver function tests. And although nobody's liver actually failed on this drug, it required monitoring of the whole population to find the 3% or 4% of people who had an adverse hepatic response. It's a very common problem in drug development. And so they came up with a daughter of zileuton, which wasn't supposed to have this problem. And it was called ABT-761. It acted at the same location. And that was the plan. So this was the clinical trial that we designed-- I actually helped design it. This was not a high-tech clinical trial. We enrolled patients on no treatment. And we assigned them-- these were asthma patients-- to ABT-761 or a placebo. And since they were patients that we found on no treatment, continuing them on no treatment seemed like it was no problem. Now this was seven or eight years ago. Asthma treatment has progressed. It would be harder to do this trial now, because there are a lot of people who didn't get treatment [? who would. ?] So what happened was we genotyped everybody at the 5-lipoxygenase locus.
And as we expected-- oh, I forgot to tell you one more thing, which was that when the trial was set up, Abbott, to save money, said, look, when a third of the patients complete-- you don't enroll everybody on the first day; usually, it takes a couple of years to enroll-- but when a third of the patients complete their 12 weeks, we're going to compare the incidence of liver function test abnormalities in the placebo and ABT-761 groups. And if this drug isn't improving the rate of adverse effects, we're going to stop the trial. Because it's not worth another $40 million to find out something we don't want to know, which was that this drug wasn't an advance. Because they were hoping it would be an advance. And I thought that was a very reasonable approach to life. So we genotyped everybody. And as you might expect, the wild type allele occurred most commonly and made up about 81% of the alleles. The deletion alleles, three and four, made up 19%. The addition allele, six, was so rare-- one patient with it-- that it really didn't add up to much. So when we stratified the results of FEV1 by genotype, the patients that received active treatment and were the wild type genotype, 5/5, improved their FEV1 about 17%. Patients on placebo with the 5/5 genotype improved FEV1 5%. Now it turned out that there were 14 patients that had active treatment that had no 5 allele. So we called them active X/X. And they actually had their FEV1 get worse. And since we didn't stratify the enrollment by genotype, it turned out there was only one patient that had the X/X genotype on placebo. And that person got better. But that's one patient. It's hard to make any sense out of that. So the p-value for this change in FEV1 was 10 to the minus fifth. This is unlikely to have happened by chance. But when we went into the trial-- based on a preliminary analysis of a data set that we never published, because we were concerned about bias in ascertaining the patients-- we had thought that heterozygosity at this locus would contribute to the phenotype. And it turned out that that's not the case. You had to be homozygous. So the frequency of that genotype is the allele frequency squared, 0.19 squared. So it's very small. It's about 3% or 4% of the population. So from a pharmacogenetic perspective, it's interesting, but it's not economically interesting. On the other hand, if it had been 20% of the population, it would have been interesting. Because if you assume that you could do a genotype for 50 bucks, it's easier to treat the patient and see if they get better than to genotype 100 people to find the three people who aren't going to get better-- since there are also probably other reasons that you don't get better with this treatment other than this genotype. So it turned out to be pharmacoeconomically not very interesting. Now some work we did at the same time was that we got eosinophils, and we looked for the expression of mRNA from the ALOX5 gene by PCR. This is our control with [INAUDIBLE].. Then we had five patients here with the 5/5 genotype. Most of them, but not all of them, have relatively high levels of ALOX5 picked up by PCR. But none of the three patients that we did that had no 5 allele had these high levels. And then we looked at the amount of LTC4-- that is, the leukotrienes produced by these eosinophils. For the deletional variants-- there were four patients for that, compared to five here-- there was a significant difference in terms of lesser amounts of leukotrienes produced. So that suggested that there was variation in this pathway.
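The allele-frequency and screening-cost arithmetic from a moment ago can be made concrete with a minimal sketch (not from the lecture). The 0.19 allele frequency and the 50-dollar genotype cost are the figures quoted above; the Hardy-Weinberg assumption and the rest of the numbers are just for illustration.

```python
# Minimal sketch of the "0.19 squared" arithmetic: under Hardy-Weinberg
# assumptions, the frequency of people homozygous for the deletion alleles
# (the predicted non-responders) is the combined allele frequency squared.
variant_allele_freq = 0.19                 # combined frequency of the 3- and 4-repeat alleles
homozygote_freq = variant_allele_freq ** 2
print(f"expected X/X (non-responder) frequency: {homozygote_freq:.3f}")  # ~0.036, i.e. 3-4%

# Illustrative screening economics, using the $50-per-genotype figure quoted above
cost_per_genotype = 50.0                   # dollars
people_screened = 100
expected_nonresponders_found = people_screened * homozygote_freq
cost_per_nonresponder_found = cost_per_genotype * people_screened / expected_nonresponders_found
print(f"~{expected_nonresponders_found:.1f} non-responders found per {people_screened} screened,"
      f" at ~${cost_per_nonresponder_found:,.0f} per non-responder identified")
```

Running this gives roughly 3.6 non-responders per 100 people screened, at well over a thousand dollars of genotyping per non-responder found, which is the sense in which the finding is pharmacogenetically interesting but pharmacoeconomically not.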
We've since looked at the cytosolic phospholipase A2, the LTC4 synthase, the epoxide hydrolase, all the other genes in this pathway. And we haven't found any other variants related to treatment response. But we did find-- and this is actually interesting confirmatory data, and this is not our work but the work of others-- what they reasoned was that if the variants changed the amount of leukotriene produced, then you should get the same pharmacogenetic effect if patients are treated with one of these drugs which, rather than inhibiting the enzyme, are receptor antagonists. In fact, I'll tell you that in the trials done with the receptor antagonists by others, patients with the mutant form of the genotype don't respond to the receptor antagonist, confirming our data. It's always heart-warming. So it suggests that variations in this pathway can contribute to a small fraction of the variance. So I think it's interesting pharmacogenetics. But it never made it pharmacoeconomically, because it's too small a proportion of the population. We really have to, I think, affect 15%-- unless you're talking about a serious toxic event. Now, for example, take the gene thiopurine methyltransferase, which is a gene involved in the metabolism of 6-mercaptopurine, which is a drug used to treat leukemia. Among members of the population, especially people of Scandinavian heritage, about 2% or 3% of those people harbor an allele where they don't metabolize 6-mercaptopurine at the rate they should. And so if you give them the standard dose, they get toxic. And you can actually kill them with the drug. So when you're treating leukemia now with that drug, it's standard to genotype people to make sure they don't have one of these slow-metabolizing alleles. And there you're looking for a life-threatening toxicity which occurs at low frequency. But because the consequences of missing it are so dramatic and so irreversible, it makes sense to screen 1,000 people to find one. In fact, if you screen in Asia, you have to screen almost 5,000 people to find one. But yet, in treating leukemia in Asia, they're still doing it, just because it's something that can be prevented if you're going to a higher-class treatment center. And if you were screening at the Mayo Clinic, where there's a lot of Scandinavian blood, it really makes sense to do it. But for something like this, where the failure to respond to treatment is a nuisance as opposed to a life-threatening event, it didn't make pharmacoeconomic sense. So my second example is about inhaled steroids. And steroids are different from antileukotrienes, because we knew how antileukotrienes work in asthma. I mean, they were developed based on understanding of the biology of asthma. But inhaled steroids-- we knew they worked based on observation. But steroids have thousands of potential mechanisms of action. We had no idea which one was operative. So some of this study is useful because, if I can find a gene that controls steroid response, maybe that's telling me a gene that's important in the biology of asthma. And then I could develop a treatment that inhibited just that gene and wouldn't have a lot of the side effects of steroids. Because one of the big problems with treating asthma with steroids is that if you're a kid, it makes you shorter. If you're an adult, it makes your bones brittle. It causes acne to break out, causes skin to thin. It's got a lot of side effects that aren't terribly good.
And to get rid of those side effects would be good, if you could get at it. At the same time, with inhaled steroids, just like with antileukotrienes, you saw that there are about 40% of people who are paying for drugs who aren't getting any benefit. But the drug companies think that's perfectly fine. I don't. Because they're getting the toxic effects, we know, but they're not getting the therapeutic benefit. And it's costing them 50 bucks a month, sometimes more. The new inhaled steroids are actually 60 to 75 bucks a month. So it's an expensive habit. It's a lot like your coffee habit. If you buy a couple of cups of coffee at Starbucks, or even at the place down here, you're spending 2 and 1/2 bucks a day on coffee. That works out to 70 bucks a month. That's about what the asthma treatment costs. So here we are showing two more populations. I showed you the purple population with the beclomethasone. Here are two different inhaled steroids-- different from the one I showed you-- showing a variation in treatment response quite similar to the one that I showed you from that asthma trial. Now, these two populations are ones that we studied. This is actually a drug company study done by Forest Labs of one of their inhaled steroids. And this one was sponsored by the NIH and called the Childhood Asthma Management Program, or CAMP. And even though one's kids and one's adults, totally different study designs, you see this same variation in the asthma treatment response to inhaled steroids in three populations, allowing one to believe it's highly likely to be true. So what we did here is we had a different candidate gene strategy. With 5-lipoxygenase, we knew the genes in the pathway, and we could identify them based on how the treatment worked. Here, we were guessing how the treatment worked. So we got a bunch of people around in a room who we thought knew about steroids. And we compiled a list of possible genes in the pathway. We then looked for DNA sequence variants in those genes. We then looked to see if there's a statistical relationship between clinical response and the presence of the sequence variant. And then we determined their functional relevance. And so I don't need to define SNPs, right? You all know about SNPs and haplotypes, right? Talked all about that. So here's our strategy. We have 32 control and 16 asthmatic cell lines, which we had an infinite amount of DNA from. And we identified variants from sequencing and from a database. We then selected SNPs based on the allele frequency. We designed genotyping assays and did primary genotyping in the adult study-- that was the drug company study-- of cases and controls. What we were using as cases and controls was comparing people at this end of the graph to people at that end of the graph. So we call these controls and these cases, or vice versa. And then we would identify haplotype tag SNPs that had greater than 5% prevalence. And that gave us first [INAUDIBLE] [? statistical ?] associations in the adult population. We then said, if this is true, we should be able to replicate it in the kids. And then we said, if it's still true, we should be able to replicate it in a third population. And then we did some fancy statistics on it. So here are the three populations we studied. This adult study is a drug company study, eight weeks in duration, people whose lung function as a percent [INAUDIBLE] is 70% to start. They're adults. Their average age is 40. And they're interesting. Their steroid response is 7% on average.
In the CAMP trial, which is our first replicate, they're kids. So their average age is 9. The trial duration was four years, not four weeks or four months. And their average improvement in FEV1 is 7%. And then our second replication trial was government sponsored, six weeks. Again, average improvement is about 7%. So this is about what you get in these populations-- in this kind of population, you get about 7% improvement in FEV1. This is the list of genes that we interrogated. And this was simply a list of genes made up by understanding the pathway, thinking how it could impinge on asthma. So it's entirely theoretical. And so here's our primary outcome using the single SNP analysis-- these are haplotype tag SNPs. So in the adult study, rs242941-- this is just the name of one of these haplotype tag SNPs-- using the eight-week FEV1 percent change as a continuous variable, gave us a p-value of 0.025. This is adjusted for covariates but not for the number of [? looks. ?] We then did it in the CAMP population, now many fewer looks, again eight-week percent change, continuous variable, p-value of 0.006. So you're on a roll. And we looked at that same genotype in the ACRN population and didn't find a relationship. That would have been hitting gold. So we got silver or some other metal-- I'm not quite sure what you want to call it. But we found these three other haplotype tag SNPs. And this one, rs1876828, which is in the same gene, CRHR1, corticotropin-releasing hormone receptor 1, was positive in both populations. So we got either second or third prize, depending on your perspective. The ideal would have been to have the same haplotype tag SNP positive in all three populations. What we found was different haplotype tag SNPs in the same gene, suggesting that it's the gene that's the problem, but not localizing the site, not actually getting to what we think is the active variant. So the half-full way of looking at these data is: gee, although I don't know the mechanism, if I genotype people at these loci, can I predict their response in lung function? So this is rs242941. That was the top one in the kids, CAMP, and the adults. And what we've now done is taken this haplotype tag SNP and looked at people who were homozygous for it, and heterozygous, and people who didn't have any copies of it. And you can see that if you own this genotype, your improvement in FEV1 is almost twice as great, on average, as if you own that genotype. Then you look at a different haplotype. This is the GAT haplotype pair. And this is now in the adult and the CAMP study. It's even a little more informative here-- I can show you the difference in the size of these bars. Here, this is about 10% to 15%. If you look here, this is now 15% to 20%. So there's a little bigger response in people with this haplotype compared to people without it. So we think that it has some predictive value. It did not turn out to be statistically significant, as I said, in the ACRN population-- a smaller number of people. It was in the right direction but not statistically significant. This is a different SNP in that same gene, rs1876828, showing again that if you're homozygous AA versus GG, there's a big difference in response. But when we went back and did this population with that genotype, it didn't work out. The second thing we learned is that CRHR1 is probably involved in the asthma treatment response, suggesting that it might be worth targeting a CRHR1 antagonist to the lung. So what about the CRHR1 receptor?
It's a seven-transmembrane-spanning G protein-coupled receptor. It's got a bunch of ligands, in addition to corticotropin-releasing hormone, that can activate it. So some of the questions are: can you find it in the lung? So this is one of these body blots that you buy from one of those companies out west-- it's RNA-- and what we did here is 12 cycles of PCR. And you pick it up in the positive control and in the brain. This is a hypothalamic hormone, so the fact that you find the hypothalamic hormone receptor in the brain is no surprise. If you do twice as much PCR, you not only pick it up in the brain, it shows up in the lung, in the placenta, the thymus, and the lymph nodes. It shows up in epithelial cells and CD4-positive lymphocytes, all tissues that have been implicated in the biology of asthma. So at least it's where it needs to be. It doesn't quite meet Koch's postulates yet, but we're headed in the right direction. For the third example, what I think I'll talk about is the beta-agonist receptor. So what I've done so far is talk about two association studies-- clinical trials that were done, and then we get the data and the DNA and we do a bunch of genotyping and associations, but it's all post hoc. Now the story of beta agonists is slightly different. Beta-agonist inhalers are the most commonly used asthma treatment in the world. I don't know how many of you read the New England Journal of Medicine. Have you ever seen the New England Journal of Medicine? Nice graphics. [INAUDIBLE] lollipops in this picture. She's one of the artists that draws them. And these inhalers are going off worldwide at the rate of about 1,000 times a second. So there are a lot of people using this kind of treatment. So in 1990, almost 15 years ago, this fire was started. A long-acting beta agonist came out. Now the problem with those little inhalers is they last three or four hours. So this drug company designed a drug that lasts and agonizes the receptor for 12 hours. And the hypothesis was that if you took it on a regular basis, your asthma would improve, because you'd be chronically dilated. These drugs work by relaxing airway smooth muscle, which is constricted in asthma. So in this trial, designed by the sponsors, they were comparing this long-acting beta agonist with placebo. And it was a crossover design. So half the patients started on formoterol, half started on placebo, for 26 weeks of treatment. They got a month off. And then they were switched to the other treatment and followed for another 26 weeks. And in this trial, the primary outcome was an asthma exacerbation. Rather than measuring lung function, they said, I'm going to wait for your asthma to get bad enough for you to be upset by it. And I'm going to click off the asthma exacerbation box when that happens. The hypothesis had been that during your formoterol treatment, you'd have fewer asthma exacerbations than during your placebo treatment period. Because of the crossover design, everybody's in both arms. So when these data were published, the asthma world was surprised. Because this is the number of subjects without exacerbation over time. If treatment was 100% effective, your line would go straight across here. Every time a patient has an asthma exacerbation, the line kicks down, and the patient's then censored from further analysis. And you could see that for about a week or so they were the same. But then the patients that got the regular treatment-- remember, this was supposed to be the ounce of prevention worth a pound of cure, the treatment that's supposed to make you better--
were actually having more asthma exacerbations at any given point in time than the people who only used a beta agonist when they needed it. And when these data came out, there was concern that this very commonly used asthma treatment was actually causing harm. And the fires were fueled by this paper that was published by my predecessors at the New England Journal-- I had nothing to do with this, I'll let you know-- showing that the use of beta agonists was associated with the risk of death or near death from asthma. And what they did in this study is they went to this huge database-- every time you get a prescription in the province of Saskatchewan, somebody knows who you are and what prescription you got. And then they also matched that with the asthma exacerbation and asthma death files, when people got really bad asthma, because they have hospital records. And they discovered that the patients that used a lot of beta agonist either died from their asthma or were admitted to the hospital with very severe asthma. And they concluded it was cause and effect. And they were totally wrong. This is a classic example of confounding by severity. Or, more simply put, the sicker patients were using their beta-agonist inhalers a lot, and they're the ones that are more likely to die from asthma. So it was bad epidemiology. But while we were working that out, this trial was conceived by the people at the NIH, who became concerned that a common asthma medicine was making you sick. And so we put together this trial called the Beta-Agonist Study. All clinical trials have to have a name. Otherwise, you would have to say, remember that trial where they compared drug X to drug Y in a 24-week crossover design, it was published in The Lancet-- rather than, remember the BAGS trial, or something like that. So all good clinical trials have a name. This was the BAGS trial. And in this trial we enrolled patients with really mild asthma for a six-week run-in period when they were on their standard treatment. And then they were randomized to receive either albuterol, which is a short-acting beta agonist-- so you take it four times a day; this is the one that's used so often around the world-- or placebo. And they were given a coded inhaler. It was white. And it said, study drug, take two puffs four times a day. And I've already told you about four-times-a-day medication. So this inhaler had in it a computer chip, and the patients knew this. Every time they pressed the inhaler, it recorded the time of day and the date. And we knew that we had about 95% compliance, at least with pressing the inhaler. There wasn't a video camera in the computer chip, so I don't know whether the patient actually inhaled when they pressed the inhaler. But we think that pressing the inhaler meant that they actually used it. And that was a coded inhaler. And they were also given open-label albuterol and told, if your asthma is still acting up, you can take this. And so all the patients were given that. The placebo group was given an identical inhaler, identical instructions, an identical computer chip, and open-label albuterol. And the hypothesis was that if regular use of beta agonist was bad for you, the blue group would do poorly compared to the green group. Now rather than powering the study for asthma exacerbations-- because in these very mild asthma patients, that would have taken 1,000 patients studied for a year--
we used a surrogate endpoint, which was lung function, strongly related to how patients do with asthma, taken over 16 weeks. And here are the data, which we published in the New England Journal of Medicine in '96. It showed that for the morning peak flow, which was our primary outcome, the two groups were not statistically distinguishable. Here, it's a very expanded scale. And the difference between these two of 15 liters per minute is probably not significant. We had gone in saying that a difference of 25 would be significant. And it got only halfway there. Interestingly, the blue group did a little poorer than the green group, even though the difference wasn't statistically significant. So this wasn't a significant sign of the treatment being worse for you, and we concluded that it wasn't worse for you. In fact, we looked at a bunch of outcomes. Now while we were doing this trial, while data accrual was going on-- this is the beta-2 adrenergic receptor-- a number of polymorphisms were identified in it. Now these two, which are right up here near the N-terminal part of the receptor, have very high allele frequencies. The minor allele frequencies are on the order of 0.4, so they're very common. And we knew from the work of others that these were functional-- that is, that an Arg16 receptor behaved differently than a Gly16 receptor. We also knew that Arg16 was in very strong linkage disequilibrium with Gln27. So if you were Arg16, you were likely Gln27. So what we did was we stratified the data from that clinical trial based on these genotypes-- again, a retrospective analysis. And here are the findings. If you had the Gly/Gly genotype and used beta agonist on a regular basis, your morning peak flow after treatment oscillates around the zero line. If you have the Arg/Arg genotype and only used beta agonist when you needed it, which works out to be about a puff every other day, nothing happened. But if you have the Arg/Arg genotype and are using two puffs four times a day, there's a fall in peak flow during and after the active treatment period, suggesting that it's not that the drugs don't work anymore-- in fact, when you take them you get bronchodilation-- but there's a side effect. When the drug wears off, you're worse than you were before you started taking it. And that's what this is showing us. Because this is the morning peak flow before treatment. So this says that using the drug on a regular basis makes you worse in the morning than you would be if you hadn't used the drug. But it's genotype related. So that led us to do a prospective trial. And this is like the gold standard in this business. Rather than sifting through old data, you start off with new data. So we designed the BARGE trial-- beta-agonist response by genotype. And this was two identical trials in which patients were enrolled with the Arg/Arg genotype. A matching Gly/Gly genotype patient was found, at least with respect to sex and lung function. And then both groups of patients were put in this design. And the idea was that in the Arg/Arg patients, we were expecting that placebo would be superior to albuterol, because we think that these are the patients where albuterol given regularly has a detrimental effect, while in the Gly/Gly patients, this difference would not exist.
And then if we did a genotype-by-treatment interaction, we would say that if you look at Gly/Gly on active treatment versus Arg/Arg on active treatment, these patients would be better, these patients would be worse, and there'd be a big difference between the two groups. And that's how we designed it. Our primary outcome variable, again, was a measurement of lung function, morning peak flow, with these secondary outcomes. So we screened 332 patients; 78 met the criteria and were enrolled. And this is an interesting trial, because, as far as I know, it's the first trial where people were enrolled by genotype in a non-malignant condition. They came in and they met clinical and physiological criteria-- you're a patient for our trial; now let's genotype you. Because we knew that one out of six people was Arg/Arg. If you then met the criteria and were Arg/Arg, you were enrolled in the trial. We then went through the people who we determined to be Gly/Gly and looked for a match: sex, lung function, and the center where they were studied. So we ended up with these people. These were the baseline characteristics of the two groups. We matched on sex. And we came as close as you could. The proportions that were Caucasian worked out to be about the same. Their ages worked out to be the same. Their lung functions worked out to be the same. So we had two reasonably matched groups of people that differed with respect to their genotype at the beta-2 adrenergic receptor. And then this was unexpected. What happened in this trial was different from our other trials. We had the patients on beta agonist on a regularly scheduled basis, but they also had a beta-agonist inhaler they were using if their asthma acted up. So when we did this trial, we switched their rescue inhaler from a beta-agonist inhaler to another type of rescue inhaler called Atrovent, ipratropium bromide, which doesn't work at this receptor. And we allowed six weeks for patients to get used to using that treatment. Well, we hadn't expected this to happen. But what happened was, in the Gly/Gly patients, their peak flow stayed fixed, while in the Arg/Arg patients, who were using their albuterol four or five times a day, just kind of in regular use, when we switched them to ipratropium bromide over six weeks, it improved their lung function 30 liters a minute-- actually 28 liters a minute. Now that's as good as you get out of most treatments you pay money for. And all we did was switch them from albuterol to ipratropium bromide. But in our trial design, we said the zero point is going to be six weeks. So all our comparisons are made from here, where we had this big effect during the run-in, which we hadn't counted on. Then during the trial-- so the Arg/Arg is yellow, the Gly/Gly is blue-- they start off with peak flows of around 470. The Arg/Args get worse on active treatment while the Gly/Glys get better. While on placebo, the Arg/Args get better and the Gly/Glys get worse. This is the primary outcome, peak flow. When we express it this way-- this is now morning peak flow-- each of these bars represents the difference between active and placebo. So if active is worse than placebo, you go down. And if active is better than placebo, you go up. Arg is red. Gly is green. And you see that the difference between the two is highly statistically significant, with a value that's on the order of 23 liters per minute. We had said 25 would have been clinically significant. It came close but didn't hit that bar.
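As an illustration of the genotype-by-treatment interaction analysis just described, here is a minimal sketch (not the trial's actual statistical plan) that regresses a simulated change in morning peak flow on treatment, genotype, and their interaction; the data frame, effect sizes, and noise level are all made up.

```python
# Minimal sketch of a genotype-by-treatment interaction test on simulated data.
# The interaction coefficient (genotype:treatment) is the quantity of interest:
# it asks whether the albuterol-vs-placebo difference depends on Arg/Arg vs Gly/Gly.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80  # a made-up number of matched patients
df = pd.DataFrame({
    "genotype": rng.choice(["ArgArg", "GlyGly"], size=n),
    "treatment": rng.choice(["albuterol", "placebo"], size=n),
})

# Simulate the qualitative pattern seen in the trial: Arg/Arg patients on
# regularly scheduled albuterol do worse; everyone else drifts slightly up.
effect = np.where((df.genotype == "ArgArg") & (df.treatment == "albuterol"), -20.0, 5.0)
df["peak_flow_change"] = effect + rng.normal(0.0, 15.0, size=n)  # liters per minute

model = smf.ols("peak_flow_change ~ genotype * treatment", data=df).fit()
print(model.summary().tables[1])  # look at the genotype:treatment row
```

The point of fitting the model this way, rather than comparing arms separately, is that the interaction term directly tests whether the treatment effect differs by genotype, which is what a prospective, genotype-stratified design like this one is built to answer.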
Then at evening peak flow, the difference is also in the same direction. But interestingly, there has been some recovery of lung function during the day. And our other outcomes: FEV1-- the difference again in the same direction. Arg/Args get worse. Gly/Glys get better. The difference is about 150 milliliters. Everybody agrees this is statistically significant. Then we look at symptoms. And here, a bigger number is more symptoms. There are more symptoms when the Arg/Arg patients are using the drug on a regular basis than when they're using placebo-- again, with Gly/Glys doing better on active and Arg/Args doing worse on active treatment, so highly likely not to have occurred by chance. Now my final outcome here is how often the patients had to use a rescue inhaler. The ipratropium bromide was the non-beta-agonist rescue inhaler; albuterol is the beta-agonist rescue inhaler. And we found the same thing. Remember, during this period, the Arg/Arg patients are getting albuterol eight puffs a day already-- and they're taking more of it; it's not enough. When the Gly/Glys are using it eight puffs a day, they cut back on their daily rescue use. So the difference between these groups-- three puffs a day-- is highly significant, and in the direction that we're making the Arg/Arg patients worse with active treatment. So I think the clue out of this one is that about one out of six patients with asthma in the US is probably being made worse by their albuterol treatment, which is a very commonly used asthma therapy. Now this study was not powered for asthma exacerbations. To do that, we would have had to study, as I said, about 1,000 people for a year. And to study 1,000 people, you would've had to screen 3,000 or maybe more like 4,000, because it's going to be genotype stratified. But these two things track. And those kinds of trials are under consideration but are not completed yet. So I've given you three examples-- antileukotrienes, where we've shown an effect based on what we know about the biology of the pathway, but one of little pharmacoeconomic consequence; inhaled steroids, where we weren't quite able to replicate finding [? three ?] genes-- well, we have a gene but probably not a SNP related to the steroid response, and it would also be useful to try to do a controlled prospective trial there; and then with the beta agonist, we've done the controlled prospective trial showing that genotype makes a difference in treatment response. So it's more than just an idea; it's something we can actually reduce to practice. And these are three examples in asthma treatment, which has the advantage of being a recurrent disease requiring chronic therapy. If I were doing a trial where I wasn't allowed to take patients off treatment-- where the outcomes are going to be strokes, or heart attacks, or terrible events-- then you would not have this kind of freedom. Even when you're using blood pressure as an outcome, you don't want to let someone's blood pressure go uncontrolled. But here, you can really show a genotype-by-treatment interaction in different kinds of ways. And you can expand this to other kinds of diseases, doing the right kinds of treatment trials where you begin to enroll people by genotype at informative genotypes, which you find out from the evaluation of all the data, to decide whether it makes sense to move forward. So this is where I see, to me, one of the major advantages of genetics in actual medical care. I mean, I can find the genes that cause things.
But all I can do is give you bad or good news. You don't have the gene for Huntington's disease. You don't have the gene for Parkinson's disease-- or you do, and now you're going to worry about it, but you can't do anything about it. I might be able to tell you that you have a gene for a form of emphysema and you shouldn't smoke. But still, there's very little that I can do with the information about a gene that I harbor or don't harbor, other than making decisions about my kids, and when I want to have kids, and who I want to marry. And it's interesting-- in some communities in New York City where they have arranged Ashkenazi Jewish marriages, and a number of genetic diseases, they're actually genotyping the people in the arranged marriages so that they don't end up with Tay-Sachs disease and other commonly inherited conditions-- but those are arranged marriages. A lot of marriages these days aren't arranged. So it's not terribly useful information. If, on the other hand, you have a disease, and I can look at your genotype and modify my treatment-- so that in one case, you're likely to get better, and in another case, I know that a drug isn't going to work for you-- that's a much more medically useful thing to know. I'm sorry that you inherited these genes that are going to cause you to have this bad disease. But at least I know from your genetic profile what treatment you're more likely to respond to. And so that's why I think-- what is going to make a difference in genetics and medicine? This is going to make a difference. And we're seeing it now with leukemias. We're going through and doing these studies looking at patients that have a certain kind of leukemia, asking who is likely to have a good treatment response and who isn't likely to have a good treatment response. What that means is that if you're in the good treatment response group, I can treat your leukemia and maybe not come so close to killing you. While if you're in the bad treatment response group, I'm going to get out everything I possibly can from day zero, and I'm going to try to wipe out every cell. So I think this is much more medically useful than finding causative genes. We have to find genes that help us modify and understand our treatment response. So that's my story.
ISAAC SAMUEL KOHANE: So what we'll talk about today is very practical genomic medicine, by which I mean not only stuff that we think is going to be imminent in two years but what it means today to do genomic medicine. Nothing future, just today. And if you think that I have stepped across a line into the future, call me on it. So let's talk about diffusion of knowledge. The genome draft, as you know, was much heralded, with Clinton-- was Clinton still president in 2001? No. So before-- when was the draft? The first-- AUDIENCE: [INAUDIBLE]. ISAAC SAMUEL KOHANE: When was Venter and Collins-- and wasn't Clinton-- yeah, it was Clinton. AUDIENCE: It was 2000. ISAAC SAMUEL KOHANE: It was 2000. That's an error. So it was much heralded. And there was a lot of promise that this would solve problems in human disease and in medicine. And it made it sound-- the way it was described-- fairly imminent. And it was going to transform medicine. And just as a measure of calibration, the development of penicillin by Chain and Florey in '41 was saving thousands of lives within months. So has genomic medicine been successful by that measure? No. Do we think it's going to impact medicine in the future? Well, sure. I hope I've convinced you in the course of the class that it's going to be important in the way we diagnose our patients, in the way we manage patients, even in the way we treat our patients-- but clearly not by this measure. And I think we have to appreciate that the timescales are probably on the order of 10 to 20 years. And, in fact, one of my colleagues, one of my friends who used to be a reporter for the New York Times, when this was announced in 2000, asked me, so what's going to be the main impact of the human genome draft for the next 10 years? I said, a lot of bad news. By which I meant that we're probably going to be able to diagnose a lot of things that we had not been able to diagnose and do nothing about them. And that's one reason why not. But there's another reason why not, which is-- well, what other problems have there been preventing the diffusion of genomic medicine? Any ideas? AUDIENCE: Massive amounts of data? ISAAC SAMUEL KOHANE: What? AUDIENCE: Massive amounts of data. ISAAC SAMUEL KOHANE: Massive amounts of data and-- AUDIENCE: Lack of trained personnel. ISAAC SAMUEL KOHANE: --and lack of trained personnel. Basically, the medical system does not have any idea how to deal with this. And just to give you some calibration around that: I gave a lecture a year ago to the American professors of medical genetics-- not genomics but medical genetics. And I was telling them about the whole genomic revolution. And I think it's an odd thing for me to talk to professors of medical genetics at their national association. But they were polite and interested in my lecture. But when I asked them, is this what you see yourself doing for the next 20 years? Absolutely not. They wanted to restrict themselves to monogenic, highly penetrant diseases that were extremely rare. That's what they wanted to do. And I said, well, you share at least a substring with genomics. Don't you see it as part of your field? No. And so the geneticists are not holding the torch. Right now, the medical students are not being taught this in any detail whatsoever. And so there's going to be a problem. So who is going to practice genomic medicine? There's a variety of possibilities. Shown in light blue, for some reason, is the medical geneticist.
The person who typically has been doing-- sorry about that-- who has typically-- that will be the last light blue thing, I hope-- who has been doing medical diagnosis and counseling. But what I'm telling you is they do not see themselves as bringing to the fore the entirety of the impact of genetics across the broad swath of diseases, whether they're primarily inherited or primarily a matter of modifying environmental influences. A logical place would be primary care. Because, after all-- as I hope I've convinced you-- part of what genomics gives you is the ability to prognosticate about the future. Part of preventive care should be genetic and genome-wide testing. So a pediatrician, internist, or OB/GYN would be a natural place for this to happen, and also specialists for specific diseases. We've seen a lot of application of microarrays to cancer, for instance, so oncologists would be natural people to order these tests. And a gastroenterologist-- we looked at Crohn's disease, and it would not be illogical for a gastroenterologist to screen for risk factors. So what's, in fact, the case? There was a study done just looking at cancer susceptibility tests. It's a substantial study looking at over 1,200 physicians, 820 of which were in primary care. And in 12 months, approximately 30% ordered or referred genetic testing, looking for susceptibility-- not looking to diagnose a patient but looking at whether they were at risk. So that's kind of impressive. So a bit less [INAUDIBLE]. Only 7% of them directly did it, whether because they were uncomfortable or not knowledgeable enough to actually directly order it. But that's telling you that 30% of this random sample of physicians were actually ordering these susceptibility tests. What do you imagine are the factors affecting ordering? Let me give you some potential-- what were you saying? AUDIENCE: Cost. ISAAC SAMUEL KOHANE: Cost. Any other suggestions? AUDIENCE: Whether you can interpret the test. ISAAC SAMUEL KOHANE: Whether you can interpret the test. Any other suggestions? What? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Which physicians were more likely-- no, no. What I meant was which physicians were more likely to order tests. I apologize-- let me rephrase the question. What was it about them? What property of the physicians made them much more likely to order? AUDIENCE: Probably research hospitals. ISAAC SAMUEL KOHANE: Research hospitals. AUDIENCE: And also probably in disease areas where there's a lot of-- ISAAC SAMUEL KOHANE: So places where they have a high burden of genetically influenced disease. Well, here's the answer. The first and foremost was being in the Northeast. I don't know what it means. But that was, by far-- so there are plenty of wonderful tertiary care centers on the West Coast, I hear. But that was not the telling thing. Feeling confident in interpreting the test, as you mentioned; the presence of advertising materials; and, most importantly, having the patient ask for it. The point here is that the reason there was such a high percentage, 30%, is not because doctors have been trained to order those tests. It's because the patients are reading the lay literature. And if they have breast cancer in the family, or ovarian cancer, or colon cancer, they're asking the doctors to test them. That's the real insight. [SIDE CONVERSATION] Go ahead. AUDIENCE: When you say susceptibility test-- presumably some kind of multiplexed PCR looking at different [INAUDIBLE] or associated risks that actually [INAUDIBLE]..
ISAAC SAMUEL KOHANE: It's a multiplex test not an array test. AUDIENCE: Question about Northeast-- is it just simply medical professional density? ISAAC SAMUEL KOHANE: No, this corrected for it. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Yeah. AUDIENCE: The New York Times or [INAUDIBLE].. ISAAC SAMUEL KOHANE: It's something else. It's corrected. But it's something about the nature of the training or it may be patients in the Northeast. Let's put it this way. Maybe there's a lot of Ashkenazi, Jewish women worried about breast cancer knowing that they have this BRCA1, BRCA2 risk factor. The article did not actually elaborate on that. But that's my guess. I think the patients are the driving factor. And for those of us who are in medicine, I think we have to recognize this. And it's actually a sad fact that it's the patients that are driving it. But it's a problem because if one of the things that you bring to bear, as a doctor, is an appreciation of prior probability and what tests you want to do in order to avoid false positives. Because, remember, a lot of these polymorphisms or mutations may not actually be the causative element. They may be in linkage disequilibrium, for instance. And so it's not a 1 to 1. It's not if you have this, you're going to have cancer. It's a probabilistic measure. And so there is going to be a false positive rate with any of these measures. And if the doctor is really not knowledgeable about that interpretation, then they're going to find problems. Just as if doctors did the routine CT scans on everybody, you'd find on the order of, people my age, maybe 10% to 20% pituitary microadenomas, as we find when we do autopsies of car accidents in people my age. But they have no clinical meaning that we can tell. But if you do routine testing, you'll find that. And this is going to be quite a huge problem of false positives if we continue to have patient-driven demand for testing, because it should be done knowledgeably. AUDIENCE: Can that be met with if a doctor orders this or refers them to a specialists [INAUDIBLE].. When these tests come back, I'm assuming you don't-- this is probably wrong-- that they don't come back the way the normal blood test-- there's certain numbers and whatever the doctor's-- blood tests. ISAAC SAMUEL KOHANE: The short answer is theoretically not. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: No, there actually has to be person. We'll get to that. But even so, if you have an ascertainment bias because of who's doing it without actually letting the medical system try to define a appropriate measure of when you do a test, then you're going to run into those problems more than less. AUDIENCE: Oh, so you're suggesting that maybe-- so patients are asking for a particular test because they're worried. ISAAC SAMUEL KOHANE: They're worried. AUDIENCE: So you're suggesting that maybe that's a bad thing not because the doctor should be the ones to say, hey, you might want to think about this. But you're saying that maybe they-- ISAAC SAMUEL KOHANE: It's a self-selected group. And it may be that the people who are truly at risk are not getting tested. Basically, the doctor is not a decision maker here. And the whole role of a doctor is to decide, if anything, when is a test going to be cost effective and sufficiently useful for the patient. 
And if they disintermediate themselves and allow the patient to do that, then there's a risk that there's a group of people who will not be screened and a group of people who have been over-screened and, therefore, have an unacceptably high false positive rate. Because, basically, when people come up with calculations of sensitivity and specificity, they do it based on a certain population of patients. When a genetic counselor says, you have X or Y percent risk, that's based on a general population. They don't base it on the group of patients who are neurotic and worried, maybe appropriately, about their family history. That group might have a different set of specificities and sensitivities. I'm not saying that patients are wrong to be worried. What I'm saying is the medical system is failing. They don't have educated doctors who can intermediate about what is the appropriate testing to be done. I mean, if a patient comes to me as an endocrinologist and says, I want growth hormone testing-- I know not to do growth hormone testing whenever anybody asks, because it has a significant false positive rate. I'd end up treating a whole bunch of people with growth hormone for no reason. And that's the whole point of having an expert as part of the loop. And what I'm saying is this is telling us that there's a demand. And the medical system is failing. So here's the conventional view of genetic information management. It's worth going through it. You can think of disorders as being single-gene or chromosomal, major gene or multifactorial, or these complex traits, complex multifactorial. And the conventional view of how this should be used is, in primary care, the point would be to recognize signs and symptoms, make referrals, support family long-term care. The primary care practitioner should appreciate the role of family history, arrange testing, referral to specialists, as needed, provide longitudinal care. And for complex traits-- use of genetic tests to guide prevention and treatment. The specialist would manage specific problems. They would diagnose and manage system-specific problems. And they'd also use genomic tests to guide prevention and treatment. And the medical geneticist-- the role there is counseling, longitudinal care, advice and interpretation of test results, and a reservoir of knowledge for handling complex cases. But I just told you that medical geneticists have largely not met this role. So is there a different role? What to do? Is there a medical genomicist? What do you think? Is that a reasonable specialty or is that the wrong answer? AUDIENCE: Explain what a medical genomicist means. ISAAC SAMUEL KOHANE: If you wanted good preventive care and you wanted someone who would give you appropriate counseling about either lifestyle changes or appropriate drugs that you need to consider, or a specific diagnostic test that you have to do for all possible genetically influenced diseases, what kind of person would you want to go to? Any ideas? AUDIENCE: Actually primary care is my first reaction. ISAAC SAMUEL KOHANE: So your first reaction is primary care. Do primary care practitioners currently know about any of this stuff? No. AUDIENCE: They [INAUDIBLE] with someone who would give them a very simple explanation. If someone has this test and [INAUDIBLE] what it means. And they're going to tell you what it means and then manage that through your whole-- ISAAC SAMUEL KOHANE: So let's do it slowly with feeling. Because you're now the czar of medicine.
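To make the point about priors concrete, here is a minimal sketch-- not from the lecture; the sensitivity, specificity, and prevalence figures are invented for illustration-- of how the positive predictive value of the very same test collapses when it is applied to a low-prevalence, self-referred population instead of the high-risk group it was characterized on.

```python
# Positive predictive value (PPV) of the same test in two populations.
# All numbers here are illustrative placeholders, not real test characteristics.

def ppv(sensitivity, specificity, prevalence):
    """P(disease | positive result), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.95, 0.99  # hypothetical assay performance

for label, prevalence in [("general population", 1 / 2000),
                          ("strong family history", 1 / 20)]:
    print(f"{label}: PPV = {ppv(sens, spec, prevalence):.1%}")
# general population:    PPV ~ 4.5%  -> most positives are false positives
# strong family history: PPV ~ 83%   -> same test, very different meaning
```

With these made-up numbers, most positives in the general-population group are false positives, which is exactly the prior-probability problem being described.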
And you're trying to start a company in this area, in the United States, in 2004. Who's going to actually do this job? AUDIENCE: Well, I think the idea of going to any one, single person, whether it's specialist, genomicist, geneticist, or primary care-- I think is the wrong approach. I don't think any one person-- just in their specific roles can address-- especially as you move down that leftmost column to the very complex disease. I think that's the-- ISAAC SAMUEL KOHANE: So what do you do? We know that there's a large number of people who are going to die of complications of type 2 diabetes. And it's exploding. We know there's a large cancer burden. We know there's a large cardiovascular disease burden. And we know that some component of all of these diseases is, in fact, down at the bottom. So what are we going do? AUDIENCE: So I think rather than having a medical genomicist, it would be nice to have a type of physician who organized with different levels of care into a seamless approach to be able to [INAUDIBLE] approach, to address the [INAUDIBLE] you have. Maybe the genomicist would be appropriate for homing the genetic aspects [INAUDIBLE] collaboration. You have somebody who is seeing a patient, outside of their regular PCP, who can integrate the latest technologies along with the traditional therapies and can communicate the process-- ISAAC SAMUEL KOHANE: So basically someone who's a generalist in medicine but a specialist about inherited diseases, and complex diseases, and also a team communicator, coordinator. AUDIENCE: Right. So for each patient, you've got one person who's not having to worry about the occasional sniffle here or there, like a PCP, but also is focused on a bigger picture than maybe the specialist or the geneticists. ISAAC SAMUEL KOHANE: Go ahead. AUDIENCE: I think it's unrealistic to have a team-- I wish I had a team of specialists around me all the time to do this stuff for me. I think the reality is you're going to have, at most, one person who knows anything about your health, any ongoing basis. And that person should be the counselor that let you know things about lifestyle, things about preventive care. And they should have genomic information to help give you advice about your particular lifestyle decisions you make. And that should be the primary care physician. So I think that it's going to be really hard to have a lot of people around, these specialists like genomicists giving you this preventive care type information. Because they don't that well. ISAAC SAMUEL KOHANE: So you say the primary care practitioner. Cecily. AUDIENCE: I mean, to be honest, more and more genomic tests come online. Like, look at cancer microarrays. You're just going to need people to know about the techniques and know about the methods. And you can help the physicians interpret it. I mean, there's going to be a role for them. We're seeing already there's going to be an explosion of SNPs, and pharmacogenomics, and all this stuff. And you're going to need to have people who know the techniques who can help the primary care-- ISAAC SAMUEL KOHANE: Absolutely true of what you both said. On one hand, it's unlikely we'll have swarms of people worrying about us. And yet, there is going to be this explosion. And no one person-- I mean, it's already pretty hard to be a good primary care practitioner with knowledge circa 1990. How are they going to do that? Any ideas? Operationalize what you were suggesting. 
AUDIENCE: You might have, in a hospital, a medical genomicist that these primary care physicians can call up and refer a patient to the help interpret things. ISAAC SAMUEL KOHANE: But what if it's primary care? I mean, we're all going to, unfortunately-- unless one of is a major messianic figure-- going to die from one disease or other. So we're at risk. And, hopefully, it will not be a bus hitting us. So it's going to be something that's going to come-- interaction between the environment and our genes. So that's for all of us. So, basically, we need that information, unless we don't want the information, for every one of us. It's not going to be, hey, I have an interesting problem. It's part of routine care, I think. Everybody needs to be-- AUDIENCE: There could be a department of genomics in any-- ISAAC SAMUEL KOHANE: That's the academic answer. I'm talking about the process in the field. Who's going to be the guy or girl who's going to be providing that knowledge, either as a primary care practitioner or to the primary care practitioner? So here's some ideas I had. Your ideas are-- by the way-- are as good as mine. And I want to tell you, this is actually the central-- I mean, this course is about genomic medicine. And I think this is a central conundrum. There is no plan right now how to bring all of this into the field. And I can tell you, in pediatric endocrinology, we're not even ordering the autosomal-dominant, well-understood, highly-penetrant single genes that we know about because of the educational problems, let alone all these other complex diseases. So any idea that you have could be of great interest to our system or to a company, if you could actually figure out how to do this right. AUDIENCE: Some of it is going to either market-driven-- ISAAC SAMUEL KOHANE: It's definitely going to be market-driven. AUDIENCE: --I mean, right now, [INAUDIBLE] done a lot of research in but there's a really good test and really good results come online and generate high-profile papers and get into some of the patients are going to start demanding these things. ISAAC SAMUEL KOHANE: But that's a problem. For instance, a prostate-specific antigen done wrongly gets a bunch of poor guys get their prostates removed for nothing. AUDIENCE: [INAUDIBLE] error too, 7% false-- ISAAC SAMUEL KOHANE: False positives. AUDIENCE: --false positives. ISAAC SAMUEL KOHANE: Yeah, and so that's the problem. And, as I explained to you in my first lecture, there's 7,000 articles just on appropriate use of the prostate-specific antigen. We have now 30,000 genes to worry about. And what are the right cutoffs. And let's be optimistic and say we can actually get the optimal answers for these. No one individual I can think can actually solve it. So you're absolutely right. It will be market-driven. That's certainly true. But the question is, how will the market solve that problem? And I'm claiming there are companies out there that will solve that. And either it'll be inspired by ideas similar to ones we're going to articulate today or by other ideas. So all I've been calling for is thought about this. So one was internet-enabled triage specialists. Basically, you feed your genome to, essentially, a service on the internet. And they take whatever the primary care physician says and they basically-- in India or somewhere else-- provide you with your risk profile and the next steps for you to take. That's one possibility. So you check each patient in door for inheritable genetic or epigenetic phenomenon. 
A parallel system-- essentially, a fly on the wall to the medical interview-- that provides decision support. And this could be a person or, more likely, a computer program. But it's going to have to be a damn good computer program to be able to do that. I was involved in knowledge representation and knowledge-based systems in the '80s when I did my thesis, but there really wasn't any good motivation for it. Because, in the end, doctors know how to diagnose acid-base problems. But doctors will never be able to do this. So this is a true motivation, I believe, for automated decision support. The alternative is just to redo the curriculum in a major way, and even so, to teach students how to use electronic resources in near real time. Because no one will be able to keep this in their head. AUDIENCE: For the last one, I think it would require such simplification of outputs in terms of feeding a genome in and spitting out numbers, the way that [INAUDIBLE] and all these other things, because that's how-- right? I mean-- ISAAC SAMUEL KOHANE: But it would not be a number. When I was talking about feeding in the genome [INAUDIBLE]. The answer would not be, these are the levels of your various things. It would be, you have a 30% increased risk for prostate cancer. The following test is the right thing to do, that kind of thing. Because, again, no primary care physician can know it. As I said, almost none of them fully know all the literature on prostate-specific antigen screening for prostate cancer, let alone for 30,000 tests. So I think it would have to be a distilled, utility-based, sensitivity- and specificity-based-- AUDIENCE: That's what I mean. They get a [INAUDIBLE] from the lab. They get it from the-- ISAAC SAMUEL KOHANE: --oh, it would be simplified in that sense. Absolutely. I think that is the only way. And endocrinologists have made a big career out of just interpreting three numbers-- TSH, TBGI, and T4 in the thyroid test. Because guess what? Most medical students can never figure out which way means hypothyroidism, hyperthyroidism, or changes in the amount of binding protein. AUDIENCE: Doesn't that then relegate the medical geneticists to just the category of specialist once again? To the lack of a specific disease-- ISAAC SAMUEL KOHANE: I'm claiming that the system cannot have an expert medical genomicist. Because a lot of what we were talking about is primary care, is prevention. This has to be in every primary care situation. So either it's outsourced, essentially, to the internet, to some other group of people; or you have a low-cost parallel assistant, whether it's a human or, more likely, a computer program. And the third is redoing education. To your scenario, which, I think, is one of the good ones, which is to have a very simplified set of recommendations come to the primary care physician-- it's the primary care physician, still, who just has to learn how to look intelligently at that report, that set of recommendations. I don't see them having to learn the 30,000 genes and all the epigenetic and genetic effects. That does not seem reasonable. But this is the fundamental problem that we have in genomic medicine. And that's going to be, by far, the rate-limiting step for any penetration into managing human disease, more than any other aspect of the genomic enterprise. We're going to discover lots of interesting things. We're going to find good drugs. We're going to make interesting diagnoses. But translating it into care will be the real challenge.
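The kind of distilled, action-oriented report being described could be prototyped as little more than a lookup from detected variants to risk statements and next steps. The sketch below is purely illustrative-- the variant names, risk figures, and recommendations are all invented, and a real system would need curated, validated knowledge behind every entry.

```python
# A toy genotype-to-recommendation lookup of the kind a decision-support
# service might return to a primary care physician. All entries are hypothetical.

KNOWLEDGE_BASE = {
    "HYPOTHETICAL_VARIANT_A": {
        "condition": "prostate cancer",
        "relative_risk": 1.3,
        "recommendation": "Discuss earlier PSA screening; refer to urology if symptomatic.",
    },
    "HYPOTHETICAL_VARIANT_B": {
        "condition": "type 2 diabetes",
        "relative_risk": 1.2,
        "recommendation": "Lifestyle counseling; annual fasting glucose.",
    },
}

def report(found_variants):
    """Turn a list of detected variants into short, action-oriented statements."""
    lines = []
    for v in found_variants:
        entry = KNOWLEDGE_BASE.get(v)
        if entry is None:
            lines.append(f"{v}: no interpretation available.")
            continue
        pct = round((entry["relative_risk"] - 1) * 100)
        lines.append(f"{entry['condition']}: about {pct}% increased risk. {entry['recommendation']}")
    return "\n".join(lines)

print(report(["HYPOTHETICAL_VARIANT_A", "UNKNOWN_VARIANT"]))
```

The design point is that the physician sees a short recommendation, not raw genotypes.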
So how are we going to pay for those roles? Extremely unclear. Right now, you can't get a lot of genetic tests paid for. AUDIENCE: [INAUDIBLE] because it hasn't been developed enough in terms of [INAUDIBLE] ISAAC SAMUEL KOHANE: Right. AUDIENCE: [INAUDIBLE] combined [INAUDIBLE] result. ISAAC SAMUEL KOHANE: With the result. And, finally, because it's very expensive. And if it became free, everybody would use it. I mean, right now, as I told you in the first lecture, in a high-throughput lab, it costs $0.10 to do a genotype. Do you know how much the system is billed for that test? So I was talking about, at first, $0.10 in order to tell you, wow, the genomic future is now. We can do very cheap screening. But how much is our insurance going to be billed for these tests, for that single genotype? AUDIENCE: Over $1,000. ISAAC SAMUEL KOHANE: Over $1,000. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: And that's not the real cost. That's a huge profit margin. AUDIENCE: There's been a similar debate going on with these full-body MRI scans and things, right? ISAAC SAMUEL KOHANE: Sure. AUDIENCE: And so, I mean, where are the voices in the field for this and for those saying, preventatively, if we do a whole battery of non-invasive, one-day tests-- submit a sample that you can do with genomic testing, you sit in the MRI and do all the stuff. And then, hey, we found this particular disease. And we just saved 60, 70, maybe $100,000 worth of treatment. ISAAC SAMUEL KOHANE: Well, first of all, that's not the way the system sees it. The system sees that you spent $4,000. And my insurance company is going to reward me for what I do the next two or three years. If you drop dead of cancer 10 years from now, that's going to be on the insurance company's balance sheet 10 years from now. There's not a lot of payback to me. But more importantly, speaking as a clinician, I think these total body MRIs are terrible. Because you're finding a lot of incidental findings. In fact, the medical establishment is against those, I think, for good reason. Because, again, the probabilities that we have around the meaning of bumps on the MRI are based on people presenting with certain symptoms. Because these studies are aimed around that. And if you just have Joe Blow off the street who just has a birthday present, a full-body MRI scan-- which, by the way, haven't you heard the advertisements for it? Yeah, it's your 40th birthday. Don't you want to do this? Or your dad's birthday-- and the advertisement is Joe Blow, the patient, reports, oh, I'm so reassured. On my 40th birthday, it seems nothing's wrong with me. What they don't report on is all the people who got bumps in their head and then had to go through craniotomy or God knows what to investigate these things. So I promise you that we will stick to the pragmatics. So this is why there's not penetration today. But today, how do you order a genetic test for a clinical problem? What do you do? AUDIENCE: How do you order it? ISAAC SAMUEL KOHANE: Well, yeah. What do you do? AUDIENCE: You go to geneclinics.org and you check if it has it and whether it's a research lab or a clinical lab-- ISAAC SAMUEL KOHANE: Did everybody hear that? That's very, very important. Gene clinics and genetests.org. It's one of the things that had better stick with you after this class. Because there are thousands of clinics and labs across the United States that have one or two tests. There's no single, giant aggregator of genetic tests.
And there's this one website, maintained by my colleague, Peter Tarczy, at the University of Washington, that has all that. So that's exactly right. AUDIENCE: I mean, just a question of the market stuff. Do you think that's going to eventually roll into the [INAUDIBLE] ISAAC SAMUEL KOHANE: 100%. So that's an excellent question. OK, let me just get to that question in two minutes. Sorry, it's an old slide. It's a year-old slide. And they had-- on gene clinics, gene tests-- 189 gene reviews, which are very nice monographs around a specific disease, involving 1,000 different clinics and 500 laboratories covering 948 diseases. And the short answer is, yes. These are going to be rolled up into the Quests and the various laboratory corporations. The real question is-- ultimately, we know what it's going to look like. It's going to be a highly roboticized sequencing and genotyping operation in these labs. But in the interim, it may still be that it's rolled up, but the actual sequencing and procedures might still be at these various places. And these companies might just contract to these various sources. Right now, for instance, if you want to do Duchenne testing, your sample goes to a place in Utah. Now why is that? Why not just roll into one place? Because the guys who are resequencing the Duchenne gene for Duchenne muscular dystrophy know which primers-- that is, which sequences-- work the best to amplify different segments of the gene to get reliable results. And that know-how can ultimately definitely be rolled up. But in the interim, on the five-year timescale, it may or may not just be left local. And so what these Quests will be are more aggregators, front ends, conduits or channels-- as the marketers like to say-- to these myriad labs. Ultimately, I think everybody believes each of these will be one highly roboticized facility. Because after all, a gene is a gene is a gene. But because of things like knowledge of what primers work, knowledge of which mutations are common and what they mean-- all that knowledge management is still distributed. And rolling it up is going to be, I think, the brake on that full roll-up. So for those of you who don't know the site. So if you want to test for a particular problem, you can look at this thing and say, I want to look at obesity. And it'll tell you which laboratory will do the testing for it. In fact, probably the only obesity-related gene worth testing for right now is MC4R. It's the most common genetically associated cause of obesity. MC4R is the melanocortin receptor, type 4. And on the order of 2% of individuals with morbid obesity-- according to some studies-- have mutations here. Unfortunately, for the rest of us, like me, we're just fat because we eat wrong. So we're talking about-- go ahead. AUDIENCE: Geneclinics.org? ISAAC SAMUEL KOHANE: Yeah. But is this indexing that gene clinics and gene tests provide generally sufficient? So if you look at adrenal hyperplasia, which is a disease of childhood and adulthood. It's responsible for ambiguous genitalia in infants, hirsutism and infertility in females, and precocious puberty for males and females. So if you look for adrenal hyperplasia, you see the following. You get 21 hydroxylase deficiency. But what's the right thing to actually order? Well, if you actually knew something about the pathway, the steroidogenesis pathway, you'd know that it starts from cholesterol.
The cholesterol that we all claim to hate actually is the backbone for all the steroid hormone molecules. Cholesterol will make all the salt-retaining hormones like aldosterone, the glucose-stimulating hormones like cortisol, and the sex hormones like testosterone and estradiol. And an appreciation of what the pathways are is going to allow you to actually do focus testing so that, for instance, 21 hydroxylase deficiency is, in fact, the most common. But there are other deficiencies that will happened with some regularity. 5% of congenital adrenal hyperplasia is 11 hydroxylase deficiency and not 21 hydroxylase deficiency. And the bars here just show the block in the pathway that will be caused by the deficiency of that gene. And so the point is, you still need a lot of knowledge around that specific disease and the pathways rather than just going up to a database that says obesity-- this gene. You still have to have some knowledge about what you're looking for. So what material do we want to test? Let's take four diseases-- cystic fibrosis, a disease where you get plugging of mucosal ducts. And it's a chloride transporter problem. And people die young. McCune Albright disease-- a disease of the G protein complex where, essentially, you have an activating mutation in one of the subunits so that all the G-coupled processes are hyperactive, so that you have kids with precocious puberty, cortical adenomas of the adrenal, and precocious puberty. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Well, McCune Albright syndrome, the classical description is cafe au lait spots, which are these-- basically, looks like someone spilled coffee on your skin. Fibrous dysplasia of the bone-- so your bones are screwed around because they have these sort of ropey things going through them, and precocious puberty. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: It's distantly. As we learn more about it, multiple systems, like thyroid and growth hormone, can also be hyperstimulated. So, basically, it's a mosaic state where these mutations can actually be spread incompletely throughout the body. And so, for instance, the melanocytes are stimulated in just patches. And if you're unlucky enough to have adrenal involved-- adrenal adenoma. Please. AUDIENCE: Can I get a scenario for-- ISAAC SAMUEL KOHANE: So a patient comes to you with McCune Albright syndrome. And you suspect it because Zach told me that they have these cafe au lait spots. Boy, this person has a two big ones. And this person is having a precocious puberty and is complaining of bone pain. What do I send to test? That's the question. AUDIENCE: But whereas with the question of cystic fibrosis, they would know that [INAUDIBLE] ISAAC SAMUEL KOHANE: Well, you suspect it. You did a sweat test, which has a certain specificity and sensitivity. or maybe it's a parent who already had a kid who died from cystic fibrosis. And they just have a newborn. What are you going to test? You don't know the N thing yet. So let's start one at a time. Cystic fibrosis. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: A testing gene. But I'm a I'm a stupid doctor-- indeed I am. What does it mean to test genes? What thing do I stick into a tube? AUDIENCE: Oh, just do a mucosal swab. ISAAC SAMUEL KOHANE: So [INAUDIBLE] I can do a mucosal swab? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: The short answer is yes. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: What? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: No. McCune Albright. Maybe the cells here are not involved. 
Maybe it's not-- AUDIENCE: You look for the [INAUDIBLE].. ISAAC SAMUEL KOHANE: Maybe you get a skin biopsy of the cafe-au-lait spot. 21 hydroxylase deficiency is germline. You can do the blood. For something like cancer, blood is not the biopsy. You need a hunk of that cancer. Or it depends-- is it a highly inherited cancer, or do you believe it's a somatic mutation in the cancer? The whole point I'm making here is it's not obvious what you put into the tube. And as dumb as it sounds, when you're a tired resident on the ward and someone says, do the genetic test, you don't know what the hell to do. It's that simple. But this is what it comes down to in genomic medicine. These are actually answerable questions, but you need some reading. Where are the mutations? And what do they mean when they are in certain tissues? AUDIENCE: So diseases [INAUDIBLE] ISAAC SAMUEL KOHANE: For a germline disease-- any tissue. It doesn't matter. So you want one that is the least invasive-- blood, swab, even spit sometimes. AUDIENCE: Right. For some other complexes, [INAUDIBLE] people suspect you have susceptibility, which then triggers the disease. ISAAC SAMUEL KOHANE: It's certainly not going to be true for many diseases. But, for instance, when Scott Weiss talks about asthma, he's thinking of a risk factor-- even if it's a small risk factor and it'll increase your risk of asthma by a factor of 1.2, he's still thinking of this as a germline polymorphism. Now it may be that asthma-- I'm just making up something that's completely off the beaten path-- is, in fact, a somatic mutation. I doubt it. But all these genotyping studies, haplotype studies, are done off of blood. So they're assuming these are germline diseases. AUDIENCE: So for something that's not germline, something like maybe you've got a disease because you're exposed to some sort of environmental toxin or something-- ISAAC SAMUEL KOHANE: Or you have a somatic mutation. Yeah. AUDIENCE: --yeah. Would it be a good standard protocol then to take-- let's say for cancer, part of the tumor and part of just any other tissue to compare-- ISAAC SAMUEL KOHANE: Or blood. Sure. AUDIENCE: --or something like that just be able to-- ISAAC SAMUEL KOHANE: The short answer is cost. But, yes, I mean, the answer is-- AUDIENCE: --wasn't it Weiss who said that in the next five years this cost thing is not going to be an issue-- ISAAC SAMUEL KOHANE: Well, it's not going to be an issue for him doing research. What I just told you is $0.10 a genotype today for Scott Weiss. For you, my friend, $1,000 in the clinic. And by the way, that's a huge commercial opportunity, obviously. You're absolutely right. That's going to drop. It's not likely to drop to $0.10. It's going to drop to $10 or $100. And the market leaders in that, who can figure out how to make the bucks on this, at that level, are going to be extremely rich. OK, so if you ordered a test, you're going to need to get advice on that genetic test. Where are you going to get that? Will it be found in your office? Let me tell you off the bat-- extremely unlikely. There's no one in my office who's going to give you genetic counseling. Or other licensed counselors-- maybe it's part of your institution. I'm talking about today. We're not talking about the medical genomicist of the future. Today, what are you going to do? Or licensed counselors at Brigham, or you outsource it-- and here, it says, there's a pre-test phone consultation provided to patients and so on.
And that might be part of the business model, that they understand that a lot of institutions don't have that facility. So they can outsource it. So how do you send or store something? Again, these are stupid things that most residents don't know. 3 to 10 milliliters of whole blood, if it's germline, in a purple-top tube-- also acceptable is green, which is sodium heparin, or light blue-- sodium citrate tubes. It comes down to that. What tube do you put it in? If you put it in the wrong tube, you may not have an adequate extraction. And it's totally different, of course, for RNA. If you're interested in RNA, you'd better flash freeze it as soon as you can, is the short answer. So what kind of DNA-- AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Because, basically, the stability and extractability are going to depend on what is inside the tube. And the color of the tube is a code for what's inside the tube. So these tubes are not all featureless glass. They have a little bit of chemical in them. So what kind of DNA testing should be done? So let me ask you guys. We've just got together in a happy company called 512 Genomic Testing. We've got our stock options. And what are we going to do? Are we going to sequence these genes? When someone sends a gene to us at 512 Genomics, are we going to sequence it or are we going to genotype it for all the known mutations? What are we going to do? AUDIENCE: [INAUDIBLE] mutation different levels [INAUDIBLE] and more complicated. ISAAC SAMUEL KOHANE: So let me push you. So just arrived in our inbox is a blood sample in the right tube from Joe Schmo-- but it's really the baby's blood. And they want to know, does this baby have 21 hydroxylase deficiency? And they heard that 512 Genomics is a very smart company, because we've been trained at Harvard Medical School. And, therefore, they trust us to do it. So you're going to do a panel of what? Genes? Let's simplify your life. Well, first of all, are we going to test all genes? AUDIENCE: No, we'll start testing the genes that are known and that are frequent. ISAAC SAMUEL KOHANE: OK, I want us to have a very nice corporate vacation in Bermuda. So all the money that we spend is going to take away from the bennies I can give you on our vacation. OK, we're going to do-- I'm going to tell you this-- five genes that are [INAUDIBLE]. We're doing all five genes? AUDIENCE: No, then you look at the frequency of a particular mutation occurring. ISAAC SAMUEL KOHANE: OK, so I'm going to tell you there's two genes that account for 99%. Good enough for us? OK, let's say yes. Let's say yes. AUDIENCE: Why can't we just do one gene? If it comes back negative-- if we know two comprise 99%, [INAUDIBLE] do the one, comes back negative, we do it. But if it comes back positive, it's like, hey, we got this. [INAUDIBLE] ISAAC SAMUEL KOHANE: We're going to go back to the patient? That doesn't happen. We're a lab. We're not the doctor. That does not happen. We cannot ask, unless you really want to change-- you live in the dream world. AUDIENCE: No, I'm just saying-- no, you can't go back to the patients and get more blood. But-- ISAAC SAMUEL KOHANE: Just first sequence the more common gene. Is that what we're saying? AUDIENCE: Yeah, so if you got two genes, I guess-- ISAAC SAMUEL KOHANE: OK, so let me tell you. Listen to the facts. 21 hydroxylase accounts for 95%, 11 hydroxylase 4%. So we got 99% with those two genes. So what am I actually doing? I'm now the laboratory technician? AUDIENCE: --yeah, do the 95% one.
ISAAC SAMUEL KOHANE: OK, and it's negative. What do I do now? AUDIENCE: Then you do the next one. It comes back positive-- I don't know enough about the disease. So assuming that the positive-- ISAAC SAMUEL KOHANE: The positive is important, positive is very important. AUDIENCE: --then you just run that actual test. ISAAC SAMUEL KOHANE: OK, so you're doing a phase one. That's perfectly acceptable. It probably means that, most of the time, we'll run both tests. Because most of the time, we'll probably get referrals of patients who don't have that disease. So it's a small but important incremental savings you just gave us. You took Jose's advice and said, let's only do one gene. And if that's negative, we'll do the second gene. But as I explained, most kids this was sent for will, in fact, not have the disease, because it'll be something else. And we'll still have to do the second one. You had a question? AUDIENCE: The question is, this is a PCR test, right? Just to clarify. ISAAC SAMUEL KOHANE: Well, we haven't gotten there yet. AUDIENCE: OK. [INAUDIBLE] ISAAC SAMUEL KOHANE: OK, so let's say that we take the modification of the Jose protocol. So what are we going to do? So we look at one gene and then two genes. So what actually are we going to measure? Are we going to measure the full sequence of these genes? Are we going to just look at the known published mutations or SNPs? What are we going to do? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Now what if I tell you that 2/3 of the mutations are extremely common and 1/3 of them are one of a kind? AUDIENCE: I'm assuming we know something about the frequency of having the disease associated with the frequency of having that particular mutation. So if we know that a particular mutation affects the one person and has a very high incidence of progressive disease, then that's the [INAUDIBLE] ISAAC SAMUEL KOHANE: OK. I'm Joe Schmo. My kid actually had one of those rare ones. I'm going to sue your pants off. I'm taking the vacation to Bermuda from you guys-- there was a known rare mutation. Or even-- AUDIENCE: Right, again, this is back to what I said in the Jose modification. If you come back negative-- ISAAC SAMUEL KOHANE: I see. AUDIENCE: --then you go back-- ISAAC SAMUEL KOHANE: All I'm saying is that refinement is still going to cause-- most of the time, you're going to end up doing all the mutations. AUDIENCE: But the alternative is to just do all the mutations all the time. ISAAC SAMUEL KOHANE: What I'm telling you-- AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: What? AUDIENCE: A third of them are kind of unique. ISAAC SAMUEL KOHANE: Unique-- they're private mutations. So the short answer is, I believe the right answer is, you've got to sequence the whole thing. I mean-- AUDIENCE: There's no value in preemptive screening, say, for instance-- ISAAC SAMUEL KOHANE: It depends. If it's for a given patient around a specific problem, that's a very different situation from, I'm a public health authority and I want to screen the population, or I want to screen the population and just detect people at risk for this. But if you're a doctor with a specific question, I'm duty-bound to actually do the maximum for you. And consequently-- period. AUDIENCE: And it just seems-- maybe I misunderstood this-- but it seems kind of counter to what the current practices are for, again, standard testing for things. When you have thrombosis, the first thing they do is they test me for the most common things-- [INAUDIBLE] deficiency, and all these different things.
Because, well, if you've got that, then [INAUDIBLE] then we don't really need to continue. Because we already know you had a big problem. So that's, I guess, the big basis for my approach. Joe Schmo came and said, one of my kids has this thing. The fastest, cheapest thing to do would be to test them for the most common-- ISAAC SAMUEL KOHANE: So you see, you can do that because the doctor can actually set up a protocol and actually talk this through with the patient-- we'll go over the most common ones. We're 512 Genomics. And 512 Genomics does not have a relationship with the patient. And the doctor doesn't understand genetics, as we already established. So again, I'm saying-- in 2004, if there is a private mutation that the patient subsequently finds, we're toast. AUDIENCE: Even if we don't [INAUDIBLE] back. ISAAC SAMUEL KOHANE: When are we going to go back? No one's going to come back to us. AUDIENCE: Take the genes that account for those qualifications and people. ISAAC SAMUEL KOHANE: Well, basically, we're going to follow doctor's orders. If they say, I want to look at congenital adrenal hyperplasia, I think we have to look at every gene known-- we'll bill accordingly, of course. If they say, I want to look at the 21 hydroxylase gene, we have to sequence the whole darn gene. Now if they specifically say, I want to look at just a mutation-- which they'll never tell us-- I want a mutation in codon 32, well, we'll do that genotyping. But if the doctor is a standard doctor, 2004, and says, check the 21 hydroxylase gene, we've got to sequence the whole thing. There is no choice. Otherwise, we're toast. Yes, so I think the only thing I missed from the slide is epigenetic things like methylation-- we need to think about doing that. For certain diseases involving imprinting, for instance, you want to look at methylation. So, pre-flight checklist. What is this going to take to get our clinic up and running? Now we're the 512 Clinic. We're not the company anymore. So would you or a staff member be able to be-- so I took this from some very good website. I wish I could tell you which one it was. But one of the things that you need to ask before going ahead: would you or a staff member be able to be an advocacy resource? Is the patient-- and, when appropriate, the family-- prepared for either a positive or negative test result? Does the patient understand the medical, psychological, and social ramifications of the test? Do you have a referral list of appropriate specialists and/or genetic counseling services to resolve any issues that cannot be handled in the office? And these are all things that we have to do as a clinic before we can even start ordering these tests. And the workflow is as follows. For a positive test, these are all the things we have to do. Now we're a primary care practice. And the interpretation of a positive result is different if it's a diagnostic test, a predictive test, a carrier test, prenatal testing, or newborn screening. If it's a positive diagnostic test, the clinical diagnosis is confirmed. If it's predictive testing, it tells you you have increased risk. If it's carrier testing, it tells you that you're a carrier. If it's prenatal testing, it says that the fetus has a specific condition. And if it's newborn screening, it tells you the newborn has the disease. And the follow-up includes all the things you could imagine. For negative tests, if it's a diagnostic test, the symptoms are unexplained. For predictive testing, the likelihood of showing symptoms is decreased.
If it's carrier testing, it's highly likely that you're not a carrier. If it's prenatal testing and the fetus was symptomatic, then it's unexplained. If it's not symptomatic, the chance for the condition is small. And newborn screening-- the newborn is not expected to have the condition. There's a whole bunch of follow-up that does or does not develop from these different levels of use of the same darn test across these different clinical states. And thinking about 512 Genomics, our old company, if we didn't know which one of the situations is involved, we'd have to practice a very defensive corporate policy to make sure we'd be in the maximally informed state for all conditions. So can I send a sample for microarray testing? We've heard so much about microarrays in this class. In the USA, the answer is yes for a research protocol. This summer, over at Harvard Medical School, we're going to be genotyping with a resequencing array for cardiomyopathies and for hearing deficiencies. These are custom chips from Affymetrix. And certainly for research protocols, we're using them for expression as well. Can you do this for commercial testing? The answer is no. Roche Diagnostics, three months ago, tried to get approved, through the expedited review of the FDA, the P450 chip, which has all the genes that are involved in metabolizing toxins and drugs. And therefore, it's a very good pharmacogenomic screen. And they were blocked dead in their tracks by the FDA. Unless they can appeal it correctly, they will have to go through full-bore, extensive FDA approval. And so why is that the case? I can give you a lot of different reasons. But the short answer is the FDA does not understand this technology and does not know what to do with it. I was part of a panel with the FDA-- [INAUDIBLE] talking about it-- they have just now agreed with big pharma on what the data structure is with which they can transfer results, let alone interpretation or analysis of the results. AUDIENCE: So they're restricting things just because they don't understand it, even though it's not unleashing a new drug on the population whose side effects they don't understand. This actually doesn't do anything other than maybe give patients something to worry about-- ISAAC SAMUEL KOHANE: Let me make it very concrete. It gives you a result that the patient then has ovaries and breasts removed for. AUDIENCE: I guess when you put in that-- ISAAC SAMUEL KOHANE: Yeah, I mean, that's what-- what? AUDIENCE: --it's a classic example of what I've heard from bilateral-- [INAUDIBLE] for no real reason. ISAAC SAMUEL KOHANE: --for no reason. But that's going to happen. Many things are going to happen like that. I mean, just going to the doctor-- I try to stay away from the doctor as much as I can, which is probably a bad idea. Because you probably should get routine care. But every time you expose yourself to even an investigative procedure for the wrong reason, they're likely to find-- again, for the false positive reason-- things that are incidental but are going to create huge costs, and worry, and morbidity. Diagnostic tests themselves can kill you. So let's say you are told you have a risk for this. And then you get a colonoscopy. A tiny fraction, probably one in 200,000 people, suffer bad outcomes from that procedure. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Yeah, those are a pain in the you-know-what. So this is a problem. So the short answer is-- microarray testing, no. But there's nothing holy about the FDA.
In the Netherlands-- I don't know-- again, this is what I had heard last year around this time. And I don't know if it's happened. But they were claiming they were going into routine use of microarrays for breast biopsy screening and evaluation. I have to do the research to figure out whether it's happened or not. And there's, of course, the issue of consent. And the short answer-- as we'll get to shortly-- is that you have to do a fairly extensive consent process. However, if the patient is symptomatic, if they have a cancer, or they're having a heart attack, or they have dementia-- if they're actually symptomatic, it's a much more simple consent process. All the consent issues that you've heard about in the public press are about pre-symptomatic testing. AUDIENCE: So why don't they just approve the pre-symptomatic testing with the caveat or the [INAUDIBLE] rule that we're just going to follow and observe a patient? Let's say that we usually go in and get a full-body MRI scan [INAUDIBLE] I think you shared a story with someone about that. Found it. And try to do nothing [INAUDIBLE] complications. So rather than go in and cut someone's head open, or a colonoscopy and everything else, just observe the patient-- six months, a year, two years, three years. ISAAC SAMUEL KOHANE: So let me tell you something about medicine. That's impossible. If you find something that looks bad on an MRI-- because even if you believe that it's only one chance in 100 that it is truly a bad thing, and something bad happens to that patient, not only are you toast with the legal system but you'll feel terrible. I mean, it could make you very, very fidgety to have a patient with a finding that you were taught in medical school is badness. And we're talking 2004, medical education in 2004. We're taught that some bumps on MRIs mean bad things. So if you start changing your priors by screening everybody, then bumps do mean different things. But that's not the way we've been educated. So we kind of get itchy as hell. AUDIENCE: So with the advent of new detection technologies for particular things that you context intelligence specific for this sort of problem [INAUDIBLE] ISAAC SAMUEL KOHANE: Yes, except it never really does because-- well, that's not true. For certain things, we're going to get specific enough markers. I can imagine that a bump-- I may be wrong-- imagine a bump plus a functional assay, like with a PET scan, will show that this is a highly, rapidly dividing thingamajig and, therefore, it's a problem. Or it's totally metabolically quiescent and we can just observe it. So that's possible. Yes. AUDIENCE: I was just listening to [INAUDIBLE] talking about his idea of why don't we treat cancers before they actually become-- ISAAC SAMUEL KOHANE: He's absolutely right. AUDIENCE: --and that, to me, falls right in line with this-- just what's the harm in doing this analysis? ISAAC SAMUEL KOHANE: Well, I have the greatest respect for Judah. And I'm sure he would agree with what I'm about to say. It all comes down to what is the treatment and what threshold do you pick. So if you're looking at a certain angiogenic signature, whether it's in a polymorphism or something that you're measuring, what's your false positive rate and how toxic is your treatment? If your treatment's totally benign and cheap, of course, treat everybody. But that's almost never true. It's never completely benign and completely cheap. So you end up having to make decisions.
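That tradeoff-- false positive rate against the toxicity and cost of acting-- has a standard decision-analytic form: act when the probability of disease exceeds harm / (harm + benefit). The sketch below uses invented benefit and harm values purely to illustrate how the action threshold moves; none of the numbers are clinical.

```python
# Classic treatment-threshold arithmetic: act when P(disease) > harm / (harm + benefit).
# "benefit" = net gain from acting on a true case; "harm" = net loss from acting on a non-case.
# All values below are illustrative placeholders.

def action_threshold(benefit, harm):
    return harm / (harm + benefit)

def expected_net_benefit(p_disease, benefit, harm):
    """Expected value of acting on a patient whose disease probability is p_disease."""
    return p_disease * benefit - (1 - p_disease) * harm

print(action_threshold(benefit=10.0, harm=1.0))   # ~0.09: benign, effective -> act even on low risks
print(action_threshold(benefit=2.0, harm=5.0))    # ~0.71: toxic, modest benefit -> act only on high risks

for p in (0.05, 0.30, 0.80):
    print(p, round(expected_net_benefit(p, benefit=2.0, harm=5.0), 2))
# negative expected value below the threshold, positive above it
```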
So now, for presymptomatic, asymptomatic testing, consent is actually a very, very complex and drawn-out process. So here's what is involved. Again, this is 2004. It may change to be more rigorous or less rigorous, depending on whether or not the Genetic Privacy Act, which is now somewhere in Congress, will be enacted. So the current state of the art is: the major medical facts, including the diagnosis, the prognosis, and the treatment of the disorder tested for, have to be explained to the patients. The genetic facts, including risks for other family members, have to be explained. The chance that the test will give a correct prediction, as well as indeterminate or unexpected findings, has to be explained to them. The risk of receiving an unfavorable test result and the possible consequences for yourself and your family-- in the case of prenatal diagnosis, this may include the risk of facing a decision about abortion. And you have to tell them in a way that they really recognize, that they understand. Some people are videotaping these consents now, partly defensively and also to make sure that the practitioners are doing the right thing. Also, it should be made clear that they can refuse the test, and they should be informed of potential benefits and disadvantages, including unsettled questions of privacy protection dealing with insurers, banks, or employers. You might be able to say today the insurance company is not going to do anything with it. But that test is now forever in your medical record, and that may not be true five years from now. It might be a different insurance company, or the same insurance company might feel differently. Your care will not be jeopardized whatever decision you and your family make. Possible use of your tissue sample after testing-- they have to understand that: whether it's destroyed or kept for analysis, and whether it can be kept for DNA banking. All of these things have to be discussed with the patient. This is not a five-minute discussion. And remember, again, back to our corporate clinic, 512 Clinics-- our standard visit for a patient is on the order of 10 to 15 minutes. And I'm telling you that this consent process requires an hour to do adequately. Having said this, I want to tell you, we're actually performing routine comprehensive genetic testing on our entire population. Did you know that? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: You bet. So we're testing all kids in the United States for a genetic disease like phenylketonuria. We're now looking for all the common polymorphisms of CFTR, the gene responsible for cystic fibrosis. And look at what the Massachusetts State Laboratory is doing right now-- I said Worcester, but they're in Jamaica Plain; they're affiliated with the University of Massachusetts. They're actually looking at maybe 10 or 20 diseases that they're going to do risk factor screening for. Now, unlike our company, 512 Genomics, they really have to come to a couple of decisions about when to do this. How is it determined what is routinely screened? Three components-- public health assessment, evaluation of tests and interventions, and policy and screening implementation. The public health assessment is fairly straightforward. The disease or condition should be an important public health burden, not a rare bird. And what does that mean? That means, typically, they like to see it above one in 20,000. They won't admit to that, but it's around there.
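As a back-of-the-envelope illustration of why a prevalence floor like one in 20,000 matters to a state program, the sketch below computes cost per affected newborn detected. The per-test cost and the fraction of cases the panel catches are placeholders, not Massachusetts figures.

```python
# Cost per true case detected when you screen every newborn.
# cost_per_test and detected_fraction are illustrative placeholders.

def cost_per_case_detected(prevalence, cost_per_test, detected_fraction):
    """Total screening cost divided by expected affected newborns found."""
    return cost_per_test / (prevalence * detected_fraction)

cost_per_test = 2.0        # dollars per newborn screened (made-up figure)
detected_fraction = 0.99   # e.g., genotyping only the mutations covering 99% of cases

for label, prevalence in [("1 in 5,000", 1 / 5_000),
                          ("1 in 20,000", 1 / 20_000),
                          ("1 in 200,000", 1 / 200_000)]:
    cost = cost_per_case_detected(prevalence, cost_per_test, detected_fraction)
    print(f"{label}: ~${cost:,.0f} per case found")
# roughly $10,000, $40,000, and $400,000 per case with these made-up numbers
```

The same arithmetic is why a public health program can defensibly genotype only the common mutations, where the single-patient lab in the earlier discussion could not.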
Like thyroid disease, which was one of the first things to be screened, is about one in 5,000. Congenital adrenal hyperplasia, which they do screen for now-- for 21 hydroxylase-- is about one in 20,000. The prevalence of the genetic trait has to be known. The natural history of the condition, from susceptibility to latent disease to overt disease, should be adequately understood. And, basically, the safety and efficacy of the test-- blah, blah, blah-- has to be known. Now, policy issues are obviously important. But the main issue is this one. The cost of the screening should be established. In other words, the screening procedure, whatever it is, whether it's genotyping or tandem mass spec, has to actually be within the budget of the State of Massachusetts. And that's why, unlike 512 Genomics, they're absolutely allowed to make decisions like, we're only going to look for the genotypes that account for 99% of the disease burden-- because it's a public health intervention and not your doctor. And that's very, very different. AUDIENCE: [INAUDIBLE] tests biochemical or-- ISAAC SAMUEL KOHANE: Most of those tests were biochemical. They're testing a genetic disease. But what I'm telling you is this is actually old. The cystic fibrosis test now is a DNA test. And they now have that test going online-- this is an old slide. In the remaining 10 minutes, let's role play. Let me tell you about a disease. Congenital hyperinsulinemia is the most frequent cause of severe, persistent hypoglycemia in newborn babies and children. In most countries, it occurs in approximately one in 25,000 to one in 50,000 births. About 60% of babies with hyperinsulinemia develop hypoglycemia during the first month of life. Their blood sugar crashes. An additional 30% will be diagnosed later in the first year. And the remainder after that. So 90% of them are diagnosed in the first year, 60% in the first month. With early treatment and aggressive prevention of hypoglycemia, brain damage can be prevented. The brain damage is from having sustained low blood sugar, which is actually not harmful when you're an adult and your brain is stable. But when you have a developing brain, and it's not getting a lot of glucose and, therefore, metabolism for periods of time, especially repeatedly, you can really have brain damage. However, brain damage can occur in up to 50% of children with hyperinsulinism if their condition is not recognized or if treatment is ineffective in the prevention of hypoglycemia. So remember this-- one in 25,000 and one in 50,000. On the other hand, neonatal hypoglycemia-- hypoglycemia in these newborn kids in general-- has an incidence of two to five per 1,000. There's a zillion reasons why kids have hypoglycemia, from being slightly immature at birth, to the IV being switched off too fast so that all of a sudden the pancreas didn't have enough minutes to wake up and switch off the insulin, to their having an infection, and so on. So the state of our knowledge in 1992, when I was finishing my residency, was that there was this disease called nesidioblastosis, or hyperplasia of the pancreatic islet cells, the cells that produce insulin. And for reasons that we didn't know, either the entire pancreas or spots in the pancreas were hyper-producing insulin. And if we didn't treat it-- some percentage of these kids would go on to be medically manageable, but a whole bunch of them were not. And we'd have to take out their whole pancreas.
And sometimes we saw that we didn't need to take out the whole pancreas, that there were just focal pieces of pancreas that were involved. We could have left more of it. Because by taking the whole pancreas out, we made them, A, diabetic, and B, insufficient in those enzymes that used to digest their food. It was really not a pretty sight. Plus, we kept them in the hospital for weeks to diagnose them. That was the state of the art when I was a Fellow. Here's the state of our knowledge in 2004. There is something called a potassium channel, of which there are many, which controls insulin release in the islet cells. Basically, there are these two proteins-- SUR and Kir6.2-- which actually gate the flow of potassium. The channel determines the resting membrane potential, which is maintained at the necessary voltage to keep voltage-dependent calcium channels closed in a cell that does not secrete. When glucose comes into the cell and there's a resulting change in the ratio of ATP to ADP because of metabolism, the K channels close and the membrane depolarizes. Subsequently, voltage-gated calcium channels open, initiating the insulin secretory cascade. Therefore, the K channel-- the potassium channel-- functions as the link between the metabolic state of the cell and the electrical activity of the membrane, resulting in the stimulation or inhibition of insulin release. So we have a thermostat, a rheostat, which says, essentially, the more glucose I see, the more I'm going to release calcium and, therefore, insulin into the blood. So that's the basic rheostat. And we know that these two genes, which actually happen to be next to each other on chromosome 11-- if you have mutations in them, instead of coming nicely together in this octamer, they come together in these various dysfunctional or poorly functional heterodimers or homo-octamers. They just come together in the wrong way. And they don't work right. And it turns out, there are a lot of mutations in SUR, one of those genes. In Kir6.2, there are only three mutations. But it looks like maybe 30 or 40 mutations in SUR. Some of them are common. They're hotspots. A lot of different families have them. Some of them are one of a kind. Some of them are in the coding regions. These are the 39 exons of SUR. And, by the way, that's really going to cost us, in our 512 Genomics company, to sequence all of those. And some of them are in introns. They're at the splice junction. They're just right across the intron-exon boundary, into the part of the intron which is the donor or splice acceptor site, which determines whether you're going to splice or not. And these are all known, highly-penetrant causes of disease. They're autosomal recessive. They are autosomal recessive, but if you look at these two forms of hyperinsulinemia-- so if you look at the slide, there's two patients that have the two different presentations I told you about. One has this focal hyperplasia, which pushes aside the normal tissue, and the other is where it diffuses throughout the tissue. We now know why that is. It turns out that, in the diffuse form, it's germline transmission of the homozygous state. And in the focal state, what's happening is there is loss of heterozygosity, so that you have a parental gene-- actually, I think, a paternal gene-- and a loss of the maternal allele. And so those cells with a loss of the maternal allele get focal hyperplasia due to the poor functioning of that channel. So I told you about this disease.
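As a cartoon of that rheostat-- and this is a qualitative sketch only, not a quantitative model-- the logic can be written out explicitly: glucose metabolism raises the ATP/ADP ratio, a working K-ATP channel then closes, the membrane depolarizes, calcium enters, and insulin is released. A loss-of-function SUR/Kir6.2 channel behaves as if it were always closed, so the cell secretes insulin regardless of glucose, which is the hyperinsulinism phenotype just described.

```python
# Qualitative model of the beta-cell K-ATP "rheostat" described above.
# A functional channel couples insulin release to glucose; a loss-of-function
# channel is effectively always closed, so secretion becomes glucose-independent.

def insulin_released(glucose_high: bool, katp_functional: bool) -> bool:
    atp_adp_high = glucose_high                       # metabolism raises ATP/ADP
    channel_closed = (not katp_functional) or atp_adp_high
    membrane_depolarized = channel_closed             # K+ efflux stops, cell depolarizes
    calcium_enters = membrane_depolarized             # voltage-gated Ca channels open
    return calcium_enters                             # Ca influx triggers secretion

for glucose_high in (False, True):
    for katp_functional in (True, False):
        print(f"glucose high={glucose_high!s:5} functional channel={katp_functional!s:5} "
              f"-> insulin released={insulin_released(glucose_high, katp_functional)}")
# With a broken channel, insulin is released even when glucose is low:
# persistent hyperinsulinemic hypoglycemia.
```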
And in the last five remaining minutes, would you order a genetic test? How about for hypoglycemia in a newborn? Maybe. Jose says no. Dr. 512, are you going to order this test or not? At least explain it like a doctor, because you're actually making a decision. Because we have a patient. We need a decision. AUDIENCE: This is one in 1,000? ISAAC SAMUEL KOHANE: What? AUDIENCE: You said one in 1,000? ISAAC SAMUEL KOHANE: I think I said two to five per 1,000 for routine hypoglycemia. And I think I said one in 25,000 for persistent hyperinsulinemia. Yes? No? What are we going to do? Dr. Wolff? AUDIENCE: Yes. ISAAC SAMUEL KOHANE: Yes. So there are people like you who will do it. But, thankfully, we don't give them the opportunity. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Because 512 Genomics would be incredibly happy with you, because we'd be making a mint. But probably you want to look for more signs and symptoms to make you think that this is happening. So my take on it would be: if there's a persistent drop in glucose when the IV runs at-- I should say, at less than twice the basal amount. Let me give you the full scenario. You have a kid whose blood sugar you're maintaining by giving twice the normal amount of glucose. And every time you try to pull it down, the kid gets hypoglycemic. And it doesn't happen once. It happens many times over the first few days. And so, basically, you've convinced yourself that there's something wrong. So you give yourself two or three days to really convince yourself there's something wrong. And when you do that, well-- I'm just telling you from experience-- probably you want to do that within the first week. So what would you order? AUDIENCE: Sequences that-- ISAAC SAMUEL KOHANE: By the way, don't feel bad about it. If I asked any group of medical students or endocrine Fellows, they would not know the answer. AUDIENCE: Sequence the genes. ISAAC SAMUEL KOHANE: Which gene? AUDIENCE: Both. ISAAC SAMUEL KOHANE: Both, OK. Which part of the gene? Remember that picture of SUR. I thought we saw things everywhere, right? AUDIENCE: Yeah. [INAUDIBLE] ISAAC SAMUEL KOHANE: From what sample? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: You could. And eventually, three months out, if we have no better story. But the parents are going to come wailing at you, saying-- AUDIENCE: Presumably-- [INAUDIBLE] ISAAC SAMUEL KOHANE: No, but maybe it's not. I mean, I'm telling you that there's this-- these parents are wailing on you, and you're saying three months. I thought I came to Harvard. Don't you guys have genetic testing? So what are you going to do? AUDIENCE: Probably just stick a needle in your baby [INAUDIBLE] ISAAC SAMUEL KOHANE: You brute. So here's what I would do. If we ever come to pancreatic removal, which we will have to in some cases, we definitely won't want to have waited for that tissue to look for the mutation. But what I'll do is actually look at the parents and see if the father or mother also has the mutation. So if the father, for instance, has the mutation in one gene-- if he's heterozygous for the mutation-- I have a very high index of suspicion. If the baby is homozygous, obviously-- so I look at baby blood and I look at dad blood, is what I would do. I'm pointing out it's not obvious. And every story is slightly different. And that's why this whole area of genomic medicine is incredibly fraught with a knowledge management problem. AUDIENCE: This may be a really stupid and [INAUDIBLE] question. ISAAC SAMUEL KOHANE: No, no such thing.
AUDIENCE: Since, in this particular example, you [INAUDIBLE] possibly inherited gene, why wouldn't you try to build up-- get a sufficient cell sample, extract an array, [INAUDIBLE] and just basically test to see which one of those guys [INAUDIBLE] ISAAC SAMUEL KOHANE: Are we looking at expression or are we looking at DNA? AUDIENCE: Well, we've been working on expression. ISAAC SAMUEL KOHANE: So those were point mutations. Those were point mutations, and some of them were deletions. But most of them-- in fact, in every one I showed where there was a microdeletion, it was three or four bases gone. Every array that I showed you would actually still say the gene was present. AUDIENCE: But if it was in one of the [INAUDIBLE] ISAAC SAMUEL KOHANE: No, that just changes the splicing. Or it's causing early termination of the gene product. Let's say for the sake of argument-- let's go back. AUDIENCE: Yeah, maybe I just [INAUDIBLE] ISAAC SAMUEL KOHANE: Let's say that it's here. And let's say that you're missing this-- that you don't, in fact, have it spliced correctly. And that leads to continued translation of the intron as an exon. And, therefore, since it's mumbo jumbo, it causes premature termination. You still have 90% of the gene there that's being transcribed. And that RNA, therefore, is going to be registered by the expression array. Now you could say, maybe I want to have a resequencing array. We still have to resequence the gene-- no way around that. OK, where would you send it? We know the answer. We'd look up GeneClinics, GeneTests. What would you tell the parents when you're obtaining the consent? Well, I'm running out of time. But I think, in addition to all the things that we said before, we actually have to tell them that we might not find the cause of it. But if the father shows up with one thing, that might be one result, and therefore we may only have to hack out a bit of the pancreas. If the kid's homozygous for it, we may have to take out the whole pancreas, and so on. So that's it for today. That's really the state of the art of genomic medicine, 2004.
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_1_Genomic_Introduction.txt
ISAAC SAMUEL KOHANE: So the overview of today's class-- I'm going to try to convince you that the future is now, that all this talk about the genomic revolution is not pie-in-the-sky stuff for venture capitalists for the next 10 years, but stuff that is of immediate practicality today. I'll spend a little bit of time distinguishing between genomic and genetic, more for your taxonomical edification than for any substantive reason. We have to touch upon heredity and what that is all about. And then we have to talk about the fundamental resequencing of the diagnostic process that's being put into effect by the availability today of all this genomic data. And then I'm going to finish up with how all this is resulting in accelerating consumer activation. And for those of you who are practitioners of medicine-- and there's a subset of you who are-- this should be alarming: not that it's bad that consumers get involved, but that they're ahead of us in their knowledge and in their application of this knowledge. So the future is now. So just because some of you, in fact, claim not to have taken any biology, I just want to review some basics. This will be old hat to some of you, but since you may not have that old hat, I'm going to review it. This is the old dogma, and this dogma is flawed. It's not quite true. Hey, welcome. Just introduce yourself. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: And you are a student at? AUDIENCE: Oh, I'm sorry. I'm a student at MIT. HST student. ISAAC SAMUEL KOHANE: HST student, chemistry background. All right. Todd, have you ever had a basic genetics course in your life? I ask the simple questions. Yes? AUDIENCE: Yeah. ISAAC SAMUEL KOHANE: All right. AUDIENCE: Undergrad. ISAAC SAMUEL KOHANE: Undergrad. All right. So I'm just reviewing the basic dogma, which is that DNA gives rise to transcribed RNA, which gives rise to translated protein. So let me ask a question of the class-- and I'll just say "you" because I can't remember your names yet. Is the DNA in each cell in our body the same? AUDIENCE: No. ISAAC SAMUEL KOHANE: Why is it different? AUDIENCE: Because as the cells differentiate, it changes each time. ISAAC SAMUEL KOHANE: Good, because you just stated the misunderstanding that most people have. In fact, with the exception of your sperm or eggs, the genome of the entire body is exactly the same. The full genome is available. In fact, the reason you can, in theory, clone an animal from any somatic cell in the body is because, with some caveats which I don't want to get into right now, it's, in fact, the same darn genome. The exceptions are gametes, which have half of the genome, and red blood cells, which have no nucleus. Now why is that important? And there are some other things, like methylation and the tips of chromosomes, which I don't want to give you details of. But why is that important? Because, in fact, what makes a liver cell different from a white blood cell is not the DNA but what RNA is being transcribed and also what protein is being translated from the RNA. And the reason I bring this up is because, as we measure DNA using sequencing or genotyping, or measure RNA using transcriptional profiling, or measure translation using proteomics, we have to start thinking now, even those of us who are not MDs, as clinicians: where can we actually obtain this biological material? And if you're doing a DNA study, anything is good. You can get white blood cells. You can get a blood sample.
But if you wanted to study, for instance, brains and understand what makes someone susceptible to brain tumors, you could not get an RNA sample without actually getting a hunk of someone's brain. And so that immediately tells you some of the limitations of these technologies. So if you want to do RNA or protein profiling to look at someone's likelihood of getting a disease, for instance, you have to get the right tissue, and that tells you the kind of limitations we face when we're doing clinical studies. I mean, certainly, we can hack brains out of mice or Drosophila, but we can't readily do that with humans. And yet that's frustrating, because on this end, we're closer to function than we are on this end. This is the master code, but this is what leads to the actual functioning, the interacting of the proteins. And so people are very excited about proteomics, for instance, because it's very close to function. But when you look at what we're actually going to be measuring the proteins on, we're fairly limited to things like serum and urine. We can't take pieces of tissue out of human beings. And we can talk on another day about how to get around these obstacles. So what's the magnitude of the task? From one perspective, it seems relatively piddling. I mean, there are, after all, 46 chromosomes, and these chromosomes are these little microscopic things, bushy things of hyper-coiled DNA. And it seems fairly discrete. And I know that you know it's 3 billion bases, but I want to give you an appreciation for how much 3 billion bases is. So let's say we have a necklace of beads. Each of those beads represents one base pair. Can I take it for granted that y'all know what the base pairs are? ACTG, right. Good. So if you took a bunch of these necklaces and gave a necklace to everyone in Shea Stadium, it would still take 1,000 stadia to get enough necklaces to get all the bases that we have in the human genome. So that's a lot of beads, and it gives you some sense of the coding challenge and the deciphering challenge. But you'll have technology nerds say the following. Some say that Madonna's music is actually more information than Madonna the person-- and I actually heard someone at Microsoft say this in a big [INAUDIBLE] talk-- because they said, look. A DVD with her music video is 4.7 gigabytes, whereas her genome-- 3 billion base pairs at two bits per base, since there are four bases, and 8 bits per byte-- is only 3/4 of a gigabyte. So what are we making the big deal about? And in fact, they went on to say, correctly, that every human is on average less than 0.1% different from any other human. So we could actually store each of you as just a difference file against everybody else. And it turns out, if you did that, you could compress you down to a size that would fit nicely on an obsolete diskette. But that's the question that I want to raise: is Madonna her DNA sequence? And the answer, I hope you realize, is no, because what makes Madonna particularly Madonna is not only the way she developed in the womb, even without environmental influences, as programmed, but all the developmental influences she had through her life-- the good knocks, the bad knocks, all those experiences and environmental exposures that bring her state, her organismal state, into the current megastar state it is. And there are many other states that could have been, depending on what those environmental influences were.
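For the quantitatively inclined, here is a minimal sketch of the back-of-the-envelope arithmetic just quoted. The 3 billion bases, 2 bits per base, 0.1% difference, and 4.7 GB DVD figures are the ones from the lecture; treating the difference file as simply 0.1% of the raw bytes (ignoring the cost of recording positions) is my simplification.

```python
# Back-of-the-envelope arithmetic for the genome-vs-DVD comparison above.
GENOME_BASES = 3_000_000_000   # ~3 billion base pairs
BITS_PER_BASE = 2              # four possible bases -> 2 bits each
GB = 1e9                       # decimal gigabytes, to match the "3/4 of a gigabyte" figure

genome_bytes = GENOME_BASES * BITS_PER_BASE / 8
print(f"raw genome:       {genome_bytes / GB:.2f} GB")      # 0.75 GB

# One person stored only as their ~0.1% of differences from a reference
diff_bytes = genome_bytes * 0.001
print(f"difference file:  {diff_bytes / 1e6:.2f} MB")       # ~0.75 MB, diskette-sized

# The 4.7 GB music-video DVD, for comparison
print(f"DVD / raw genome: {4.7 * GB / genome_bytes:.1f}x")  # ~6.3x
```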
And if you really want to capture a state, it's not only-- it's therefore the evolution of her physical state, not just her genome, but all the alternative splicing. As you'll hear about next lecture, genes can be spliced in multiple ways to create RNA-- on average, about three splice forms per gene. The proteins, which are derived from RNA, can have a multitude of different modifications-- adding of sugars, glycosylation; adding of phosphate, phosphorylation. And in fact, the figure I'm giving you is an underestimate. That's on the order of 100 to 1,000 modifications. So that's greatly increasing the complexity. And then, in a multicellular organism, each of those gene products can be in any particular compartment in the cell and can then float around to any cell in the body, of which there are on the order of trillions. So Madonna now is several orders of magnitude larger than her music, as are we all. And this is just a caveat to note that, although it's really impressive that we can sequence an entire genome, all we're showing is the master code that can result in one of many potential fates, and to avoid the reductionism that often happens, you have to remind yourself what the environmental influences were. And we'll come back to that again. Now, getting back to a more optimistic technological note, it is nonetheless true that we can now measure today, with commodity technology, large parts of your genome for commodity prices. And what do I mean by that? Let's look at RNA microarrays-- I know some of you know about these and some of you don't, but I just want to remind you of them. If you take a tissue-- let's say, liver. Or tissue-- by the way, let me just interrupt myself to say that all these PowerPoint presentations-- thank you-- all these PowerPoint presentations and the video are going to be available through the MyCourses website. If you don't have an account yet through MyCourses, you can definitely get one. And so all the PowerPoint and all the video will be available. So don't feel compelled to take everything down; just make sure, rather, that you're asking questions in the flesh. So take liver-- or, let's say, liver under the influence of, perhaps, insulin. You take it. You grind it up. You label the RNA that was extracted from that ground-up liver with a fluorescent compound. That means you attach a fluorescent compound to that extracted RNA. And then you purchase-- and I used to say purchase or make, but that's becoming less and less true. It truly has become a commodity, and companies such as Affymetrix or Agilent will sell you these chips, and they can do it, unfortunately, probably with higher quality than we can in the homebrew fashion, even though, if you go to the Stanford microarray website, they'll tell you how to build your own microarray manufacturing facility out of RadioShack parts for less than $10,000. Nonetheless, it's easier and more standardized at this time to, unfortunately, buy it from this Microsoft-like monopoly called Affymetrix, for instance. In any case, you buy this chip, and pre-positioned on each spot on this chip-- and I will have a whole lecture devoted to microarrays, so don't worry about the details too much-- pre-positioned on each spot on this microarray is a sequence of DNA corresponding to the gene that you're looking for. So this might be the sequence of DNA corresponding to growth hormone, a sequence of DNA corresponding to insulin, a sequence of DNA corresponding to the insulin receptor.
And what's going to happen is, through the hybridization reaction that you all learned about in basic genetics, the RNA that you extract from the tissue will hybridize with the matching sequence of DNA that is present on this chip. And the chips are no bigger than this, if not smaller. And then you scan it using a simple HP scanner, of no higher quality than the one you used to scan in your family photos. You scan the image of the chip, and it fluoresces in proportion to the amount of fluorescent RNA at any given spot. And so what you have is a readout, now, at every spot, of how much of that RNA was present in that sample. And the current density of Affymetrix microarrays is such that on a single chip they have spotted-- and this is using, actually, a photolithographic process-- every single gene in the human genome. In fact, so many that they're actually putting on extra ones, such as different alternative splice forms. So, more than 30,000 genes on one chip, and you can buy it through Harvard pricing for about $250. So you can measure the expression of 30,000 genes for $250. Now, that sounds academically interesting, but how does that translate into genomic medicine? So let me give you my poster child-- the study which has launched 1,000 ships-- chips, that is. It's launched several million chips. This was a study that was done out of Stanford. And they did the following thing. They had patients with large B-cell lymphoma. Large B-cell lymphoma is a type of cancer that is a sort of medium-bad disease-- not as bad as lung cancer, not as benign as thyroid cancer. And this is a disease that was previously monolithic. In other words, the patient came to you, and you looked in the microscope and said, ah, you have large B-cell lymphoma, and you had a rough estimate of how long they were going to live, and you'd give them all the same treatment, because you could not distinguish between subclasses of these patients. So what they did, through techniques that you will learn about in this class, is they performed clustering analysis on these patients, using RNA extracted out of the lymphomas of several dozen patients. And when they did the clustering experiment, they were able to see that there were two groups of patients based solely on their gene expression profiles: one group of patients that had one gene expression profile and another group of patients that had a different gene expression profile. Then, and only then, they asked themselves, after the fact, what was different about these patients. And very rewardingly, they found that corresponding to one gene expression profile-- in other words, one set of genes being switched on or off-- was this group of patients in red, where here on the x-axis is years of survival and on the y-axis is probability of survival. And so what you see here is one group of patients who die very fast, most of them by two years, and one group of patients who have better than 50% survival at the 12-year outcome. And that was remarkable, because it told us that where we previously had not had a sub-diagnosis, we now have a new diagnosis-- high-risk and low-risk large B-cell lymphoma. Two, we have a new prognosis. We can tell patients with, unfortunately, this profile that they're at much higher risk than we would have told them previously.
Three, you have a new clinical opportunity, a new therapeutic opportunity, because without discovering any new drugs, you're just going to hit these ones much harder with the chemotherapeutic agents that you already have, because you know they essentially have a much more dangerous kind of cancer. And then fourth, you have a new research opportunity. Why are these patients dying off? Do they have a transporter that's pumping chemotherapeutic agents out of the cell? It's unclear, but we already know that they're using this as research-- for instance, it turns out this is a B-cell marker; this group is enriched for B-cell markers. B-cells are a type of immune cell, and they're actually using a therapy directed against that B-cell marker to treat these patients. So that's fairly remarkable. From a trivial clinical research experiment using this commodity chip, we have a new diagnosis, a new prognosis, a new therapeutic opportunity, and a new research opportunity. And this has been reproduced multiple times-- for lung cancer, for breast cancer. You can now actually purchase a test in the Netherlands, based on a needle biopsy of the breast, that will actually stratify the woman on her prognosis based on that needle biopsy. And it's not only for cancers. Now it's been done for a variety of inflammatory conditions-- inflammatory bowel disease, rheumatoid arthritis, a whole bunch of conditions-- where you're actually getting a much broader insight into the physiology by looking at 30,000 genes than we were getting by just asking a patient a few questions and measuring one or two variables in their blood. So this brings us to this cartoon that says, you're not ill yet, Mr. Blondel, but you've got potential. And of course, this is the motivation behind a lot of genomic medicine: that by having these highly forecasting markers, these predictive markers, we'll be able to move away from acute, expensive, and not very effective interventions to much more cost-effective, cheaper, more effective prevention techniques. And it's not theoretical. Right now, one of the largest groups-- by the way, how does chemotherapy actually get devised? How do these [INAUDIBLE] get devised? There are these large cooperative oncology groups-- literally dozens of hospitals and medical centers-- that cooperate on devising how much cisplatin, how much Adriamycin, how much azathioprine to give to various patients in the various chemotherapy protocols. And they treat these thousands of patients, see how they do, and then issue a new protocol based on outcome. Very highly funded, on the order of hundreds of millions of dollars a year. And today-- and I'm now part of the Cancer and Leukemia Group B, one of these cooperative groups-- this entire group is now coming up with new protocols that are all based on a genome-wide understanding of expression. In other words, they're going to put you into a different treatment category based on your gene expression profile. And this should worry you, because, as you will learn in this class, there's a lot to this analysis, this cluster analysis. I gave you the shining great story about it, but even that initial analysis I showed you of large B-cell lymphoma is flawed. And yet people are moving ahead very fast to turn these into clinical-use diagnostics.
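For concreteness, here is a minimal sketch of the generic step being described-- unsupervised hierarchical clustering of an expression matrix into two groups-- on synthetic data. It illustrates the technique only; the toy matrix, the correlation distance, and the two-cluster cut are assumptions for illustration, not the Stanford lymphoma study's actual data or method.

```python
# Sketch: split samples into two groups from expression data alone (synthetic data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Toy expression matrix: 40 patients x 200 genes, with two hidden subtypes
n_patients, n_genes = 40, 200
subtype = np.repeat([0, 1], n_patients // 2)
expression = rng.normal(size=(n_patients, n_genes))
expression[subtype == 1, :20] += 2.0   # subtype 1 over-expresses a block of 20 genes

# Hierarchical clustering of patients on correlation distance
Z = linkage(expression, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])

# Only after clustering would you compare the two groups on an outcome
# such as survival, which is what made the lymphoma result interesting.
```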
And because they don't fundamentally understand the limitations of their clustering techniques, for instance, and the limitations of the measurement platforms, like the microarray analysis, they're actually going to do patients a disservice by taking patients who don't need the therapy and putting them into a high-risk class, and vice versa. So I think it's hugely important. One of the things that I want you to learn in this class is how to read the literature skeptically, because, in fact, it will help you become better leaders in this area of high-dimensional medicine. What do I mean by high-dimensional? Well, you're looking at thousands and thousands of variables. Here, we're just looking at expression. But as we'll get to later in the course, it's, of course, also looking at all the different SNPs in your genome-- the different variations, I should say-- and your protein state. So this is the current state of the art, for those of you who are not doctors: the stage of the cancer-- where is it in your body-- the grade-- how ugly do the cells look under the microscope-- and what kind of cell type they are is typically what drives diagnosis today. But even today, it's now starting to be driven, in real protocols being deployed for real patients, by expression profiling. And that begs the question that I think is now upon us, which is: are we now on the threshold of a new taxonomy of human diseases? Human disease has been classified according to manifestation, and it's actually not so long ago that we used to think of fever as a disease rather than a symptom. And if you look very closely at most diseases-- like the inflammatory diseases, like arthritis-- they're still, for the most part, based on phenomenology and not on some deep mechanistic etiology or cause. And as Thomas Lewis pointed out in 1944, the diagnosis of most human disease provides only insecure and temporary conceptions, because, in fact, what we're describing is always the tip of the iceberg, the things that the patients complain about. And there are only so many things you can complain about. This hurts, this hurts, this hurts, and I can or cannot do this. There's a limited set of all the ways it can manifest. But it's actually reflecting a much larger multitude of possible things that can go wrong in this very complex mechanism that is the human being. And in fact, of the main common diseases, only infectious diseases have a truly mechanism-based nomenclature. You see how people name infectious diseases-- it goes right down to the organism causing it. The rest of it is pretty loosey-goosey as it relates to the mechanism. And here we have part of the challenge of genomic medicine: if we're measuring comprehensively the entirety of the physiology of the patient state, we can actually come up with a much more objective and universal nomenclature, which is a nontrivial thing for medical students-- for those of you-- are there any medical students in here? Yes-- who are being assaulted by the nomenclature. I was told once in medical school that I had to learn on the order of 20,000 new terms, and you can see they're already outdated. There's also a reason why, today, we should be thinking about changing the use of everyday medications. Shown here is a gene that codes for a potassium channel-- a protein that allows potassium in and out of cells in a selective fashion.
And it turns out that if you have a misspelling, or a polymorphism, of one of your nucleotides here, then it will result in a new amino acid in one spot, and that will change the electrical properties of this potassium channel such that individuals are much more prone to cardiac arrhythmias-- bad rhythms of their heart-- if they are given a sulfa drug. Now, sulfa drugs, for those of you who don't know, are a broad range of antibiotics, and in pediatrics, for instance, we give almost all kids sulfa drugs. Bactrim has as part of it a sulfa drug. And so what we're saying here is that there is a subset of children who have this mutation, and if you give them sulfa drugs today, you're going to put them at risk of death, which seems like a bad decision for just the treatment of an earache. Now, if you're still awake, you should be asking yourself the following. Well, Zack, you're telling me about this, but why should I really be worried about this? It's obviously a rare thing-- it's probably less than 1 in 10,000. And why should I do this expensive re-engineering of the medical process in order to be able to avoid harming these children? So I'm going to ask you a question. How much do you think it costs, in a high-throughput laboratory like the Channing Laboratory across the street, to see if your genome has this polymorphism and, therefore, whether we should be careful about treating you with sulfa drugs? How much do you think it would cost us to process your blood and obtain which spelling you had for that one polymorphism? And I've asked this of the entire class, and I've asked this endless times of very senior people and very junior people. And unless you really know the answer, don't answer. But let's go for it. "Price is Right" rules apply. AUDIENCE: Well, you assume that you have a blood sample already there? Just the cost of running the test? ISAAC SAMUEL KOHANE: Yes, I'm not asking you to pay for the blood draw. Right. How much? AUDIENCE: 30 bucks [INAUDIBLE]. ISAAC SAMUEL KOHANE: Less than a buck. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: 130 bucks. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: 5 bucks? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: How much? AUDIENCE: $1 ISAAC SAMUEL KOHANE: $1. AUDIENCE: 50? ISAAC SAMUEL KOHANE: What? 50? AUDIENCE: I think I [INAUDIBLE]. ISAAC SAMUEL KOHANE: 100? AUDIENCE: No, less than $1. ISAAC SAMUEL KOHANE: Less than $1. So let me give you reality. Today, you'd be charged-- charged, which is not the question I asked you-- approximately $500 to $1,000. The actual cost, if you're running the test-- the time of your technician and the materials, so the full cost-- is, if you're extremely inefficient, $0.50. If you're very efficient, $0.10. So that's important to realize, because today, in an emergency room, we are routinely sending off the SMA-20, which is 20 different things-- sodium, potassium, chloride, carbon dioxide, BUN, creatinine, and a few other tests. And you're charged-- or your insurance, hopefully, is charged-- on the order of 100 to 200 bucks to do that. The marginal cost of this is no higher than that of doing those electrolytes. So what this tells us is that today, it's possible to run, for uncommon diseases, 1,000 different SNP tests in a cost-effective way. So even though your polymorphism is rare, if it has clinical significance, we can find 1,000 different rare polymorphisms that, in aggregate, may end up being a lot of people.
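A minimal sketch of the "rare individually, a lot of people in aggregate" arithmetic, together with the marginal panel cost at the per-SNP figures just quoted. The 1-in-10,000 carrier frequency (borrowed from the sulfa example) and the assumption that the variants occur independently are illustrative assumptions, not numbers from the lecture slide.

```python
# Aggregate impact and marginal cost of a panel of individually rare variants.
panel_size = 1_000        # number of rare, clinically significant variants screened
carrier_freq = 1e-4       # assumed carrier frequency per variant (~1 in 10,000)

# Assuming roughly independent variants:
p_at_least_one = 1 - (1 - carrier_freq) ** panel_size
print(f"fraction carrying at least one variant: {p_at_least_one:.1%}")   # ~9.5%

# Marginal reagent-and-labor cost at the quoted per-SNP cost range
for cost_per_snp in (0.10, 0.50):
    print(f"panel cost at ${cost_per_snp:.2f}/SNP: ${panel_size * cost_per_snp:,.0f}")
```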
And I am not the only one to have recognized this, and there are now a bunch of companies who understand this. So the companies who previously were charging $1,000 per test are under some cost pressure. And to give you a sort of reality-based feeling for this, probably what you'll see over the next 5 to 10 years-- understand, this technology is actually at a commodity level-- is that the price is going to be crushed down from the $1,000 level to the $10 level. Of course, they're going to resist every step of the way and try to charge every last dollar along the way, but it's sort of an internet kind of effect, because the true costs are trivial, and we're charging the government these bulk-rate prices all the time for our human association studies. So this is a hugely important understanding to have. And why is this so important to understand? Because if you look at the number of publications on different polymorphisms over time-- this is 1980, this is 2005-- what you see here is that it's been growing steadily, and not so slowly, every year, so that we're now close to 8,000 publications per year reporting on different polymorphisms. Now, as you'll hear two or three lectures from now, well over 50% of these are wrong reports, and one of the things you have to learn in this class is what constitutes a good versus a bad report of a risk or protective factor based on a polymorphism. Nonetheless, it's telling you that there are thousands of polymorphisms out there. Even if 50% of the reports are wrong, thousands of polymorphisms associated with clinical disease are being reported in the literature per year. So that's a huge number. And as we'll get to shortly, genomics is not about single-gene diseases. But nonetheless, the number of diseases that are being found to have a monogenic basis is growing exponentially even faster, from 1981 till the millennium. And so what I told you until now-- the 50-cent figure-- I wanted to make sure you understood that that is true today. The following is not in the so-distant future. It's actually doable now, but on a research basis only. This is a big chip. It's a chip made out of many Affymetrix chips, sold by a company called [INAUDIBLE], which is a wholly owned subsidiary of Affymetrix. And I'll tell you more about this chip two or three lectures from now. But suffice it to say that instead of interrogating for a different gene at every spot, it's interrogating every single possible misspelling of the entire human genome-- and I think this is-- yeah, the entire human genome on this collection of chips-- so that in one fell swoop-- this is actually just for chromosome 21-- you can actually fully genotype the entire chromosome of an individual within two hours. And that's available today. It's commodity technology. It's just too expensive for me to be able to give you the same sense of reality as I did for single-SNP genotyping. And this just shows that they're able to comprehensively show the frequency distributions of different SNPs across these individuals in one fell swoop. And that reminds me: within the next day, I'm going to start uploading to the MyCourses website-- I'll get to you in a second-- PDFs of all the class reading. So today, I'm going to backfill it with things that are relevant to today's lecture. Nonetheless, for future lectures, you will have the PDFs for the lecture of that day prior to that day. So-- you're going to ask a question. AUDIENCE: Yeah, [INAUDIBLE] do it [INAUDIBLE].
So are these just identifying a sequence of genes, or is this coming out of the genome project, and it's based upon [INAUDIBLE]? ISAAC SAMUEL KOHANE: So what it's based on is a confluence of several industrial-- well, what is it? Each little sequence on this microarray corresponds to-- let me put it on the blackboard. Each sequence corresponds to a stretch of DNA, an oligonucleotide of about 20 bases, and in the middle of that stretch is the polymorphism. And so let's say we want to capture all four possible polymorphisms. For that stretch of DNA, you'll have four versions, corresponding to the A or C or T or G that's different at that central base. So on here are all the different polymorphisms that are known for chromosome 21. So how do they get that? It's a combination of the verified SNPs from the Human Genome Project, the verified SNPs from Celera-- which they had to pay cold cash for, but they're a company, they have no problem with that-- and they also licensed SNPs from ABI, from Applied Biosystems. And finally, they did their own resequencing so they would know what all the SNPs, the common SNPs, are for this region. So you're able to, in one fell swoop, directly assess through hybridization-- of DNA now, not RNA-- the genotypes at the positions that are known. Now, if a SNP was not known, it wouldn't show up here. AUDIENCE: So just as a follow-up question, seeing as how these are SNPs that are known and taken from those sources, and we know that those sources are basically a few sample subjects-- ISAAC SAMUEL KOHANE: Yes. AUDIENCE: --are we going to talk at all in the course [INAUDIBLE]-- ISAAC SAMUEL KOHANE: Class. AUDIENCE: --the class about the implications of a very small pool of individuals and comparing with other people's [INAUDIBLE]? ISAAC SAMUEL KOHANE: We are definitely-- so what you're referring to is that, for instance, the Human Genome Project and the Celera project were based on well under 20 individuals each. And so I have two answers. One-- an answer which I'm not too comfortable with but which is nonetheless true-- is that if it's a common SNP, it'll show up, typically, in one of those 20, more or less. The danger is the opposite: there are a lot of private mutations-- in other words, you may share a SNP with your immediate family, wherever you came from, that's not present in anybody else. But if you're one of the 20, that SNP now has this major representation in the database, and it's probably that direction that we have to worry about. Nonetheless, the answer to it is very simply this: massive resequencing of a lot more individuals. And as this technology becomes commoditized, that problem is going to go away. So the HapMap project, for instance, which you'll hear about during this class, is a project to actually define the structure of SNPs across larger populations and across ethnically diverse populations, because, as I hinted with that term private mutations, you can imagine-- or you will be taught-- that there are different subpopulations that have SNPs only present in that subpopulation, either a subpopulation called your family or a subpopulation called the people that came from the continent that you originally came from. So the short answer is, it's a problem, and it's a problem more on one side than the other. In other words, since-- let me rephrase that. There's a certain frequency of common SNPs, about one per kilobase. But if you look at rare SNPs, collectively there are many, many more of them.
In other words, I have a bunch of SNPs that are just unique to me and to a few thousand people that have a shared heritage with me. And those, by the way, probably don't have, for the most part, any clinical significance. But if I were one of the 20, it'd be over-represented in that sample, even though it's not a common SNP. Ah. So this makes the following simple point-- it's answering the following question: why is computation such a central part of genomic medicine? And simply this: these two 1999-style stock market curves correspond to the following two processes. This shallower curve is the number of publications in Medline, the online repository of articles, and it goes all the way back a century, to Index Medicus. And it's a [INAUDIBLE], say, exponential curve. But because of the industrialization of gene sequencing, the number of sequences has risen along a far steeper exponential curve. And you can think of this as a proxy for the knowledge gap. If you take this to be genomic data-- the genome sequences being genomic data-- and this as a proxy for our knowledge, this gap is growing, and it's growing even faster than it's portrayed here, because it's not just DNA sequence. It's genetic maps, physical maps, polymorphisms, structure information, gene transcription patterns, protein translation activity. And all of these end up in databases. And if you are, like me, in the middle of a medical center, you'll find that there are very, very few people who are trained as clinicians and who have the requisite computational skills. So I have a PhD in computer science, but that's not the typical medical path. A lot of you in this room actually have the nice joint skill set, but you're in the minority. And consequently, because all these data types end up in a database, trying to translate this into medicine fundamentally devolves to the responsibility of individuals who can both understand the biological problems and translate, through a variety of computational techniques, all this data into more knowledge. So the hubris, the conceit, of bioinformaticians is that somehow we can use computational techniques to raise this to that level. We did the [INAUDIBLE] already. So now let's talk about what is genomic versus genetic, because you hear these terms a lot as if they're interchangeable. And they kind of will be, but they're not quite yet. And it depends on your perspective. So let me just tell this joke. I hope you think it's a joke. An engineer, a physicist, a mathematician, a computer scientist, and a statistician are on a train heading north and have just crossed the border into Scotland. They look out the window and they see a black sheep for the first time. The engineer exclaims, look, Scottish sheep are black. The physicist yells, no, no, some Scottish sheep are black. The mathematician looks irritated and says, there is at least one field containing at least one sheep of which at least one side is black. The computer scientist says, oh no, a special case. And in fact, how you view genomics and genetics actually corresponds to where you find yourself on this spectrum. So that will probably tell you why there is some global confusion. And finally-- the statistician, I forgot-- the statistician says, it is not statistically significant. So here's one view of genomic versus genetic.
So genetic medicine, the kind of medicine that's been around for a long time, has to do with low frequencies of high-penetrance genes, by which I mean genes where, if you have a particular mutation in that gene, you have a high likelihood of having that disease. And there are thousands of these relatively uncommon diseases. The most common of these, hemochromatosis, which is iron deposition in the liver, has a frequency in the population of about 1 in 300. And these diseases had previously been mostly assessed indirectly, or with a focus on single genes-- by which I mean indirectly through linkage, for instance, linkage within a family. In contrast, genomic medicine is not about rare diseases but about common diseases: diabetes, cancer, heart disease, inflammatory diseases. And therefore, almost by definition, the genetic risk for common diseases will often be due to disease-producing alleles with relatively high frequencies-- that is, greater than 1%. And all these genes, in this perspective, may be disease-causing. And they're not causing uncommon diseases; they're causing common disorders, through the interactions of multiple genes and environmental factors. And because of the industrialization, in genomic medicine we have direct experimental access to the entire genome, in the fashion that I just illustrated. Someone else-- [INAUDIBLE] and [INAUDIBLE]-- has given a related, similar distinction between genetic and genomic. In genetics, you're looking at basically the structure of genes, the sequence. In genomics, you're looking at the function of these genes. In genetic disease, you're looking at the genes-- you're looking at DNA. In genomic medicine, we're looking at more than just the genes; we're looking at the RNA and the proteins. This is what I referred to already: map-based gene discovery-- in other words, probabilistic linkage between loci-- versus direct sequence-based gene discovery. I mean, now diseases are being discovered computationally. I don't know if I'll get around to it, but there's a great example now of how people are discovering cholesterol risk-factor genes purely based on the shared attributes of the genome between man and mouse. Without ever having poured a beaker, people are actually able to computationally identify new risk factors in that fashion. But perhaps I will get to that. It just occurred to me that I don't have a lecture anymore on comparative genomics, which is, how do we actually take advantage of the genomes of other species to inform us about human medicine? Maybe I should retroactively fit that in. And again, in genetics, it's monogenic disorders; in genomics, it's multifactorial disorders. And there's a specific DNA diagnosis in genetic disease-- you have this disease-- as opposed to genomics, where it's more, you're at risk because of the following factors, because it's not, in fact, a high-penetrance disease. And you're looking at gene action in genetic disease, and more at gene regulation in genomic medicine. And etiology: a specific mutation versus-- well, a mutation just tells you there's an error which is associated with a disease, whereas in genomic medicine, you're asking, how did this error come to give rise to this disease? And in genetics, it's one species; in genomics, it's several species. Now, I think you could in good faith argue that there's a lot of overlap between these two, but that's broadly how the world seems to be divided. And I believe that will be blurred soon.
But I think the important take-home message is that in genomic medicine, we really look comprehensively at all data types. And basically, you view the patient, if you want to look at them in a reductionist fashion, as the following kind of matrix. We're looking at environmental data, single nucleotide polymorphisms, proteomic measurements, gene expression measurements, and clinical measurements-- such as your history, your physical exam, laboratory studies, and imaging studies-- all across time. But although there is missing data and although these are incomplete, we can now view the patient as this big time series to which you can apply a lot of machine learning techniques-- clustering techniques to group patients together and create new diagnoses, and predictive modeling techniques to figure out where in this space of possible attributes the patient is likely to be in the future. And it's this comprehensiveness that I think characterizes genomic medicine. As we will get to later in the course, I will identify how, unfortunately, only part of this is becoming a commodity. Only the genomic measurements are cheap, as I just illustrated for you. But this part, because we have to deal with these dumb doctors, is very expensive, because it's labor-intensive. Trying to assess the phenotype of the patient is a labor-intensive process, just as medical care is, just as education is. And so why do we have to be comprehensive? Other than just liking to be comprehensive, what is our motivation? And the answer is very simple. It comes from an understanding that the phenotype is not just directly, probabilistically linked to some single gene. So in the old genetic medicine, you would be looking at some gene, some marker, some polymorphism on some gene, and say, ah, whenever this gene has a G in position 291, this person has a risk for heart disease. And what you'd really be saying is that when this gene has this spelling, it's associated probabilistically with some other gene that you don't know about that is directly causing a change in the phenotype. So in genomic medicine, we want to measure this gene but also every other gene that might be contributing to this phenotype, because, as we well know, even if you stuck in the same room three individuals with the same polymorphism of that gene, they would actually have different diseases. If you look at Huntington's disease-- Huntington's disease is this awful disease where you lose control of your limbs at age 40 and become demented because of an expanding number of CAG repeats in the Huntington's gene. The larger the number of repeats, the more acute the onset and the earlier the onset. But if you take a set of individuals with the same repeat length and study them, they have, for that given repeat length, a wide distribution in the onset of disease. And why is that? Well, it's because they have different genetic backgrounds, and these other genes-- genes involving the neurotransmitters, structural genes having to do with the structure of synapses, and so on-- are also influencing it. And therefore, this particular gene's action on the phenotype is conditional on all these other genes. And furthermore, let's not forget about the common environment and the individual environment and cultural factors, which can further influence this. And I'm going to shortly point out to you that this is all too real.
So genetic medicine, just to remind you, is basically the study of the probability of a particular phenotype happening given a particular genotype, whereas genomic medicine would try to assess all the genes directly, capture these other factors, and see how they interact. And this creates a fundamental problem, and the question you might ask yourself is, why, in genomic medicine, are biostatisticians not front and center as part of the process? After all, in the quantitative analysis of clinical outcomes, for instance, they have been quite omnipresent. And the answer is simply this. In a traditional clinical study like the Nurses' Health Study or the Framingham Heart Study, you would have on the order of tens of thousands of patients and at most hundreds of variables. And there are a lot of tried-and-true statistical models and tests-- t-tests, parametric and non-parametric tests-- which work extremely well for this kind of data set. Unfortunately, in genomics, we have quite the opposite. We have 10,000-- or, if you include proteomics, hundreds of thousands of-- variables, and we only have, on a good day, hundreds of patients, especially if you're looking at something in the brain. As I explained to you at the beginning, you're only going to get a few hundred brains, let's say, with brain cancer. And so let me put this in sort of grade-school terms. If you have one variable, how many equations do you need to solve for it? One. If you have two variables, how many equations do you need? Two. And if you have 10,000, how many do you need? A lot. If you have fewer equations than that, then the system is underdetermined. In other words, there are many possible solutions for the relationships among the variables. And this is so darned underdetermined that all these traditional statistical techniques really did not work well. And therefore, frankly, the statistical community until recently was just not interested in this area. Whereas the computer science community-- probably because they're just not really smart enough to understand how difficult a problem this was-- just started applying a bunch of techniques that they had applied to other high-dimensionality data sets, such as vision-- face recognition, for instance-- and speech recognition, which also have a high number of features. They applied these very same techniques to these data sets. And that just gives you a cultural reason why computer scientists-- the bioinformaticians-- rather than statisticians are right now at the center of genomic medicine, although the statisticians are now coming into play. So let me give you a feeling for the new pharmacology. This was a study that I did with Atul Butte, now four years ago. The National Cancer Institute has 60 cancer cell lines, called the NCI-60, on which they test a variety of different drugs for efficacy. And they have, in their banks, thousands of pharmaceuticals. And they have thousands of pieces of dirt and leaf that have been collected around the world-- literally some leaf in the rainforest, a piece of dust in China-- that they test to see how effective it is in inhibiting these 60 different tumors. And so what we did back then, in collaboration with one of my former interns, Todd Golub, is we extracted the RNA from these 60 cancer cell lines and measured how much of each RNA was being expressed in these 60 cancer cell lines.
And we obtained from the National Cancer Institute, for a subset of the 50,000 compounds they had tested-- let me see how I'm doing for time-- 5,000 drugs that had been tested against the 60 cancer cell lines, along with how much they had inhibited the growth of the cells. I'll get to the details of this experiment-- the cancer cell line details-- in another lecture. But suffice it to say, this is the overall picture. You have 60 cancer cell lines, 6,000 genes, 5,000 anticancer agents. So you have 11,000 variables with only 60 cell lines. That sounds like a miserable thing to do. And nonetheless, what we were able to do is the following. In this complicated-looking diagram, we computed all the possible correlations between all 11,000 variables, which gave us on the order of 68 million different correlations. And if you pick a correlation coefficient threshold of 0.8, we found 202 networks joining 834 variables-- in other words, only about 7% of the 11,000-- and only 1,200 links out of the 68 million. 1,200 correlation coefficients out of the 68 million were above the threshold. Only one link above the threshold was between a gene and an anticancer agent. And let me just ask you a quick question. Why is it that we found a lot of correlations between the effect of one drug on the cancer cell lines and the effect of another drug on the cancer cell lines? Why did we find a lot of high drug-drug correlations-- a lot of high correlations between the inhibitory effect of one drug and the inhibitory effect of another drug? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: What? AUDIENCE: Were they chemically similar? ISAAC SAMUEL KOHANE: They were chemically similar, because the way a lot of drugs are created is by copycat drug discovery-- doing small permutations of an existing drug. So of course, they have similar reactions. Why were there a lot of gene-gene correlations? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: What? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Not quite. It's a more fundamental insight than that. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Right, because basically, in order for the darn cell to live, genes have to have coordinated action. So not all genes, but the genes that make the ribosome, that make the transcription complex, that run oxidation, all have to be transcribed into RNA at approximately the same time. Otherwise, you're not going to have coordinated action. There are a lot of exceptions to that, but by and large, it's true. And in fact, the whole area of expression analysis clustering, as I'll get to in another lecture, would not work if that intuition were not true. But God, or evolution, did not evolve genes and chemotherapeutic compounds together. So there's no particular prior reason why the inhibitory pattern of a drug and a gene expression profile should be linked. And so we were darn lucky to find even one, but we did. And it turned out that the link was between this gene and this drug. Before I explain to you what they are, let me summarize what it means. It means that across all these 60 cancer cell lines-- and these were very different cancer cell lines: liver cancer, breast cancer, skin cancer, blood cancers, 60 different cancers-- the more this gene was expressed in the cancer, the more sensitive the cancer cell line was to this chemotherapeutic agent. And it turns out that this gene is L-plastin, also known as LCP1, or Lymphocyte Cytosolic Protein 1.
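Before we get to the drug on the other end of that one link, here is a minimal sketch of that kind of all-pairs correlation screen, on synthetic data. The shapes (60 cell lines, with genes and drugs as the variables) and the 0.8 cutoff follow the description above; the scaled-down variable counts and the single planted gene-drug relationship are toy assumptions, not a reproduction of the actual NCI-60 analysis.

```python
# Sketch: correlate every gene-expression profile with every drug-inhibition
# profile across 60 cell lines and keep links with |r| >= 0.8 (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_genes, n_drugs = 60, 500, 400   # scaled down from 6,000 genes / 5,000 drugs

expression = rng.normal(size=(n_genes, n_lines))
inhibition = rng.normal(size=(n_drugs, n_lines))

# Plant one real gene-drug relationship so the screen has something to find
inhibition[0] = 0.9 * expression[0] + 0.1 * rng.normal(size=n_lines)

# Pearson correlation of every gene against every drug (rows are variables)
r = np.corrcoef(expression, inhibition)[:n_genes, n_genes:]

genes_idx, drugs_idx = np.where(np.abs(r) >= 0.8)
print(f"gene-drug links above threshold: {len(genes_idx)}")
for g, d in zip(genes_idx, drugs_idx):
    print(f"  gene {g} <-> drug {d}, r = {r[g, d]:.2f}")
```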
The drug is this unpronounceable thing. I mean, a lot of these are just drugs that they obtained essentially wholesale from pharmaceutical companies. But it turns out it's a [INAUDIBLE] carboxylic acid derivative, and those of you who know anything about diabetes know that other [INAUDIBLE] carboxylic acid derivatives are known to inhibit tumor cell growth. And, by the way, these are the same class of drugs as the oral hypoglycemic agents, the pills that type 2 diabetics take. And a role for this gene in tumorigenicity, the generation of tumors, had previously been postulated. And in fact, subsequent to our publication of this paper in [INAUDIBLE], a group at the National Cancer Institute was able to show the dose response that we had, in fact, calculated. Yes? AUDIENCE: So if I'm understanding the description of the [INAUDIBLE] experiment and the results-- ISAAC SAMUEL KOHANE: Yes. AUDIENCE: --is it safe to say, as a generalization, that whenever you find a link between a particular [INAUDIBLE] and a gene that [INAUDIBLE] across the [INAUDIBLE] cell lines, these are genes that are typically going to have a high conservation rate across the [INAUDIBLE]? ISAAC SAMUEL KOHANE: No. AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: In fact, if it had been true that the gene had a high conservation rate across all 60 cancer cell lines-- AUDIENCE: [INAUDIBLE]. In other words, like, this gene data [INAUDIBLE]-- ISAAC SAMUEL KOHANE: Yes. AUDIENCE: --that this drug, acting on the different cell lines, is acting on this one particular gene. But there probably are differences in [INAUDIBLE]. ISAAC SAMUEL KOHANE: Let's just have our conversation [INAUDIBLE] for the rest of the class. We don't know how it's acting. All we're observing right now is a correlation. AUDIENCE: OK, right. ISAAC SAMUEL KOHANE: Right? But I still think you have a question in there which is still true regardless of what I just said. So the more of this gene is expressed, the more sensitive, provably, the cancer cell line is going to be to the chemotherapeutic agent. So you had a question still. AUDIENCE: Right. Well, it was that it didn't have to do with the concentration of the gene but rather just that the-- just for argument's sake, let's say it's actually acting on [INAUDIBLE]. ISAAC SAMUEL KOHANE: OK, fine. Yeah. Right. AUDIENCE: [INAUDIBLE]. So let's say it's acting on the gene product, and across the cell lines, then it would be a safe assumption that the gene product would be very similar. In other words, there would be this [INAUDIBLE]. Well, there's not-- whereas with other gene products, where there is no correlation with a therapeutic agent-- ISAAC SAMUEL KOHANE: Yeah. AUDIENCE: --you have a high variability of different types within those cells. ISAAC SAMUEL KOHANE: I may be misunderstanding your question, and I'll give you back exactly what I believe my misunderstanding is. And maybe it's yours, and we'll clarify it right now. For a given gene, what we're measuring is a gene product, which is the amount of RNA being expressed. And basically, what we know is that the gene product is the same for every cancer cell line. They don't have a new gene; they have the same complement of genes. But just from the point of view of this experiment-- a very reductionist point of view, just looking at gene expression-- the only thing that's different between the 60 cancer cell lines is how much of each of those genes they're making. AUDIENCE: So there aren't any [INAUDIBLE].
ISAAC SAMUEL KOHANE: No, you're addressing a whole other layer of complexity that may be very, very true. But we just didn't measure it. AUDIENCE: All right, because I thought that one of the take home messages was that for sort of fundamental gene products within a pathway. ISAAC SAMUEL KOHANE: Yeah. AUDIENCE: Those are what you're going to find links to therapeutic agents more than, say, something, you know-- ISAAC SAMUEL KOHANE: Well, we got out of this. So the answer is not in the way that you conceive it, because if you're doing-- and by the way, it's a good point because it reminds us again of the distinction I drew at the beginning, where that's a very simple diagram about DNA, RNA, and protein. There would be a study where you'd be looking at the genotype-- doing the very same study and looking at all the possible genotypes and saying, which genotypes are the most predictive of this kind of expression? But here, we're not worrying about individual differences in the genes; we're just looking at the amount of the gene being expressed. Nonetheless, if there was, for instance, a direct effect-- in other words, if the more this gene is expressed, the more sensitive the cancer is-- short of actually going and understanding the biology, which is the right answer, you still ask yourself a question. So if I make this gene more highly expressed, with a vector to make it more expressed, will this cell line become more resistant-- more sensitive-- to chemotherapy? That becomes a therapy in and of itself. But thanks for asking the question. So just now-- and this will give you an idea why-- in The Economist, just within the last two weeks, the following cartoon appeared as part of an article called "Malignant [? Mass," ?] where they're arguing that coming up with these comprehensive models-- of which this is a very simple example-- of the susceptibility of cancers to treatment and of the risk of cancers has actually become so mathematical that we can actually have a whole new class of researchers who are going to take advantage of these models to find new therapeutic agents. So let's go to-- we've talked about how much of these data measurement techniques are available today at a large scale. We've discussed [INAUDIBLE] genetics. Let's now talk about heredity-- heritability, rather. So the way a population geneticist thinks about heritability in the so-called broad sense, with a capital H, is the following. Broad-sense heritability includes all genetic influences on the phenotype, whether due to additive, dominant, or interactive effects. So when you see heritability in a technical paper, what they mean is the ratio of the variance due to the genotype over the total variance of the phenotype, where the variance due to the genotype is the additive variance, the dominant variance, and the interactive variance between the genes. That's the technical definition of heritability. So let's think about that in a concrete fashion. Don't some people just eat and not get fat? Isn't that true? Isn't it in their genes? In other words, these poor people who, no matter how little they eat, they're just going to balloon up? It's just tough. And it's really their problem that they have this genetic background that makes them at risk for this obesity. So we've got a real big problem. So the way to think about it is an experiment that I only dared to do four years ago at MIT. I asked all the Asian students in the classroom to raise their hand. They raised their hand and I told them to leave. They're upset with me, and they walk out of the room.
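Restating the broad-sense heritability definition above in standard symbols (a conventional textbook formulation rather than the lecturer's slide; gene-environment interaction terms are left out for simplicity):

```latex
% Broad-sense heritability: genetic variance over total phenotypic variance.
% V_A, V_D, V_I = additive, dominance, and gene-gene interaction variances;
% V_E = environmental variance.
\[
  H^{2} = \frac{V_G}{V_P},
  \qquad
  V_G = V_A + V_D + V_I,
  \qquad
  V_P = V_G + V_E .
\]
```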
Then, of the remaining students in the class-- which, since this was MIT, was about 30%-- I say, take your pulse. And we took a pulse, and I found a nerd in the room. And I said, fire up Excel, and everybody called out their pulses. And I had the nerd take the average and the standard deviation. Welcomed back the Asian students and went through the same process with them. Everybody takes their heart rate. We tally it, mean, standard deviation. And then we show them the board. And in fact, there's a more than two standard deviation separation between the Asians and the non-Asians. And I explain to them, as you well know, the basal heart rate of non-Asians is quite a bit lower than that of Asians. And they sort of look puzzled. Did I know that? Really? Really? Maybe. I don't know. And then I say, well, is that really right? And then suddenly it dawns on them what's happened here. We're not talking about genes. What are we talking about? What? AUDIENCE: Work. ISAAC SAMUEL KOHANE: Work-- we're talking about environment, because basically, I pissed them off by doing a potentially racist thing. So the catecholamines were pouring through their veins, and I made them get up. And so I applied two major environmental stimuli which changed their physiology. But if you didn't know that-- that hidden variable-- you could legitimately come to the conclusion that there's some racial stratification around this phenotype and therefore, by reduction, attribute it to genes. And you could ask the same thing about the overweight people. You can say, look, they're eating just as much as I am, and somehow, they're getting fat and I'm not. So obesity is just a massive epidemic, and that's the point. It's an epidemic. This is the prevalence of obesity-- note, just in the last 40 years. Exponential rise. Now I defy you to find any possible genetic evolutionary model that explains how our genome changed so fast in 30 years. And note, by the way, this was the same time as the amount of calories from fat has been steadily decreasing. In fact, some argue that's the cause of it, by the way, because we've substituted those fat calories with carbohydrate calories. AUDIENCE: But it's US only. ISAAC SAMUEL KOHANE: It's US only. By the way-- but there's a lag. Even third world countries are now beginning-- the moment they become calorically sufficient, they actually are showing this. And so obviously, it's not genetic, and it's actually conditioned on some significant change in our environment, most likely having to do with a combination of our exercise patterns and our dietary composition. And yet if you did not understand that, you'd just be hammering on the genome. And it's that kind of reductionism that we have to avoid. But in terms of genetic medicine-- not genomic medicine, where you really think comprehensively-- you could easily run into this kind of error. And so how do we define environment? Diet, daily habits, environmental insults, medical care. And so the genotype does not capture the individual patient states. It's what I told you before, but I think it's a graphical example of how that's true. So we need to capture and quantify the environmental influences. We need to capture the effect of the genotype and the environmental effects on the phenotype. And these two comprise-- history, physical, laboratory studies, imaging-- which, as those of you getting your medical training will recognize, is the basic medical history.
And so if we really want to do the comprehensive kind of studies that we need to be able to dissect environmental and genetic interactions, we have to have these data items. And that's why this is part of genomic medicine-- because those of you who are going to be doctors or clinical researchers or researchers studying clinical phenomena, somehow you're going to have to be able to get this accurately and at large scale. And that's kind of boring, frankly, but nontrivial. And so this is just repeating what I've said, but I just really want to emphasize it. There's more to the state description than the genome. Given the necessity to capture both the environmental and genetic state and their interaction, it's only then that we'll be able to elucidate the variation in the phenotype due to the environment, due to the genome, and due to the interaction between the environment and the genome. For example, you're only going to figure out the risk effects of smoking on lung cancer if you can quantify the environment well, which is both the smoking and the exposure to other harmful chemicals in the environment. And it's required for effective new therapies. It's required for deeper understanding of mechanism, and it requires capturing the aforementioned interaction, and the less we capture, the more underdetermined, in the sense that I gave to you, the system is. Well, we're running out of time. One thing I want to make true for this class is that I always end on time. So we'll just talk about the last bullet, and at another time, I'll talk to you about accelerating consumer activation. Here's a standard pediatric question. You have here the standard growth curves in height and in weight. And you have a patient who's falling off their height curve and sort of falling off the weight curve. And in order to diagnose them, you have heights and weights and family history. You can take an X-ray of their wrist. You can measure breast development or size of the testicles. And the disorders show characteristic patterns on a growth chart. But if you just have the misfortune of sending them to my clinic, we'll also do some other tests. We'll look at thyroid function. We'll look at a protein called IGF1 that's made in response to growth hormone. We look for inflammation, looking at your blood count, and so on. And if we still have evidence that you're really not growing well, then we bring you into the hospital, where we expose you to insulin and to glucagon. And that makes a kid hypoglycemic. So they pass out and feel crummy, and then they get nauseous from the glucagon. So they puke, and then they pay $2,000 for the pleasure of having been with us in the morning. And the interpretation remains controversial nonetheless, because there's a significant false positive rate, and there's a significant false negative rate. And why is that? Because we're really not capturing the underlying process. And for instance, obese patients sometimes don't quite secrete enough growth hormone. If you're before puberty, sometimes if you don't have enough sex hormones at all, you might not, in fact, secrete any growth hormone, even though you're ultimately going to secrete growth hormone. And we're totally going to miss the following thing. Here at the tip of the X and Y chromosome is a gene called SHOX, Short Stature Homeobox. And those of you who are doctors or medical students, you know what Turner syndrome is? Do you know what Turner syndrome is? AUDIENCE: [INAUDIBLE] ISAAC SAMUEL KOHANE: Close. You have the right category. X, no Y, and they look like girls.
And they're short. What [? neck? ?] Excellent. They're short. They only have an X and no Y. And after class, I'll give you the full Turner syndrome so I can end on time. And they're short. Why are they short? Because they only have one dose of SHOX-- because they don't have two X chromosomes. Now it turns out this is the most common chromosomal cause, of course, of short stature in males, and it happens that 2.5% of idiopathic-- that means without a known cause-- short children have SHOX mutations, causing them to be short. And these patients have perfectly normal growth hormone. But if you treat them with growth hormone, they'll grow to a normal size. And this is known in the literature and just not tested. I actually tested this because of this darn class. I made sure I was fully familiar. But like I said, in my own clinic, now probably less than 25% of the doctors order this. And yet this is a fully treatable cause of short stature. And it's not as expensive as the big insulin glucagon test I gave you. And it's a much more certain outcome. If you have this mutation, you're definitely going to be short, and you're definitely going to respond. So this is just a bit of a teaser to show you what is well within my reach today-- I can order the test today, and I do so. It's just not done. So this brings us to the other problem, which is education. If there can be thousands of mutations that are clinically significant, how in the heck are we going to teach them to our medical students, who are already overloaded? And the answer, I think-- unless you can come up with a better solution-- is one of two things. Either we create a whole bunch of experts whose only job is just to look at the patient as a doc and say, do they have X, Y, or Z, or we have to bring information technology to bear right in the process, so that as you enter characteristics of patients, all the genetic possibilities start getting updated. And that's a problem, because this is a decision tree for the treatment of prostate cancer based on an old proteomic test, namely prostate-specific antigen. And basically, what do you do to someone who has basically a big bump in their prostate? And the problem is, if you look at the literature-- I did this last year. That's 7,325 articles-- sorry about the color or lack of it. 7,000 articles are saying, what is the right ordering of decisions based on this one test, depending on different PSA levels? Now that's just one proteomic test for one gene. What in the heck are we going to do with these decision trees when we have to measure-- or we will measure, as we can today-- thousands of gene expression levels or protein levels? That makes for a much more complex tree, and we'll never have enough patients, by the way, to sample all the different possibilities. But that's a fundamental challenge of genomic medicine. So this is what I was alluding to, and just in the interest of time, I'm going to short circuit it, although this slide will be on the web shortly. If you only had 10 diseases-- 10 hypotheses-- and you had five binary tests-- so not like a prostate-specific antigen with different levels in a continuous fashion; each test just says normal or abnormal. So five binary tests-- the analysis requires knowing 63,000 probabilities to be able to calculate the probability of all the possible outcomes under all the different tests. And it's just very hard to imagine how we're going to do that comprehensively.
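The 63,000 figure is from the lecturer's slide, whose exact counting convention isn't spelled out here. As a rough illustration of why full joint probability tables blow up, here is the bookkeeping under one simple assumption (every disease and every test treated as a binary variable):

```python
# Rough bookkeeping for the size of a full joint probability table.
# Assumption: d binary disease variables and t binary tests, so the joint has
# 2**(d + t) outcomes and 2**(d + t) - 1 free parameters (they must sum to 1).
# The slide's 63,000 figure presumably uses a different (unstated) convention.
def joint_parameters(n_diseases: int, n_binary_tests: int) -> int:
    return 2 ** (n_diseases + n_binary_tests) - 1

print(joint_parameters(10, 5))   # 32,767 under this convention -- the same order of magnitude
```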
And in fact, the number of diseases is much larger than this. The number of tests that we can do is much larger than that. So that's a fundamental methodological challenge for new medicine. And in the same sense, I'm going to just-- now these will be my last slides. Evidence-based medicine is a big movement in medicine which says, let's look at the literature and try to find out what the best practice is. And they've done this for a number of different disease areas, and there's literally thousands of doctors and researchers who are filling out these databases of evidence-based medicine, the largest being the Cochrane collaborative. But if you look at the genetic section, it's just a few highly penetrant genetic diseases. There's very little to guide-- in fact, there's nothing to guide the practitioner for any of these genomic diseases, for these very common diseases, which is where I'll stop today's class, because we ran out of time. And for another time, I'll tell you about the problems with patients who are able to select tests and order tests over the web today. You can actually send your own samples to be sequenced, and a dubious-- or not so dubious-- interpretation is given to you. And then you show up in a doctor's office, and he or she doesn't know what to do with it. But that'll be for another time. All right, any administrative issues, feel free to--
ATUL BUTTE: So last week I talked about-- what did I talk about? I talked about an introduction to molecular biology for about 10 or 12 slides. And then we were going pretty quickly, and I covered a lot of this material on gene measurement techniques. And so we've already had one question, or one request, to talk about something that I actually flipped pretty quickly through: this concept of SAGE and how SAGE is used to measure-- like microarrays, it's actually a measure of RNA-- and it's quite important that we actually talk about SAGE, I think. But I'm going to start the discussion of SAGE by talking about microarrays again for a second. So we talked about microarrays, how these are made as grids of DNA. Each spot in the grid is looking for one particular RNA. And in fact, more than one spot might actually be hybridizing to one RNA. It doesn't have to be one to one. It could be many to one. And I just want to talk about the differences between the two technologies for microarrays and then talk about how that actually leads into the SAGE discussion. So for the past-- coming up on 10 years now-- there have been two technologies to make microarrays. There's the oligonucleotide approach and the cDNA approach. The analogy I like to use is this. Think of a microarray as, say, Manhattan with all the skyscrapers. If you had to rebuild Manhattan, there are two ways you could do it. You could take one skyscraper at a time and put it into place, or you could just build all the first floors, then all the second floors, all the third floors, all the fourth floors. And that's essentially the difference between these two technologies. So the oligonucleotide arrays are really made by Affymetrix. The equivalent of each of the skyscrapers-- each of the strands of DNA-- is only 25 nucleotides long, but they're built one base pair at a time. Because they're only 25 nucleotides long and genes are much longer than that, that's why they have multiple probes going against a particular gene. The cDNA arrays, on the other hand, are essentially strands of DNA that will hybridize to the RNA, but strands of DNA that are just put into place, primarily using something like a robotic spotter. It might pick up four at a time, or 16 at a time, or just one at a time, but that's how these two are made. And beyond the technology difference here, there's a major difference in what exactly these microarrays are measuring. The cDNA arrays are measuring relative expression amounts. So you have to put two samples on there, one in which the RNA might be colored red, another in which the RNA might be colored green. And what people will do is, after they've hybridized it and taken a TIFF image of this, they'll look at the relative ratio of how much red is there compared to how much green. So with that, you always have a relative expression level. In theory, the oligonucleotide approach is different. The cDNA approach is relative because each of the strands of cDNA has its own properties. Each of the strands might have a different set of Gs and As and Ts and Cs, different melting temperatures. And that's why you can't really use that as an absolute measurement. And the intensity might be lower on one of these arrays, not because there's less of a gene present, but only because the actual cDNA had a little bit different biochemical properties and wasn't hybridizing efficiently.
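To make the red-versus-green readout concrete, here is a toy calculation of the relative (log-ratio) expression level for a few spots; the gene names and intensities are invented, and real analyses add normalization and background correction on top of this.

```python
# Toy two-color (cDNA array) readout: each spot gives a red (sample) and a
# green (reference) intensity, and the reported value is the ratio, usually
# on a log2 scale so 2-fold up and 2-fold down look symmetric.
# Gene names and intensities below are invented.
import math

spots = {
    "geneA": (5200.0, 1300.0),   # (red, green)
    "geneB": (800.0, 1600.0),
    "geneC": (1500.0, 1450.0),
}

for gene, (red, green) in spots.items():
    print(f"{gene}: log2(red/green) = {math.log2(red / green):+.2f}")
```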
The oligonucleotide arrays, on the other hand, are, in theory, designed so that each of the probes has similar characteristics in terms of melting temperature and some of these other characteristics. So in theory you can compare one spot to another spot. And because of that, these are hybridized using a single sample. So you put one sample on one array here, and if you want to do a relative difference, you use two arrays. Now again, in theory this is an absolute expression amount. If people want to get to the most absolute measurement, like how many strands of RNA are present, then there's another technology called SAGE, and that's how I just wanted to introduce that discussion here. SAGE stands for serial analysis of gene expression. I think the original paper on this was in Science. It was a one- or two-page paper back in 1995 by-- it was Victor Velculescu at Hopkins. And so this actually was developed around the same time as microarrays but actually gained a lot of popularity before microarrays became so commonly available. But essentially the way SAGE works is that it gives you an absolute expression level for a particular gene, and it measures gene expression by sequencing. Automated sequencers were just coming around at the time, and people realized that you could use them with this technique to measure RNA expression levels. So the way this works is this: in theory, every RNA-- or let's say every sequence, every gene-- has what they call a SAGE tag here. And all a SAGE tag is is, let's say, 10 nucleotides that hopefully uniquely identify that gene. But the catch is it has to be right after a CATG, and we'll see why in a moment. So you look at a gene, you look for the most downstream CATG, and the 10 nucleotides right after it are going to be the tag for that gene. Hopefully that's enough to make it unique, but that's not always going to be the case. So the way this works is this. If you start with the strands of RNA that you're going to measure, the first thing you want to do is to make a strand of DNA. RNA is very fragile. You want to convert it to DNA as quickly as you can. Otherwise it's going to start to degrade. So the techniques for taking a strand of RNA and making a strand of DNA are commonly available. It's a reverse transcriptase. You can buy this off the shelf-- you can buy it from a store. And essentially it's going to start by going at one end. You might give it a primer to get started, and then it's just going to go all the way until it falls off the end. So now you've made a cDNA copy of this. And what you do now is, you can do that for the other strand as well. So now you've gotten rid of the RNA, and you have two strands-- you have double-stranded DNA representing the RNA that you had in your original sample. Now it gets a little tricky, because now we're essentially going to use a lot of molecular tools that bacteria have to actually do this kind of fancy cutting and pasting, so that we can sequence this and figure out what the genes are and what the gene expression level is. There's a particular restriction enzyme-- so a restriction enzyme is a protein, certainly, but it's an enzyme that's present in bacteria that goes around cutting DNA at particular sequences. There are a lot of restriction enzymes out there. If you get the current catalog or the current poster, there might be on the order of, let's say, about 300 of these restriction enzymes.
And there are many ways to define them, but one way people define restriction enzymes is based on the sequence they cut. And so there are some that cut DNA giving absolutely flat ends, and there are some that actually cut leaving sticky ends. And so here, this particular restriction enzyme called NlaIII takes any stretch of double-stranded DNA and cuts it at the CATG, but it leaves the CATG as an overhang. It doesn't cut it flat. So these guys saw this, and they said, wow, we could do some interesting things, a little bit of cutting and pasting, and actually take advantage of this. So to be clear, here's the sequence. Here's the CATG and the unique tag. And basically we're going to-- even though the strand might be longer than this, it's going to get cut like this. Now, to add some more fancy parts to this, remember how I told you most messenger RNA has a stretch of As at the end, because as it's being copied off the DNA it starts to stutter in some ways and just leaves a poly(A). And you can bind to it, or actually find it, by using a poly(T). Using their technique, what you can do is, instead of just starting with a T, start it with Ts and something sticky on the end. In other words, it's labeled in such a way that we can then fish these out after we apply a magnet, for example. Or some people use streptavidin and biotinylation as a common way to actually pull, or fish, these things out when you run them through a column, for example, with particular beads-- however you do it. So now we have a whole bunch of the double-stranded DNA, except we have this little overhang here. We've got all the As and Ts on that end. We fished them out of this big cellular pool, this cellular mess. Now what we're going to do-- it starts to get a little tricky. But now we have this tag here. Keep your eye on the red. That's all I can say here in this technique. Because what you're going to do is, in the end you're going to see how we're going to be able to use a sequencer to tell us, for these RNAs, how much of each RNA is present. So we've got one RNA going this way, and we've actually got this light blue for another RNA going this way. And you're going to see how they're going to end up being next to each other. So what we have to do then is add an adapter that has another sticky end here, and add it to this side here. And we're going to use another restriction enzyme that gets brought in at the CATG end, but it's going to cut after the tag. There's one particular restriction enzyme called BsmFI-- its recognition site is actually in that adapter-- and it doesn't cut at the CATG. It cuts 14 base pairs downstream from there. That's why our tags have to be 10 nucleotides long. A lot of fancy biochemistry here to get this to work. So you've got a tag here, and you've got a tag here, and you've got a little adapter here that we've added on this side. And then finally, we've cut it here, and we've cut it here. What we can do is add something called a ligase that basically takes all the blunt ends and just tries to join them together in pairs. It takes any small strands of DNA with blunt ends and tries to attach them together. So we have a blunt end that was here, and a blunt end that was here. Here's the first tag, and here's the second tag. And we're going to have a whole bunch of these. And what we can do is just run it through a sequencer, basically. We're going to do some more fancy footwork to get these to be as compact as possible.
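As an aside, the tag rule itself (the most downstream CATG, then the next 10 bases) is easy to state in code. This is only an in-silico sketch of that logic, with invented sequences; it skips all the wet-lab steps (adapters, the tagging enzyme, ligation) just described.

```python
# In-silico sketch of the SAGE tag rule, not the wet-lab protocol.
# Rule: take the 10 bases immediately after the most downstream (3'-most) CATG.
from collections import Counter

ANCHOR = "CATG"
TAG_LEN = 10

def sage_tag(transcript):
    """Return the 10-base tag after the last CATG, or None if there isn't one."""
    pos = transcript.rfind(ANCHOR)
    if pos == -1:
        return None  # no anchor site: this transcript would be missed entirely
    tag = transcript[pos + len(ANCHOR): pos + len(ANCHOR) + TAG_LEN]
    return tag if len(tag) == TAG_LEN else None

print(sage_tag("GGTACCATGAAACCCGGGTTTTTTT"))   # invented transcript -> AAACCCGGGT

# Once tags are concatenated, sequenced, and read back off, counting them is the
# whole measurement: twice the RNA should give roughly twice the tag count.
observed_tags = ["AAACCCGGGT", "AAACCCGGGT", "TTTGGGCCCA", "AAACCCGGGT"]
print(Counter(observed_tags).most_common())
```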
And in the end we have one stretch of DNA with just one tag after another, with some CATGs in the middle as spacers so we can make sure we're staying synchronized. Now what you do is run this through an automated sequencer, and then you can just get-- you can read off the sequence just like this, one then the other, then the one then the other, and then search databases to see what gene do we think this is. Now the reason why this is an absolute gene expression measurement, for sure, is that if this gene is over-represented, it's two times more than this one, you're going to see two times more tags than this one, in theory. There are efficiencies to each one of these steps that we talked about. And this is not as simple as a microarray. Literally you can take the sample if you're a physician at Brigham Women's Hospital, or any of these hospitals, and essentially take it to a core and get an microarray done. SAGE is a lot more involved, but in theory this is a great way to pick up low expression-- genes with very low expression levels. So that's why it was worth at least talking about SAGE. Because if you're looking at absolute expression levels, this is where it's going. For now this is a back burner kind of technology. People still use it. There are people who are religiously devoted to it, but most people end up using microarrays today. Yeah? AUDIENCE: Can you go back and explain just a little bit how the tag got there that way? ATUL BUTTE: The tag just happens to be a sequence. So what we're calling a tag here is nothing we put there. AUDIENCE: OK. ATUL BUTTE: It's something that is next to a CATG that's on this end of the gene. If there's nothing there like that, then we're not going to find it, so that's another catch. If the first one is all the way up here, we might not even get it. So they're saying the average, if you do the math, there's at least one of these within 256 base pairs, because it's like 4 to the fourth. So there should be one there, but there might be degeneracy, or it might not be there. So that's the whole technique. It's 10. It's CATG only because of the peculiarities of the enzymes being used to actually cut these things apart and then just look at the tags. Any questions about that? So that's basically-- that was the last thing I want to talk about in this section of gene measurement technologies, or gene measurement techniques. So now Zack was suggesting that I talk about some of the stuff that we've been doing with diabetes, only because this is supposed to be genomic medicine. We can give you a real world example of how we've used microarrays to help us with diabetes. So I'm a pediatric endocrinologist, and I've spent most of my time over at the Joslin Diabetes Center studying type 2 diabetes. It currently affects about 15 million people in the United States, and in fact, the CDC just estimated a couple of months ago that a child born in the year 2000 now has a 1 in 3 risk of getting type 2 diabetes. So it's 33% likelihood now of a child born just two years ago to get type 2 diabetes. Why? Because of the obesity problem. So we all know kids have less exercise today in school, because they have to pass these tests. And now schools have to give more courses so that they pass the MCASes and things like that. There's more TV. There's more channels. There's internet. There's all these things that cause kids to basically sit in one place. They don't get much exercise. 
I think there was something on CNN last week about how even toddlers, one or two years old, are just not getting as much exercise as they used to. So this isn't even school age. This is starting at basically age one or two. So the question we had here was this. We know how to define-- the diagnosis of diabetes is actually sort of arbitrary. In fact, the definition of diabetes keeps changing, it turns out. But for example, at one point the definition of diabetes was having a fasting blood sugar-- so morning blood sugar-- level of 127, I think it was, milligrams per deciliter, and that qualified you for having diabetes. But what we know is that it's not just an on and off type of thing, where all of a sudden you don't have diabetes, and now all of a sudden you do. Because diabetes is a very heritable trait. It's controversial, but some people say, for example, it's a very genetic trait. Let's put it that way. There are some studies that show, in twin-twin studies, that if you have one twin that has type 2 diabetes, there's a 98% likelihood that the other twin is going to get it, if they don't already have it. So it makes you think that it's incredibly genetic, but the problem is the incidence is so high. This is a common disease. It's not very easy at all to find a particular mutation that causes it. We haven't been that lucky so far. We have found-- the field has found a number of mutations that do lead to diabetes, but they might explain the sum total of maybe 0.01% of the people who have diabetes. Those are very rare forms. The common form, which seems like it's incredibly genetic, is still a big mystery. The point of what we were going to use genomic medicine for here was trying to define the patterns of how people might go from non-diabetes to diabetes. There's a middle group in there. In fact, the ADA, the Diabetes Association, now calls this pre-diabetes: people who don't have a normal handling of sugar but don't strictly meet the criteria for diabetes today. The question we had here was, using microarrays, can we define patterns of gene expression in human subjects with diabetes, specifically in their muscle samples, to identify those that are at high risk-- not just define what genes are different between diabetes and non-diabetes, but also what genes were already different in those that were at high risk of getting diabetes. Specifically, these are offspring of type 2 diabetics. This particular study that we did was in Mexican-American subjects. We have diabetes and family history of diabetes. So we basically have three groups: family history negative-- so obviously these are the controls-- diabetes, and then we have the family history positives. So we've got three groups here. And you can look at the subjects that we used for this particular study. We had one set that we used for the arrays, and we had another that we actually did validation work on, for some of the hypotheses that we came up with. And so most of these characteristics are not different, but they are different in a number of important ways. So fasting glucose is 99, 92, but 200 in the diabetics obviously, because they have diabetes. The two-hour glucose was higher in the diabetics, but the hemoglobin A1C was already high in the family history positives. What is the hemoglobin A1C? Well, that's one of our measurements for how high the blood sugar levels have been over a span of 120 days.
So in fact, most physicians now who take care of patients with diabetes, we measure this hemoglobin A1C at least four times a year, so that we can get a gauge as to what the blood sugars have been like over the past 120 days, because that's the lifespan of an erythrocyte, or a red cell. And as the glucose levels get higher in the blood, they actually glycosylate the hemoglobin in the red cell, so it's a byproduct of this. But it's already high in the family history positives. They're not exactly like the family history negatives here. Fasting insulin is already high. So the problem with type 2 diabetes-- just to make sure everyone's on the same page here-- is not a problem with making the insulin in the pancreas. The problem is that the insulin is there; it just doesn't act like it's supposed to in the target tissues. Now, to use my analogy from last week, I just had this great food from the Chinese truck again. My glucose levels are going up, my insulin levels are going up. And so insulin is now telling a number of my tissues to do a number of things. It's telling my liver to stop making sugar. I just ate a whole huge meal. I don't need the liver to start making more sugar, because I just ate a whole bunch. But more importantly, it's telling my muscle and my fat cells to start taking in the sugar. And what happens in type 2 diabetes is that that signal from the insulin to what the tissues are supposed to do is muted for some reason. We just don't know exactly why. But that's the problem. The insulin is there. In fact, now these guys have a higher insulin level, because already their bodies are realizing the same amount of insulin isn't doing what it's supposed to. So the pancreas sees that the sugar levels are not going down after the meals, for example, and it's saying, I need to ramp up the amount of insulin I'm making. But the primary problem is felt to be in the target tissues: insulin resistance. So the groups are already different, and that's going to actually make this analysis complicated. So we have five diabetics there, four without diabetes but with a family history of diabetes, and six controls here. And when you do this, you can compare each group to each other group. So we took muscle samples from these individuals. If the diabetics were on medicines, they were actually taken off for a number of days. They had no strenuous exercise beforehand. And it's all informed consent, and you get the muscle samples. You extract out the RNA. We put them on these microarrays. The microarrays, when we did these experiments, measured about 7,000 genes. Today when you do this, for the same price you get 44,000 genes. And so we can use a number of different analytic techniques that you're going to be learning about over the rest of the semester-- exactly how we make the distinction of what genes might be different, what genes are not changing. But if you compare family history negative to diabetes, we find 187 genes that are different. And if you compare family history positive to diabetes, there are 166 genes that are different. So fewer, and there's some overlap. So the 55 genes between them-- there are 55 genes that were in this list and in this list. Most of them coded for all sorts of interesting proteins-- for example, mitochondrial proteins that are involved in energy metabolism, or ATP synthesis, oxidative phosphorylation, which we'll talk about in a moment. So this is one common way of how we actually like to look at these genes.
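Before the heat-map view that comes next, here is a rough sketch of the kind of per-gene group comparison that produces lists like the 187, 166, and 55 overlapping genes. A plain two-sample t-test and an arbitrary cutoff stand in for the study's actual (different) analytic methods, and the expression values are random placeholders.

```python
# Rough sketch of a per-gene group comparison; a plain t-test and cutoff stand in
# for the study's actual methods, and the data are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes = 7000
fh_negative = rng.normal(size=(n_genes, 6))   # 6 controls (family history negative)
fh_positive = rng.normal(size=(n_genes, 4))   # 4 family history positive, no diabetes
diabetic = rng.normal(size=(n_genes, 5))      # 5 type 2 diabetics

def differential_genes(group_a, group_b, alpha=0.01):
    # One t-test per gene (per row), comparing the two groups of subjects.
    t, p = stats.ttest_ind(group_a, group_b, axis=1)
    return set(np.flatnonzero(p < alpha))

neg_vs_dm = differential_genes(fh_negative, diabetic)
pos_vs_dm = differential_genes(fh_positive, diabetic)
overlap = neg_vs_dm & pos_vs_dm               # analogous to the 55 shared genes
print(len(neg_vs_dm), len(pos_vs_dm), len(overlap))
```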
It's called a heat map, or even a dendrogram, depending on the different programs that you use. But the green indicates-- so each of the rows indicates a gene. Each of the columns typically represents a sample. And the green indicates a gene that's higher than the mean, and the red indicates a gene that's lower than the mean. And what we're showing here are a subset of those 55 genes that actually have something to do with energy metabolism. And primarily most of the genes that have something to do with energy metabolism are higher in the family history negative, they're lower in the diabetics, and they're already low in the family history positive, so the ones that are at risk for getting diabetes. So now we've got a piece of this puzzle. We see that transcription of some of these mitochondrial genes, metabolic related genes, is down. That might be causing decreased oxidative phosphorylation. So what happens is, fat is building up in the muscle and the fat cells. And if you have less of these genes and presumably less of the proteins that we're using them as a proxy for, we're getting decreased lipid oxidation. As fat or lipid builds up in muscle, it actually can cause the insulin to not work as well. So maybe that's how we're getting to insulin resistance because of an increase in the lipid in the muscle. But there's a huge half of this that's still unknown today, and that's the part I want to focus on for a second. People show this slide all the time in all sorts of different contexts. Huge formula here, huge formula here, the miracle occurs in the middle. And still the biggest problem with this type of study is that this miracle still has to occur to figure out why do we have this list of genes. What is it about this list of genes that's special? How are we getting to this set of genes? What's the biology behind it? So although many of my colleagues will argue with me about this, you cannot ignore the biology here. It's getting harder and harder to just say, here's my list of genes, go run with this. You need to know what is it about this list, what's causing this list of genes to be so special. And unfortunately right now to informaticists it looks like a miracle occurs, that someone has the insight to look at one particular factor, and you actually get a causal chain of events, and then you can get your list of genes. Here what we realized is that most of these genes are downstream of a transcription factor called NRF-1 one and another one called PGC-1alpha. It'd be great if we had some database of these things. Of course, we don't. It'd be great if NRF-1 was even on the chip. It's not. It's on today's chip, but it wasn't on these chips from three generations ago. So no amount of informatics and network building and Bayes networks or any of that would have got us to this particular hypothesis, because the gene wasn't even being measured at the time. But all of these genes that I'm showing here-- again, a subset of the previous subset-- are all downstream of NRF-1. So that led us to actually think about the hypothesis, maybe NRF-1 is down, and that's why all these other genes are down. Because it's much easier to come up with a causal explanation requiring just one thing to be wrong than 50 things to be wrong. That's Occam's razor. Simpler explanations. So when you look at NRF levels-- again this measured a different population using a different technology, RT-PCR. You have the family history negatives here, positives here, diabetics again. 
And the gene actually has a statistically significant drop in the diabetics. But what's puzzling is that it's not dropping in the middle group. But we saw those genes were down in the middle group, so this isn't a perfect explanation at all. Another piece of biological knowledge led us to realize that NRF is actually itself is downstream, or actually it's co-activated by PGC-1alpha. So when you actually measure that, you also get the clue that PGC-1alpha is down in both of these two groups, as well as another form, PGC-1beta, which is down in these two groups. The biggest problem, though-- so now the hypothesis is that drops in PGC-1alpha and NRF are actually what's causing the difference between diabetics and non-diabetics, and it's already showing a difference in the pre-diabetics. The biggest problem, though, with this study is that what's the cause, and what's the effect here? I'm already showing you that the glucose levels and insulin levels are already high in that middle group. How do I know this isn't just an effect of having high sugar levels? How do I know this is causing the high sugar levels? And that's the problem with this type of approach. We can easily get samples now. Samples aren't the hard part. The arrays are not the hard part. The analysis is not the hard part. Informatics is not the hard part. There's plenty of people around now, plenty of software you can download. The hard part's the interpretation in the end. Even though we have all of these genome scale tools, we still have nothing that makes the interpretation that much faster or that much easier. So now it's still routine. If I go to the Joslin Diabetes Center, I look at their core facility. I was just there a couple of days ago. They've literally built a pyramid of all the arrays they've done so far, on the order of about 1,500 almost 2,000 microarrays already done at the Joslin Diabetes Center. There's maybe been about two papers published so far with all of this array data, this one which had about 30 samples, and another one that had about 10 samples. And still there are thousands of arrays that have already been collected. The data is already sitting there. Analysis has already been done. But people cannot make the leap to the next step. Why is it this list of genes? What's so interesting about that list of genes? I think that's going to be the hard part now in the next few years. This is not that novel to take samples and put them on arrays. That's not what the medicine part of this is going to be about. It's going to be trying to get down to simpler explanations for what we're seeing here. So this is a final hypothesis here, perhaps that PGC-1 is down, and that, along with NRF, is actually causing this list of genes to be different. Any questions about that so far? Any thoughts? AUDIENCE: Do you get any-- we look at that AMP kinase pathway activation in terms of toxins? ATUL BUTTE: A great question. So the question is, have we looked at the AMP kinase pathway? So we know, for example, when people exercise, their sensitivity to insulin improves, even if they have diabetes. People who exercise, the same molecule of insulin has more of an effect. And that's mediated through a different pathway, through the AMP kinase pathway. Did we look at AMP kinase levels here? No. Did we look at the expression levels that might have been on the chip but they weren't different? Did we look at activity levels or protein levels? No. 
But you can bet in that towering pile of 1,000 arrays-- Lori Goodyear and others have done these arrays already-- but they're still stuck the same way. So what is it about these lists of genes that's going to get us to a hypothesis here? It's very unwieldy. It's very hard to deal with these lists of hundreds of genes. It's not very validating anymore. It's not very fun to work with these genes. The hard part is to get down to the causal mechanism for those genes, and that's where we have the least information right now, like what's upstream of what, for example. AUDIENCE: Can you go back to the chicken and egg conundrum [INAUDIBLE] I was just wondering how do you sustain a direct decrease in ox pause, lipid oxidation. So the idea is that you're not using your glucose efficiently. when you blood sugar is high. ATUL BUTTE: Exactly. AUDIENCE: You're not metabolizing [INAUDIBLE] ATUL BUTTE: Primarily you're not generating ATP as efficiently as you could in a muscle. AUDIENCE: Yeah, well if you're generating ATP inefficiently, and that's the equivalent of having off proxy coupling or something. [INTERPOSING VOICES] ATUL BUTTE: Exactly. AUDIENCE: This is you're actually not generating the ATP and instead-- ATUL BUTTE: Yeah, the pipe is-- AUDIENCE: --substrate. ATUL BUTTE: Exactly. The pipe is not being filled, exactly. The pipe is not being filled as efficiently as it can be. AUDIENCE: Right. ATUL BUTTE: Absolutely. AUDIENCE: If we use that and we go back maybe to sort of backwards reasoning, wouldn't that basically suggest that the initial perturbation is in NRF and CTAR? ATUL BUTTE: So there's other ways we can-- there's another piece of information that we can bring to try to figure out the ordering of them. So there are sequenced polymorphisms that are known in these genes, specifically in PGC-1, that go along to having diabetes. So we think, well, if there's people who have an abnormal sequence and they get diabetes, maybe that's what's causing it. AUDIENCE: Right, right. ATUL BUTTE: That's what puts this upstream of actually having the high sugar levels. But in this sample, without that a priori knowledge, you can't tell that. That's the issue. AUDIENCE: Doesn't it suggest that, though, or was that a giant leap? In other words, if the glucose level was high, if the initial perturbation was the elevated blood glucose-- ATUL BUTTE: OK, yep, if it were that. AUDIENCE: --that doesn't explain why you have upregulation of-- or is it downregulation of NRF and PGC, whatever it is. It up or down? ATUL BUTTE: The NRF is down. All I'm saying is with just this experiment, I could put this square up here and say that is leading to this, leading to this, leading to this, leading to this. I can't make that leap that this is causal with this. [INTERPOSING VOICES] AUDIENCE: More suggested that the square's down there, or am I not thinking about this correctly? ATUL BUTTE: With the sequenced polymorphisms and things, it makes me think that it's down here. But just the experiment by itself without the a priori knowledge, without knowing what the literature tells you, how would you know that, because these patients already have some degree of resistance. That's the issue here. You need to know what-- the biology. You cannot ignore the biology. AUDIENCE: Right, but the question is just the-- [INAUDIBLE] Because if you look at the control group, the body mass index seem to be much lower. ATUL BUTTE: Absolutely. AUDIENCE: [INAUDIBLE] ATUL BUTTE: Exactly right. So you picked up on that as well? 
The body mass index is different, too, because it's hard to find. I mean, that's life in America now. Everyone is obese. And so even teenagers that are obese might be in this middle group now. They will have some impaired glucose tolerance. AUDIENCE: Of this case in the control group, control group was normal. ATUL BUTTE: The control group was-- I mean, their body mass index is still 30. That qualifies as obese today. And the family history group is actually slightly less. But they're all obese, basically, especially the controls. AUDIENCE: But you can see the distribution on the graphs on the next page. ATUL BUTTE: Absolutely, absolutely. This one here. AUDIENCE: Yep. ATUL BUTTE: Yep, exactly. This is one gene versus the other, and you can see the body mass index here, exactly. And you got one all the way over here, exactly. That's destroying the mean. So these are not otherwise equal groups. That's the problem. Now you have to-- so why am I even showing all this, if I have all these problems with this analysis and this data? The whole point of this kind of course is not just to teach you how to do these types of experiments. To me it would be great if you knew how to interpret these things that come out in the New England Journal every day. Because every month now we see a microarray paper, say, in the New England Journal, or JAMA, or other equivalent publications. And I think we have to teach people how to read these things with a critical eye. Because it looks so fancy to come out with 44,000 genes, and these are the ones that are actually diagnostic. But there's actually problems with the experimental design. And if the experiment was not designed in exactly the way to answer the question that was being asked, no amount of fancy arrays is going to be able to help you. You can't salvage that. That's what I'm trying to get across, is how to be critical here. I think the same thing-- when you see a bunch of genes that distinguishes ALL from AML. Let's think about all the other things in their context that could be making a difference here. When were the samples acquired? Where were they acquired? Are patients with AML preferentially sent to one center, and the ones with ALL sent to another center? My favorite example I love to give is this. Let's say you're looking at a solid tumor, and you're trying to distinguish the genes that are different between the metastatic form of this cancer and the non-metastatic form. If you've spent any time in the wards, you know how surgeons work. If you have a patient, and if a surgeon is looking at their caseload for the next day, and they have a patient with metastatic and patient with non-metastatic, the patient metastatic, it's already metastasized. The surgery is going to be a hell of a lot longer, let's say, because they have to go, check this, check that, clean this out, clean that out. The surgeon might say, I'm going to put that patient as the first case of the day. And the simpler case, I'm going to put on as a follow-on or an add-on case. Keep them waiting in the waiting room or in the recovery room ready to go, and then we're going to do them after this first case is done. So now imagine a study where you look at the difference between metastatic and non-metastatic forms of this particular cancer, and you've got these samples from these surgeons who preferentially put the hard cases in the morning and the easy cases in the afternoon. I'm an endocrinologist. 
I know that there are hormones that are different between morning and evening, like insulin, growth hormone, cortisol. Who knows how many genes are downstream of those hormones? How do you know that the signal that you're seeing isn't because of that bias or that confounding there? Now this is not revolutionary stuff. If we looked at clinical tests, at the clinical trials in the past, we knew to look for these things. But all of a sudden, now that we have these fancy red/green-ograms, and we have these microarrays, people have forgotten about all of these traditional confounders. We're measuring across the entire measurable genome. The signal that you get just from endocrinology, from circadian cycles, might be stronger than the signal that you actually think you're measuring. Now you go back and look at the last three microarray papers that are in the New England Journal of Medicine and just look to see if you can figure out what time of day these samples were acquired. No way. AUDIENCE: How sensitive is something like that? Well, two-part question. One, is it a matter of, is there an 8:00 AM, an 8:30, and that sort of thing? And is that even known? ATUL BUTTE: Right. So it might not be known, but then what I would say is we should randomize across that variable. You could turn the lights on there. We should randomize across that variable. So for example, someone should be looking at the clock and say, well, these are all acquired-- there are some that are here, some that are here, some that are here-- and just convince yourself that it's not just a gross bias. I'm just making up a fake story here that's believable. AUDIENCE: Couldn't you just do normal controls and do arrays on them in the morning and the afternoon? ATUL BUTTE: Sure. You can try to compensate some other way. But it's going to be relative to their hormone levels, too, probably. AUDIENCE: I guess the example that you gave and just going out there, but I guess it's different than in a laboratory setting, when [INTERPOSING VOICES] cells and eventually they die. ATUL BUTTE: Exactly. Exactly. We've taken this technology that we've used in cells, in cell culture from cell lines, and rapidly moved it to patients that exist in a context. Even in the middle ground of just using lab mice-- look at the same mice, same conditions, same everything-- there are genes that are different. One of the mice was an alpha male in that little cage, and one wasn't. There are differences between things that we think are otherwise the same. But now we've taken it to this extreme, because you're in a course called Genomic Medicine, and that's what we're doing today. These are the problems that we have with that type of approach. I kid you not. Go back to the last few microarray papers in the New England Journal; you will not see what time of day, even if you go to the supplemental material. I think people don't even write that stuff down. And it's a problem. I've only seen one paper so far that's addressed this issue. AUDIENCE: So what would your suggestion be, I mean, obviously to us? ATUL BUTTE: Control for that. AUDIENCE: It's to control for it, but in the interim, while there's labs that maybe really aren't thinking about this. ATUL BUTTE: Yep. You can do what [INTERPOSING VOICES] Sunil is saying. AUDIENCE: Stop what you're doing. ATUL BUTTE: No, but you can try to model that.
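Two cheap defenses along the lines of "model that," sketched with entirely invented collection times, labels, and expression values: first check whether collection time is grossly confounded with the groups, and, if times were recorded, regress the time effect out of each gene before any group comparison. This is illustrative only, not a recommendation of a specific pipeline.

```python
# Hypothetical sketch of two checks against a time-of-day confounder.
# All values (times, group labels, expression) are invented.
import numpy as np

rng = np.random.default_rng(2)
n_samples = 20
collection_hour = rng.uniform(7, 17, size=n_samples)   # when each sample was taken
group = np.array([0] * 10 + [1] * 10)                   # e.g. non-metastatic vs metastatic
expression = rng.normal(size=(500, n_samples))          # genes x samples

# Check 1: is collection time grossly different between the groups?
print("mean collection hour per group:",
      collection_hour[group == 0].mean(), collection_hour[group == 1].mean())

# Check 2: regress collection time out of each gene (linearly) before comparing groups,
# so that circadian-driven signal is at least partially removed.
design = np.column_stack([np.ones(n_samples), collection_hour])
coef, *_ = np.linalg.lstsq(design, expression.T, rcond=None)
adjusted = (expression.T - design @ coef).T              # residual expression, genes x samples
```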
So take some other tissue that's related and see just across a sampling of time points what genes are different, and maybe you can then subtract out that effect. But more importantly what I would say is, if you're going to make a list of genes that you think are diagnostic, we're going to have to-- as physicians, we're going to have to look at that with a very skeptical eye. Because you cannot control for everything here. But we're measuring across so many genes and have so few samples, that no matter what, we're going to be overfitting the data here. That's what I'm worried about. So we'll be overfitting what we have. No amount of control samples is going to help us fix that problem, I think. How are we doing for time? OK. Let's talk-- let's go a little bit more into the diabetes. And so what I was planning to do is talk about one more diabetes project that we've been doing. Then I'm going to talk about some web-based resources for doing some interesting hypothesis generation, and we'll just end it at that point. So all of you are-- so not all of you are actually are physicians or medical students, but nonetheless I'll present this case of a patient that I saw. This was a female presented with Acanthosis nigricans, random glucose of 162. So Acanthosis nigricans, do you know what that is? There's a dark patch that you can get in the back of your neck, and your elbows, and your underarms. And what that means is that your insulin is too high. Because insulin is telling the melanocytes through some unknown mechanism, it's telling the melanocytes to become darker. So you just look at people on the street now, and look to see if they have a dark patch here. It's not dirt. You can't scrub it off. That's a sign of Acanthosis nigricans, so someone's having a problem with insulin. It's too high. Did more studies, the LDL is sky high. LDL is high, total cholesterol sky high, triglycerides are very high, HDL is low. So it's a metabolic syndrome, or otherwise called Syndrome X. Fasting glucose 133, that meets criteria for diabetes. Insulin is 27, which means this is high, but this is very high, bringing that glucose trying to compensate for that. Hemoglobin A1C is 7.3. The normal range is 4 to 6. And again to drive the whole point home, this is a 12-year-old girl with adult onset diabetes. So this is her body mass index. This is a 97th percentile curve, and this is where she is, well over the curve here. And now she has type 2 diabetes. She meets criteria for type 2 diabetes. She was not even entering seventh grade when I started doing summer vacation last year. 1 in 3 kids is going to be like this. That's a problem. So one way to study this problem is to study obesity, and one way to think about obesity related to diabetes. So one way to study obesity is to think about-- specifically what I'm interested in studying is how adipogenesis, or the process of making fat cells, might be related to how insulin works. And the best established way to study that is to look at the insulin receptor. So now we're zooming into one of these target tissues-- remember, liver, muscle, and, fat-- and the insulin receptor is a protein that sticks in the membrane of the cell. Like any other protein, it's got a gene coding for it in the DNA, gets transcribed, gets translated. The protein goes out to the surface. There are known mutations in the insulin receptor that can cause you to have problems with glucose handling and diabetes. 
In fact, there's a syndrome called leprechaunism, where these kids look like leprechauns actually. They die pretty quickly after birth, but they have mutations in the insulin receptor. It turns out you can actually be born without an insulin receptor. There's been a null person. There's been one patient, I think, that was found to have no insulin receptors. And obviously the insulin level was sky high in this baby at birth, because it's not acting anywhere. But what happens is-- what's supposed to happen is insulin binds to the insulin receptor, and then these phosphate groups are added to itself. And that changes the shape of this molecule in such a way that it can actually start to interact with other molecules. And in fact, the insulin receptor also adds phosphate groups to other things. In fact, these are insulin receptor substrate 1, insulin receptor substrate 2, 3, and 4, and there are some others here. So in collaboration with Ron Kahn, President of the Joslin Diabetes Center, he's created some mice where they've basically knocked out IRS1, 2, 3, and 4 in separate mice. So these otherwise normal mice missing this one particular gene. And if you take fat cells from these mice-- in fact, if you take pre-fat cells or pre-adipocytes, you can make them go into fat cells using a standard cocktail of hormones. This has been done for the past two decades. You take pre-fat cells and make them into fat cells with a particular cocktail. And what happens is if you take normal pre-fat cells and make them into fat cells and stain for the fat or the lipid, you get a big red circle here. This is a dish of the cells. If you knock out IRS 1 and try to do this, you cannot get the fat cells to form. So again, pre-fat cells, the same cocktail, you don't get fat cells in here. There's no stain. There's barely a stain for any red here. And it turns out if you knock out IRS3, 2, and 4 in the middle here, you get a gradation of the phenotype, interestingly. So it's not all or none. If you knock out the other IRS molecules, you get a gradation of the phenotype, curiously. So what we did is in collaboration with Ron Kahn, we actually made a list of genes that behaved in this pattern. We you don't have numbers here, but we're just trying to find genes that go along with that pattern. So let me show you an example of one of these things. Here's a gene. Here's wild type 1. Here's wild type. Here's IRS1 knockout, so no fat cells, easy to make fat cells. And we're taking the genes-- we're taking RNA before we try to make them into fat cells. The hypothesis here is, what genes are different before we try to make them into fat cells that might be impacting the process of making fat cells? Now we'll take a second here, and you're probably asking me, wait a minute. We were just talking about humans. We were just talking about this kid that was obese, and now we're talking about mice here. I'm going to bring this back to the humans in a minute, because I want to intersect this with other data sets and show you where I'm going. But suffice it to say we have about 80 genes that follow this kind of pattern. They're high in the normals. They're low in the IRS1 knockouts before adipogenesis, and they're in this kind of monotonic pattern here, or vice versa, going down or up. Came up with a list of 88 genes that fit this kind of pattern. Now put those 88 genes on hold for a second. Let's talk about a second disease. I happened to be involved with studying this particular syndrome called progeria. 
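The pattern screen just described-- genes high in the wild type, low in the IRS1 knockout, and graded in between-- boils down to something like the following sketch. The expression matrix and the ordering of the knockouts by severity are hypothetical placeholders, not the actual data from that collaboration.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-adipocyte expression matrix: rows = genes, columns ordered from
# wild type (differentiates easily) to IRS1 knockout (fails to differentiate).
samples = ["WT", "IRS4_KO", "IRS2_KO", "IRS3_KO", "IRS1_KO"]   # assumed severity ordering
expr = np.random.default_rng(1).normal(size=(20000, len(samples)))  # placeholder data

severity = np.arange(len(samples))   # 0 = wild type ... 4 = IRS1 knockout

monotonic_genes = []
for i, profile in enumerate(expr):
    rho, p = stats.spearmanr(severity, profile)
    # Keep genes whose expression rises or falls steadily with the phenotype
    if abs(rho) > 0.9 and p < 0.05:
        monotonic_genes.append(i)

print(len(monotonic_genes), "genes follow a roughly monotonic pattern")
# With random placeholder data this only shows the mechanics; on real data you would
# also ask how many genes pass by chance, as in the earlier sketch.
```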
Progeria is an advanced aging syndrome in children. In fact, it's very rare, about 1 in 8,000,000 births, and it makes children look like they're getting older. We've all heard of these kids and this kind of story. And specifically they lose their hair. They have alopecia. They're certainly short. They're shorter than age-matched kids. They get skin changes that make them look old. They don't go through puberty. They have poor weight gain. And usually they die at age 15 because of cardiovascular problems, atherosclerosis, get strokes, they get heart attacks. Now even though they look older, they don't get everything associated with being old. They don't get Alzheimer's. They don't get cancer. It's only the atherosclerosis and these other changes here that they get. Now as you might know, the gene for this was just recently discovered. Very amazing that they found this gene. I got involved with this particular syndrome only because of my friend Leslie Gordon, who was a medical school classmate of mine and who has a son with this. And she's got an MD-PhD. Instead of just studying this disease herself, she built a foundation called the Progeria Research Foundation. Got Francis Collins interested, the guy who's basically doing the Genome Project, the head of NHGRI. And within 18 months of forming this foundation, they found the gene for this. Incredibly difficult problem. There are people who at first would not believe this is even a genetic syndrome. One in 8,000,000 births is way less than the mutation frequency. We have a whole bunch of diseases that are at 1 in 40,000, 1 in 100,000. 1 in 8,000,000 seems like it's so rare. How could a mutation be that rare? So it's very questionable. And secondly, they had a very hard time finding any families where two of these kids were in the same family. So it was dumb luck that they were able to find this gene, because in one of the individuals-- at any point there's only about 50 of these kids on the planet, really. And one of these kids, in one of the samples, someone with a sharp eye saw a piece of chromosome one was actually inverted. The bands, those banding patterns, were actually different in that one individual. And that got Francis Collins to think maybe it's something on that particular arm of chromosome 1, and they sequenced all the genes there, and they found the particular mutation. It's a gene called lamin A. And it's not just any mutation in lamin A that causes it. Lamin A has actually been involved in a lot of other diseases, which we'll talk about in a second. It's just this one mutation in this one gene that causes it. Now amazingly enough, in the same issue of Nature, less than a year ago, the mouse model for this was published. How is that? Because this group at the National Cancer Institute realized lamin A is known to be involved in certain forms of muscular dystrophy. They said, well, they wanted to create a mouse model of a particular form of muscular dystrophy. They accidentally created a mouse model for progeria. So in the same issue-- you got the gene that causes it. You got diagnostic tests. And you got a mouse model to study it, just in a tour de force issue there. Imagine how long it takes for these things to happen for a disease. Start to finish, 18 months here for something that affects maybe 50 people. So if you put your mind to it, you can solve these diseases. It's amazing what the tools and the people can do today. So in collaboration with the Progeria Foundation what we've done is this.
It turns out over the past 20 years when people thought they had kids with progeria, that they were trying to treat kids with progeria, they had no idea what to do. So one thing they did is they took fibroblasts and they saved them in national repositories. So there's a cell repository called the Coriell cell repository where you can order fibroblast cell lines from patients with exceedingly rare disorders, including progeria. So we have gotten three cell lines from patients that were stored in the Coriell collection over the past 10 years, and we ordered them three times to make sure that we're controlling at least for passage number and things like that. And we compared them to age-matched normal fibroblasts from the same Coriell repository. Looked at what genes are different between progeria and non-progeria. And these are more up-to-date arrays. These are U-133 arrays on two-- the whole genome on two chips-- that gave us about 33,000 genes, and used a particular analytic technique, and we got about a list of about 366 genes that were different between fibroblasts in progeria and fibroblasts from age-matched controls. And so if you look at where these genes are and what categories they're involved with, you see interesting things like development, signal transduction, cell adhesion. The only thing I want to point out is this. These tables are made using a catalog called Gene Ontology. You're going to learn about that more in the future. And every week or every other week, the list of known properties of genes and proteins gets updated. But still the majority of genes unclassified, 161. No matter how interesting or how comprehensive we think Gene Ontology is, the vast majority of genes just have no-- we have no idea where they are, what they do, what roles they play. These are where the proteins that they're coding for actually take place. Nucleus, membrane, unclassified 131 again. No idea where these proteins even are in the cell. That's life today. We have a great taxonomy here of all of these terms and structure vocabulary, but we have no-- there's very little data for most genes in this still. And there's a bunch of genes that were upregulated that seemed to already be known to play a role in atherosclerosis. So this paper, again, talking about genome medicine, this paper is going to end with this list, only because a number of these are targets, and pharmaceutical companies actually have drugs against a number of these already. For kids with progeria it's basically a death sentence today. We have no medicines. We treat them with nothing today. Maybe some people treat one or two kids with growth hormone to see if growth hormone does something, but that's basically like a voodoo medicine kind of thing. We have no proof of that doing anything. Now we can at least get to some information to possibly even think about clinical trials here. But let's be frank here. These were fibroblasts that were frozen for years in the cell repository. And now we're making a claim as to what their blood vessels look like. And not just the blood vessels, but the intima of their blood vessels. This is not the same cell. The fibroblasts might have been taken from their cheek, but we're just making a claim here because we don't have access to the real tissue. So one of the reasons why I'm mentioning progeria here is remember that in picture you just saw, one of the ways that these kids get diagnosed is because by the time they get to age two, they start to lose their hair. 
But even more interestingly is they lose all their fat cells. It turns out in progeria you lose all your subcutaneous fat. And by death at autopsy it's very tough to find any fat in the body. It makes you think something is up with the actual process of making fat cells in these kids. In one study these kids have been known to have some insulin resistance, but it's just one study almost 10 years ago. In the mouse model, the last paragraph of that Nature paper says that they saw some process where muscle was re-differentiating into fat. They said they think there's some developmental problem going on, but it's not very clear. Other mutations in lamin A cause lipodystrophy, which is a syndrome where you lose all your fat cells. So it makes us think that there is legitimately something wrong with fat cell development in kids with progeria. So now I told you about-- where are we here? Hold on. I told you about 80 or 90 genes that were in the mouse models going along with fat, so about 360 genes that are different between progeria kids and age-matched controls, and they have some problem with fat. I have a third data set here, which was published by Gary Ruvkun's group over at Mass General Hospital. This came out about last year. And now this is in a totally different species. This is in the worm. To give you an idea of what we can do today, what this group did is they knocked out every single gene in the worm genome. So a worm has about 16,000, 17,000 genes, and they basically serially knocked out each and every one of these genes to look at what happens to the worm. Now in a worm it's actually pretty easy to do. You can actually get bacteria to make a particular sequence called an RNAi that interferes with the worm's ability to make that gene. So you don't even have to change the DNA. You could just effectively shut down the amount of that gene that's being made in the worm. And basically it's very easy. You don't even have to inject these worms. You essentially set up 16,700 buckets of bacteria, and just put the worms in it, and they eat the bacteria, and basically they knock out the gene themselves. That's literally how it works. All you have to do is get the worm to eat the bacteria that makes the RNAi. It's very easy. It knocks out the gene in that worm. So what these guys did is-- they had back-to-back Nature papers on this process. The first one was on a whole bunch of different phenotypes, but the second paper was looking at that, amazingly enough. And here there's a picture of a worm, and they're using the same stain for lipid that I showed you two or three slides ago, what we were using on the mouse. And basically they're saying, for example, if you knock out daf-2, you get an increase in the fat. If you knock out daf-2 and daf-16, you get a subsequent decrease, daf-2 and daf-3, slightly more. And they can quantitate the amount of fat that results after you've knocked out any of these genes. And essentially they've made a list. It's available on the internet, amazingly. If you knock out any of these 112 genes, it increases the amount of fat in the worm. If you knock out any of these 305 genes, it decreases the amount of fat in the worm. So now we have three big data sets, one in the mouse, one in the human, one in the worm, two microarrays, one RNAi, spanning all of these. But they all have something to do with each other, because they're all thinking about fat and fat storage. 
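Mechanically, intersecting the three lists is simple set arithmetic once every gene is mapped to a common identifier; the hard part is that cross-species ortholog mapping. A minimal sketch, in which both the gene lists and the ortholog table are toy placeholders rather than the real published lists:

```python
# Toy gene lists standing in for the three real data sets (placeholders, not the published lists)
mouse_adipogenesis = {"Wnt6", "Wnt10a", "Wnt7b", "S100a4"}      # IRS-knockout pattern genes (mouse)
human_progeria     = {"WNT5A", "WNT7B", "LMNA", "MMP3"}          # progeria vs. control fibroblasts (human)
worm_fat_rnai      = {"egl-20", "daf-2", "daf-16", "sgk-1"}      # RNAi hits that change fat staining (worm)

# Toy ortholog map onto human symbols; in practice this comes from a curated homology
# resource, and many genes have no clean one-to-one ortholog at all.
to_human = {"Wnt6": "WNT6", "Wnt10a": "WNT10A", "Wnt7b": "WNT7B", "S100a4": "S100A4",
            "egl-20": "WNT7B", "daf-2": "INSR", "daf-16": "FOXO1", "sgk-1": "SGK1"}

mouse_as_human = {to_human[g] for g in mouse_adipogenesis if g in to_human}
worm_as_human  = {to_human[g] for g in worm_fat_rnai if g in to_human}

# Genes implicated by all three experiments
print(mouse_as_human & worm_as_human & human_progeria)   # {'WNT7B'} with these toy lists
```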
Fat storage in the worm is very different than in a human, but still a number of the genes are actually intersecting. And really what we're talking about is intersecting these three data sets. We got the worm, we got the mouse, we get the progeria, and we're looking at the intersection of all these three. So this is what I've been working on over the past six months, actually. And this is what we're calling integrative biology or integrative genomics. Because these large data sets don't just exist somewhere out of reach-- a large majority of them are actually publicly available. You could go to the National Center for Biotechnology Information, the NCBI GEO, and pull down any number of these kinds of data sets. And you could do it today, for your final project. You can go download any of 600 experiments where the microarrays have already been collected. The only hard part now is to have the interesting question. You can't just pair something with something else. You have to actually think about an interesting question, because the resources in all likelihood are already there to actually try to answer your question. Whether you're interested in circadian rhythms, or fat development, or anything, that's the hard part now, is knowing enough biology to actually ask an interesting question in this way. So it turns out if you look at the genes in the intersection, just to give you an idea, these WNT genes-- WNT is actually a hormone. WNT genes are actually well known to be involved with adipogenesis, and they are actually living in the intersection of these lists. So the worm egl2, which is called WNT7B, increases the amount of fat if it's knocked out. The mouse WNT6 and 10A increase with dysadipogenesis, depending on the IRS knockouts. And WNT5A and WNT7B are down in progeria. So that's an example of a gene that lives in the intersection of these three data sets. I'm just amazed that we got any of these things to intersect, because they're so different. Three different species. We're not even looking at the fat cells here. With the progeria kids we're looking at a fibroblast, which is a proxy for the pre-adipocyte, which is a proxy for the adipocyte. But still you can get some signal this way. This picture is just the same. Even my own daughter knows the difference between a human, a mouse, and a worm. So there's caveats to all of this. So we're doing a lot of this validation ourselves now in the bioinformatics group over at Children's. But the point here is integrative biology. To me this is where I think a lot of fun is going to happen for me in the next few years. We have a lot of these data sets, and now I'm trying to come up with interesting questions so I can put these together to try to get a massive handle on these processes that other people, who are just using one of these data sets, might not be able to get a grasp on. We've got gene expression. We've got proteomic data. We've talked about that. We've had genome scans. Joel Hirschhorn is going to be talking about how you actually try to find genes that are associated with a particular phenotype using SNPs. We have the clinical measurements. We have all of these data points that are collected in all of these hospitals. I'll talk about each of these separately. ENU mutagenesis is a new technique where-- it's actually not so new-- but you could take mice and give them this particular chemical, ethyl-nitrosourea, that causes mutations in their gametes. So then all of their offspring basically are mutants.
And you do this for a couple of generations, and you find the interesting phenotype that-- wow, all of a sudden, this one has diabetes. You go back and see, what did we mutate? You basically mutate as much as you can, characterize the output, and then go back to see exactly where the mutation went in. That's ENU mutagenesis. RNAi we started to talk about with the worm, where you basically can just knock out any particular gene very quickly. People are starting to do this with humans and specifically with mouse models, but there's already a number of pharmaceutical companies within a five-mile radius of here trying to come up with drugs like that. And of course, most importantly, the prior biological knowledge. This is what's going to drive the interesting questions here. Any questions about that so far? All right. I'm going to talk about something just to bring this home, so at least you can see how easy it is to actually do this. Let me skip a couple of slides here. So yeah, we can actually talk about some of this. So let's say you're interested in doing some of this for a final project, that you want to actually put some of these data sets together. There are quite a number of sources where you can get microarray data and other genome-scale data today. And if you see some things in the description that you're actually biologically interested in, you might be able to run with that for a final project. I just want to point out a few of these data sources and then talk about one way how you can put this together even just using a web browser. So the cardiogenomics program is a source where I think there's over 150 microarrays now. The headquarters of this is over at Beth Israel Deaconess across the street. They have mouse models of cardiac development, so basically they have embryos, and they have sections of their heart. And basically all the genes are turned on and off at different parts during development. They have all sorts of different components of cardiac growth. So for example, people know that if one pathway or another pathway is involved in heart failure, they have mouse models where those components are already knocked out. And they have a bunch of measures made in time series. So they take normal mice, and they put them in this tank to swim, and they make them practice swimming a certain number of hours for weeks and then look at their hearts. And this is what swimming does to your heart, for example. They have those lists of genes available today. These slides are going to be online, so you don't have to copy down every single URL. If you just even write down the heading, then you know which one to look for. The Whitehead Institute, obviously the mother lode of these arrays. At least 12, maybe 14 publications now. Many different types of cancer, that's their primary focus there. A lot of clinical measurements with some. A lot of these are clinical samples, but they might have one or two columns of clinical measurements. Was this a smoker? Was this a non-smoker? That kind of thing. How long did this patient go before relapse, or did they not relapse? But you've got leukemias, you've got solid tumors here, brain cancers. Things like that are in this data set. DC Children's, or Children's National Medical Center, 500 arrays for many interesting human diseases. Muscular dystrophy, dermatomyositis, so that's rare. A bunch of rheumatologic things.
Heart failure as well as mouse, rat, and dog models of spinal cord injury, if you want to see what are the similarities and differences there. Pulmonary disease, including asthma and heart failure. I mentioned heart failure twice there. So that's DC Children's and Johns Hopkins. The Human Gene Expression Index I think is Brigham and Women's. 121 microarrays just from normal tissues, 19 normal human tissues. It's also an interesting list to think about. Stanford's microarray database is probably the largest besides the GEO, which we'll talk about in a second. Stanford has probably close to 4,000 arrays now, measured across 11 species, covering more than 80 publications. Many different fields, most of which are yeast, but a whole bunch of human ones as well. The National NCBI GEO, it's basically right next to PubMed now. It's the same website. Over 8,000 arrays from over 100 different types of microarrays. Never mind how many experiments. Their database can handle over 100 different companies' special microarray products and can integrate across all of these. TREX is from TIGR, which is The Institute for Genomic Research, over in Gaithersburg in Maryland, outside of NIH. 500, 600 arrays for mouse and rat models of sleep, infection, hypertension, pulmonary disease. So like for example, if you're interested in asthma and you want to see differences between human asthma and the mouse model, the data is there today. I will guarantee you no one else has done that intersection. If you find some interesting genes that you think might code for proteins that might be in the blood, and you can use that as a measurement, you can get an interested person, any one of a number of collaborators here in the Longwood area, to actually measure this in the blood, and that's it. You have a finding right there. Because the data is there. Hardly anyone knows how to intersect these things at all. And so what I'm going to talk about here for the last few minutes is this concept of a discovery portal. So what do I mean by that? Let me skip this part. We started with in vivo, experiments done in humans and mice. Then we had in vitro, in cell culture. Now we have "in HTTP." So here's an idea. Here's how you can go from an actual thought to a pretty well established hypothesis without even leaving your web browser, because all of these tools are available today. Now for the next maybe 15 slides, I'm going to flip through them pretty quickly. I'm going to cover a number of different databases where you could get all sorts of inspirational ideas from. So I'm going to start with an interest, move to a rat model. We're going to cover QTLs while we're there and physiological findings. Go to a rat genome map. Go to multiple species expression data, and end with the hypothesis here. This is live. You could do this yourself today. We're going to start at a website called PhysGen. Some of us in the room know about this. This is at the Medical College of Wisconsin. What these guys do is they have a huge pipeline of things they do to mice and rats to measure things, measure all sorts of different quantities. How much does this rat pee versus that rat? How much does this mouse breathe compared to this mouse? Et cetera et cetera, and they have tables of all of this data publicly available. So if you go to PhysGen, if you click on Data, and if you click on Animal Model, you get some genotype information for all strains. Now as I scroll down a little I see a whole bunch of the rats.
Now I might remember, because I went to the Diabetes Association conference last year, I remember hearing that the Brown Norway has some problem with diabetes. I might not have even gone to the seminars on the Brown, but everyone kept talking about Brown Norway, Brown Norway. I heard the Brown Norway has a problem with the disease. Let's go click on the Brown Norway strain report. Scroll down a little. This is the BN, or Brown Norway. And I can see, yes it does, it has diabetes on here, diabetes QTL. So what is a QTL? So we've got the rat, and it's got its own genome, like any other creature. And a QTL is a quantitative trait locus. So a particular portion of a particular chromosome appears to be statistically significantly associated with a particular phenotypic measurement. That's a quantitative trait locus. So in other words, this particular region which we're going to look at in a second on chromosome 2 is statistically associated with the insulin level, the fasting insulin level of a rat. So notice if you have one version of this, you might have a low fasting insulin level. If you have another version, you have a high fasting insulin level. But something in this region of a chromosome is associated. We're not at the gene level yet, but we're just in a region of a chromosome, with a starting point and an ending point on this chromosome, and some piece in the middle is associated. There's all sorts of other measurements here. You've got X-ray hypersensitivity. What does that mean? Well something on this region of chromosome 1 means you get more sensitive to X-rays. You can think about how esoteric these measurements are. If you're interested in radiation effects, there's your measurement right there. Arthritis severity, blood pressure, body weight, blood pressure, blood pressure. They're all sitting there, waiting to be tapped into. No one has enough time to look at every single one of these, but we're mandated by NIH to just put them out on the internet for people like you to just study. If you're interested in blood pressure, there's three QTLs right there that, chances are, people have probably never even looked at. Let's click on this particular QTL to see what it looks like. Here's the name of the QTL, Nidd/gk2. It's in the Brown Norway. And here's the marker for this particular peak. So a marker means this is a unique sequence in the chromosome that is going to tell me whether I have the one version of this or the other. Let me click on that marker, D2Wox23. And here is the PCR test I would need to do in the rat to tell me, do I have the one letter or the other letter? Or in other words, do I have the high insulin version or the low insulin version of this particular peak? So it even gives you the test. You could just go order those oligonucleotides, find a rat that's walking down Longwood Avenue at night, and just test to see, is this a high insulin or low insulin? You can do it today. Now out of dumb luck it just so happens that they've already done the work, and they've told us in this region of the chromosome we know there's a gene there. It doesn't have to be, but there is a gene there, S100A4. That's a gene that exists in this peak of significance. Now what we're going to do is go take that piece of information, that name of that gene, and let's look more at that QTL region, that region of the chromosome. We're going to go to the National Center for Biotechnology Information, click on Genomic Biology. I'm going to click on the rat.
And I'm going to just type in S100A4, the name of that gene I told us was in that region. Where is it in the rat chromosome? Here it is on chromosome 2. It's S100A4. If I click on that, if I zoom out a little, here's S100A4, and here are all the neighboring genes. That's interesting. There's a bunch of other interesting genes. First, S100A4, there's A3, A6, A8, A9. This gene is one in many of a gene family. So how new genes are often formed is through duplication events like this. But to me more interestingly I see things like natriuretic peptide receptor. Natriuretic peptide is made in a number of different sources, including the heart. It has to do with blood pressure and blood volume regulation. That's an interesting hormone. Let's see some others. Interleukin 6 there, might be some interesting things there. But let's just keep running with this S100A4 here. Now what I'm going to do is, I'm going to look at where is this gene expressed. I got it in a statistical peak. I've seen what its neighbors are in the genome. Now I want to know, why is it that this particular gene is associated with diabetes or insulin level? Well let's see. Where is that gene expressed in a whole bunch of different samples? This is one particular search engine where you can just type in the name of the gene and see where is this gene expressed in about 1,200 different samples. This is specifically cardiovascular. I type in S100A4, it says, which do you mean? The human, the mouse, or the rat? Well we started with the rat, but let's just see where the human ones are expressed. Click on that. It's measured in 10 microarrays by Johns Hopkins. That's not that many. Let's go back. Let's look at the mouse one. Mouse is measured by 142 by this group, 100 by this group, but 84 by this group. Let's look at the 84 first. And we can see that top samples. So this is the expression level. This is the percentile of that gene on that chip, on each chip. Muscle, muscle, muscle, muscle, muscle, muscle, muscle, muscle. And at least for Johns Hopkins, this gene is most highly expressed in their muscle samples, specifically regenerating muscle. We can look to see exactly what sample there is, but let's go on. Let's hit the Back button, and let's look at cardiogenomics, so that's these guys at the Beth Israel. And again, it's highly expressed in banded smooth muscle. Now it says band here. That says band 48 hours, band 48 hours, 24 hours. What exactly did they do? What exactly are these samples? You click on Overview, it takes you to their website. It tells you what was the experiment they did. They actually did an experiment where they took these mice, they opened them up, and they pull a little rubber band on their aortas. In half of them. The other half, they just sewed them up again. They didn't do anything. And so they made a list of genes that go up in the heart as a response to this pressure overload. The genes are sitting there. I can guarantee you again, not a single paper has been published on this list. Because we're so busy creating more data, we have not had time to publish anything on this data. So they were trying to study pressure overload induced cardiac hypertrophy. So now here's the hypothesis. I started with the Brown Norway rat, and the Brown Norway rat has signs of diabetes because of this gene, S100A4, because S100A4 was under that significant peak. But I think it's because S100A4 is expressed in muscle under a variety of different conditions, and I know muscle is an insulin target tissue. 
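To make the QTL logic from a few paragraphs back concrete: a marker like D2Wox23 is just something you can genotype in every rat, and the question is whether a quantitative phenotype such as fasting insulin differs between the genotype groups. The numbers below are invented for illustration; a real QTL scan uses likelihood-based LOD scores at markers all along the chromosome rather than a bare t-test.

```python
from scipy import stats

# Hypothetical fasting insulin values, grouped by genotype at a marker such as D2Wox23
insulin_genotype_1 = [3.1, 2.8, 3.4, 2.9, 3.3, 3.0]   # rats carrying one marker allele
insulin_genotype_2 = [5.2, 4.8, 5.5, 4.9, 5.1, 5.4]   # rats carrying the other marker allele

t, p = stats.ttest_ind(insulin_genotype_1, insulin_genotype_2)
print(f"t = {t:.2f}, p = {p:.2g}")
# A low p-value at this marker, and at its neighbors, is what shows up as the
# "peak" of the QTL over that region of the chromosome.
```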
So now I can get some clues to what experiment I can do next. Maybe I should go look up muscle cells for S100A4. If you actually do PubMed searches, you can see S100A4 is associated-- at least it's being studied in diabetes today. Now I just picked the first peak I saw on that website. You can go back and find any number of new genes there and come up with hypotheses, and we didn't even leave the web browser. You didn't even have to download a file to do any of this. So you could do this today. You start just surfing from one of these sites to another from something you might have heard down the hall to an actual hypothesis. We have about 10 minutes. I'm just going to call it quits now to see if you have any questions or thoughts. AUDIENCE: So at what point with these various websites that have the data, what place does that data get put on the website? Are they after publication versus [INAUDIBLE] sort of like-- ATUL BUTTE: Good question. AUDIENCE: [INAUDIBLE] structures [INTERPOSING VOICES] ATUL BUTTE: Great question. So NCBI GEO, you could submit data to them, and then they'll keep it hidden until you say it's been published, and then they could turn it on. These other personal websites like the Whitehead one, Stanford, they typically appear after publication. For these other websites in this PGA, the cardiovascular data I keep showing you, that data appears within 60 days of its creation, regardless of publication. NIH is more and more mandating that to be the model now. Because if they're paying for all of this genomic work, they don't want just one lab to really be able to run with it, because there's too much data for any one lab to deal with. So for some of their grants they actually insist that this data be made available well before publication. So right now on cardiovascular and lung disease, and a little bit of sleep as well, you can get a lot of data, 1,500 arrays. AUDIENCE: [INAUDIBLE] ATUL BUTTE: Yep, these PGA- these Programs and Genomic Applications. PGA was funded by the Heart, Lung, and Blood Institute. That in particular has a lot of data that's already available that hasn't been published on.
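Since so much of this depends on reusing public data, here is a minimal sketch of what loading one of these public expression data sets can look like. It assumes a GEO "series matrix" style download, a tab-delimited text file whose metadata lines begin with '!'; the file name is a placeholder, and the exact layout of any given series should be checked before trusting the parse.

```python
import pandas as pd

matrix_file = "GSE_XXXX_series_matrix.txt"   # hypothetical local download from NCBI GEO

# Metadata lines start with '!'; the remaining tab-delimited block is probes x samples.
expr = pd.read_csv(matrix_file, sep="\t", comment="!", index_col=0)
print(expr.shape)

# From here, the same moves as above apply: split the samples into groups, run
# per-gene tests, and intersect the resulting gene list with lists from other
# experiments or other species.
```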
JOEL HIRSCHHORN: So the topic is basically comple-- the title is Complex Traits, What to Believe, which I think tries to get the gist of one of the main messages here and fit into the number of characters allotted for a title of a lecture. And it basically has to do with current efforts in human genetics to understand the role that inherited DNA sequence variation plays in regulating what are called complex traits. And let me just go through all that. So this is basically the plan for the talk, at least as it stands now, and just give you a little bit of introduction into complex traits and common disease and how they relate to each other. So here's a list of-- a somewhat random list of diseases that you might see in a hospital or in a doctor's office and account for a lot of morbidity and mortality. And it turns out that most of these diseases cluster in families in ways that tell us that genetics plays a role in regulating who will and will not come down with these diseases. Same thing is true for things that aren't diseases but are quantitative traits that you can measure. Some of these, however, are very significant risk factors. I list here just things that are relevant to endocrinology, which is one of my interests. But things like blood pressure, obviously, if you have high blood pressure, that's a huge risk factor for heart disease, stroke, et cetera et cetera. Increased obesity is a huge risk factor for death and diabetes and heart disease, and many of these other things also have to do with future diseases. And these also cluster in families in ways that tell us that genetics plays a role. But for all of these, whether it's disease susceptibility or the quantitative trait illustrated here, adult height, we know also from the way that they cluster in families that multiple genes are involved and furthermore, that environmental and other non-genetic influences must play a role as well. So this is not quite so simple a picture as, for example, trying to tract down the gene that's responsible for cystic fibrosis or for Tay-Sachs disease. And so the goal that a lot of people in genetics have is to try to make this connection. There's variation in the human genome, which I'll talk quite extensively about. And there's variation that we observe out in the world. And we know there's a connection here, and it's a question of making that connection. And if you compare genomes as, again, I'll get to in a little bit, find that people are mostly the same. But there are differences between every person. And again, most of these differences are random spelling differences in the genetic code that have no consequence. But some of them do have biological consequences. Again, the challenge is to figure out which one of these differences matter. So I said that we know that genetics plays a role, and I just want to briefly get into, how do we know that, actually? So one way is from something called twin studies. I forgot the little-- this is from some country music group that I have no affiliation with and have never heard called the Taylor Brothers, but notable for part of their promotion is that they're identical twins. And you can see things like they look the same, they're about the same height. I'm not sure whether you can tell whether they have the same preference in clothing or not from this. But they may have some of the same aptitude. 
So what you can do is twin studies, where you compare identical twins to twins who are what are called dizygotic twins, also commonly called fraternal twins. If you compare same sex fraternal twins and identical twins, they're, in theory, raised in about the same amount of shared environment. So they're both raised in the same household, and that's the thing. But identical twins, or monozygotic twins, share 100% of their DNA, whereas dizygotic twins share only 50% of their DNA. They share the same as siblings. And so if there are things that are more concordant or more similar in monozygotic twins than in dizygotic twins, the usual conclusion is that genetics plays a role in that similarity. So for example, if you look at type 2 diabetes, if you look at monozygotic twins and follow them over their lifetime, there's at least an 80%-- and some people think it's actually close to 100%-- concordance rate. What that means is actually sort of striking-- it means that there is a particular set of genotypes which is incredibly predictive of getting diabetes. Yes, thank you. Sorry. I found a-- I found a-- I found a little cable. So one right one and one wrong one. So I'm all right. Whereas if you look at dizygotic twins, there's a lot of concordance. And part of that is because they share 50% of their genetic material. And part of it's probably because they share the environment that they shared. But the concordance rate is much, much higher for monozygotic twins. So that says that there's a strong genetic influence. And you can do other studies that don't depend on twin studies. And you can also do twin studies where there are monozygotic twins reared apart, but those are much less common. But basically family studies where if somebody has diabetes, what's the risk to their sibling for getting diabetes as opposed to the risk of somebody next door, for example? And again, it's much greater for siblings than it is for the general population. That ratio is something called lambda S. And you can estimate also something called heritability, which purports to be the percentage of variation that's attributable to genetic factors and actually certain types of genetic factors. So that's basically an introduction. So most common diseases are these complex genetic traits where we know genes play a role. And we also know, from those family studies, they don't just segregate in a simple pattern where if the father-- if the father has it, half of the children will have it, and then half of their children will have it, et cetera, and/or in a simple recessive pattern. But it's clearly multifactorial. So the next section is basically why we think that there might be a connection between common genetic variation and these common diseases. So this is a stretch of a genome. I think I borrowed this from David Altshuler or Eric Lander. I'm not sure. And this is what genetic variation looks like. So you have a long stretch here. And if you compare two stretches taken from different people, you'll find that there's one place here and one place here where maybe they differ. And most of those variants change a single nucleotide or single DNA letter, and it's called, therefore, a Single Nucleotide Polymorphism-- polymorphism means many forms, or in this case, just two forms-- or a SNP for short. And what you might find is there's this stretch of DNA where everybody is the same. And then at this nucleotide, some people have a C and some people have a T.
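For the lambda S and heritability quantities mentioned a moment ago, the arithmetic itself is very simple. The prevalence and twin-correlation numbers below are hypothetical, chosen only to show the calculation; the Falconer-style formula is the classic rough estimate, not the only way to do this.

```python
# Sibling relative risk (lambda S): risk to the sibling of an affected person
# divided by the risk in the general population. All numbers are hypothetical.
sibling_risk = 0.20        # e.g. 20% of siblings of affected individuals also affected
population_risk = 0.05     # e.g. 5% prevalence in the general population
lambda_s = sibling_risk / population_risk
print("lambda S =", round(lambda_s, 2))            # 4.0

# Classic Falconer-style twin estimate of heritability for a quantitative trait:
# h^2 is roughly 2 * (r_MZ - r_DZ), using monozygotic vs. dizygotic trait correlations.
r_mz, r_dz = 0.80, 0.45                            # hypothetical twin correlations
h2 = 2 * (r_mz - r_dz)
print("heritability estimate =", round(h2, 2))     # 0.7
```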
And what we want to know is, of all these different genetic variants, which ones are going to be the ones that are going to affect things that we care about? So here's something that we should all be passionate about in the next few months, so whether you're happy or sad that A-Rod went to the Yankees or something like that. But no, in seriousness, if this were healthy and diseased, not to say which is which here, then we would be interested to see which variants track along with disease. And that's going to be the general gist of what people are trying to do. So I said that there's a connection between common variants and disease. And so one question is, why might we be interested in common variants? And if you think about, before I begin, what's known so far about genetic disease, you might say, well, that's the wrong place to look because if you look at things like Tay-Sachs or cystic fibrosis, those are mostly caused by genetic variants that are quite rare in the population. They're severe deleterious mutations that cause these severe diseases. So if those genetic diseases are caused by rare variants, why wouldn't things like diabetes be caused by rare variants? And it's an open question, and I'll get into that. But let me just take you through at least some of the background. So I showed you earlier that if you sequence a stretch of genome and imagine this is about 1,000 bases from two different chromosomes-- it could be two chromosomes from the same person, or it could be one chromosome from somebody here and one chromosome from somebody around the world-- you'd get the same answer more or less, which is that, on average, you'd see about 1 in every 1,200 bases would be different. You can then ask the question, well, having identified these two alleles, the C allele and the A allele, if I then look in the same 1,000 bases of lots of other chromosomes, is it that this A allele or maybe this C allele is just a really unique event that's private to one of these chromosomes? Or do I see both alleles around the whole world? And the answer, most of the time, about 90% of the time, is that you see both alleles around the world. And so what that means is that most of the differences that you can find between two chromosomes are explainable or attributable to the variation, those variants that are common, that are shared throughout the world that are not like the Tay-Sachs mutations. But they're actually old variants that happened a long time ago and are present in multiple populations around the world. So right. And that's basically what that says. And so this has led to a hypothesis. So the assumptions are as follows-- first of all, that most variation in the genome is evolutionarily neutral. That actually may or may not be true, but as long as it's not the case that most variation is evolutionarily beneficial, we're OK. But let's assume there's some background for the genome. So then you have a SNP that's 22 kb away from the nearest gene and probably doesn't affect the function of that gene at all. So assume that evolution has not particularly cared about that base change. We know that most of this variation, most of the background variation-- which is more or less what you would call neutral-- is due to these common variants. That's what I just showed you, that 90% of that variation is due to common variants. We know also, from what I've been telling you with Tay-Sachs disease, that traits under negative selection will, in fact, be largely due to rare variants.
And there's a nice description of why this should be true in an article by Jonathan Pritchard. But there's a hypothesis that traits that are not under negative selection or at least not under severe negative selection will actually be due to variants that look more like the background, so that are common variants. And so this is what's called the common disease-common variant hypothesis. And one of the more forceful, I would say, descriptions of it is a paper by Reich and Lander in Trends in Genetics, 2002. And the idea for why common diseases might be like-- might be due to alleles that are not under severe negative selection goes like this. First of all, they're actually common. So in some sense, it sounds tautological, but it's actually not. They can't be that evolutionarily deleterious if they're extremely-- if they're extremely common. So that's one argument. Two is that they're multifactorial. So each gene that contributes to these common diseases actually only contributes a little bit to susceptibility. And that's going to be a theme that'll run through this whole talk. So even if it were a-- even if they were somewhat evolutionarily deleterious, each individual allele would only be contributing a small fraction of that. Third is they often tend to be late onset or after reproductive age. So again, evolution cares less about stuff like that. Although, we don't know. May be good to have had your grandfather around in Africa 50,000 years ago. So it could be the late onset diseases were bad. Another thing is that it's been hypothesized that some of these diseases like hypertension or diabetes actually have what's called balancing selection, which is that yeah, they give you diabetes late in life, but the reason that they give you diabetes is because they help make you hold on to calories better. So that gives you obesity now when we have McDonald's around. But 100,000 years ago when food was scarce, maybe that was really good. There's something called the thrifty gene hypothesis. So for hypertension, maybe it was good to hold onto salt if you lived in a desert, that sort of thing. So if we're looking for a common variation or thinking about that this may be a good place to look for the genetic variants that underlie common disease-- and again, this hypothesis does not say that rare variation won't also contribute to common disease. Just says a common variation will be part of the contributors. So if we're looking for it, it's been estimated that there are about 10 million common SNPs where both alleles are found at more than 1% in the general population. About 6 million of those are actually already in databases. So it's actually most-- a good fraction that have been found. And as you'll see, there's a lot of redundancy between SNPs. So probably a very high fraction of the variation, common variation in the genome, actually has been cataloged, which is a remarkable thing. When I started in this field not very long ago, I think six years ago, there was a paper that came out of the Lander lab, which is a massive tour de force, cataloging 2,000 SNPs across the genome. And this quintupled the number of SNPs that were known. And so this is now-- this 3,000 times as many now, just in the last six years. And you can find all of these in websites like dbSNP or the UCSC Genome Browser, and CBI has a genome browser. And they're all right there. So this is a screenshot from the UCSC Genome Browser. Each one of these little lines is a SNP. 
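The "redundancy between SNPs" mentioned above is usually quantified as linkage disequilibrium, for example the r-squared between two nearby SNPs; if r-squared is high, typing one SNP effectively tells you the other. A minimal sketch from made-up haplotype counts:

```python
# Hypothetical counts of the four two-SNP haplotypes (alleles A/a at one SNP, B/b at its neighbor)
counts = {"AB": 45, "Ab": 5, "aB": 5, "ab": 45}
n = sum(counts.values())

p_a = (counts["AB"] + counts["Ab"]) / n      # frequency of allele A
p_b = (counts["AB"] + counts["aB"]) / n      # frequency of allele B
p_ab = counts["AB"] / n                      # frequency of the A-B haplotype

d = p_ab - p_a * p_b                         # linkage disequilibrium coefficient D
r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
print(f"r^2 = {r2:.2f}")   # 0.64 here: genotyping one of these SNPs tells you a lot about the other
```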
And this is probably already out of date, even though I took it not too long ago from there. And this is just in one gene, each of these are-- on each of these two columns is a SNP. So if we want to use this catalog of common variation to try to find the alleles that are responsible for disease, how can we do that? Well, one way would actually be to try to study every variant or at least every common variant. This would be great. It would be an unbiased search of the entire genome, basically the entire universe of common variation. And this would be great if we could do it, but it's not currently practical. 10 million people-- sorry, 10 million variants. And as I'll show you, you need many thousands of people to do this. So it actually works out to hundreds of billions of genotypes. So that's something like the Pentagon budget or something for a year to do one study. So NIH doesn't like grants like that. So basically, until this gets much cheaper and/or we apply some of the shortcuts that I'll tell you about, you need to pick which genes and variants we're going to study. So this brings into methods for finding traits. Any questions so far? So we're on the track of trying to find which of these common variants might be contributing to common diseases. So really, three approaches, two of which have to do with common variants, and one actually which explicitly has to do with rare variants. And I'll get to that at the end. So the traditional approach to finding genes for disease has actually been something called linkage. I don't know. Have guys done linkage? Is that something that's been done here? Yes? No? So I'll briefly go through linkage. So linkage requires families where you have, in the case of disease, multiple affected relatives in the same family. So those could be brother/sister, or it could be cousins or an uncle and a niece or something like that. And the idea is that there is a single disease-causing allele that's segregating that family. So it's being passed on to some of the people in the family and not others. And the other assumption is that the disease will more or less-- doesn't have to be perfect, but has to be pretty good-- will more or less track with that allele. Now, for linkage, you're not actually typing the allele itself. What you're doing, you're typing markers that surround the alleles. So they may be a couple million bases away, actually. And that's a typical thing is about markers spaced every 10 million-- every few to 10 million bases. And the idea is that since you're just looking in a family, it turns out that a million bases is what's called one centimorgan, which means that recombination events happens once every hundred meiosis. so within a few centimorgans, there's generally not going to be any recombination events within a family. So the marker that's a few million bases away will travel with the disease allele in the family. So you use that marker as a surrogate for the disease allele, and you follow the marker through the family. And you say which of these markers. So if this marker is traveling with disease, you say maybe that's pretty close to the disease allele. But a marker on some other chromosome won't be segregating with the disease. So you'll say that region isn't linked. So that's linkage, basically. AUDIENCE: [INAUDIBLE] JOEL HIRSCHHORN: So traditionally, it's been microsatellites, which are markers that are things like repeats of the sequence TATATATATA, and some people have-- DNA polymerase doesn't like to copy those very well. 
So some people have 50 copies, some people have 44, some people have 53, that sort of thing. And you can type those. Those are a pain in the ass to work with, actually. And so people are shifting over, starting to, to use SNP-based linkage analysis. Because all you need is basically a marker that lets you distinguish each of the chromosomes that are segregating in the family. So probably, microsatellites are-- it's going to-- there's going to be a shift. But it's been wildly successful for single gene disorders. And the reason it's been successful is because there's always just a single copy in a family of the disease allele, and the segregation with disease and that marker is really very good because if you have the disease-causing allele, you get disease, and if you don't, you don't. So for common diseases, none of those things are true. First of all, because multiple genes are involved, you might not have the disease-causing allele but get the disease anyway and vice versa. You might have the disease susceptibility allele, really it should be called, and not get the disease. And furthermore, there might be multiple-- because it might be a common variant, there actually might be multiple copies of the susceptibility allele traveling in the same family. So the markers that are a couple million bases away now no longer perfectly mark the disease susceptibility allele because maybe on this side of the family, it's marked by-- it travels with this marker, this allele. But then on this side of the family, where it's segregating, it travels with a different marker. So all of these things conspire against you for common diseases. Nonetheless, people have found a couple of things through linkage of common diseases and tracked it down. DeCODE genetics in Iceland is probably the most prolific example of that. So if linkage isn't working, it brings us to the other main approach that's used, which I'll spend a lot of the talk talking about because it is the main method. It's called association studies. And basically, what this means is that rather than searching through the whole genome, you pick candidate genes. And Rich and [INAUDIBLE] have shown that this is much better suited than linkage to this scenario where you have common alleles that have only modest effects on disease, what's called modest penetrance. And then there's another approach that people are taking, which is they say, OK, I understand linkage might not be the best way to go. But I don't believe this common variants stuff. I want to look for rare variants that contribute to disease. And so there, what you are doing is, again, you're picking the genes to look at. But instead of just taking SNPs off the public map and looking for common variation, they're actually resequencing to try to find those rare variants that are present only in disease. And the best examples of this are BRCA1 for breast cancer, which actually originally identified by linkage, but that's a rare disease, and also MC4R mutations. That's a melanocortin-4-receptor gene, where there's a lot of rare variants that contribute to early onset, particularly, severe obesity. The difficulty with this approach right now is it's very expensive. It's much more expensive to resequence than it is to do genotyping of polymorphisms. And if the expense were flipped, people would probably do this resequencing because this gets you the common variants as well. And it's really just-- it's largely an expense issue that we can't do this. 
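To put rough numbers on that expense argument: the sample size below is a hypothetical round figure, and per-genotype and per-base costs are left out entirely, but the scale of the two designs is clear.

```python
common_snps = 10_000_000      # ~10 million common SNPs in the genome
samples = 10_000              # hypothetical number of cases plus controls for adequate power

genotypes_needed = common_snps * samples
print(f"type every common SNP: ~{genotypes_needed:.0e} genotypes")   # ~1e+11, i.e. hundreds of billions

genome_length = 3_000_000_000
bases_to_sequence = genome_length * samples
print(f"resequence everyone:   ~{bases_to_sequence:.0e} bases")      # ~3e+13, far larger still
```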
So I talked about the study that would take just the Pentagon budget. If you were going to do this, this would take probably the GDP for the history of the United States or something like that. So we're still a ways away from this. AUDIENCE: That flies in the face of the whole hypothesis, though, [INAUDIBLE] JOEL HIRSCHHORN: Yes. So I'm just saying that if you choose to-- I don't mean to say that the hypothesis has been proven. I believe that common variants contribute to common disease. There are still people who believe that that's not true at all and that really, this is the only thing that's ever going to yield anything is looking for rare variants. So and there are definitely a couple of cases where this has been true. And this really hasn't been explored to any degree just because it's so expensive. And I put it there just because if all of a sudden, to sequence somebody's genome for $100 became feasible, you would take 1,000 people with diabetes and sequence their genomes. And that would be the way you do the experiment. So that's why it's on there. So this is a cartoon version of what I just told you. Linkage analysis lets you search the whole genome and points you to a region, although it does not point you to the gene. So remember, those markers are 10 million bases apart. And in fact, because of some of the vagaries of complex trait genetics and the reasons that linkage isn't that well-suited, usually you get about a 20 centimorgan region or about 20 million bases, which can have something like 200 genes in it. And it'll say somewhere in those 200 genes, there's something that's contributing to disease. Whereas association, you're looking one gene at a time, so you're not-- but you have to guess correctly where to go. And so you have to-- so there's this haystack here of 30,000 genes or 30,000 straws. And you can pull one out and say, is that a straw or a needle? No, it's a straw. Next one. And look. And obviously, the success depends on you being able to guess correctly. But we actually-- at least for the first time, we know what the 30,000 straws are. So there's a website now with the entire genome on it. And we're getting pretty good, as I'll tell you, at telling whether things are real contributors to disease or not. But it's this interpretation of whether this is a straw or a needle that is going to be the focus of a lot of the talk because that's the study that a lot of people do is they pick a gene and they test it, and they claim it either does or does not have a role in disease. And I just want to give you hopefully some sophistication in reading those sorts of studies to try to say should you believe everything that you read? The obvious answer is no. But so this is what an association study is. You can imagine that you had healthy individuals and people with Alzheimer's disease. And there's a common variant in the APOE gene called APOE4. It encodes apolipoprotein E. And it is a common variant because if you look in the control population, you'll see it's there about 10% of the time, something like that. But if you look in people with Alzheimer's disease, you'll see those frequencies increased about threefold. And so this is the-- this is the paradigm for associations of common disease and common variants, and this has been seen over and over again. You look at just about every study that's been done, you see this increase in risk. Yeah. AUDIENCE: Sorry. JOEL HIRSCHHORN: No, no. Sure. AUDIENCE: Should I-- You also looked, obviously, [INAUDIBLE] JOEL HIRSCHHORN: Yeah. 
So actually, yeah. So there's a little bit of a subtlety to even this association. So it has been seen in other ethnicities. It looks-- although, the last time I looked at this, the data were not strong enough to say that there was definitely a difference. But it looks like the association with Alzheimer's disease might be a little bit weaker in African-derived populations. There could be a wide variety of reasons why that's true, and I'll go through a bunch of them just in general. AUDIENCE: Because there's less [INAUDIBLE] like if you just made that for all individuals-- if you made that [INAUDIBLE] JOEL HIRSCHHORN: No. So the rate, the frequency of APOE4, in this case, doesn't actually vary that much across populations. It's an old allele. Some alleles do, and you're getting into something called ethnic admixture, which I'll definitely get to in a little bit. So as I said, you have to guess right. So there's an issue of which genes do you guess? And ideally, again, one would like to do all of them. And it's getting closer to be able to do that, but-- yeah? AUDIENCE: Individuals with the BCRA1 example, when you take the linkage study and find that region that appears to be linked-- JOEL HIRSCHHORN: So that's definitely one thing you would do. So it's actually been hard enough until basically this year to do that. So you have 200 genes. And to thoroughly go through a gene and basically look at all 200 of them has been something that's just basically been out of reach. So to give you an example, I just got, I think, funded to do exactly that for an obesity linkage peak. And we couldn't have really written to do that as part of a regular R01-type grant. We could not have written an R01 to do that a year ago. It was just too expensive and too hard. It's just the information about which variants to type, and I'll go through, again, using haplotypes and stuff like that to do it. The information just wasn't there, and the expense was too high. But yes, I agree. Those would be the first candidates. So you would use information from linkage studies. And you might just say, I don't care about anything else that anybody says. I'm going to take linkage. If your complex trait or your common disease is fortunate enough to have a linkage peak that you believe, which is not true for most of them, actually, but some of them it is, you would say, those are the 200 best genes. I'm going to go for those. That's definitely an approach. You might say, well, linkage is not very well-powered for common disease. Probably most of the genes are not under linkage peaks. And most of the genes under the linkage peak are not the right one. So I'm going to ignore that. I'm going to go for things that-- I'm going to say 100 years of research has meant something. And if I'm studying diabetes, I'm going to do the insulin signaling pathway and beta cell function and obesi-- fat cell differentiation genes and stuff like that, and those are the genes I'm going to go for. Or you might say, well, this is just lamp post science because it's the biology that people have been doing. And this is not very powered, but this is a genome-wide tool. We can use expression analysis, so survey all the genes, find things that are expressed at different levels in disease versus healthy people and say that some of those or many of those changes might be secondary to the disease state. Maybe some of those represent a primary regulatory variant that affects disease. And so I'm going to look for those. 
And what we're trying to do, and other people are trying to do, is go after the most interesting genes, which are probably the ones that happen to fall in here, although there's no saying that they have to fall into this intersection. Maybe they fall in here or there or there, or even just one of these, or in the universe outside altogether. But I'll just give you an example for type 2 diabetes, just because we are actually doing this sort of thing. So there are hundreds of years of biology, or 100 years of biology probably, on diabetes, a lot of mouse models, single gene disorders that cause the disease. So that's another thing. If severe variation causes a severe form of the disease, maybe mild variation causes a late-onset form. So there's a list of genes there. So for diabetes, this is one of those things where there really isn't a perfect linkage peak to go under. Graeme Bell's group identified the Calpain-10 gene by going under a peak where they had significant linkage. Although, if you actually put all the linkage evidence together from everybody, you would not have said that there was any significant linkage to diabetes there. That association is probably true, but-- so you can do this, but it's not the end-all. And then there have been expression studies in diabetes that have pointed to a pathway of oxidative phosphorylation. So some colleagues of mine, [INAUDIBLE], David Altshuler, [INAUDIBLE], Cecilia Lindgren, and others, and also a paper from Nick Joselyn and Mary Elizabeth Patty, have pointed to that pathway. And there are genes that are actually in this red area, a few of them. So we're going after those genes, for example. So that's picking genes, and at this point it's more of an art form than any computerized algorithm. Although just getting all the information into one place where you could even do the queries-- what genes are under a linkage peak, what genes are differentially expressed, what genes are in known pathways-- those are actually tools that we're just trying to develop. So which genes, and then which variants? So ideally, you have the disease-causing allele in your hand and you're genotyping it, and you test it, because then the marker that you're testing, as opposed to being a distance away, is perfectly correlated with the variant. That's the ideal situation. And so that implies that maybe we should actually really concentrate our efforts on finding the variants that are most likely to be functional, similar to the way you might try to find the genes that are most likely to be responsible. So the most obvious things are missense variants. And these are easy to recognize. It's been shown, if you look at them, that they're much rarer than they should be, which means evolution has cared about them enough to suppress them. So many of them are actually mildly deleterious. Maybe they're not severely deleterious, but maybe this is just the right balance of things to cause common disease. And even if you're doing rare variants, you can recognize them and group them together and study them as a class, if you have the model that rare variants cause disease. But we know, and there are plenty of cases now, where the variant that's finally been identified for a complex trait has actually not been a missense variant. It's been a regulatory variant that affects the level of the gene but not the structure of the protein it encodes.
There's a great example from type 1 diabetes from John Todd's group and others where there was a missense variant that was thought to be the right variant, but then they did a huge study and showed that in fact, all of the association that they saw with the missense variant was actually explained by something in the 3 prime UTR, and it affected the level of the gene. So the question is, well, these are hard to recognize. There's lots of variation out there we have no idea how to interpret. So the reason we can recognize missense variants is that we know the genetic code. They cracked that soon after DNA. But we don't know the regulatory code. So there's some effort now to try to crack, essentially, the regulatory code by using multiple species and doing comparisons. So this is from Eddie Rubin's group. This is a great website. It's called the ECR browser. And what you see here is regions that are actually evolutionarily conserved across species and therefore may be important. Evolution has cared about them, again. So they may be important regulatory regions. And there's experimental backup for that hypothesis. So basically, that would suggest-- and [INAUDIBLE] have suggested this approach, at least for missense variants-- that we should resequence targeted regions, try to find every variant that's in there, and make sure we type all of them. And again, the issue is that resequencing is expensive. But we might not have the causal variant in hand if we don't go through this approach or if we're bad at guessing what the regulatory variants are. So we should try to understand also the correlation between variants. I alluded to the fact that the SNPs are redundant with each other. So if you have a causal variant that hasn't been genotyped, you have to infer its effect by genotyping neighboring SNPs. And so they have to be correlated, or what's called in linkage disequilibrium, with the causal variant. It turns out that a lot of the time, that's true. So this is actually what the genome looks like most of the time, or most of the genome: that slide that I showed you before, imagine this is now zoomed out. We're looking now at 30,000 bases of a bunch of different chromosomes. What you see is if you have a C here and a C here, you can predict that you're going to have a G here, a C here, an A here, and a C there. And these patterns of alleles are called haplotypes. And in most of the genome, there are these blocks of correlation, or blocks of linkage disequilibrium, where there are only a few common haplotypes. In this case, there's the red, the blue, and the yellow. And occasionally, there's a shuffling by recombination. These were first identified by Mark Daly and colleagues in genetics, and then this was shown to be true on a genome-wide scale by Stacey Gabriel, et al. And the reason this is useful is that if there's a causal polymorphism, but not one that you've typed, you can predict it pretty well by the neighboring markers. So that if you have a C here and an A here, then you probably are carrying this causal polymorphism. I'll take you through briefly, but I'm going to blow through this a little bit, just the range of what these patterns look like. There's a measure of correlation, which is not important to understand, called D prime. Basically, red means that the SNPs are correlated with each other, and any other color means that you're not sure if they are.
And I'll show you a little plot like this, where this is the data for markers 1 and 2, and this is the data for markers 3 and 5. And this is what blocks look like, where you see a whole red triangle here, which means that all the markers 1 to 6 are correlated with each other. And then here, 8 to 13 are all correlated with each other. And that corresponds to there being two blocks of linkage disequilibrium. And again, from the Gabriel, et al. paper, if you look-- just concentrate on the figure on the right-- in, for example, a European population or Asian population, almost 40% of the genome is found in blocks of over 50 kb, and the average block size is about 22 kb. So on average, if you look in the genome, there's this correlation that extends for tens of thousands of bases, on the order of the size of a gene, although it doesn't necessarily correlate perfectly with genes. And within those blocks, as I showed you with that red, blue, and yellow pattern, there are only a few common haplotypes present, so on the order of three to four for non-African populations and one or two more for African populations. And they explain the vast majority of the chromosomes in that population. And furthermore, the haplotypes themselves are generally shared, and Africans have a little bit more. And this is because when humans emerged from Africa, only a subset of the genetic diversity went with them. So Africa is actually a more diverse continent than the rest of the world is. And I won't go through the reasons that blocks exist, but basically, it has to do with the nature of recombination, the fact that there are hotspots of recombination, and this out-of-Africa pattern. And so that's led to a proposal that basically, if you can recognize these red, blue, and yellow patterns, you can then identify a few tag SNPs, in this case, two, which distinguish those patterns. So if you have the variant for tag SNP 1, you're red; if you have the variant for tag SNP 2, you're yellow; and if you don't have either variant, you must have been blue. And so this means that if you can type enough markers to distinguish these patterns, you can then identify these tag SNPs. And that greatly reduces the effort you need to cover a region. And so that's the goal of the human haplotype map, which is ongoing. It's a $100 million international project. It's basically to type about a million or more markers in three different reference populations-- European-American, West African, and then East Asian, which is Japanese and Chinese. And you can go to the HapMap website or see the Nature description. But its goal is to identify all these tag SNPs. And so the approach would be basically to take the SNPs from the database, genotype these SNPs in reference panels, measure the linkage disequilibrium, identify the tag SNPs, and then take those SNPs and type them in the population. So that's how one would set up, in 2004, to do an association study, where you pick genes. You would understand the haplotype structure. You might even do some resequencing to make sure you had all the missense variants. And then you would take some set, some subset of variants, and type them in populations and do your association study. The problem is that after you've done all that work to set up to do this, you do an association, you publish it, and you find that it's very difficult for anybody else to reproduce it. So that was a lot of work for nothing, right?
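To make the D prime measure and the tag SNP idea a little more concrete, here is a minimal Python sketch; it is not from the lecture, and the allele and haplotype frequencies in it are invented purely for illustration.

```python
# A minimal sketch (not from the lecture; frequencies are invented) of the
# pairwise LD measures discussed above: D, D prime, and r squared for two
# biallelic SNPs, computed from haplotype and allele frequencies.

def pairwise_ld(p_ab, p_a, p_b):
    """LD between two biallelic SNPs.

    p_ab : frequency of the haplotype carrying allele A at SNP 1 and B at SNP 2
    p_a  : frequency of allele A at SNP 1
    p_b  : frequency of allele B at SNP 2
    """
    d = p_ab - p_a * p_b                       # raw disequilibrium
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max > 0 else 0.0  # normalized to [-1, 1]
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

# Two SNPs on the same haplotype block: D prime near 1, high r squared.
print(pairwise_ld(p_ab=0.28, p_a=0.30, p_b=0.35))
# Two SNPs separated by a recombination hotspot: essentially independent.
print(pairwise_ld(p_ab=0.105, p_a=0.30, p_b=0.35))
```

The first pair behaves like two markers inside the same block, where one SNP can tag the other; the second behaves like markers on opposite sides of a hotspot, where typing one tells you essentially nothing about the other.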
So we wanted to understand, if we were going to try to use this as a tool, well, why are association studies not reproduced? And there are really three possible explanations. One is that it could be false positives. So when you did your association, it was just wrong. Maybe you tested a lot of variants in a lot of genes, and by chance, you got a p value of 0.01. So you go ahead and you publish it. And it was just because you had flipped a coin a bunch of times, and you got a run of heads. Doesn't mean the coin is two-headed. Or it could mean that, in fact, you were right, but then the people who came after you had little, small studies, and they were looking for a modest effect, and they just didn't have enough power. So it actually was-- the coin was unfair. So it does come up heads more often than tails, but they only flipped it five times, and it came up tails-- or they only flipped it six times, and it came up heads three times and tails three times. This is equivalent to doing a study with not very many people. So that's just a lack of power. And then there could be true differences between populations, so that the marker really is associated with disease in your population but not in the population that somebody else studied. So we set out to review the association study literature and say, well, what explanations are actually going on here? And we identified, as of 2000 or 2001, just over 600 associations between common variants and disease. And there are way more than this now. There were 166 that had been studied at least three times, so we could get a handle on their reproducibility. And the first thing we noticed, of those 166, only 6 were actually highly consistently reproducible. So only 6 were seen 3/4 of the time. So you could say, well, this is just terrible. Almost all associations just are not really reproducible. Or you could say, well, actually, you've only tested 166 things. You found 6 real ones. That's pretty good. There are 10 million things to test out there. So these are the 6. And one of them is this paradigm that I showed you, APOE4. Then there are things like CCR5-delta32 and CTLA-4. Interestingly, this is that missense variant that I told you about, which actually should now be corrected to the 3 prime UTR. So even though they didn't have the right variant, it was still pretty reproducible. Factor V in deep venous thrombosis, and a couple of others. And there are more on this list now, although not many, that are that highly consistently reproducible. So what about the other 160? Are they just complete nonsense, or what? Why are they not reproducible? Well, the next thing we noticed was that of the 160, 91 of them were actually seen multiple times, so not just by the person who reported them, but by somebody else as well. And so that suggested that maybe there was something real going on, but we wanted to look at that in a more formal way. And so to do that, we basically picked 25 of these 160 associations, for some diseases we cared about and some others at random. And we got rid of the first person to report it. And we said, of all the other people who tried to replicate it, how well did they do? So there were 301 studies for these 25 associations. And if all of those 25 associations were incorrect, we would expect that the follow-up studies, in theory or naively, should have-- 5% of them should have, again, been statistically significant with a p value of 0.05.
1% should have had a p value of 0.01, et cetera, et cetera. So what did we see? We actually found that about a fifth of them had p values of less than 0.05. So this is way, way more than you would expect. This is not just by chance. And encouragingly, most of the associations were in the same direction. So it was the same allele that was associated with disease as was in the original report. So that would only happen not 1 in 20 times but 1 in 40 times. And furthermore, it wasn't that each of the 25 had a few replications. It was that there was a subfraction of them that had a lot of replications, and then the rest of them were never seen again. So you might say, could this be publication bias? So we obviously could only look at the reports that were published. So it might be that there were lots of people doing these studies out there, and when they got a p value of less than 0.05, they broke out the champagne, celebrated, and sent the report off to the journal. But maybe if they did it and the p value wasn't under 0.05, it lingered in the desk drawer. And that's actually the formal name for this thing. It's the desk drawer phenomenon. And so what we asked was, could there be a universe of unpublished studies lingering in desk drawers around the world that would explain why we saw so many of these studies? How many unpublished studies would we have to hypothesize per association to explain the fraction that we saw? Was it one or two or three, or was it just a ludicrous number of studies out there? And it turned out, actually, you would have to postulate a ludicrous number of studies to explain the positives we saw. You would have to postulate possibly about 40 to 80 unpublished negative studies per association lingering in desk drawers around the world. And there just aren't even that many people doing association studies for any of these diseases. So we think this is not publication bias-- and we did some other work to show that. So you asked about ethnic admixture stratification. So I want to talk about that for a little bit. AUDIENCE: I have one question. JOEL HIRSCHHORN: Yeah, sure. Go right ahead. Yeah. AUDIENCE: [INAUDIBLE] a rare disease. JOEL HIRSCHHORN: Yeah. Yeah. So yeah, it's not a common disease. You're right. So it's a very environmental disease. So what it is, actually, is it's probably a common disease, but the environmental exposure that's required is rare. And actually, another one was HIV, which unfortunately is not a rare disease now because the environmental exposure is so common. But you could imagine, if the world were a different place, it would be a rare disease, and the same sort of thing. And it may be the same thing as diabetes. Type 2 diabetes may be a common disease in part because of the environmental exposures. AUDIENCE: Like I said, I'm not sure how they can-- how can you get how you're looking-- JOEL HIRSCHHORN: Yeah, so it turns out it's actually a huge risk factor. It's a huge genetic risk factor. There's an interesting article in Science looking at this variant, showing that there have been waves of selection in favor of the resistance variant in tribes where prion diseases are endemic. AUDIENCE: It's just a pretty common sequence. JOEL HIRSCHHORN: Yeah. It's pretty common, and it turns out that it's a big effect. So that's how you get power, so that even if-- and there are enough sporadic-- people collect a couple hundred sporadic cases here and there. And it's just a big effect, and that's why they were able to see it.
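As a rough, back-of-the-envelope version of that desk drawer argument (my own rounded numbers, not the actual published calculation), you can ask how many hidden negative studies would be needed for the observed replication rate to be pure chance:

```python
# Rough sketch of the "desk drawer" arithmetic described above. The inputs are
# rounded from the talk (301 follow-up studies, about a fifth with p < 0.05,
# spread over 25 associations); the published analysis was more careful.

published = 301
positives = round(0.20 * published)        # ~60 replications with p < 0.05
n_assoc = 25

for label, alpha in [("p < 0.05 alone", 0.05),
                     ("p < 0.05 and same direction", 0.025)]:
    # If nothing were real, positives should be ~alpha of ALL studies,
    # published plus hidden:  positives / (published + hidden) = alpha.
    hidden = positives / alpha - published
    print(f"{label}: ~{hidden:.0f} hidden studies, "
          f"~{hidden / n_assoc:.0f} per association")
```

With these toy inputs the answer is on the order of tens of unpublished studies per association-- the "ludicrous number" referred to above-- and requiring the same direction of effect roughly doubles it.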
So even rare diseases can have common variants that contribute to them. So ethnic admixture. What is that? So this is an ideal epidemiology study. So when people collect cases and controls, they often will ask about ethnicity. And they have what's called self-described ethnicity. And the ideal thing is that everybody matches on what people put down. But people will self-describe differently. And there may not be an accurate self-description for what people's actual ethnic and genetic backgrounds are. So you hope that somehow this self-described ethnicity is good enough that you actually have a good match. But you can imagine that within some ethnic group, there might be a couple of subgroups. And I've drawn the distinctions here as a blue subgroup and a green subgroup, but in reality, there would probably be some shading from blue to blue-green to green-blue to green. So you can imagine that there might be some different ethnic subgroups. And you could think about this, for example: European might be Northern European and Southern European or something like that. And there might be diseases where the disease actually is more prevalent, let's say, in northern Europe. So you might have more cases of Northern European ancestry and more controls of Southern European ancestry. And for example, type 1 diabetes might fit this pattern. Much more common in Finland than in the South of France, for example. So if you then had a marker that tracked genetically with Northern European ancestry but not with Southern European ancestry-- and there are not a lot of markers like that, but there is at least one that I know of-- then you might find that that marker would be overrepresented in the cases relative to the controls just because that ethnic group was overrepresented. And this is thought to be more of a problem also in populations that have had recent admixture, like African-American populations or Latino populations, where instead of this being Northern European/Southern European, this might be degree of ancestry, so what fraction of your chromosomes have alleles that are more common in Europe versus alleles that are more common in Africa. And you can end up with this same setup. So you can get, in theory, false positive associations from that. So there are ways around it now. So the first way, that was proposed by Rich Spielman and then a whole bunch of other people, is to use what are called family-based tests of association. And I'm not going to go through the details of those other than to tell you that they are immune to this problem. And the other way is, basically, if you can't do the family-based test-- and there are a lot of reasons you might not be able to-- you can use something called genomic control, where you type a lot of random markers, and you see, well, do those random markers give you these spurious associations? And you can use those to correct your association results, or even to rematch your cases and controls to make sure that, regardless of what the self-described ethnicity is, you can match them by their genetic background as assessed by these 100 markers. So we actually looked-- at least for our little study, most of the things that had replicated were actually seen in family-based studies or in multiple ethnic groups, which makes admixture less likely. But that doesn't mean this isn't going on in a rampant way.
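The genomic control idea just described can be sketched in a few lines. This is a minimal illustration of that general approach, not the speaker's own analysis; the marker statistics are simulated, and the 1.3-fold inflation is an invented stand-in for stratification.

```python
# Minimal sketch of genomic control: estimate an inflation factor lambda from
# association chi-square statistics at random "null" markers, then deflate the
# statistic at the marker of interest. Simulated data; not from the lecture.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Hypothetical 1-df chi-square statistics at ~100 random markers, inflated by
# stratification (simulated here as a uniform 1.3x multiplier).
null_stats = 1.3 * rng.chisquare(df=1, size=100)

# Under the null, the median of a 1-df chi-square is about 0.4549.
lam = max(1.0, np.median(null_stats) / 0.4549)

candidate_stat = 12.0                    # chi-square at the candidate marker
corrected = candidate_stat / lam         # deflated test statistic
print(f"lambda = {lam:.2f}, corrected p = {chi2.sf(corrected, df=1):.2e}")
```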
And for example, for the ones that were never replicated, it could well be that some of those false positives are due to admixture. Then there are true population differences that could be coming into play. There are a lot of different things. It could be different diets in different populations. It could be that there are other genes that are more common in one population than in another, and there's some interaction between what you're looking at and that. And I want to talk about one very specific type of difference that at least we could address, which is that there might be different patterns of correlation in different populations. And the marker that's being used might not be the causal marker. It might be something which is correlated. So I showed you this diagram, where, if you look at this two-marker haplotype, you use this CA haplotype to try to predict whether the causal polymorphism is there. And it does pretty well because there wasn't very much of this recombinant haplotype here. So here, actually, the prediction-- I haven't drawn the arrows. But the prediction would actually not be right for this one chromosome. But most of the time, the correlation is pretty good. But what if there was a lot of that recombinant chromosome in some other population? Now, all of a sudden, you have a common CA haplotype without the causal SNP. And so the correlation is now no longer so good, so you might not see an association. So that's at least one possible thing to keep in mind: the marker that you're looking at might not be the causal thing, so it's basically about understanding the LD patterns around an association. So once an association has been found, typing lots of other markers in there and seeing whether there is another marker that explains it better will become important, and almost nobody does that right now. And then finally, the last thing that we looked at was, could it be that some of the associations that we were seeing in our meta-analysis are actually true, but the studies that said that there was no association were actually incorrect? And to try to get a sense for this, what that requires is that the sample sizes were too small to detect the association, which means, in turn, that the association must have been a weak association, a modest effect. So we estimated the effects for all of these associations by pooling all of the data. And so eight of them replicated. And the question was, what was the effect size? And basically all eight increased the risk of disease by less than twofold. And some of them were just a 10% increased risk of disease associated with having the allele. And for 10%, for example, you need thousands of cases and controls. And almost none of the studies we looked at had that. So almost all the studies were underpowered for almost all of the associations we looked at. So lack of power is rampant in this field. And a negative study by itself, unless it's got many thousands of people, really can't be taken to be worth much on its own. It can be worth a lot in the context of other studies. One last thing that I want to take you through, because it's an important issue in thinking about interpreting the literature: when we did this estimation of the actual genetic effect and compared those less-than-twofold risks to what the first report had claimed, almost always, the first positive report had overestimated the actual genetic effect.
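To put a rough number on the "thousands of cases and controls" point, here is a sketch using the standard normal-approximation sample size formula for comparing two allele frequencies; the control frequency, odds ratio, and power target are my own illustrative choices, not values from the study.

```python
# Rough sketch: allele-based case-control sample size for a modest odds ratio,
# using the standard normal-approximation two-proportion formula. Parameter
# choices (30% control allele frequency, OR 1.1, 80% power) are illustrative.
from scipy.stats import norm

def alleles_per_group(p_ctrl, odds_ratio, alpha=0.05, power=0.80):
    """Chromosomes needed per group to detect an allelic odds ratio."""
    odds_case = odds_ratio * p_ctrl / (1 - p_ctrl)
    p_case = odds_case / (1 + odds_case)
    p_bar = (p_case + p_ctrl) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_case * (1 - p_case) + p_ctrl * (1 - p_ctrl)) ** 0.5) ** 2
    return num / (p_case - p_ctrl) ** 2

n = alleles_per_group(p_ctrl=0.30, odds_ratio=1.1)
print(f"~{n / 2:.0f} cases and ~{n / 2:.0f} controls")   # roughly 4,000 each
```

For a 1.1-fold allelic odds ratio at a 30% control allele frequency, this lands at roughly 4,000 cases and 4,000 controls for 80% power, which is why small follow-up studies were essentially uninformative on their own.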
And we wanted to know whether this overestimation is consistent with a phenomenon called the winner's curse. So the winner's curse is best described in auction theory, and it goes like this. Imagine that there are a bunch of people who are bidding on an item. And they all have an unbiased estimate of the value of that item. But it's not that precise. So people are just as likely to overestimate its value as underestimate it. And they all place bids. And let's say they bid what they think the item is actually worth. Those unbiased bids will therefore fluctuate around the true value. And then one of those will have fluctuated up the most, and that will be the winning bid. So the winning bid, conditional on it being the winning bid, will almost always have overestimated the value. And the best description of this is a Samuelson article called "I Won the Auction but Don't Want the Prize." So beware on eBay. But so in association studies, basically the winning bid is equivalent to the first person who gets an exciting enough result that they can publish it quickly and in a prominent journal and that sort of thing. And conditional on them having found something interesting, it's likely that they overestimated the true value. And the true value might either be a weak effect, a weaker effect, or actually no effect at all. And only time can tell. And so we found that the degree by which they overestimated was generally consistent with the winner's curse phenomenon. So these are the take-home messages for reading the association study literature. So if you pick up a paper and it says that something is associated with a disease, there is a possibility that it will be true. But it's, I would say, well under 50%. It's not 0% either, but it's well under 50%. The genetic effects for associations are likely to be quite modest. So if somebody is claiming a fivefold effect and their confidence intervals go from 1.1 to 40, it's probably closer to 1.1, if not 1. And then because of these modest effects, you need large study sizes to detect these reliably and also to narrow those confidence intervals down. Now, I'll just give you an example, our favorite example, of association of a missense polymorphism-- or at least we think that's the causal polymorphism. In type 2 diabetes, there is a gene called PPARgamma, which is important for fat cell differentiation and is a nuclear hormone receptor. It probably binds fatty acids. So the first study that was published is right here. So each line is a study. The point is the point estimate for the effect on diabetes. And actually, in this case, anything to the left of this line is an association with diabetes. And the line around the point represents the confidence interval around that estimate. So for example, this study up here, even though it trends toward association, was considered negative because the 95% confidence interval crosses 1. So this is the first study, and you can see the winner's curse in action here: it overestimates; it is the strongest estimate of risk. And it was followed by, I believe, these three and a bunch of other studies, all of which were considered to be negative because their 95% confidence intervals crossed 1. We then did a pretty big study where we showed an association. We actually pointed out that all of the literature to date was consistent with an association somewhere in this range.
Much weaker than originally described, but modest enough that these studies didn't have power to pick it up. And in fact, as you get larger and larger samples, what you can see is that the confidence intervals tend to really focus in. And when you put all the data together, you get a pretty narrow confidence interval. And now, the p value for association is 10 to the minus 9. So it's real, but it took 20,000 alleles from people with diabetes and controls to get there. AUDIENCE: Are all the studies in the same [INAUDIBLE]. Can you really put them all together and do the overall analysis? JOEL HIRSCHHORN: So for diabetes, the definition of diabetes is pretty clear. There's a WHO consensus. And they're more or less the same definition. I would say close enough. The allele itself is clearly the same variant. And assuming that the quality of genotyping was good, probably they were looking at the same thing there. Obviously, the populations are different. So I talked about population differences. So it turns out also that one of the best examples of heterogeneity between populations is for this polymorphism-- not so much in diabetes, although it hasn't really been looked at well, but actually obesity. So this has been looked at a lot for obesity, with wildly divergent results. It turns out that how much trans fatty acid you eat, or the ratio of saturated to unsaturated fatty acids, affects whether this polymorphism has an effect on obesity. I can't remember right now which way it goes. Basically, on one diet, which is much more common in Western countries, there is, I think, no effect. And on another diet, which is much more common in Asian countries, there is an effect on obesity. And actually, if you look, it turns out that these three are all in Asian populations. So it could be that there is some heterogeneity going on there, although it's only marginally statistically significant whether any of these studies are different from any of the others. So these are mostly consistent with what you'd get if you took darts and threw them all at the same odds ratio-- you might end up with points that looked more or less like that. But it could well be that there are differences. For less well-defined things-- so for Alzheimer's disease, there's a little more debate sometimes about whether it's pathology-confirmed or not and that sort of thing. So all right. So what to believe is the question. So should you believe it? I would say initial skepticism is warranted. Replication, especially with low p values-- and maybe we can discuss what that means-- is encouraging. Large sample sizes are crucial. So if you're going to take anything home, there you go. So I want to quickly-- this is a quantitatively-oriented class-- talk about applying Bayes' theorem to interpreting association studies. So Bayes' theorem is one of the fundamental theorems of probability. And in this case, what I've done is substitute in the terms "causal" and "probability of observing an association." And causal is the probability that the variant that you're looking at actually is causal. So if you go through just plugging it into Bayes' theorem, this is what you get. And what you're interested in is basically: given that I've seen an association, what's the likelihood that it's true? That's what you want to know when you pick up the paper. So this has to do with three terms. One is the probability that something is associated given that it was causal. So that's the power of the study. So would you see an association if it were causal?
Would I have seen this association data if it were not causal? Well, that's the p value, the probability of observing the data by chance. And then the probability that it's causal to begin with, which is your prior probability. So this is the key to Bayes' theorem, which is that you have to specify the priors, which is, of course, always the hardest part. It's very easy to write down Bayes' theorem, but then specifying your prior distribution can sometimes be quite tricky. All right. So I'm going to take you through what I believe prior distributions look like within a couple of orders of magnitude. So what are the prior probabilities? So I'm going to say there are about 600,000 independent common variants, and it has to do with the degree of redundancy that's been determined by the hap map and stuff like that. There are 10 million in total. Some of them aren't that common, so I'm going to shove those under the rug for a little bit. And of the rest, there's enough correlation that I'm going to say there's about 600,000 variants that you can type. The other assumption is that at least a few of those are going to be causal. So I'm assuming the common variants play some role in disease. So I'm going to say somewhere between 6 and 60. It may be more, but I would say causal to a degree where you have any power to detect them at all because obviously, if they have no power to detect them, they may as well not exist from the point of view of this study because if you plug in 0 for your power, then everything starts canceling out. So your prior probability is 1 in 10,000 to 1 in 100,000 more or less. How about candidate genes? Let's say, well, I'm not studying-- you say this paper is not studying some stinking random variant. This is the greatest candidate gene in the whole world. Well, so we've gone through the exercise of writing down all the candidate genes that we think are great candidate genes for diseases. And you usually come up with somewhere between, say, 100 and 500 genes, so let's say 300 that you come up with. And I would argue that you could tell as equally beautiful a story about almost all 300 of those genes once you knew post hoc that there was an association. So you can say, oh, of course. That's why protein kinase C theta is the key to diabetes. And you write your story. So I'm going to say that there are 300 candidate genes, all of them about equally likely to be associated. Again, using some things about there are about three or four blocks of linkage disequilibrium per gene, about three or four haplotypes per gene. I'm going to say there are about 12 independent variants per gene. So there's about 3,600 candidate variants on your list of 300 genes that you've just made. Let's assume-- and this is I think somewhat generous, but let's assume that all that biology that went into making that list is worthwhile and that half of all the causal variants are in the candidate genes. Well, then if you do all the multiplication, the prior probability is about 10 times higher for a candidate gene as it is for a random gene, 1 in 100 to 1 in 1,000. And then what are the probabilities you suggested looking under a linkage peak? Let's go do all the genes under a linkage peak. Well, what are the prior probabilities there? Well, let's say actually not 200 genes. Let's say there are only 100 genes under a linkage peak. Again, the same 12 variants per gene. 
But this time, the assumption is there's only one gene or one variant that's under that linkage peak that's actually contributing. So again, the prior probability works out to be about the same as a candidate gene, 1 in 1,000. So I would say you should probably, at this point, be agnostic as to whether you're focusing all your effort under a linkage peak or on a list of candidate genes that you've come up with through biology. So let's assume that you now do your study, and you achieve some magical p value of 0.05. What's the likelihood that the association is correct? So take the greatest candidate gene in the world. So this is actually-- I'm going to give you 10 times more likelihood that this was correct than your typical candidate gene. Prior probability of 0.01. p value, 0.05. Posterior probability? This is also assuming perfect power to detect your association, which means you did this in a big study. If you do it in a small study, this number goes down, because this thing scales with power. So the posterior probability, the chance that you're right, is about 15%. So that's a little bit disheartening, maybe. How about a typical candidate gene? This is the typical association study that's done. There's about a 1% chance that a p value of 0.05 represents a true association. Candidate or linkage peak, so Calpain-10: 1% chance with a p value of 0.05. Now, their p value was lower, but still, that's not much lower. Random gene in the genome? Basically guaranteed to be wrong with a p value of 0.05. So what kind of p values do you need to actually have a pretty good posterior probability? Because again, this is assuming good power. So basically, what you need is p values that are somewhere in the range of 10 to the minus 4 to 10 to the minus 6. And this could be a single study or all studies put together. So basically, most reported associations are probably incorrect, but not all of them. Some will turn out to be correct. And I would say that if you have low p values replicated, or there is a really, really, really, really good reason for plausibility, then that makes it much more likely to be right. And then just this last point: genes under linkage peaks are about the same as any other kind of candidate. Obviously, if you have a candidate that's in a good pathway and it's under a linkage peak, that probably raises its prior probability. Oh, sorry. AUDIENCE: [INAUDIBLE] you're not just looking at the state that's [INAUDIBLE] independent SNPs being present, contributing to [INAUDIBLE]. JOEL HIRSCHHORN: Yeah. So I haven't talked at all-- I only alluded briefly to gene-environment interaction with the PPARgamma and the diet thing. And I haven't talked at all about gene-gene interaction. So there's a hot debate going on in the field, I would say, about whether one should look for gene-gene interaction without any evidence that either gene is involved, which basically involves doing all the pairwise, at least, if not three-way or four-way or n-way combinations. And the challenge for that becomes apparent when you say that there are actually 600,000 independent variants across the genome. So even if you're just doing pairwise, that's 6 times 10 to the fifth squared, so it ends up being 3.6 times 10 to the 11th pairwise combinations that you have to look for. And so you know that you're actually going to get a p value of 10 to the minus 11 for interaction at some point, just because that's how many pairs you've tested. In fact, you can get 3.6 of them.
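The Bayes argument above can be written out explicitly. Treating the achieved p value as the false-positive rate, and the power as the probability of seeing the association if the variant really is causal, the posterior probability of causality is power times prior, divided by power times prior plus p value times (1 minus prior). The sketch below plugs in the rough priors from the talk with perfect power assumed; it lands in the same ballpark as the figures quoted, and it shows why p values around 10 to the minus 4 to 10 to the minus 6 are needed before the posterior looks comfortable.

```python
# Minimal sketch of the Bayes' theorem argument above: the chance an observed
# association is real, given a prior that the variant is causal, the study's
# power, and the p value (treated as the false-positive rate). Priors are the
# rough orders of magnitude from the talk; perfect power is assumed.

def posterior(prior, p_value, power=1.0):
    true_pos = power * prior
    false_pos = p_value * (1 - prior)
    return true_pos / (true_pos + false_pos)

for label, prior in [("best candidate gene", 1e-2),
                     ("typical candidate / linkage peak", 1e-3),
                     ("random variant", 1e-5)]:
    print(f"{label:32s} p=0.05 -> {posterior(prior, 0.05):6.1%}   "
          f"p=1e-5 -> {posterior(prior, 1e-5):6.1%}")
```

Dropping the power below 1 scales the posterior down further, which is the point about small studies.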
So with gene-gene interactions, it becomes an issue of power again, in that to correct for all the different hypotheses that you'd be testing, you need an enormous sample size, or you need to hope that the interaction effect is just so enormous that you actually get a p value that survives that correction for multiple hypothesis testing, which might be the case. But then the question is, if that interaction is so enormous, you have to draw very strange models for you not to also pick up at least one of the two as being causal. Now, what should be done but isn't is that once you have something that you really believe-- so for example, we work in the diabetes field, and we really believe in PPARgamma Pro12Ala. But we do not yet-- and we should-- test for pairwise interactions between every single thing we genotype and Pro12Ala, because you can draw plenty of reasonable models where you would only detect one of the two things that are interacting. And the other one would be completely masked or largely masked. So we don't do that, and we should. But I would say that, again, if you're picking up a paper which says, we didn't see anything with either one of these variants, but then we did the pairwise test and we see an interaction, think about your prior probability that that pairwise test was actually associated. The number of pairs in the genome versus the number that are actually going to be causal-- your prior probabilities are probably several orders of magnitude lower. So unless there's, again, some really great reason to know that these genetic variants interact. So that's my incredibly biased-- or not biased, but opinionated-- answer to that question. I won't go through why you get the same sort of winner's curse and things like that in linkage. But just to take you through rare variant association studies, again, just in case you pick up literature on this. So this is the typical rare variant association study as it is practiced in 2004. Genes are resequenced in affected individuals to try to find, say, missense variants. And then those missense variants are taken and typed in unaffected individuals. So it's sequencing here, genotyping here. And what's often observed is that they're not there. So I'll take you through a possible resequencing association study. Let's resequence your favorite gene, gene X, in 200 diabetic individuals, and, lo and behold, you actually find 10 rare missense variants, each seen in one person with diabetes. You then type 200 healthy individuals. Those variants are not seen at all. So you conclude that rare missense variants in gene X cause diabetes. So now I'm going to take you through an equally possible resequencing study. Resequence gene X in 200 Red Sox fans. If this is a gene that happens to tolerate rare missense variation, you will identify 10 rare missense variants. You might end up finding some common variants, but you might not. Many genes don't have any common missense variants. If you then take those 10 missense variants and type them in 200 other people, say Yankees fans, you will not see those variants. So this is very plausible. And you will conclude that rare variants in gene X make you root for the Red Sox, which of course is not the correct answer. And the reason for this has to do with the fact that in identifying these rare variants, you have to go to very great depth in doing the resequencing. So you get common variants if you sequence two chromosomes. So that's what I showed you at the very beginning of the talk.
So if you just compare two chromosomes and then look, are those variants common in the rest of the population, the answer is almost always yes. And the average allele frequency is about 1 in 5. So average is 20% for the minor allele and 80% for the other allele. But if you then sequence a large group of people, you obviously find all these common variants. You find those in the first four chromosomes you do. You basically run through all the common variants. But you actually keep finding variants as you go every once in a while, and this has to do with population genetics and what the frequency distributions are and stuff like that. But most of the variants you find, if you do deep resequencing, end up being these rare variants. They don't account for most of the heterozygosity because they're so rare. And in fact, their average frequency is something like 1 in 10,000. So if you type them in 200 people, you're never going to see any of them again. So what needs to get done is basically, you need to resequence the controls as well. There are other ways of doing it. For example, if you have families, can see that the rare variants actually segregate with disease. That's sort of equivalent to resequencing control. So that's one way around it. But that's basically what you need to do. And furthermore, what I said there in that previous slide was you identified 10 missense variants. Well, what if you identified 11, but one of them was common? And you said, well, that's a common one, and it's present in cases and controls, so that's probably not functional. That's probably-- and I'll just concentrate on the ones that are only present in cases. Well, that's obviously rigging the results beforehand. So if you limit yourself to just those variants that are present in cases and say those are the only ones that matter, obviously they're only going to be present in cases. So you have to have some way of grouping these rare variants for analysis before you know the results of the association study of which ones are in cases and which ones are in controls, so something like either I'll only look at frame shifts or nonsense mutations, or I'll only look at things that have this particular in vitro functional property or something like that. So just in the last couple of minutes-- or I'm actually way ahead. AUDIENCE: Seven minutes. JOEL HIRSCHHORN: Oh, no. Not way ahead. Good. 12:30. Good. For a minute, I thought I had 37 minutes left. I was like, I've been talking longer than that. OK. So what could we learn? And I think this is also important to think about in what are the goals, and what are the likely outcomes? There's a lot of hype around the whole field of genomics and SNPs and common disease and just to try to take a look at what is and is not likely. So I don't know if anybody's seen the movie Gattaca. So Gattaca, I think people actually have Gattaca in the back of their mind when they are thinking about this, which is that we should be able to tell by genotyping, if we could sequence somebody's genome, we will be able to predict-- if you look at the opening scene in that movie, it's like, probability of coronary artery disease, 98%. Probability of stroke, 100%. This sort of thing. So we already know that genetic variation, actually, even if we had perfect information, only explains 30% to 50% of common disease. But what's even the likelihood that we'll get that good a prediction? 
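One quick calculation before the prediction discussion continues, going back to the rare-variant arithmetic a few paragraphs up (my own numbers, using the roughly 1-in-10,000 frequency quoted for those deep-resequencing variants): the chance that any one of them shows up at all when you merely genotype 200 controls is only a few percent, which is why the Red Sox design comes out "positive" no matter what.

```python
# Quick calculation, following the rare-variant logic above: how often would a
# variant with frequency ~1 in 10,000 be seen at all in 200 genotyped controls?
freq = 1e-4
chromosomes = 2 * 200                      # 200 diploid controls

p_seen = 1 - (1 - freq) ** chromosomes
print(f"P(seen in 200 controls)         = {p_seen:.1%}")       # ~3.9%
print(f"Expected hits among 10 variants = {10 * p_seen:.2f}")   # well under 1
```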
I think the likelihood of getting that good a prediction is probably not that great, because it's probably a lot of alleles that are going to interact in complex ways that we may not be able to figure out for a very long time. But nonetheless, there is some thought that we may be able to identify people who are at high risk or, particularly, people who might be at low risk. If that were the case-- and I've shown here probably a very optimistic scenario, which is that you could enrich this much for the high-risk people-- you might say that these people would get a more aggressive preventive intervention, and these people might get the standard of care or might even be able to get slightly less aggressive interventions, depending on how the risk-benefit played out. This obviously requires that there be effective preventive measures and things like that. But you could imagine, for things like diabetes, you could say, you really are at high risk for getting diabetes. You really need to exercise and lose weight, and maybe you could hook that person up with a nutritionist and a personal trainer and all that sort of stuff. And it might not be something that we, as a society, could afford for everybody, but maybe at least for the people that are at the highest risk, we could afford it, if we could get there. But I don't think that this is necessarily really where the true promise lies. There's another possibility. This is called pharmacogenomics, which is basically reclassification of individuals to guide therapies. So you have this mix of people who have some disease, like type 2 diabetes, but it turns out it's actually type 2A, 2B, 2C, and 2D. And maybe we could classify them by DNA sequence or expression profile or some genomic method. And we could say, well, these people will do best with treatment A, these people with B, these with C, and these with D. And again, that may or may not come to pass, and certainly, people are looking at it. And I think it's going to partially come to pass. I'll just give you one real-life example that actually is not in the clinic yet, but I'm not sure why not. So there's a gene called CYP2C9, which actually metabolizes warfarin, which is commonly used to treat conditions involving thrombosis or blood clotting. Warfarin is an anti-clotting agent, actually a vitamin K antagonist. And this didn't quite-- I'm sorry. The white on yellow is not a good color scheme. But basically, most of the population has zero low-activity alleles. 20% of the population has one low-activity allele, and 5% of the population has two low-activity alleles. And if you look at the average dosage that people end up on by their genotype-- so this dosage is not based on their genotype. It's like there's trial and error by the physician, and they find, oh, all of a sudden, you have a nosebleed that won't stop. Maybe your warfarin dose is too high or your PT value is too high. We have to cut the dose. You see that there's a very nice allele versus dose correlation here, so that if you have two low-activity alleles and you get put on the typical 5 milligram dose, you're going to have bleeding problems. And in fact, you're going to be at six times the risk of serious complications. So personally, if I were going on warfarin, I would probably want to know what my CYP2C9 genotype was. But it's not yet standard practice.
One of the only cases I know of where it is, is actually if you have cancer and you're getting 6-mercaptopurine or related drugs. There's a gene called TPMT, which again metabolizes that into a toxic intermediate. And that has now become pretty much the standard of care, because again, the risk of death is higher if you have the wrong genotype. So death gets people's attention quite-- AUDIENCE: And that's-- JOEL HIRSCHHORN: --vigorously. AUDIENCE: Average dose, right? So-- [INTERPOSING VOICES] JOEL HIRSCHHORN: Yes. So there's a range. AUDIENCE: --histogram on the-- JOEL HIRSCHHORN: Oh, absolutely. Yes. So I have to say, I should have put error bars on here. I was looking at that and just noticing I did not have them. [INTERPOSING VOICES] AUDIENCE: --because even if you did a histogram-- JOEL HIRSCHHORN: Right. But-- AUDIENCE: --it doesn't necessarily change-- [INTERPOSING VOICES] JOEL HIRSCHHORN: Ah. So right. So if you could say, well, physicians can do this without knowing ahead of time, then this should not happen, right? AUDIENCE: Oh, yeah. JOEL HIRSCHHORN: Yeah. So yeah. So the argument is, oh, well, physicians are going to be good at this anyway. They're not. They suck at it. But I think the other thing that is potentially useful: if you look at the two most well-established risk alleles for type 2 diabetes, it turns out they're both in drug targets. Now, this is admittedly a little bit circular, because the reason they were looked at was that it was known ahead of time that they were drug targets. So they made sense as being good candidate genes. But it is interesting that for two of the major drugs that are used to treat type 2 diabetes, there are variants in their targets that affect genetic susceptibility to diabetes. And of course, the promise here is that you could find all of the other genetic pathways, even if they only had a 1.1-fold effect on disease-- so PPARgamma only has about a 25% effect on risk of disease. But you could easily imagine that you come up with a pharmacologic intervention that does a lot more than the genetic polymorphism does. There's only so much tolerance for how much genetic variation evolution will let happen. But it might be that giving thiazolidinediones is like having 10 times as much of an effect as having an alanine allele does. So the idea is that hopefully, by identifying the genes that are responsible-- and this is true for severe forms of the disease as well-- you might identify drug targets. And finally, of course, this helps just understand human biology, and that's always a worthy goal. So I should talk about potential difficulties from all these advances. There are obviously concerns about privacy with genetic data. And I don't really have time to go into it, but obviously, the big fear has to do with insurance discrimination and employers knowing about genetic diseases. And there are bills to deal with this that keep making it almost all the way through Congress, but not quite. So lobby your congressman. There's the Gattaca mindset-- improper interpretation of how predictive genetic information is. There are companies that will try to sell you that you should be on diet A or diet B based on SNP genotypes that are in those irreproducible associations.
There can be psychological impacts of if you say you have the high-risk allele for diabetes, that might-- without an understanding that this is a correlative rather than a perfectly predictive measure, that could have significant harm as well as impact on reproductive choices, again, depending on how predictive people think things are. There are huge interactions with concepts of race and ethnicity. We've alluded to this a little bit with different allele frequencies and different populations. Most alleles are present around the world, and you can't use generally any one variant to tell different ethnic groups apart, but there are definitely differences in allele frequencies between populations. And that has significant implications. And then finally, I certainly don't study the genetics of performance, but you could imagine that if people started getting into that, that could be quite dicey. And even if the science were perfect from a technical standpoint, it might create a lot of trouble. So David Altshuler is a collaborator with me on the diabetes work and also very involved in the hap map with the work that was done by Stacey Gabriel, Mark Daly, Steve Schaffner on haplotype blocks. The diabetes group also is Noelle Lake, Cecilia [INAUDIBLE] did some expression stuff. And Kirk and Lee and Eric were on the meta analysis association study. And there are a lot of genome project-type things that are obviously giving us all the tools to do this. So thanks.
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_6_Information_Science_at_the_Center_of_Genomic_Medicine.txt
ISAAC SAMUEL KOHANE: So let me give you a case history. This is work that I've done with one of my former interns, actually, who's a superb scientist at the Dana-Farber Cancer Institute, David Rowitch, who also is a part time neonatologist. I call this finding the needle in the haystack. And it's about studying the cerebellum. So I impressed, I hope, upon you on the last lecture, just how bad microarrays are. How noisy they are, and how difficult it is to interpret them. So now, I'll show you how-- darn it-- how, if you actually do the computation right, you can actually extract useful biomedical information out of it. So the cerebellum is a nice system to study because it has this nicely layered structure. The cerebellum is this part of the brain that sits in the posterior fossa, at the back. And has this very nicely layered structure which is detailed in this cross section here so that each layer has this distinct group of cells. And often, these group of cells are distinguished by different genes expressed at different times, specifically, for that cell type or cell layer. And the cerebellum has a pivotal role in locomotion and posture, but the fact that I can stand approximately straight and move around is almost solely a function of the activity of the cerebellum. And because of its nicely layered structure, the cerebellum has been the subject of a lot of neuroscience because it's much less complex than the other central neural tissues, in that regard. So there was a particular system that David was interested in. Sonic Hedgehog, it's one of these supposedly amusing names developed by fly biologists because fly biologists may frequently have a very bad sense of humor and they apply that to all the genes that they discover. Sonic Hedgehog is one such gene, turns out to be a very important signaling molecule that's very important in proliferation and differentiation of cells. And David wants to know, what was the role of Sonic Hedgehog in this outermost of the layer of the cerebellum, the external granule layer A, EGLA. It was known that cyclin D1 was expressed in this layer. And different other genes were expressed in other layers. It was known that Sonic Hedgehog was important in a variety of different ways. For instance, genes that had previously been implicated in a Sonic Hedgehog signaling pathway when a mutant cause in the mouse model here-- a posterior cerebellar tumor-- which is very much akin to the medulloblastomas that are a big cause of juvenile brain cancer, and a lethal one at that. And it's certainly been shown that medulloblastomas is are involved in the Sonic Hedgehog pathways, a large percentage of them. So the question was, could we find other culprits, other genes that are involved in the Sonic Hedgehog pathway? And it's a big haystack that we're looking into because we're looking in two large haystacks, actually. One is a large number of probes on the microarray. Two, we're looking at an entire cerebellum. A complex tissue-- even though it's one of the simpler neural tissues-- when in fact, we're trying to extract a signal out of one part of this complex tissue, there's a very superficial layer of cells called the EGLA. So can we find the signal in time and space? So we're not only looking for the right number of probes in the right location, but the right point in development. And this was quite a challenge for us because David is a personal friend and former intern. And he heard me pontificate about the importance of bioinformatics. 
And here was this close friend who is now engaging in this type of collaboration. And if I gave him the wrong candidates, that means I'd be wasting the time of his post-doc, of which he had only one at that time, and delaying him by six months if I gave him some wrong things to look at. So it was a relatively high stakes game. So the experiment was very simple. To harvest, initially, just day five-- postnatal day five cerebellum, expose it to Sonic Hedgehog, vehicle, and growth arrest media. Forget about the growth arrest media for a second. But basically, exposing it to Sonic Hedgehog or vehicle, hybridize it to the now obsolete Affymetrix mouse array. That generation of mouse array is obsolete; it's no longer available. And we did what we refer to in the grant as noise models in our bioinformatic analysis, which I'll detail shortly. And then we gave a list of nine significantly up-regulated genes and 10 genes not significantly up-regulated, and these would be subsequently screened with in situ hybridization to see whether, in fact, these predictions were correct. And so what we did was a very simple thing. If you recall, I showed you in the last lecture how noise grew dramatically at lower levels of expression, because you could flip the sign of the up or down regulation just by virtue of having low levels of expression affected by low levels of noise. So clearly, you need to have some sort of statistical measure of the separation of the two conditions. So in this case, the signal was the expression of a gene in response to the stimulus, namely, Sonic Hedgehog. And the other is the vehicle treated ones. And the question is, for each such gene, could we find those genes that were clearly separated across those two conditions? And there's a variety of ways that you could have done it. And in fact, as I promised, I'll have you explore some of the standard methods that, for instance, are things like a modified t-test, like the significance analysis of microarrays. But I will make the following point, which is, try to understand what you're doing by looking at ratios. For instance, just looking at a t-test, where you're looking at the difference in means, is fundamentally misunderstanding the computational challenge in a very simple way. What average do we want to calculate? And I realize not all of you are computationally involved, so I'll try to keep it simple. There are two types of averages, among many, that we might consider calculating. One is the simple arithmetic average, where you sum the signal and divide it by the number of samples, and you sum the control and divide it by the number of samples, and you take that ratio. Or you take the geometric mean of them. And why would you do one or the other? Well, let me ask you the following thing. When I first thought about this, I had a mortgage on my apartment, so it was incredibly important to me. If you have four different interest rates over 4 years, such that the amount you have to pay out is 1 plus R1, times 1 plus R2, times 1 plus R3, times 1 plus R4, and you're going to have to pay different amounts on that principal. How am I going to estimate the average interest rate I'm paying per year? Is it the arithmetic average of these Rs? No, because it's compounding. I know that's a simple point, but few people realize it's actually the geometric mean. And the same error is being made if you look at the arithmetic mean versus the geometric mean to look at ratios because, fundamentally, we're looking at a ratio.
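To make that point about averages concrete, here is a minimal Python sketch. The replicate intensities and the interest rates are made up for illustration; they are not the actual data from this experiment.

    import numpy as np

    # Hypothetical replicate intensities for one probe (illustrative numbers only).
    shh = np.array([220.0, 180.0, 950.0])      # Sonic Hedgehog-treated replicates
    vehicle = np.array([100.0, 120.0, 110.0])  # vehicle-treated replicates

    ratios = shh / vehicle                     # per-replicate fold changes

    arithmetic_mean_ratio = ratios.mean()
    geometric_mean_ratio = np.exp(np.log(ratios).mean())  # same as averaging log-ratios

    print(arithmetic_mean_ratio)  # dragged upward by the one outlier replicate
    print(geometric_mean_ratio)   # much less sensitive to that outlier

    # The mortgage analogy: the average annual rate over four years is the
    # geometric mean of the (1 + R) factors, not the arithmetic mean of the Rs.
    rates = np.array([0.02, 0.18, 0.01, 0.12])
    geometric_avg_rate = np.prod(1 + rates) ** (1 / len(rates)) - 1
    arithmetic_avg_rate = rates.mean()
    print(geometric_avg_rate, arithmetic_avg_rate)  # only the first is exactly right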
And we did the analysis both ways. And I can tell you, we knew the answer in this case because we did the in situ hybridization. And first of all, the good news. The good news that we published in PNAS a couple of years ago was that, essentially, 80% of the time that we said a gene was expressed in the EGLA based on this simple geometric mean, sure enough, by in situ staining, it was. And when we said it wasn't, it wasn't. But the point is, if we used the arithmetic mean, we had double the false positive rate. So just missing something as simple as the difference between the arithmetic mean and the geometric mean, which is a non-obvious notion to anybody who's just been used to doing t-tests, screws up the analysis. Let's make this more relevant to human disease. How can we leverage this developmental view of the mouse? Here we're talking about mice. How do we actually try to understand how this is reflected in its relevance to these tumors that we are talking about, medulloblastoma? So we had, actually, some medulloblastoma samples. And one thing I want to say about microarrays is that this community, along with the SNP community, has been unique in making its data freely available to the community. For those of us who have MDs-- or will have MDs-- let us be aware that still today, most clinical research papers do not publish the primary data from which the conclusions are made. Still do not. And therefore, you are unable to independently verify these results or come out with better results or with improved results. And that's why it was really incredibly important-- I'm sorry I didn't emphasize this before-- that the early practitioners of microarrays, such as Todd Golub and Pat Brown, published all the data right away after publication. In fact, I looked a half year ago at Todd's original paper in 1999, on AML versus ALL. And I think there were 700 citations of just that one paper. And it would not have been the same had they not published the data, because there were a lot of secondary analyses, and a lot of problems found with the paper and the analyses in the paper. It sets a whole scientific process in motion that you don't have otherwise. So we have this data on a variety of different human medulloblastomas that had been published a couple of years ago. And then, we could go to a database called HomoloGene-- and by the way, your next problem set, not the one I'm describing to you, but your next problem set, will be a treasure hunt where we'll make you hop across all the different databases that you should know about as basic-- not basic biologists, but basic consumers of the very simple kinds of information around about SNPs and the microarrays. But we'll get to the treasure hunt later. Suffice it to say that there is a database called HomoloGene which allows you to map the genes in humans to those that are present in mice. And it turns out that, present on those two microarrays, there were on the order of about 3,000 genes that are homologous between mouse and human. And so what we did then was to do a principal component analysis of these data sets. Now, hands up those of you who really understand principal component analysis? Two, two of you. So those who don't, let me try to give you the simple version of it. Principal component analysis, essentially, tries to refactor the data so that you're looking at these new variables that are linear sums of the original variables. In this case, the genes.
These principal components are linear combinations of the original measurements that capture, in successive components, the major amount of variation of the data set. So the first principal component is the linear combination that captures the largest amount of the variation in the data set. The second principal component is orthogonal-- essentially, at right angles to that first principal component-- and gets the second most amount of variance. And so on. And in fact, you have as many principal components as you have original variables. But the nice thing is that, heuristically speaking, by the time you've gone through about two or three or four principal components, you've typically captured on the order of 70% to 80% to 90% of all the variance in the data set. In other words, by doing these linear combinations of orthogonal vectors, you're able to capture most of the variance. And the nice thing about that is, for data sets such as gene expression where you have a lot of highly correlated behavior-- in other words, there are large numbers of genes that are really not that independent of one another-- this allows you to reduce the number of variables that you have to look at in an analysis from a very large number-- tens of thousands-- to just a handful-- the first principal components that allow you to capture the majority of variation. And that's, by the way, the intuition by which some of this face recognition software that you may have heard about from Homeland Security has been inspired. Essentially, taking the eigenface, where they look at the principal components that make up the features of the face and reduce the complexity to these principal components of a face, so that not only can you generate an eigenface, this generic face, but you can also map people to which faces they're most like. So it allows you a little bit of play and slop between different angles and different ways that you might look on a different day, because it's still capturing the major variance of your face. So what we did is the following. Not only was there day five of the mouse, but we actually captured multiple time points-- on the order of, I can't remember now exactly, maybe 15 time points. And it's a publicly available data set, if you're interested. 15 time points, going from embryonic days to postnatal day 60, I believe, of the mouse. And then when we did the principal components of the genes' expression patterns across this time series, we saw the following when we just plotted the genes by their position in this new coordinate system, the first and second principal components. And I've colored them by whether they fell into this section or that half section of the egg, so to speak. And why did I color it so? Because if you actually look at when these genes had their maximal period, their maximal time of expression, the ones in this area of the two components had an early peak, and the genes in this other area had a late peak. So the late-- sorry, the early set and the late set. In plain English, by just doing this simple decomposition-- the simple factoring into two principal components-- we've taken that time series and re-plotted the genes. And now, we have a set of genes that are clearly separating themselves out from another set of genes. And they're characterized by being early versus late peaking. Now, so what?
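For anyone who wants to see the mechanics, here is a minimal Python sketch using synthetic data as a stand-in for the mouse cerebellum time course; the gene counts, time points, and peak shapes are invented for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 3,000 genes measured at 15 developmental time points.
    n_genes, n_times = 3000, 15
    t = np.linspace(0, 1, n_times)

    # Half the genes peak early, half peak late, plus measurement noise.
    early = np.exp(-((t - 0.2) ** 2) / 0.02)
    late = np.exp(-((t - 0.8) ** 2) / 0.02)
    profiles = np.vstack([early] * (n_genes // 2) + [late] * (n_genes // 2))
    X = profiles + 0.2 * rng.standard_normal((n_genes, n_times))

    # Each gene is one observation; its 15 time-point values are the variables.
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)

    print(pca.explained_variance_ratio_)  # the first component dominates

    # Up to an arbitrary overall sign, the first-component score separates
    # the early peakers from the late peakers.
    print((scores[: n_genes // 2, 0] > 0).mean(), (scores[n_genes // 2:, 0] > 0).mean())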
We then took the human homologues of those genes and we asked in the medulloblastomas, which genes compared to non-medulloblastomas were upregulated and downregulated in the human tumors? Please? AUDIENCE: Describe the x-axis and y-axis, what's the-- ISAAC SAMUEL KOHANE: This is the first and second principal component. AUDIENCE: Yeah, but I mean, the number 0,1, 2, 3 and-- ISAAC SAMUEL KOHANE: It's, I can't-- I don't think it's zero based. I think zero is somewhere around here. AUDIENCE: So it's just-- I mean, I know, so what does that mean? ISAAC SAMUEL KOHANE: Its according to the coefficient. If you position that gene according to its position along those two principal components. So each one of these is a single gene. So then what we do is we plotted those up and down regulated genes by whether they fell into-- we just took those up and down regulated genes without knowing anything more. And we say, where do they fall into this late and early decomposition based on the principal component analysis? And the simple answer was essentially, 90% of the genes that were upregulated fell into the early period of expression. And similarly, around 90% of the genes that were downregulated-- in humans, mind you-- fell into the later phase of development of the mouse. Go ahead. AUDIENCE: I have a question the way you're presenting the data doesn't make apparent why you need the second component. ISAAC SAMUEL KOHANE: That's a very insightful question. And the answer is that for what I'm describing it, does not because the separation that you're seeing is just in one dimension and you're absolutely right. It doesn't add anything. Turns out, there were actually different processes that I'm not describing here that actually were separated out by going into the second principal component. But I think you're showing that you're understanding what the hell I'm talking about by saying, for this distinction early and late, all you need is the first principal component. You're absolutely right. So I just want to-- those of you who've taken basic pathology in medical school-- will hear the following. I don't know if I have a pointer to it or a slide, but it's been known by Cohnheim and others from the turn of the century have speculated that there's something about the embryonic program of development that is recapitulated by tumors. That tumors are similar-- not the same thing-- are similar to essentially a poorly controlled embryonic development because when you look under the microscope, a lot of the tissues in tumors are not as well differentiated as the tissue from which they came from. Liver cancer, if the liver cancer looked exactly like liver tissue, it wouldn't be liver cancer, it would be liver. And so they're actually more primitive versions. And this actually adds support to this. In fact, just to check on this, we did the following experiment, which I don't show here. So what we've done here, just to place this in an appropriate perspective, is take neural tissue from humans-- human tumors-- and projected it against the components of mouse development. And what we C is this nice separation between upregulated genes and the early phase. Now maybe all we're showing are general markers of hyper-proliferation and not something specific to the system. So what we did is we took a similar pair, namely, lung cancer and lung development. And lung cancer and lung development showed, essentially, the same relationship. 
But then when you took medulloblastoma against the lung development background, the separation was no longer as good. And when you took the lung cancer against the neural development, it was also not as good. So what that tells us is something much more nuanced than simply that oncogenesis generically recapitulates embryogenesis-- or the other way around, I should say. It says that there is a part of the differentiation program of that particular tissue which is recapitulated by the tumor. And that's already interesting because it gives us some insights into development, but it gets more interesting when you start thinking, well, I know a lot about different stages of development. Can I actually start pulling out my understanding of the mechanisms operating at different stages of development to understand what's different about these different cancers? So this is just showing you, in the time series, what the principal components were showing you before. And in red and green, we're distinguishing the genes that are up and down regulated. And you see that they have different time courses in the mouse data, depending on whether they were upregulated or downregulated in the human tumors. And as I said here, this is the slide I wanted: Lobstein and Cohnheim were among the first who theorized similarities between human embryogenesis and the biology of cancer cells. And it's actually not until now that we've gotten really much more objective evidence that this is the case. And the brain tumor classification system that is used, and that was devised by Bailey and Cushing in 1926-- from which our modern taxonomy is derived-- emphasized this. But they are very crude taxonomies and descriptive taxonomies. And here we're providing perhaps-- and I'm saying perhaps-- a much finer classification. Is that true? Well, let's turn around the principal components. Let's look at each tumor as a function of the genes that are upregulated. Sorry, let's look at the position of the tumors, now-- not the genes-- by their position in the principal component analysis. And to make a long story short, we were able to dissect out different days of development. So these are different mouse samples, and we're projecting the different human cancers onto the same principal component analysis of these mice. And this is very interesting in a number of different ways. One is, we're separating out different days of development in this principal component space. But we're also separating, in that same space, the tumors. Now that's fundamentally interesting because of what it's telling us-- and the reason why I bring this up in the genomic medicine class is because one of the fundamentals of modern medicine is the taxonomization of disease. And here, I'm providing a very, very quantitative measure of differentiation of these different tumors based on their position in the space of development, from a data set that you can never get from humans, but can from mouse. And this is truly exciting because-- I don't have time, well actually, I don't have a slide to show you, because we just submitted this-- sorry, it just got accepted to Genes & Development, which is an important journal in this space. We can actually separate the metastatic from the non-metastatic medulloblastoma in the same way. They separate in that same developmental space. So what does this tell us more generically? Principal component analysis is not a commonly used tool for clinicians, obviously.
And yet, here we have a tool that is creating a much better taxonomy of disease with predictive power previously not available to us. And allows us to understand-- because we're saying what are the genes? Because remember I said the principal components are a linear component or linear combinations of genes? We can say why is this gene in this position? Which genes have the major weights which make it responsible for that tumor being in that position. Might give us some insights into the underlying tumor mechanism. And so here, we have a fine grain [INAUDIBLE] which was not previously available to us. And I think it's, therefore, not a leap to think that-- in similar kinds of processes-- that computational approach to these data sets will allow us to recast our very, very squishy current taxonomies. I think I told you in the first lecture, with a few exceptions, the most notable exceptions being microbiology, most of our disease classifications are not mechanism-based. In microbiology, this classification is mechanism-based. It's the organism that's infecting you that's the name of the disease. But most of the diseases like lupus, probably a mixed bag of inflammation inflammatory diseases that we don't know, polycystic ovary diseases, we're describing symptoms, not pathophysiology. And I think this is allowing us to get much, much closer and therefore, a much more robust and fine-grained understanding of that. So my interim summary here is that this computing allows us to identify some pathways and provides more importantly, a natural classification of disease with further insight. Apparently, I don't know how to open my book. Go ahead. AUDIENCE: Did you try to redefine bacterias that were [INAUDIBLE] against the optimized use of combination of genes that you expect to be a developmental program, or something to that level? ISAAC SAMUEL KOHANE: So Jose is asking a very interesting question which the short answer is no, but I think it's a very good question, which is-- but I wouldn't know exactly where to start. But surely there is a thoughtful way to respond to it. Could you take linear combinations of genes-- and I wouldn't know exactly which linear combination of genes-- but maybe just using those as a starting point, perhaps, genes that have been implicated at various stages of development and using those. And I think that's a good way to go. My understanding of developmental Biology is unfortunately, it's very sparse knowledge at different stages. So it'd be hard to know which genes are generally informative. Nonetheless, I'm a big fan of the approach that you're suggesting, which is a knowledge based approach. What we're doing here was clearly knowledge free. We were just taking the data and just saying, what explains the maximum variance and going from there. But if we could conceive that process or the set of genes that we knew were implicated, I bet we'd have much better resolution than we have currently. So I think it's a very good thought, but we have not done that. Which brings me now to another aspect of computing. I've shown you guys this slide. Does it look familiar? Good. So here's the fantasy that most deans of medical schools and heads of hospitals have. And, again, this is not a very technical point. It's more of a sociological point, but it will end up being technical because it can tell you about a whole other area of biocomputing that I think is ripe for the picking, but it's also a rate limiting step. So they all have the fantasy. 
Now that they've seen things like, oh, yes I can predict different outcomes of different tumors, let's say, I want to be able to develop a set of targets that a drug company is going to pay me a lot for. So the fantasy is the nice doctor talks to the nice patient. He gets consent in the nice hospital. They have a family history. They bank the tissue obtained. They do some genomic analyzes. They do a clinical annotation. They do some fancy mumbo jumbo bio-informatics. And lo and behold, they're going to have the target that a drug company is going to pay a lot of money for. That indeed is-- for those of you who are more entrepreneurial oriented, that's the fantasy that's launched. Many, many Venture Capital funded ships, like the decode project and others. And the problem is the following. It's that this central piece, the phenotypic annotation, ends up being the hardest part. Understanding what really happened to the patient. Did the patient really have a tumor? How old were when they had the tumor? What drugs were they responsive to? That is the hard part. And any of you have ever looked at a clinical chart can understand why that is. There's just very little useful machine-able data in the clinical chart. Because in fact, the real fantasy is the following. And what I call [? Ratwitchz ?] at the Medical College of Wisconsin-- what they do is they take rats, consomic rats, where they have systematically substituted one chromosome for another. So which would give them a very big efficiencies of identifying linkage with a different trait. What they do is they take these different chromosomic rats-- different strains with well-understood genotypes-- and they exposed them to very well characterized environmental exposures, such as hypoxia. They put them in the oxygen equivalent of top of the mountain for several weeks. Or they give them a high salt diet or high fat diet. Or they volume deplete them. And after that wonderful experience, they then physically reconstruct them and rip out their heart and put their hearts on these prep machines where they can look at the contractility curves of the heart in the perfused prep. And that's great because now you have a well-defined phenotypic environmental characterization. You know the genotype and you have the expression. And by the way, let me say and point you to the following site, PGA.mcu.edu. They have all that data online. The phenotypes, the genotypes and expression. And they do it for cardiac phenotypes, kidney phenotypes, lots of different phenotypes and different parts of medicine. And so this is the fantasy. But of course, patients would probably objectively subjected them to this kind of treatment. So this high throughput phenotype fantasy just is unlikely to happen anytime soon. So we took a very different approach when we were dealt with this problem. And that was how to get sufficient numbers of samples in the right amount. And you should know that there are companies like Ardeas, which have been given millions of dollars of venture capital unsuccessfully to try to solve this problem. To get enough tissues in the right amount. So the National Cancer Institute put out a request for applications saying, listen, there's thousands of tissues available throughout the pathology repositories of our country that we want to be able to do genomic studies. How to go about that? 
So inspired by Napster and Gnutella-- and for those of you who don't know what those are, those of you who are not aficionados of file sharing-- Napster is a file sharing service where there actually is a centralized directory, but the data themselves are decentralized across multiple machines. Gnutella, in order to avoid getting anybody sued, is fully decentralized. The directories are decentralized and the data are decentralized. And what we built, the Shared Pathology Informatics Network, SPIN, that's funded by about $7 million from the NCI, is exactly the same thing. And what we did is to take advantage of the twin obsessive compulsions of pathologists. Pathologists are probably the first and best taxonomists of the medical profession. They actually categorize organs and specimens better than anybody else. Second, they're obsessive compulsive collectors of tissue. And they keep tissue around for years and years. And so we thought that if we could take advantage of those twin compulsions, we could actually be successful. So at a bird's eye view, what we did was the following, which was to create a network whereby a user, with a web browser, could send a query to his Query Composer, which would then send out a query to this nebulous thing called the SPIN network, which would, using this peer-to-peer technology, respond with which samples were out there. More specifically, each tissue bank or institution would have its own node on the network, on the internet. And we provide them these three open source tools which allow them to extract from these pathology databases-- because there's a textual report with each pathology sample-- both the anonymized textual report, because we have this anonymization program, and a few coded data elements to enable search. So that when you put in the query, the query percolates in a fully distributed way. There's no center to this network. This query percolates throughout the system. All the nodes respond eventually, if they can, and you get back the sum of all specimens involved. And those of you even in the medical profession probably don't appreciate the following point. The reason we created a peer-to-peer system is that pathology tissue banks are jealously guarded by the pathologists and by the surgeons who built them. They just don't want to share data. And it's essentially impossible today to know-- even across the street at the Brigham-- what samples they have. And by allowing them to actually fully control their own node and control what they expose to the outside, we're able to overcome the sociological obstacles to data sharing so that we can actually do this. So for instance, we've done this now-- this is actually an out-of-date slide-- not only across Harvard and UCLA, but now we've gone live with Pittsburgh and a few other sites. So that today, if you're a registered user, you can actually send out a query to all those institutions that you see on the right. And the query then percolates across the country. And this query takes on the order of 10 seconds, on a bad day, to execute. So for instance, we want to know how many distinct specimens do we have of renal transplant patients? And that's typically hard to come by. And here is a response. So this is out of seven nodes; this is a query I did back in October of last year. Out of the seven nodes that we had up at that time, with only two nodes responding, we already had 20,000 specimens identified, with an age distribution shown here.
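Just to make the federated idea concrete, here is a rough Python sketch of a count query fanned out to multiple nodes. This is not the actual SPIN protocol or its real endpoints; the URLs, payload format, and response fields are all hypothetical, and the point is only that each node answers for itself and non-responders are simply skipped.

    import json
    import urllib.request

    # Hypothetical node endpoints -- not the real SPIN protocol or addresses,
    # just a generic illustration of a federated count query.
    NODES = [
        "https://pathology-node.hospital-a.example.org/spin/query",
        "https://pathology-node.hospital-b.example.org/spin/query",
    ]

    def query_node(url, query_text, timeout=10):
        """Ask one node; each node decides for itself what it exposes."""
        payload = json.dumps({"diagnosis_text": query_text}).encode()
        request = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return json.load(response)   # e.g. {"count": 12000, "age_histogram": {...}}
        except OSError:
            return None                      # node down, or declining to answer

    def federated_count(query_text):
        """Sum whatever counts come back; non-responding nodes are simply skipped."""
        answers = (query_node(url, query_text) for url in NODES)
        return sum(a["count"] for a in answers if a)

    print(federated_count("renal transplant"))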
By the way, I'm aghast at the fact that apparently 34 specimens of patients ages 90 to 99. I wish it was true that was an error, but I looked into it and it's not. So some weirdos are actually transplanting things in very old patients. Would not happening in England. In any case, here's the age distribution. And if you click on the full text, and you have the appropriate privileges, you can actually see the full anonymized text that show a report of that patient. And so I make this point for a number of reasons. First, we're successfully mining the obsessive compulsive feature of pathologists. Second, I hinted to you that some of these outcomes analysis using functional genomic measures, such as the lymphoma study, are essentially not well reproduced in different studies. Whatever the reason, they are many. A lot of the ills in the analysis are excused by large numbers. So if we went from having 100 patients to having 20,000 patients, I assure you that the differences in different machine learning techniques would matter very little. And so this is how we're beginning to approach, how do we harness the phenotypes that are out there that are available, that are not available otherwise through a completely different kind of bioinformatics? This is more infrastructure bioinformatics. But it's the rate-limiting step. We can work on a dozen samples until we're blue in the face. We really won't know how much we're overfitting our data until we have large samples. AUDIENCE: But the very first thing that you can request is that for [INAUDIBLE]. ISAAC SAMUEL KOHANE: Yeah, I wish it was that simple. So that shows you're awake. The next question is, how do I get the samples? So I wish it was true that once you identified a sample, you just click the box, put your credit card and you get FedExed the tissue sample. So what happens at that point is that you start a dialogue with investigator and the Institutional Review Board of that tissue bank. So that's the bad news. The good news is, we've solved one part of the problem is figuring out where the specimens are to find out if your study is feasible. And people are already being used as tools right now to write grants because they can say, I know that there's [INAUDIBLE] specimens out and they don't necessarily get access to the tissue for that. But they get in-principle agreement from the collaborators that if they get funded, they'll do this as a collaboration. But you're absolutely right to do it. So this is just another example how we're using computation to overcome the obstacles to doing genomic medicine. Let me give you one last case history and then we'll go to the problem set. So let's talk about how we can use computing again to have a different understanding, and how we use computing allows to have a different understanding of for instance, gene regulation. So this is time on x-axis. And this is some arbitrary value of expression. Are these two time courses the same? Are those two genes developed by the same process? Yes or no? You could make an argument either way, right? Maybe they're the same. How about this one? Maybe they're the same, maybe not. I tell you, the stuff was pretty noisy. Maybe that's just noise, a priori, who knows. Now does order matter? It does in our analysis. Does time matter? Yes, good. Because it should be true that the likelihood of me being here now is informed much more by where I was five minutes ago and where I was a day ago, right? That's a fundamental Markovian property of Zack, and most physical processes. 
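To see what that Markov property looks like numerically, here is a small Python check on a synthetic first-order series, with invented parameters: once you condition on the previous value, the value before that adds essentially nothing.

    import numpy as np

    rng = np.random.default_rng(1)

    # A synthetic first-order (Markov) series: each value depends only on the previous one.
    n = 5000
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.8 * x[t - 1] + rng.standard_normal()

    now = x[2:]          # the value "now"
    lag1 = x[1:-1]       # the recent past
    lag2 = x[:-2]        # the more remote past

    def r_squared(design, target):
        """Fraction of variance explained by an ordinary least-squares fit."""
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ coef
        return 1 - resid.var() / target.var()

    print(r_squared(np.column_stack([lag1]), now))        # conditioning on the recent past
    print(r_squared(np.column_stack([lag1, lag2]), now))  # the remote past adds almost nothing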
Now, does the correlation coefficient at all address that Markovian property? In other words, take two stocks and shuffle the timing. Let's say two stocks, Apple and Microsoft, and let's say for the sake of argument that they are highly correlated. They have a high correlation coefficient. If you permute the days-- so that you keep the same pair of stock values for each day together, but you shuffle the days-- is the coefficient the same or different? Same. So the correlation coefficient does not, in fact, capture the effects of time. Now, as I hope you're aware from our previous discussions, a lot of the clustering that's been done, even on time series data, is done on the basis of correlation coefficients. People in those dendrograms bring together genes that share the same expression pattern as measured by the correlation coefficient. So what consequences does that have? Let's go back to a very old study. This is the [INAUDIBLE] study back in 1999. A classical Jewish study where they snipped off the-- no, they took fibroblasts from foreskins and they measured the gene expression pattern over time of the transcriptome of these foreskins after being exposed to serum. So you see here, every column is zero hours, 15 minutes, and so on till 24 hours, for each gene. And then they did this act of creativity, which is they draw lines next to the dendrogram and then cut out the blocks at each line and say, oh look, there is a bunch of these genes and they seem to be-- for instance, coagulation and hemostasis. There's tissue factor pathway inhibitor and so on. These are coagulation involved. These are cell cycle and proliferation. This is inflammation. Oh look, these are angiogenesis involved genes. These are cytoskeletal. And they all seem to be clustered together. And it's-- again, I use this metaphor all too often, but it's so true-- this is very much like the dog at the opera. It's a miracle it can sing at all, and you don't criticize how well it's singing. Because what they've done is this act of creativity. They've actually looked at the data of genes and they draw these lines and say, these belong in a chunk. Now there are other lines they could have drawn. And in fact, if you look closely at their original paper, there are large numbers of genes that they do not draw those lines next to. They don't have anything to say about those. So what's really going on here? So again, these clusters are defined by correlation coefficient. So what if we try to take advantage of the Markovian property of gene expression, like any physical process? Namely, that in a time series, the recent past informs you more than the distant past. So you can approximate the conditional probability of this value upon the recent past rather than the whole time series. That is, basically, the assumption is that T0 is independent of the remote past, given the recent past. And how many steps back you look is what you may well know as the Markov order. So if the Markov order is two, that means you look back two time slices. Markov order is one, you look back one time slice. If time ordering does not matter-- just like with the correlation coefficient-- then your Markov order is zero. The recent past actually doesn't inform you at all. Now let's look instead at clustering as a statistical model selection problem. So rather than looking for highly correlated pairs, we'll consider each of the time series of these genes as being examples of a process. And the process is going to be represented, therefore, by one or more genes, which are driving that process.
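And to back up the shuffling argument with a quick check, here is a tiny Python example on two invented, correlated time courses-- stand-ins for two genes, or for the two stocks: the Pearson correlation is completely blind to the temporal order.

    import numpy as np

    rng = np.random.default_rng(2)

    # Two synthetic, correlated time courses built from a shared underlying trend.
    shared = np.cumsum(rng.standard_normal(24))
    gene_a = shared + 0.3 * rng.standard_normal(24)
    gene_b = shared + 0.3 * rng.standard_normal(24)

    def pearson(u, v):
        return np.corrcoef(u, v)[0, 1]

    print(pearson(gene_a, gene_b))

    # Shuffle the time points, keeping each time point's pair of values together.
    perm = rng.permutation(24)
    print(pearson(gene_a[perm], gene_b[perm]))   # identical: correlation ignores time ordering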
And the question is, for any given pair of genes, do they belong to the same process? And essentially, with a very simple Markovian analysis, which is just a pain in the butt to implement, you apply Bayes' theorem and you say, is it more likely that these two gene signatures were generated by the same process or by different processes? If the model that says they were generated by the same process has a higher probability than the model that says they were generated by different processes, then you cluster them together, just as you would in a regular dendrogram. Of course, the threshold you pick-- the probabilistic threshold that you pick-- is going to determine exactly which branches are brought together, which gene traces are brought together. But nonetheless, you now have a strong probabilistically based and Markovian based reason for bringing together two gene expression signatures, based on the marginal likelihood of these two gene expression patterns being generated by one process versus the other. And you can thereby cluster them appropriately. Since I'm running short of time, let me just tell you that the Bayesians have a number of nice tricks in their armamentarium. These essentially depend on the following observation: if we use the same data for the models, and if we assume all models are initially equally likely, then you get that very nice equivalence shown at the bottom, such that you only have to calculate the marginal likelihood, rather than the absolute probability, in deciding which models are most likely. And that simplifies the calculation a lot. And incidentally, this program that we called CAGE is available publicly, if you want to look at time series. But what does it do? So when we took the same time series that you looked at-- the expression data from the foreskin fibroblasts-- through this same Markovian clustering program, we only found four clusters. Two small clusters and two large clusters. So what's with that? The two small clusters were, one, a cytokine cluster, and then an apoptosis cluster. The two large clusters had a bunch of different genes in them. And they contained, essentially, all the other clusters that had been obtained in the original analysis. So you could say at this point, well, Zack, you guys didn't do a very good job, because despite all this convincing talk about the Markovian nature of gene expression, they were able to really pull apart a lot of processes and you were not. So how do we know whether we did a better job or not? Well, it turns out that back when they did their original experiment, UniGene, which is a database that Alberto will tell you about, was at an earlier stage. And what do I mean by that? Every time GenBank grows by a certain amount-- by which I mean, more sequences are deposited into the human genome database-- there's a periodic reassembly of the putative genes based on the better and better populated jigsaw puzzle of GenBank sequences. And sometimes it means that two things that were supposed to be different genes, when you have enough bridging sequences, end up being the same gene. And a gene that was previously thought to be one gene, with enough distinguishing sequences, ends up being split into two. So these UniGene builds are different. And consequently, back in 1999, 238 out of 517 genes were unknown. And we relabeled the genes according to the current state of the art, at which point only 20 were left unknown.
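For the computationally inclined, here is a toy Python sketch of that kind of model selection. It is not the CAGE algorithm itself: instead of the exact Bayesian marginal likelihood, it scores a "same process" model against a "different processes" model for a pair of series using a first-order autoregressive fit and a BIC approximation, on synthetic series.

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_ar1(phi, n, rng):
        """Generate one realization of a first-order (Markov order one) process."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()
        return x

    def ar1_loglik(series_list):
        """Fit one AR(1) model to the pooled transitions of the given series by
        least squares; return the maximized Gaussian log-likelihood and n."""
        prev = np.concatenate([s[:-1] for s in series_list])
        curr = np.concatenate([s[1:] for s in series_list])
        design = np.column_stack([prev, np.ones_like(prev)])
        coef, *_ = np.linalg.lstsq(design, curr, rcond=None)
        sigma2 = (curr - design @ coef).var()
        n = len(curr)
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1), n

    def bic(loglik, n, k=3):                  # k: slope, intercept, noise variance
        return -2 * loglik + k * np.log(n)

    def same_process(x, y):
        """Lower BIC wins: one shared AR(1) process versus two separate ones."""
        ll_joint, n_joint = ar1_loglik([x, y])
        ll_x, n_x = ar1_loglik([x])
        ll_y, n_y = ar1_loglik([y])
        return bic(ll_joint, n_joint) < bic(ll_x, n_x) + bic(ll_y, n_y)

    a = simulate_ar1(0.8, 60, rng)
    b = simulate_ar1(0.8, 60, rng)    # same dynamics as a, different realization
    c = simulate_ar1(-0.6, 60, rng)   # a genuinely different process

    print(same_process(a, b))   # typically True: one model explains both series
    print(same_process(a, c))   # typically False: separate models are worth the extra parameters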
And there were 19 genes that were present twice in the data set. And the original clustering puts four of these pairs in completely different clusters, whereas we only did this once. If we put our Markov order back to zero-- that means we ignore the past-- we get exactly the same misclassification of these identical genes. So the conclusion here is that the temporal ordering does matter. And doing just a simplistic correlational analysis will not cut the mustard when you're really trying to dissect some of these processes. And you should use a statistical measure of the clumpiness of these clusters, rather than a looks-right test, which had been the standard for most of these papers from 1999 to 2003 and which is probably also not a safe thing to do. So it turns out, those two traces I showed you before were, in fact, the same gene, although that was not known to be the case when it was first published. And these two traces were put in two different clusters. And likewise, these two traces also ended up being the same gene. And they too were put into two different clusters. All right. Let me wrap up by talking about another aspect of genomic medicine. Those of you who've done clinical research know the following is true, which is that most clinical trials are censored. By which I mean, either some patients drop out of the trial before it's completed, or the patient has not died by the end of the trial, if mortality is your endpoint. So it's unclear whether, for instance, a patient who died right after the end of the trial should have been counted or not. And the big picture is that by having a particular cutoff point on the study, you're not allowing yourself to see the full evolution of the patient's history. And therefore, depending on the nature of the study, you may actually have a very strong bias. And it's been shown in clinical research again and again that if this censoring is not attended to, you'll have a misinterpretation of the clinical trials. So that's well known in the clinical research arena. And the question is, is it equally true in genomics as applied to clinical research? Well, let's understand what most of the studies published actually do. They often do the following. They take a set of gene expression patterns, and do some clustering operation such as what [? Eliza ?] did originally in their paper. And they say, oh look, there are two or more natural clusters just based on the gene expression profile. And then they ask themselves, after they found these expression profiles, is there anything different about these patient populations? And they rummage around. And in this case, they found a wonderfully different mortality between the two groups of patients as defined by the expression pattern. So let's review that. You first look for a difference in expression pattern. And then you say, what makes these patients different? And it makes for a very impressive publication, because you're not looking at the patient; you're just saying, I'm looking at the gene expression pattern. And then when I open my eyes and allow myself to see what's different about the patients in these two groups, I'm finding something that's really clinically different. Now what's the problem with that? The problem with that is that you may be looking at the wrong thing, for example. With 10,000 genes to look at, there might be a cluster of genes that, for instance-- let me just, actually, before I come up with a fanciful example-- it might be that the reason that these patients have different expression profiles is because they weigh a different amount.
One group is fatter than the other because one group is sicker than the other, except it was not picked up clinically. But one is going down the tubes and one is not. And what we're really seeing here is not something intrinsic about the tumor, but something intrinsic about the weight. And so what we're getting at is a very indirect measure of mortality. And it may be very much unrelated. Can we do better by going directly from the gene expression pattern to the clinical phenotype of interest, rather than going through this two step process where we first cluster the genes without any knowledge of the phenotype and then see how the phenotype is different? Specifically, can we directly find genes, or linear combinations of genes, that are highly correlated with survival times? For example, gene A plus 0.5 times gene B, plus 2 times gene C, equals some probability of survival. Can we do that? And can we do that in the context of the kind of censoring that I described? So in summary, how can we use the survival times directly to find good predictors? So here's the fundamental problem. We have gene expression data. We have phenotypic data. And we want to find out how one predicts the other. Now previously, if I had asked you that question, you'd say, oh Zack, let's just use linear regression-- logistic regression, for instance. It turns out that with 20,000 variables, it breaks down. It just does not work. And so a nice answer is something called partial least squares. And what is partial least squares? Well, let me give you the intuitive feeling for it. I explained to you what principal components were-- principal components being these components that capture a large share of the variance, with subsequent components orthogonal to each other. This is like principal components, but as they relate to a specific outcome variable, like survival. So these are, essentially, components that are correlated with an outcome variable. So that's interesting. But the problem with partial least squares is that it does not actually allow you to use censored data. It assumes that the data are complete throughout. And that's going to immediately bias your data. So how do we take account of censoring? Now, in classical clinical research in medicine, what you use is something called the Cox model, which essentially is a probabilistic model that says, what is the hazard of dying for this population of patients? And it basically imputes for the missing data, fundamentally. And for a small number of variables or genes, you actually can use a Cox model to figure out, essentially, what would have happened if you'd had complete data. But the Cox model, just like regression, does not work well for tens of thousands of variables. So what are we going to do? How do we get to use partial least squares? To make a long story short-- I guess I'll post the paper to the myCourses website-- what you do is actually model the patient's data using a Poisson regression, so that you actually transform this data set from one with missing data to one with complete data based on this Poisson regression. And the point is that now we can create, essentially, many more data points. And just to give you the intuition behind it, because I'm going to run out of time to go into a full description: this will allow us to estimate, probabilistically, what happened to the censored patients. So let's say that a patient lasted until this point and dropped out of the study. Where might they have died? What can we assume, based on this prior data?
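Here is a rough Python sketch of the two ingredients just described, on synthetic data. It is not the method from the paper: the person-period expansion below is the generic restructuring that Poisson-style survival models use, and the partial least squares fit deliberately ignores censoring, just to show the mechanics of outcome-oriented components. All the numbers, gene counts, and the interval width are invented.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)

    # Synthetic stand-in: 60 patients, 500 genes, follow-up in months, censoring flags.
    n_patients, n_genes = 60, 500
    X = rng.standard_normal((n_patients, n_genes))
    risk = X[:, 0] - 0.8 * X[:, 1]                    # two genes secretly drive risk
    followup = rng.exponential(24 * np.exp(-risk))    # months of follow-up
    censored = rng.random(n_patients) < 0.3           # ~30% leave the study early

    def person_period(time, event, width=6.0):
        """Expand each patient into one record per follow-up interval, with an
        event indicator on the last one -- the restructuring used by
        Poisson-style survival models."""
        rows = []
        for patient, (t, died) in enumerate(zip(time, event)):
            n_full = int(t // width)
            for interval in range(n_full):
                rows.append((patient, interval, 0))           # survived this interval
            rows.append((patient, n_full, 1 if died else 0))  # final, possibly censored
        return rows

    records = person_period(followup, ~censored)
    print(len(records), "person-period records from", n_patients, "patients")

    # A deliberately naive partial least squares fit of expression against
    # log follow-up time (ignoring censoring), just to show how supervised,
    # outcome-oriented components are extracted.
    pls = PLSRegression(n_components=2)
    pls.fit(X, np.log(followup))
    print(pls.transform(X).shape)    # (60, 2): two outcome-oriented components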
So by using a Poisson model, we can enter all these new data points. And so we can actually figure out precisely-- or probabilistically, I should say-- what the values were at the various time points, and the data are complete in this probabilistic sense, so now we can apply partial least squares. So what does that mean, pragmatically? So Bhattacharjee, how do you pronounce that? AUDIENCE: Bhattacharjee, yeah. ISAAC SAMUEL KOHANE: Bhattacharjee, basically working with Todd Golub, did a study of lung cancer which had, in fact, censored data. And each patient had a survival time, and it was marked whether they were censored or not. And the question was, could they identify different outcomes? And the short answer is, they were able to identify, for instance, metastatic versus non-metastatic, but there were a lot of other outcomes that they could not distinguish from one another using these 125 samples. So we asked ourselves the question: if we applied partial least squares on top of this Poisson model, could we actually tease apart, in a reliable fashion, some new phenotypes? And the short answer is-- these are uncorrected P values, whoops. These are uncorrected P values, but at 10 to the minus 7, even if you do correction for multiple hypothesis testing, you can see that they're actually going to be highly significant. We were able to obtain components from the partial least squares analysis that were highly predictive-- highly predictive of clinical outcome-- where previously you could not. And to make that very clear, these were two groups of patients that previously were not distinguishable. But by using this method, we were able to distinguish them with a P value of this amount, and provide a very sharp distinction between these two groups of patients that was actually not resolvable using the standard analysis that was done previously. And the reason-- I just want to bring this home to you-- is because we were able to go directly from gene expression to the outcome of interest. We didn't say, let's cluster the patient data and then see how the patients are different after the fact. We said, I know the different characteristics of the patients-- in this case, mortality. What combination of genes best predicts that difference? And by looking at that direct signal, rather than an indirect signal, with all the complications of having to deal with censored data, we're able to separate out very, very well these subgroups of patients. So, again, this is showing you how, by using a little bit more sophisticated computation, we're able to identify subgroups of such patients. So this brings me to the problem set, and I want to spend a little bit of time to make sure you all understand the problem set. So I think it's very important for you to get your hands dirty with the data. So there is a data set that's available at the following URL. And if you just Google Kunkel, Kohane, Haslett, you'll probably find it. But that's the full URL. And I'll send out the URL to the group through our group email. And that's a data set from a paper that we published, I think, a year or two ago about Duchenne muscular dystrophy. Duchenne muscular dystrophy is a degenerative disease of the muscles. It's the major cause of what Jerry Lewis raises money for. And we have data on the muscles of patients who have this disease and people who don't have that disease. And what I want you to look at is that comparison. And I want you to identify those genes that are differentially expressed. Now, if you want to work in pairs, that's fine.
But no more than pairs of you. And I'd like you to use at least two different tests. One is SAM, Significance Analysis of Microarrays. The other is the t-test, which I hope most of you know. There are many, many toolkits that you could use to do this comparison. Let me recommend to you MeV, the MultiExperiment Viewer from TIGR. TIGR is The Institute for Genomic Research, at tigr.org. So if you Google MeV and TIGR-- it does many more things than just these two tests, but I recommend it to you for doing these two tests. So given these data, I want you to answer the following question. What is the difference between the top-ranked 50 genes differentially expressed, both up and down, by the two methods? And why are the lists of genes different? I'd also like you to use another program called MAPPFinder to classify these genes by their function. And so the output I'm expecting from you is two sets of differentially expressed genes based on these two different methodologies, your explanation of why these sets of genes are different, and the classification of these two sets of genes by functional annotation. It's a very simple exercise, but I think it's important, because if I've learned anything about this area, it's that just doing it by yourself a few times opens your eyes up to how bad the data are and what the problems are in analyzing them. And if you're not comfortable downloading tools, then partner with someone in the group who is. All right. AUDIENCE: Your course is on the website? ISAAC SAMUEL KOHANE: [INAUDIBLE] will post this on the website, and I will send an email out with this URL. Any other questions? AUDIENCE: When's the due date? ISAAC SAMUEL KOHANE: That's an excellent question. It's now February 24. How about-- when is March 15? Can you look up what day of the week that is? AUDIENCE: It's a Monday. ISAAC SAMUEL KOHANE: So March 16. And please let me know if you're having problems. The guy who actually built MeV is a friend of mine at TIGR, so I can kick his butt if it doesn't work for you at a particular time. All right.
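For anyone who wants a preview of why the two rankings in the problem set can disagree, here is a small Python sketch on synthetic data. The SAM-like statistic below is only a caricature of SAM-- the same numerator as the t-test, with a small fudge constant added to the denominator-- not the official implementation, and the sample sizes and effect sizes are invented; the problem set itself should be done with MeV.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Synthetic stand-in for the DMD comparison: 1,000 probes, 12 vs 12 samples.
    n_genes, n_dmd, n_ctl = 1000, 12, 12
    dmd = rng.normal(8.0, 1.0, (n_genes, n_dmd))
    ctl = rng.normal(8.0, 1.0, (n_genes, n_ctl))
    dmd[:50] += 1.5                              # 50 genes truly up in the disease group

    t_stat, _ = stats.ttest_ind(dmd, ctl, axis=1)

    # SAM-like statistic: same numerator, but a small constant s0 added to the
    # denominator so genes with tiny variance cannot dominate the ranking.
    diff = dmd.mean(axis=1) - ctl.mean(axis=1)
    se = np.sqrt(dmd.var(axis=1, ddof=1) / n_dmd + ctl.var(axis=1, ddof=1) / n_ctl)
    s0 = np.percentile(se, 5)                    # one simple choice of fudge factor
    sam_like = diff / (se + s0)

    top_t = set(np.argsort(-np.abs(t_stat))[:50])
    top_sam = set(np.argsort(-np.abs(sam_like))[:50])
    print(len(top_t & top_sam), "of the top 50 genes are shared between the two rankings")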
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_5_Limits_of_Technologies.txt
ZOLTAN SZALLASI: -technologies. And this is kind of the introductory slide. You heard a lot from Zack. I think that was last time, like a week ago or two weeks ago, about microarray technology. And I'm sure that he gave an extremely inspirational and enthusiastic talk about the possibilities and scope of this technology. But let me give you just a little bit of a reality check. So whenever a new technology appears-- for example, microarray technology or the Genome Project-- of course, first there is a general ebullience and optimism that all problems will be solved within a couple of years. There are lots of reasons for this optimism. One is that you want investors in your company and you want public funding. But of course, in a couple of months or years, realistic expectations start to appear. And then you have to start to think about the limitations of the actual technology. And actually, that's the reason that we have a talk about this very topic. So when we talk about limitations of the technology, we have to define what you actually want to do in science. Of course, there are lots of various definitions of science. But in a sense, you'd like to make predictions about some sort of a system. And we are going to talk about limitations in those terms. How are they going to limit your predictive power? So when you talk about limitations, you can talk about how accurate your measurement is-- there are limitations on the accuracy of the measurements, accuracy and noise. But there are also limitations in terms of sensitivity. What are you measuring? How complete is your measurement? And of course-- I'm going to be very briefly touching on this-- even if you measure everything very accurately, there are inherent limitations in your predictive power. You cannot predict everything. Think about unpredictability in terms of chaos. Even if you measure everything very precisely, there are systems where you simply cannot predict, in an analytic way, how they are going to behave. So noise-- first, I would like to define what is noise and what is signal. Noise is an inherent feature of complex systems. And there is noise in continuous and discrete measurements-- noise from the limitations of the technology. And of course, we need to talk about what can be done about noise. That's what statistics was invented for. And I'm going to talk briefly about normalization. So what is noise? There are different definitions. This one, of course, I took from Webster. And let's just look at this point D, which is "an irrelevant or meaningless data output, occurring along with desired information." Now, you should be aware that noise is not always a bad thing. Sometimes noise may turn out to be a very important signal. And what is your background? I'm just-- AUDIENCE: Biology. ZOLTAN SZALLASI: Biology. Sorry? AUDIENCE: Medicine. ZOLTAN SZALLASI: Medicine, OK-- so this is probably not the best example for you. But there were two guys, two radio astronomers, many, many years ago-- I think that was almost 50 years ago-- who were looking for signals. And they just saw this noise coming from every direction in the universe, as radio astronomers. And that turned out to be the cosmic background radiation, which is one of the most important discoveries. And these guys actually got the Nobel Prize a couple of years later. That started with pure noise. And they were trying to get rid of that noise. They couldn't. And that led to the discovery of the cosmic background radiation.
But if you think about medicine, then for example, the way cisplatin was discovered as a chemotherapeutic agent, what happened is actually that the electrodes that they were using in those experiments contained platinum. And they saw the effect of-- come in. Oh, you're going to hang out, OK. Actually, I just wanted to close the door because I know that's annoying. And then they tried to figure out what was killing the cells, what was slowing down the growth. And then they realized that actually, it was the platinum containing those electrodes. And this is the way they discovered this platinum. So the point is that noise is not always a bad thing. Now what if you see as noise or error in measurements, in biological measurements, might be a key component of biological processes. So of course, mutations in evolution are extremely important. And when we will talk about discrete measurements, one form of that is actually when you are sequencing, you'll see lots of noise in the human world, all sorts of genomes. That's called junk DNA. Well, we do not really know what this junk DNA is for. Then-- no, just go ahead. There. AUDIENCE: Sorry. ZOLTAN SZALLASI: I should have said, go ahead. It's still the introduction. So we do not really pursue this junk DNA when you are trying to find genes or exons of introns or transcription factor binding site. And that is going to extremely-- it's going to bother you very much when you're trying to find these true signals in the genome. We do not really know what junk DNA is for. But there might be a good reason it's there. It might be determining the spatial distance of different genes or whatnot. Another type of noise which seems to be very important is during differentiation. Very often, you see an asymmetric cell division. So RNA or proteins are divided or distributed between the two daughters cells asymmetrically. And that is actually done or that happens more or less by chance. And the two daughters are going to go one way or another, depending on how much RNA or protein they got. This, you can perceive as something of a noise measurement if you do a single cell measurement. And stochastic fluctuations may be very important for the stability of complex, physical-chemical systems. I might be talking about stochastic genetic networks and robustness much later in April when we're talking about modeling. And let's suffice now, that stochasticity and noise in complex systems might be a very important feature to maintain the stability of that system. You should be aware, of course, that genetic networks and biological systems or stochastic systems, because you know that, for example, you have only a couple of 100 copies of a given transcription factor per nucleus or even less, sometimes you have only 50 per nucleus. The intracellular environment is not a free solution. And the reaction kinetics is often slow. And what it means is that if you have a stochastic system in this case, that if you have a completely deterministic system, then from any given gene expression found at any given state, you can go to one state, to one another state. That's a deterministic system. Whereas in a stochastic system, from any given state, you can go to different states with a bit of certain probability. So that's what we mean by stochastic systems. Now, if you have this, you have a stochastic system in biology. 
Then when you are measuring gene expression levels or protein expression levels or the activity of any biological parameter, you will perceive that as noise in your measurement. Now is it true? Is this really relevant to biology, that you have stochasticity present in the system? And this was a paper that came out, now, almost two years ago. But actually, they wanted to measure this. This was done in bacteria, but recently, similar studies were published in yeast as well. And what they did is they took two proteins in E. coli. And they put on two different GFPs, two different fluorescent proteins, so they could measure the expression levels of the two proteins. And they set up the system in a way that, if it was deterministic-- it was under the very same promoter. Both genes were expressed, or driven, by the very same promoter. And it was a very carefully set up experiment. So if the system was deterministic, then what they expected is that the expression ratio of those two proteins is going to be the same in every single cell. Now what they found is that, despite all their efforts to set up the experiment as perfectly as possible, they found two different colors. That means that here, it's red and green. But of course, these are only false colors. The point is that you have two different wavelengths at which these signals are emitted. Depending on the actual cell, you can have very green, very red, and some yellow cells as well, which means that despite the careful way this experiment was set up, these cells expressed the two proteins at different ratios. And that was due to stochasticity. So the point is that stochastic fluctuations occur in living organisms. People are trying to understand, now, pretty hard, what the relevance of this is. It seems that it has a lot of relevance. But of course, we are not quite sure what the implications are. You should be aware that whenever you do a biological measurement, with whatever technology might be developed these days, you're almost always measuring population-average data. So this, again, is going to add to your noise. When you are measuring gene expression levels or doing proteomics, you will, of course, grind down millions of cells or tens of thousands of cells. And that is going to be giving you a certain level of noise as well. And this is true even if single cells are quantified. The reason for that is that if you have a stochastic network-- and let's imagine that you can really measure gene expression levels; you can do this, of course, for individual proteins in single cells-- whenever you do your measurement, you usually interfere with the cell, or you kill the cell. So you don't really know how that cell would have progressed. So since you interfered with the system and stopped the system, you can't really figure out what would have happened in the system. So you're actually going to end up with population-average data again. So there is no measurement without noise. As you know, there is always a limit to the accuracy and sensitivity of your measurement. And I'm sure that most of you would be extremely troubled if you did a microarray measurement or some sort of chemical or biological measurement, and in triplicate you got exactly the very same number. That would mean, probably, to most of you, that there is some sort of systematic error in whatever you're doing at your photometer or whatnot. Because you would expect some sort of spread in your continuous data.
So it is expected for continuous variables to have data with a certain spread. And that's OK. And that's why statistics was invented. But you know that there is some sort of true value of your measurement. But due to little fluctuations or whatnot, you have a certain spread around that true value. And of course, usually the question is-- and that's what frequentist statistics is really concerned with-- has this variable changed due to a given treatment, when you have a spread like that? So if you have a measurement here or here or here, and this is your starting point, then what's the probability that your parameter really changed its value? So what you need to do for this is, you need to have, of course, lots of measurements and/or a fairly good idea about the nature of the noise. That's very important as well. We are not going to get into this now. But as you know, for example, the easiest or most convenient assumption is that you have a normal distribution. It's good to have that, because if you have that, then you can actually make very simple calculations about the probability that your mean, or your parameter, has actually changed or not. So statistics was invented a long time ago. And actually, partly, it was due to biological measurements. And so statistics in biology is concerned with many, many different issues. One of them is: what is the true value of a given parameter, if there is one true value? There is also a very frequently used type of analysis that biologists are not really aware of, which is kind of a Bayesian analysis. And actually, this is the most frequently done statistical analysis in biology; this is the way all science works. I have some sort of belief about whether something is going to happen-- for example, whether an oncogene is going to transform a cell or not. I make a statement. And my job is, actually, to convince you or other biologists that this is actually true. Now what you can do, what you usually do, is actually you repeat the experiment. And if you see the same phenomenon, then you are kind of updating your-- you are going to update our common belief that what I was saying was actually true. So Bayesian statistics is always there, in a hidden way, in all of biology. We are trying to update each other's belief network regarding biology. The third type of statistical analysis is that you don't really believe the measurements, but you know that there is some sort of systematic error there. And then you try to correct for this systematic error, and that's what's called normalization. And I'm going to talk about it in detail later. And there is a fourth issue in statistics, when you are actually producing a lot of, a lot of measurements. You are looking for certain patterns. Imagine that you're looking for gene expression changes that are going to cause cancer. And you have two populations of samples, normal and cancer. And you see that certain types of genes are always downregulated or upregulated or mutated in cancer. Now this could happen by chance. If you do not have a large number of samples and this change is actually simply random-- so in certain cells, it's up; in other cells, it's down-- then if you have a small number of samples, it might happen that just by chance, with a certain probability, in all normal samples that mutation is not present, and in all disease samples, or cancer samples, it's going to be present. So what you need to ask is: could that pattern, the one that would explain your biology, be present by chance?
And for this kind of question, there are too many numbers. And what you can do is actually try to solve it analytically-- that's what combinatorics is about-- or you can do some sort of permutation. And as you'll see later, that's actually a pretty nasty problem when you're trying to apply it to real-life problems. So biological measurements are often expensive. And something I'd like to point out to you: if you follow the literature, or when you start to read the literature-- and I'm assuming you will, because that's why you are taking this course-- you're going to see lots of Nature, Science, and other high-profile papers in which they ran a single microarray measurement on a large number of different cancer samples. And then they are drawing all sorts of conclusions about which genes are important in cancer and which ones are not. And these measurements are still rather expensive. And it's not easy to obtain the samples. But you should be aware that you cannot really do any statistics on that. You could do some sort of Bayesian type of statistics. But whatever they are doing on this is not really statistics; it's going to be Bayesian. It's like saying: I see this change very often, and you either believe it or not. But you can't really get any hardcore numbers out of it that you can use for any statistical analysis or modeling. So reliable numbers cannot be produced without replicates, which is kind of obvious. So the central problem is that in massive biological measurements, quantitative and qualitative calls are supposed to be made on a large number of heterogeneous variables, using only a few replicates. That's what you're going to see over and over again if you work on large-scale or massively parallel biology. And this is one of the problems that the technology and the analysis have to overcome. So where is the noise coming from in microarray measurements? So this is a slide-- I think you saw some variation of this in Zack's talk. So this is how an Affymetrix DNA microarray chip works. So you start with tissue, and you extract RNA. And what you do is you have to do an RT, a reverse transcriptase, step on it that is going to translate the RNA back into cDNA. And depending on how you do it, you can either produce cDNA or cRNA. During this process, when you are producing the cDNA or the cRNA, fluorescent dyes will be incorporated into the polymers. And then these are going to be hybridized to the specific probes present on the chip. Now the underlying assumption, or expectation, is that ideally, one copy of a given RNA will produce one unit of a specific signal. If this were true, then you would have very accurate measurements. But now let's see what's actually happening in reality. When cDNA is produced from the RNA using the RT, this is an enzyme that has its own life, its own characteristics. So the initiation of the RT step is stochastic. Because, as I'm sure you're aware, you need a starting primer that's going to be extended by the reverse transcriptase. And very often, the reverse transcriptase, the enzyme, simply drops off. So that's why what you see, actually, when you do a microarray measurement, is usually a much stronger signal coming from the 3-prime end of the gene than from the 5-prime end. Because as you're reverse transcribing the message, the RT starts to fall off.
So you always have a much stronger signal from where the RT started, which is always the 3-prime end, because that's where-- what you usually use is the poly(A). You're using the poly(A) tail as your initiation site. You can use random primers as well. And actually, sometimes that's used, for example, in bacteria. But most of the time you start with the poly(A). Also, cRNA, which is used for the Affymetrix chip, is produced in the presence of fluorescent dyes. And it's assumed-- or it was hoped, or it is hoped-- that the dye incorporation is going to be linear and that the dye is going to be incorporated with equal probability. But that's not the case. The cRNA production is not linear. There are messages that are transcribed into cRNA with a much higher probability, a much higher efficiency, than others. And the dye incorporation is not linear either. Also, the Affymetrix chip involves a step in which you actually break down your cRNA. For whatever reason, this is the chip design. And breaking down the cRNA into small pieces is not going to be the same for all messages either. And of course, you have all sorts of other problems, like hybridization or cross-hybridization. And one can go on and on and on about what will give you noise; these are just a couple of examples. But the point is that your final signal is going to be the sum of all the above-- all these things and others. So this is just to give you a feel for how many individual issues will arise when you're doing a microarray measurement. Of course, the surface chemistry is very important, the background subtraction, and so forth. So let's see another example. This is the two-color microarray. The previous one, the Affymetrix chip, you heard about last time; there, you are actually putting a single cRNA sample on each chip. And there was another competing technology invented at the same time, in which you actually label the cDNA of two different samples-- you measure two samples-- and you're actually measuring the ratio for each individual gene. So in this case, what you do is you have equal amounts of labeled cDNA from the two samples. And what you're hoping for, what you're trying to achieve, is that if a certain message is present at the same level in both samples, then the two intensities, the two signals, are going to be equal. So you're going to have a kind of yellow spot; and if a gene is overexpressed or underexpressed, you're going to have a stronger red or green color. Now what you are ending up with in these measurements is a ratio. And the problem is that actually, there is no truly blank spot. You always have some sort of background noise there. And you are measuring the ratio. So of course, that non-blank spot is going to give you some sort of false, pseudo signal. So if, for example, there is a gene that is not present at all in a given sample and it's expressed in the other sample, then the ratio would be, of course, infinitely high-- or it would be very, very, very high. But you never see this. Since you always have a certain background intensity, what you see is some sort of, let's say, 100-fold upregulation, which in truth might be a complete downregulation, or a complete lack of that gene in one of the samples. So this is perceived by the experimenter as a compression of the signals. So you have a very wide dynamic range of ratios-- on a log scale, from minus infinity to plus infinity-- but what you actually see, and usually in most of these measurements the ratios are cut off, is 100-fold up- or down-regulation on either side.
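A toy numerical illustration of that compression, with invented intensities: a constant background added to both channels pulls extreme true ratios toward more modest observed ones.

true_red, true_green = 0.0, 500.0   # a gene absent in the red sample, present in the green one
background = 50.0                   # intensity of a "blank" spot in both channels

observed_ratio = (true_green + background) / (true_red + background)
print(observed_ratio)               # about 11-fold, even though the true ratio is "infinite"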
There are lots of experimental issues that can also contribute to the [INAUDIBLE] noise. So this is how the Affymetrix chip is designed. You have seen this before. And what you have-- so these are very short probes, 25-base-pair probes. The way Affymetrix tried to overcome this problem is that they designed a set of probes along a given gene, using some sort of an algorithm. And what they hoped for is that if you have lots, and lots, and lots of probes-- 11 or 16 probes per gene-- then from this set of probes, you can somehow estimate the true gene expression level. So this is how they are actually designing it. As you see here, this is the entire gene, and you are tiling the gene across unique enough sequence regions of the gene. Now, the problem here, coming from the real measurement, is that these up here are the perfect match probes that are supposed to measure the same gene. To some extent, you would expect that all these expression levels, all these intensities, would be equal. And very often, for most genes, for most purposes, this is not the case. You have very bright and very dark probes. There are lots of reasons for this. You have cRNA secondary structure, and so on, and so forth. But the point is that when you look a little bit harder, deeper, into what you are actually getting from these measurements-- well, you're expected to estimate the true gene expression level from this set of intensities, which can often vary by four or five orders of magnitude. So that's how these experiments work in reality. So this is just another additional piece of information showing that it's not that easy to design these things. But, of course, one can improve a lot, and the gentleman sitting here could tell you lots of interesting stories about how these things are designed, or how they are not designed, by the manufacturer. But that's the story. So I was just trying to give you a couple of thoughts, a couple of pieces of data, about where noise is coming from in Affymetrix measurements in real life. But even if you had very good quality measurements, you have other, conceptual issues in this field as well. So let's assume that you want to use your numbers to reverse engineer a system, or to do forward modeling, forward simulation, of large genetic networks. So you'd like to have very good quality numbers. The problem is that when you do these measurements, you always measure a very heterogeneous solution, a very heterogeneous population of RNA and proteins. Now, when you start with this measurement, how are you going to normalize your numbers? So how do you express your measurements, even if your measurement, your technology, is very good? Per unit RNA? Per microgram RNA? The problem with this is that if you have a decrease in the level of a given gene-- and some genes are very highly expressed-- then, unavoidably, there is a relative increase in the level of the other messages. Because let's say you have a million copies, or let's say 10 million copies, of RNA per single cell. So if a highly expressed gene is downregulated, then what you perceive in your measurement-- unless you're actually trying to normalize for the actual copy numbers-- is that some of the other genes are slightly upregulated. So there are other, conceptual reasons as well why you will have noise in your measurement. But as I mentioned, the real problem, the real issue, is the actual technology.
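As an aside, here is a small sketch, with invented probe intensities, of why a robust summary of such a probe set behaves better than a plain average; Affymetrix's own summarization algorithms are considerably more elaborate than this.

import numpy as np

# Eleven perfect match probes for one gene; one probe is anomalously bright.
probes = np.array([120., 95., 80., 150., 110., 40000., 130., 90., 105., 70., 125.])
print("mean:           ", probes.mean())             # dominated by the bright probe
print("median:         ", np.median(probes))         # close to the typical probe
print("geometric mean: ", 2 ** np.log2(probes).mean())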
Now, what you can do with that is, when you have a set of measurements, you want to take a good, hard look at your data to see whether you have some sort of systematic error in the measurement. These are a bunch of Affymetrix measurements-- real-life, real measurements-- in which what you see is the intensity distribution of all probe sets. So what you have here are gene expression measurements on about 10, 11,000 different genes, each covered by a different probe set. And this is what you see as a distribution. Now, what you see here is that there is one measurement that's very strongly an outlier, and some others as well. These are pretty much the same. And imagine that you're actually running the very same sample. Let's say you have a single cell line, and you're treating it with different drugs. Now, what you'd expect is essentially the very same distribution for each of these RNA samples, with a few differences, a few variations. And you have this guy here. So what you can assume-- and this is actually what people do, and what the Affymetrix algorithm does-- is that for some reason, during this measurement, the fluorescent dye incorporation wasn't as efficient, or your fluorescence reader was miscalibrated, or something else, but a systematic error occurred. So what you assume, then, is that the distribution for all these measurements is actually the same. So you can start to shift your curves, because you have a good reason to assume that these are actually all very similar distributions. So what you can do is actually take the mean or the median of all of these curves, and you shift them to the very same mean or median. And you simply decide where you are going to shift everything else. And then, based on that, you re-normalize all the numbers. And when you look for differentially expressed genes, you work with those re-normalized numbers. Because if you hadn't done this, then you'd say that every gene is downregulated, which is obviously false. So that's what normalization is about. So normalization, in general, means that you don't really believe the numbers that come out of your experiments, and you hope, or you assume, that you're going to actually improve those numbers by assuming that you have a systematic error that you can correct. There are two ways of doing this. One is that you assume that most, or certain, things do not change, and the second one is that you actually have an error model. So the first one is that you assume that most, or certain, things do not change-- that's what you saw on the previous slide. So you say most of these distributions actually have to be very, very similar. And you can shift the means or medians of these curves, but sometimes the shape of the curve is going to be different as well. And, well, if you have this nonlinearity of the dye incorporation, then you can assume that, if the curve is shifted, the shape of the curve is going to be different as well. So you can do some sort of lowess, and you can try to change the shape of the curves as well, and shift the means, so that most of the curves, all the curves, would look very similar. And whatever remains as an outlier after all this is done is your true outlier, what you perceive as a real outlier. And in most cases, actually, that makes a lot of sense, and it provides differential calls that can be corroborated by independent measurements. There is a similar problem for cDNA microarray measurements.
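Before turning to the two-color case, here is a minimal sketch of that global median normalization, on simulated log-scale data; real pipelines add quite a bit more.

import numpy as np

rng = np.random.default_rng(1)
arrays = rng.normal(8.0, 1.5, size=(6, 10000))    # 6 arrays x 10,000 probe sets, log2 scale
arrays[3] -= 1.2                                  # one array with a systematic shift

target = np.median(np.median(arrays, axis=1))     # common target median
normalized = arrays - np.median(arrays, axis=1, keepdims=True) + target
print(np.median(arrays, axis=1).round(2))         # before: one outlier array
print(np.median(normalized, axis=1).round(2))     # after: all medians agree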
In this case, the red versus green ratios are not expected to show any intensity dependence. But in most cases, when you do a two-color microarray, this is what you see. So this axis is the intensity, and this is the ratio. And what you'd expect is a flat line, and instead this curve is what you see. That means that the red and the green dye are not incorporated with the very same efficiency, especially depending on the concentration of the individual gene species. So what you see is that for, let's say, low-copy-number genes, the red dye is incorporated with a higher efficiency than the green one. What you do in this case is actually try to straighten it out, because we are assuming that, for all genes, the red and green incorporation should be the same. So what you're trying to do is to correct for systematic errors. And in the case where your basic assumption is that most things do not change, you can choose a set of elements that will be used for the normalization. That is, sometimes there is a set of housekeeping genes, which is a very shaky concept. You're assuming that certain genes do not change-- let's say metabolic genes do not change, or structural proteins, the genes associated with structural proteins, do not change. Now, this is used very often, as I'm sure you've seen, in [INAUDIBLE]-- they use GAPDH or actin. Well, it's OK. It's just very difficult to find a set of genes that is really not expected to change. Or you can choose a set of special control genes where, for some reason, those genes never change in your system. And of course, then, the next step is you need to determine the normalization function, which is a global mean or median normalization, or some sort of intensity-dependent normalization. If you want to learn about this more, then actually, there's a whole website, and a chatroom, and whatnot. There's a whole cottage industry that's trying to figure out what's the best way of normalizing a microarray. The alternative is that you come up with some idea about how the error is actually generated. So this is the most popular error model, in which it is assumed that at low concentrations, you have an additive error-- you have just simply a normal, a white, noise around your measurement. At high concentrations, you have a multiplicative error. And for the overall noise, you have the combination of the two. So if you make these assumptions, then you can generate very good error models. And the normalization based on that actually gives you a very similar result as with the previous assumption. So actually, these two methods, it seems, are interchangeable, at least for cDNA microarrays. Noise will limit the useful information content of measurements. That's the problem. That's the issue, and that's why you need to be able to deal with it. So it seems that if you take all these microarray measurements, then a reliable detection of two- to four-fold differences seems to be the practical limit. And this is actually a very optimistic, and not a cross-platform, comparison.
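Coming back for a moment to the intensity-dependent correction described just above, here is a sketch of a lowess-type fit on simulated two-color data; statsmodels' lowess is only one of several implementations you could use for this.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
a = rng.uniform(6, 14, 5000)                        # average log2 intensity per spot
m = 0.1 * (10 - a) + rng.normal(0, 0.3, 5000)       # intensity-dependent dye bias plus noise

fit = lowess(m, a, frac=0.3, return_sorted=False)   # smoothed M as a function of A
m_corrected = m - fit                               # corrected log ratios, centered on zero
print(np.corrcoef(a, m)[0, 1].round(2),             # clear trend before correction...
      np.corrcoef(a, m_corrected)[0, 1].round(2))   # ...essentially gone afterwards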
So if you do a large number of Affymetrix measurements, or cDNA microarray measurements, on a large number of very, very different cancer samples, then it seems that if you extract all the useful information from your measurements-- since a reliable detection of a two-fold difference is pretty much the limit-- it's possible that certain genes are going to be measured reliably, with higher accuracy, but across all genes, probably, this is the experimental limitation. And why is it an important issue? Again, getting back to the issue that you are trying to predict how your system is going to behave: let's assume that you want to figure out who is regulating whom, starting with time series measurements. So you're going to measure gene expression changes or protein changes within a certain time frame. So how would you design your experiments? There were experiments done on the cell cycle of yeast or human fibroblasts. But, of course, you have to choose your timing correctly. So if this solid line is the error of your measurement, then, of course, you don't want to take measurements more often than the error of the measurement allows you to. So if you know how fast genes are changing and what your experimental error is, from that you can determine a sensible, reliable time window. And it seems to be the case that, for example, in yeast, there is no point in taking gene expression measurements more often than every 5 to 10 minutes, and in mammalian cells more often than every 15 to 30 minutes. If you measure more frequently, you're just simply going to run into noise, and you're just wasting your money. So that's the reason why you need to be aware of the noise limitations. And when you know the error, or noise, of your measurement, you can make some back-of-the-envelope calculations of how much information you can actually extract from that measurement and what that could be enough for. So moving on to the other issue, of sensitivity and completeness: when you're trying to predict what's going to happen to your system, then, of course, there is a trade-off, or there is this issue of how many parameters we are measuring and how many parameters we should measure. If you're trying to predict whether a certain cell-- a certain cancer-- is going to metastasize or not, how many genes do you need for that? If you want to predict how a cell cycle is going to progress, how many genes do you need to measure for that, and how many are we measuring? So for that, we need to have at least some impression of how large these networks are. So this is just showing that it's pretty large. This is a graph representation of all interacting proteins in yeast. So in this case, you have about 5,000 proteins. Proteins, genes, protein modifications are all independently regulated, so you can call them something like bionodes. And the cautious estimate would be that in each cell, the number of bionodes is going to be on the order of, let's say, a couple of hundred thousand. This is coming from the fact that you have 10,000 to 20,000 active genes per cell, and you have, let's say, less than 10 post-translational modifications per gene, per protein. And that would give you roughly this number.
Of course, that could be much more and much less in terms of whether you're working with spliced variants or you actually need to measure on the module, the activity of modules. But this is probably on this order to have such a complete picture. Now, we certainly don't have this so far, but this is the way technology has developed. And actually, this seems to be the easiest thing to achieve. You just simply move to more and more and more genes, especially as the genome projects are being completed, and probably the coverage of the microarray chips of proteomics is going to reach a complete genome in the next couple of years. There is no real reason why it couldn't have been achieved. All you need [? to have ?] is the sequence information and set up the technology. But so the completeness can be achieved in terms of-- if you work hard enough-- and there are tens of thousands of biochemists and biologists working on this-- you can sooner or later measure most of the biologically important parameters of the cell. At least in principle, that means that you can have a probe that would measure this. But do we actually see signals coming from these when we are using microarray measurements? And there was a gentleman, [? Michael ?] [? Holland, ?] who did these experiments a couple of years ago, when he just simply took microarray measurements and RT-PCR measurements as well, on a couple of genes in yeast. And what he was interested in is, one thing, one, what's the dynamic range of gene expression changes in yeast? And what they found is that the [? transcriptome ?] [INAUDIBLE] in yeast carries varies over six orders of magnitude. What, actually, this means is that there are lots of genes. There are lots of cells. The [INAUDIBLE] [? packages ?] is very large, and you cannot see this in every single cell because the lowest number means there are 0.01 copies per cell. So what you see is that certain genes, certain cells will express a single copy of a gene due to stochastic noise, and only every 100 cells will express that. So this is the dynamic range of gene expression changes. He was also interested in that if he measures the gene expression level of these 300, 400 genes, and he chose important genes like transcription factors, and he compares the different technologies-- RT-PCR is fairly sensitive, although at very, very low concentrations you run into stochasticity, that is probably the most sensitive technology you can use. And you compare it to microarray, then how sensitive is microarray relative to RT-PCR? And that's what he saw. And what it shows you, that this range of gene expression levels is completely-- this is not seen by microarray. So this is well under the sensitivity of microarray. What you see is that you start to see something of a correlation between the microarray measurement and the RT-PCR at two copies per cell. So all these genes are actually expressed and changing, and probably they are doing something important. As I said, most of the genes were actually transcription factors. But they are not seen by microarray. So sensitivity is a very important issue when you do microarray measurements. Well, then, depending on your technology, you will have lots of genes that are going to be under the sensitivity of the technology. I'm sure-- and it's, I mean, as new technologies are coming out right now, this is going to be improved as well. 
But this is another issue you should be aware of-- that even if you do microarray measurements and you see lots of blank spots, it doesn't necessarily mean that those genes are not changing or they are not present. Simply, your measurement is not sensitive enough. So the utmost goal of the technology is going to be, of course, single measuring, single-copy-- sorry-- per single gene. But even if you are measuring everything accurately, there might be problems with predictions. And this is what I was referring to before. And just very quickly, OK, because you're a biologist, so many years ago-- actually, I think it happened here at MIT-- a gentleman, Edward Lorenz, was trying to predict how the weather was going to change. It was in '60s. And what he did is he took a few ordinary differential equations, a completely deterministic system, and he tried to predict how the outcome of this set of differentiable equations is going to change. And what he was really shocked to see-- and a little bit later, the entire scientific community was shocked to see-- that these three ordinary differential equations produced a behavior very sensitive to the initial conditions. Which means that if you just change a very little, just a smidgen of the starting parameters, the outcome of the measurements was completely different. And this ended up in scientific history as chaos theory, where you might have heard about bifurcations and so forth. The point is that even if you start with a seemingly completely deterministic system, you might not be able to predict how that system is going to behave because of this very fact-- that small changes in the initial conditions can cause huge changes at later time points. Now, we know that biology is not like that because biology is a robust system, because we are sitting here when we are talking. So many people think that a biological system is somewhere on the edge of the completely deterministic and chaotic systems. But the bottom line is that just because you can measure everything very accurately doesn't necessarily mean that you're going to have very high prediction. But let me give you a much simpler representation or example of the very same problem. Imagine that you already measured very accurately the gene expression level-- and at very high sensitivity-- of all genes, or many genes, in a variety of cancer samples. And what you're trying to figure out is, what are the genes that are causing cancer? Now, let's assume that you found this subset of cancer samples that-- these are actually real measurements from melanoma. And this is-- let's say this is a subset of samples that is extremely malignant, kills the patient very quickly. And you also think that you found a group of genes that is going to be responsible for that extremely malignant state. But you need to ask the question-- as I referred to this before-- can this be due to chance, because you have a limited number of samples? Well, just by chance, if you randomly put in those two values, you can see something like this. Sometimes you can find an analytical solution, but more often you can't. You need to do some sort of computational solution. So you permutate your data set and look for similar patterns. And if you never find a similar pattern, a similar group of genes, in the permutated gene expression matrix, then you say, well, this is not due to chance. But this is not that obvious how to do it. So analytical solutions can be sometimes found. So let me just give you this very simple example. 
So this-- I usually pose a problem that you can solve at home. We had this problem that, at the dawn of microarray analysis, my lab measured gene expression measurements in different breast cancer cell lines. And when we reached-- because this was very expensive-- when we reached six breast cancer cell lines, we found that 13 consistently misregulated genes, up or down-regulated genes. And what we asked is, can this be due to chance or not? So this was translated into a combinatorics problem that you have eight different cell lines in a gene microarray, it's the number of genes misregulated in the i-th cell line. And the question was, can we find [? K ?] consistently in misregulated genes across all these cell lines by [? chance? ?] So if you like combinatorics, this is a nice little home exercise if you want to solve. But so you can find an analytical solution for this. And this is very simple. And this could be solved quite easily. And you a fairly reliable number of this. But what if more genes are involved? And more importantly, what if genes are not independently regulated? The underlying assumption in combinatorics is that you're drawing your samples independently, randomly and independently. But in this case, genes are co-regulated. If a transcription factor is upregulated, well, the downstream genes are going to be upregulated as well, or some of them will be upregulated. And this is coming from real samples. So what you see here is when you do a complete permutation, then this is going to be the distribution of correlation coefficients for each gene pair. But in real samples, this is what you see. So there is a high correlation of gene expression changes up and down-- which is kind of obvious because this is a genetically regulated network. Now, the problem is that if you need to do this analysis, and you ask the question, is my pattern random or not, or can this be present due to chance or not, well, if you use permutated, a randomly permutated gene expression matrix as your benchmark, then in that case, your analysis, or your result, or your statistical analysis can be off by orders of magnitude-- by six or seven orders of magnitude-- relative to an analysis where you say, well, I'm going to permutate the sample but retain the overall dependency of gene expression changes. If you do that-- which is not an obvious thing to do and takes some computational tricks-- now you have a very different result. Noise in discrete measurements. Yeah? AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: OK, so what you have is that you found a pattern that a certain number of genes are causing, let's say, cancer. And what people usually do is they do a complete randomization. Let's say you just swap everybody, everybody, and then you look for the same pattern, and you don't find it. You never find the five genes that would show the same pattern, and you are happy. Now, the problem is that this has completely destroyed-- destroyed this permutation, the codependence of genes. And in reality, that means that if you have codependence-- imagine that there are certain genes that are very strongly coregulated and other genes are never coregulated-- that the ones that are actually coregulated are not actually two independent genes, but in your analysis, that should be one, you could replace [INAUDIBLE] gene, right? And this is what you should retain. Of course, you don't have complete coregulation and complete independence, but you have a distribution of correlation coefficients. That's what you see here. 
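One common computational realization of "retaining the overall dependency"-- a sketch with simulated data, not necessarily the exact procedure being referred to here-- is to permute only the sample labels: gene-gene correlations stay exactly as they are, and only the link between expression and phenotype is destroyed.

import numpy as np

rng = np.random.default_rng(3)
expr = rng.normal(size=(2000, 20))                 # 2,000 genes x 20 samples, simulated
expr[:5, 10:] += 1.5                               # plant a small "cancer" signature
labels = np.array([0] * 10 + [1] * 10)             # e.g. normal vs. cancer

def score(expr, labels):
    # Toy pattern statistic: the largest absolute mean difference over all genes.
    diff = expr[:, labels == 1].mean(axis=1) - expr[:, labels == 0].mean(axis=1)
    return np.abs(diff).max()

observed = score(expr, labels)
null = [score(expr, rng.permutation(labels)) for _ in range(1000)]
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print("empirical p-value against the label-permutation null:", p)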
So the way one should do this is create a large number of random matrices in which the distribution of correlation coefficients is something like this, but apart from that, it's random. And then ask the question: is my pattern present in this as well? Now, if you compare the statistical power, or the statistical confidence, between these two kinds of null matrices, you can be off by five or six orders of magnitude. So something that is significant against this one is way below significance against this one. So that's the point. It's not that obvious how to do these things. It's just an important point that sometimes, even if you have good quality measurements, biology is going to present you with very difficult problems. And this is actually present in sequence measurements as well. I mean, the whole BLAST issue is about this too. So moving on to noise in discrete measurements, of which the easiest example is actually DNA sequences. So of course, you have measurement error there as well. You have sequencing errors with a certain probability. Let's say now it's probably down to 0.1%, but [INAUDIBLE] between 0.1% and 1%. Of course, the solution was to sequence a lot. Of course, if you see a difference in your sequencing and it's not done with a single individual, you are not quite sure whether you are seeing a single nucleotide polymorphism, a SNP, or a sequencing error. But if you work hard enough and sequence enough, you will have some sort of feel for the true sequence of a stretch of DNA. Now, you end up with a very, very, very long stretch of letters. In the case of humans, it's 3 billion. And what you need to achieve-- or what is expected from you-- is to find genes, introns, exons, the transcription factor binding sites, in this sea of four letters. Now, how do you do that? This is going to be an issue of noise as well. If you had only genes-- like exons and introns, or only exons-- and transcription factor binding sites, they would be very easy to find. The problem is that you have lots and lots of junk DNA, or intergenic regions, and you have no idea what they are doing. And in those, sometimes, seemingly intelligible information will show up just by chance. So how can the real signals be found? That's why the real way of building genome annotations is not only DNA sequencing, because from that, it's very difficult to find the number of genes. Actually, if you look hard in the literature at the number of genes, usually the number of genes keeps falling with time, because, actually, people see that there are lots of erroneous predictions. Usually, these gene prediction algorithms tend to err on the more liberal side; they tend to give you more genes than are actually present. So what you are looking for is actually cDNA-- for example, cDNA libraries for the same organism, because these are the truly expressed genes. So you try to bring together the two different databases. And if you find a cDNA, well, that cDNA can help you to find the actual gene. Now, the problem is that the cDNA has to be expressed. And if you didn't happen to prepare a cDNA library from the cell line in which that gene is expressed, well, then, you won't have that gene in your cDNA library. Therefore, you cannot find it in your genome. So how can it be found? The DNA sequence information can be refined to a large extent by all sorts of different databases, different data sources. But there are lots of unexpected issues in biology which are truly amazing.
They are completely unexpected, and you would never have been able to come up with that idea simply based on primary sequence information. And I think I'll just give you two really shocking pieces of data that are actually pretty recent. One is the widespread occurrence of antisense transcription in the human genome. What do these antisense transcripts do, and why are they there? It's a long story, but what they found, actually, is about 1,600 actually transcribed antisense transcription units in the human genome. So you know how the genome is usually read and described in the sense direction? They were just looking into whether things are transcribed in the antisense direction as well. I mean, you learned a lot about microRNA, siRNA, regulatory RNA. So there was a good reason why they were looking into this. But nobody would have expected that there is such a high number of actual antisense transcription units. Also, when a group checked what portion of a given chromosome is actually transcribed, they were surprised to see that it was about one order of magnitude more than expected. What people usually do is take a chromosome-- in this case they checked chromosomes 21 and 22-- you know where the majority of the exons or introns are, and based on that, you expect that, well, the exons are going to be transcribed, and maybe a couple of regulatory RNAs. So you have an expectation that, let's say, a couple of percent of a given chromosome is transcribed. Now, what they found, when they actually covered the entire chromosome with an Affymetrix chip, is that actually 10 times as much information was transcribed from the DNA than expected based on the exons. Again, you would never have been able to predict this simply based on primary sequence information. But what can you do? You have this sea of information that seems to be noise. So is there a way to deal with this? So let's assume that you need to find a transcription factor binding site. It's going to be something like T-G-G-A-C-T. Of course, you don't know that this is T-G-G-A-C-T. And, of course, it's not always T-G-G-A-C-T. It can be T-G-C-A-C-T, because transcription factor binding sites like to play with sequence. And actually, this is the way they can change the affinity, or the specificity, for a given sequence. So this is going to be the actual set of sequences it can bind to. Now, this is what you're ending up with. This is what you're looking for-- except you don't know that this is your binding site. And you're trying to add constraints. So this is one trick. You say transcription factor binding sites are usually within 500 base pairs upstream from the start codon of a given gene. And you also know that they tend to cluster in the same region. So for most transcription factor binding sites, you have more than one. So what you might be doing is saying: I'm looking for certain, let's say, six-base-pair-long sequences that tend to cluster within 500 base pairs of ATGs. And then you're going to find something. But still, this is going to be very, very weak. You have way more letters, way more information, and way more noise than the level at which you could extract the important information. So even if you do all this, you will find that many of the transcription factor binding site-like sequences do not function as such. Well, why? We do not quite understand yet-- due to higher levels of DNA organization, or whatnot.
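To make that kind of constrained scan concrete, here is a toy sketch; the motif, the sequences, and the 500-base-pair window are all invented for illustration.

import re

motif = re.compile("TG[GC]ACT")                    # TGGACT or TGCACT, as in the example above
upstream_regions = {                               # hypothetical windows upstream of the ATG
    "geneA": "ACGT" * 50 + "TGGACT" + "ACGT" * 50 + "TGCACT" + "ACGT" * 20,
    "geneB": "ACGT" * 125,
}

for gene, seq in upstream_regions.items():
    hits = motif.findall(seq[-500:])               # restrict to the 500 bp before the ATG
    print(gene, "hits:", len(hits), "-> candidate" if len(hits) >= 2 else "")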
And, of course, the problem is that you do not know what sequence to start with. So what can you do? You can hope that statistical overrepresentation will help. And one trick is, of course, provided by nature, which is cross-species conservation. So you have these extremely noisy-- we call them "noisy," but of course, they are not noisy-- extremely noisy genomes: human, chimp, mouse, rat, yeast, whatnot. And you're assuming-- and this assumption is a fair assumption-- that you have cross-species conservation of important sequences. So what you're looking for is: are there sequences that are conserved across several species? And if you combine all this with some of the smart tools, like artificial intelligence, machine learning, HMMs-- hidden Markov models, which were extremely useful for gene identification-- then you might start to see some patterns emerging. And this was done for yeast by [? Lander's ?] group. So this just gives you a concrete example that shows that this is actually a very efficient way to go. When they sequenced four yeast species-- four very closely related yeast species-- the average number of genes in each of them was about 5,500. The reason they did it is because they knew that these were very similar species. So what they found is, actually, there is a very high level of [? synteny ?] of the genes. So what they found is that this shows that the same gene is present at the same location in all of these species. The order changes; sometimes a gene is lost or gained-- especially around the telomeres, or the subtelomeric regions of the chromosomes, there's a lot of turbulence going on. But for most of the chromosome, things are-- the information is retained to a large extent. Of course, there is slow and rapid evolution. They found that for certain very important genes, there's 100% nucleotide conservation across all species. For others, there is a very low level of conservation. Probably, that's something that nature can afford to experiment with. But the bottom line is, what they were doing is-- they found that important transcription factor binding sites are going to be present in the very same location across all species. So what they were looking for here is-- actually, this is the [INAUDIBLE] binding site-- and it shows you the four different species, and it shows that this is at the very same location in all the different species. This is the [INAUDIBLE], the [INAUDIBLE] binding site. It's another type of box. So it shows that you have a very high conservation of important regulatory information. Now, what you can do is actually turn this around and look for unknown information. So what you do is-- what they did is generate all random sequences of the form X-Y-Z. That means that X, Y, and Z each stand for any of A, T, C, G, and there is any number of random bases between them, between 0 and 21. You can do this. This is within the realm of scientific computation. This is actually not [INAUDIBLE]. So these are any combination. And you look for certain statistically significant patterns among these. One of them is intergenic conservation. Are there any sequences like this, when you go through all the sequences, that tend to be conserved in the intergenic regions, between genes? You can check for intergenic versus genic conservation, or you can check for upstream versus downstream conservation. These are all statistical benchmarks they found for known transcription factor binding sites.
So what they found is that for known transcription factor binding sites, all of these hold-- those sites are more conserved in the intergenic regions. You have a higher intergenic versus genic conservation, and a higher upstream versus downstream conservation. So recall the problem: you're starting with arbitrary random sequences, and you're just trying to figure out whether any of these random sequences have any biological significance. Now, even more importantly, what they found is that when you start to find statistically significantly retained, or conserved, sequences, these motifs were also arranged in front of genes that tended to share function, which is very important. Because you're assuming that there are certain functional modules-- genes that tend to do the same thing have to be turned on or off at the same time, or under the same conditions. So that's when they came up with a long list of potential transcription factor binding sites, in which all these things were pulled together. And they found that these are sequences that tend to be conserved in front of genes that tend to share function. And many of these actually were confirmed independently, by experiments, as new, true transcription factor binding sites. So the bottom line is that in these measurements-- even in discrete measurements, like sequencing-- you will have to face a lot of noise. Biological organisms were built a long time ago, and the blueprints were lost. If you knew how it was built, then you could figure out what's important or not, but all that experimentation happened a long time ago. So it seems to you now that the important information is hidden in a sea of irrelevant information. And it will be very difficult-- and usually, it's impossible-- to find it on solely computational grounds. But if you look for help from actual biology-- in this case, cross-species conservation-- well, then, the important things, the gold nuggets, start to emerge. OK, and that's it. Any questions?
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_16_Microarray_Disease_Classification_II.txt
PROFESSOR: OK. So last time, we spent the hour and a half talking about classification methods and their use of genomic technologies in direct and close to direct, which we called indirect, applications of clinical medicine. And we talked about three different types of class methods again, class exploration, class prediction, and class discovery. And I think we finished with a discussion about class prediction, which is a basic approach that's used to-- that starts with microarray data from a certain number of patients, builds a certain model, and then tries to predict something. Everybody who was here kind of remember that, vaguely at least? And the basic steps of class prediction were, first, to choose a gene set, so choose a number of genes that's representative of the data and that divides the data into classes for which you're targeting your predictions to construct a function, a prediction function, that will be a mathematical function that you'll take a new sample with its expression data, and plug into the function, and get out an answer, and then determine a rule to try and then use the rule to classify one way or the other, and then, lastly, validation. So I thought what we would do today is go through an example. So we went through all this. So I'm not sure we need this anymore. I thought what we would do today is go through an example. And the example is related to breast cancer and the paper from Nature 2002 that we talked about. So the issue of New England Journal of Medicine that was published today had a number of articles related to use of these classification methods in AML, in acute myelogenous leukemia. AUDIENCE: [INAUDIBLE] PROFESSOR: Well, there's two different papers. There's an editorial and there's a perspective written. Maybe we'll look at the perspective for a second. And the perspective is called "Microarrays in clinical investigations." And so the perspective is pretty bold in its views here. And essentially, the last paragraph, the take home message-- so the take home message is that our usual thinking about biomarker discovery in clinical trials is about to change dramatically. In the future, clinical investigations will consist of small trials with a high density of data, precise patient stratification according to the expression profile, and highly tailored analysis of microarray data. So they are looking at the articles in this issue of gene expression and AML. And they focus on identifying the relevant-- so looking through 7,000 genes or I think one study used 133 arrays with 13,000, another with 25,000-- looking at large number of genes and their expression in patients and identifying smaller numbers that are very relevant to disease. And that's what's meant by the term biomarkers here. And they're making a prediction that this is going to change the way clinical trials are being done dramatically. This seems very unlikely to me that this will-- AUDIENCE: I was really curious because when you reading the articles, at least glancingly, they seem very bread and butter, just classification and some prediction. And I was just curious how they [INAUDIBLE] the New England Journal of Medicine is very reticent to take anything with [INAUDIBLE].. And so that sort of struck me really. PROFESSOR: Well, it's a learning process for everybody. There are things to learn from the breast cancer work that was done. And we'll go over the Nature paper. And that also was published in the New England Journal of Medicine as well. Let me give you-- let's see. 
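Before walking through the paper itself, here is a minimal sketch of that generic recipe on simulated data-- choose a gene set, build a prediction function (a nearest-centroid rule here), apply a classification rule, and validate by leave-one-out; this is an illustration of the steps, not the particular method used in the paper. Note that the gene selection is redone inside every leave-one-out round, so the error estimate is not biased by having peeked at the left-out sample.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_genes, n_per_class = 2000, 20
X = rng.normal(size=(n_genes, 2 * n_per_class))          # genes x samples, simulated
X[:30, n_per_class:] += 1.0                              # 30 informative genes
y = np.array([0] * n_per_class + [1] * n_per_class)

def predict(X_train, y_train, x_new, n_top=50):
    t, _ = stats.ttest_ind(X_train[:, y_train == 0], X_train[:, y_train == 1], axis=1)
    top = np.argsort(-np.abs(t))[:n_top]                 # 1. choose a gene set
    c0 = X_train[top][:, y_train == 0].mean(axis=1)      # 2. prediction function:
    c1 = X_train[top][:, y_train == 1].mean(axis=1)      #    one centroid per class
    d0, d1 = np.linalg.norm(x_new[top] - c0), np.linalg.norm(x_new[top] - c1)
    return 0 if d0 < d1 else 1                           # 3. rule: nearer centroid wins

errors = 0
for i in range(X.shape[1]):                              # 4. leave-one-out validation
    keep = np.arange(X.shape[1]) != i
    errors += predict(X[:, keep], y[keep], X[:, i]) != y[i]
print("leave-one-out error rate:", errors / X.shape[1])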
I had this one for you. And this one is important because when you have [INAUDIBLE]. So let's-- I didn't have time to review the AML stuff that's accompanying the editorials and perspective in today's issue. I think it would be nice to go through that and look at the data directly instead of the claims in the editorial. But let's do that for the breast cancer work that was done. So this is the paper. You might want to have it just so you have it here, because we're going to-- I thought rather than talking in the abstract as we did last time, to really go through concretely the use of the prediction method here, which is used quite similarly in one of these papers here. So let's go through this Nature paper on gene expression profiling predicting clinical outcome of breast cancer. So in this paper, essentially what these authors did in the abstract, you can see that they used microarray analysis on primary breast tumors of 117 young patients and applied supervised classification to identify a gene expression signature strongly predictive of a short interval to distant metastasis. So again this is-- again, I often get tripped up by viewing things in terms of the specific tool that was used. And really what they're doing is doing prediction here. Some of our different pigeonholes that I try to put things in, class, exploration, prediction, and discovery. This is prediction. Supervised classification is one aspect of it. But there are other parts of this to doing prediction. But what they were interested in, essentially, is taking patients and making good or bad predictions of prognosis of outcome based on their gene expression. So they did a variety of things. But most of this paper is focused-- we look at, say, the second paragraph on the first page after the abstract. An unsupervised hierarchical clustering algorithm allowed us to cluster the 98 tumors-- now we're down to 98 from 117, I'll come back to that-- on the basis of their similarities measured over approximately 5,000 significant genes. And that's figure 1A. So here they have things sideways. They have the specimens as the rows and the genes as the columns in that top diagram. And so this is hierarchical clustering of this data set that consists of a big table. And the table again has 98 patient samples and 5,000 genes and expression levels that-- I might be short on the paper. Why don't you guys look on that for now, yeah. Yeah, this paper is up on the site. OK, so this is the data set that they constructed. AUDIENCE: I just have a question about figure 1A in particular. What's the utility of something like this in the paper? I mean they're obviously they're not looking at this-- like if I were to-- when they get their data and they see something like this, they feed into a computer that does all this analysis. So what's the purpose of showing us this rather than just kind of cutting to the chase of what their actual results are. Should this be standard for when you're doing these types of studies or whatever? PROFESSOR: I think they're pretty valuable diagrams because they do allow you-- this is the black and white, so there's the color version-- they do allow you to see that there are different patterns of expressions of genes and different subgroups. AUDIENCE: I mean, I guess maybe these lower ones are blown up, so where [INAUDIBLE] it may become more obvious. But I'm thinking just in terms of if I saw an SDS-PAGE gel in the paper, it's just like, right there you can see the results. 
If I look at something like this, it just looks like a bunch of pixels [INAUDIBLE]. I just-- I guess. PROFESSOR: So you're asking about the motivation? AUDIENCE: Well, yeah, I mean so-- [INTERPOSING VOICES] AUDIENCE: Yeah, so like the sub figures can be useful. I guess and maybe I can see how you'd say, OK, this is the reason we expanded on it. PROFESSOR: But even in the big figures, you can often see, and maybe I have a picture of this in color that I can put up, I'm not sure if I do. But I should. But you can see that there are different groups, even within color. [BEEPING] Sorry, I have some patients. AUDIENCE: I guess sort of as a newbie here, looking at my [INAUDIBLE] traditionally, you see something like that-- when I'm seeing something like this, I just go, well, gee, that looks pretty. And I have no idea what it means or whatever. And I think that would be kind of intimidating for people to want to pick up a paper like that. And they don't quite get the point of seeing something like that. PROFESSOR: Well, there's two issues. That's an independent issue. There's an issue that I find talking about these things, the same thing as looking at a scattergram-- why should you bother to look at a scattergram rather than just the correlation coefficient described in here? It just gives me a direct way to see, to actually see by visual inspection, that it looks like this class is there. And so when I look at this thing, I can actually convince myself-- there's a lot of crap, but there are also some big chunks of red. AUDIENCE: So definitely the color picture helps a lot. PROFESSOR: The other part of these pictures, which is helpful, are the dendrograms. OK, so again we talked about the use of hierarchical clustering as a classification method and that it does not really produce classes. OK, again, everything is related to everything else. You don't have disjoint classes. It's just that the degree of relationship is represented by the distance-- literally, the length of this line right here in the diagram represents the relationship between these two over here and, say, these two over here. And so again, every cluster is split. This is binary. So everything is split into two. And the adjacency is actually another issue about these, too-- you can spin any one of these clusters at any node without changing the organization here. So sometimes you will see diagrams-- why don't I show you one since we're getting to the nitty gritty here. AUDIENCE: When you have a paper and you see that figure, just apart from the obvious, like here's a really big red patch, [INAUDIBLE] that's-- you're also, if someone is familiar with this, you're also gathering other information. By not having data in hand, you can see, you can look at that figure and say, ah, pattern, but here's another part of the figure that also gave me some more information. And me as the newbie, I'm just not seeing it because I'm not as familiar with that. PROFESSOR: And I don't have a password here. I can't do this. AUDIENCE: Is that a fair assessment? PROFESSOR: That's a fair assessment, OK. What I was going to show you is that there's been, in my area, last week there was a publication-- it's the second one this group has done-- where they put one of these diagrams up here. And they're showing that three patients with one condition, they all cluster together and separate from another seven. But they don't put the dendrogram on that. All they show is adjacency of rows.
And they don't show exactly how they're organized or connected. So you see three that are here and then another seven that are here. But it's possible that this one over here is actually part of a group of four over here. And so that's another value to these diagrams, is when people claim classes to look and see. When you look at this one, it's not overwhelming, actually. And this is where they drew the line. And they took this as one class. But this is really-- I mean every one of these is just subdivided here, instead of one fairly large class. So there is helpful information for seeing those diagrams. AUDIENCE: So the leftmost is the two classes in [INAUDIBLE] 117-- PROFESSOR: That's correct. They said they are two classes. So this is one branch here. And this is the other branch here. OK. OK, yeah, well, let's get into the nitty gritty, at least so we have one example that we've understood fairly well what's happening. OK, so this is an example. So this is just hierarchical clustering. What would you call this if I made you pigeonhole it into class exploration, class prediction, or class discovery? These are 98 patients with breast cancer. These are both lymph node positive and lymph node negative. They're pretty much all of their samples except for a group that they've left off for other purposes. This would be an example of class exploration. So they're starting by looking at their data. And they're just saying, well, does it look like the expression profiles differ among patients and allow us to find different subgroups, different profiles? And we talked about one step that was important to do before this. And that was the stuff about signatures and their reproducibility and their distinctness. They don't provide any data here in that regard. They say that they did every tumor sample twice, that they did two independent array profiles on each one. But they don't say anything about what they did with that data. They don't provide any correlation coefficients, for example, to say that when you do the same experiment twice on a given tissue, you get out the same pattern of genes. But accepting that, this first step is exploration. And here there is an attempt to correlate these groups with different clinical features. So first, they do hierarchical clustering of the entire data set. And they look and see broad patterns. Maybe there is a group over here that's more red for these genes and greens for these. And so maybe there is some structure here. So the next step they do is to, for each one of these patients, they look at binary outcomes, and say if this patient here had a mutation in the BRCA1 gene, they'll color it in black. And this patient here did not have a mutation. So for each one of these 98 patients, they're now looking at one, two, three, four, five, six different phenotypic markers and coloring it in this way so that we can then look and say, is there any relationship between these phenotypic variables and these two classes which were defined by the dendrogram? Again to pigeonhole it into one of the three class methods, what are they doing here? What's that? Class exploration, class discovery-- they're doing class discovery here. So they looked at their data. They explored it. They wondered that there might be classes. And they're now asking is there something significant phenotypically, something meaningful clinically, that relates to these two classes? Or is it just an artifact of doing clustering, which will always reduce things into classes? 
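To make the clustering step being discussed concrete, here is a minimal Python sketch of hierarchical clustering of a tumors-by-genes matrix and a two-group cut of the dendrogram. The data are random placeholders (not the paper's 98 x ~5,000 matrix), and average linkage with correlation distance is just one plausible choice-- the paper's exact settings may differ.

```python
# Minimal sketch of hierarchical clustering of tumors, with a two-branch cut.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expression = rng.normal(size=(98, 5000))   # rows = tumors, columns = genes (placeholder values)

# Average-linkage clustering of tumors using correlation distance,
# a common choice for expression data (assumption, not the paper's stated settings).
Z = linkage(expression, method="average", metric="correlation")

# Cutting the dendrogram into two groups corresponds to taking the two
# top-level branches discussed in the lecture.
labels = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(labels)[1:])             # how many tumors fall in each branch
```

Whether those two branches mean anything clinically is exactly the question raised above-- the cut always produces groups, meaningful or not.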
Depending on where you cut this tree, you can have two classes here, or this one here that looks like there's a large class, which goes from here down to here. You can make up classes at any different level here, because fundamentally there really are no classes with a dendrogram. But they're now saying, well, what phenotypic variables might correlate with this split right here? This is still discovery. Yeah, I would call this class discovery, that they found two classes. And now, to establish them as truly classes that have some meaning, some clinical meaning, they want to say what clinical variables might this relate to here? And so looking at the things here, there are a few things which are fairly striking, I guess. The presence of estrogen receptors on the tumor cells here is pretty good. Most of these patients are ER positive. Most of these patients here are-- actually, I can't remember which is black and which is white, which is positive and which is negative for their classes here. Metastases-- so the presence of-- and I think this is distant metastases at five years-- not really too much difference. Maybe I think white is present. So maybe more of these. What's that? White is positive. So there may be more consistently, certainly a larger percentage of this group, had distant metastases at five years. So once they do this and believe that they do have classes here, they then move to the next step and say we're going to build a predictive model. And the one that they focus on is whether there's metastases or not. So they're saying in this data, there seems to be some way to classify the data that has some distinctions in terms of predicting outcome. And we're going to take it a step further here and actually go through the trouble of building a model to predict this. AUDIENCE: So they want to use the metastases as the principal predictor for or [INAUDIBLE]. Why, maybe this is just another ignorant question. PROFESSOR: No, there's no ignorant questions here. AUDIENCE: Why did they choose that as opposed to, like, estrogen receptor? PROFESSOR: Well, yeah, because people already know about the estrogen receptor and BRCA1 mutations. OK. So and metastasis is the-- it's fundamentally the important thing. It's the important clinical endpoint. If we can make better predictions based on gene expression profiles of the tumor when we do an initial biopsy of the breast cancer, can we use that to then decide how to treat people? People who have a good prognosis don't need the same treatment as people with a bad prognosis, maybe. It doesn't actually-- there's no treatment intervention-- not a treatment study here. It's not saying that we can intervene and alter prognosis through treatment. It's strictly predictive of prognosis at this point. OK? OK, so let's go back to the paper. And so the next thing they do after this class exploration, this sort of partial class discovery, I mean they don't carry it further and say that we really have discovered a new class here, because they really haven't. I mean what the prognosis is really correlating with is the estrogen receptor status here, which is a well-known fact. So patients who are estrogen receptor positive have a better outcome, I think, in breast cancer. So I'm not sure about the black being positive or negative. The black is negative, OK. Yeah, that's right. So patients who are estrogen receptor positive have a better outcome. The estrogen receptor negative patients, by and large, have a bad outcome. So there's actually nothing new.
There's no important class discovery that took place in this paper. So what they next do is change the data set. Let's shut this off. It's not shut down. So the next step that they do in this paper-- and that's on this page, 532. Bottom of the first page-- sorry, bottom of page 530-- is they focus now on a subgroup of these samples, the 78 patients with sporadic lymph node negative disease. So these are all patients who, at the time of the diagnosis of their breast cancer, had lymph node biopsies which were negative, and did not show any spread to the lymph nodes. And so they then focused the rest of the paper on this select group of patients, lymph node negative, and also sporadic, which means none of these had mutations-- the BRCA1 or even BRCA2 mutations. They were not genetic breast cancers. And so in that 78 group, 44 patients were free of disease after five years, and 34 patients developed metastasis. So we're down to 78 patients. And 44 had good prognosis, or turned out to have good outcome. And 34 had poor outcomes. So what they then say on the next page, 532, is to identify reliably good and poor prognostic tumors, we used a powerful three-step supervised classification method similar to those used previously. In brief, approximately 5,000 genes were selected from the 25,000 genes on the microarray. OK, so let's get into how they build the model here. So they're going to use this data. So they're going to use 78 instead of 98. And they do have 5,000 genes. So the microarray has actually measured 25,000 genes. So they had this data set to start with. And the first thing they did is use basic, but still fairly ad hoc, procedures to get rid of genes that had very little meaning, so genes that just didn't vary, a gene that had a low expression level for every single one of these. And they removed such genes and ended up with 5,000 after that point. AUDIENCE: So was 5,000 predetermined, or was that-- PROFESSOR: They-- let's see how they ended up with five. They said significantly regulated in more than three tumors out of 78. So this is an area which remains completely ad hoc and different in virtually every paper. We call it the initial filtering of meaningless genes. And they decided that-- I believe that they did T-tests for significance and required there was some significance in at least three out of 78 at some particular significance level, and ended up with 5,000 genes. There are many ways people do this. Some people will say I'll look at the standard deviation of these gene vectors. And if the standard deviation is less than some cutoff, I'll assume those genes are not varying enough in this data set and get rid of those. So now they want to build their predictor. And so we talked about those three steps. And the first step is choosing a discriminative gene set. So they're going to whittle the 5,000 genes down into small numbers, which they're going to put in their black box to make their model. And they do this by correlation with ideal outcome. I think we talked about that method before. So first they'll arrange the data set so that all of the good prognosis are up here, OK, and then tumors 45 through 78 are the poor prognosis. And then they'll make up an ideal vector, i, and put let's say 1's here and then 0's here. And then for each of these 5,000 genes, they'll calculate a correlation coefficient to this ideal vector. So r for gene one correlated with ideal vector i was some number, 0.6.
And r for gene two correlated with vector i is some other number, negative 0.45, let's say. And they do this for the 5,000 genes. Everyone follow? So the first thing they do is they get this down to 231 genes by using a cutoff of 0.3, negative or positive. So anything with an r between 0.3 and negative 0.3 is gotten rid of here. So they get rid of this one here. This they would keep. This one they would keep, and so forth. And so that left them with 231 genes, which we'll renumber and call these genes now. Yes-- AUDIENCE: Question related to the coefficient for [INAUDIBLE] normalizing their variables in [INAUDIBLE] gene expression so as to be in the same order of the outcome variables, presumably binary. What kind of transformation goes on? PROFESSOR: Well, there's two issues here. One is normalizing the data set. So they used cDNA arrays. So their normalization was built in because they used a reference. So a cDNA array is an experiment where you have two samples applied each time. And their reference consisted of a little bit of RNA from each one of, I think, the larger set. I don't know if it's the 78 or the 98 or 117 samples. So that was the normalization. To do this, they just organize it that way. Just move these up here and move these down here. AUDIENCE: I guess my question was, I'm just trying to figure out the [INAUDIBLE] do you actually quantify for-- in the correlation coefficient, you're looking at direction. But does the amount of expression or difference in expression between positive and negative side matter in terms of your correlation? PROFESSOR: No, that won't matter. Again, you could-- again, Pearson correlation coefficients, at least, are invariant to linear transformations. So you could take every piece here and multiply by 15 and add 7, and it won't change its correlation coefficient. OK, so now they were left with 231 genes. And all of these genes pass some threshold of high correlation, either positive or negative, to this ideal outcome here. So gene one had a certain set of expression values in this group. And it was different than in this group, because it correlates-- it has a high correlation coefficient to that vector. Everyone follow? OK. OK, so now they do-- some people would stop here and say this is my discriminative gene set. And the next thing I'm going to do is build my prediction function. And then I'm going to make my rule. And then I'm going to validate. But they decide to further optimize this discriminative gene set. So this is a little complex. But what they do-- not that complex. So they take the 231 genes that they have here. And they rank order them by largest magnitude of their correlation coefficients. So gene 16 had a correlation coefficient of 0.9. And G 12 had a correlation coefficient of negative 0.85. And they rank these going down to G 231, which just made it with a correlation coefficient of 0.31, into this group. So now they're even looking deeper into this data structure. And they're looking for the very best genes that correlate with this outcome. An example of such a gene might be gene 16, which had expression levels that were 1,000 for these samples, and was 5 for all of these samples. It was a perfect classifier, that one gene. So now they're going to really build a discriminative gene set through the following procedure. They're going to take the top five from this list and call that our discriminative gene set consisting of only five genes.
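A minimal sketch of this gene-selection step-- correlating each gene with the ideal outcome vector, keeping |r| >= 0.3, and ranking by magnitude. The expression values here are random placeholders; only the 78-patient / 44-good / 34-poor split, the 0.3 cutoff, and the top-five starting block come from the lecture, and the stepwise additions are described next.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(78, 5000))            # hypothetical expression matrix: 78 patients x 5,000 genes
ideal = np.array([1] * 44 + [0] * 34)      # ideal outcome vector i: good prognosis = 1, poor = 0

# Pearson correlation of every gene (column) with the ideal outcome vector.
Xc = X - X.mean(axis=0)
yc = ideal - ideal.mean()
r = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
)

# Keep genes with |r| >= 0.3 (the lecture's 231-gene step),
# then rank by magnitude so the strongest candidate discriminators come first.
candidates = np.where(np.abs(r) >= 0.3)[0]
ranked = candidates[np.argsort(-np.abs(r[candidates]))]
top5 = ranked[:5]                          # first block of five, before adding more five at a time
```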
OK, then they're going to go through this, build a prediction function, make a rule, and validate using leave-one-out cross validation here. Then they will add five more genes to this discriminative gene set. So they'll go down five more in this group and now have a discriminative gene set of 10 genes. They'll take that, build a prediction function, make a rule, and validate again. Now that they've done a second validation, they have a second accuracy number to look at. And they'll say did my accuracy get better or not? So the first time they did this with the top five genes, they found an accuracy in terms of prediction of-- I don't know what it was, but we'll make up a number. Let's say 60% of predictions were correct after they built their model. So then they did this with a larger gene set, including the next five best genes. And they found that the prediction function got them up to 70%. And they kept on going until their accuracy was the best it could possibly be and wasn't getting any better with the addition of more genes from the bottom of this list. There they stopped at 70 genes. So they constructed their discriminative gene set right here as a set of 70 genes based on this optimization of trying first the top five, building the model, testing the model, and looking at its accuracy, and improving the accuracy until it was as good as it could be. Yes-- AUDIENCE: Why five and not one? PROFESSOR: One would have taken them five times as long to get there. AUDIENCE: That's true but [INAUDIBLE] one is more relevant. PROFESSOR: Well, that's actually the point that people are trying to make in this field. That one is not relevant, in that the biomarkers that we're now looking for are groups of genes. AUDIENCE: I mean in the sense of implementing, I don't mean-- PROFESSOR: This can be implemented clinically. Once you set up to make an inkjet-synthesized or even robotically spotted array, you can do it for a few dollars. You can spot arrays for a few dollars, basically. This is not actually expensive. AUDIENCE: So basically they're going to find the minimal amount of these genes that give them the maximum amount of predictive value [INAUDIBLE]. PROFESSOR: Correct, a predictive accuracy in terms of going for this one prediction of good versus bad outcome. AUDIENCE: Is there any value-- so at each step before they got to the extra revision that you mentioned, [INAUDIBLE] close where most people would stop. Is there any value in their going outside of their 231 back into the 5,000? And just randomly-- PROFESSOR: I don't know how far up they went. I don't know if they went past 231. I doubt it, that they then looked back at other genes. AUDIENCE: [INAUDIBLE] and they never [INAUDIBLE] at 231 genes, then they said OK, well, from there we've got certain candidates, which I think is quite fair. Once they've got their 70, is there any value in going back out to the larger data and seeing if we can call a few more things that maybe-- PROFESSOR: Maybe, but the main critical point of this approach is that this is overfitting, this is very, very serious overfitting of data. I don't think this is what you want to do. But people are still doing it. And I mean, it's fairly reasonable to make this gene set here and stop there-- you stop with the 231.
But to then take that and do repetitive cross validation on the very data that you're using to build the model, and to use it to select the genes in that way, is very serious overfitting of a data set. And I put up that fictitious graph yesterday about the points and overfitting them with the best possible curve-- but it may not be the right one to make a prediction of a new point here because of how overfit it is. So it works perfectly for the data set. And I think I brought up some of the pitfalls in terms of validation that's been done with microarray data. I think I passed out a copy of that paper to most of you. AUDIENCE: Can't that be overcome by cutting the data set in half and doing exactly what you just described, the method that you just described, and then taking that-- [INAUDIBLE] and then taking that other half a [INAUDIBLE] because you did describe-- PROFESSOR: You can build-- right, that's the right way to do it. So that's the training set and then the validation set. I mean even the real right way to do it is to make up any model you want and then prospectively test it on the next 100 patients. AUDIENCE: [INAUDIBLE] is that typically, the real performance is particularly in 2004, you have so few patients that you're loath to give away many of your samples to the test set, because you might not have enough signal in your training set that you could [INAUDIBLE] for people. Otherwise, [INAUDIBLE]. That's why it's so amazing that [INAUDIBLE] can [? write ?] a paper besides a third of patients [INAUDIBLE]. PROFESSOR: Yeah, when these chips first came out, they were $2,000 apiece. Yeah, they're very-- they're even more than that, but-- AUDIENCE: Even more [INAUDIBLE] PROFESSOR: So these were very costly experiments to do back then. So this is their discriminative gene set. OK, now they have to choose a predictive function. So the predictive function, again, is if one was to give this model a new clinical specimen, how do we then make a prediction on the new clinical specimen? And so they used a fairly simple predictive function here, which is correlation coefficient to this ideal outcome. So if I gave you a patient tissue sample, and you did one of these microarray experiments and measured 25,000 different genes on it, then you would use that in this model by pulling out the numbers from the 70 genes that were relevant-- that were part of the discriminative gene set. And then taking that number of 70 genes and-- let's see, sorry, I'm goofing up here. So-- improved-- the classifier predicted the outcome correctly. I'm sorry, I thought I understood their predictive function in this paper. Let's take a peek at it. The predictive function is a threshold rule. And that's evident, like, on figure 2. I believe they took an average number. So they took the average. So bad profile-- they took these profiles here and just averaged them. So there's 34 in this bad group, bad prognosis group. And they took the 34 numbers here and averaged them together to get one number here. And then they did that for their other genes. So we're down to 70 genes right now. And so they have this ideal bad profile. And I'm pretty sure this is what they did. And they did the same thing for the good here. And they have their good profile, which is just an arithmetic mean of the expression levels.
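A rough sketch of the predictive function as it is being reconstructed here and spelled out next: average the 70-gene profiles within each outcome group, then correlate a new patient's 70-gene vector with the average good profile and apply a threshold. The data, the 0.4 threshold, and the function names are placeholders, not the paper's actual values.

```python
import numpy as np

def pearson(a, b):
    # Plain Pearson correlation between two 1-D vectors.
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 70-gene training data: rows are patients, split by known outcome.
rng = np.random.default_rng(2)
good_train = rng.normal(size=(44, 70))     # good-prognosis patients
bad_train = rng.normal(size=(34, 70))      # poor-prognosis patients

good_profile = good_train.mean(axis=0)     # arithmetic mean over the good group, as described above
bad_profile = bad_train.mean(axis=0)       # same for the bad group (not used by the rule below)

def predict(new_patient, threshold=0.4):
    """Correlate the new 70-gene vector with the average good profile and threshold it.
    The 0.4 threshold is a placeholder; varying it trades sensitivity against specificity."""
    return "good" if pearson(new_patient, good_profile) >= threshold else "bad"

print(predict(rng.normal(size=70)))
```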
And now when the model is presented with a new row vector of expression data for 70 genes for a patient, and we'll call that new patient patient n, then they will calculate the correlation coefficient of patient n to maybe just the good vector. I guess you can't do it. Well, so I'm sorry I'm forgetting the details here a little bit. And I'm having trouble finding them right off here. But I'm pretty sure this is what they did. But so then which one did they correlate it with? The good one? Uh-huh. OK, yeah, which brings up another point, why not do it to the bad profile and correlate to that as a means of your algorithm? That might give you a very different answer. I'm not quite sure. But in any event-- so, OK, it seems like there is this choice here that they look at its correlation coefficient to the average good profile and got a number. So this will give you r-- it'll give you a number, 0.4. And that's their prediction function. And then the last step in this model building is to make up a rule and say if there's-- and they used a threshold rule and a non-ambiguous classifier. So if it's above the threshold, it's in one group. If it's below the threshold, it's in another group. And they actually explored a couple of different possibilities for the threshold. And we talked about that Tuesday, that this rule and varying the threshold is the classic trade-off of sensitivity versus specificity here of a test. And they talk about that in this paper and show-- so on this diagram here, so how to divide the blacks and the whites to best advantage is the question here, the good versus the bad prognosis. And do we cut it here and get all of the whites with just a couple of blacks, or do we cut a little bit more and get more whites? But we're trading off sensitivity and specificity here. And that's the rule that they use. So that's the model that they use. And then they actually do a couple-- in this paper, they do a couple of things for validation of it, for this step right here. And they do pick out a new test, a new validation set of 19 samples that they did not use to build the model here. And it's small. They don't say exactly how they constructed that group of 19. But they do use a separate group of 19 to then test this on. And when they do that-- so that's I guess page 534, the third paragraph-- to validate the prognosis classifier, an additional independent set of primary tumors from 19 young lymph node negative patients were selected. There were seven patients who had the good outcome, 12 who had the bad. And it resulted in two out of 19 incorrect classifications. I think both-- so that's how they got an accuracy number out of that, two of 19 as the accuracy. One of the important issues that you get into when you do something like that is the accuracy may not be the same for the good versus the bad patients. So you might have a classifier that predicts all of the good outcomes correctly, but is terrible at predicting the bad ones. Half the bad ones it says are good outcomes. And that's not very helpful to you. And so I think the data is in here. And I think it's mentioned in the pitfall article that I handed out. That teases that out a little bit more as to the importance of saying whether the predictor is working equally well on the good and the bad cases that are presented to it. So I think that's essentially this paper. There was a follow up for this, which was in the New England Journal.
It's on the website, which is even more focused on prediction of survival-- a gene expression signature as a predictor of survival in breast cancer. They used a larger data set here, but they use the same model. They did validation on the data set. But they didn't do the full validation procedure. So we talked about that, how a lot of validation is done through leave-one-out. So leave one sample out, build the model, and then test that sample. And then do it for each of the samples, 78 times or 200 times, whatever. And see how accurate your predictions are each time. But it's important when you do that to go back and rebuild the choice of the discriminative gene set each time. And this actually is in the New England Journal of Medicine. They do actually report both. They're aware of the need to do this. And the numbers I think were 24% versus 41% or something-- 27%. So when they do the error-- when they do the validation based on not redoing this step here of choosing the gene set, there was a 27% error rate. But when they did it repeating this gene selection step, there was a 41% error rate, which is getting close to chance, flipping a coin. But in any event, so really on the basis of this, the Netherlands Cancer Institute that did this said that they were going to start using this 70-gene set to make clinical decisions. They're going to take new patients with breast cancer. Do a microarray experiment. Measure the 70 genes. Use this model, and make a prediction about good or bad outcome, and tailor their treatment according to that. Yes-- AUDIENCE: I actually had a question related to this and more [INAUDIBLE]. So the lab puts out a study like this, says here's 70 genes that we think are a good indicator of [INAUDIBLE] [? By the logic, ?] I want to validate these results, right? Now in gathering and collecting this data, like this ridiculous study, they're looking at patients over five years. You do a microarray analysis on a patient and they pop up positive or whatever for this indicator, now you've got people making clinical decisions on this. And it seems like the validation method is an [INAUDIBLE] withheld data. They need to have a bad prognosis or a bad outcome in order to say that these candidates-- for the patient-- if these 70 genes actually do have a negative effect, or indicate [INAUDIBLE] that effect has to be there. So how does a clinician or a clinical researcher, who kind of sees patients and does this, how do they balance that? I mean what would your suggestion or advice be? Does that make any sense at all? PROFESSOR: Well, you're talking about how do you do research on patients in this area? [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] how you [INAUDIBLE] patients clinically? AUDIENCE: Well, [INAUDIBLE] so we have this result, people need to validate it. In the process of validating, they're going to find people who have this particular pattern, right? And they need to follow those people over a certain time course. And in order to validate their own [INAUDIBLE] essentially, if none of the people who have these 70 genes, or expression profiles or whatever, develop breast cancer, then the model's kind of [INAUDIBLE] back to the drawing board. So essentially, and I hate to say it this way-- PROFESSOR: No, say what you're thinking. AUDIENCE: They need these people, in order to validate the model, they need them to progress right to disease state.
And yet at the same time, as clinicians-- so putting aside the researcher part-- as clinicians, they need to treat these patients. And if there's a strong indication from prior studies that the chances are you're going to get breast cancer and it's going to metastasize, and it's going to get really bad for you, they can't just not treat these people aggressively. And that treatment is going to affect that outcome. PROFESSOR: Well, people-- yeah, yeah, so I don't think anybody's doing-- AUDIENCE: You think about the Tuskegee-like experiments. PROFESSOR: Yes, so nobody's denying treatment or withholding best known treatments from patients. AUDIENCE: So how do you calculate that into the validation of the model? That if you get-- you can't-- because you can't say, well, they didn't have metastasis because of my treatment. PROFESSOR: There are a few issues. And you actually-- so in this New England Journal of Medicine paper-- here, I did not pass that one out. It's on the website. What they did is they used this model in a larger group of patients and they then compared predictions of this model to best clinical predictors. So for example, there's other predictive scales. And they used one called the St. Gallen scale, which predicts outcome in breast cancer. And they said, how does our model compare to the St. Gallen scale? If we take all of these patients and we see what their St. Gallen number was and how they did, and we take our predictions and how they did, which is better? And they concluded that their method was better than the St. Gallen method of making predictions. And that was really one of the principal-- the principal justification for saying we have a better method to make predictions about how patients are going to do. And we're going to start using this in the Netherlands at the Cancer Institute to make our decisions, rather than St. Gallen. And what you point out is actually a big flaw in research that does this, because patients had their St. Gallen scale determined when they were diagnosed. And depending on what their clinicians thought, they were treated accordingly, in order to achieve the best outcome. So if they were in a poor outcome group to begin with, based on St. Gallen criteria, they got treated more aggressively to try and make their outcome better. So actually, the goal of the clinician is for the St. Gallen scale not to be right on this group of patients. The interventions that took place on these patients, because it was not a prospective controlled study, the interventions that took place were specifically designed to mitigate the poor prognosis that the St. Gallen criteria indicated on these patients. And so it may not be a surprise that the St. Gallen predictor was not all that accurate later on, because people were intervening actively with the results of that predictor to change outcome. And that wasn't happening with gene expression profiles. Nobody looked at the gene expression profile of these patients 10 years ago and then decided how they were going to treat them over the next five years to make their disease better. AUDIENCE: So is it just that because for right now the turnaround on the analysis of the gene expression profiles is long enough that the St.
Gallen approach of aggressive treatment or whatever, that turnaround is long enough that you'll have your answer and your validation before clinical decisions start getting made, repeat experiments are done to cross validate-- PROFESSOR: There'll never be a-- I mean to get a new drug approved in this country for labeling requires that you take a group of patients with a disease, and you treat some of them with the drug and some of them with a placebo, and you see which works. To ever really know whether prediction models are going to work accurately, one needs to do the same thing, to take patients prospectively, to apply prediction models to them ahead of time, and to see what happens with them, without an intervention that's based on the results of the prediction. That's why I find a paper like this perspective here kind of scary-- the "Microarrays in clinical investigations" from today's journal, where the authors envision that in the future clinical investigations will consist of small trials with a high density of data, precise patient stratification according to expression profile, and highly tailored analysis of microarray data, otherwise known as massive overfitting of tons of expression data in a small number of patients here. And that's never going to tell you whether your predictions are correct. And if you're going to make decisions about treatment or predictions based on that, you won't be accurate. I don't believe that there'll be sufficient accuracy from small trials. I don't think that trials need to get smaller, or that trials can be made smaller, because we now have microarray data. In fact, trials probably need to be larger, because you have that much more of an underdetermined data set. AUDIENCE: If you're [INAUDIBLE] the reproducibility [INAUDIBLE] PROFESSOR: Well, so the Netherlands Cancer Institute, they patented their set of genes as a predictor of breast cancer. So, hoping-- right, well, hoping that they'll have the patent. And if they show that their stuff works, then people will come to them and they'll start wholesaling their type of arrays and their approach. Do you know that-- I haven't seen anything, Zack. I mean there was an article that they were going to start doing this in the summer of 2003. And it was in Nature. We're going to start doing this. And silence. Did they come to their senses? AUDIENCE: That's what I'm thinking must have happened. I think that probably somebody who actually knew this [INAUDIBLE] say, do you want to screw our patients over by doing this? I don't know that for a fact. PROFESSOR: Right. AUDIENCE: Yeah, what's an article [INAUDIBLE] AUDIENCE: Oh, yeah. AUDIENCE: I can't remember how many marketing [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: There's two issues here. One is whether the FDA approves these gene chips in general. We'll get back to that in a second. Narrowly, whether you use a [INAUDIBLE] formula to prescribe these kinds of [INAUDIBLE] on any platform. And I think as a responsible clinician, most of us would not right now necessarily stratify our patients based on the prognosis alone. You agree with that? PROFESSOR: Yes. AUDIENCE: It's been done, kind of, for cancer. And some oncology trials have been done that are kind of scary to me. [INAUDIBLE] been conducted. The way cancer care is done in this country is within academic centers and cooperative oncology groups where large numbers of individuals are treated in protocols. And those are now being stratified by [INAUDIBLE] for some studies. I can't say that I fully agree with that.
[INAUDIBLE] actually using the measurement [? technology. ?] Roche Diagnostics made the headlines about two months ago when they tried to get a chip, not for expression, but for genotyping mutations in the P450 proteins that clear toxic chemicals out of our body in the liver. They tried [INAUDIBLE] it's very important for pharmacogenomics because it'll tell you how fast drugs are cleared. Roche tried to get it simply through a waiver that essentially said this is just a method of measurement. We don't really have to get specific FDA approval. The FDA said no, no, no, hold it. Let's go through the formal approval process. So what Cecily is talking about is that it announced that another [INAUDIBLE] will see if they can repeat it the same way. And this is-- it's actually an act of valor. As some of you who are involved [? in HSP ?] should know, HSP is actually very much involved in helping the FDA figure out exactly how to look at this, because [INAUDIBLE] the FDA has only now developed the minimum standards of data submission from the pharmaceutical companies on the microarray data. They have no guidelines whatsoever on the standard identification of patients. So I think there's going to be several hops, skips, and jumps to get these kinds of measurements translated from research descriptions to clinical measures. It's still going to be used, by the way, to give you a-- it still can be used in a medical center without FDA approval as long as they're not sold. So for instance [INAUDIBLE] the genetics and genomics has contracted with Affymetrix for a sequencing chip for genes involved in hearing loss and cardiomyopathy. But [INAUDIBLE] is not selling this for general use. It's been used internally by its clinicians. So it's a very tricky process, the regulation of genomics-- in fact, [INAUDIBLE] will be giving a lecture about that [? later on. ?] PROFESSOR: Is Margulies giving that lecture? AUDIENCE: Yes. PROFESSOR: OK, that's all for today.
MIT_HST512_Genomic_Medicine_Spring_2004
Lecture_19_Modeling_and_Reverse_Engineering.txt
ZOLTAN SZALLASI: --like modeling and what modeling can achieve, or why people are doing it at all, or is it doable at all. Should one be interested in this, or is this a completely harebrained idea? Reverse engineering, where you really want to reconstruct what the system looks like-- what you are going to use for modeling. And one more issue that I really need to talk about-- but of course this is like-- nothing really exciting happened in biological modeling yet. People tried and applied the very same tools, methods, that have been around in modeling for decades. They're facing the very same issues-- very same problems. You have three coupled ODEs. Well, what's going to happen? Very quickly you're going to run into trouble. So what's the way out? This is really, really the hope, that the robustness-- as we know, I mean, life is robust-- can teach us some lessons that we can exploit if we want to do large scale meaningful modeling. So actually that's like the third point. Now for you guys, what's your background? AUDIENCE: Biology [INAUDIBLE]. ZOLTAN SZALLASI: OK, and yours? AUDIENCE: Bio and chem engineering. Biology and-- ZOLTAN SZALLASI: And-- and? AUDIENCE: And chemical engineering. ZOLTAN SZALLASI: Chemical-- OK. So good. So I mean for a pure biologist, to carry the point home that robustness is something important to look out for, that this might give the solution, is kind of new. For you guys, probably it's pretty obvious that if you do very large scale modeling, especially for chemical engineering, then these types of things are very important. I mean, you know how it is, right? If you think about all the chaotic systems like-- yeah? AUDIENCE: [INAUDIBLE]? I mean, I have some background, but it's [INAUDIBLE]. ZOLTAN SZALLASI: Oh, I will, I will. It's just an introductory slide. Is this important or not? For you guys, you will understand why this is important. For biologists, why would I care? So that's kind of the distinction. OK, so what are the goals of science? Of course the main goal of science-- and one can open a whole discussion-- this is predictive power. And the other goal is understanding [INAUDIBLE] and playing in a playground for social [INAUDIBLE] and so forth. But what we want to do is predictions. So how did biology evolve or how did it work? Obviously you had a black box. Something happened, right? You set certain conditions, perturbations. You get something that you can describe that's the living organism. And there are a few readouts. For example, you give a drug and the patient responded or not. Or like if there is no oxygen around, then for most organisms this means death. You do not really know-- and we didn't really know for a long time-- what's inside the box. We just play with this. We had certain inputs and we had certain outputs. Of course later-- biology during the past several hundred years or 100 years-- we started to learn that there is a very complicated intricate network inside the cell, which is like the unit of life. And if you want to do modeling, of course, you can do black box things and that's what brute force or exhaustive reverse engineering is about. That you do all sorts of inputs and outputs, and you can kind of guess what's inside. [CROWD CHATTER] But it's much more efficient if you-- I'm sorry. It really bothers me. [DOOR CLOSES] It's much more efficient if you have a pretty good idea of what's inside the box. Because then the modeling description is going to be much more efficient. So what's inside? Whoops. Sorry. Come in.
Now you all have some biological background. So there are genes, RNA, proteins-- proteins have certain activation states. These things are translocated. I mean, there is temporal and spatial information and that's going to set up your entire network. So what systems biology-- or at least some field of systems biology-- is aiming to do is to create a predictive mathematical-logical representation of the living organism. And that leads to the whole issue of what modeling is-- again, for you guys it's obvious. For biologists, it's not that obvious whether modeling is good or bad. Biologists do not really like the whole idea of in silico modeling. And that's kind of obvious-- I mean, that's understandable. But for you guys, acquainted with these backgrounds, you know very well that all we are doing in science, in most cases, is actually modeling. We are creating some sort of a mathematical representation of a system. Think about anything. I mean of course, if you describe the movement of the Earth and the sun, that's two mass points. That's a kind of representation, and that's going to give you pretty good predictions. But all we are doing in science is essentially some sort of modeling. So again, let me just mention this very quickly. This is for biologists, because biologists really abhor the whole idea of in silico modeling, although they started to warm up towards the whole concept recently. But biologists have been doing modeling from the very beginning of biology. If you had an experimental drug, you're not going to give it to the kid-- or little Johnny-- just to see whether it works or not. You're going to use an experimental model. Now there's a very strong underlying assumption here, which is, this model-- this animal-- is going to react to the drug the very same way as a child will-- which might be correct or incorrect. But some of you may recall or you might have heard of the thalidomide issue, when they just simply chose the wrong animal model. There is an animal model that would have produced the birth malformations. They didn't choose that. But the bottom line is that biologists have been using models, and they have no problem when they are using an animal when they are trying to model, for example, the human response. So a model can be an in vitro or an in vivo model. And of course, the alternative is an in silico model. OK, so this is like the intro for the biologist that you don't need to hear. So the first issue is: what is in the black box? And this is essentially, how do we represent biological knowledge in a way that could be used for modeling? And that's sort of reverse engineering of the intracellular regulatory network. So just to give you some estimate of how complex this thing inside the black box is, let me just give you some estimates that I was playing with. I've been giving a tutorial on systems biology for many years now. So it just tries to give people a feeling of how complex the system is. So the cautious estimate is that the number of interacting parameters is going to be on the order of a couple of hundred thousand, but less than a million. The way this comes out, you have 10,000 to 20,000 active genes per cell. Different post-translational modifications, different localizations-- each will have independent regulatory input for something else. It's going to be, let's say, three per gene in yeast, and three to six-- probably less than 10-- per protein, per gene, on average in humans.
So of course the whole thing may be way off in either direction, because if you have splice variants, for example, lots of antisense transcribed genes, this number might be way higher. On the other hand, if modularity really works well in biology, you don't really need to put every parameter in your model. You can work with appropriate modules, and that's actually one of the key hopes. But that's one of the main directions biologists try to move in these days, because that's still very large. The point here is the following. If you take these numbers and translate them into actual numbers of parameters and networks, for bacteria-- E. coli-- that number, the complexity of that network, is not much higher than the regulatory control mechanism for a jumbo jet. So actually the idea or the whole concept is coming from John Doyle. I'm just kind of stealing this sentence from him. John Doyle is at Caltech. He's a very famous mathematician. Actually, he's the one who worked out the theory behind stealth. Stealth is not supposed to fly. The way it looks, it's not supposed to fly. The reason it flies is that the control mechanism is so well-designed that it's actually going to remain in the air. But if you describe, like, the aerodynamic capabilities, it's not supposed to fly. So the point is that the control theory-- the control mechanism-- in a very complex engine these days is not much, much higher-- definitely not many orders of magnitude higher-- than, for example, for E. coli. So if you want to try very, very, very, very hard and work out all the details for E. coli, then one might try to do some sort of a meaningful dynamic modeling on at least the bacteria. And actually, there are many groups or many institutions-- some companies-- who are trying to do that. Now that's not an obvious thing. There will be still lots and lots of questions open. Like I'm going to talk about parameter optimization, which is-- of course, you cannot solve it. It's just a very, very, very computationally intractable problem. But the point is that if the way you design an airplane-- which has to be robust-- is very similar to the way an organism will work, then there are certain tricks that you can borrow from control theory that you can apply to bacteria. So representation of biology-- the simplest level, this is just-- I just stole the epidermal growth factor receptor pathway from the KEGG database. So the simplest level is, you have some sort of directed graph. Now this is obvious for you guys. For biologists, I point out that this is like the simplest level of biological representation. So of course you have the nodes that are proteins or genes or post-translational modifications or whatnot. Then there are the edges that are the regulatory interactions. Now you cannot do much with this. This is not a dynamic model yet. You need a more detailed description of actually what's happening between some parameters, and you need to make that dynamic by introducing time. And in this case, if you have a continuous differential equation, then you need some sort of a description of how things are going to change here. And of course you need certain types of parameters-- kinetic parameters. You need all these ingredients. Now this is obvious for you guys. Yeah. So when you do reverse engineering-- and biologists have been doing reverse engineering for many, many decades, essentially. When you try to identify a protein, that's essentially reverse engineering the individual nodes.
So that's what biochemistry, or genome and other large-scale node-identifying projects, are doing. Biochemistry-- protein interaction screening, two-hybrid, co-IP-- is trying to determine all the regulatory interactions-- all the edges in your graph. But you can actually start, and there are some efforts that try to do brute force reverse engineering. Why would you do that? I mean, these approaches are extremely time-consuming. Although there are smart ways-- two-hybrid-- you can speed up things. You can do things in a high throughput manner. But working out every single detail for a genome of 20,000, 30,000 things is going to take a while. So if you know parts of it and you're quite sure about some subnetworks here, you can at the same time start to apply brute force methods, which are, in a way, simple. You have consecutive time steps, and you try to determine a set of regulatory rules that can produce a gene expression pattern or parameter setting at the latest time point based on the previous time point. And if you do have many, many sufficiently diverse consecutive time points, then you can have a fairly good idea how things are going to regulate each other. Now the most widely used assumption is that you have some sort of an additive model. In this case, the state of a gene at a later time point is going to be determined somehow by the weighted sum of the states of another set of genes, plus some sort of bias factor. And you have some sort of skewing function there. It doesn't really matter what actually you're doing. There are different types of methods. The number crunching can be done different ways. The bottom line is that-- these are just individual methods that have been used-- that you need to determine all the bias factors and weight values. And you can just very simply think about it like this: you have many, many linear equations. And you need as many equations as bias factors and weight factors you have there, in order to do reverse engineering. Which was actually-- these are the exhaustive solutions, and that was actually done on the bound here. And what [INAUDIBLE], who's a very smart mathematician, actually showed is that if there are r unknown parameters in a set of ordinary differential equations, then you will need on the order of 2r-- on the order of r-- sufficiently diverse time points to do efficient reverse engineering. This kind of [INAUDIBLE], this is the number one would expect. But actually he proved it, so that's good. The point on this slide is that we should think about reverse engineering based on the assumption of how your network works-- or its network representation, if you think about, like, Boolean networks and different levels of connectivity. You can make estimates of how many different time points, how many different measurements, you need to do reverse engineering. Yeah? AUDIENCE: [INAUDIBLE] equates to the [INAUDIBLE]? ZOLTAN SZALLASI: Sorry? AUDIENCE: You said it equates to [INAUDIBLE]? ZOLTAN SZALLASI: In this case, no, no, no. This is the only one that has continuous differential equations. These are like Boolean representations. This is like an old slide. The point here is just showing that, of course, if you have a fully connected network, you cannot do it. If you have Boolean connectivity K, that means that your average number of inputs per node is K, then the number is much smaller. AUDIENCE: Oh, no, I think you mentioned [INAUDIBLE]. ZOLTAN SZALLASI: Oh, no. The linearity was simply-- this is just an additive model.
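To make that additive model concrete, a minimal sketch: for each gene, fit the weights and the bias by least squares from consecutive time points. The network and time series here are simulated placeholders, and the nonlinear skewing/squashing function mentioned above is ignored.

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_timepoints = 10, 40

# Simulate a hypothetical time series from a random linear (additive) network,
# just so there is something to fit; real data would come from expression measurements.
W_true = rng.normal(scale=0.2, size=(n_genes, n_genes))
b_true = rng.normal(scale=0.1, size=n_genes)
X = np.zeros((n_timepoints, n_genes))
X[0] = rng.normal(size=n_genes)
for t in range(1, n_timepoints):
    # small perturbations keep the time points diverse, as the lecture requires
    X[t] = X[t - 1] @ W_true.T + b_true + rng.normal(scale=0.05, size=n_genes)

# Reverse engineering: for each gene, least-squares fit of its next-step value
# as a weighted sum of all genes at the previous step plus a bias term.
A = np.hstack([X[:-1], np.ones((n_timepoints - 1, 1))])   # previous states + bias column
coef, *_ = np.linalg.lstsq(A, X[1:], rcond=None)
W_fit, b_fit = coef[:-1].T, coef[-1]

# With enough sufficiently diverse time points the recovered weights should be
# reasonably close to the true ones.
print(np.max(np.abs(W_fit - W_true)))
```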
In practice, no lab has produced data of sufficiently good quality, and a large enough data set, to worry about what the exact function is. What people are trying to determine at this scale is actually who might be regulating whom. The precise equation is simply out of the question at the moment. This just gives you a feel for the difficulties involved and for what you need. If you do not have enough independent measurements, then another way to go-- and biologists have been using it for a long time-- is to use perturbations. If you do individual, directed perturbations in the part of the network you are interested in, then you can do this type of measurement. The principle is very simple. Imagine that you have three genes-- A regulating B, and B regulating C. Then you create a perturbation matrix: you knock out individual genes, and this shows you, given this topology, how the individual downstream regulated genes behave. From this perturbation matrix you can create an accessibility matrix, which tells you who might be regulating whom. Of course, a given accessibility matrix can be associated with different regulatory networks-- both of these networks can be described by this accessibility matrix-- and then usually what people do is take the most parsimonious one and tend to believe it. Now, perturbation measurements are not working very well in practice. You can work on subnetworks, and it has been shown to work if you have a subnetwork that has been sufficiently well described, but nothing really new has been discovered by these methods on their own. Still, this is the gist of it-- this is the underlying principle of how people are trying to approach this. So if you do reverse engineering with these types of methods, given good quality data and enough computational number-crunching power, most probably we are going to get some sort of regulatory topology of the network. That's almost a given. And there are lots of groups producing knockouts for yeast, and they have started to produce some meaningful data sets. But if you do dynamic modeling, you would at least hope that the kinetic parameters in your model describe or approximate reality fairly well. So another aspect of reverse engineering is when you already have some idea about the kinetic parameters but you are trying to refine them. And that's what [INAUDIBLE] lab did. They essentially tagged every single protein in E. coli with green fluorescent protein. So GFP is just a tag, and it's on every single protein, and they do time series measurements on every single protein. It can be done in a high-throughput fashion because in every single well they know which protein is tagged. And if you do this, then from the measurements you can refine the actual kinetic parameters pretty well. The reason you need this is quite obvious. When biologists or biochemists determine kinetic parameters, that's usually done in free solution-- you grind up the bacteria, do something-- and that can be quite different from the actual kinetic parameters inside the cell. For one thing, in free solution you approximate things with a very large number of molecules, whereas in cells you sometimes have only 50 to 100 copies of the same protein.
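To make the perturbation-matrix idea from a moment ago concrete, here is a minimal sketch for the toy A → B → C cascade: starting from the direct regulatory edges, it computes the accessibility (reachability) matrix that predicts which genes a given knockout should perturb. The gene names and edges are just the illustrative example from the lecture.

```python
# Sketch: from direct edges to an accessibility matrix (who can reach whom).
import numpy as np

genes = ["A", "B", "C"]
direct = np.array([[0, 1, 0],    # A regulates B
                   [0, 0, 1],    # B regulates C
                   [0, 0, 0]], dtype=bool)

# Accessibility = transitive closure: i can influence j through some path.
reach = direct.copy()
for k in range(len(genes)):                      # Warshall-style closure
    reach = reach | (reach[:, [k]] & reach[[k], :])

# A knockout of gene i is expected to perturb every gene j with reach[i, j] True.
for i, g in enumerate(genes):
    affected = [genes[j] for j in range(len(genes)) if reach[i, j]]
    print(f"knockout {g} -> affects {affected}")
```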
So you might have a very different-- and it's not a free solution, it's a much more dense solution. So the efficient kinetic parameter can be very different. So reverse engineering can give you much better estimate so we can refine the estimates on the actual category. AUDIENCE: [INAUDIBLE] localized? ZOLTAN SZALLASI: As well, yes. I mean the point is that if you do-- you want to start with the best parameter set. And in this case, it's-- yes, localization. Of course it's important. I think about it at membrane proteins. I mean things that will happen in the liquid phase is going to be very different from the free water solution. So [INAUDIBLE]. So let's assume that one went through all the difficulties and have a fairly good description of the topology and the regulatory interactions, and then you have good quality kinetic parameters. So should one try-- yeah. AUDIENCE: Just have a question. ZOLTAN SZALLASI: Yeah. AUDIENCE: Two things that stand out. [INAUDIBLE] the schematics slide you had before. That one, yeah. ZOLTAN SZALLASI: This one, yeah. AUDIENCE: There are two things about this that stand out. Assuming [INAUDIBLE] I'm sure that some of the GFP tagged proteins actually were lethal [INAUDIBLE] right? ZOLTAN SZALLASI: Some, yeah. AUDIENCE: Because they included function or [INAUDIBLE] critical [INAUDIBLE]. And other proteins, obviously because many of them [INAUDIBLE] large protein, will probably reduce-- or have reduced activity or whatever. And both of those actually go towards-- so if we want to understand parameters or the kinetic parameters for gene regulation or whatever, I'm going to seriously bias the information that you get out of the [INAUDIBLE] much the same way that having the solution [INAUDIBLE] parameters [INAUDIBLE].. How do they address that? ZOLTAN SZALLASI: They didn't. AUDIENCE: They didn't address that? Oh. ZOLTAN SZALLASI: At this stage, they were happy to see that they can produce something in a high throughput fashion. Your points are absolutely valid and correct and important. What they do is, they do another expression at an individual protein level. Where they say that if I rewire-- and they took a sub-- this was not done in the actual publication for the entire network. They just took that part of the network that describes how the flagella are assembled. And if you use these refined more these kinetic parameters-- that actually were significantly different from some of it that was measured before-- then you have a much more accurate description of the dynamics-- how the flagella are assembled. So they are kind of proving at the level of the overall functional module. AUDIENCE: Do you see a possibility in that case-- assuming that two proteins, particularly the network [INAUDIBLE],, are important for genetic function. Assuming that those are tweaked by the GFP in different ways so that maybe the implication of looking at the GFP tag on one protein [INAUDIBLE]. ZOLTAN SZALLASI: So the point is-- OK, I see. AUDIENCE: Can you [INAUDIBLE] de-convolute the amount of error? ZOLTAN SZALLASI: Yeah, yeah. So what they were actually doing was, the GFP is not really trying to measure the protein. It's measuring the activity of the translation. So when you have a GFP tagged protein for, let's say protein A, then what they are really interested in is how fast that protein is activated by the non-tagged transcription factors. And what they were interested in, the parameters they wanted to refine, is the activation of that gene. 
So the GFP protein is simply a-- AUDIENCE: So it says nothing about-- ZOLTAN SZALLASI: --a marker. AUDIENCE: It says nothing about the actual-- ZOLTAN SZALLASI: If you have a very quick, very immediate feedback loop, and it interferes with that, yeah, that's a problem. But if you do not have that, you have much larger feedback for many, many, many different proteins. All they wanted to see, how that actual protein is activated. And the activators of that transcription unit is unaffected. There is no GFP tag on those proteins. AUDIENCE: Well I think my question-- and I'll just modify it [INAUDIBLE] back and forth. Is there a way to-- so let's say there are two competing regulatory proteins for transcription. ZOLTAN SZALLASI: Yes. AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: Of a certain protein that is GFP labeled, right? AUDIENCE: And you get differing regulation of-- I'm sorry. I screwed that up. Two proteins that are GFP tagged that you know are [INAUDIBLE] and you get different regulation of those genes-- ZOLTAN SZALLASI: Yes. AUDIENCE: --from this. Do you assume that there's a way to actually de-convolute what the actual right-- like, so there's an error associated with the tagging [INAUDIBLE]. ZOLTAN SZALLASI: Yes. AUDIENCE: And you're going to get two different expression profiles for both of those proteins in each experiment even though only one is tagged. ZOLTAN SZALLASI: Well, the only way you would do it is actually you take two different tags. Probably that's the best way to go. From one set of measurements, you can attempt those things. But-- AUDIENCE: Would that give-- is it possible then to get information out about the degree of perturbation of the actual system from using that double tagged system? Do you understand-- ZOLTAN SZALLASI: Yes. AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: I'm not quite sure whether it's worth the effort. What they are doing is-- as I said-- AUDIENCE: Because right now they don't have an estimate on how accurate their-- ZOLTAN SZALLASI: Yes. AUDIENCE: --parameters are, right? ZOLTAN SZALLASI: Absolutely. So what they do is they try to estimate whether their effort is worth or not based on the outcome of the entire modeling effort, and with issues like whether the description is a robust network or not. Every single measurement will give you a somewhat biased parameter. So if you measure a single parameter in different settings, they will always be different. What is the error there? The point is that with this way, you can refine parameters to some extent. And that was a significant improvement. Sometimes it is like an order of two decimal difference from what they thought it was before. Based on that, they have a more accurate description of the system. Now within certain error range, very different parameters would give you the very same result. This is what you expect from robustness. We are getting the error. You just missed the first couple of slides when I said-- if you keep playing with parameters, you can very easily overfit your entire system. So I'm not quite sure whether that's-- yes, for individual process, you can do it. And obviously, if you do very different taggings and very different types of measurements for the same proteins, then in some way you can de-convolute-- or at least you can catch very obvious contradictions. And you say, well for this parameter, I have no idea what's going on. Or for this parameter, I need to measure it individually. 
Because if you develop an antibody or some other measurement technique, then you can probably measure it much more accurately. So probably that would be the best way. For most proteins, you probably won't have that many problems. AUDIENCE: I see. So within the network, the parameters are defined in a way that it doesn't matter to the individual parameter [INAUDIBLE]-- ZOLTAN SZALLASI: Exactly. AUDIENCE: --how they combine [INAUDIBLE]. ZOLTAN SZALLASI: Yes. AUDIENCE: And making sure that that [INAUDIBLE]. ZOLTAN SZALLASI: Yes, yes, yes. And that's the whole gist of large-scale modeling: how much effort do you need, how much information do you have to put in, to get some sort of reasonably accurate prediction? OK, so let's move on to that. What do you need to do for modeling? The obvious steps: you have to ask-- this is step zero-- interesting questions. You need to collect good quality data. You need to create an appropriate mathematical or logical representation of the system-- that's what we have talked about so far. You need to run simulations and test how well the simulation fits the data. You test for robustness or for other network properties. You update the model based on how well it describes the data, and then you use it. So if you-- AUDIENCE: Quick question. Can you define robustness? ZOLTAN SZALLASI: We are getting there. AUDIENCE: OK. ZOLTAN SZALLASI: Well, it's the relative insensitivity of the behavior of the entire network to individual parameter values. That's roughly the definition. So you have a network. Now you still have to ask another question: how does this network really behave? Essentially your question is, what is this equation here? In most cases what people use-- and these are the only models that have really been employed in any realistic way to describe biological systems-- is ordinary differential equations. Of course there are other ways; you can use difference equations with discrete time steps. The point is that these are deterministic systems. So the question is, is biology a deterministic system? Or at least, is it true that a deterministic description of a biological network is going to give you something meaningful? A deterministic system-- you all know what that is: there is only one possible outcome from a given previous state and parameter setting. But biology, as we know, is most probably not that deterministic. Actually, we know for a fact that individual biological reactions are always stochastic. How the biological network as a whole behaves is another question. But for individual reactions, you know that you have very few molecules-- fewer than 400 copies of a transcription factor in a nucleus, say-- which is quite far from a well-mixed bulk solution, and reactions can often be slow. So there are all the hallmarks of stochastic reactions that chemical engineers are all too familiar with. In that case, you have to do a different type of modeling. You can use stochastic differential equations, where you add a stochastic noise term to the differential equations-- which is kind of cheating. The real thing is what Gillespie introduced a long time ago-- like 30 years ago-- where you actually do stochastic modeling of individual chemical reaction events. That is the thing that is closest to reality. Of course, the problem is that the computation is so intensive that unless there is a good reason you need this type of modeling, it is not really an economical approach.
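As a concrete illustration of the Gillespie-style simulation of individual reaction events just mentioned, here is a minimal sketch for a birth-death toy model of one protein (constant production, first-order degradation); the rate constants are made up.

```python
# Minimal Gillespie-style stochastic simulation of a birth-death gene expression toy.
import numpy as np

rng = np.random.default_rng(1)
k_prod, k_deg = 5.0, 0.1     # production per time unit; per-molecule degradation
x, t, t_end = 0, 0.0, 100.0
trajectory = [(t, x)]

while t < t_end:
    a_prod, a_deg = k_prod, k_deg * x        # reaction propensities
    a_total = a_prod + a_deg
    t += rng.exponential(1.0 / a_total)      # waiting time to the next reaction
    if rng.random() < a_prod / a_total:      # pick which reaction fires
        x += 1
    else:
        x -= 1
    trajectory.append((t, x))

print(trajectory[-1])   # fluctuates around k_prod / k_deg = 50 molecules
```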
Now there are other tricks people are trying. For example, John Doyle at Caltech is trying, with [INAUDIBLE] who came back from retirement, to do something called tau-leaping. The full stochastic simulation is a really detailed description of how molecules hit each other and how the chemical reactions evolve. You can tell what's going to happen in one step if you do stochastic modeling-- but can you guess where the reaction is going to be in, let's say, 10 steps? So they're trying to cheat: if you think about it, the ODE describes very simply what's going to happen, the stochastic simulation jumps around that trajectory in some way, and you can have alternative outcomes of the same reaction. With tau-leaping, instead of simulating every single event, they jump from here to here and ask: what is the probability that the system is going to be in this state in, let's say, 10 time units? So this is, again, a wide open question. The big question is whether it is really important to think about biology and model it in a stochastic fashion. Let me just show you an example from real experiments showing that, yes, it is important. There was a very nice experiment in which they created a plasmid encoding two proteins, tagged with two different fluorescent proteins. This was a very carefully controlled experiment: the two proteins were expressed from the same plasmid in every single bacterium, driven by the very same promoters, so everything was in principle identical in every single cell. You would therefore expect the relative expression of these proteins to be the same in every single cell-- every cell should show the same mixture of red and green. That was the expectation. Instead, what you see is that there are very red, very green, and yellow cells, showing that individual cells take very different paths. There will be cells that have much more of one of the proteins than the other, even though everything was identical-- as much as we can control experimentally. So we know stochasticity is important. We know that differentiation is probably driven in part by stochasticity. You have a parent cell, and after division you have two cells that go down two completely different paths-- one remains a stem cell, the other one goes the opposite direction. And that might simply be driven by how certain proteins are distributed randomly between the two daughter cells during cell division. It is known, actually, that for many differentiation steps the decision whether a cell goes one way or the other is probably a stochastic process. Now, whether this is important for modeling the cell cycle, cell growth, and other features, we do not know. Yes? AUDIENCE: Might be a silly question. But how do you define stochastic? Is it just randomness or-- ZOLTAN SZALLASI: Oh, it's a very good question. So-- well, in a way-- OK. How would you define stochasticity? Again, this is about the representation of scientific knowledge. We describe temperature with a single number, although you know very well that-- AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: Yeah, you have individual kinetic energies of all the molecules. If you have many molecules, it doesn't really matter-- you just give the temperature. If you have 10 molecules, the whole issue is very different.
So stochasticity means you go down one more level of detail in describing the individual reactions, which happen with a certain probability. You can call it randomness, but the point is that most of these events, as we understand them, occur with a certain probability. So probabilistic descriptions of individual molecular reactions are much more accurate and much closer to reality than continuous ones. Continuous differential equations are nice, and Newton and Leibniz invented them to do useful things, but I don't think even they thought this was a perfectly accurate description-- well, probably they did-- but for a long time we have known it is not, certainly not since quantum chemistry and physics came around. So once you have a description, you decide which way to go. This slide just showed you that we have evidence that stochasticity is present in biology-- we have lots of evidence; I just showed one experiment. Whether it is worth the effort for modeling, rather than going with the deterministic description, is an open question. It probably depends on what your question is. That's why it's very important that you formulate your question appropriately. Now, once you have decided-- so you have the network topology, kinetic parameters, equations, everything-- then you actually start to run simulations and see what happens. Let me show this example, which can actually describe a certain biological phenomenon, and where you can formulate useful hypotheses based on the model. This is nucleocytoplasmic transport. It is an active process: proteins get into the nucleus and are kept there, and that is driven by a dynamic regulatory network involving a group of proteins. The central protein is the Ran protein, which is a small GTPase-- it uses GTP as an energy source, so it's an active process. This is what actually moves things in and out of the cytoplasm. When people started to think about modeling, they had a strong assumption-- everybody does-- that if you have a network that is not very stochastic, that is sufficiently isolated from the rest of the network, and that we understand well enough, then that network can probably be described pretty well with a mathematical model, and you can probably make pretty good predictions based on it. The people who did this work were experts in the nucleocytoplasmic transport system, so they knew there were probably not that many blank spots in this network. What they did was take all the kinetic parameters-- these had already been measured several times; this is not such a large network-- build the model, and start to do measurements. They measured quite accurately how quickly proteins enter the nucleus: here is the nucleus, it is empty, and then the protein accumulates in it. So they can measure that quite accurately. The first question you ask is, does the model produce time courses that fit the data? They did all these experiments-- these are the measurements and these are the predictions-- and yes, it seemed that the model produced fairly accurate descriptions. So that is good news. The next question you ask is, OK, I have described a certain experimental setting, but is the model robust? How sensitive is it to the parameters?
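Before moving on to the robustness question, here is a minimal sketch of the "does the simulation fit the measured time course?" step. This is not the published Ran transport model; it is a deliberately tiny one-parameter import model compared against hypothetical noisy measurements.

```python
# Sketch: simulate a toy nuclear-import ODE and compare it to a "measured" time course.
import numpy as np
from scipy.integrate import solve_ivp

k_import = 0.15                 # assumed import rate constant (made up)
N_max = 1.0                     # normalized plateau of the nuclear signal

def model(t, N):
    return [k_import * (N_max - N[0])]   # nuclear signal relaxes toward its plateau

t_meas = np.linspace(0, 30, 16)
rng = np.random.default_rng(2)
# Hypothetical "measured" nuclear accumulation with a bit of noise:
y_meas = 1.0 - np.exp(-0.15 * t_meas) + rng.normal(0, 0.02, t_meas.size)

sol = solve_ivp(model, (0, 30), [0.0], t_eval=t_meas)
residuals = y_meas - sol.y[0]
print("rms misfit:", np.sqrt(np.mean(residuals**2)))
```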
That is, with the right ratio of parameters to measurements-- I'm sure you understand this-- you can always overfit the system. You can always find a set of parameters that will give you the right result. Now, is that anything useful? Probably not, because what you'd really like to do is predict: what is going to happen if you take a very different parameter setting, or put in a new drug or a combinatorial treatment? So what these people did-- and in this case you need to do-- is check for sensitivity. For robustness, you check the outcome, the behavior of the system, whatever you measure-- in this case, for example, how quickly things get into the nucleus-- and you start to play around with every single parameter and ask: over what parameter range do you still get the same result, say within a 10% error? So these are all the parameter settings-- these are individual perturbations of individual parameters that still give you the same result within a 10% error. That's the next step. And if you see that it looks fairly good, then you start to trust that your description is fairly robust. That's cool. The next thing you need to ask is whether the model can produce something useful-- for example, a testable hypothesis. And that's what these guys did. They said, well, the model predicts that if you knock down this RCC1 protein, which helps Ran perform its function, then the system has to behave in a certain way. And that's what they did: they used a temperature-sensitive mutant to knock that protein out. This is the measured kinetics of proteins getting into the nucleus after the knockdown, and this is the simulated one. So in this case the model describes the system fairly well even after you introduce a very profound perturbation-- knocking down a protein. So these are the key steps you want to go through. And if you have all of these, then you can have fairly high confidence that your model is describing at least that subnetwork fairly well. AUDIENCE: [INAUDIBLE] the prediction model [INAUDIBLE]? ZOLTAN SZALLASI: Well, this is the part of-- I'm losing you. Thank you. This is the simulation, which you have on the bottom. AUDIENCE: OK, I guess-- I guess-- ZOLTAN SZALLASI: So you have two simulated states, right? The RCC1 protein is present or absent. That's what you have there. And you actually measure these things in the presence or absence of RCC1. AUDIENCE: So are you able to predict like-- without actually speaking to examiners-- what is also going to be before you actually do the experiment or [INAUDIBLE]? You kept saying, like-- can you actually predict that curve without re-tweaking the parameters? ZOLTAN SZALLASI: This is not re-tweaking the parameters. AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: You build the model, you check for robustness, and then you do not touch the model anymore. Right? Then you start to do in silico experiments, asking: without RCC1, how does the system behave? And then you actually measure whether the prediction was correct. And after you are done with five of these experiments, you say, I have a fairly good model, so I can start to play with it and ask questions. Introduce random changes-- which change is going to speed things up? The in silico model gives you certain answers, and then you can test those.
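Here is a minimal sketch of the one-parameter-at-a-time sensitivity check described above: scan each parameter over a fold-change range and record where the model output stays within 10% of the nominal value. The "model" is a stand-in steady-state formula, not the actual transport model.

```python
# Sketch: one-at-a-time parameter sensitivity with a 10% tolerance on the output.
import numpy as np

def output(params):
    # Stand-in for "run the simulation and summarize it with one number":
    # here, simply the steady state of a birth-death model, k_prod / k_deg.
    return params["k_prod"] / params["k_deg"]

nominal = {"k_prod": 5.0, "k_deg": 0.1}
y0 = output(nominal)

for name in nominal:
    ok = []
    for factor in np.logspace(-1, 1, 41):            # scan 0.1x to 10x
        p = dict(nominal)
        p[name] = nominal[name] * factor
        if abs(output(p) - y0) <= 0.1 * abs(y0):     # within 10% of nominal
            ok.append(factor)
    print(name, "tolerated fold-change range:", (min(ok), max(ok)) if ok else None)
```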
But the point is-- and that's going to be the last slide in this lecture-- that you're trying to speed up the discovery process here. Of course, in the end you still need to do the experiment every single time, but you can go through a much larger number of possible experiments in silico than in vitro or in vivo. So I've introduced this concept of measuring robustness-- robustness being the relative insensitivity of a network's behavior to its parameters. Now, this was a relatively small network, still only a modest number of parameters. But what people are trying to do-- and there are companies that try to do this; there is a company my friends started five years ago, called Gene Network Sciences, run by physicists, solid-state physicists-- they say, well, we can model very large things. These guys actually build networks on the order of 1,000 or 2,000 interacting things-- parameters, proteins. They use ordinary differential equations and they do forward modeling, forward simulations. But if you think about this whole robustness issue, or about network and parameter fitting, it's going to be a problem. These are the time series data predicted by the model, and you can actually do measurements on these things, right? So you can start to see how well the simulations fit the experimental data. Now, if you think about it, you have very many parameters here-- thousands of parameters-- and they can change in all sorts of combinations. When you try to fit your model to the experimental data, if you loosen one parameter here, you have to tighten another one there. So this is very complicated-- computer scientists will see it right away: this is an intractable problem. Even if each parameter could take only two values and you have n parameters, you would have to go through an exhaustive [INAUDIBLE]-- so it's not doable. So they also use things like genetic algorithms or whatnot. But nobody has any real idea right now about the landscape of this parameter space. Does it really have certain optimal states, or is it all local optima and we're just playing around? You'd like to know these things before you trust your model. You can do relatively exhaustive searches for small networks, but once you enter the realm of realistic-size networks with a thousand parameters, you have no idea where you are. So that is where the whole issue of robustness comes in. If we understand what robustness means in such a network, maybe you do not need to worry about the actual parameter values that much, because within certain ranges the system has to behave the very same way. And there is some evidence for this. There was a very nice paper from Garrett Odell's group, where they took all the knowledge we have about the segment polarity network of [INAUDIBLE]. So this is all the knowledge we have: all the proteins that play a role in it and all the regulatory interactions. They described all of this, put it into MATLAB, and started to play with it. And they found two striking things. One is that for this network, the parameter values essentially do not matter: over a range of two to three orders of magnitude, you could change any parameter setting and you still get the same outcome.
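A minimal sketch of that "vary parameters over orders of magnitude" test: sample each parameter log-uniformly over two orders of magnitude and count how often a qualitative outcome is preserved. The toy model and threshold here are invented; the real study did this with the full segment polarity equations.

```python
# Sketch: ensemble robustness test with log-uniform random parameter sampling.
import numpy as np

rng = np.random.default_rng(3)

def qualitative_outcome(k_prod, k_deg):
    # Stand-in for "does the model still produce the correct pattern?"
    return (k_prod / k_deg) > 10.0

n_trials, robust = 2000, 0
for _ in range(n_trials):
    k_prod = 5.0 * 10 ** rng.uniform(-1, 1)     # 0.5 .. 50
    k_deg = 0.1 * 10 ** rng.uniform(-1, 1)      # 0.01 .. 1
    robust += qualitative_outcome(k_prod, k_deg)

print("fraction of random parameter sets with the same outcome:", robust / n_trials)
```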
Also, this network behaved in a robust way only if you introduced two additional regulatory interactions-- these two guys-- and when they looked harder, they actually found evidence that those are there. So the important message of this work was that maybe it's mainly the topology that determines how your network behaves. People-- like Andrew Murray here at Harvard-- have tried to de-convolute such networks. One of the questions is: if you have a network that's robust, is it working the very same way we understand robustness in control theory-- modules, redundancy, feedback loops-- or is there some other interesting topology at play that evolution developed or invented? They tried to do this, but still, the essence of this study was that the parameter values can change a lot and you still get the very same outcome. So if this is true for many large biological networks-- and whether this is the way they are always built will take a lot of work and a lot of independent confirmation-- then you do not really need to worry that much about individual parameter fitting once you understand the topology and have some sort of approximate feel for the individual parameters. So that's good news. And also, they could make predictions. So robustness is one of the key properties of a living organism. And-- oh, this is the kind of semi-official definition. OK, so how does this work? As we understand it from control theory, robustness is derived from three things-- the jumbo jet again: you have feedback loops, you have redundancy, and you have modular design. If you apply these things in the right combination, in the right way, involving lots and lots of complicated math, then you probably have a very robust system. We do not know whether some other design is at work in biology. In principle, that's possible. Lots of issues have been raised-- for example, that you might have a power-law distribution of regulatory interactions, if you think about this graph description, like scale-free or small-world networks. You might have heard a lot about these things. Now, those results do not translate in any direct way into dynamic behavior, so nobody knows whether this makes any sense or not. But I'm just throwing it out there: maybe, yes, maybe if you have some sort of power-law distribution of regulatory interactions, that's going to be the key-- or some other sort of design. Nobody really understands why the World Wide Web works. It's not supposed to work. It works, but nobody understands why. There are some ideas, but nobody can really describe the dynamics, and it's not supposed to work the way it is usually described. Whether the workings of the World Wide Web could be de-convoluted into these categories, into these terms, or whether there is some self-organizing evolutionary principle at work there, nobody knows. Lots of people, like [INAUDIBLE] and some others who are betting on this, think that there is some kind of self-organizing evolutionary principle at work. Nobody understands, in concrete terms, what that means. But as you can see, there is kind of a philosophical or almost religious issue here: there is control theory, the way an engineer puts together a robust system, and there is evolution. Are the two essentially the same, or is there a difference? So how can you use robustness? You can exploit it for biology. There were some very nice people from [INAUDIBLE] who did this-- actually they took Drosophila again. It's kind of interesting.
Bone morphogenetic proteins, which tell your cells which way to develop or not to develop, are distributed spatially in a well-determined way. So for example, this is one such protein, and this is the whole Drosophila egg. Some proteins are expressed only at the midline. These proteins tell the animal what is top, front, back, ventral, dorsal, midline, lateral. If you look at this, it's odd-- you'd expect things to spread out. So how can this happen? This is the real distribution, going from the midline out to the lateral regions. How can this happen? These gradients involve very, very few proteins-- three or four-- and they can be described in a kind of one-dimensional way, unfolding the pattern along the animal. So this is the distribution from the midline outward. And what they did was this: this is what we know about the system, and we do not really know the parameters. So let's generate a very large number of random networks. They generated about 60,000 random networks-- you can do it, it's a relatively small network, a couple of partial differential equations-- and then ask the question: what are the common characteristics of all the robust networks relative to those that are not robust? They came up with a handful of observations: [INAUDIBLE], which is a protein, does not diffuse; [INAUDIBLE], which is another protein, is not cleaved-- this thing has to break down. And only the observed distribution is robust-- there was no distribution that was flat and robust. And we are talking about a very large number, tens of thousands, of randomly generated networks that retained the overall wiring-- the topology-- while the parameter values were randomly changed. So they came up with these two hypotheses, showing that robust networks must have these features, and they could actually verify this experimentally, which is very, very nice. The idea, again, is that you can play around with these networks, ask what the common features of the robust ones are, and that will teach you something about the actual biology. So this is modeling real networks with differential equations. Now, you understand that until we really figure out how the parameter fitting works and how the system works, we cannot really do very large-scale modeling. But other people took alternative approaches. They said, well, maybe we can understand something generic about these networks if we take very rough models-- like Boolean networks: genes are on or off and they regulate each other. [INAUDIBLE] and [INAUDIBLE] Glass started this in the '60s and played around with it. They wrote interesting books and became very popular. They were not really part of mainstream experimental science, but nevertheless they produced some very interesting suggestions that keep coming back. So let me mention a couple of things briefly. As I said, these Boolean networks never had much to do with a realistic description of biology-- they're very, very far from that. And in the '60s and '70s they made big errors in their analysis; they badly undersampled the gene expression space and so forth. But they introduced a couple of concepts which, while not exactly new, they showed to be important.
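Returning to the gradient screening described a moment ago, here is a minimal sketch of the kind of one-dimensional model involved: a single species produced at the midline, diffusing and being degraded, stepped to its steady-state gradient. The real models couple several proteins; all numbers here are invented, and a robustness screen would rerun this with many random parameter sets.

```python
# Sketch: steady-state gradient of one morphogen produced at the midline (1D).
import numpy as np

n, dx, dt = 101, 1.0, 0.05
D, k_deg, production = 1.0, 0.05, 1.0
c = np.zeros(n)
source = n // 2                       # production restricted to the midline cell

for _ in range(20000):                # crude explicit time stepping to steady state
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c += dt * (D * lap - k_deg * c)
    c[source] += dt * production

print("peak at midline:", c[source], " value 20 cells away:", c[source + 20])
```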
The idea is that if you have a Boolean network, where things turn each other on or off, then it's a deterministic network. Depending on the average number of inputs per gene, per parameter, the system can display chaotic or ordered behavior. Chaotic behavior means that the system just keeps wandering around the whole gene expression space. Ordered behavior means that you have limit cycles or attractors-- limit cycles in continuous differential equations; in Boolean networks you have attractors. Which means that it doesn't really matter what the initial setting is, in terms of which individual genes are on or off: the system tends to fall into attractors. This is not new-- people knew such things exist-- but the concept that these attractors can display very robust properties is still very important. Now, is this important or not? At that time, [INAUDIBLE]-- he's a very interesting guy; he didn't do any experiments-- came up with a set of papers in the '70s, and what he did is intellectually very entertaining. He suggested that differentiation states are essentially different attractors, and that during malignant transformation cancer cells go from a normal attractor towards a new, malignant attractor. And then they will stay there whatever you do-- or you have to change a lot of things to drive them back to normal. Now, there might be some evidence that this is actually true. You heard about gene expression measurements. What [INAUDIBLE] group did was take breast cancer samples from patients before treatment, after treatment, and from metastases of the same tumor in the same patient, and they measured gene expression patterns. Now, if you look at cancer-- you might have heard about this or be aware of it-- [INAUDIBLE] completely messed up genome; chromosomes recombine in a fairly random fashion. So you would expect that the same tumor is going to keep changing a lot. Well, based on this, it seems that this is not the case. A tumor in a given patient is always going to be very, very similar to the tumor in the same patient at a different time point or in a metastasis, whatever you do. It's kind of the same-- AUDIENCE: In terms of gene expression. ZOLTAN SZALLASI: In terms of gene expression, I'm sorry-- gene expression patterns. And they are going to be very different from somebody else's. So it seems that once a tumor develops, it finds a new stable state and it's going to stay there. It's not going to wander about. And there is more evidence-- different labs work with different cell lines, and certain cell lines that have been established for a long time, unless you introduce really huge genetic changes, stay pretty similar to each other. They are always much more similar to each other than to anything else. So it seems that the genetic network-- and this kind of makes sense-- finds a new robust stable state and then tends to stay there, because after individual small perturbations the system settles back into the same state. So this is pretty similar to what they suggested: you have strong attractors. Now, is this useful? Not this representation by itself-- we do not know. But you can ask questions like: how many perturbations do you need to introduce to drive the system from one attractor to another?
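Here is a minimal sketch of the Boolean-network picture: a small random network with K inputs per node, iterated deterministically from every initial state to enumerate its attractors. One could use exactly this kind of toy to explore the attractor-switching question just posed; the network is random and purely illustrative.

```python
# Sketch: enumerate the attractors of a small random Boolean network with K inputs per node.
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n_genes, K = 6, 2
inputs = [rng.choice(n_genes, size=K, replace=False) for _ in range(n_genes)]
tables = [rng.integers(0, 2, size=2**K) for _ in range(n_genes)]   # random Boolean rules

def step(state):
    new = []
    for i in range(n_genes):
        idx = 0
        for j in inputs[i]:                 # encode the inputs of gene i as a table index
            idx = idx * 2 + int(state[j])
        new.append(int(tables[i][idx]))
    return tuple(new)

attractors = set()
for start in product((0, 1), repeat=n_genes):       # exhaustive only for a tiny network
    seen, state = {}, start
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    cycle_start = seen[state]
    cycle = tuple(sorted(s for s, idx in seen.items() if idx >= cycle_start))
    attractors.add(cycle)

print(len(attractors), "attractors found")
```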
And if it turns out that for a large network you cannot do it with fewer than 100 perturbations, well, that would suggest a very different therapeutic strategy than if you can do it with one. Or, which things do you need to change-- the highly connected ones or the ones with intermediate connectivity? So these representations might teach you something about the robustness of the network. That's why I wanted to mention it. I also want to warn you about a couple of things. When we talk about modeling, we always talk about a single cell-- what's happening in that cell and which way it's going to develop-- and we shouldn't forget that whatever measurements you do, you almost always work with population-average data. This leads back to the stochasticity issue. Let me just show you this very nice little experiment by James Ferrell, where they measured the Jun kinase activity of individual Xenopus oocytes. He actually had to do much of this experiment himself, even though he's a well-established full professor, because there was no student in his lab willing to do it. What they did is measure Jun kinase activity upon progesterone treatment. This is the dose-response curve in a population of Xenopus oocytes, and it shows a very nice graded dose-response curve, with a Hill coefficient of about 1. The problem was that this was very far from what he thought was going on. So he wanted the Jun kinase activity measured on individual oocytes. That's a lot of work-- instead of running a single [INAUDIBLE], you have to run 100 different [INAUDIBLE] for individual Xenopus oocytes-- so he started to take individual oocytes and do the experiment. He had a very strong expectation of what he would find; that's why he invested all the time. And what he found is that an individual oocyte is essentially either fully inactive or fully activated, with a very sharp dose-response curve-- with a [INAUDIBLE] coefficient of close to 100. There is a strong positive feedback loop there. So the point is that what you see in the population data is an average, because each oocyte is activated at a slightly different concentration, while what you have in a single cell is a strong positive feedback loop and a switch. So when you describe a system and try to do reverse engineering from population-average data, your reverse engineering is going to give you a [INAUDIBLE] coefficient of 1 and a description that claims this is what reality is-- and that is the description you would then put into your model, even though it does not describe the single-cell system correctly. So this is another warning about how complicated biology is: most measurements we do right now involve grinding up a lot of cells and doing some sort of population-average measurement. AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: Yep. AUDIENCE: Could it be hypothesized that [INAUDIBLE]? ZOLTAN SZALLASI: Oh, yeah. Actually he de-convoluted it and published another paper. He showed exactly why you get that. There is a feedback parameter you can tweak, and you can actually show that you can completely change the hysteresis curve. AUDIENCE: Again, that's very similar to what [INAUDIBLE] stable [INAUDIBLE]. ZOLTAN SZALLASI: Yes. AUDIENCE: And like high induction [INAUDIBLE]. ZOLTAN SZALLASI: Absolutely. AUDIENCE: And low induction [INAUDIBLE]. ZOLTAN SZALLASI: Absolutely. The difference there is that this is a real network.
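To illustrate the population-averaging point from the Ferrell example, here is a minimal sketch: every simulated cell switches with a very steep Hill curve but at its own threshold, and the population average comes out looking like a shallow, graded response. All distributions and numbers are invented.

```python
# Sketch: averaging many steep single-cell switches yields an apparently graded response.
import numpy as np

rng = np.random.default_rng(5)
doses = np.logspace(-2, 1, 30)
n_cells = 200
thresholds = rng.lognormal(mean=0.0, sigma=0.6, size=n_cells)   # cell-to-cell variation

def hill(x, ec50, n):
    return x**n / (x**n + ec50**n)

single_cell = hill(doses[:, None], thresholds[None, :], n=20)   # very steep per cell
population = single_cell.mean(axis=1)                           # what a lysate would show

# The averaged curve rises over roughly two orders of magnitude (looks like n ~ 1),
# even though every individual cell behaves like a switch.
print(population[[0, 10, 20, -1]])
```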
And what these guys are doing is putting it in deliberately-- yeah, that's what Jim Collins is doing at BU, building such circuits. James Ferrell did it for an actually existing network, [INAUDIBLE] tweaking that individual feedback parameter. We have seen that. So that was modularity, and here is the positive feedback loop, kind of de-convoluted. Is it important to understand this or not? Well, this is something very simple, but it might be very useful for drug development, because the concept is simple: you do not want to design a drug that interferes with something inside a feedback loop-- that's exactly what a feedback loop compensates for. If you put an input in there, it kind of goes down the drain. So if you understand the network and you understand where the feedback loops are, that gives you very strong indications of where to design drugs, where to interfere with the system, because it makes sense there. In the other case, when your target is inside a feedback loop, it doesn't make much sense to put in the effort. OK, the last thing I want to mention is constraint-based models. Full dynamic networks are very, very complicated, and there are lots of problems involved-- so is there a simple or smart shortcut for learning something about biology with much less effort? That's what constraint-based models are. There's nothing terribly new here-- they are like [INAUDIBLE] analysis, and these things have been around for ages for chemical engineers, and they are useful for certain things. They are essentially a beefed-up version of Kirchhoff's law, stating that if you have a metabolic network, for example, then whatever goes into a node has to come out-- otherwise the system is going to blow up, or that metabolite is going to disappear. These are not dynamic models. All you are stating is that at every single metabolite node the net flux has to be zero, and then you have a set of linear equations. Of course, usually you have many more unknown fluxes than linear equations, but at least this constrains the problem. And that's what [INAUDIBLE] group has been doing for a while, and they did very interesting things. For example-- this is the flux cone, which those of you in chemical engineering are familiar with-- under the flux cone you have all the possible, allowed solutions. Since you have more unknowns than equations, there are many different combinations of fluxes that satisfy the constraints. So the organism can live anywhere under this flux [INAUDIBLE]. The interesting thing they hypothesized is that evolution optimizes-- in other words, the organism is going to live on the edge: produce maximum energy with a certain amount of oxygen input. And they actually measured it. They took the metabolic network of E. coli, which is fairly well described-- you have essentially the entire E. coli metabolic network available, you can download it and play with it. And it's simple, just linear equations. You solve them and you have an idea of the allowed flux combinations. These are very simple equations-- just stoichiometric numbers, how many things go in and come out. And then you simply measure: for a given oxygen input, or a given energy source for growth, how fast is biomass produced, how fast does the organism grow?
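Here is a minimal flux-balance sketch of the constraint-based idea just described: impose S v = 0 at every metabolite and maximize a "growth" flux by linear programming. The three-reaction network is made up; the real E. coli reconstructions have hundreds of reactions.

```python
# Minimal flux-balance sketch: maximize growth subject to S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Reactions: v0 uptake of nutrient A, v1 conversion A -> B, v2 "growth" consuming B.
# Rows are metabolites A and B; columns are the three reactions.
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])

c = np.array([0, 0, -1.0])                    # linprog minimizes, so maximize v2 via -v2
bounds = [(0, 10.0), (0, None), (0, None)]    # uptake capped at 10, all fluxes irreversible

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)               # expect all three fluxes pinned at the uptake limit
```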
And they showed that E. coli is indeed always living on the edge-- so it is optimizing. And when you switch from one energy source to another, it finds the new edge. If you do the experiment 10 different times, the organism takes very different routes to get there-- if you keep taking time series points during the adaptation, you see it finding the new edge-- but it always arrives at the new edge and finds that new stable, optimal growth condition. This is actually very interesting, because it is an entirely non-dynamic, very simplified description, yet it teaches you something very important about the biology of E. coli, and it can be very useful. For example, for chemical engineers working in fermentation this is very important: you can really optimize the production of certain types of proteins if you understand the underlying metabolic network. So, one more thing, just to wrap up. Again, none of this is entirely new. Here is an old paper-- not actually that old, 1994-- where people described the coagulation pathway, which has been known for a long time and can be described pretty well. So of course modeling has been around in biology as well; it's just that we didn't have that many measurements, and there was certainly not the public or private funding to do all the experiments. So this is where we are. Traditionally, biologists tried to predict things and then do experiments, and they did everything in their head, based on intuition. What we are trying to do here is speed up that process and make it more efficient: you still have to experiment, but between the prediction and the experiment you do modeling. In silico you have lots of alternative solutions; you throw out all the obviously wrong ones and you only run experiments to test the ones that are supported by your model. And that can speed up the design of new therapeutic approaches-- for example, combinatorial treatments, which are very difficult if you think about it, because there is a combinatorial explosion of all potential treatments. Any questions? AUDIENCE: Do you see any similarities with-- I just like [INAUDIBLE] that came out of the paper [INAUDIBLE] in terms of the physics of kinetic gas theory and just the dynamic [INAUDIBLE]. Some of the models that people are building. Or is that [INAUDIBLE]? ZOLTAN SZALLASI: I haven't seen any experimental verification one way or another. Most of these approaches were actually worked out by physicists-- for example, people describe large Boolean networks based on spin glasses, and then they say, well, it's going to behave something like this. If you can formulate an experimentally testable hypothesis based on that, and then you do the experiments and you see that this is the way it works or doesn't work, that's great. The problem is that there are some theoretical approaches physicists work out that just cannot be translated into anything biologically testable. That's my only problem. And of course, those translations are not that obvious to do. AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: Possible to use what? AUDIENCE: Gene expression data [INAUDIBLE]. ZOLTAN SZALLASI: What gene expression data? AUDIENCE: [INAUDIBLE] ZOLTAN SZALLASI: Yeah, that's a good question, obviously. If you were here when I talked about microarray measurements and noise-- those are pretty noisy measurements. So, can you do it? If you want to do parameter fitting from microarrays, I wouldn't do it.
Microarray is compressing and if you can guess pretty well what's going up or down right now, that's already a nice achievement. One can play with this and-- well, microarray is improving and we are certainly taking our share. Showing, for example, that based on some chips, like half of the [INAUDIBLE] probes are wrong. They are just not what they are supposed to be. And if you throw those out, you have very good [INAUDIBLE] measurements. So one can improve this type of measurements. But right now if you just take any microarray measurement from literature or yourself, and you want to do a parameter fitting on that, that's essentially noise propagation. If your question is, if I do a modeling-- I have a fairly good description of the model-- very large network. And my microarray measurement can give you, with a 95% confidence, who goes up or down, and my prediction is just going to be from the model-- for many proteins, who goes up or down. And you match it in a probabilistic fashion to the outcome of the microarray. In that case, yes. And that is being done and that's a useful approach because you have a very large network. And you're going to say that I want to predict how most or many of the parameters are going to change. And if I can then tell you who goes up or down, even without giving you very good quality numbers, that's going to be something very useful. So that's something actually-- I must have had an extra slide somewhere, which is-- yeah, actually I do. That was it-- huh. That's it. Kind of predicted what I wanted to. So there is a low throughput data where you have very good quality models. And there are other ways of going that high throughput data that are not that high quality and you can do probabilistic modeling. You can create-- you have one network, and you create lots of alternative networks with individual parameter changes. And then you just keep pruning that tree-- which model is giving you the best fit to that low quality microarray measurement. So in that case, yes, you can use it. For parameter fitting, I wouldn't do it.