On_Cooking_Lecture_Videos: On_Cooking_Chapter_29_Hors_dOeuvre.txt
In this module we're going to cover the broad category of hors d'oeuvres. The objectives for this module are: define the term hors d'oeuvre; discuss various types of hors d'oeuvres; prepare and serve a variety of cold and hot hors d'oeuvres, including canapés; identify and describe international hors d'oeuvres; and choose hors d'oeuvres that are appropriate for the meal or the event.

An important distinction of hors d'oeuvres is that they are served before or separately from the meal. This is in contrast to appetizers, which are typically served as part of the meal and are designed to whet the appetite; appetizers are appetite teasers. In fact, hors d'oeuvre is a French term that translates to "outside the meal." Some examples of hors d'oeuvres include crudités, candied salmon, flatbreads, and so on. Here is a variety of pictures of some of the different styles of hors d'oeuvres that we'll be talking about.

Canapés are tiny open-faced sandwiches. They can be sweet or savory, and they can use any kind of base for their sandwich; we'll talk more about those. Crudités are raw or slightly blanched vegetables; for instance, a platter of carrots and celery served with chicken wings is a crudité. Dips are served hot or cold as accompaniments to crudités, crackers, chips, toasts, breads, and other styles of food; think of hummus or baba ghanoush. And caviar: caviar is the salted roe of the sturgeon fish.

Canapés consist of several components. The base is made of bread or toast, crackers, or firm vegetables; it could also be small pancakes called blinis, or a round of cucumber. The spread provides the flavor, usually a flavored butter or cream cheese, something that acts in contrast to the base: if the base has a crispness, you want a spread that has a smooth creaminess. And then the garnish; typically when we say garnish, what we're referring to here is a dominant or complementary flavor. In the example used, you have a little bit of rye toast with chèvre, a goat cheese, mixed with herbs, and then the salmon on top of that. A variety of foods can be used for this style of garnishing.

Here are just some examples of different kinds of garnishes and different kinds of canapés that can be made: a little shrimp toast, and various accoutrements on top of a piece of bread. These are pastry shells that have been filled; you have the blini pancakes in the front, the little hollowed-out pastry shells, and little waffle shells, which are typically savory. You can have red meat on there, or you can have seafood; in this case we've got a little bit of both, and the seafood, a shrimp mousse, is complemented by the cool crispness of the cucumber. Again, this is a nice little tuile, made with a batter that in this case is mixed with a little parmesan cheese to give it some of that flavor. There are various styles of crudités, including different colors. You want your canapés bright, flavorful, and colorful-looking; you want a variety of them, and you want them artfully decorated on the platter. And then of course we've got our deviled eggs as well.

When slicing bread for canapés, you want to start with a slice that can be cut into several shapes to avoid any waste; always start with a square slice of bread. In this case this is a nice square piece that has had the crust cut off; the long pieces here are actually three pieces side by side, and you can see how they can be cut to maximize utilization.

Crudités, from the French word meaning "raw thing," are raw or slightly blanched vegetables. Almost all vegetables can be used, but you do want vegetables that have a bit of a crunch to them. If your vegetables are limp, say for instance a celery that's not really crisp, you can soak it in ice water ahead of time and this will re-crisp the vegetable. Crudités are generally served with some kind of dip, a nice hummus or baba ghanoush or some kind of thickened dip; we'll talk more about dips in just a moment. Only the freshest and best-looking produce should be used for this. You really don't want produce that has burns or brown spots, and you don't want produce that's limp, wilted, or doesn't look very good; you really want the freshest product to go out for this particular item.

Dips can be served hot or cold, and they can be served with crackers, chips, toast, bread, or other food to accompany crudités. You can serve a hearty dip with a nice pita or a soft bread; if it's a thin dip, you want to serve it with something that will give it a little more body. Cold dips commonly have a base of mayonnaise, cream cheese, or sour cream, whereas hot dips typically have a base of béchamel, cream sauce, or cheese sauce.

Caviar is roe, or eggs, from the sturgeon family of fish. It's considered a delicacy, often eaten raw as an appetizer, with some caviar fetching a high price. Historically the most prized types of caviar came from the Caspian and Black Seas, but due to overfishing, caviar is now produced around the world. So let's talk a little bit about caviar versus fish roe. All female fish lay eggs to reproduce; therefore they have roe. Not all fish roe is suitable for human consumption, however, and only sturgeon roe is usually considered for caviar. Imported caviar comes in several different styles and quality grades. Beluga caviar is the most expensive; it's dark gray with well-separated eggs. Osetra is considered by some to be the best, with medium-sized, golden yellow to brown eggs. Sevruga is quite small and light to dark gray. And pressed caviar is made from osetra or sevruga roe, hung and then drained in a cloth. For domestic caviar, American sturgeon caviar is considered to be the same quality as its imported counterparts. Golden whitefish is a small, very crisp roe with a unique color. Lumpfish is readily available and reasonably priced; this is on the lower end of caviar, and as you can see from the red color, this is not a natural red; it's dyed, typically with a food-grade dye. You might find this in lower-grade sushi restaurants. And then salmon caviar: these eggs are large, with good flavor and a natural orange color. The process of making caviar is quite simple: they literally just salt the eggs and allow them to cure in the salt over a period of time.

Hors d'oeuvres can be served in a variety of ways, including filled pastry shells such as barquettes (which are boat-shaped), tartlets, éclair puffs or profiteroles, and bouchées. Skewers and brochettes can be used: small skewers holding combinations of various foods, such as a skewer holding a meatball or a piece of grilled meat. Meatballs, don't forget about the meatballs; there's a reason why Swedish meatballs are so common all around the world. Miniature sliders, and it's not just a miniature hamburger; we're talking about a small piece of pork belly on a little miniature brioche bun. Hors d'oeuvres wrapped in cheese, meat, or vegetables, such as rumaki, which is bacon-wrapped food on a small skewer. Hors d'oeuvres wrapped in a dough, such as phyllo dough (paper-thin sheets of wheat dough), pie crust, puff pastry, or wonton skins. And then other kinds: a variety of filled or wrapped foods, including stuffed potatoes, small fritters, even chicken wings.

A lot of different cultures have traditions for serving finger foods in small dishes before the meal. In Italy there are the antipasti, savory, salty foods to whet the appetite. From the Middle East and North Africa, and from Greece, Turkey, and places such as Algeria, you've got meze, small plates of assorted salads like hummus and baba ghanoush. In Spain there are tapas, small portions of either hot or cold savory foods, including items like chorizo, shrimp in garlic sauce, and olives. And then from Russia we've got zakuski, which in Russian means "small morsels"; these are served before the meal, typically things such as a nice caviar, smoked fish, or pickled vegetables.

Hors d'oeuvres can be a precursor to a dinner, or they can be served to guests as the only food offered at a banquet. Butler service, also known as passed or trayed hors d'oeuvres, is presented to the guests on trays. Buffet service covers both hot and cold hors d'oeuvres, which can be presented on trays displayed on a buffet table or in chafing dishes. There are various types of passed and plated hors d'oeuvres using things such as cups, or in this case glasses, and toothpicks; you can use plastic as well, and they make a lot of acrylic pieces that look really nice and are disposable. You can't discount skewers, because skewers are perfect for this style of small hors d'oeuvre; they can be picked up, and the skewer disposed of really easily. In place of skewers you can even use forks; they make disposable forks that look like the real deal as well. Spoons are a big thing these days, especially if you have anything that has a sauce on it: you can use a spoon to contain the sauce, and you can use a spoon tree to have multiple spoons displayed. Or you can use edible spoons; in this case this is parmesan cooked into a batter and shaped into a spoon using a special tool, with the salad served on it. Also, blinis, which are little miniature savory pancakes, are typically served with a little crème fraîche and caviar; this is a very typical style of appetizer you might see. And then the appetizer sampler: this has everything from small spring rolls to mini sliders, and it uses all the different aspects of hors d'oeuvre presentation, with small shooter glasses in the case of a shrimp cocktail, and skewers again for the meatballs. You can use endive; endive is in the chicory family, which is a bitter green, and it can be used as a container for various stuffings; in this case this looks like a little Hoppin' John. And then figs: the natural pairing of figs and blue cheese with a little serrano ham and some baby arugula, to give it a little bitterness, mixes really, really well. And then of course there are our stuffed mushroom caps and our Belgian endive again, where the endive is not only a serving vessel but an edible component as well; especially if you have things that are overly sweet or overly creamy, the endive gives you a little bittering flavor and a little bit of texture. Additionally, martini glasses are a good way of doing this, and you can also use smaller disposable glasses to make a smaller version of a martini glass. But don't forget about things such as deviled eggs; deviled eggs are quite often overlooked as an hors d'oeuvre, and you can pick them up, take just one bite, and they're done. And then probably the most overlooked of all the hors d'oeuvres is sushi. Sushi tends to be lumped into its own category, but we forget that this is meant to be finger food; etiquette does not actually require you to use chopsticks with sushi, and it is perfectly acceptable to take a piece of sushi and pop it in your mouth, unless it is strictly only fish.

So let's summarize and talk about some of our key takeaways for this lecture. Takeaway number one: hors d'oeuvres are different from appetizers in that appetizers are usually a precursor to the meal, whereas hors d'oeuvres are usually served as a standalone. That's not to say that hors d'oeuvres cannot be served with a meal, but they are typically served separately from the meal, whereas appetizers are considered a component of the meal. Number two: canapés can be made with a variety of bases, spreads, and garnishes and are easily the most versatile form of hors d'oeuvre; they can be done with meats or cheeses or vegetables or various other things, and they can be as colorful and as flavorful as you want. Number three: crudités are more than just the raw vegetable trays you see at the grocery store. You want to serve them artfully and offer a variety of items. Also think about how that tray is going to look in the next hour when everybody gets hold of it; you want to have smaller versions of these trays so you can change them out more frequently, even when they get about halfway destroyed. You don't want to leave a tray sitting there with just a little bit of food on it; you always want to keep them fresh and updated. Number four: dips are a good accompaniment to crudités, but be aware of the double dipper; because of them, we tend to serve single servings of these items. Number five: caviar is more than just the cheap stuff on top of sushi; good caviar pops in the mouth with a creamy, salty flavor and pairs well with a little crème fraîche and champagne. Number six: hors d'oeuvres are relatively easy to make, and they're a low-cost alternative to a plated sit-down dinner. Many times they're easily prepared ahead of time and can be kept cold until service; sprinkle in a few hot hors d'oeuvres and now you're making money.

On_Cooking_Lecture_Videos: On_Cooking_Chapter_1_Professionalism.txt
In this module we're going to spend some time talking about professionalism in the culinary industry. The objectives for this module include: name key historical figures responsible for developing food service professionalism and describe the contributions of each; list and describe the key stages in the development of the modern food service industry; explain the organization of classic and modern kitchen brigades; identify the attributes a student needs to develop a successful culinary career; describe the importance of professional ethics for chefs; and list the specific behaviors that all culinary professionals should follow.

So let's start our examination of a culinary timeline. In the 1500s, culinary guilds developed to monopolize the preparation of certain food items. Each guild governed the production of a set of specific types of food. Rôtisseurs were in charge of the main cuts of meat, such as beef and pork. Pâtissiers were in charge of poultry, pies, and tarts. Tamisiers were in charge of breads. Vinaigriers were in charge of sauces and stews. Traiteurs handled ragoûts; the term is said to come from the trotter, the lower leg of a pig. And porte-chapes were the caterers.

The first quote-unquote restaurant was formed in 1765. Prior to this there were eating establishments, but most were taverns. Monsieur Boulanger opened the first free-standing restaurant in Paris. Boulanger's contribution to the food service industry was to serve a variety of foods, prepared on premises, to customers whose primary interest was dining. Before this, most people dined in communal dining halls, in family kitchens within their homes, or in taverns, where the food was not necessarily the focus of the visit and typically was not very good.

During the French Revolution of 1789 to 1799 there was a huge change in the social structure of France. The two most essential elements of French cuisine, bread and salt, were at the heart of the conflict; bread in particular was tied up with the national identity. Bread was considered a public service necessary to keep the people from rioting; bakers, therefore, were public servants, so the police controlled all aspects of bread production. If bread seems a trifling reason to riot, consider that it was far more than a side item; for nearly everyone but the aristocracy it was the main component of the working Frenchman's diet. According to Sylvia Neely's A Concise History of the French Revolution, the 18th-century worker spent half his daily wage on bread; but when the grain crops failed two years in a row, in 1788 and 1789, the price of bread shot up to 88 percent of his wages. Many blamed the ruling class for the resulting famine and economic upheaval. On top of that, peasants resented the gabelle, a tax on salt that was applied particularly unfairly to the poor. Obviously the causes of revolution are far more complicated than the price of bread or unfair taxes on salt, just as the American Revolution was about more than tea tariffs, but both contributed to the rising anger toward the monarchy. The aristocracy, the guilds, and their monopolies were abolished during this timeframe, and the nascent restaurant industry emerged as well. Chefs could cater to a growing middle class during this time, and this is really when we started seeing the restaurant take hold, at this point after the Revolution.

If there's an enduring image of the French aristocracy in the English-speaking world, it's probably of a woman in a frilly dress, with about two feet of hair piled on her head, addressing the question of her subjects being too poor to buy bread with an oblivious "let them eat cake." That woman was Marie Antoinette, the queen of France during the French Revolution. But while her dress was indeed frilly and her hair was indeed tall, could she really have said something so clueless? The evidence suggests that she didn't. How do we know? There are lots of reasons, starting with the fact that in French the quote isn't even about cake; it's about brioche: "Qu'ils mangent de la brioche." Brioche is delicious; it's a rich, buttery bread, and it has strong associations with decadence, but if you ever try to serve a big loaf of brioche to the guests at a child's birthday party, you'll find out real quick that it is not, in fact, cake. Either way, the point of the quote is that this out-of-touch aristocrat didn't understand the realities of life as a peasant. So maybe that's a semantic quibble. The real evidence against Marie Antoinette having uttered this famous phrase is that, well, it was around long before she was. Scholars of folklore have found versions of the same quote, with some variations, across Europe; in 16th-century Germany there was a story of a noblewoman wondering why the hungry peasants didn't eat a kind of sweet bread. There's no evidence that Marie Antoinette ever said "let them eat cake," but we do know people have been attributing the phrase "Qu'ils mangent de la brioche" to her for nearly 200 years, and debunking it for just as long. The first time the quote was connected to Antoinette in print was in 1843, when a French writer named Jean-Baptiste Alphonse Karr reported finding the quote in a book from 1760, when Marie Antoinette was just five years old. He hoped this would end the rumor that she was responsible for the famous phrase once and for all. Sorry about that, Jean-Baptiste; we're trying.

Marie-Antoine Carême was known as "the cook of kings and the king of cooks." His stated goal was to achieve lightness, grace, order, and perspicuity in the preparation and presentation of food. As a saucier, he standardized the use of roux and devised a system to classify sauces. As a garde-manger, he popularized cold cuisine. As a culinary professional, he designed kitchen tools, equipment, and uniforms we still use today. As an author, he wrote and illustrated important texts on the culinary arts. This was the beginning of la grande cuisine in the early 19th century.

Arguably one of the most influential persons in culinary history is Auguste Escoffier, known as "the emperor of the world's kitchens." He defined French cuisine and dining during la Belle Époque, the "beautiful age," the period between the Franco-Prussian War and the outbreak of World War I. He simplified food preparation and dining, and he classified the five families of mother sauces used today in classic cooking. He wrote Le Livre des Menus, a guide to planning menus for meals; Ma Cuisine, a survey of cuisine bourgeoise, or middle-class cooking; and Le Guide Culinaire, a collection of classic cuisine recipes and garnishes still in use today. This is actually one of my personal favorite cookbooks. Escoffier also brought us what we know today as the kitchen brigade: the system of staffing a kitchen, called the back of the house, so that each worker is assigned a set of specific tasks. The classic kitchen brigade created by Escoffier consisted of the chef de cuisine, sous chef, aboyeur (or expediter), chefs de partie (or station chefs), and commis (cooks or assistant cooks). The modern kitchen brigade consists of the executive chef, sous chef, area chefs, line cooks, pastry chef, apprentices, short-order cooks, and various other cooks and chefs assigned to various tasks.

I hear you ask: how important is Escoffier? Escoffier was everything; there's even a chef who had the cover of Le Répertoire de la Cuisine, the repertory of Escoffier's classic cuisine, tattooed on his entire back. That's how important Escoffier is. He came, he saw, he cooked. His name was Auguste Escoffier, and like many revolutionaries his goal was modest: he just wanted to completely change the way we eat. Escoffier was a revolutionary in that he democratized access to French cuisine. It was a very medieval, guild-like atmosphere, and Escoffier said, no, I'm going to write it all down, and by writing it all down I can make it accessible to anyone. That was a revolutionary idea that changed cooking around the world. Auguste would say to his employees: make it simple, make it simple, make it simple. "My name is Michel Escoffier, and I'm the great-grandson of a famous chef who is regarded today as the father of modern cooking," inspiring future chefs and future tattoos. Escoffier codified the cuisine, giving detailed step-by-step instructions. Where your grandmother would say, oh, I just put in a pinch of this and a pinch of that, Escoffier was the one who would say, it's a quarter teaspoon. He wanted to introduce scientific principles into the cuisine, and one of those key principles is the idea that these things can be expressed as a set of laws and rules and set down in a book, so that the dish you make today you can make exactly the same way tomorrow. Le Guide Culinaire, which Auguste Escoffier published in 1903, is really, even today, the bible for great chefs. Escoffier was the first one to canonize everything, to put down on paper everything there was in cooking. Before him there were no recipes; after him the entire restaurant business began. All of a sudden, people like me started frying octopus legs, because of Escoffier. Now that chefs had a manual on how to cook, Escoffier organized a way to cook these recipes on a massive scale by implementing the brigade system, an army-like reorganization of the kitchen staff. What the brigade brought is regularity: I know my place and I know what I'm doing. He would appoint chefs responsible for particular types of food; they would have sous chefs, the sous chefs would have assistants, and each person reports to the person above them. I think it's all for the better. Today the kitchen, tomorrow the world.

During the same period of classic cuisine, or cuisine classique, we have Charles Ranhofer. He was the first internationally renowned chef of an American restaurant, Delmonico's in New York City, and he published The Epicurean, which contains 3,500 recipes and is still in use today. Delmonico's was founded in 1827 in New York City and moved several times before settling at 2 South William Street in its iconic building in 1862. The restaurant hired French chef Charles Ranhofer, who remained the chef from 1862 to 1876 and again from 1879 to 1896, just three years before his death in 1899 at the age of 62. Delmonico's still exists today in the same iconic William Street location.

The early and mid 1900s gave us Fernand Point and nouvelle cuisine, typified by lighter foods and a simplification of techniques. Chef Point refined and modernized the classic cuisine of Escoffier, and he laid the groundwork for nouvelle cuisine, or "new cuisine," as it's referred to. His book Ma Gastronomie was first published posthumously, in French, in 1969, and includes 200 recipes based on Point's notes. He inspired and trained influential chefs such as Paul Bocuse, Jean and Pierre Troisgros, and Alain Chapel, among others.

Gaston Lenôtre is the father of modern French pastry. He created a culinary school, L'École Lenôtre. He developed innovations and variations on charlottes and mousses, and he mastered techniques for freezing baked products, which at that time was unheard of; products simply went bad, or bakers just made less of them. He's known as the possible creator of the famed opera cake, or gâteau opéra, as seen here.

The late 20th and early 21st centuries gave us the American culinary revolution. Consumers and chefs sought bold ethnic cuisines inspired by the arrival of diverse immigrant groups, particularly those from Asian and Latin American cuisines. Since the 1960s the United States has created and influenced the cuisine of the world, just as it had been influenced for many years prior. California cuisine, or New American cuisine, was launched in the early 1970s by famed chef Alice Waters, and this was done at Chez Panisse in Berkeley, California. Alice Waters' goal was to serve fresh, seasonal, and locally grown produce in simple preparations that emphasized the food's natural flavors. Chez Panisse, established in 1971, is considered to be one of the most influential dining establishments in the United States. This was the public venue in which Waters put her culinary ideals into practice using fresh, local, and seasonal ingredients; the restaurant established working relationships with local farmers and suppliers in order to do so, and it launched the careers of many notable chefs, including Jeremiah Tower and Paul Bertolli. This was really the beginning of what we see as the farm-to-table movement today. Previously I mentioned Jeremiah Tower; Tower worked for Alice Waters for several years, and many people refer to him as the actual workhorse behind Chez Panisse. He left Chez Panisse in 1977 and began his own career. Tower has criticized Waters for taking most if not all of the praise and credit for the acclaim of Chez Panisse, as well as for the primary leadership of the new California cuisine movement within the American culinary revolution. He also questions Waters' role as an actual chef in the kitchen, implying that she has not cooked in years, and even her role in the restaurant altogether. He wrote California Dish: What I Saw (and Cooked) at the American Culinary Revolution in 2003.

Fusion cuisine, the blending of different cultures and their cuisines, has been around for as long as food has traveled from country to country. Recently, however, American cuisine has been influenced by the cuisines of other cultures as well. This is most evident in the fusion cuisine that began its ferment in the 1980s; in fusion cuisine, ingredients or preparation methods associated with one ethnic or regional cuisine are combined with those of another. Peanut butter and jelly; tomato and basil. But Indian and Latin, or French and Mexican? I found some unusual fusion restaurants mixing just those flavors, starting with Epazote on the Upper East Side. Quick history lesson: the French occupied Mexico back in the 1800s, and they influenced the cuisine as a result, in how to cook and how to eat. Today chef Agostino is mixing those French cooking techniques with traditional Mexican flavors. You can see the fusion when he makes his duck confit taquitos: "I use apples in a cranberry compote, which is a French technique, and I use cilantro, fresh onions, and of course epazote." Epazote is not only the name of the restaurant; it's also an old-fashioned herb used in Mexico, with a very unique flavor. Back to the fusion: when chef Agostino makes veal tacos he braises the meat, which is a French technique, but uses Mexican beer, not wine. This is one of their featured dishes: it looks like a standard goat cheese salad, but it's coated with chilies, so it's shockingly spicy when you bite into it, and really delicious; paired with the truffle vinaigrette and a watermelon radish, the heat is balanced, and so are the French and Mexican ingredients. Indian and Latin flavors are another unexpected combination, but that's exactly what they mix at the Vermilion restaurant. It's not only spices; the chef shows that Indian and Latin doesn't have to mean eye-wateringly spicy food, though he always packs in the flavor, like with the tandoori skirt steak. In addition to the turf they also have the surf, with dishes like tamarind shrimp; then there are the chipotle scallops, mixed with Indian spices and served over a pumpkin purée; and they even fuse Indian and Latin flavors with their naan bread. Next time you can't decide what type of food to eat, try a fusion restaurant and eat a few kinds of food together; you may be pleasantly surprised. Anna Gilligan, Fox 5 News.

The farm-to-table movement is one of the fastest growing and hottest trends in the culinary industry today. Although chefs look to incorporate global flavors and ingredients, they also seek locally grown foods. This promotes agriculture, focuses on food served in season, and protects heirloom varieties, varieties not changed over time. A well-designed symbiosis between the restaurant and the farmer allows the seasonality of food to shine through; many specialty farmers will work with chefs to plant specific crops and help with menu development.

Modernist cuisine, also known as molecular gastronomy, is the study of the chemistry and physics of food preparation. It reinvents cooking using ingredients and machinery from food manufacturing: liquids are solidified into spheres, or turned into powders. One of the leaders of this is Ferran Adrià, most famously known as the Salvador Dalí of the kitchen. He's a Spanish chef trained in French nouvelle cuisine, credited with advancing the culinary science movement. His small tasting plates, as many as 35 courses, engaged all the senses, using fresh ingredients alongside equipment and ingredients more common in food manufacturing than in restaurants.

In modern food service operations, new technologies have impacted the modern restaurant. Stoves replaced fireplace cooking; food storage by canning and freezing expanded availability; and transportation, by train and then by air shipping, freed chefs from seasonality and geographic location. Today new foods are available in most parts of the world, and food is now sourced globally as opposed to just locally. Hybridization and genetic modification mean new varieties; genetically modified organisms, or GMOs, may have unforeseen consequences. The problem with GMOs, and this is a controversial topic, is that GMOs typically refer to things that are done in a laboratory. We've been genetically modifying plants and animals for eons, but we've done it through crossbreeding; it's when we inject the DNA of one organism into the DNA of another in a laboratory that we run into unforeseen circumstances. They have been regulated, may be regulated pretty heavily, and will be regulated pretty far into the future.

Consumer concerns impact the food and service industries today: concerns about nutrition and diet; concerns about public safety, such as government inspections and the regulation of labeling; new interest in locally and/or organically grown foods; and concerns about sustainability, the practices used to minimize human impact on the environment. These protect natural resources, minimize food miles, avoid pesticides, reduce packaging, encourage composting of scraps, and favor fair trade products.

What does it take to be a professional chef? There are lots of different skill sets involved, but we've boiled it down to just a few: knowledge, lifelong learning, skills development with experience, taste, judgment, dedication, professional ethics, and pride. The American Culinary Federation says it best. The Culinarian's Code says: as a proud member of the American Culinary Federation, I pledge to share my professional knowledge and skill with all culinarians. I will place honor, fairness, cooperation, and consideration first when dealing with my colleagues. I will keep all comments professional and respectful when dealing with my colleagues. I will protect all members from the use of unfair means, unnecessary risks, and unethical behavior when used against them for another's personal gain. I will support the success, growth, and future of my colleagues in this great Federation.

As you can see, the career paths in the culinary industry are vast and limitless. You can be a chef-owner of a restaurant, an executive chef, or a pastry chef; you can do recipe testing or cookbook authoring; you can own a bake shop or a catering company; you can be a food stylist for a magazine; you can write for food media; you can even be a food technologist. The list goes on and on. If you enjoy writing as well as food, then food blogging and food writing may be something you want to do. If you really don't want to work in restaurants but like cooking, you might want to look into being a personal or private chef. And maybe you want to work with soil, with dirt, and you want to be a farmer.

So let's summarize and talk about some of the takeaways for today. Auguste Escoffier is arguably one of the most important culinary figures in history. The modern brigade system for kitchens is more adaptable to the various restaurants and food service operations of today. We're currently in the third epoch of culinary revolutions in the United States: the first being nouvelle, the second California cuisine, and the third an as-yet-unnamed movement consisting of fusion, farm-to-table, and so on. Fusion cuisine has been around since the first foreigner set foot on foreign soil; currently it is experiencing a renaissance. The culinary professional must meet the challenges of today with fervency and zeal. There are many paths to success in the culinary world, and they are not limited to just working in restaurants and kitchens.

On_Cooking_Lecture_Videos: On_Cooking_Chapter_11_Stocks_and_Sauces_Part_1_Stocks.txt
In this module we're going to discuss the beginnings of all soups and sauces, the most important part: stocks. The objectives for this module are: describe the principles of stock making, make a variety of stocks, and prepare and use various types of mirepoix. A stock is a flavored liquid made from the bones of chicken, beef, fish, or other animals. A good stock is the key to a great soup, a sauce, or a braised dish. The French appropriately call stock fond, or "base," as stock is the basis for many classic and modern dishes.

The ingredients that go into stock are fairly simple. We start with bones; commonly we'll use beef, veal, chicken, or fish bones, and less commonly lamb, turkey, game, or ham bones. Mirepoix is the next ingredient to go in; traditional French mirepoix consists of 50 percent onion, 25 percent carrot, and 25 percent celery. There are some variations on this depending on what kind of stock you're producing: if you're producing a white stock, you may substitute something white for the carrots, maybe a turnip or a leek or something along those lines. Seasonings include things such as peppercorns, bay leaf, thyme, parsley, garlic, and sometimes rosemary and sage.

There are various ways of seasoning stocks. The first is a bouquet garni, consisting of things such as thyme, celery, and leeks; it may have bay leaf and other herbs inside. A bouquet garni is literally that: a bundle of garnish. A sachet d'épices (sachet meaning "purse," épices meaning "spices," so a purse of spices) is typically wrapped in cheesecloth and holds things such as thyme, bay leaf, and parsley stems, and may contain peppercorns, garlic cloves, and various other spices. An onion piqué (piqué meaning "pricked") is an onion studded with cloves and typically a bay, or laurel, leaf. An onion brûlée (brûlée meaning "burnt") is an onion we char to give the stock a deep, rich color. Depending on the kind of stock you're making, you'll choose one or the other: for a white stock you may choose the onion piqué, and for a dark-colored stock you may use the onion brûlée.

As we start our journey of making a stock, we want to start the stock with cold water, then progress to skimming the stock gently as it comes up to a simmer, and then skim the stock frequently; the more skimming we do, the more likely we are to have a clean stock. Strain the stock carefully after it has produced its golden color, rich flavors, and aromas; cool the stock quickly; store it properly; and finally degrease the stock. We can degrease it all along, but we'll do a final degreasing at the end.

There are several different types of stock that we're going to be discussing, starting with vegetable stock, then white stock, brown stock, fish stock (or fumet), and court bouillon. Vegetable stock contains vegetables and seasonings simmered in water. While not technically a stock, because it doesn't contain bones, it is generally given the classification of stock. One thing vegetable stock lacks that the other stocks have is collagen, which turns to gelatin, which gives a stock that lip-smacking mouthfeel we crave whenever we have a good stock.

Today I'm going to show you just how easy it is to make stock from scratch. As far as I'm concerned, the base for any great stock is onion, celery, and carrot. When you're preparing these vegetables, you don't even have to worry about peeling them.
Don't worry about a thing, because the fact is the vegetables don't stay in the stock; we're going to be fishing them all out later, so we're just going to steal as many nutrients as we possibly can from them. So I'm starting with some onions, some carrots, and some celery, and I'm barely even chopping these guys, because that's how forgiving this stock really is; and look, that's the base for any great stock. To that you can add any number of fresh herbs; my preference is always a mixture of parsley and fresh thyme, as I feel these give the best flavor. I'm also going to add a couple cloves of garlic, because you get tons of great flavor from garlic, but before throwing it in, don't even worry about mincing it; just crack it open and toss it in with the skin on. Then we're going to fill the pot and cover our vegetables entirely. Another great option when you're making vegetable stock is to use your kitchen scraps. I like to use things like the core of a bell pepper, broccoli stalks, tomato scraps, the bases of things like celery and cabbage, and even onion skins. Basically I store them all in the freezer, and when I'm ready to make a vegetable stock I dump them right into my pot. All of these scraps, even though they may not be edible, are full of flavor and nutrients, so we really want to get the most out of them; they're all going to go in our pot as well. We're going to give this a stir, season it with salt and pepper, and then let it simmer away as long as you possibly can, and I'm talking one to two hours, even three. After two or three hours you are going to have the richest, most beautiful vegetable stock you could possibly imagine. What I love about homemade stock is you can totally control the sodium level and the kinds of vegetables you add. One of the things I like to do before straining through my fine sieve is to just lift out all of the larger chunks; we don't have to worry about keeping these, because we've basically taken all of the nutrients out of them and transferred them to our incredible broth. If you're wondering why my stock is so rich and darkly colored, it's actually because of the onion skins; they dye the stock. All you'll really have to do is strain it with a fine-mesh sieve to make sure you're not getting any of the debris into the stock; if you wanted it to be extremely fine, you could also pass it through a cheesecloth, but I think that looks pretty darn good. This nutrient-dense stock can be stored in your fridge for up to a week or in your freezer for up to six months.

Vegetable stock should be clear and light colored. It contains no gelatin, so it has very little body. It may be used as a substitute for meat stocks in vegetarian dishes. Strongly flavored vegetables from the cruciferous family, or those that have a bitterness to them, should be avoided, and starchy vegetables will cloud the stock and should also be avoided. For a darker, richer color and taste, add a little soy sauce while cooking.

White stock contains raw bones and vegetables simmered in water with seasonings; white chicken stock and white veal stock are two prime examples. Typically, prior to putting the bones in the stock kettle, you would rinse the bones, or blanch them really quickly with steaming or boiling water, drain that water off, and then refill with cold water to start the process.

Today we're going to learn how to make a chicken stock the way it's done in a professional culinary school in France.
To make that stock you're going to need carrots, onions pricked with cloves, salt and pepper, water, an organic chicken carcass, of course a stockpot, and last but not least a supercharged bouquet garni for white stock; if you don't know how to make the bouquet garni, just look at the link above. So let's get started. The first step is to get organized: always have all of your ingredients nearby so you don't have to run around the kitchen when you're cooking. Step number one, we're going to blanch our carcasses. As you can see, I've chopped them up into smaller pieces so they can fit in my stockpot, and I've set my heat to medium-high. To blanch these carcasses, you just need to take them and plunge them into your cold water; I repeat, you have to start this in cold water, not in boiling water. All right, that's done; we're now waiting for the water to come to the boil, and when it does, we're going to take out the carcasses. Okay, so now our water is starting to boil. When you reach that stage, leave the water at a very gentle boil for two or three minutes. While it's boiling, as you can see, a white foam starts to form on top of the water; these are the impurities you don't want, so we're going to try to skim them out as best we can. It's not always easy, and it depends how much there is; with a ladle you kind of go around and try to clean your water. That should do it. At this stage you can turn your heat off, and I'm going to take my carcasses out and reserve them in a clean container. Now I'm going to remove the water from my stockpot, clean the stockpot, and replace the water with fresh, clean, cold water. Same as before, I'm putting the carcass back into cold water; at this stage there is no salt or pepper to be added, just pure cold water and only the chicken. I'm going to repeat the same process, waiting for the water to boil, and when the water boils we're going to start adding our vegetables. Okay, our water is boiling again, and as you can see we've got a much clearer stock, which is what we want; however, there are still little impurities, so you're going to first get rid of these little white bits, because we are trying to make an excellent, professional-grade product. We can now add the bouquet garni, the carrots, and the onion with the cloves. Note that I haven't touched my salt and pepper, and this is normal: you never, ever salt or season your stock while you're cooking it; it's only at the end that you may add a little bit of seasoning. From here you're just going to let your stock simmer for around 45 minutes, which is quite a long time, so from time to time you'll have to repeat the exercise of skimming off any impurities at the top whenever they appear. After 45 minutes to an hour, you can turn your heat off, and you are now ready to filter your stock. The first step is to get rid of all the excess chicken and large pieces of bone in your mix; when that's done, you can start, bit by bit, filtering your stock into your pan. When you've removed most of the liquid and you're only left with a little bit, feel free to take your pot and pour everything in. Our chicken stock is now almost ready; the last thing we're going to do is correct the seasoning. I'm taking a little spoon, I'm going to taste, and I'm just going to add a little pinch of salt. Remember never to exceed three to six grams of salt per liter of stock.
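For anyone who likes to see that seasoning rule as plain arithmetic, here is a minimal Python sketch of the three-to-six-grams-per-liter guideline from the demonstration above. The function name and the example batch size are illustrative assumptions, not anything from the lecture.

```python
# Minimal sketch of the seasoning rule above: 3-6 grams of salt
# per liter of finished stock, added only at the end of cooking.
# Function name and batch size are illustrative, not from the lecture.

def salt_range_grams(liters: float) -> tuple[float, float]:
    """Return the (low, high) salt amounts in grams for a stock batch."""
    MIN_G_PER_L = 3.0  # lower bound from the lecture
    MAX_G_PER_L = 6.0  # never exceed this, per the lecture
    return (liters * MIN_G_PER_L, liters * MAX_G_PER_L)

low, high = salt_range_grams(4.0)  # a hypothetical 4-liter stockpot
print(f"Season with {low:.0f} to {high:.0f} g of salt at the end.")
# prints: Season with 12 to 24 g of salt at the end.
```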
Once you add your salt, give it one last stir, and your stock is now ready.

White stock is a neutral stock made from beef, veal, or chicken bones. To blanch the bones: wash and cut up the bones, place them in a stockpot, and cover with cold water; bring the water to a boil over high heat; as soon as the water comes to a boil, skim off the rising impurities; drain the water from the bones and discard that water; then refill the pot with cold water and proceed as the stock recipe directs.

Brown stock contains bones and vegetables that have been browned and then simmered in water with seasonings. Some of the most common brown stocks are veal stock and brown chicken stock; browning gives the stock a nice, deep, rich, dark flavor and color. Here I have some veal knuckle bones, and I prefer the knuckle over other styles of veal bone because the knuckle contains more collagen, which will break down into gelatin during the simmering process while you're making the stock, and it's that gelatin which will thicken your stock as you reduce it. Here I'm squirting them with canola oil and rubbing them down, and this is going to allow the heat to transfer to the surface more evenly. Some chefs don't like this step because they say it adds too much extra fat to their stock, but my contention is you have to skim it no matter what, so I add it. Here I use our wood-fired oven to roast the veal bones; it gives them a really nice char, and as you can see using the laser thermometer, it's about 800 degrees in there, depending upon what part you hit. You can easily roast these bones in your regular conventional or convection oven at about 450 to 500 degrees Fahrenheit for about one and a half to two hours, until they're a nice dark golden brown. Here, as you can see, we have a nice little layer of smoke that's going to give the bones just a slight smokiness during the roasting process; it will also cook them a lot faster, and as you can see they're starting to char and caramelize on top. You always want to make sure that you're rotating your pan during the cooking process, whether you're using a wood-fired oven or a regular oven, for even heating. Once one side gets nicely cooked while the other side is still pretty much untouched, you want to rotate those bones around, place them back in your oven, and continue doing this until they become a nice dark golden brown. Now I'm brushing them with tomato paste; the tomato paste will help thicken the stock slightly, but it will also add a little bit of acidity and sweetness to the stock. I'm going to place these bones back in my oven just long enough for that tomato paste to darken slightly and caramelize, and it's that caramelization of the tomato paste that will add great color and body to my stock. Now I know what you're thinking: oh my god, look at those, they're burnt. No they're not; they're gorgeous, they're perfect. Just wait until you see the color on this. You're going to place the bones in a stockpot, and here you want to deglaze your roasting pan; I'm just putting it over our flat top with a little bit of red wine, and as that red wine comes to a simmer, I'm using a wooden spatula just to scrape off all the fond. All that stuff that's stuck to the pan is going to give your stock a lot of great flavor, but also color, so you want to make sure that you don't skip this step. Once you have it fairly well scraped, you're going to pour it into your stockpot right over your bones.
Your next step is to prep your mirepoix, which is two parts onion, one part celery, and one part carrot, and you add it at the ratio of one part mirepoix to five parts bones. Here I'm giving the mirepoix just a rough cut; I don't peel my onions or my carrots, that's just a personal preference, and the ratios for your mirepoix are completely up to you. This is the classic ratio, but there's no such thing as the mirepoix police; at least they haven't come knocking on my door yet. Since we're making a roasted veal stock, you're going to roast the mirepoix as well, so it gets nice and dark and caramelized, and you can add that on top of your bones. Next you want to cover all your ingredients with cold water. You classically start with cold water because it forms larger protein aggregates as the stock heats up, making a less cloudy stock; if a cloudy stock isn't an issue for you, since it won't affect your flavor, then you can actually start with warm water. Next you want to put it over a high flame, bring it to a simmer, and add in some of your aromatics, like a bunch of thyme and a bunch of parsley. Sometimes they tell you one or two bay leaves; here's about five or ten, because I like bay leaves in my stock. When it comes up to a simmer, you can see this beautiful color extraction, and this is literally a couple of minutes into the simmering process; because of that dark roast on the bones and the tomato paste, I have a beautiful color. Once your stock comes to a simmer, it's time to skim off some of that fat and the protein aggregates. Notice how my stock is only simmering on one side; that's because I pulled it off the flame to my left, so only the right side is on the flame. What happens is a sort of convection heating cycle that lets the stock simmer the fat and protein aggregates up on the right-hand side; they collect on the left, and it makes it a lot easier for me to skim by dipping the edge of my ladle underneath the protein aggregates and the fat, skimming my stock more efficiently and thoroughly. After the initial skimming, you're going to let your stock simmer for 8 to 12 hours, which is ideal for a veal stock. After that simmering process, I like to add a little bit of cold water right on top; that shocks some of the fat molecules left behind into coming to the top and coalescing, allowing me to do one final skimming before moving on to the next stage, which is our straining and icing process. After the stock has simmered for 8 to 12 hours, you're going to strain it through what's called a china cap, a large conical strainer, and this is to capture the veal bones, the mirepoix, and the larger particles. Here I'm just using a pot to help ladle out the stock at first, because it's so heavy; we easily have about 20 gallons of stock there. I'm going to use a rolling pin, or whatever wooden device you have, to mash up the mirepoix just to get a full extraction. Next you're going to pass it through a chinois, which is a fine-mesh conical strainer. Notice how I'm tapping the side of the chinois instead of pressing the stock through with a ladle; the tapping allows the stock to pass through but keeps you from forcing through all that particulate matter, which would give your stock a gritty mouthfeel.
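The two ratios given at the top of this section (two parts onion to one part each celery and carrot, which is the same split as the classic 50/25/25 mirepoix, added at one part mirepoix to five parts bones) reduce to a simple scaling calculation. Here is a minimal Python sketch; the function name and the example bone weight are mine, not the chef's.

```python
# A sketch of the ratios from this section: mirepoix at 1 part to
# 5 parts bones, split 50% onion / 25% carrot / 25% celery by weight
# (the same 2:1:1 split the chef describes). Names are illustrative.

def mirepoix_for_bones(bone_weight_kg: float) -> dict[str, float]:
    """Return the mirepoix breakdown in kg for a given bone weight."""
    total = bone_weight_kg / 5.0  # 1:5 mirepoix-to-bones ratio
    return {
        "onion": round(total * 0.50, 2),
        "carrot": round(total * 0.25, 2),
        "celery": round(total * 0.25, 2),
    }

print(mirepoix_for_bones(10.0))  # a hypothetical 10 kg of veal bones
# prints: {'onion': 1.0, 'carrot': 0.5, 'celery': 0.5}
```

As the chef says, there's no mirepoix police; treat the output as a starting point, not a law.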
After you've strained your stock through both a china cap and a chinois, it's important to place it in an ice bath: fill your sink with lots of ice and cold water, because you want to cool your stock down to below 40 degrees Fahrenheit within three to four hours; anything longer than that will allow bacteria to form, which could sour your stock. Now the final, optional step, which is commonly done in professional kitchens and restaurants, is to make what's called a remouillage, or a "second pulling." To your cooked veal bones you add one bunch of parsley and some thyme (again, this is basically just aromatics), some black peppercorns, some bay leaves, and some hot water; because the protein aggregates have already formed the first time, hot water is fine here. On top of that you're also going to add some mirepoix, no need to roast it, and again some more hot water. To this mix I also like to add a little bit of red wine, for both flavor and color. You're going to take this whole mixture, bring it to a simmer, and skim and simmer for about four to six hours. This remouillage can be used to start your next stock, to add flavor and gelatin, or it can be reduced down to reinforce the stock you just made; personally, I like to use my remouillage to braise more subtle cuts of meat like poultry and pork. For a more detailed audio lecture on the stock-making process, including science and technique, check out the Stella Culinary School podcast, episodes two and three, or head on over to stellaculinary.com.

Brown stock is made from chicken, veal, beef, or game bones. Caramelizing gives brown stock its flavor and color. Deglaze the roasting pan to release the fond; remember that fond is flavor. Caramelize the mirepoix and deglaze with stock, water, or red wine, and add it to the stock for added flavor. The process of roasting and deglazing is fairly simple: you just want to roast the veal bones, coated with a little bit of oil, in the oven until they have a nice, dark, rich color. You can use that same pan to sauté your vegetables in if you like, but you can also deglaze the pan using water or stock or even a little red wine. The process for this is to get the pan hot and then add your water to it; be very careful, as the steam will come up, and use something to scrape the bottom of the pan, preferably not scraping the metal itself; a nice wooden spoon or wooden spatula works great. You can also roast the mirepoix in its own pan, or do it on the stovetop, to get that same nice dark, rich color. Remember, when you're dealing with a brown stock, you want to roast your items beforehand.

For a fish stock, or fumet, fish bones or crustacean shells are cooked slowly with vegetables, seasonings, and water. Use white, non-oily fish: you don't want to use salmon or tuna or anything like that; you want a clear, clean white fish like a snapper, a grouper, or a bass, something that is not oily and will not produce a slick-tasting stock. Fish stock is made from the bones and heads of fish and from crustacean shells; oily fish are not generally used, and the bones are not blanched, due to the loss of flavor. This is a very quick stock. A fumet is different from other stocks because it is stronger flavored and contains an acidic ingredient such as wine or lemon juice. Fish stock in general requires a much shorter simmering time than any of the others: 35 to 40 minutes tops, depending on how much liquid you're dealing with, is usually sufficient to extract full flavor.

A court bouillon is vegetables and seasonings simmered in water with an acidic liquid such as vinegar, wine,
or citrus juices; an example of this would be a crab or shrimp boil. A court bouillon is commonly used to poach fish and shellfish; it's a flavored liquid, usually water and wine or vinegar, in which vegetables and seasonings have been simmered to impart their flavors and aromas. It's not actually a stock, but it is prepared in the same manner. A nage is an aromatic court bouillon served as its own sauce; something served in this way is generally called à la nage, loosely translated as "swimming."

Other forms of stock, or flavorful liquids, include infusions: light, flavorful stocks typically made from dried fruits, vegetables, herbs, or spices, all steeped in hot water; think tea or coffee. Dashi, a Japanese broth, is the most well known of the infusions; it is the basis for a lot of soups in Japanese culture, and ramen and various kinds of udon all contain dashi.

Glazes are made from stocks; a glaze, from the French term glace, is a dramatically reduced stock. One gallon of stock produces one to two cups of glace; since a gallon is 16 cups, that's roughly an eight- to sixteen-fold reduction. Glace de viande, a meat glaze, consists of several categories: glace de veau, which is veal; glace d'agneau, which is lamb; and glace de porc, which is obviously pork. Glace de volaille, made from poultry, includes things such as glace de poulet, from chicken, and glace de canard, from duck. And then we have glace de poisson, which is made from fish. All of these are made in a manner that removes a majority of the water, intensifying their flavors for use in various sauces; but you can also reconstitute them as well.

One of the biggest misconceptions is that stock and broth are the same thing. Broth contains meat but no bones, along with vegetables, herbs, and seasonings, and is cooked for a very short period of time. Stock contains meat, bones, and cartilage, along with vegetables, herbs, and seasonings, and is cooked for a longer period of time. And then bone broth, which is fairly new on the market and very common these days in the grocery store, contains meat, bone, and cartilage and is cooked for a much longer period of time; you're going to get a much deeper, much richer flavor out of bone broth than you would out of a normal stock, as it cooks the bones almost to the point of being complete mush.

Once we've made our stock and we have pulled it, or what we call pulling or straining the stock, we have to cool it down. The first way we can cool it down is to divide it into smaller portions: we can take the stock and put it in smaller pans, preferably metal, because metal is an excellent conductor of heat and an excellent way of chilling things down, as opposed to plastic, which is an insulator. We can use ice paddles, specialized containers kept in the freezer; you literally just drop one into the soup or the stock. We can use ice-water baths, which means surrounding the metal container with ice water. Or we can use blast or tumble chillers; not every kitchen has a blast chiller, but most kitchens do have ice machines, and ice-water baths are a great way of doing this. You can even use the first three in conjunction with each other, and that will speed the process up even more. Our whole goal here is to execute the two-stage cooling process, which is a very important aspect of cooling down any food item: first, we have to cool the food from 135 degrees Fahrenheit down to 70 degrees Fahrenheit within two hours; once we get it to 70 degrees Fahrenheit, we have an additional four hours to cool it down to 41 degrees, which gives us a total of six hours maximum.
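Since the two-stage rule is really just a pair of time limits, it can be written down as a simple check. Here is a minimal Python sketch; the function and parameter names are mine, not from any food-safety standard or library.

```python
# Minimal sketch of the two-stage cooling process described above:
# 135 F down to 70 F within 2 hours, then 70 F down to 41 F within
# 4 more hours, for a 6-hour total. Names are illustrative.

def cooling_is_safe(hours_135_to_70: float, hours_70_to_41: float) -> bool:
    """Check a recorded cool-down against the two-stage time limits."""
    stage_one_ok = hours_135_to_70 <= 2.0               # first stage
    stage_two_ok = hours_70_to_41 <= 4.0                # second stage
    total_ok = hours_135_to_70 + hours_70_to_41 <= 6.0  # overall cap
    return stage_one_ok and stage_two_ok and total_ok

print(cooling_is_safe(1.5, 3.0))  # True: e.g., ice bath plus ice paddle
print(cooling_is_safe(2.5, 2.0))  # False: stage one took too long
```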
135 degrees fahrenheit down to 70 degrees fahrenheit within two hours once we do that once we get it to 70 degrees fahrenheit we have an additional four hours to cool it down to 41 degrees which gives us a total of six hours maximum if you are going to use a cooling method such as an ice bath or running water you want to make sure that you vent the stockpot and this is a way of cooling the stockpot faster venting the stockpot is done by elevating it in a deep sink and then filling the sink with cold water so the water will circulate on all sides of the pot speeding the cooling process in addition to this venting procedure cooling wands can be used to speed the cooling of stocks soups sauces and other liquids these wands known as ice paddles are hollow plastic containers they can be filled with ice or water sealed frozen and then used to stir the cooling liquids clean and sanitize wands after each use to prevent cross-contamination it doesn't matter what you put underneath the pot you can use a trivet or blocks you may have something that is used specifically for that or you may use wadded up aluminum foil as long as it lifts the pot up off the bottom that is perfectly fine so let's summarize and discuss some of our takeaways a stock is made with bones whereas a broth is made with meat for a white stock typically you will rinse or blanch the bones prior to starting the stock a court bouillon is used as a flavorful liquid to cook or poach seafood it's usually highly seasoned with the addition of white wine or lemon juice a court bouillon is also a good beginning for a fish stock glace often misunderstood and mistaken for demi-glace can be the beginning of a sauce or reconstituted with water into stock roast the bones and or vegetables for a deeper richer stock flavor deglaze with red wine and add an onion brulee to the stock for richer color the two-stage cooling process is put in place to help limit bacterial growth and safely chill liquids
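because the two-stage numbers above are easy to get wrong in practice here is a minimal sketch in python of the check the lecture describes the function name and the sample temperature readings are my own for illustration not part of the lecture

# a minimal sketch of the two-stage cooling check, assuming temperature
# readings are logged as the stock cools
def two_stage_cooling_ok(readings):
    # readings: (hours elapsed since the stock left 135 f, temp in fahrenheit)
    reached_70 = [hours for hours, temp in readings if temp <= 70]
    reached_41 = [hours for hours, temp in readings if temp <= 41]
    if not reached_70 or min(reached_70) > 2:
        return False  # stage one failed: not at 70 f within two hours
    if not reached_41 or min(reached_41) > 6:
        return False  # stage two failed: not at 41 f within six hours total
    return True

print(two_stage_cooling_ok([(0, 135), (1.5, 68), (5.0, 40)]))  # True: passes both stages
print(two_stage_cooling_ok([(0, 135), (2.5, 72), (6.5, 45)]))  # False: misses the two hour window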
|
On_Cooking_Lecture_Videos
|
On_Cooking_Chapter_22_Vegetables.txt
|
in this module we're going to talk about all things vegetable the objectives of this module include identify a variety of vegetables purchase vegetables appropriate for your needs store vegetables properly explain various ways of preserving vegetables prepare vegetables prior to cooking and service and apply various cooking methods to vegetables the term vegetable refers to any edible herbaceous plant with little or no woody tissue vegetable is a culinary term its definition has no scientific value and is somewhat arbitrary and subjective all parts of herbaceous plants eaten as food by humans whole or in part are generally considered vegetables mushrooms though belonging to the botanical kingdom fungi are commonly considered vegetables and are usually treated as such since vegetable is not a botanical term there is no contradiction in referring to a plant part as a fruit while also considering it a vegetable given this general rule of thumb vegetables can include leaves like lettuces stems like asparagus roots like carrots flowers like broccoli bulbs like garlic and seeds like peas and beans and of course the botanical fruits like cucumbers squash pumpkins and chayotes vegetables contain water-soluble vitamins like vitamins b and c fat soluble vitamins like vitamins a and d and contain carbohydrates and minerals fresh properly prepared vegetables add flavor color and variety to a dish and they play an important role in personal health vegetables are generally sorted into nine groups but we also have two extra thrown in there cabbages fruit vegetables gourds and squashes greens mushrooms and truffles olives which is one of our extra categories onions pods and seeds roots and tubers stalk vegetables and baby vegetables which is our second plus two group the brassica or cabbage family includes a wide range of vegetables used for their heads flowers or leaves they're part of a group of vegetables referred to as cruciferous widely recognized for their health promoting properties brassicas are high in vitamins c and k folate potassium and other minerals as well as phytochemicals that constitute important components of a healthy diet brassicas also contain sulfur compounds that give these vegetables a strong aroma and a bitter flavor when overcooked bok choy is a popular vegetable used in chinese cooking a member of the cabbage family bok choy is crisp in texture and light in flavor its white stalks and dark green leaves are packed with vitamins a and c and calcium used in china for hundreds of years bok choy can now be found in most u.s supermarkets it can be boiled sauteed or eaten raw bok choy pairs well with other flavors like soy sauce toasted sesame oil and hot peppers baby bok choy is smaller and more tender than regular bok choy bok choy is also spelled pak choi pac choi and bok choi the word broccoli comes from the italian plural of broccolo which means the flowering crest of a cabbage and is the diminutive form of brocco meaning small nail or sprout broccoli has a thick central stalk and grayish green leaves topped with heads of green florets broccoli is eaten raw or steamed microwaved or sauteed and served warm or cold broccoli stalks are extremely firm and benefit from blanching stems are often slow cooked for soups generally broccoli leaves are not eaten choose broccoli with firm stalks and compact clusters of tightly closed dark green florets avoid stalks with yellow florets broccoli rabe also known as raab or rapini is a leafy green with small florets that look similar to
broccoli florets the entire plant is eaten although some prefer to separate the spiky leaves and green florets from the more bitter stems broccoli rabe may be boiled steamed roasted or sauteed and its peppery bitter flavor is a popular ingredient in both chinese and mediterranean especially italian cuisines broccolini or baby broccoli is a green vegetable similar to broccoli but with smaller florets and longer thinner stalks it can easily be mistaken for broccoli rabe however broccolini is sweeter than broccoli rabe because broccoli rabe is very bitter broccolini is a hybrid of broccoli and gai lan which is sometimes referred to as chinese kale or chinese broccoli the name broccolini is actually a registered trademark of the mann packing company it also goes by the names aspiration sweet baby broccoli broccoletti and italian sprouting broccoli brussels sprouts consist of numerous small heads arranged in neat rows along a thick stalk the tender young sprouts look like baby cabbages and are usually steamed or roasted brussels sprouts have a strong nutty flavor that blends well with game ham duck or rich meats cauliflower like broccoli grows on thick stalks each stalk produces one flower called a head surrounded by large green leaves the head composed of creamy white florets though golden purple and green varieties are also available can be cooked whole or cut into separate florets for roasting steaming blanching or stir-frying hello everyone i am your produce guy today i'm going to show you how to cut up a cauliflower you've been to the store and you bought your nice fresh cauliflower and it looks beautiful now if you don't know how to pick them out at the store click right here and we'll send you to our when is it ripe video so that you can select your own cauliflower but you're at the stage now where you've got it home and you need to break it down to use it in a recipe or put it together however you're planning to do it but what do you do do you just slice down through it no you attack this from the back you turn this over and you grab a knife now some of you may want to use a cutting board for this but i just like to go right on through and tear off as much of that greenery as i can then i'll work my way around just like that through there careful not to cut into the florets themselves get all that greenery out of the way there you are now you reach in with your knife very carefully and you just cut off one of those florets just like that and you just continue on the way around right there look at that there in the time it's taken you to watch this video we've been able to break down a nice bowl of cauliflower florets now these are ready for you to slice into smaller pieces if that's what your recipe is calling for or you can leave them big like this and put them in the steaming basket and get them steamed up and ready to eat so don't be afraid of the cauliflower it's great it's good for you and it's delicious cabbage or head cabbage has been a staple of north european cuisine for centuries the familiar green cabbage has a large firm round head with tightly packed pale green leaves flat and cone-shaped heads are also available red or purple cabbage is a different strain and may be tougher than green cabbage cabbage can be eaten raw as in coleslaw or used in soups or stews it can be braised steamed or stir-fried the large waxy leaves can also be steamed until soft then wrapped around a filling of seasoned meat kale has large ruffled curly or bumpy leaves its rather bitter flavor goes
well with beans and rich meats such as game pork or ham kale is typically boiled stuffed or used in soups tender baby kale is suitable for eating raw and is also popular in salads kale is a power pack of nutrients one cup of kale provides thirteen hundred percent of the recommended daily allowance of vitamin k which is important for healthy blood coagulation and maintaining bone mass although it looks rather like a round root kohlrabi is actually a bulbous stem vegetable created by cross-breeding cabbages and turnips both the leaves which are attached directly to the bulbous stem and the roots are generally removed before sale depending on the variety kohlrabi skin can be light green purple or green with a hint of red the interior flesh is white with a sweet flavor similar to that of turnips napa cabbage also known as chinese cabbage is widely used in asian cuisines it has a stout elongated head with relatively tightly packed firm pale green leaves it's a moister and more tender version of common green or red cabbage with a milder more delicate flavor savoy cabbage is named for the savoy region in france it has crinkled emerald green leaves which are crunchy and tender and can be used similarly to napa cabbage what is meant by the term fruit vegetable fruit vegetables are those items that are botanically fruits but have savory culinary applications such as avocados bell peppers eggplants tomatillos and tomatoes avocados include several varieties of pear-shaped fruits with rich high-fat flesh this light golden green flesh surrounds a large inedible oval-shaped seed or pit some avocado varieties have smooth green skin others have pebbly almost black skin avocados should be used at the peak of freshness and ripeness a condition that only lasts briefly firm avocados lack the desired flavor and creamy texture ripe avocados are soft to the touch but not mushy firm avocados can be left at room temperature to ripen then refrigerated for one to two days until they are ready to use because avocado flesh turns brown very quickly once cut dip avocado halves or slices in lemon juice and keep unused portions tightly covered with plastic wrap another little trick that we use is if you're making guacamole and you smash up the avocados put a little bit of oil on top of the avocados then seal it with plastic wrap the oil will prevent air from getting in and prevent the browning from happening two types of eggplants are commonly available asian and western asian varieties such as the japanese eggplant and the tiny round indian or thai eggplant are either small spheres or long and thin with skin colors ranging from creamy white to deep purple western eggplants known as italian eggplants tend to be shaped like a plump pear with shiny lavender to purple black skin both types have a dense khaki colored flesh with a rather bland flavor that absorbs other flavors well during cooking eggplants can be grilled baked steamed fried or sauteed they're commonly used in mediterranean thai and indian cuisines especially in vegetarian dishes as a substitute for meat the skin may be left intact or removed before or after cooking as desired sliced eggplants may be salted and left to drain for 30 minutes to remove moisture and bitterness before cooking they will also absorb less oil when fried if this procedure is done choose plump heavy eggplants with smooth shiny skin that is not blemished or wrinkled asian varieties tend to be softer than western varieties peppers are members of the capsicum family and
you can easily devote an entire lecture just on this topic from fresh peppers you get dried chilies and there are corresponding dried chilies for every fresh pepper members of the capsicum family are native to the new world when discovered by christopher columbus he called them peppers because of their sometimes fiery flavor these peppers which include sweet peppers and hot peppers or chilies are unrelated to peppercorns the asian spice for which columbus was searching interestingly new world peppers have readily been accepted in indian and asian cuisines in which they are now considered staple items in the late 1700s a large percentage of europeans feared the tomato a nickname for the fruit was the poison apple because it was thought that aristocrats got sick and died after eating them but the truth of the matter was that wealthy europeans used pewter plates which were high in lead content because tomatoes are high in acid when placed on this particular tableware the fruit would leach lead from the plate resulting in many deaths from lead poisoning no one made the connection between the plate and the poison at the time so the tomato was picked as the culprit around the late 1800s with the invention of the pizza in naples the tomato grew widespread in popularity in europe but in the newly burgeoning americas there was still a little more to the story tomatoes are available in a wide variety of colors and shapes they vary from unripe green to golden yellow to ruby red from tiny spheres like currant tomatoes to huge squat ovals called beefsteaks some such as the plum tomato have a lot of meaty flesh with only a few seeds others such as slicing tomatoes have lots of seeds and juice but only a few meaty membranes heirloom varieties such as brandywine german green and golden queen are irregularly shaped and may be prone to cracking all tomatoes have a similar flavor but the levels of sweetness and acidity vary depending on the species growing conditions and ripeness at harvest ripe tomatoes are a natural source of glutamate and add umami flavor to foods because tomatoes are highly perishable they're usually harvested when mature but still green then shipped to wholesalers who ripen them in temperature and humidity controlled rooms the effect on the flavor and texture is unfortunate tomatoes should be stored at room temperature to preserve their flavor and texture not actually tomatoes but part of the gooseberry family tomatillos are also known as mexican or husk tomatoes grown on small weedy bushes these bright green fruits are about the size of a small tomato and covered with a thin papery husk tomatillos have a tart lemony flavor and crisp moist flesh although they are an important ingredient in southwestern and northern mexican cuisines tomatillos may not be readily available in other areas tomatillos can be used raw in salads pureed for salsas or cooked for soups stews or vegetable dishes when choosing tomatillos look for husks that are split but still look fresh the skin should be plump shiny and slightly sticky cucumbers can be divided into two categories pickling and slicing the two types are not interchangeable cucumbers are valued for their refreshing cool taste and astringency slicing cucumbers are usually served raw in salads or mixed with yogurt and dill or mint as a side dish especially for spicy dishes pickling cucumbers are generally served pickled with no further processing pickling cucumbers
include the cornichon kirby and gherkin they are recognizable by their shape and their sharp black or white spines and are quite bitter when raw slicing cucumbers include the burpless the seedless english or hothouse cucumber the seedless persian cucumber which is small and smooth skinned the lemon cucumber which is round and yellow and the common green market cucumber most cucumber varieties have relatively thin skins and may be marketed with a wax coating to prevent moisture loss and improve appearance waxed skin should be peeled unless the cucumber will be pickled choose cucumbers that are firm but not hard avoid those that are limp or yellowed or have spots summer squashes include patty pan yellow crookneck and zucchini varieties they are soft with edible skins and seeds that are generally not removed before cooking most summer squashes may be eaten raw but are also suitable for grilling sauteing steaming or baking chayote is a type of summer squash it's technically considered a fruit much like a tomato but it probably isn't something you want to bite into like an apple it grows on a vine and originated in mexico but is now grown in warm climates worldwide chayote squash looks like and has the crunchy texture of an unripe pear yet it has a mild almost cucumber-like flavor less sweet than spaghetti squash in new orleans it's also referred to as a mirliton winter squashes include acorn butternut hubbard pumpkin and spaghetti varieties they have hard skins or shells and seeds neither of which are generally eaten although it's common to see pumpkin seeds roasted and eaten out of hand the flesh which may be removed from the shell before or after cooking tends to be sweeter and more strongly flavored than that of summer squashes winter squashes are rarely eaten raw they can be baked steamed or sauteed most winter squashes can also be pureed for soups or pie fillings the term greens refers to a variety of leafy green vegetables that may be served raw but are usually cooked greens have long been used in the cuisines of india asia and the mediterranean and are an important part of regional cuisines in the southern united states most greens have strong spicy flavors collard greens often simply referred to as collards are a type of wild cabbage with loose leafy heads of bright green leaves collards have a sharp tangy flavor and look like a cross between mustard greens and kale considered a staple ingredient in the cooking of the american south collards are typically slow simmered with ham hocks or bacon until very tender then served with their cooking liquid called pot liquor mustard a member of the cabbage family was brought to america by early european immigrants mustard has large dark green leaves with frilly edges and is known for its assertive bitter flavor mustard greens can be served raw in salads or used as a garnish they can also be cooked often with white wine vinegar and herbs sorrel is a wild member of the buckwheat family its tartness and sour flavor are used in soups and sauces to accent other vegetables it is particularly good with fatty fish or rich meats sorrel leaves naturally become the texture of a puree after only a few minutes of moist heat cooking choose sorrel leaves that are fully formed with no yellow blemishes if you are going to use this in a hot heat application be aware of the fact that they're going to break down they're going to get darker in color and mushy so oftentimes you'll see them added as a chiffonade like an herb at the end of a plate to give it a
bright lemony flavor spinach is a versatile green that grows rapidly in cool climates it has smooth bright green leaves attached to thin stems spinach may be eaten raw in salads cooked by almost any moist heat method microwaved or sauteed it can be used in stuffings baked or creamed dishes soups or stews spinach grows in sandy soil and must be rinsed repeatedly in cold water to remove all traces of grit from the leaves it bruises easily and should be handled gently during washing stems and large midribs should be removed choose bunches with crisp tender deep green leaves avoid any yellow leaves or those with blemishes chard often referred to as swiss chard although the swiss reference is inexplicable and unexplained is a type of beet that does not produce a tuberous root the wide flat dark green leaves are consumed golden orange and pink stem varieties are also available as well as rainbow chard chard can be steamed sauteed or used in soups chard's tart spinach-like flavor blends well with sweet ingredients such as fruit choose chard leaves that are crisp with some curliness ribs should be unblemished and uniform in color the leaves of the turnip root have a pleasantly bitter flavor similar to peppery mustard greens the dark green turnip leaves are long slender and deeply indented turnip greens are best eaten steamed sauteed baked or microwaved mushrooms have a stalk and umbrella-like top although not actually a vegetable mushrooms are used and served in much the same manner as vegetables several types of cultivated mushrooms are available they include the common white shiitake cremini morel gypsy maitake saffron milk cap chanterelle enoki horn of plenty paddy straw wood blewit the truffle which we'll talk more about in a moment and the porcini which is the king of the mushrooms two principal varieties of european truffles are the perigord or black truffle and the piedmontese or white truffle fresh truffles are gathered in the fall and should be used promptly truffles especially the white ones have a strong aroma and flavor requiring only a small amount to add their special flavor to soups sauces pastas or other items black truffles are often used as a garnish or to flavor pates terrines or egg dishes because fresh truffles can cost hundreds of dollars per pound most kitchens purchase truffles canned dried or processed avoid imitation truffles or truffle oil originating from asia or africa as the flavor and aroma are inferior and often chemically induced true truffles are never inexpensive your supermarket or farmers market often will have a good variety of olives juicy green black brown or purple olives glistening in their own oil olives seasoned with chilis lemons and herbs olives pitted and left empty olives stuffed with red pimentos olives in brine or in wine marinade enough olives to make your senses reel and like grape varieties for wine there are more kinds of olives in the world than you'll ever know every region where the olive is grown yields a particular variety each with its own unique size shape and flavor black olives are those that were allowed to ripen before harvest in other words all olives start out green and grow darker as they ripen different varieties will mature into blue red brown purple or black the flavor of the same olive will change the longer it stays on the tree how it's fermented and seasoned after harvest changes its flavor too and there are infinite ways to treat an olive subject only to the imagination and local resources of the
producer but buyer beware all that is olive is not gold canned black california olives are green olives that have been cured in a lye solution then treated with oxygen and ferrous gluconate to fix the color black harmless but bland these olives cure quickly in the lye but have lost much of their original flavor these are often referred to as pizza olives or sliced olives the common or bulb onion may be white yellow such as the bermuda or spanish or red such as the purple onion medium sized yellow and white onions are the most strongly flavored larger onions tend to be sweeter and milder onions are indispensable in mirepoix they're also prepared as side dishes by deep-frying roasting grilling steaming or boiling pearl onions are small about half an inch in diameter with yellow or white skins these small bulb onions have a mild flavor and can be grilled boiled roasted or sauteed whole as a side dish or used in soups or stews cipollini or italian onions are a small slightly flat mild bulb onion variety they can be pickled roasted or stewed sweet onion varieties include vidalia maui walla walla texas 1015 and oso sweet sweet bulb onions have a high water content more sugar and fewer sulfur compounds than other onions they're best for eating raw making them good choices for sandwiches salads hamburgers and the like cooking destroys much of their perceived sweetness and special flavor all sweet onions have a very short shelf life and should not be stored more than a few weeks like onions garlic is used in almost all the world's cuisines a head or bulb of garlic is composed of many small cloves the entire head is encased in several thin layers of papery husk each clove is wrapped in a thin husk or peel of the 300 or so types of garlic known only three are commercially significant the most common is pure white which is sharp in flavor a mexican variety is pale pink and more strongly flavored and elephant garlic is apple sized and particularly mild black garlic is not a variety but the result of a detailed heating and aging process applied to common white garlic used in northeast asia for its purported health benefits black garlic is chewy with a mild sweet molasses like flavor leeks look like large overgrown scallions with a flat white tip and wide green leaves their flavor is sweeter and stronger than scallions but milder than common bulb onions leeks must be carefully washed to remove the sandy soil that gets between them leeks can be baked or grilled as a side dish or used to season stocks soups or sauces choose leeks that are firm with stiff roots and stems avoid those with dry leaves soft spots or browning chives scallions and ramps are three different varieties of green onions chives are thin and usually have a sweeter more delicate flavor scallions are also known as green onions or bunch onions and they are the immature green stalks of bulb onions the leaves are bright green with either a long and slender or slightly bulbous white base green onions are used in stir fries or as a flavoring in other dishes the green tops can also be sliced into small rings or on the diagonal and used as a garnish choose scallions with bright green tops and clean white bulbs avoid those with limp or slimy leaves in the south it's inevitable that if you walk outside in the springtime you're going to smell onions in the air this is actually ramps which have a shorter white base and a long green leaf that comes up out of them and these are grown wild and can usually be harvested wild but are
occasionally found cultivated as well shallots are shaped like a small bulb onion with one flat side when peeled a shallot separates into multiple cloves similar to garlic shallots have a mild yet rich and complex nutty flavor shallots are the basis of many classic sauces and meat preparations they can also be sauteed or baked as a side dish choose shallots that are plump symmetrical and heavy for their size avoid those that appear to have dried out or have sprouted store shallots in a cool dry unrefrigerated place unless they are peeled in which case they should be refrigerated historically corn has had as many as 90 or more varieties today we know of at least 40 varieties that are still in existence one of which is the yellow sweet corn known as maize it's a grain or a type of grass that has been altered over time to produce what we see today corn kernels like peas are plant seeds corn kernels which may be white or yellow are attached to woody inedible cobs the cob is encased in strands of hair-like fibers called silks and covered in layers of thin leaves called husks shuck the ears removing the silks and the husks before cooking the husks may be left on for roasting or grilling shucked corn on the cob can be grilled boiled microwaved or steamed the kernels can be cut off the cob before or after cooking in a process known as shoe peg corn on the cob is available fresh or frozen corn kernels are available canned or frozen choose freshly picked ears with firm small kernels avoid those with mold or decay at the tip of the cob or brownish silks seek out the freshest corn on the cob and serve it promptly because its sugars turn to starch once it's picked beans with edible pods commonly referred to as green beans string beans runner beans or snap beans are picked when immature except for the stem the entire pod can be eaten this category includes the american green bean the yellow wax bean the broad bean and the french haricot vert a long slender pod with an intense flavor and tender texture any strings along the pod seam should be pulled off before cooking beans may be left whole cut lengthwise into slivers referred to as french cut or cut crosswise on the diagonal shelling beans are grown primarily for the edible seeds inside the pod common examples are lima beans and fava beans their tough pods are usually not eaten for thousands of years cultures around the world have preserved certain legumes by drying common dried beans include kidney beans pinto beans chickpeas lentils black beans black eyed peas and split green peas shape is the clearest distinction among these products beans are oval or kidney shaped lentils are small flat discs peas are round beans and peas destined for drying are left on the vine until they are fully matured and just beginning to dry they are then harvested shelled and quickly dried with warm air currents some dried legumes are sold split which means the skin is removed causing the seed halves to separate of the shelling peas that are prepared fresh the most common are green garden peas known as english peas and the french petits pois because they lose flavor rapidly after harvest most shelling peas are sold frozen or canned shelling peas have a delicate sweet flavor best presented by simply steaming until tender but still al dente peas may also be braised with rich meat such as ham or used in soups cooked peas are attractive in salads or as a garnish when choosing choose small fresh pea pods that are plump and
moist green soybeans or soya which in japan are also referred to as edamame are a type of shelling pea picked before maturity fresh soybeans have a light green fuzzy pod and a tender sweet pea they're steamed in the pod then popped open and eaten out of hand as a snack and are often served this way in sushi restaurants when allowed to mature and then prepared like other dry beans however soybeans become extremely tough hard to digest and bitter mature soybeans are best used for processing into oils tofu soy sauce and other foodstuffs okra was brought to the united states from africa by slaves and french settlers its original name was actually gumbo which is where we get gumbo from it's now integral to creole cajun southern and southwestern cuisines its mild flavor is similar to asparagus okra is not eaten raw it's best pickled boiled steamed or deep fried okra develops a gelatinous texture when cooked for long periods so it's used to thicken gumbos and stews to avoid the slimy texture that some people find objectionable do not wash okra until you're ready to cook it then trim the stem end only cook okra in stainless steel because other metals cause discoloration choose small to medium pods that are deep green without any soft spots pale pods with stiff tips tend to be a little tough frozen okra is widely available although history suggests that these were first eaten in ancient greece beets are associated with cold northern climates where they grow most of the year both the beetroot and the leafy greens are eaten common beets have a deep reddish purple flesh and are cooked by boiling steaming or roasting then peeled and used in soups salads or side dishes pickling is a traditional preservation method that is still used for beets today golden beets have orange yellow or variegated flesh and are prepared in the same manner beet leaves called greens are high in vitamins potassium and iron beet greens can be cooked by boiling sauteing or slow simmering choose medium sized beets that are firm with smooth skins avoid beets with hairy root tips as they may be tough small and baby sized beets are often used for garnish or as a snack in salads carrots among the most versatile vegetables are large tap roots although several kinds of carrots exist the imperator is the most common it is long and pointed with a medium to dark orange color and a mild sweet flavor carrots can be cut into a variety of shapes and eaten raw used in mirepoix or prepared by moist heat cooking methods grilling microwaving or roasting they're also grated raw and used in baked goods particularly cakes and muffins choose carrots that are firm smooth well shaped and bright orange in color if the tops are still attached they should be fresh looking and bright green carrots also come in a variety of other colors including purple and golden parsnips are taproots that look and taste like a white carrot and have the texture of sweet potatoes parsnips should be five to ten inches in length with smooth skins and tapering tips parsnips peeled like carrots can be eaten raw or cooked by almost any method when steamed they become very soft and they can be mashed like potatoes because of their higher moisture content you may have to add something else to help draw out some of the moisture including some potatoes that have been cooked and allowed to dry out some choose small to medium sized parsnips that are firm smooth and well shaped avoid large woody ones celery root also
known as celeriac is a large round root long popular in northern european cuisines although it may look like it it's actually a different plant from the stalk celery that we're accustomed to and its stalks and leaves are generally not eaten celery root has a knobby brown exterior a creamy white crunchy flesh and a mild celery-like flavor its thick outer skin must be peeled off the flesh is then cut as desired often eaten raw celery root can be baked steamed or boiled it's used in soups stews or salads and goes well with game and rich meats place raw celery root in acidulated water to prevent browning choose small to medium-sized celery roots that are firm and relatively clean with a pungent smell despite their name jerusalem artichokes are actually tubers from a variety of sunflowers unrelated to artichokes consequently growers are now marketing these vegetables as sunchokes their lumpy brown skin is usually peeled off even though the skin is edible to reveal a crisp white interior with a slightly nutty flavor although they may be eaten raw cooking them before serving makes them easier to digest jerusalem artichokes are eaten chopped or grated into salads or boiled or steamed for a side dish or soup jicama is a legume that grows underground as a tuber jicama is popular because of its sweet moist flavor crisp texture low calorie content and long shelf life after its thick brown skin is peeled off the crisp moist white flesh can be cut as desired jicama is often eaten raw in salads with salsa or as a crudite it's also used in stir fry dishes choose firm well-shaped jicamas that are free of blemishes size is not an indication of quality or maturity radishes have a peppery flavor and crisp texture radishes are available in many colors including white black and all shades of red most have a creamy to pure white interior asian radishes known as daikons produce roots two to four inches in diameter and 6 to 20 inches in length radishes can be braised steamed or stir fried but most often are eaten raw in salads or used as a garnish radish leaves can be used in salads or cooked as greens as well choose radishes that are firm not limp their interior should not be dry or hollow rutabagas are a root vegetable and a member of the cabbage family the skin is purple to yellow and the flesh is yellow with a distinctive starchy cabbage like flavor rutabagas and turnips are similar in texture and flavor when cooked and may be used interchangeably rutabaga leaves are not eaten rutabagas should be peeled with a vegetable peeler or a chef's knife and then cut into quarters slices or cubes they're often baked boiled and then pureed or sliced and sauteed rutabagas are especially flavorful when seasoned with caraway seeds dill or lemon juice choose small to medium-sized rutabagas that are smooth and firm and feel heavy for their size also a root vegetable from the cabbage family turnips have white skin with a rosy red or purple blush and a white interior their flavor is similar to that of a radish and can be rather hot turnips can be peeled then diced sliced or julienned for cooking they may be baked or cooked with moist heat cooking methods and are often pureed like potatoes choose small to medium sized turnips that have smooth skin and feel heavy they should be firm not rubbery or limp any attached leaves should be bright green and tender water chestnuts are the tuber of an asian plant that thrives in water the brownish black skin is peeled away to reveal a moist crisp white interior which
can be eaten raw or cooked when cooked water chestnuts retain their crunchy texture making them a popular addition to stir-fried dishes they're also used in salads and casseroles or wrapped in bacon for hors d'oeuvres artichokes are the immature flowers of a thistle plant introduced to america by italian and spanish settlers young tender artichokes can be cooked whole but more mature plants need to have the fuzzy centers known as chokes removed first whole artichokes can be simmered steamed or microwaved they're often served with lemon juice garlic butter or hollandaise sauce the heart may be cooked separately then served in salads pureed as a filling or served as a side dish place raw trimmed artichokes in acidulated water to prevent browning choose fresh artichokes with tight compact heads that feel heavy their color should be solid green to gray green brown spots on the surface caused by frost are harmless and just a cosmetic defect artichoke hearts and leafless artichoke bottoms are both available canned asparagus a member of the lily family has bright green spears with a ruffle of tiny leaves at the tip larger spears tend to be tough and woody but can be used in soups or for purees asparagus is eaten raw or steamed briefly stir fried microwaved grilled or roasted fresh spring asparagus is excellent with nothing more than lemon juice or clarified butter asparagus with hollandaise sauce is a classic preparation choose firm plump spears with tightly closed tips and a bright green color running the full length of the spear asparagus should be stored refrigerated at 40 degrees fahrenheit upright in about a half an inch of water or with the ends wrapped in moist paper towels the stalks should not be washed until just before use a european variety of white asparagus is sometimes available fresh and readily available canned white asparagus has a milder flavor and a soft tender texture it's produced by covering the asparagus stalks with soil as they grow this prevents sunlight from reaching the plant and retards the development of chlorophyll the green color that naturally occurs in plants stripped of their tough brown outer shells the tender young shoots of certain varieties of bamboo are edible bamboo shoots make excellent additions to stir-fry dishes or they can be served like asparagus although fresh shoots are available in asia canned peeled shoots packed in brine or water are more common in the united states canned shoots should be rinsed well before use once a medicinal herb stalk celery is now a common sight in kitchens worldwide stalk celery has pale green stringy curved stalks often eaten raw in salads or as a snack celery can be braised or steamed as a side dish celery is also an important component in mirepoix choose stalks that are crisp without any signs of dryness fennel is a mediterranean favorite used for thousands of years as a vegetable an herb and a spice the fennel bulb often incorrectly referred to as sweet anise has short tight overlapping celery-like stalks that have feathery leaves the flavor is similar to that of anise or licorice becoming milder when it's cooked fennel bulbs may be eaten raw or grilled steamed sauteed baked or microwaved choose a fairly large bulb bright in color on which the cut edges appear fresh without dryness or browning the bulb should be compact and not spreading no silly not that kind of baby vegetables baby vegetables are either the diminutive form of the normal vegetable harvested early in its
growing process or in some cases can actually be a different species of vegetable such as the patty pan squashes pictured below these are often used as garnishes or at functions where you want to serve them whole on the plate or just cut in half and roasted a very simple presentation that leaves the whole vegetable on the plate most vegetables consist of more than 80 percent water the remaining portion consists of mostly carbohydrates a small amount of proteins and fat most vegetables are extremely low in fat and calories most of the structure is indigestible dietary fiber such as cellulose and lignin vegetables are a good source of a wide array of vitamins and minerals when purchasing fresh vegetables they are sold by weight or count and packed in things such as lugs bushels flats and crates some common vegetables can be purchased pre-processed trimmed cleaned and cut to specification irradiation processes use ionizing radiation to sterilize food this is used to inhibit sprouting on onions potatoes ginger and garlic to serve as a quarantine treatment for fruits coming into the united states for insect disinfestation of cereals pulses and dried fruits for shelf life extension of chicken meat and fish and for pathogen reduction on spices and fresh foods it destroys bacteria parasites and insects it does not affect the taste or the texture of the food when you receive fresh vegetables oftentimes you'll need to wash them surface contaminants from soil water and handling can spread foodborne illnesses or viruses proper washing of vegetables is essential remove tags and ties do not soak wash vegetables uncut under cold running water slightly warmer than the produce itself then refrigerate promptly for canned vegetables raw vegetables are cleaned and placed in sealed containers and then subjected to a high heat treatment grades of canned vegetables include u.s grade a or fancy u.s grade b or extra select and u.s grade c or standard canned vegetables are purchased in cases of standard can sizes such as the small cans that you might see at the grocery store all the way to the number 10 cans which are used often in the restaurant industry frozen vegetables are almost as convenient as canned freezing severely inhibits the growth of microorganisms that can cause spoilage grading of frozen vegetables is the same as canned one of the added benefits of frozen vegetables is that it locks in the freshness the iqf or individually quick frozen method is commonly used just as an aside this method was developed by a man named clarence birdseye drying dramatically alters the flavor texture and appearance of vegetables the loss of moisture when drying concentrates the flavors and sugars in the plant drying greatly extends the shelf life of these vegetables so you may see these in soup mixtures or things that have to get rehydrated and generally they're going to be rehydrated as part of the cooking process itself the acid or alkali content of the cooking liquid affects the texture and color of vegetables vegetables such as asparagus broccoli green beans and red cabbage may discolor in the presence of acidic things such as lemon juice green beans will turn a drab paler color not a very pleasant look but a little bit of alkali such as baking soda may actually brighten up the greens the opposite effect is had when you apply these to the cauliflower in the picture and then whether you add acid or alkali to purple
cabbage well you can see the results there these reactions are of greater concern when using moist heat cooking methods in particular so finally let's summarize and discuss some of our takeaways from this module vegetables are a powerhouse of vitamins minerals and other nutrients typically the brighter the color of the vegetable the more nutritious it is not all vegetables should be peeled in fact very few vegetables should be peeled as that's where most of the nutrients live if you can clean the peel and use it you should do so know your vegetables not every vegetable responds well to refrigeration or being left out at room temperature leave the picking of wild mushrooms to the professional foragers i can't emphasize this enough this can be a very dangerous and deadly situation they are specifically trained to know the difference between edible mushrooms and the poisonous mushrooms which oftentimes look a lot alike vegetables add value to a plate they complement the mains on the plate and they can even be the mains themselves vegetarian food is a fast-growing segment of the culinary arts and should never be overlooked
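as a quick reference for the acid and alkali color reactions just described here is a minimal python sketch the groupings the function name and the ph numbers are my own simplification of the green vegetable cauliflower and purple cabbage examples above not part of the lecture

# a minimal sketch mapping the color reactions described in this chapter
COLOR_IN_ACID_AND_ALKALI = {
    # vegetable group: (reaction in acid, reaction in alkali)
    "green (green beans, broccoli, asparagus)": ("turns drab and paler", "brightens, but softens quickly"),
    "white (cauliflower)": ("stays white", "yellows"),
    "red/purple (red cabbage)": ("turns brighter red", "turns blue to green"),
}

def predicted_color(vegetable_group, cooking_liquid_ph):
    in_acid, in_alkali = COLOR_IN_ACID_AND_ALKALI[vegetable_group]
    if cooking_liquid_ph < 7:
        return in_acid
    if cooking_liquid_ph > 7:
        return in_alkali
    return "little change at a neutral ph"

# a splash of lemon juice makes the cooking liquid acidic
print(predicted_color("green (green beans, broccoli, asparagus)", 5.5))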
|
On_Cooking_Lecture_Videos
|
On_Cooking_Chapter_17_Pork.txt
|
in this module we're going to talk about pork the so-called other white meat the objectives of this module are identify the primal subprimal and fabricated cuts of pork describe and perform basic butchering procedures purchase pork appropriate for your needs explain appropriate cooking methods for different cuts of pork and apply appropriate cooking methods to several common cuts of pork pork is the meat of hogs usually butchered before they are one year old since hogs are butchered at a young age their meat is generally very tender with a delicate flavor with the exception of beef americans consume more pork than any other meat the pork we eat is leaner and healthier than it once was because of advances in animal husbandry we essentially bred the fat out of the pork because hogs are butchered at a young age their meat is generally very tender with a delicate flavor and pork can be enjoyed cured processed or fresh the mild flavor of fresh pork blends well with many different seasonings making it a popular menu item pork is naturally tender and can be prepared by almost any dry heat moist heat or combination cooking method more than two-thirds of the pork marketed in the united states is cured or preserved to produce products such as smoked ham and smoked bacon humans have been eating animals for two and a half million years and pigs have been on the planet for over 40 million years so it's likely that our ancestors were including them as a part of their diet more than two million years spent eating pigs may explain why pork is one of our species least common food allergies humans began domesticating pigs around 9000 bc and they quickly became the cheapest source of meat that our ancestors could eat pigs have less than half the gestation period of cows produce litters over 10 times the size grow to slaughter weight in less than half the time and will eat a diet of just about anything we can only speculate about why some religions ban their adherents from eating pork but a pig's diet and behavior are two of my top culprits moses and mohammed famously prohibited their people from eating pork and these two abrahamic religions were not alone every culture in the ancient near east forbade their people from sacrificing pigs to their gods relegating these unclean beasts to converting garbage into fuel for people who couldn't afford more desirable animals something that hasn't seemed to change for the past 10 000 years despite getting some bad press in some rather famous publications pork products grew in popularity through the roman empire medieval europe and into modern times pork cures well in salt and smoke producing a delicious meat that can be safely stored for future use delicious taste and safe storage were incredibly valuable in a time before refrigeration but i want to know if uncured pork has any value for us in the present day modern foods make us think that cheap and delicious are synonymous with bad and unhealthy but with natural foods our taste buds are a nutritional compass pointing us towards the foods that are good for our health while cuts of pork from the belly are very high in fat and calories cuts from the loin are actually quite lean containing as many as 24 grams of protein in just 120 calories pork contains every essential amino acid and a wider variety of micronutrients than almost any food that people call super but pork doesn't have quite as many nutrients as beef and the fat it does contain has a worse omega 6 to 3 ratio in some cases getting as high as 30 to 1
that can be a bit of a problem considering that a healthy ratio is closer to 1 to 1 and that's nothing compared to the problems that you'll experience if you don't cook pork to the proper temperature ruminant animals like cows and sheep have multiple stomachs to detoxify their foods pigs have just one stomach to detoxify the kinds of foods that my industrial plumbing is struggling to handle not only do pigs have just one stomach to detoxify foods like these but they also can't sweat to reduce their body temperature and release toxins instead of sweating to cool down they'll roll around in mud baths which may contain their own feces it's not surprising to learn that more than seventy percent of pigs are infected with parasites and bacteria including trichinella and yersinia cooking pork to a high enough temperature will kill these pathogens between their anatomy diet and behavior it was probably easy for our ancestors to recognize the health risks of eating pork it was probably also difficult to convince an entire society not to eat an animal that converts garbage into delicious calories unless you had a mandate from god some people don't like modern explanations for religious traditions but after reflecting back on my past experience at a farm it seems pretty obvious that our ancestors would recognize the connection between pork and disease but otherwise there weren't any immediate consequences to eating pork every day pigs may be garbage disposals that are infected with parasites but cooking pork to a high enough temperature will kill any contaminants buying pork from pigs that were raised on a healthier diet and lifestyle will still be less expensive than buying red meat while providing you with all of the essential amino acids and a wide variety of micronutrients while these pigs may have a better omega 6 to 3 ratio than their industrial counterparts the fat still is not as healthy as the fat from beef and eating pork does not provide any additional nutrients that you can't get from red meat strictly from a health perspective eating pork is better than eating no meat and if you're going to choose a meat beef bison or other ruminant animals are a better first choice after a hog is slaughtered it is generally split down the backbone this divides the carcass into bilateral halves like the beef carcass each side of the hog carcass is then further broken down into the primal cuts hogs are bred specifically to produce long loins the loin contains the highest quality meat and is the most expensive cut of pork pork is unique among meats in that the ribs and loins are considered a single primal they're not separated into two different primals the way the rib and loin are in beef veal or lamb the primal cuts of pork consist of the shoulder or boston butt the loin the fresh ham the belly including the spare ribs and the picnic ham the primal pork shoulder is the lower portion of the hog's foreleg and it accounts for approximately 20 percent of the overall carcass weight this includes seven percent from the shoulder and thirteen percent from the picnic ham the shoulder contains the arm and the shank bones and has a relatively high ratio of bone to lean meat because our pork comes from hogs slaughtered at a young age the shoulder is tender enough to be cooked by any method it is however one of the leanest cuts of pork with a high percentage of connective tissue that requires long cooking pork shoulder is available smoked or fresh the shoulder is fairly inexpensive and when purchased fresh
it can be cut into shoulder butt steaks or boned and cut into smaller pieces for sauteing or stewing whole pork shoulder is the cut preferred by many barbecue pit masters throughout the american south here in memphis if you're going to compete in a barbecue competition particularly memphis in may you're going to use the whole shoulder with the boston butt and the picnic ham attached to each other the pork foreshank is called the shoulder hock and is almost always smoked shoulder hocks are often simmered for long periods of time in soups stews and braised dishes to add flavor and richness when a pork shoulder includes the foreshank it is referred to as a picnic shoulder or picnic ham the primal boston butt is a square cut located just above the primal pork shoulder and it accounts for approximately seven percent of the carcass weight the boston butt is very meaty and tender with a good percentage of fat to lean meat containing only a small portion of the blade bone the boston butt is a good choice when a recipe calls for a solid piece of lean pork the fresh boston butt is sometimes cut into steaks or chops to be broiled or sauteed when the boston butt is smoked it is usually boneless and called a cottage ham the pork loin is cut directly from behind the boston butt and includes the entire rib section as well as the loin and a portion of the sirloin area the primal loin accounts for approximately 20 percent of the carcass weight and it contains a portion of the blade bone on the shoulder end a portion of the hip bone on the ham end and all of the ribs and most of the backbone the primal pork loin is the only primal cut of pork not typically smoked or cured most of the loin is a single very tender pork eye muscle the whole boneless pork loin weighs from 8 to 10 pounds and is covered on one side by a solid layer of fat which can be trimmed away boneless pork loin is quite lean but contains enough intramuscular and subcutaneous fat to make it an excellent choice for a moist heat cooking method such as braising or it can be prepared with a dry heat cooking method such as roasting or sauteing the most popular cut from the loin is the pork loin chop chops can be cut from the entire loin the choicest chops are center cut chops which are cut from the primal loin after the blade bone and the sirloin portions at the front and rear of the loin are removed double rib chops have a thickness of two ribs and may include two rib bones the pork loin can be purchased boneless or boned and tied as a roast a boneless pork loin is smoked to produce canadian bacon the rib bones trimmed from the loin can be served as barbecued pork back ribs or baby back ribs although smaller than spare ribs pork back ribs are meatier the loin also contains the pork tenderloin located on the inside of the rib bones on the sirloin end of the loin the pork tenderloin is rather small weighing only between one to one and a half pounds it is the most tender cut of pork with a mild flavor the tenderloin is very versatile and can be trimmed cut into medallions and sauteed or the whole tenderloin can be roasted or braised the primal fresh ham is the hog's hind leg it's a rather large cut accounting for approximately 24 percent of the carcass weight the ham contains the h bone leg bone and hind shank bones fresh hams like the legs of other meat animals contain large muscles with relatively small amounts of connective tissue like many other cuts of pork hams are often cured and smoked but fresh hams also produce great
roasts and can be prepared using almost any cooking method when cured and smoked hams are available in a variety of styles they can be purchased bone-in shankless or boneless and partially or fully cooked fully cooked hams are also available canned there is a specific ham for nearly every use and desired degree of convenience the pork belly is located below the loin accounting for approximately 16 percent of the carcass weight overall twelve percent from the belly and four percent from the spare ribs it's very fatty with only streaks of lean meat it contains the spare ribs which are always separated from the rest of the belly before cooking pork spare ribs usually are sold fresh but can be smoked as well containing less meat than baby back ribs from the loin and a generous amount of fat which makes them tender spare ribs are simmered and then grilled or baked while being basted with a spicy barbecue sauce the remainder of the boneless pork belly is nearly always cured and smoked to produce bacon so let's spend a little bit of time and talk about the differences between the different ribs and this is for identification of rib cuts not individual ribs we're not going to count the number of ribs shown on the top right is the entire rib section of the pig extending from the top of the backbone on the outside of the loin primal all the way to the rib tips at the bottom of the belly primal and you can see where it's been separated in two pieces and the top piece is going to be our baby backs and the bottom is going to be our spare ribs you can also see on the bottom picture where the anatomy is according to the diagram the top is going to be your baby back ribs the st louis cut which we'll talk about in a few minutes will come from the middle section and then the rib tips on the bottom the st louis cut and the rib tips combined comprise what we call the spare ribs baby back ribs are lean tender pork ribs cut from the top or spine end of the ribs closest to and often including a portion of the loin meat the word baby is used because they are small only three to six inches in length it is however no mistake that they are probably some of the best cuts of ribs the most tender and most meaty of all the ribs country style ribs are meaty cuts from the loin end near the shoulder available either boneless or with a portion of the blade bone attached these are not technically ribs as the quote-unquote ribs inside of them are part of the blade bone a large portion of the rack of pork ribs includes the st louis ribs and the rib tips combined together this is the lower half of the rack of ribs without the baby back ribs attached to it at the top end st louis ribs are spare ribs with the tougher rib tips removed the meat is squared off so the rack is a more uniform rectangular shape by removing the sternum cap on the top and the flap on the side fresh ham hocks and pigs feet also called trotters are not primal portions but are inexpensive flavorful cuts that are gaining popularity among chefs ham hocks are round cross cuts of the lower end of the rear leg or the ham just above the foot cuts from the hog's front legs may also be referred to as hocks as well although they have less meat than those from the hind legs bone in hocks are available fresh or cured and smoked with additional flavorings to provide a longer shelf life smoked hocks are
used as a seasoning for stews and braised dishes such as greens trotters or pigs feet are hooves made up of bone connective tissue and a thick skin trotters are generally used to make gelatinous pork stock or added to dishes to flavor slow cooked beans or vegetables then removed and discarded trotters may also be cooked then pickled in brine and eaten as a snack the meaty bits of the hocks and trotters are used in pennsylvania dutch scrapple in rich rustic terrines and in potted meats heritage or heirloom breeds of pork are specific breeds of hogs that vary in flavor and fat content such as the berkshire the duroc or the red wattle particular ways of feeding hogs such as free range may alter the taste of pork products such breeds are prized for their flavor but they're often costly as well many can be raised using organic farming practices for instance the berkshire hog is typically raised on pasture rather than in wallowing pens so they're often considered more healthy because they don't wallow in feces and mud they actually graze in a pasture in order to keep cool they're typically in a cooler climate and will have farrowing huts or some kind of shade trees they can lay down underneath pork is an excellent source of protein b vitamins iron and other essential nutrients pork is however high in fat especially saturated fats however in the past 30 to 40 years we have lowered the amount of fat in pork as a result of new breeding and feeding techniques sodium content is high in smoked cured and preserved pork products mainly because of the sodium that's added in the curing and preserving process a misconception is that pork is a very very fatty animal in contradiction to that when you think about it and compare it to chicken you can actually see from this chart that a pork tenderloin only contains 0.6 grams of saturated fat versus a boneless skinless chicken breast with 0.4 so the difference is very marginal what you have to do is make sure that you trim the fat or at least most of the fat off the pork and you're going to end up with a very comparable product and you can see this all the way down the chart whereas total fat may be higher saturated fat is relatively low all the way across the board pork is just absolutely chock full of protein which helps build and repair body tissues and regulate body processes iron which is good for hemoglobin building fat which supplies energy and your body does need a certain amount of fat zinc which protects the bone structure as well as balancing hormones and enzymes it has things such as vitamin b12 and vitamin b6 these are your energy vitamins that help transport and transfer energy in the body along with vitamin b3 which is niacin which releases the energy from the food but is also good for healthy skin and digestive tract and riboflavin and thiamine all of these vitamins necessary in a daily diet you can get just by eating a piece of pork during the early to mid 20th century with the advent of products like crisco the need for lard sharply declined pig farmers moved away from the lard pigs of old those huge mammoth beasts and started selectively breeding pigs to have less fat while you would think this would be favorable this resulted in something called pse pale soft and exudative meat pale in color soft in texture and exudative meaning it releases its moisture this has led to a more dried out white meat hence the marketing term the other white meat this was generally considered to
be a bad idea and today we find that this style of meat is dry and overly tough because it has let go of most of its moisture today pork is rosier in color firmer in texture and non-exudative resulting in a juicier tastier piece of meat so let's discuss some of the summaries and takeaways from this module pork is the meat of a hog butchered before it is one year old this is due to the fact that once hogs become too old they are typically used as sires or dams for producing new litters we want to cull them when they're younger so the meat is more tender religious restrictions aside americans consume more pork than any other meat after beef modern pigs have been bred to have less fat and longer loin muscles resulting in a leaner taste and texture the smaller ribs called baby back ribs got the name baby due to their diminutive size pigs often wallow in filth and mud mainly due to the conditions imposed on them by human beings if left alone however pigs will return to a feral state within weeks pork is often thought of as unhealthy but ounce for ounce if trimmed and not preserved or smoked it holds its own with chicken and is healthier even than beef
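to make the chart comparison above concrete here is a minimal sketch in python it uses only the two saturated fat figures the lecture quotes pork tenderloin versus boneless skinless chicken breast any additional cuts or serving sizes you plug in would be your own assumption

```python
# minimal sketch comparing the saturated-fat figures cited in the lecture's chart
# (grams of saturated fat per serving; only the two values quoted are used here)
saturated_fat_g = {
    "pork tenderloin (trimmed)": 0.6,
    "boneless skinless chicken breast": 0.4,
}

for cut, grams in sorted(saturated_fat_g.items(), key=lambda kv: kv[1]):
    print(f"{cut}: {grams} g saturated fat")

gap = (saturated_fat_g["pork tenderloin (trimmed)"]
       - saturated_fat_g["boneless skinless chicken breast"])
print(f"difference: {gap:.1f} g -- marginal, as the lecture notes")
```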
On_Cooking_Chapter_5_Tools_and_Equipment_Part_2_Cookware_and_Storage.txt
in this module of tools and equipment we're going to explore cookware and storage ware there are five basic metals that are used in cookware aluminum copper stainless steel cast iron and carbon steel aluminum pots and pans are the ubiquitous pans that you'll find in almost every kitchen they're cheaper lightweight commonly used and conduct heat fairly well however they can have hot spots lose heat just as fast as they gain it and they cannot be used with acidic food as acid in food will react with aluminum and cause a toxic reaction because of this also they don't work well with induction cooktops because they're not ferromagnetic copper is one of the best conductors of heat it heats and cools rapidly whenever necessary and it's fairly heavy weight however if it is cheap or thin it can have hot spots and you can't use it with acidic foods just like aluminum it's also a soft metal easily scratched and easily damaged what you see in the picture is a product that just came from the factory after about a year's worth of use it will not look like this and it doesn't necessarily retain its heat very well stainless steel is the preferred cooking utensil for most kitchens it's heavy duty has an easy to clean surface is typically clad which we'll discuss later resists pitting and metal reactions with acidic foods works with induction cooktops and holds its heat well if clad however it's much more expensive than the aluminum varieties cast iron has been preferred for many years it's very heavy duty if seasoned correctly it can also be non-stick it gives a unique flavor to foods and promotes browning better it holds onto its heat very efficiently however it's somewhat fragile and if you drop it things like the handles the lids or even the pan itself can break it may require seasoning or re-seasoning before use and you should never soak it in water instead of soap a mild abrasive like kosher salt works very well to clean it dry thoroughly and lightly oil before putting away much like cast iron carbon steel is heavy duty if seasoned correctly can be non-stick will provide a unique flavor to foods and promote browning better it holds onto its heat very efficiently is less fragile than cast iron and is relatively inexpensive the cons it may require seasoning or re-seasoning before use and just like cast iron you should never soak it in water and should follow the same cleaning instructions an amalgam of several different metals cladware sometimes referred to as all-clad because of the brand name uses different layers of metal including stainless steel copper and aluminum to provide the benefits of each of those metals with the durability and cleanliness of stainless steel on the inside of the pan the pro is that using these layers you get the best of both worlds the con is it can be very expensive additional cooking materials include things such as non-stick coatings like teflon terracotta which is used for cooking things such as tagines glass such as corning ware enamelware which is used in things like crock pots and cast iron that has been coated with enamel and baked on and silicone in the case of silpats or molds used in baking and pastry a sauteuse also known as a skillet and a sautoir also known as a saute pan are used almost interchangeably in many ways but believe it or not there is a difference between these two does it make a difference in your cooking if you use one instead of the other let's take a look
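before moving on to pan shapes here is a minimal sketch in python summarizing the metal trade-offs just described the acid and induction flags follow the lecture except where the comments note an assumption and the helper name pans_for is purely illustrative

```python
# quick lookup of the cookware-metal trade-offs described above; the induction
# flags for cast iron and carbon steel are my assumption (both are
# ferromagnetic), and cast iron's acid flag is a common rule of thumb rather
# than something the lecture states -- the rest follows the lecture
COOKWARE = {
    "aluminum":        {"acid_safe": False, "induction": False, "note": "cheap, light, heats fast but unevenly"},
    "copper":          {"acid_safe": False, "induction": False, "note": "best conductor, soft, scratches easily"},
    "stainless steel": {"acid_safe": True,  "induction": True,  "note": "durable, best when clad"},
    "cast iron":       {"acid_safe": False, "induction": True,  "note": "holds heat, needs seasoning, never soak"},
    "carbon steel":    {"acid_safe": False, "induction": True,  "note": "like cast iron but less fragile"},
}

def pans_for(acidic_food: bool, induction_cooktop: bool) -> list[str]:
    """Return the metals that suit both the dish and the cooktop."""
    return [metal for metal, props in COOKWARE.items()
            if (props["acid_safe"] or not acidic_food)
            and (props["induction"] or not induction_cooktop)]

print(pans_for(acidic_food=True, induction_cooktop=True))  # ['stainless steel']
```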
a saute pan or sautoir is one with straight sides and has a larger surface area which makes it ideal for tasks like searing meat or reducing a pan sauce we also like it because we're less likely to slosh things over the side a sauteuse or skillet has sloped sides and is used mainly for sauteing the sloped sides provide the perfect angle for flipping your food which is where the term saute to jump comes from a rondeau is a favorite staple in a chef's arsenal and should be in any home cook's as well sometimes called a braiser or brazier this wide somewhat shallow pan is similar to a stockpot or dutch oven but not nearly as deep the pan has low straight sides usually has two loop handles for easy carrying and nearly always comes with a lid generally it is made of stainless steel copper or a combination of clad metals part of the rondeau's allure is its versatility its shape lends itself well to searing braising oven roasting frying poaching pan roasting and simmering or boiling this shape is deep enough to hold liquid for poaching or braising but shallow enough that liquid can evaporate to make a cooking liquid more intensely flavored when shopping for a rondeau look for one with a heavy base which will conduct and retain heat well you also want a tight fitting lid and an oven safe construction so you have the option to finish or even fully cook dishes in the oven also make sure the pan is not too wide it should not be more than a couple of inches in diameter larger than your burner or it won't heat properly like stockpots or dutch ovens rondeaus come in a variety of sizes choose one that will accommodate the number of servings you usually prepare a six quart or seven quart version should be sufficient for preparing four to six servings a saucepan is a round cooking utensil with high straight sides and a long handle equipped with a tight fitting lid a saucepan can range in size to hold contents from one pint up to four quarts they are made of materials such as stainless steel copper anodized aluminum glass and enameled steel or cast iron a saucepan has many uses including preparation processes such as boiling water making sauces and soups or braising foods saucepans are very similar to saucier pans as both are used to perform similar tasks the saucier pan is shorter in height and is made with sloped sides instead of straight sides saucier pans provide easier access for stirring contents around the edges of the pan and with a wider opening it is easier to work with wide arcs of the spoon spatula or whisk however choosing a saucepan or saucier pan is typically a personal preference as both pans work equally well for the preparation of sauces and various foods stockpots are deep pots used for simmering stock and soups they have long bodies and straight sides they're typically taller than they are wide they come in various different sizes to accommodate various different stock or soup quantities and can even have a spigot at the bottom for easy draining there are various different kinds of specialty pans including crepe pans which are shallow thin and flat and used to make crepes by smoothing the batter out across the bottom roasting pans are used for roasting meats and vegetables they're heavy duty and can accommodate multiple pieces in large quantities whole turkeys and whole cuts of meat can easily fit into these pans they're often accompanied by a rack that sits in the bottom
paella pans are specialty pans from spain used to make a spanish dish called paella a cooked rice dish usually with seafood and woks are used in chinese and other asian cooking and are typically designed for a much higher much faster heat process one of the first steps in setting up your kitchen is choosing the right cookware cooking and cleanup are a lot easier when you have the right pots and pans for each task though you don't have to purchase the most expensive brands consider investing in a good quality set that will last let's take a look at some of their different uses large stockpots are taller than they are wide and have a capacity of at least eight quarts with two strong grab handles we use these for cooking or boiling soups stews stocks and pasta this pot is great when you need to feed a crowd or make large quantities of food like potatoes or corn on the cob most sets of pots and pans also come with a smaller version which is a handy size for everyday cooking a small saucepan is really versatile you can use it for cooking smaller portions steaming or reheating when shopping for pots there are a few common features to look for these are stainless steel pots with thick bottoms they allow for even heating without scorching or burning and at the side here notice the handle look for a pot with a welded handle instead of one with rivets a welded handle lasts longer and in any good quality set of pots you want tight fitting lids the lid helps bring things to a boil much faster which saves both time and energy when buying pots and pans pay attention to the handle i prefer a metal handle that way the pan can go from the stovetop right into the oven just be sure to use a pot holder when you take it out now for saute pans commonly called frying pans we have two non-stick pans here it's nice to have at least one large one this one is great for making about four or five pancakes and this smaller pan maybe two eggs again the metal handle is great for moving from the stovetop to the oven to cook omelets or frittatas you should also have a good sized roasting pan this will hold about two maple leaf prime chickens a pan this size is also good for a roast or a turkey so these are the basic pots and pans you need to get started in the kitchen there are a lot more you could add but the best rule of thumb is quality over quantity make sure you invest in good quality pots and pans that will last for years of great meals these heavy duty polycarbonate food storage containers often referred to by their capacity and the trade name cambro i.e. a 22 quart cambro are the workhorses of the restaurant kitchen and are some of the most common and useful storage containers in kitchens they can store produce in the cooler liquids in the freezer and dry goods at room temperature they are versatile enough to go from environment to environment they come in round or rectangular designs ranging in size from 4 6 8 12 18 and 22 quarts and they often come with matching sized lids to accommodate them they wash well by either hand or dishwasher and can stand up to years of usage other food storage options often referred to by polycarbonate's trade name lexan or by the company trade name cambro consist of large food boxes they come in full and half sizes three and a half inches deep six inches deep and nine inches deep they also have drain trays that can be used in conjunction to keep product out of standing water such as when icing fish ingredient bins come in a variety of different
shapes and materials they are used to store dry items such as flour grains beans rice etc they are often on casters so they can move around the kitchen bringing the food to the workstation and not the other way around they are required to have lids that prevent any foreign objects from falling into the food these polycarbonate and stainless steel containers called hotel pans are designed to store small amounts of ingredients on a refrigerated make table such as a sandwich station polycarbonate will not react with acids in foods like metal containers will and it will not get so cold as to damage the food stainless steel will not react with most acidic foods and is often used where the temperature needs to be kept very cold since metal is a better conductor of cold than polycarbonate they come in sizes ranging from full size hotel pans two-thirds pans half pans third pans fourth pans sixth pans and ninth pans and various other sizes each size is available in six inch four inch and two inch deep models the name of the pan refers to how many of each it would take to equal a full size pan for example it takes nine of the one-ninth pans or ninth pans to make one full size hotel pan here you can see various different configurations depending on the usage needs many of the configurations require the use of a spanner bar to prevent the pans from falling in if you want to get a job in a professional kitchen as a prep cook which is where all cooks and eventually chefs start off you have to first understand the basic terminology of some equipment in the kitchen and this will also help you if you're a home cook who just wants to pick up some stuff at your local restaurant supply store so here we have our basic hotel pan and hotel pans are defined by their depth so here we have a two inch hotel pan that has a two inch depth also called a 200 and you have a four inch hotel pan which is a four inch depth also called a 400 and then you have the deepest hotel pan which is the six inch or the 600 and these can be used for any number of things like braising or holding food depending on the application that you want to use now you'll also notice that some four inch hotel pans will have a wider brim than others and these are used on buffet lines to hold a two inch hotel pan and you'll notice that this two inch hotel pan will fit directly into this and it'll fit snugly and this is because you'll have water underneath in your four inch hotel pan that will be simmering and it will keep whatever product you have hot in your two inch hotel pan and you do this by using sternos and these sternos will go in those little indents on the bottom and you'll fit this whole thing into a stand now you also have perforated hotel pans which are basically hotel pans with holes in them and these can be used as steamers you can use them to drain things in a steamer setup you're going to place your perforated hotel pan into a larger hotel pan that has water underneath it that's simmering you're going to place the lid on top there's a couple different lids you have a lid with no hole on the side that traps in steam and then you have a lid with a hole on the side that will allow a spoon to stick out and you'll commonly see this on buffet lines or in any restaurant that is executing off of a steam table now we also have service pans also called mise en place pans because this is where your mise en place will go all your prep for your line and they come in the two inch four inch and
six inch depths so here we have a sixth pan and it's called a sixth pan because you can fit six into a hotel pan the same with the ninth pans and this is because you can fit nine ninth pans into a hotel pan and obviously the same thing goes with a third pan it's called a third pan because you can fit three to a hotel pan now there are also half hotel pans which are simply half the size of a hotel pan still coming in a two inch four inch and six inch depth you also have these large sheet trays which can be used for any number of things roasting reheating baking whatever you want to use them for and then you also have half sheet trays which are literally half a sheet tray now before we go one last thing you'll normally see these roasting racks which are just a rack insert for large sheet trays and these are good for resting meats anytime you want a product to be elevated above the pan so it's not sitting in its own juices now for more information including posting questions and comments for this episode head on over to stellaculinary.com kp14 for the show notes zip top bags often referred to by their trade name ziploc are an indispensable storage device for kitchens they allow the storage of dry goods and other small quantity items they can be used in the freezer or refrigerator as easily as they can be in dry storage and they can even be used in sous vide cooking a plethora of sheet pans are necessary in the kitchen they come in a variety of different sizes but the most common sizes are the full size and half size they can be used for storage organization baking and a number of different uses while not exactly a food storage container the speed rack is essential for storage and movement within the restaurant they can fit full size and half size sheet pans readily they can be left open wrapped with plastic for transportation and air protection or draped with a specially designed shroud let's summarize and look at some of the takeaways for this lesson takeaway number one not all cookware metals are identical cheap cookware often translates into cheaply made and burnt product takeaway number two you want to pick cookware based on your budget but also by its intended use you don't necessarily need copper pans to saute most foods takeaway number three specialty cooking surfaces allow for more versatility in the cooking process and different results takeaway number four just like metals choosing the right pot or pan is essential for the job you should never use a rondeau to simmer stock for a long time because evaporation will be more significant use a stockpot instead takeaway number five the right size storage bin is essential why store two quarts of stock in a 22 quart container it simply doesn't do the right job and it takes up too much room in your coolers and takeaway number six you can never have enough hotel pans no matter what size you have this presentation attempts to give an overview of the different smallwares and pieces of equipment that you'll experience in a restaurant environment however it's not an exhaustive list for more information i suggest katom.com which can be accessed at the link on the screen
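as a compact reference for the pan size arithmetic covered in this module here is a minimal sketch in python the fraction names and depth shorthand follow the lecture and the helper names are just illustrative

```python
# minimal sketch of the hotel-pan naming arithmetic described above: a pan's
# fraction name says how many of that size tile one full-size hotel pan, and
# the depth in inches doubles as a name (a 2-inch-deep pan is a "200", etc.)
from fractions import Fraction

FOOTPRINTS = {"full": Fraction(1), "two-thirds": Fraction(2, 3),
              "half": Fraction(1, 2), "third": Fraction(1, 3),
              "fourth": Fraction(1, 4), "sixth": Fraction(1, 6),
              "ninth": Fraction(1, 9)}

def pans_per_full(size: str) -> Fraction:
    """How many pans of this size fit the footprint of one full-size pan."""
    return 1 / FOOTPRINTS[size]

def depth_name(inches: int) -> str:
    """Kitchen shorthand for pan depth, e.g. 2 -> '200'."""
    return f"{inches}00"

print(pans_per_full("ninth"))       # 9
print(pans_per_full("two-thirds"))  # 3/2 -- mixed sizes need a spanner bar
print(depth_name(6))                # '600'
```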
On_Cooking_Chapter_28_Charcuterie.txt
in this module we're going to cover the broad range category of charcuterie in this module we'll cover the following objectives prepare forcemeats assemble and cook pates terrines and sausages prepare and use aspic jelly employ the proper methods for brining curing and smoking meats and fish identify cured pork products and prepare and use sausage so i'm looking forward to teaching everybody about charcuterie so what we want to get across for everybody is we want to teach them about what charcuterie is how it came to be and how to eat it so those are the three topics that we want to touch on let's do it what is charcuterie okay so we're going to talk about the different types of charcuterie first one is the salumi they're ground meats flavored with the herbs and spices from the area that they come from then you've got your cooked charcuterie things like mortadella my favorite and then we finish up with prosciutto which is just pork salt air and time the best there's a whole world of charcuterie out there that a lot of our customers don't even recognize as charcuterie but they should yeah so you got prosciutto salami everybody knows those you got bacon pepperoni pate people are always confused when i tell them bacon counts right yeah exactly even the cured fish there are several different types of charcuterie forcemeat sausage pate terrine galantine roulade and salt cured and brined products a forcemeat is a preparation made from uncooked ground meats poultry fish or shellfish it's seasoned and then emulsified with fats it's used as a primary ingredient to make pates terrines galantines and sausages textures vary from smooth and velvety to well textured and coarse to ensure the proper emulsification of a forcemeat the ratio of fat to other ingredients must be precise the temperature must be maintained below 41 degrees fahrenheit or 5 degrees celsius and the ingredients must be mixed properly and thoroughly some different types of equipment that you'll need for making a forcemeat food processors or food choppers a drum sieve and a standard meat grinder plus various different size grinding dies the ones that are shown in these slides are all commercial grade but they do make residential grade as well let's take a few minutes and talk about a forcemeat's main ingredients the meat is the dominant ingredient that is what gives the forcemeat its flavor it could be chicken beef pork poultry of any kind it could even be vegetarian or seafood fats are used to add moisture and richness to a dish you want to use the fats that are a little more solid in nature as opposed to the ones that are squishy binders such as crustless white bread soaked in milk or eggs sometimes even used together give it a nice binding if you think about it we do the same thing with meatloaf and then seasonings salt curing salts even which we'll talk about later marinades and various herbs and spices that go into it for flavor and then finally our garnishes these are meats or fats or vegetables or other foods added in limited quantities to provide contrasting flavors textures and colors so for instance a salmon and scallop mousseline terrine wrapped in leek leaves or a pate en croute with brunoise garden vegetables or a pate de campagne which is wrapped in prosciutto a fish terrine with salmon dill and pink peppercorn garnish wrapped in spinach or a chicken roulade with pistachios and dried cranberries cured meats are easy to make at home if
you have the right stuff and that means curing salts also known as pink salt prague powder or tcm tinted curing mixture curing salt is a mixture of salt and sodium nitrite also known as sel rose in french it's used to inhibit bacterial growth and preserve the meats while keeping the color and flavor intact the addition of 6.25 percent sodium nitrite is the key just don't use it as a table salt it controls spoilage and the growth of bacteria and it preserves the pink color of the meat once it's cooked the curing salt itself is often dyed pink which helps you identify it readily and prevents you from putting it on the table there are typically two different kinds of pink curing salts or tinted curing mixtures that we use and they're often referred to as prague powder prague powder number one is also known as instacure or modern cure this cure is used to prevent meats from spoiling when being cooked and or smoked at low temperatures typically under 200 degrees fahrenheit this cure is one part sodium nitrite and 16 parts salt combined and crystallized to assure even distribution as the meat temperature rises during processing the sodium nitrite changes to nitric oxide and starts to gas out at about 130 degrees fahrenheit after the smoking and cooking process is complete only about 10 to 20 percent of the original nitrite remains and as the product is stored and later reheated for consumption the decline of the nitrite continues four ounces of prague powder number one is required to cure 100 pounds of meat or one level teaspoon for every five pounds of meat so again the sodium nitrite converts into nitric oxide and this requires cooking or slowly smoking things at temperatures of less than 200 degrees prague powder number one is the powder you want to use when you're applying heat prague powder number two is used on dry cured products it's a mixture of one part sodium nitrate and 16 parts of salt similar to prague powder number one but in this case the product does not require cooking or smoking or refrigeration this cure contains sodium nitrate which acts as a time release it starts as sodium nitrate works its way chemically to sodium nitrite over time and then becomes nitric oxide as it cures this allows you to dry cure items and products that take a much longer time to cure where a cure with sodium nitrite alone would dissipate too quickly again the difference here is that the sodium nitrate in prague powder number two is used without heat slowly over time prague powder number one is for the heat application prague powder number two is for curing without heat just like prague powder number one the ratio for this is one ounce of the cure for 25 pounds of meat or one level teaspoon of the cure for five pounds of meat and it's very important that you stick to these ratios because too much of this can create toxicity in the food
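since getting these ratios exactly right matters here is a minimal sketch in python of the cure arithmetic the lecture gives prague powder number one at four ounces per 100 pounds and prague powder number two at one ounce per 25 pounds which works out to the same proportion with roughly one level teaspoon per five pounds for either the function name is just illustrative

```python
# a minimal sketch of the cure arithmetic above -- the ratios are the ones the
# lecture gives (prague powder #1: 4 oz per 100 lb of meat, #2: 1 oz per 25 lb,
# both about 1 level teaspoon per 5 lb); stick to them exactly, since too much
# nitrite or nitrate is toxic
OZ_PER_LB = {"prague #1": 4 / 100, "prague #2": 1 / 25}  # both equal 0.04 oz/lb

def cure_needed(cure: str, meat_lbs: float) -> tuple[float, float]:
    """Return (ounces of cure, level teaspoons) for a batch of meat."""
    ounces = OZ_PER_LB[cure] * meat_lbs
    teaspoons = meat_lbs / 5  # 1 level tsp per 5 lb, per the lecture
    return round(ounces, 2), round(teaspoons, 2)

print(cure_needed("prague #1", 12.5))  # (0.5, 2.5) -- e.g. a 12.5 lb sausage batch
```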
pate spice is a typical spice blend this is a recipe that's used for a traditional pate spice where we take two to three bay leaves a half a teaspoon of thyme leaves you typically want to use dried fresh thyme leaves and what i mean by that is you want your thyme leaves to be fresh but you can leave them out of refrigeration on the countertop for several days and allow them to dry out some after they've been picked and they crumble and grind very nicely a half a teaspoon of mace which is the outer coating of the nutmeg seed if you can't find mace you can substitute a little more of the nutmeg and add just a touch of cayenne pepper 3/4 teaspoon of cinnamon 1 teaspoon of nutmeg 1 teaspoon of cloves a half a teaspoon of white pepper a half a teaspoon of rosemary again allowed to dry out a half teaspoon of basil and a pinch of cayenne pepper to give it a little bit of a bite and then you want to grind all those things together and make sure they're all incorporated when it comes to preparing forcemeats preparations include time and temperature control for safety or tcs foods temperature control must be strictly maintained all food contact surfaces and cutting boards must be sanitized and you want to make sure that your table is sanitized as well to ensure proper emulsification forcemeats must be kept below 41 degrees grinder and food processor parts are typically kept in the freezer you can even keep your food processor parts stored in cornstarch which will prevent any ice crystals from building on them if they have any water on them pate de campagne that's right this is the video the charcuteries do not want you to see because when people find out that country pate they've been charging thirty dollars a pound for is really nothing more than cold meatloaf well you can imagine what will happen millions of americans will continue not to buy that pate but above and beyond saving a few dollars i think this is a really fun project and i thought this really came out tasting great so as they say let's go ahead and get this pate started right by tossing several different meats into a mixing bowl which will include a whole bunch of pork shoulder that i cut up into like one inch pieces and then to that i'm going to add some duck meat which i've trimmed off a couple large legs since we're doing a country style pate some type of game bird would be very traditional in this okay so our pate base is going to be pork and duck but we also need to include a very healthy dose of fat and today i'm going to be using some diced up bacon but some pork belly would also work and we'll go over all your fat options on the blog post and then of course it would not be a real pate unless we add a little bit of liver and while pork liver would be more traditional i'm going to use chicken livers which i've chopped up kind of roughly to make them look even more disgusting so before people turn off the video let's go ahead and cover those with a diced onion as well as some sliced shallots and minced garlic we will also toss in some freshly chopped parsley at which point it's time to season this up with what looks to be a dangerous amount of kosher salt but don't worry it's the perfect amount and then besides regular salt we're also going to add a little bit of pink salt and no not pink himalayan salt pink curing salt which you may or may not have purchased for our ham video so that's optional but if you do have it we'll toss a little bit in and then we're definitely also going to need to toss in a spoon of pate spice and you might be thinking hey i don't have that it's okay nobody does which is why we make our own by combining the
following four ingredients some ground ginger a whole bunch of freshly grated nutmeg some ground clove and some type of ground pepper usually white or black but we're going to shock the world by using cayenne and we'll give that a mix and that's it you are now the proud owner of some pate spice which by the way is great in so many other things which unfortunately we don't have time to review because we need to finish this off with some freshly ground black pepper as well as a nice big splash of brandy or i guess cognac if you want to waste the good stuff but of course that's up to you you guys are the billy mays of your country style pates which reminds me never buy anything for the kitchen that you see in an infomercial usually that's a bad idea especially if it slices and dices things but anyway once we have all those ingredients together we'll go ahead and give that a thorough mixing at which point we're going to wrap it up and pop it in the fridge to marinate for about two hours okay some people like to go overnight but i don't i think a couple hours is perfect so we'll go ahead and wrap that up and pop it in the fridge and while we're waiting for that we can go ahead and make the next component the panade and for that we will start with some plain dry bread crumbs to which we will add a couple large eggs which appear to be very excited they're in this and then we'll finish up with some heavy cream and that's it we will give that a mix and that's what we're going to add to our meat later to help bind it together as well as help provide that beautiful tender texture we're going for and when you first mix this it's going to look kind of thin but as it sits on the counter for a few minutes it's going to thicken up quite a bit as you'll see when we add it to our ground meat which by the way is the next step and as you might know one of the keys to successful meat grinding is to use very very cold meat so what we're going to do is transfer this onto a sheet pan and we'll pop that in the freezer for i don't know about 15 or 20 minutes at which point it's probably going to look something like this and we certainly don't want this frozen solid but it should be very very cold to the touch and starting to firm up a little bit and i think using very cold meat is important no matter how you grind it but especially if you're going to use the grinding attachment on your stand mixer which isn't really exactly heavy duty but having said that if your meat is properly prepped this does do a very decent job i do try to make sure i keep that plate in front of the blade cleaned off but other than that there's not much to do except pass it through and i'm sorry to disappoint you but i'm not going to show a lot of this grinding footage since nobody should have to look at this longer than necessary but anyway we're going to go ahead and coarsely grind our meat and by the way if you don't have one of these machines there's other ways you can do it which i will be happy to outline in the post and then what we're going to do once our meat's ground is toss in our fruit and nuts and i'm gonna be going with some dried cherries and pistachios and since that's our garnish we want those pieces whole which is why we didn't grind them and by the way these are optional you don't have to put them in but we are doing a country style pate and if you've ever been to the country you know there's lots of fruits and nuts and then once we've added whatever we're garnishing with we will finish this off by mixing in our panade which as
predicted has gotten very thick so we'll go ahead and transfer that in and mix it until combined and i say mix but it's really more of a fold because as with working with any kind of ground meat we really don't want to overwork this if we can avoid it so we'll kind of stir and mix and fold that together and then what we'll do once that's been accomplished is transfer this into some kind of fat lined mold and for a mold i'm using a bread pan and for fat i'm using bacon i really do like to use caul fat and that is more traditional but i didn't have any and the bacon really does work nicely and then as you may have guessed we're going to go ahead and fill that to the top with our mixture and yes i might have a little too much but that's okay and by the way please make sure whatever you're using is long enough to overlap because we are going to add a few pieces to the bottom i mean it's the top now but it will be the bottom and then once we have our meat transferred in and successfully encased with bacon i'm going to add a little piece of parchment paper over the top and then we'll wrap that nice and tightly with foil preferably heavy duty at which point we will transfer that into some kind of deep pan or dutch oven in case you're wondering that was just a little silicone pot holder that i put at the bottom for a little insulation but it's probably not necessary it just seemed like a cool thing to do and then once that's placed in we want to partially fill that up with hot water just hot tap water is fine and we want to fill that up somewhere between halfway up the pan and oops i went too far so i usually shoot for about two-thirds of the way up our mold and then because the other secret to this is cooking it to a very specific doneness i'm gonna go ahead and insert one of these probe thermometers before we put the lid on and that's going to ensure we cook this to the perfect internal temperature so we will go ahead and place that in and then pop on the lid at which point we will very carefully transfer this into the center of our preheated 350 degree oven for about two hours or so or more importantly until it reaches an internal temp of 155 and we really do want to go by temperature here because there are so many variables that will affect the cooking time so for me this took about an hour and 40 minutes and then what we'll do once it comes out and we remove it from that pan is let it cool down and because i filled mine so much it was actually coming up above the top of the pan so for now i'm just going to press it down with this heavy cast iron baking dish and i should mention if you didn't overfill yours you could just skip to the next step which is the official way we're going to press this while it cools and the reason we're doing this is so things compact a little bit and we end up with a little firmer and what most people would consider nicer texture and by the way in case you're wondering the reason i like to transfer this into a dish lined with paper towels is because as we press some of those juices and excess fat will leak out but anyway let's go ahead and remove that foil and proceed to the official pressing method and for that what we'll do is cut a nice thick piece of cardboard just a little smaller than the opening and we will wrap that in foil and we're gonna use that to press down our pate as it sits overnight in the fridge with the help of a little bit of weight for example a couple cans of garbanzos or something heavier might even work better but the point is make sure this
is weighted down as it chills overnight in the fridge so that the next day it's nice and firm and compressed and looks something like this and then what we'll do to unmold this is simply dip it very briefly in some very hot water just for a couple seconds and that pate should slide right out and we could start to slice but for best results i like to pop it back in the fridge until that bacon is nice and cold again right when we slice it should be firm and white like lardo and not flabby and translucent so i did re-chill mine at which point we can slice in and see how we did and as i was cutting this i didn't even have to wait to taste it i could tell by the way the knife was sliding through the texture was perfect and the appearance check it out there is no way that could possibly look any better unless i guess we saw a few more pistachios so i cut another slice and there we go that can't look any better so visually i was very happy with how this came out and it looked like the real deal but i needed to go in for a taste to make sure and i served that up very traditionally with some cornichons and mustard and toasted bread and as happy as i was with the appearance i was even happier with the taste all right we joked about this just being a cold meatloaf but because of all our rich meats and copious amounts of fat and salt not to mention that little hint of gaminess from the liver this goes so far above and beyond that in deliciousness they really shouldn't even be mentioned in the same video i mean taste texture appearance i just enjoyed everything about this so to summarize there ain't no pate like a chef john pate because the chef john pate don't stop until it's gone so i really do hope you give this a try soon head over to foodwishes.com for all the ingredient amounts and more info as usual and as always enjoy some common forcemeat preparations your basic or straight forcemeat is smoother and more refined and probably the most versatile of all this can be used for stuffing into a galantine or ballotine or it can be used in a terrine pretty much any application that calls for a forcemeat this is a good one to use it doesn't require a lot of processing it can be a little bit rustic or it can be somewhat smooth country style is the simplest to prepare and it's heavily seasoned typically country style forcemeats are chunky and will have bits and pieces of whole muscles or chunks of maybe fruit or pistachios or nuts of some kind in them a mousseline is light airy and delicately flavored and is typically done just like you would do a chocolate mousse with the addition of cream that's been turned into a whipped cream and folded into let's say for instance a seafood puree this allows the structure of the seafood puree to become light and airy and not dense and chewy less a type of forcemeat and more a style quenelles are small dumpling shaped portions of a mousseline these look fancier than they are basically you take two spoons and you scrape the mousseline back and forth between them until you end up with an almost football like shape using forcemeats we've got several different styles we can do the first is the terrine in this particular case this is not a baked terrine but traditionally they're baked in an earthenware mold you also have metal molds stainless steel molds and enamel molds several different kinds like that and the actual mold itself is referred to as a terrine and that's where we get the name terrine from
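here is a small reference structure in python condensing the forcemeat styles just described the wording is condensed from the lecture and the dictionary itself is purely illustrative

```python
# a small reference structure for the forcemeat styles described above --
# the descriptions are condensed from the lecture, nothing added
FORCEMEAT_STYLES = {
    "straight":   "smooth, refined, most versatile; base for terrines, galantines, ballotines",
    "country":    "simplest, heavily seasoned, coarse, with chunks of meat, fruit, or nuts",
    "mousseline": "light and airy; whipped cream folded into a puree, often seafood",
    "quenelle":   "not a forcemeat type but a shape: mousseline formed between two spoons",
}

def describe(style: str) -> str:
    return FORCEMEAT_STYLES.get(style.lower(), "unknown style")

print(describe("mousseline"))
```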
pates or in this case pate en croute is a fine savory meat filling wrapped and baked in a crust you don't necessarily have to have the crust for it to be a pate pate en croute means pate in crust and this is an example of one in the picture and normally what you would do with that as it is cooking is pour in an aspic which we'll cover later and it would fill the voids in there and keep it nice and tightly packed the next two are galantines and ballotines which are often mistaken for each other a galantine is a forcemeat wrapped in the skin of the animal poached and served cold whereas a ballotine is forcemeat wrapped in the skin of a chicken thigh or something along those lines and then seared finished off in the oven and served hot aspic jelly is used in the finishing and presentation of terrines pates and galantines it adds flavor and shine prevents display food from drying out and it's made from a strong stock with increased gelatin content often with added gelatin plain gelatin gelatinous meats or bones are added you may follow guidelines for gelatin content as required sheet gelatin works very well for aspic you can also use regular powdered gelatin you just don't want to use jello there are several different types of terrines that we could talk about liver terrines a type of liver terrine called foie gras vegetable terrines brawns or aspic terrines mousses rillettes confits and crepinettes liver terrines are typically made from calf chicken beef goose or duck liver they're popular and easy to make pureed poultry pork or veal livers are mixed with eggs and a panade of cream and flour and then baked in a fatback lined terrine although most livers puree easily in the food processor a smoother finished product is achieved by forcing them through a drum sieve after or in lieu of pureeing them in the processor foie gras is unique even among other poultry livers in that it consists almost entirely of fat foie gras terrines are made from the fattened goose or duck liver called foie gras meaning fatty liver foie gras requires special attention when cooking if it is cooked improperly or too long it turns into a puddle of very expensive fat it's as expensive as it sounds foie gras the fattened liver of a duck or goose you can find it at fine french restaurants like here at bistro pierre lapin in new york city and a single appetizer can cost as much as your main course at your average diner i have two foie gras for when we're making the terrines that's eight pounds of foie gras that's 250 bucks so why is foie gras so expensive first of all ducks and geese are expensive to raise compared to chickens they take up to two and a half times longer to mature so the capital investment for a foie gras farm for the output is at least two and a half times greater than that for a chicken farm and then there's force feeding the process that fattens up the liver the duck is raised as a normal duck and then for the last two weeks of its life two to three weeks depending on the farm it is force fed corn and grain through a metal tube a couple of times a day it requires a ton of feed as much as four pounds each day which grows the liver up to 10 times its normal size plus it requires a lot of time during that period of time you have an enormous input of labor and cost of labor to grow and produce the finished product but it's hard to deny that foie gras is also costly because it's so controversial animal activists say that foie gras is one of the most
inhumane meats out there and while farmers and some chefs disagree no one is abusing animals at a foie gras farm the moral debate has put pressure on supply especially in the u.s in fact there are only three foie gras farms across the entire country and it doesn't look like there'll be more anytime soon nobody in their right mind would open a foie gras farm somebody tried to open a farm in indiana about a decade ago and quickly decided that they couldn't handle the political aspect of it in early 2019 the supreme court upheld a foie gras ban in california which went into effect in 2012 and even in the european union which produces ninety percent of the world's foie gras around a dozen countries prohibit force feeding taken all together that's why a pound of foie gras can cost as much as ninety dollars and that's well before it makes it to the plate foie gras requires skill to prepare the thing with foie gras is that you need to be taught how to handle it and skilled chefs like harold tend to run pricey restaurants yet another reason why foie gras is so expensive but still people are willing to pay the price people love it like they love love love it it cannot be duplicated in any other way a texture that is you know very special so while foie gras may be controversial it's unlikely to be kicked off the menu anytime soon vegetable terrines are often made from softer vegetables such as squash zucchini eggplant tomatoes etc but can also be made from items such as butternut squash potatoes onions and the like they may or may not contain dairy items such as cheese they can also be held together with a non-protein aspic making them vegetarian and vegan friendly vegetable terrines have a relatively low fat content and stunning eye appeal beautiful vegetable terrines are made by lining a terrine with a blanched leafy vegetable such as spinach then alternating layers of several separately prepared vegetable fillings to create contrasting colors and flavors a different style of vegetable terrine is made by suspending brightly colored vegetables in a mousseline forcemeat to create a mosaic pattern when sliced brawns are aspic terrines made by simmering gelatinous cuts of meat most notably pig's feet and head including the tongue in a rich stock with wine and flavorings this enriches the stock with gelatin and flavor from the meat creating an unclarified aspic jelly the meat is then pulled from the bone diced and packed into the terrine mold then the aspic is added and allowed to set head cheese is a common example along with scrapple and souse mousse much like its sweet cousin a savory mousse is bound together using cream a savory mousse which is not a mousseline forcemeat is made from fully cooked meats poultry game fish shellfish or vegetables that are pureed and combined with a bechamel or other appropriate sauce bound with gelatin and lightened with whipped cream a mousse can be molded in a decorated aspic jelly coated mold such as one described in the process that follows which is peeled off after the mousse is unmolded a small mousse can be served as an individual portion a larger molded mousse can be displayed on the buffet rillettes are prepared by seasoning and slow cooking pork or fatty poultry such as duck or goose in generous amounts of their own fat until the meat falls off the bone the warm meat is mashed and combined with a portion of the cooking fat the mixture is then packed into a crock or a terrine and the rendered fat is
strained over the top to seal it rillettes are eaten cold as a spread accompanied by a little bread or toast confit is prepared in a similar manner as rillettes except that before cooking the meat or poultry is often lightly salt cured to draw some of the moisture out the confit is then cooked until very tender but not falling apart confits are generally served hot like rillettes confits can be preserved by sealing them with a layer of strained rendered fat properly prepared and sealed rillettes and confits will keep for several weeks under refrigeration the classic example of a confit is duck confit which is shown in the picture to the side a thin layer of fat is used to line the terrine mold the fat helps keep the forcemeat moist during cooking sliced fatback and caul fat are the most commonly used for this application caul fat is a web-like fatty membrane that surrounds the stomach and intestines of cattle sheep and pigs caul fat is used to wrap terrines sausages and pates as well as other forcemeat products such as crepinettes the membrane holds the meat together and during the cooking the melting fat bastes the meat adding moisture and flavor pork caul is generally preferred for these purposes using fatback or caul fat is essentially a form of barding that does not require any tying twine or toothpicks today we're making rich decadent french country pate trust me you don't have to be a fancy french chef to make pate for your own picnic historically the tricky part about making pate is that it involves a lot of steps but it's also really easy to overcook that won't happen here because we're cooking ours sous vide with joule you can walk away while it cooks and come back knowing your pate will be amazing here's what goes into it ingredients are super inexpensive and easy to come by you've got your base meats your salts and your binder if you stick to the recipe on the meat the salts and the binder you can get creative with your garnish your spice mix and whatever you want to wrap around your pate so we're gonna start with spices the one we're doing today is nice and bright perfect for summer but you can use any spice you like once you have your spices ground you're gonna get your salts and your cures ready when i say salts i mean salt for seasoning and sodium nitrite the pink salt that's gonna make sure the pate stays nice and pink and lasts in your fridge a long time then we've got our meats we've got pork shoulder which is real classic real traditional you probably won't see a lot of pates without pork shoulder we've got pork liver that's gonna give it a nice mineral flavor and we've got bacon bacon's gonna add a little bit of smoke but mostly it's there for flavor and fat we're gonna cube our meats so i'm gonna cut nice little chunks we're gonna toss those meats with the spices and the salt then we're gonna put them in the freezer we're gonna do that so it gets nice and firm that way it grinds better once we've got that in the freezer we're gonna go back to the rest of the pate we've got a panade all that means for us is a binder and you've probably used a panade before when you've made meatloaf or meatballs what that does is it makes the pate nice and tender so the more panade you add the more tender your pate will be we're going to take some eggs milk and bread blend it up together but the first thing you're going to do is take the crust off of your bread you could do any bread you could even do crackers or bread crumbs all right milk two eggs
i'll give it a little mix the next thing we're gonna do is prep our mold so we've got a classic terrine mold like this this is a nice cast iron one you can get these all over we're gonna first line it with plastic we line the mold with plastic that way when the terrine is cooked it just slides right out you see how it sticks like crazy so what we're gonna do is a little ice water get it nice and wet same little wet rag here start to work the plastic into the mold you don't have to worry about these wrinkles they'll barely show up later then we're gonna line it with bacon so nice little sheets ask your deli guy to slice that bacon thin just make sure you get it into those corners lining it with bacon is totally optional it just makes it gorgeous so remember our meat's in the freezer from here we're gonna grind it so first things first our grinder the auger goes in a little blade a little die all right we're in business all right we've got our ground and seasoned meat here we're gonna add our panade dried cherries thyme this is the garnish you can add whatever you want we're going to add some wine for flavor so we're going to fill it up to like here and then we fold all the bacon over it this is the last step then the plastic wrap then it goes in with joule so we've got everything in our terrine mold and the pate's ready to cook traditionally you would cook it in what's called a bain-marie a big pot of water the mold goes in you crank the oven up and your objective is to not burn or overcook the pate and get it just to the right temperature but it's really tricky in an oven today instead of cooking it traditionally we're going to cook with joule and what joule is going to allow us to do is get that beautiful creamy pate texture we want without stressing i'm going to put a couple chopsticks down that's going to help the pate stay above the bottom that way water can get underneath i'm going to put joule in hot water whatever is hottest out of your tap we use manual mode to set joule to 167 degrees fahrenheit after the pate is cooked i'm gonna take it out chill it in the fridge overnight if you want a little extra upgrade we're gonna press it pressing the terrine or the pate will make sure that you have a very dense texture who wants some pate who wants some pate guys if you like making pate you're definitely going to want to make some other awesome picnic stuff like chicken liver mousse pickled mustard seeds wine gums macarons all the french goodies go find the recipes at chefsteps.com and order a joule while you're there crazy amount of birds huh because it's summertime and it's a french picnic it's summertime and we've got birds i love the bird sounds for preparing a galantine butterfly the breast or tenderloin being used you can use chicken breasts turkey breast pork tenderloin or any number of meats and cover the skin with a thin layer of meat arrange the forcemeat and garnishes in a cylindrical shape across the center of the skin using plastic wrap to tightly roll the galantine and form a tight cylinder wrap the galantine with heavy duty aluminum foil and then you can submerge and poach or sous vide it in water until it reaches the desired temperature for doneness
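both pate videos in this module steer by temperature rather than time so here is a minimal sketch in python of the fahrenheit to celsius conversion with the checkpoints quoted above the 41 degree food safety ceiling the 155 degree internal target and the 167 degree water bath

```python
# the recipes above steer by temperature, not time -- a minimal sketch of the
# fahrenheit/celsius conversion with the checkpoints quoted in this module
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

CHECKPOINTS_F = {
    "forcemeat food-safety ceiling": 41,    # keep ingredients below this
    "chef john's pate internal temp": 155,  # cook to temp, not time
    "joule water bath for the sous vide pate": 167,
}

for label, temp_f in CHECKPOINTS_F.items():
    print(f"{label}: {temp_f} F = {f_to_c(temp_f):.1f} C")
```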
sausage not in the casing as well smoked and cooked sausages are made with raw meat products treated with chemicals usually the preservative sodium nitrite examples are kielbasa bologna and hot dogs dried or hard sausages are made with cured meats and then air dried under controlled conditions dried sausages may or may not be smoked or cooked dried or hard sausages include salami pepperoni and soppressata smoked and cooked sausages and dried or hard sausages are rarely prepared in typical food service operations although chefs are increasingly interested in preparing these artisanal foods rather they are produced by specialty shops and facilities where sanitation is ensured for proper fermentation and preservation of dry cured products controlled temperatures under sanitary conditions are essential a haccp plan and a health code variance may be required in order to prepare dried cured foods on site sausage meats are forcemeats with particular characteristics and flavorings both coarse italian and lamb sausages for example are simple country style forcemeats without liver and with different seasonings stuffed into casings and formed into links hot dogs bratwurst and other fine textured sausages are also variations of basic forcemeats stuffed into casings and formed into links although sausage mixtures can be cooked without casings most sausages are stuffed into casings before cooking two types of sausage casings are commonly used in food service operations natural casings are portions of hog sheep or cattle intestines sold by the bundle also called a hank the diameters of casings are measured in millimeters and they come in several sizes depending on the animal or portion of the intestine used sheep casings are considered the finest quality small casings both hog and sheep casings are used to make hot dogs and many types of pork sausages beef casings are quite large and are used to make sausages such as ring bologna and polish sausage most natural casings are purchased in salt packs in order to rid them of salt and impurities the casings must be carefully rinsed in cold or warm water and allowed to soak in cool water for at least one hour or overnight before use collagen casings are manufactured from collagen extracted from cattle hides these are generally inferior to natural casings in taste and texture but they do have advantages collagen casings do not require any washing or soaking before use they have a long shelf life and they are uniform in size because of their texture collagen casings are often used for smoked sausages and snack sticks welcome to another edition of brewed for food goose island's education series my name is jesse valenciana and welcome to meet jesse [Applause] [Music] sausages sausage making can be a very intimidating thing but i'm going to show you with the right equipment you can make your own sausage at home and not let the butcher down the street get all the fun playing with meat first things first i have a very fancy grinder spent probably too much on it but you probably have a kitchenaid at home that you got for your wedding gift and probably have never used with the right meat grinding attachment you can make your own sausage so first things first we're going to take apart our grinder and it's important when making and grinding your own meat to have very cold tools the tools always have to be cold before we break down our meat that we're going to stuff in our grinder let's put this stuff in the freezer
shall we so today i'm going to show you how to make a thuringer it's a very nice easy baseline sausage recipe so if you're going to make beer cheese or if you've got some giardiniera that you're going to put on top of it the flavor of this meat is not going to take away from anything else so what you want to start off with is a good chunk of pork i recommend pork butt because pork butt is the best butt it's nice it's cheaper than pork shoulder it's got a good amount of fat and it doesn't have too much connective tissue which could get inside your grinder and muck it up and slow it down so the secret is breaking down your pork into about one inch cubes and you always have to have a good sharp knife this is a butcher's knife not a chef's knife so you want to get the meat keep a lot of the fat but get rid of all the connective tissue because that's what gums up your grinder good sausage always has a good amount of fat so you've got some good fat like this right here that you can use for the sausage and this is that disgusting connective tissue that you want out of it see it's got these weird capillaries and whatnot i don't think it's capillaries i think i'm just using that word because i read a medical journal [Music] all right so now we've cut up our meat into nice little chunks next step is to get a bowl put your meat in there for eight pounds of pork i probably got rid of close to two pounds so you've got six pounds of meat to play with i'm using one tablespoon of marjoram another tablespoon of coriander a tablespoon of fennel salt i always add more salt than everything else it brings out the flavors even more so don't be shy with it we've got some thyme here and it's about one tablespoon of thyme give or take thyme is a pain in the ass to get off of the little stalks that it comes in takes a lot of time get your hands all nice and meaty get in there so we're going to wrap this up stick it in the freezer for about half an hour like i said by that time our equipment will be nice and cold the meat will be nice and cold and we can start grinding [Music] all of our meat is cold again and we've already ground it coarsely now we're going through a medium grind [Music] so our meat's all ground and ready to be encased now like i said before it's important to keep all of your equipment really cold butchers have an advantage because they normally make their sausage in a colder room like a walk-in cooler so the secret to making sausages at home is really like getting in there get in there dude get in there and freezing your equipment now that we're ready to make our sausage we need different equipment so here's our sausage attachment the smaller ones are for breakfast sausage the larger ones are for in our case just regular old grilled sausage we've got another wheel here that's just going to push out the sausage into our casings so my natural casings have been sitting in water with salt and they've been sitting in the freezer so adding salt in there keeps them from freezing and sticking together so i've added a brand new clean pan the meat's on the other pan it's going to be fed in through the equipment so we can actually start making sausage so what you want to do is feed a little bit of sausage through make sure it comes out then we're going to pinch it we're going to pinch the casing tie a knot keep on feeding can you just feed the sausage through so i'm going to end up tying a knot on the casing you want to make sure there's sausage all
the way through or else you won't have a nice top link to start off your whole sausage chain [Music] thanks mike he grinds pepper and he feeds meat like a champion there we go we just made some sausage it's a big german smile think about the bread that you're going to use for your sausage so i've got some brat buns at home so i'm going to make a little bit longer link the way i link it is i find the spot where i'm going to pinch it and i start squeezing the meat away from the middle so you can twist your link or else it's gonna be too tight and you're gonna snap your casing there you have it sausage this is a thuringer just like my grandma when she came from germany she brought this recipe with her i'm just kidding i found this in a cookbook but it's a delicious recipe all right these sausages are ready to be thrown on the grill or if you want to what i do is i like to par-cook them over a stove with some beer and some water and finish them off on the grill but they are complete and however you want to enjoy them they're ready to go recipe below remember to like comment and subscribe thanks for watching meet jesse cheers it's a goose island thing salt curing is the process of surrounding a food with salt or a mixture of salt sugar nitrite based curing salts herbs and spices salt curing dehydrates the food by drawing out moisture which inhibits bacterial growth and adds flavor it's most often used with pork products and fish salt curing as a preservation method is not a quick procedure the time involved adds money and production cost for example country style hams are salt cured proper curing requires approximately one and a half days per pound of ham which means three weeks for the average ham some salt cured hams such as the smithfield and the prosciutto are not actually cooked the curing process preserves the meat and makes it safe to consume raw gravlax is a well-known salmon dish prepared by salt curing salmon fillets with a mixture of salt sugar pepper and dill hi again folks welcome back to five euro food today i'll be making gravlax or cured salmon this delicious scandinavian delicacy is wonderfully easy to make and perfect for parties at christmas new year or midsummer to make this dish you're going to need the following ingredients one large salmon fillet with skin deboned descaled and trimmed equal parts salt and caster sugar for a large piece like the one i'm using you'll need about two deciliters which is around one cup of each two tablespoons of juniper berries and these are optional and one large handful of fresh dill and here are the ingredients on screen once again for you now [Music] start off by crushing the juniper berries if using in a pestle and mortar [Music] mix the salt sugar and crushed juniper berries together in a bowl place the salmon out onto a large board and rub the sugar and salt mix well into the flesh [Music] finally chop the dill and pack the dill onto the top of the salt rub [Music] cut the salmon in half and fold it back onto itself so the flesh sides are facing inwards place the salmon onto plastic wrap and cover well place into a bag and store in the refrigerator on a tray for three days turning over each day after three days remove the salmon rinse away the rub and the dill pat dry finely slice and then serve thanks ever so much for watching i hope you enjoy making your very own gravlax i'll be back soon with something new so do check back again soon for more recipes from five euro food
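before moving on to brining it's worth noting that the curing rule of thumb from the country ham discussion above is simple arithmetic here is a minimal sketch assuming the lecture's figure of roughly one and a half days of curing per pound of ham the 14 pound ham below is a hypothetical example weight not a number from the lecture

def curing_days(weight_lb, days_per_lb=1.5):
    # rule of thumb from the lecture: about 1.5 days of salt curing per pound
    return weight_lb * days_per_lb

ham_lb = 14  # assumed weight for an "average" ham, chosen for illustration
days = curing_days(ham_lb)
print(f"{ham_lb} lb ham: about {days:.0f} days ({days / 7:.0f} weeks)")
# -> 14 lb ham: about 21 days (3 weeks), which lines up with the lecture's estimate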
a brine is actually a very salty marinade most brines have approximately 20 degrees salinity which is the equivalent of one pound or 480 grams of salt per gallon or four liters of water like dry salt cures brines can also contain sugar nitrites herbs and spices brining is sometimes called pickling today most cured meats are prepared in large production facilities where the brine is injected into the meat for rapid and uniform distribution commercially brined corned beef is cured by this process as are most common hams after brining hams are further processed by smoking smoking preserves flavor and protects food for longer storage there are two types of smoking cold and hot cold smoking dries and flavors food at low temperatures between 40 and 85 degrees fahrenheit or 4 and 29.4 degrees celsius cold smoking at the lowest temperature of 40 degrees fahrenheit 4 degrees celsius promotes food safety because it is below the temperature danger zone meat poultry game fish shellfish cheese nuts and even vegetables can be cold smoked successfully cold smoked foods are actually still raw some such as smoked salmon or lox are eaten without further cooking most such as bacon and ham must be cooked before eating hot smoking cooks food at higher temperatures than cold smoking usually between 160 and 185 degrees fahrenheit or 71 and 85 degrees celsius in an enclosed oven for a long period of time hot smoked foods are fully cooked by the hot smoking process and do not require additional cooking before eating as with cold smoking a great variety of foods can be prepared by hot smoking including meats poultry game fish and shellfish although most hot smoked foods are fully cooked when removed from the smoker some such as bacon are used in other recipes that call for further cooking select fresh unblemished food of the highest quality to smoke if frozen products are being smoked properly thaw them under refrigeration before smoking all food to be smoked should be salt cured or brined first under refrigeration salt curing and brining add flavor allow the nitrites and nitrates which give ham bacon and other smoked foods and meats their distinctive pink color to penetrate the flesh and most important extract moisture from the food allowing the smoke to penetrate more easily before smoking wash the food to remove the cure or brine then allow the food to dry until the surface develops a sticky skin called a pellicle a thin sticky membrane or skin that forms on the surface of cured fish meat or poultry exposed to air the pellicle protects the food from drying out during the smoking process thanks to the pellicle compounds in the smoke stick to the surface flavoring and coloring it in the world of barbecue the smoke ring is one of the most sought after properties of smoked meats it is believed to show that you've done a good job and properly smoked the brisket low and slow a smoke ring is a pink discoloration of meat just under the surface crust called bark it can be just a thin line of pink or a rather large thick layer a good smoke ring is about a quarter of an inch in thickness the smoke ring is produced by a chemical reaction between the pigment in the meat and the gases produced from the wood or charcoal when burned these organic fuels produce nitrogen dioxide gas this gas infuses into the surface of the meat as it cooks surrounded by the smoke it reacts with water in the meat and produces nitric oxide myoglobin is the iron containing purple pigment in the meat when meat is exposed to air it reacts to the oxygen developing a bright red color that you might think is blood but isn't the red or pink color of raw meat is due to this oxygenated myoglobin
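the brine ratio quoted at the start of this passage also reduces to plain arithmetic here is a minimal sketch assuming the lecture's ratio of one pound or about 480 grams of salt per gallon or about four liters of water the 10 liter batch is a hypothetical example size

def brine_salt_grams(water_liters, salt_g_per_4l=480):
    # lecture's basic ratio for an approximately 20 degree salinity brine:
    # 480 g of salt per 4 l of water, i.e. 120 g per liter
    return water_liters * salt_g_per_4l / 4

water_l = 10  # an assumed brine bucket size, for illustration only
print(f"{water_l} l of water needs about {brine_salt_grams(water_l):.0f} g of salt")
# -> 10 l of water needs about 1200 g of salt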
an aspic jelly is a mixture of stock and typically an additional amount of gelatin added to that once the gelatin is made cool the clarified aspic jelly by slowly stirring it over an ice bath brush or spoon the aspic jelly over slices of chilled pate arranged on a cooling rack repeat until the coating reaches the desired thickness sauce chaud-froid which is french for hot cold is a decorative coating used in the presentation of cold cooked foods it derives its name from its method of preparation the sauce is prepared hot but served cold knowing how to prepare and use sauce chaud-froid is an essential skill for a garde manger chef and considered part of the craft of charcuterie traditionally used to coat cold meats poultry fish and other items that were eaten cold sauce chaud-froid is now more typically used to coat a whole poached salmon or a whole roasted poultry item which is then further decorated and used as a centerpiece like aspic jelly chaud-froid that is to be eaten should be fairly firm when cold gelled at room temperature but tender enough to melt quickly in the mouth chaud-froid used only for decorative purposes should have a heavier gelatin content and be quite firm which makes it a little easier to work with a classic chaud-froid is a mixture of one part cream and two parts stock either a light veal stock chicken or fish stock strengthened with gelatin depending on the stock used it ranges in color from cream to beige a more modern sauce chaud-froid also known as a mayonnaise chaud-froid or mayonnaise collee is made from mayonnaise it's easier to make than the classic sauce and provides a whiter product which is more desirable when used for centerpieces in summary let's discuss some of the takeaways for this module charcuterie platters can add a great deal of value to a party by combining meats cheeses and other items such as pickles olives mustards jams and breads pates terrines sausages and cured meats were developed as a method of utilizing excess products and preserving them for later use over the last 200 years the garde manger chef has become indispensable in the operation of many large fine dining establishments hot and cold smoking are done for flavor however hot smoking is also used as a preservation method and is done over a long period of time the term hot smoking is a bit of a misnomer as it only occurs between 160 degrees and 185 degrees fahrenheit and rarely ever over 200 degrees fahrenheit and while often considered old school aspic and chaud-froid are still a valuable skill for all garde manger chefs
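since the module summary leans on those two temperature bands here is a minimal sketch of a classifier for them assuming only the ranges quoted above cold smoking at 40 to 85 degrees fahrenheit and hot smoking at 160 to 185 degrees fahrenheit the function name and sample temperatures are mine not from the lecture

def smoking_style(temp_f):
    # classify a smoker temperature using the lecture's two ranges
    if 40 <= temp_f <= 85:
        return "cold smoking (food stays raw)"
    if 160 <= temp_f <= 185:
        return "hot smoking (food is fully cooked)"
    return "outside both ranges given in the lecture"

for t in (40, 165, 120):  # sample temperatures for illustration
    print(t, "->", smoking_style(t))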
|
On_Cooking_Lecture_Videos
|
On_Cooking_Chapter_11_Stocks_and_Sauces_Part_2_Sauces.txt
|
in part two of this module we're going to be talking about sauces and particularly the mother sauces and their derivations the objectives for this module are recognize and classify sauces explain the proper use of thickening agents and prepare a variety of classic traditional and contemporary sauces so what is a sauce anyway in cooking a sauce is a liquid a cream or a semi-solid food served on or used in preparing other foods most sauces are not normally consumed by themselves they add flavor they add moisture and they add visual appeal to a dish sauce is a french word taken from the latin salsa meaning salted sauces may be used for sweet or savory dishes they may be prepared and served cold like mayonnaise prepared cold but served lukewarm like pesto cooked and served warm like bechamel or cooked and served cold like applesauce they may be freshly prepared by the cook especially in restaurants but today many sauces are sold pre-made and packaged like worcestershire sauce hp sauce soy sauce or ketchup sauces for salads are called salad dressings sauces made by deglazing a pan are called pan sauces a chef who specializes in making sauces is called a saucier there are a few key building blocks for making good sauces that we need to talk about number one stocks we always want to make sure that we start with a good stock make sure it's flavorful rich and aromatic this is always going to be the base level flavoring so if you start with a bad stock or one that has turned or is no longer good it's going to make your sauces taste turned or no longer good thickeners use thickening agents properly to achieve the desired texture flavor and appearance you don't want to use the wrong thickener in this case because certain thickeners do not react well to heat or cold so you need to understand the basic premise behind which thickeners to use and then seasonings it's all about what seasonings to use to get the desired flavor whether those include herbs or spices in the stock preparation or in the finished sauce seasoning is always going to be the case and then of course don't forget salt and pepper at the end a roux is a combination of equal parts by weight flour and fat a white roux is only cooked briefly used in white sauces such as bechamel or in dishes where little or no color is desired a blond roux is cooked slightly longer to take on a little more color used in ivory colored sauces it should begin to take on a little color itself a brown roux is cooked until it develops a darker color and a nutty aroma and it's used in brown sauces additionally there are such things as red and black roux collectively called dark roux which are almost exclusively used in creole and cajun cooking and require much longer cooking time and a very delicate touch there are two ways to incorporate roux into a liquid without causing lumps next up cornstarch arrowroot rice flour and potato starch all of these can be used as a thickener in making a slurry the process is to combine equal parts of cornstarch or other starch with cold water make sure it's cold water you do not want to use hot water cold water out of the tap is perfectly acceptable stir this into a smooth paste to create a slurry you will meet some resistance particularly with things such as cornstarch and tapioca because of their viscosity so it will be a little difficult to mix into this paste then you want to add the slurry into a simmering sauce and whisk depending on the kind of starch you're using it will either be a little more cloudy such as cornstarch which will actually make the sauce a little cloudy versus arrowroot which will leave the sauce unchanged and give it a slightly translucent look
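both ratios above the roux at equal parts flour and fat by weight and the slurry at equal parts starch and cold water are easy to capture in a quick sketch the batch weights passed in below are hypothetical examples not amounts from the lecture

def roux(total_g):
    # equal parts by weight flour and fat, per the lecture's definition
    return {"flour_g": total_g / 2, "fat_g": total_g / 2}

def slurry(starch_g):
    # equal parts starch and cold water, per the lecture's definition
    return {"starch_g": starch_g, "cold_water_g": starch_g}

print(roux(200))   # -> {'flour_g': 100.0, 'fat_g': 100.0}
print(slurry(30))  # -> {'starch_g': 30, 'cold_water_g': 30}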
beurre manie it's a real french term and in fact it's a fantastic thickening agent for any gravy or cream sauce add the butter into a small bowl pour over the flour and mix together until a paste is formed [Music] get your fingers in and don't be afraid to get stuck in now sprinkle the mixture into some hot milk and stir continuously adding it bit by bit the milk will start to thicken and you should end up with a lovely creamy sauce in the culinary arts the word liaison broadly describes the process of thickening a sauce using starch such as flour or cornstarch egg yolks fat or even foie gras or a pureed mixture of vegetables most commonly however liaison refers to a mixture of egg yolks and heavy cream that is used to thicken a sauce the classic allemande sauce is made by incorporating a liaison into a veloute sauce the egg yolk and heavy cream mixture is whisked together and then some of the hot liquid is added and whisked into the liaison to gently heat the egg proteins in a process called tempering this helps prevent the egg from scrambling and giving an undesirable texture the heated mixture is then added to the sauce to thicken once the egg mixture is added to the sauce be careful not to bring it to a boil as the sauce will curdle and break as children we learn very quickly that oil and water don't mix the example of emulsification is on the screen as you can see on the left oil floats on top of water but as we stir the oil and water mixture the oil will slowly start to incorporate into the water the fat globules breaking down into smaller and smaller pieces until finally it's completely distributed in the water emulsions fall into one of two categories oil in water emulsions are common in food products such as crema or the foam on top of espresso mayonnaise and hollandaise sauces are stabilized with egg yolk lecithin homogenized milk is an emulsion of milk fat and water with the milk proteins as the emulsifier water in oil emulsions are less common in food but still do exist such as butter which is an emulsion of water into butter fat and margarine which is an emulsion of water into fat globules from oil there are three types of emulsifications first is a temporary emulsion and these are only able to hold their bond for a short amount of time such as a vinaigrette semi-permanent emulsions hold for a longer period of time but are subject to breaking with heat and time such as hollandaise and permanent emulsions hold for a long period of time and are not subject to breaking easily such as mayonnaise in the culinary arts the term mother sauce refers to any one of five basic sauces which are the starting points for making various secondary or small sauces these leading grand or mother sauces are the foundation for the entire classic repertoire of hot sauces they're called mother sauces because each one of them is like the head of its own unique family a sauce is essentially a liquid plus some sort of thickening agent along with other flavoring ingredients each one of the five mother sauces is made from a different liquid and a different thickening agent although three of the mother sauces are thickened with a roux in each case the roux is cooked for a different amount of time to produce a lighter or darker color here are the five classic mother sauces and an example of some of the small sauces that can be made from each
bechamel with bechamel it's made from a milk base or cream base and you can make a cream sauce a mornay sauce and a nantua you can use it for croque madame or for making mac and cheese veloute veloute is based off of white stock white chicken stock white fish stock white veal stock some of the sauces that can be made from it a sauce vin blanc a sauce normandy and it's commonly used with chicken fish potatoes espagnole or brown sauce is made from a brown stock some of the secondary sauces for this would be demi-glace or bordelaise madeira and it's commonly used with meats and lamb and game meats tomato sauce is made from well you guessed it tomatoes and the common ingredient in this is not only the main flavor component but it's also part of the thickening process because this is one of the sauces that does not use a roux to thicken with it's actually thickened by the reduction method reducing the water out of the content some of the sauces that can be made with this of course is your common tomato sauce for making pastas but also a spanish sauce and a creole sauce hollandaise which is a mixture of egg yolk and clarified butter is used to make secondary sauces like bearnaise and mousseline it's commonly used on things such as eggs benedict in the bechamel family a traditional bechamel sauce is made by whisking hot milk into a simple flour butter roux combination this sauce is then simmered with onions cloves bay leaf and nutmeg until it's creamy and velvety smooth the trick is to tack the bay leaf to the onion using cloves called an oignon pique this makes everything easy to remove so you don't have to fish around for the bits left in the sauce after straining this traditional bechamel sauce is almost always used as a base sauce or a mother sauce so it is designed to be used in small sauces and not necessarily used by itself some of the small sauces that can be made from it cream sauce sauce creme or cream sauce is the original classic cream sauce it's also one of the simplest variations of a bechamel sauce it's made by whisking heavy cream into the finished bechamel and adding a little salt and pepper to taste you can make this simple sauce a little more interesting by adding chopped chives or parsley not only will it create a nice flavor but it also adds to the visual appeal of the all white sauce sauce mornay or mornay sauce is made from enriching a standard bechamel sauce with gruyere and parmesan cheeses gruyere is often mistakenly thought of as swiss it's surprisingly similar to making macaroni and cheese from scratch this bechamel variety is an ideal accompaniment for vegetables or pasta it's an excellent choice for chicken or fish as well you can play up either with a little stock to create a veloute sauce variation as well sauce soubise for this particular sauce you want to saute onions and puree them then add them to the bechamel it's a classic cream sauce that you'll want to make when you have a vegetable dip it's also really good as a base for casseroles or as in the case of some italian dishes would be used in lasagna nantua sauce is a classic seafood sauce this variation is made by incorporating shrimp butter and cream into the basic bechamel sauce traditionally however it's made with crawfish the sauce is delicious with fish and seafood especially with shellfish just be sure to get the timing right because it should be served immediately since seafood cooks fast you'll want to have this particular bechamel sauce ready to be reheated at the last minute cheddar cheese
sauce there are many different ways to make a cheddar cheese sauce but one of the best is the bechamel base all you need to do is add cheddar cheese mustard and a little worcestershire sauce and like mornay sauce cheddar cheese sauce is great with vegetables pasta and fish yet it doesn't have to be anything fancy it works as well on macaroni and chili dogs as it does in five star dishes mustard sauce is made by simply adding a small amount of dried mustard smooth whole grain mustard or dijon mustard to the sauce a small squeeze of lemon works well with this sauce too this is a natural accompaniment to light fishes and light dishes dairy free bechamel sauce for this instead of milk and butter you may choose to do something a little more healthy and use something along the lines of a soy milk or an almond milk using this soy milk with a little olive oil and flour as the roux instead of butter it's a very simple recipe that you can use any white sauce can be dressed up in many different ways [Music] a veloute sauce is a savory sauce made from roux and a light stock the term veloute is a french word for velvety the veloute family is quite extensive and this is due to the flexibility and versatility of the sauce there is a lesson to be had within the simplicity of the veloute and the multiple possibilities branched off from it because its neutral flavors and velvety texture mean that there is no real end to the variations in preparing a veloute sauce a light stock one in which the bones have not been previously roasted such as chicken or fish stock is thickened with a blond roux thus the ingredients of the veloute are equal parts by mass of butter and flour to form the roux and a light chicken or fish stock with some salt and pepper to add seasoning the sauce produced is commonly referred to by the stock used such as a chicken veloute or a fish veloute some of the fish variations a reduction of white wine a cardinal sauce which is diced tomatoes cayenne pepper and sometimes paprika the end result will be a sauce that has a red tint to it this goes really well with lobster and crawfish dishes and is often the base for what most people in the south refer to as crawfish cream sauce and normandy sauce which is cider or dry white wine added along with butter allemande sauce also called sauce parisienne is a french sauce based on a light-colored veloute typically veal or chicken or sometimes even shellfish veloutes but thickened with an egg yolk and heavy cream liaison and seasoned with lemon juice allemande was one of the four mother sauces of classic french cuisine as defined by antonin careme in the art of french cooking in the 19th century some of its derivations sauce aurore from the french word for sunrise made by adding tomato paste horseradish sauce adds dry mustard and freshly grated horseradish for a little kick mushroom sauce is sauteed sliced mushrooms and butter with white wine finished with a little chopped parsley and chives and sauce poulette is finished with sliced mushrooms and shallots and butter then finished with a little lemon juice and chopped parsley traditionally supreme sauce is made from a veloute sauce made with meat stock in this case supreme a chicken stock is usually preferred and it's reduced with heavy cream or creme fraiche and then strained through a fine sieve albufera sauce is based on supreme sauce to which is added meat glaze or glace de volaille or
glace de veau and it is served chiefly with poultry and sweetbreads hungarian sauce consists of added paprika onions and white wine and is excellent on fish chicken and other meat items [Music] sauce espagnole and demi-glace are both rich brown sauces but the latter is a derivation of the first after a sauce espagnole has been made it can be easily used in a one-to-one ratio with brown stock and then reduced by half and finished with sherry wine resulting in an intensely flavored demi-glace sauce it can be stirred into soups stews and risottos to boost the taste or simply spooned over a sizzling steak demi-glace is a rich brown sauce that combines one part espagnole sauce and one part stock and is finished with sherry wine a jus lie is a sauce made by slightly thickening meat stock or juices with a slurry based on cornstarch or arrowroot bordelaise is made from dry red wine bone marrow butter shallots and sauce demi-glace similar to marchand de vin chasseur sauce is sauteed mushrooms shallots and a white wine reduction simmered in demi-glace sauce africaine is espagnole sauce flavored with tomatoes onions peppers and herbs sauce bigarade is espagnole sauce with duck drippings flavored with orange and lemon juice sauce bourguignonne espagnole sauce with red wine shallots and a bouquet garni marchand de vin sauce which is a red wine reduction a classic french steak sauce with reduced red wine this would often be served on a steak with chopped shallots and simmered in demi-glace sauce charcutiere has onions and mustard white wine and chopped cornichon pickles served in demi-glace sauce lyonnaise or lyonnaise sauce onions and white vinegar are simmered in demi-glace mushroom sauce a classic sauce made with sauteed mushrooms shallots and a splash of sherry simmered in demi-glace madeira sauce is demi-glace that is enriched with madeira wine and sauce au poivre poivre equals pepper in the french language it's made by adding equal parts of espagnole and white stock to a white wine then using it to deglaze a pan that has a protein that's been cooked with copious amounts of cracked black pepper such as a steak [Music] the tomato sauce family is significantly different than the previous three mother sauces while you can cook stock in the production of this sauce you generally do not nor do you typically add any type of thickener to the sauce instead relying on reduction to thicken originally the classic tomato sauce or pomodoro in italian was made with pork bones and salt pork but today it is a lean version without meat or pork usually made with mirepoix garlic olive oil tomatoes tomato paste optionally a white stock or vegetable broth and seasonings some of its derivations are creole sauce this is the addition of diced green peppers in place of the carrots this is what we call trinity in the new orleans region instead of carrots we use green bell peppers then celery and onion and we also add diced okra and green olives to this with a little dash of hot pepper sauce or spanish sauce which is the addition of sweet peppers such as red bell peppers garlic olive oil and sliced mushrooms or sauce milanaise which is the classic tomato sauce we know and love with tomatoes garlic olive oil and basil [Music] hollandaise is a creamy buttery rich sauce with a touch of lemon juice and
a dash of spice of cayenne or hot sauce the technique for making this is different from any of the others this is an emulsified sauce with egg yolk as its primary thickener emulsified butter sauces must be held at temperatures conducive to bacterial growth so always use clean sanitized utensils prepare the sauce close to service time and never hold hollandaise-based sauces more than one and a half hours never mix an old batch with new batches of the sauce some of its derivations include bearnaise which is made by steeping white wine and white wine vinegar with shallots black peppercorns and tarragon before straining and adding to the egg yolk mixture in place of the lemon juice the sauce is typically finished with chopped tarragon and minced shallots sauce choron is made by adding tarragon and tomato paste or a tomato sauce and finishing with diced tomatoes sauce foyot is a bearnaise derivation with added glace de viande or meat glaze sauce grimrod which sounds like something from harry potter but it's actually a hollandaise sauce with saffron added typically saffron that has been steeped in white wine and added to the hollandaise either in place of or in addition to the lemon juice sauce maltaise is made by substituting blood orange juice for the lemon juice in hollandaise and finishing with blood orange zest sauce mousseline also referred to as chantilly sauce is made by finishing hollandaise by folding in whipped heavy cream to lighten the sauce [Music] so here's a simple breakdown of the sauces with their main ingredient and the thickener used in each one bechamel's main ingredient is milk or cream and it uses a pale roux sauce veloute is a white stock usually chicken or white veal or fish stock and it uses a blond roux espagnole is a brown stock brown veal or brown chicken stock and uses a brown roux hollandaise's main ingredient is actually butter and it uses the emulsification of the egg yolk with the butter and tomato sauce's main ingredient is well tomatoes and it uses a reduction method there are a few traditional sauces that we also need to talk about so those are the mother sauces that we just discussed now let's talk about some butter based sauces beurre blanc beurre is the french word for butter and blanc meaning white so this is a white butter sauce beurre rouge is a red butter sauce compound butter is a mixture of various different elements into cold butter which is then rolled back into a tube shape and refrigerated or frozen this is a really good way of utilizing flavors and having the buttery richness added to a sauce you can use this in a technique we're going to discuss in just a moment called monter au beurre or you can use this as just a simple piece on top of a nice steak and let it melt over with those flavors monter au beurre is a technique as opposed to a butter sauce and what this translates into is to mount with butter and think of this as a finishing technique for a sauce if i have a nice meat glaze that i'd like to serve with my steak maybe one of those sauces like a hunter sauce or a sauce chasseur that i'd like to serve with that i'm going to add just a little bit of butter at the end and swirl it into my sauce this is a technique called monter au beurre or mounting with butter and it will distinctly change the flavor and richness of the sauce that it's added to as well as giving the sauce a nice sheen or shine beurre blanc that's right i don't even speak french and even i know that means white butter but do not let that sort of generic name fool
you this classic french butter sauce is complex elegant and sophisticated whatever that means it's also incredibly versatile and very very easy to make as you're about to see so let's go ahead and get started with our mise en place i know so much french but all mise en place means is simply getting together all your ingredients before you start a recipe and since this sauce comes together so quickly that's definitely something we're going to want to do so for this sauce we're going to need some white wine and if you're keeping score at home i'll be using a sauvignon blanc for this we're also going to need some white wine vinegar or if you want some lemon juice it's going to work exactly the same so it's just a matter of taste we'll also want to do a little bit of finely minced shallot as well as a little splash of heavy cream which is technically optional here but i do like to use it and i'll explain why later and then last but not least we need a whole bunch of high quality unsalted butter and besides making sure that's cold what we want to do before we start the recipe is cut that into cubes because as you're about to see we're going to add that little by little and once that's set we can head to the stove for this very simple two-part procedure the first part being the reduction so what we'll do is we'll add everything except the butter into this pan all right our shallots our white wine our vinegar and our optional cream cream is not included in the original recipe but it does help stabilize this sauce and increases your chances for success not to mention it tastes good and i think the color comes out better but anyway we'll add all that to our pan and bring that up to a simmer on medium-high heat because what we need to do before the butter gets anywhere close to this is reduce this by about 75 percent or so so what i like to do is bring it up to a simmer on medium-high and then back the heat down to medium and basically just stand there and watch it until it's ready and sure if you want you can give it a stir once in a while but that does nothing and one of the few ways to screw up this recipe is to walk away and let this reduction reduce too far and burn if i had a dollar for every one of these i burned when i worked in the restaurants i'd have like eight or nine dollars so it was pretty common but anyway we're just gonna hang out watching that reduce until like i said it gets down to about 25 percent of what we started with which to my eye looks like what we have right about here and at that point we're going to switch our heat all the way to the lowest setting and we'll start whisking in our cold chunks of butter but we want to start slow so i'm just going to add a couple pieces and we'll whisk that in or if you want you could just swirl the pan it really does not matter as long as the butter keeps moving that's the key and then once those first couple cubes have melted in or almost melted in we can go ahead and toss in two or three more cubes and then we'll simply repeat that process until all our butter has been emulsified and by the way you have my apologies if you're getting dizzy from all this close-up stirring i probably should have panned out to a wider shot but then i would have to clean the other side of the stove so never mind but anyway as soon as all that butter has been incorporated our beurre blanc is basically done and by the way you keep hearing me say burr blanc but i believe in french it's correctly pronounced beurre blanc which is exactly why i use the americanized burr blanc i can't
keep a straight face saying beurre blanc but anyway how you say it is up to you you are the patron of your beurre blanc and if you want you can add the butter a little quicker towards the end and once we've incorporated all that cold butter into our sauce we can turn off the heat because our beurre blanc is done and as you can see here thanks to properly emulsifying that butter in you should be looking at a sauce that has a beautiful thick luxurious texture and then above and beyond admiring our viscosity we're also going to want to taste this and add a little bit of seasoning it's definitely going to need a pinch of salt i think we'll also do a little pinch of cayenne and by the way do not judge this by how it tastes on a spoon or on your finger this is one of those sauces that's only really awesome on food so hold off on your final opinions and assuming we're ready for service we'll go ahead and transfer that into a warm sauce boat and possibly finish with some fresh herbs in my case a little bit of chive because that's what i had around and by the way if you're not going to use this right away it will hold but you got to keep it warm okay if it cools down and you try to reheat it it's going to break so if you make this a little bit ahead make sure you keep it in a warm spot somewhere ideally between 80 and 120 degrees but personally i will not be needing to keep this warm because i'm going to go ahead and serve this over a gorgeous piece of roasted sea bass and yes i am probably pouring over a little too much but i couldn't help myself it just looks so good and then i'll finish up with a little more chive and that my friends is one incredibly simple and just absolutely drop dead gorgeous sauce which would not matter at all if this didn't also taste amazing just that perfect balance between the acidity in the wine and vinegar and that fatty richness of the butter just a fantastic sauce experience and while i really think you're going to enjoy this specific version of beurre blanc the different ways you could adapt this are as obvious as they are unlimited okay different types of vinegars and citrus and herbs etc just a lot of ways to really tweak this to your own tastes but anyway that's it beurre blanc no matter how you pronounce it i really do hope you give this a try soon so head over to foodwishes.com for all the ingredient amounts and more info as usual and as always enjoy other traditional sauces let's get the 800 pound elephant out of the room and let's talk about pan gravy and pan sauces it's very easy and very common for the term gravy to be used in place of sauce but technically it is not the same thing a sauce is usually made from stock and a reduction or some kind of thickening ingredient where a gravy is made in the pan most times using what we call fond or those little pieces that are stuck to the bottom of the pan the particulates of meat or the flavoring from the searing process this all goes into pan sauces or pan gravy what we would do in that particular case is we would use the fat that is already in the pan and we would sprinkle it with a little flour in a technique called singer which means to sprinkle we would sprinkle that on top of the fats and fond and this is going to make the roux in the pan and then we would add a stock or a liquid of some kind to it so in essence it is a veloute coulis is something entirely different a coulis is actually a puree of either a cooked or even a raw vegetable or fruit in
this case this is a coulis of red peppers and it is red peppers and a little red wine some seasonings and herbs that have been simmered together and then pureed in a blender until it is absolutely smooth and you put it in a squeeze bottle and just give it a little squeeze on the plate today all bets are off there are tons of different sauces that you can use instead of using just the old traditional classic sauces while they are still used and still good this offers another alternative and another level of flavor such as salsas and relishes and chutneys vegetable juice sauces broths and foams and flavored oils all bring something unique and different to the table that hasn't been seen in a while cream-based sauces have their place but so do light and airy sauces and just refreshing sauces in general so let's discuss some of the axioms involved when using sauces select the appropriate sauce based on the cooking method used to prepare the dish select the appropriate sauce based on the richness of the main ingredient you don't want a sauce that's richer than the main ingredient consider the flavor profile of the foods on the plate and then consider the visual appearance of the dish so let's summarize and discuss some of the takeaways for this module sauce is a french word taken from the latin salsa meaning salted roux slurry liaison and reduction are all methods used to thicken sauces different shades of roux are used for different colored sauces light and cream sauces get a light roux and brown stock uses brown roux slurry is made from various starches not just cornstarch rice potato and tapioca starches are all used butter sauces make a nice accompaniment to meat and vegetables offering a rich creamy texture and sometimes classic sauces can make room for more modern interpretations which bring something unique to the food themselves
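the mother sauce breakdown summarized earlier maps cleanly onto a small lookup table here is a minimal sketch that records only the base liquid and thickener given in the lecture the dictionary name and print format are mine

# the five mother sauces as summarized in the lecture:
# each family is a base liquid plus a thickening method
MOTHER_SAUCES = {
    "bechamel":    {"base": "milk or cream", "thickener": "pale (white) roux"},
    "veloute":     {"base": "white stock (chicken, veal, or fish)", "thickener": "blond roux"},
    "espagnole":   {"base": "brown stock", "thickener": "brown roux"},
    "hollandaise": {"base": "butter", "thickener": "egg yolk emulsification"},
    "tomato":      {"base": "tomatoes", "thickener": "reduction"},
}

for name, info in MOTHER_SAUCES.items():
    print(f"{name}: {info['base']}, thickened by {info['thickener']}")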
|
On_Cooking_Lecture_Videos
|
On_Cooking_Chapter_21_Eggs_and_Breakfast.txt
|
in this module we'll be discussing eggs and breakfast the objectives of this module are describe the composition of eggs purchase and store eggs properly apply various cooking methods to eggs prepare pancakes and griddle cakes plan breakfast menus to provide a variety of breakfast food options for customers and prepare breakfast coffee and tea beverages so let's discuss some of the functions of eggs and what they do for menus and recipes eggs provide texture they provide flavor and structure they provide moisture and nutrition to items eggs leaven and thicken bakery items such as breads or quick breads pancakes and the like eggs bind ingredients such as crumbs on breaded foods eggs enrich and tenderize breads and cakes and eggs extend the shelf life of some baked goods so let's take a look at the different parts of an egg the primary parts of an egg first and foremost are the shell this is made of calcium carbonate which is excreted in the last few minutes of the egg laying process which is why it's in this unique and distinct oval shape the egg yolk in the center is the fat that the chick would feed on as it grows the albumen also known as egg white is the part that surrounds the egg yolk and keeps the egg yolk suspended so it's not sloshing around or moving around it also provides protein as well other parts we have the outer membrane this outer membrane is on the inside of the shell and if you've ever peeled a hard-boiled egg before you probably have seen that outer membrane you've had to break through it to get into the egg once you get through that little bit of outer membrane you can easily peel the egg without any sticking or difficulty or tearing the egg itself the inner membrane resides around the yolk and just like the outer membrane it helps to suspend the yolk and keep it tight inside the egg sac itself the chalazae cords are the connection between the outer and the inner membranes and as a piece of plastic wrap would get all twisted up if you just rolled it around on the table so is the chalaza cord it gets twisted up the tighter it's twisted in a young egg the tighter it is going to suspend that egg yolk in the middle as the egg ages the chalaza cord is going to get weaker and weaker and will allow that yolk to kind of move from one side to the other or to the bottom or to the top the germinal spot is the little spot of red on the egg yolk which is typically where reproduction would happen with the chicken if it were to germinate and actually become a full-blown chicken in most cases it's not in the case of eggs typically you don't have that issue and the air cell which is the little pocket of air that gets bigger and bigger as the egg gets older and older this is one of the reasons why you may see a divot in the bottom of a hard-boiled egg it's because of this air cell getting older and older typically it's thought that older eggs are better for hard boiling or boiling in general because the air cell gives a little more room for the shell to be peeled away eggs come in three different grades grade double a is the freshest egg most commonly used in prepared foods and bakeries when broken the yolk and the white will not spread out much covering a small area the white is firm and thick while the yolk is round and stands up high grade a is the most commonly sold egg and is often seen in grocery stores when broken the contents will spread out a little bit the whites are thinner but still firm and the yolk is
round and stands up from the surface grade b eggs have whites that are thinner and yolks that spread wider and flatter than eggs of a higher grade while still a good quality they don't present as well as others this grade is usually used to make liquid frozen or dried egg products when it comes to eggs size matters eggs come in a variety of different sizes knowing what size egg you're using will help determine the number of eggs to use in a recipe especially if the recipe is based on weight or volume most recipes are written for either large or medium eggs so as a recap in the united states eggs are stored at a temperature of 41 degrees fahrenheit or lower prior to storage eggs are washed with a mild sanitizing solution and warm water to kill any salmonella bacteria fresh eggs can then be stored for four to five weeks past the packing date in the refrigerator in europe however it's illegal to wash eggs and instead farmers vaccinate chickens against salmonella treating it at the source with the cuticle still intact refrigeration could cause mildew growth and contamination so they're kept at room temperature the usda has laid out guidelines for how commercial eggs should be produced processed and stored to make sure eggs remain safe for public consumption the uk has their own regulations but their regulations are quite opposite to those in the us making it illegal for eggs produced in the uk to be sold in the us and those produced in the us to be sold in the uk but if guidelines in both cases are made for food safety and are in the public interest why are they so contrasting to one another which of them is the right practice or are both of them equally good in the case of american eggs the usda requires its egg farms to properly wash and sanitize their eggs before they reach consumers in order to remove any dirt and feces on the exterior of the eggs these contaminants are bacterial in nature and may pose a food safety threat when they enter a kitchen environment for instance one may crack open an egg and then proceed to prepare a sandwich with those bacteria-ridden hands also eggshells are porous so there's a possibility of microbes leaking inside the egg as per the regulation eggs are washed with warm water at least 20 degrees fahrenheit warmer than the internal temperature of the eggs and at a minimum of 90 degrees fahrenheit a detergent that won't impart any foreign odors to the eggs is also used after washing the eggs are rinsed with a warm water spray containing a chemical sanitizer to remove any remaining bacteria and then are dried to remove excess moisture every step of this whole process is scientifically designed and must be executed with extreme care otherwise this procedure can lead to more damage than good for example the drying of eggs is crucial because a dry eggshell acts as a barrier to bacterial intrusion while the presence of moisture makes the shell more permeable exposing it to a pathogen attack furthermore moisture itself may act as a medium for bacterial growth and water provides an excellent vehicle for pathogens such as salmonella the right temperature of warm washing water is important because water colder than the egg could cause the contents of the egg to contract sucking polluted water in through the shell results may get worse if a facility is not careful enough to regularly change the washing water and eggs are left to sit in a dirty bath with such a high risk of bacteria if cleaned improperly
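that washing rule is concrete enough to sketch as a quick check here is a minimal example assuming only the two thresholds quoted above wash water at least 20 degrees fahrenheit warmer than the egg's internal temperature and at least 90 degrees fahrenheit the function and sample values are illustrative not taken from any regulation text

def wash_water_ok(water_f, egg_internal_f):
    # the usda-style rule as described in the lecture:
    # water must be egg temp + 20 f or more, and never below 90 f
    return water_f >= egg_internal_f + 20 and water_f >= 90

print(wash_water_ok(95, 70))  # True: 25 degrees warmer than the egg and above 90 f
print(wash_water_ok(85, 60))  # False: warm enough relative to the egg but below 90 f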
the uk and almost all of the world except the us believe that eggs should not be washed or cleaned before they're sold to consumers as far as dirty eggs are concerned the eu's egg marketing law encourages good husbandry on farms because eggs can't be cleaned or washed it is in the farmer's best interest to produce the cleanest eggs possible as no one is going to buy their eggs if they're dirty this may not sound too convincing to you but there's more to it an egg carries a thin layer of coating on its outer surface called the cuticle this layer naturally protects eggs from almost all contamination and also keeps the eggs fresh for longer periods washing eggs damages most of this protective layer and makes them vulnerable to contamination from pathogens and other microorganisms in the absence of washing an egg is capable of protecting itself naturally and if care is taken while handling the eggs the contamination of kitchen environments can also be eliminated the uk thus believes that a little care is better than the huge risk and cost of washing eggs another thing that makes british eggs and american eggs different from one another is how they're stored in the uk and almost everywhere in europe you can see eggs sitting on unrefrigerated shelves of supermarkets the eu regulation stipulates that eggs should in general not be refrigerated before selling to the final consumer chilled eggs when left outside at room temperature during transfer between the supermarket and the customer's home may sweat and form moisture on the surface facilitating the growth of bacteria and probably their ingression into the egg the american system on the other hand instructs its sellers to refrigerate eggs under 40 degrees fahrenheit in order to decrease the risk of salmonella salmonella isn't a big problem in the uk because european farmers have been vaccinating their hens against salmonella since the 1990s and have received good results vaccination in the us is not a common practice because of all of these differences in how eggs are processed and stored in the two nations british eggs are significantly richer in taste but how would you know british eggs can't be sold in the us and the us ones in the uk so the age-old question brown eggs versus white eggs which one's better for you and which one should i buy the answer is quite simple egg color depends on the breed of the chicken for example a white leghorn chicken lays white shelled eggs while plymouth rocks and rhode island reds lay brown shelled eggs different breeds of chicken will have different colored earlobes and the earlobes are actually the determining factor here red or brown earlobed chickens will lay brown eggs white earlobed chickens will lay white eggs there really is no other distinction between them and i need to be clear that there is absolutely no other distinction between the eggs than the color of the shell they contain the same nutrients they contain the same phytochemicals and lipids and all of those things are identical the only thing that's different is the shell now in many cases brown shells will be a little thicker and that also has to do with the particular chicken and how they lay the eggs there are even green eggs and blue eggs pink eggs and speckled eggs but these usually come from chickens with different elements in their diets as opposed to their breed of chicken as you can see from the picture even the green eggs have shades so the darker color green eggs come from those red earlobe chickens where the lighter color green eggs come from the white earlobe chickens we talked about
we talked about salmonella bacteria and eggs so let's talk a little bit about sanitation eggs require time and temperature control for safety they are what's known as tcs foods time and temperature control for safety foods this means that they have to stay refrigerated and they can't stay out for very long and when they're cooked they have to be used or discarded within a certain length of time inadequate cooking or improper storing may lead to foodborne illnesses such as salmonellosis eggs can be pasteurized at 140 degrees fahrenheit for three and a half minutes this is called a quick pasteurization or a high temperature pasteurization which pasteurizes the eggs with only a negligible effect on the egg itself never leave egg dishes at room temperature egg dishes at room temperature are highly susceptible to bacteria and bacterial growth particularly if there's already salmonella on or in the egg then it can grow at a much more rapid pace if it's left at room temperature food service operations can purchase eggs in many different forms from whole eggs to egg whites only available fresh or frozen to egg yolks available fresh or frozen and something called pwe or pasteurized whole egg this is a liquid egg which usually has citric acid added to prevent what we call the greening effect which we'll talk about here shortly the reason why we may choose one of these products over the other is utilization how we're going to utilize it if i'm making a ton of meringue then i don't necessarily need egg yolks if i'm making custards maybe i don't need the egg whites if i'm doing nothing but scrambling eggs then maybe it's a good idea to buy the pasteurized whole egg additionally there are egg substitutes for those people who are concerned about cholesterol or who are concerned about health issues perhaps maybe they have an allergy to eggs in this case they can purchase something called an egg substitute here's an example on the screen with a soy product this is not necessarily an egg as a matter of fact it even says egg free on the package so be aware of what you are getting and how it's going to be utilized you wouldn't be able to use this as a fried egg product this would be for scrambling or for using as an ingredient eggs are a power pack of nutrients eggs contain vitamin a which is good for eyesight vitamin d which is good for well pretty much everything vitamin e which is excellent for your health and immunity vitamin k for clotting and vitamin b complex for energy production eggs are rich in minerals and contain less cholesterol now than previously and egg whites do not contain cholesterol at all and are often added to dishes to reduce the total fat content [Music] eggs can be cooked in a multitude of different ways dry heat cooking methods include baking such as shirred eggs and quiche wait a second i said shirred i wonder if anyone's ever had a shirred egg before well you might actually have a shirred egg is actually an egg that is baked so a lot of times what you would do is you would take a piece of ham or some bacon or you can even do this in a muffin tin by itself and then crack the egg into it and then put it in the oven and bake it sauteing eggs such as scrambled and omelets pan frying eggs sunny side up over easy over medium over hard basted eggs where you take a little bit of the fat and baste over the egg you can use moist heat cooking methods such as soft cooked or soft boiled eggs which simmer for three to five minutes in the shell hard cooked or hard boiled eggs simmer for 12 to 15 minutes in the shell and then poached eggs simmer for about three to five minutes without the shell at a slightly lower temperature
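to keep those numbers straight here's a small python lookup of the simmer times and the pasteurization figures just quoted the dictionary layout and helper function are illustrative the minutes and degrees are the lecture's

```python
# in-shell simmer times and the quick pasteurization numbers from above;
# ranges are (low, high) in minutes

EGG_TIMES_MIN = {
    "soft cooked (in shell)": (3, 5),
    "hard cooked (in shell)": (12, 15),
    "poached (no shell, slightly lower temp)": (3, 5),
}

PASTEURIZE_TEMP_F = 140.0   # quick / high temperature pasteurization
PASTEURIZE_MINUTES = 3.5

def simmer_range(method: str) -> str:
    low, high = EGG_TIMES_MIN[method]
    return f"{method}: simmer {low} to {high} minutes"

for method in EGG_TIMES_MIN:
    print(simmer_range(method))
```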
there are two recognized types of omelettes the french omelet which is pictured on the left is rolled up into an oval or a cylindrical shape before serving it's made similar to scrambled eggs but you stop stirring when the egg is almost set to allow the egg to knit together at this point you can add cooked internal ingredients and roll the omelet onto the plate the american omelet is folded in half american omelettes cook longer than their french counterparts hence the crust let the eggs cook for a few minutes until the bottom gets a nice golden brown color for the american method there is no pan shaking or swirling around when your eggs are practically done spoon in your fillings and fold in half voila as the french say so why is there a green ring around the yolk sometimes when you hard cook an egg this will come across really plainly you'll see it very clearly sometimes it could be even worse than it looks properly hard cooked eggs are uniformly cooked throughout and have a golden color to the egg yolk a green discoloration covers the yolk when the in-shell eggs are overcooked particularly when they're over boiled this happens because hydrogen sulfide from the egg white combines with the iron in the egg yolk and that gives it that garish green color that's perfectly fine it doesn't hurt anything but as you can see it can transition from a slight green ring around the yolk at the top of the screen to a very pronounced green egg yolk in the middle which is completely green almost all the way throughout this is simply a factor of heat and time you cook it too long for too long a period of time and it will continue to let that hydrogen sulfide in the egg white interact with the iron in the egg yolk the green ring is harmless however and it's safe to eat it doesn't affect its taste all it affects is its color so let's summarize and discuss this section of the module and talk about eggs eggs are versatile they can be served in a number of different ways in the united states eggs should be stored refrigerated at 41 degrees or lower eggs are sorted by grades and sizes and it's important to know the size of an egg when you're dealing with a recipe eggs are a good source of protein and fats while not being nearly as high in cholesterol as portrayed a brown egg and a white egg are from different chickens and it's based on the earlobe color but are essentially the same egg yolks will turn green when overcooked due to a reaction between sulfur compounds from the white and iron in the yolk in this section we're going to continue our discussion and talk about breakfast and brunch [Music] breakfast the most important meal of the day healthy high in fiber fuel to go go go you're going to be full no matter what you order brunch the most indulgent meal of the week who wants a lobster meatball decadent boozy wrapped in bacon and slathered in hollandaise two very different ways to start the day which one are you breakfast or brunch can i get an espresso i'm sorry sir we don't have anything special at the tel wink grill in houston texas it's breakfast all day seven days a week all right then i guess i'll just have an american coffee coffee what this menu lacks in fancy european hot beverages it makes up for abundantly pretty homemade like mama would cook it deep inside breakfast country the law of the land is clear asking for your eggs to get poached is enough to get you shot well no not enough to get shot eggs benedict will get you shot
yeah thank you former marine and 35 year tel wink regular pat forster knows where he stands on the breakfast brunch divide wheat or brioche wheat because i don't know what a brioche is donut or beignet definitely a donut on most weekend mornings breakfast is forster's only meal of the day i come in here i eat i don't eat till tonight this just kills me for the day now it's totally acceptable to eat breakfast alone but brunch that's all about the socializing and the hard stuff and don't forget the jazz trio a few hours after my breakfast my friends abby mary elizabeth steve and calvin crawled out of bed to join me at houston's backstreet cafe for brunch i think breakfast is more like eating to live and brunch is more like living to eat the rich creamy and glistening emulsion that is hollandaise sauce was only one of the hedonistic pleasures at backstreet and later at houston's rdg octopus ceviche banana stuffed french toast grapefruit margaritas mulberry mimosas it's really good he needs another drink i'm gonna go and take a nap [Music] where would we be as a culture if we were all brunch people we'd be fat and slovenly and never getting to work i guess right marc meyer is chef and owner of cookshop in new york city where would we be as a culture if we were all breakfast people i think we'd probably be very boring meyer celebrates the differences between breakfast and brunch by serving both this is something i'm going to come in i'm going to read my paper i'm going to have a cup of coffee i'm done they're sexy these eggs they're fun yeah they are you date these you marry those so can breakfast and brunch get along that's very nice there's room at the table for both so let's talk about some of the different types of breakfasts continental style is familiar to any traveler who's ever stayed in a hotel it's a light morning meal typically consisting of pastries and baked goods fruit toast and coffee it's usually served buffet style and modeled after the european breakfast similar to what one would enjoy in france or the mediterranean having spent time in france and italy i can tell you that this is exactly what you would get for breakfast continental breakfasts emphasize simplicity and focus on foods that can easily be stocked in england and ireland a full-service breakfast is something that's very familiar to us the full-service breakfast in these particular countries often consists of eggs and various meats and commonly more than one type of fried potato such as hash browns or home fries potatoes o'brien or potato pancakes and usually some form of bread or toast here in the southern united states we often include things such as grits and biscuits and sausages buffet-style breakfast is a form of service known as a la francaise buffets are often served at various different places including banquets and catering functions and even some casinos do a breakfast style buffet a buffet often means that you can eat and eat and eat until you're full which is a good bang for your buck they often consist of eggs in varying styles and some kind of meat such as ham bacon sausage small juices and toast with butter and jam plus coffee or tea in many ways this resembles a full service breakfast just served buffet style let's talk about some of the different kinds of breakfast meats that are available bacon is the ubiquitous breakfast meat typically when we think of bacon we think of pork bacon but there is turkey bacon chicken bacon and soy bacon the problem with the latter three is that
in order to get the bacon taste you have to add so many different flavorings and chemicals to it and preservatives to be able to keep the product from turning bad this is one of the reasons why scientists have actually discovered that eating pork bacon in moderation is healthier for you than trying to eat any of the alternatives in excess canadian bacon is another alternative instead of coming from the belly of the pig which causes it to be very fatty in normal cases of bacon the canadian bacon comes from the loin which is on the top of the pig and this gives it this very distinctive almost ham looking quality to it it's a solid long muscle with very little fat in it and once it's smoked and cured it gives it a very unique texture and flavor speaking of hams we've got two different kinds of hams we've got fresh ham which can come in cured and uncured styles and we've got country ham fresh ham is typical of what we would see on the table at christmas or thanksgiving it's generally a cooked and cured ham that's been smoked but you can also get one that is uncured and smoked you can get one that is uncured and raw and cook it like you would a roast country ham is a little bit different country ham is made by the process of salting and desiccation so this salting will make sure that the ham is somewhat dried out you can't get it completely dried out but it's going to reduce its moisture content quite a bit and also it will help impart some of the flavors that are packed in with the salt as well and then it's typically served pan seared with a little red-eye gravy which is a great way of eating this it's a little more salty than its traditional cousin additional breakfast meats we've got breakfast sausage breakfast sausage comes in bulk in links and in patties bulk sausage can consist of things such as breakfast sausage that has sage and tarragon in it it could be chorizo sausage which is a spicy sausage it could be a bulk andouille style sausage so there's a lot of different ways you can look at it a sweet sausage a savory sausage a spicy sausage there's various different ways this comes the bulk sausage is great if you want to make your own sausages by stuffing them it's good if you want to use it as some kind of forcemeat to stuff into another piece of something so the bulk sausage allows you to be able to do that without having to break it out and tear it apart the other alternatives of course are the patties which we're very familiar with these are just bulk sausage formed into patties and then the links which are stuffed into a casing and then smoked breakfast meats are not just about pork we can also have steak skirt steak and hanger steak ribeyes new york strip and lots of different others here in this picture you'll see on top an example of a rib eye and then below that you've got a hanger steak these hanger and skirt steaks are typically more beefy in flavor because they are more exercised meats but they are also a little more stringy because of their nature whereas a ribeye steak is delicious but it's not as exercised so it's not going to be quite as beefy as these others but personally i like a steak in the morning oftentimes you'll hear about lox lox is smoked salmon this is different than hot smoked salmon this is cold smoked salmon so it's smoked at a temperature of about 85 degrees typically for several hours before that it's cured with a salt and sugar mixture that will help draw some of the moisture out of
that particular piece of salmon so really what we're going for in this particular case is the moisture draw that's going to make it almost like the country ham that we talked about earlier but the smoking is going to give it a slight smoky taste this is going to give it a very unique texture as well as opposed to hot smoked salmon which is going to get kind of pale and flaky this is going to retain that beautiful salmon color and it's going to give a little bit of a chewiness to the salmon as well think of it almost like salmon jerky without going so far as being completely dried out and typically you would serve this on a bagel with a smear of cream cheese maybe you add a little onion or some capers or maybe even a little diced cooked egg on there lots of different accoutrements that you can add with that one of my favorite items in the morning are griddle cakes griddle cakes are not just pancakes they come in various different sizes and shapes and styles griddle cakes refer to any batter or dough baked on a heated flat metal surface or griddle these can be savory they could be sweet they could be a mixture of the two the griddle cakes that we always think about are pancakes and waffles this is batter that is usually leavened and cooked quickly on a griddle or in a waffle iron with very little fat the griddle cakes that you see in the picture here the pancakes get a nice fluffy texture by searing on a flat top or in a saute pan and flipping over once the batter starts to bubble then you're going to get a little rise out of it a little puff out of that and that's going to give it that nice soft texture whereas the waffles are going to have that nice soft texture but also a crispiness to the shell on the outside which gives us that extra little bit of texture that will separate it oftentimes when we have something that's too soft we tend to get lost in the texture by having something that's soft and crispy it gives us something else to be able to concentrate on another form of griddle cakes are crepes crepes are thin delicate unleavened pancakes so as opposed to the pancakes that we saw earlier which had a leavening in them whether that leavening is egg white or baking soda or some other kind of leavening these do not have any leavening in them so they're going to retain a very thin nature and they're typically served either as a dessert with fruit or compote of some kind but they can also be served savory one of my favorite dishes is a braised duck shredded and mixed in with a couple of different spices and herbs and then put into the crepe and finished with a little bit of an asian slaw and a drizzle of hot honey mustard on top of it so it's got a really nice savory flavor to it crepes are typically made using what's called a crepiere which is a round sort of flat top and you use a stick or dowel style tool to be able to spread the batter out and make it nice and thin you can also do this in a saute pan but you do need to use non-stick saute pans for this sort of thing there's a bunch of different cereals and grains we're not just talking about the cold cereals that we eat and the big sugary cereals that we like to have in the morning time but there's a lot of different other options as well these are processed breakfast cereals made up of rice corn oats and wheat we've all seen the breakfast cereal aisle going through the grocery store it's an innumerable amount of qualities and quantities of
breakfast cereal from the barely there sugary puffs of things to the almost break your teeth shards of hard granules of wheat germ you can get anything and everything in between these processed cereals are oftentimes served hot or cold typically breakfast cereals like this are not served hot they're served cold with milk but you can also have hot cereals which we'll get into in just a moment and these are typically either ready to eat cereals as in the case of this picture or they can be prepared in the restaurant kitchen itself some of the hot cereal alternatives oatmeal oatmeal is a great breakfast option there's several different options you can see here from the hot oatmeal with the caramelized bananas to the granola bar in the center with fresh fruit inside of it to the almost porridge like consistency of the oatmeal on the end you get a plethora of different options in there so with oatmeal it's not just that you have to eat a package of oatmeal or a hot bowl of oatmeal every day granola is very simple you toast the oats you mix it with honey and then you add in whatever ingredients you want to add into it and press it into a form and let it cool down it's very simple to make granola bars in the south we're very familiar with grits but outside of the south we're probably more familiar with something called polenta grits are made from dried hominy corn and are a traditional southern dish that can be served sweet or savory hominy corn is corn that has been soaked in lye and allowed to puff up and then it's dried and then ground up into these grits the difference between hominy and polenta is they don't soak the corn in lye for polenta they just dry the corn as it is and then grind it up it's almost like cornmeal in that particular case and it's going to have a different flavor to it if you've ever had hominy maybe you had it at the kitchen table at some point hominy is this puffed white corn that almost has a sweetness to it and that's a result of the soaking in the lye breakfast beverages range from sweet to savory and include some of the following coffees tea flavored teas tisanes and herbal teas juices and alcoholic beverages not all coffee is created equal from the light roasted cinnamon color all the way up to the dark roasted italian color the roasting affects the flavor of it the lighter the roast the more green it's going to taste and the darker the roast the more bitter it's going to taste typically here in the united states we tend to drink a medium city roast or full city roast whereas in france and italy they will drink the darker roasts the cappuccino is made with a dark roast and the espresso is made with the dark dark dark italian roast there are several different ways that you can brew coffee decoction is a process used in turkish coffee where you actually boil the coffee with the grounds the grounds in turkish coffee are a lot more finely ground and this will create what's called a crema on top which is the cream or the frothy part that floats to the surface in turkish culture this is called feet and they say that it's not a proper turkish coffee if it doesn't have little feet on top infusion is another method this is typically how you would brew coffee at just below the boiling point an example of this would be a french press where you pour the coffee into the french press you pour the hot water in there and then you let it sit or steep and then you would press the plunger down to put all the sediment down at the bottom and
just have the coffee liquid at the top drip is most commonly done through a machine and it's the most common style here in the united states however as you can see from this picture it's getting more and more common for drip to be done in a slow process so this almost includes the infusion method in there as well this allows for the coffee to be poured through the filter and slowly drip through as opposed to being forced through like some coffee makers do espresso is another option and this is a completely different process where hot water is forced at great pressure through a finely ground coffee this coffee is so finely ground and it's packed extraordinarily tight in there that by forcing water through it it also will extract a lot more of the caffeine in the bean as opposed to some of the other methods and this is why espresso is often thought to be highly caffeinated there's various different tea qualities from the black tea that we're very familiar with to green tea white tea and something called oolong of the black teas that we're familiar with one of the most common is orange pekoe which is what they make things such as lipton with it has an amber brown color and a strong flavor to it the leaves have been fermented so when they are picked they're picked mature washed first and then allowed to air dry and in this air drying process they ferment so you get this nice fermented tea smell to it before that step in the process there's green tea green tea has a yellowish green color with a slightly bitter flavor the leaves have not been fermented but they are essentially the same tea you can do the exact same process the only difference is they have not been fermented they've just been dried out white tea is a variation of green tea made from the young tea leaves that are air dried so they're picked when they're very very young just budding out really and not really getting a ton of chlorophyll yet so they have that white color to them oolong tea has the characteristics of both the black tea and the green teas and it's partially fermented so from no fermentation with the white tea and its young leaves to no fermentation with the slightly older chlorophyll-rich leaves of the green tea the oolong sits in between there and then the black tea is on the other end black teas also have various different variations depending on the kind of tea leaf it is it could be lapsang souchong it could be earl grey it could be various numbers of different kinds of teas that are going to have a little bit different characteristics earl grey is going to have a slightly bitter taste to it and it's going to be a very strong tea as opposed to the orange pekoe that we use here in the united states for lipton and some of the other companies which is going to be a mild tea that works very very well for iced tea flavored teas tisanes and herbal teas are another category flavored teas are made with oils dried fruits spices flowers or herbs in blended or unblended tea so the basis for this is going to be tea it's made with the tea leaf and the flavorings infused and combined with it so this is essentially tea but with flavoring they can be served hot or cold and typically flavored teas don't contain much caffeine because while they do contain the tea they typically are diluted with the fruit juices or the different spices to the point where the caffeine is almost negligible
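since the whole tea discussion really comes down to how fermented the leaves are here's a tiny python sketch of that spectrum the 0 to 1 scores are illustrative placeholders not industry measurements only the ordering comes from the lecture

```python
# the fermentation (oxidation) spectrum described above, from white tea
# (young leaves, unfermented) to black tea (fully fermented); the scores
# are made-up stand-ins just to encode the ordering

TEA_FERMENTATION = {
    "white": 0.0,   # very young, just-budding leaves, air dried
    "green": 0.0,   # mature leaves, dried but not fermented
    "oolong": 0.5,  # partially fermented, between green and black
    "black": 1.0,   # fully fermented, e.g. orange pekoe, lapsang souchong
}

def order_by_fermentation() -> list[str]:
    # python's sort is stable, so white stays ahead of green on the tie
    return sorted(TEA_FERMENTATION, key=TEA_FERMENTATION.get)

print(order_by_fermentation())  # ['white', 'green', 'oolong', 'black']
```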
tisanes are herbal infusions that do not contain any real tea these tisanes and herbal teas are made from fresh or dried flowers fruits herbs seeds or roots chamomile mint ginger ginseng and hibiscus are all forms of tisanes and herbal teas tisanes and herbal teas are usually caffeine free and may aid in digestion as well one of my personal favorites is at the bottom of the screen this is a hibiscus tea and it's very commonly found in specialty markets anywhere that might have caribbean or mexican ingredients and it's a flower that's been dried and really all you need is just that it has a natural sweetness to it so you don't really have to add sugar to it i typically will add just a tiny tiny bit of honey to mine and i'll drink it hot or in the summertime it's great just chilled down and served with ice breakfast juices can range from the sweet such as pineapple orange mango kiwi etc to the more savory juices such as carrot tomato and green juices they're consumed at breakfast when the stomach is empty this will aid in digestion and absorption of essential vitamins and minerals into the body there is a word of caution here though that too much consumption of anything with an acidic content such as orange juice pineapple juice apple juice or tomato juice can be a problem if they're too acidic you really shouldn't drink them without also eating food you can drink them on an empty stomach but you need to eat something right after that because the acid in these will contribute to the acid of your stomach and could cause erosion of the stomach lining in the process especially over time from mimosas to sangrias alcoholic beverages are an excellent addition to a brunch menu they provide a low-cost high profit margin to the restaurant and they can be served chilled served over ice blended infused heated muddled or in any number of other ways you can see various different representations of it here on the page from the simple mojito in the middle of the page on the left to the sangria at the bottom right of the page there's tons of different ways that you can do this at breakfast typically you're not gonna see this but brunch is where you will see your cocktails mostly and you're gonna have a little more time at brunch to sit down and enjoy with friends it's a social event whereas breakfast sometimes can be very quick just to get your day started so let's summarize this section and discuss some of the takeaways brunch is usually more upscale and social whereas breakfast is usually well more utilitarian breakfast comes in many forms continental full service and buffet the typical american breakfast which originated in england usually has some form of potato meat and egg and it can also be served with fruit cereal juices coffee and or tea pancakes and waffles are leavened whereas crepes are not coffee and tea come in a variety of colors and flavors to suit every customer's needs and juices and alcoholic beverages are a mainstay in a typical brunch
|
On_Cooking_Lecture_Videos
|
On_Cooking_Chapter_7_Flavors_and_Flavorings.txt
|
in this module we're going to talk about flavors and flavorings and how to use them the objectives for this module are explain the basic physiology of the senses of taste and smell describe how flavoring ingredients can create enhance or alter the natural flavors of a dish identify a variety of herbs spices salts oils vinegars condiments alcoholic beverages and other flavorings describe the flavor principles of a variety of international cuisines so what is flavor flavor is a combination of tastes aromas and other sensations caused by the presence of a foreign substance in the mouth flavor is to food what hue is to color and what timbre is to music taste is the sensation we detect when a substance comes in contact with our taste buds on the tongue there are five different tastes sweet which we're taught from birth to be attracted to bitter which we're taught to absolutely not be attracted to as a matter of fact typically bitter is thought of as poison salty which is one of those flavors that we naturally crave because our body craves salt sour which is a puckering or acidic taste which gives us a nice enhancement and something called umami umami is a little bit different than the rest because technically it equates to savory more than anything else so really in order to discuss flavor we have to first discuss the human olfactory system now we know what the five tastes are but without the olfactory system we can't determine the flavor of something all we can determine is whether it is sweet salty sour bitter or umami once we place the substance in our mouth the aromas are delivered through the retronasal path olfactory neurons at the top of the nasal cavity are clustered together in the olfactory bulb in short if you can't breathe you can't taste taste buds are specialized sensory organs which are found on the tongue within three different kinds of small bumps known as papillae as well as the back of the throat and the roof of the mouth it's a common misconception that you only taste food in certain areas of the tongue we grew up believing that the tongue has four taste zones one each for sweet sour salty and bitter but this is not the case these tastes along with the fifth taste called umami or savory can be sensed in all parts of the tongue the sides of the tongue are more sensitive overall than the middle and the back of the tongue is more sensitive to bitter tastes there are five factors affecting flavor perception temperature when we think of this we have to think of something like body temperature something that is at body temperature or 98.6 degrees is more readily identified by the body and easily digested however we also have to think about things such as saltiness we want to season the food at the temperature we're going to serve the food consistency the thickness or viscosity of a food is going to play a factor as well if it has a thickness to it it has what we call mouthfeel and that will aid in the flavors contributing all around the mouth the presence of contrasting tastes this is a little bit unique in the fact that we don't want all one flavor the commercial that came out a few years ago said something about sweet and sour soup you wouldn't want just sour soup you want sweet and sour soup the presence of fat often misquoted fat is not flavor fat is a vehicle for flavor fat has no flavor by itself but the presence of fat will give it that mouthfeel again that consistency we talked about earlier and it will also provide a sense of satiety or
fullness to us as well and then finally color color makes a big difference as far as how we perceive something but also in how it tastes a red bell pepper is going to be sweeter than a green bell pepper even though it is the same bell pepper that's just been allowed to ripen a little bit further there are three compromises to the perception of taste age is one of those things that will compromise your perception of taste but also the underdevelopment of it as well it's said that when we're children we have a child-like palate which means we eat very simple foods our taste buds have really not developed at this point but as we age our taste buds will develop and our tastes will change along with that but as we get older in life we diminish some of those taste buds we diminish some of our sense of smell and with that we lose some of our taste health is also a factor if you are very sick or if you have any kind of head cold or if you are compromised in any way you may not be able to taste things very well and smoking smoking is one of those things that people don't think about and a lot of chefs in the industry do smoke but what they do is they will often wash their mouth out before tasting food to be able to offset that effect smoking not only affects your mouth and the area around your mouth the smoke clings to your face and clings to your nose and gets all in the areas that usually are reserved for the smells of food but it also numbs and deadens your taste buds in the process because of the heat that's drawn in through the smoking there are five factors associated with flavoring foods factor number one start with simple combinations of ingredients often the simplest combinations are the best factor number two select fresh foods that are in season whenever possible i like to say take the best ingredient you can find and do very little to it factor number three match the flavoring used to both the ingredient and the cooking technique not all cooking techniques are designed for all different flavors some flavors are more delicate factor number four prep techniques also impact the flavor of the food the size of the cut affects the perception of taste and texture and factor number five the temperature of foods impacts the taste perception of the flavors as we discussed earlier so let's talk about the difference between flavorings and seasonings a flavoring is an item that adds a new taste to the food and alters its natural flavor flavorings include herbs and spices vinegars and condiments most flavoring is done during the cooking process a seasoning is an item added to enhance the natural flavor of a food without changing its taste salt is the most common seasoning most seasoning is done at the end of the cooking process oftentimes when we discuss flavor profiles it's easier to refer to them as if they were musical notes top notes or high notes are the sharpest or the first flavors and aromas that you taste and smell middle notes are the second wave of flavors and are generally more subtle low notes or base notes are the most dominant lingering flavors that last for a while the aftertaste or finish is that final flavor that hangs around after all the other flavors are gone roundness is the unity of the dish's various flavors just like you don't play a single note but you play a chord and depth of flavor is the broad range of the flavors that are used in the dish when working with flavors chefs may rely on classic flavor combinations such as lamb with rosemary and garlic or apple pie with
cinnamon these are classic flavor combinations that have been around for a while the reason they've been around for a while is because well they work chefs can also choose to amplify flavors such as a steak sprinkled with a little kosher salt this will not only bring up the flavor of the natural meats and the natural juices and the maillard reaction that caramelization on the outside of the steak but will enhance those flavors as well without altering them or changing them in any way chefs may layer flavors like adding lemon rind lemon juice and lemon basil to a dressing each one is added at a different time in the process and because of this they're going to have different interpretations in that recipe they're going to come and go at different times in your sensation of taste so let's talk about some seasoning and flavoring guidelines flavorings should not hide the taste or aroma of the primary ingredient let the ingredients sing flavorings should be combined in balance so as not to overwhelm the palate the last thing you want is something that's too salty or too bitter flavorings should not be used to disguise poor quality or poorly prepared products if the soup is scorched it's scorched and it should be thrown away flavorings should be added sparingly when foods are cooked over a long period of time because when foods are cooked over a long period of time there are changes that are going to happen in that food as it cooks if you start off by salting it to the desired salinity early on then the end result may end up being too salty taste and season your foods frequently during the cooking process one of the biggest mistakes chefs in the industry make is not tasting their food frequently enough if at the end of your shift you are still hungry you haven't tasted your food enough international flavor principles are designed around certain criteria the primary ingredients the proteins and starches can change depending on the environment you're in the proteins that are used in asia are different than the proteins that are used in north america religious influences are a huge factor in the flavors of the world some flavors are heavily influenced by the traditions of halal or kosher some flavors are heavily influenced by religious holidays as well the typical cooking methods used in the environment such as wok cooking versus a grill in the south the cooking liquids which are used some regions use fats some regions use oils some regions use various different kinds of fats maybe they use pork fat versus chicken fat for various reasons so the fats are going to play a big factor in that not only in cooking but also in the fats that are used in the dish itself and then the flavorings that they use the flavorings of india for example are a good example of this they're bright and they're colorful and they have a lot of pungency to them whereas the flavors in certain other areas say like lower asia in china are not necessarily as pungent they're very simple and simplistic in their nature let's talk about flavors from around the world every culture tends to combine a small number of flavoring ingredients so frequently and so consistently that they become distinctive of a particular cuisine this has never been more true than in certain regions here in the united states we tend to base our base level flavors on the french mirepoix carrot celery and onion we also have the trinity of the south louisiana area but in greece they have tomatoes and cinnamon or olive oil lemon
and oregano as base level flavors in india depending on where you are in northern india it may be cumin ginger and garlic where in the southern parts of india it could be mustard seeds coconut tamarind and chili in mexico tomato and chili or lime and chili are common ingredients that you would find as base level flavorings in spain the sofrito of spain has tomatoes onions garlic and olive oil all simmered together and then in china ginger scallions and garlic are going to be the base level flavors herbs are any of a large group of aromatic plants whose leaves stems or flowers are used as flavorings they can be used either dry or fresh spices are any of a large group of aromatic plants whose bark roots seeds buds or berries are used as flavoring usually used in a dry form whole or ground condiments are any items that are added to a dish to add flavor for example these include herbs and spices and vinegars it also refers to cooked or prepared flavorings such as the prepared mustards relishes bottled sauces and pickles that we see at the grocery store so let's examine salt salt has been historically one of the most important spices that has ever been around wars have been fought over salt it was used as currency in the past salt was used to preserve food for long voyages on ships so in essence salt was life there are a bunch of different varieties of salt culinary salt or table salt as we refer to it you often see those in the little round containers sitting on the shelves with the girl in the raincoat with the umbrella this is also sometimes referred to as iodized salt because in the early 1900s iodine was added to it to prevent the formation of goiters on the side of the neck rock salt is used predominantly in things such as well making ice cream we add rock salt to ice and it lowers the freezing temperature of it kosher salt is the preferred salt of the culinary industry not only does it have a clean taste but it has a large flaky texture that makes it really easy to detect how much we're using at a time it flows through the fingers very nicely and gives us a tactile sensation sea salt is going to have the flavors of the ocean it's going to have the flavors of the sea it may be a little more briny tasting it may have a little more of an earthy taste to it sel gris is a salt which is also a sea salt style and can be used in many cases it's also typically going to be a little more gray fleur de sel is another example of this and then we have specialty salts such as smoked salts and you might have a himalayan salt the pink salt that we see you may have flavored salts that are mixed together so for instance you might have a salt that is mixed with espresso grounds and it is an espresso salt or you may have a salt that has been flooded with a burgundy wine and allowed to dehydrate and recrystallize into a burgundy salt there are so many different salts out there in the supermarket and i've gotten a lot of comments from you guys about what salts to use and what to do if you over or under salt your food the most common salts i have over here and this is iodized table salt so this is what you typically see in restaurant shakers or maybe you even have one of those on your kitchen table and next to it i have its cousin i would say and that is kosher salt coarse kosher salt now going back to this normal table salt it has iodine added to it and that was in like the 1920s iodine was added to help prevent some devastating iodine deficiencies in the united states this table salt is also really
processed you can tell by the texture here of the salt itself it's very uniform and almost rounded in shape and that means that the salt has undergone some processing that kind of knocks out all of the jagged edges of the salt crystal and makes it really easy to dissolve so if you're cooking something if you're putting it into pasta water it dissolves really quickly now kosher salt kosher salt is something we here in the test kitchens love to use why because you can easily pinch it and sprinkle it on your foods it has a coarse texture to it but it's almost an irregular texture there are some fine particles of salt but there's also some bigger crystals here now in our recipes and in a lot of recipes they'll call for coarse salt and it's important that you take note of that and if you don't have coarse salt in your kitchen and you want to substitute this table salt here or a finer salt you need to be mindful and not use as much so if a recipe calls for one teaspoon of coarse kosher salt and you're using table salt here you probably want to start with at least half so one more thing here got a lot of questions about over and under salting and my best piece of advice for you would be to season as you go season a little bit taste it does it need more salt maybe you should hold off that's the best piece of advice season as you go so these are the two most common salts i would say that we use in the test kitchens but out there there are a lot of other salts that we like to call finishing salts and finishing salts are kind of broken up into two different categories there's either sea salts or there are mined salts and across the world i would say that the production of salt comes half from the sea and half from mining it out of the earth now this salt i have here this is actually a sea salt it's one of our favorite finishing salts here in the test kitchen it's called maldon salt it comes from the south coast of england and it has the biggest most beautiful flakes to it so it's extremely coarse and that's part of the reason why we love it because it does something that kosher and table salt don't do it gives texture to things so if you're sprinkling this on a salad or if you're finishing it over some grilled steak slices this gives a wonderful crunch which is kind of an unexpected texture but really delicious and also it gives off a little bit of an extra flavor so not only is it salty but it has its own flavor to it because it picks up the flavor from its environment now a cousin to this sea salt here is over here this is called sel gris meaning gray sea salt and this comes from the west central part of france and you can see it's characteristically gray because it picks up a lot of that color from the environment in which it's harvested now it is harvested in what they call these salt ponds and that is just an area or a marsh i would say of salty water and what happens is evaporation takes place and all of the salinity all of the salt starts to crystallize now what floats to the top and forms as the purest crystals of salt is the cousin of sel gris which is fleur de sel meaning the flower of the salt and you can see i have two versions here a very fine version of fleur de sel and i also have a coarse version of it here you can see there's a big difference in the color it's much brighter and whiter in color and the crystals are very beautiful on this coarse version i have over here now this sel gris is actually the salt that kind of settles down in these evaporation pools so it's manufactured in the same way but one's kind of on top and one is on the bottom
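going back to the substitution tip from earlier in this segment here's a minimal python helper that encodes it the 0.5 factor is simply the video's start with at least half advice not a precise density conversion so you'd still season to taste

```python
# coarse kosher salt is flakier and less dense than fine table salt, so
# when swapping table salt into a recipe written for kosher, start with
# about half the volume and adjust from there

def kosher_to_table_tsp(kosher_tsp: float) -> float:
    """starting-point conversion from coarse kosher salt to table salt."""
    return kosher_tsp * 0.5

print(kosher_to_table_tsp(1.0))  # 0.5 tsp table salt, then taste and adjust
```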
in contrast to these sea salts we have some other salts here that are colored salts and they're called colored salts because they actually pick up the color from the environment in which they're harvested so this is a hawaiian black salt and the black comes from the lava that it is surrounded with so it's actually kind of mixed in with the salt as it's harvested and that gives a really distinct flavor you get a lot of mineral compounds and a lot of subtle earthy flavors to go along with it this is himalayan pink sea salt it sounds kind of crazy because we're in the himalayan mountains right and where's the sea around it but actually ancient seas have evaporated over time and all that salt is left behind and it picks up this wonderful hue this pink color from the surrounding earth now from colored salts we're going to move on to flavored salts over here there are countless flavored salts and these are actually combined with other ingredients not necessarily naturally occurring in the environment in which they're harvested so this is a green tea salt here this is a truffle salt but most popular i would say in the united states is celery salt which people typically use in their bloody marys lots of different flavored salts out there in the marketplace these obviously have much more flavor now the last salt i want to talk about and this is for all of the fans out there who love to pickle and preserve this is pickling salt and you can see pickling salt is very fine it's very bright it shines and it is a highly refined salt so it's close to 100 percent sodium chloride very very pure and it dissolves easily since it's so finely ground into your pickling or brining solution so there you go guys i hope i've helped you out here with a very simple glossary of all the different salts what you should do with them how you should season as you go and there you go guys enjoy salt away and of course if you have any kitchen conundrums reach out to us using the hashtag kitchen conundrums we would love to hear from you so there are some rules when you're dealing with salting food that you have to be aware of salt foods sparingly to start salt foods in small increments as you cook cooks can perspire in the kitchen as well and therefore need more salt in their own bloodstreams and when you go to taste the food it can actually trick you into thinking it doesn't have enough salt in it so you have to be very very careful don't add too much salt to a dish to satisfy your own craving for salt because of this and customers can always add more salt at the table there are a rare select few restaurants that don't include salt shakers on the table but they are just that rare indeed more salt can be needed in a dish that will be served cold because serving temperatures are going to affect the saltiness of something if you serve it at a hot temperature you want to season it at a hot temperature and vice versa oils are a type of lipid that remains liquid at room temperature cooking oils are refined from various seeds plants and vegetables so the term vegetable oil is an interchangeable term it can mean canola oil safflower sunflower or any other kind of vegetable oil that is produced from a vegetable so let's look at some of the most common forms of vegetable oil canola was originally a trademark name of the rapeseed association of canada and the name was condensed from can from canada and ola from vegetable oils
like mazola but now is a generic term for edible varieties of rapeseed oil in north america and australia all olive oils are unique besides being able to come from several different varietals of olives the growing conditions also have an effect on their flavors olive oils are broken down into categories extra virgin extra virgin olive oil is a virgin olive oil that has a low acidity level and is made from healthier ingredients it's generally considered high quality olive oil and is most commonly used for its divine taste extra virgin olive oil is the healthiest and most flavorful grade for several reasons its acidity level is the lowest at less than about one percent and its taste is fruity and light allowing it to be used uncooked in salad dressings soups pastas and as a dipping oil it's naturally extracted no chemicals or preservatives are added and it comes from the first pressing of the olive fruit it is a deep yellowish green in color premium extra virgin olive oil can also be purchased and is everything that normal extra virgin is taken up a notch in quality as well cold pressed olive oil is the top of the line in terms of price and flavor virgin olive oil is extremely similar to extra virgin but of slightly lesser quality it's also naturally pressed and extracted but it contains a slightly higher acidity level generally no higher than two percent its taste remains high quality albeit less fruity and it's also great to use in dressings marinades pastas etc virgin olive oil is commonly used for roasting vegetables or placing over corn on the cob while grilling it is lighter and more golden in color virgin olive oils can be broken down into two more specific categories fine and semi-fine fine virgin olive oil has an acidity of no more than 1.5 percent while the semi-fine would be no more than 3.3 percent fine virgin olive oil is a good option for shoppers on a budget who desire the taste of extra virgin semi-fine virgin olive oil is of lesser quality and should not be consumed raw refined olive oil this category of olive oil gets its name because of the additional refining techniques that are used to press the olives and create the oil these additional refining methods are necessary in order to create an oil that is fit for human consumption oils that are over 3.3 percent in acidity are generally lacking in quality and go through this process refined olive oils lack the naturalness of virgin olive oils and are often supplemented with chemicals and preservatives to develop a strong taste that is good for cooking and adding flavor to bland foods it has an acidity level of less than 0.5 percent giving it a long shelf life it lacks an overall quality of taste and color and is much lighter in color pure or regular olive oil previously known simply as pure olive oil is kind of the middleman of olive oil it blends qualities of both virgin and refined olive oils around 85 percent is refined and 15 percent is virgin its acidity level can be no more than 1.5 percent and it's a good option for frying and searing although it has some taste and is of slightly better quality than refined oil it should not be consumed raw pomace instead of being made from the pressed olives olive pomace oil is made from the leftovers the paste-like material that is created as a byproduct during pressing this oil is extracted from the pomace or pulp of the fruit with the use of chemicals and added heat it's refined and commonly used only by restaurants with an added amount of regular olive oil for additional taste
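since the lecture sorts these grades mostly by free acidity here's a small python sketch of that ladder the cutoffs are the percentages quoted above and note they overlap a little in the lecture fine virgin tops out at 1.5 percent inside virgin's two percent so treat the ordering as illustrative real grading also weighs taste and production method

```python
# classify an olive oil by the free-acidity percentages this lecture
# quotes; a sketch only, since real grading also considers flavor and
# how the oil was pressed

def olive_oil_grade(acidity_pct: float) -> str:
    if acidity_pct < 1.0:
        return "extra virgin (under about 1 percent)"
    if acidity_pct <= 1.5:
        return "fine virgin (up to 1.5 percent)"
    if acidity_pct <= 2.0:
        return "virgin (up to 2 percent)"
    if acidity_pct <= 3.3:
        return "semi-fine virgin (up to 3.3 percent, not for raw use)"
    return "needs refining (over 3.3 percent)"

for acidity in (0.4, 1.4, 1.9, 3.0, 4.2):
    print(acidity, "->", olive_oil_grade(acidity))
```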
the grade of olive oil you use should match up with the occasion and reason for which you're using it virgin olive oils are meant primarily for uncooked preparations of salad dressings sauces and dipping oils while regular and refined types are better for cooking and adding flavor to foods each different type of olive oil differs in price as well knowing what grade of olive oil to look for will make both the trip to the grocery store and the preparation of your meals more efficient and beneficial to your health and taste buds two of the cleanest oils sunflower and safflower oils are often used when you're looking for a clean taste for cooking on high heat sunflower oil is extracted from sunflower seeds while safflower oil is extracted from safflower seeds both types of oil are rich in unsaturated fats hence they're healthier to use as a cooking oil cottonseed and grapeseed oils offer a variety of clean tastes and a high smoke point soybean oil is one of the fastest growing oils in the united states and around the world it is clean slightly darker and has a slightly more pronounced flavor than some of the others because of the high smoke point these oils are turning up in more and more frying applications peanut oil also known as groundnut oil or arachis oil is a vegetable oil derived from peanuts the oil has a strong peanut flavor and aroma it's often used in american chinese south asian and southeast asian cuisines both for general cooking and in the case of the roasted oil for added flavor unrefined peanut oil has a strong roasted peanut taste and is used as a finishing oil refined peanut oil is used for frying sesame oil is an edible vegetable oil derived from sesame seeds besides being used as a cooking oil it is used as a flavor enhancer in many cuisines having a distinctive nutty aroma and taste the oil is one of the earliest known crop based oils worldwide mass modern production is limited due to the inefficient manual harvesting process required to extract the oil sesame oil is a key ingredient in many asian african and middle eastern cuisines and is used in both light and toasted forms this graph shows the level of monounsaturated fats in popular oils from the high end in olive oil to the very low end in sunflower oil basically what this shows you is that olive oil is rich in the heart-healthier monounsaturated fats while sunflower oil has far less of them so we talked about oils but now we need to talk about fats fats for cooking come primarily from animal sources and are solid at room temperature cooking fats include butter lard duck fat chicken fat called schmaltz and shortening although it is a vegetable product it has similar properties to fats [Music] hey everybody thomas joseph here with another kitchen conundrum i've received a lot of questions regarding cooking oils or cooking fats and which ones to use for what while it can be very very confusing i'm going to show you an easy way a guide actually as to which oils should be used at what times and temperatures so over here we have oils that you should use over moderate heat and what that means is the smoke point of the oil should be under 375 degrees so what is smoke point smoke point of a fat or an oil means the point at which the oil starts to break down and it releases these funny little things called free radicals and it gives the oil a bitter acrid taste so you really want to try to avoid that and over here we have oils and fats that can be used at a high heat and that means anywhere from 375 up to about 500 to 510 degrees
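the video's two buckets reduce to a simple rule of thumb sketched below in python the 375 and 510 degree figures are the ones just quoted and the example oils are the ones named in this segment

```python
# pick an oil category from the pan temperature, per the video's two
# buckets: unrefined oils under 375 f, high-heat oils up to about 510 f

MODERATE_HEAT_MAX_F = 375
HIGH_HEAT_MAX_F = 510

def pick_oil(pan_temp_f: float) -> str:
    if pan_temp_f < MODERATE_HEAT_MAX_F:
        return "unrefined oil is fine (extra virgin olive, unrefined avocado)"
    if pan_temp_f <= HIGH_HEAT_MAX_F:
        return "use a high-heat oil such as safflower"
    return "too hot: past the smoke point of common cooking oils"

print(pick_oil(325))  # gentle saute of vegetables
print(pick_oil(450))  # searing a pork chop
```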
so i'm going to start off here using the oils that should be used over moderate heat and i'm going to show you a simple saute saute is at a lower medium temperature not a high high temperature so you can use things like extra virgin olive oil so into the pan here i'm going to add about a tablespoon or two of oil and you can see it's dancing around the pan which is perfect at the smoke point you'll actually see the oil start to billow out smoke and that's when it really changes so if you're at that point you should start over and heat your pan to a lower temperature so into this pan here i'm just going to saute some zucchini a little bit of bell pepper and some corn some salt and pepper and something like this an easy saute is a perfect way to use these sorts of fats because we're not really going over a 375 degree temperature with this category of oils all of them here are unrefined so an extra virgin olive oil is basically an unrefined olive oil avocado oil this dark color you can tell it's unrefined as you refine oils the temperature or the smoke point goes up and the highest heat oil we have here is safflower oil and this can go up to 510 degrees so this is what we love to use whenever we're searing meats now i'm going to crank this pan up to the hottest it can get and i'm going to add a little bit of safflower oil to the pan so my pan is nice and hot the oil is shimmering that means it's at temperature and i'm going to sear my pork chop here you can hear the heat you can actually hear the heat so i'm going to give the pork chop a little bit of a flip and look at that the nice golden brown crust and that's because we're searing at such a high high temperature so now that you know which oils to pick up at the supermarket the proper way to store oils would be in a cool dark place a place that's free of moisture and away from any heat source you don't want to store a bottle of oil next to the stove top that's a bad idea because it will affect the rate of rancidity so away from the stove in a dark place capped off so that there's no air and if it does come in one of these clear glass jars either decant it into something that's darker or store it in your pantry so that there's no chance your oil is going rancid there you go kitchen conundrums solved vinegar has been around since the ancient babylonians over 5000 years ago from ancient egypt to china japan and the mediterranean vinegar has evolved over time using many ingredients as its base from beer and figs to rice apples grapes grains and fruits wine vinegar is either made from red or white wine cooks use vinegar for many purposes such as pickling deglazing pans marinating meats and making sauces and it's found in certain desserts even red wine vinegar is commonly used in the mediterranean countries being a common staple in most french homes there are several different qualities of red wine vinegar the longer the wine vinegar matures the better it is most red wine vinegars can be matured up to two years while white wine vinegar is a moderately tangy vinegar that french cooks use to make hollandaise and bearnaise sauces vinaigrettes soups and stews it's also an excellent base for homemade fruit or herb vinegars wine vinegars also include such types as champagne vinegar sherry vinegar and single varietal vinegars such as pinot noir vinegar if you've ever been to a place with fish and chips on the menu malt vinegar is a good condiment to know and love malt is the term for
vinegar has been around since the ancient babylonians over 5000 years ago from ancient egypt to china japan and the mediterranean vinegar has evolved over time using many ingredients as its base from beer and figs to rice apples grapes grains and fruits wine vinegar is made from either red or white wine cooks use vinegar for many purposes such as pickling deglazing pans marinating meats making sauces and it's even found in certain desserts red wine vinegar is commonly used in the mediterranean countries being a common staple in most french homes there are several different qualities of red wine vinegar the longer the wine vinegar matures the better it is most red wine vinegars can be matured up to two years while white wine vinegar is a moderately tangy vinegar that french cooks use to make hollandaise and marinade sauces vinaigrettes soups and stews it's also an excellent base for homemade fruit or herb vinegars wine vinegars also include such types as champagne vinegar sherry vinegar and single varietal vinegars such as pinot noir vinegar if you've ever been to a place with fish and chips on the menu malt vinegar is a good condiment to know and love malt is the term for germinated and dried grains of barley used to add a rich nutty toasty flavor to some of our very favorite things like beer and milkshakes and the vinegar we copiously shake onto our fish and chips which is made with beer batter and therefore contains malt what's really great about malt vinegar is that it's made directly from ale just like red wine vinegar is made from wine when the booze is ready it's fermented until it's vinegar the result is a milder sweeter and more complex flavor range than plain white vinegar which is just acid and water besides being a versatile condiment in the british fried foods world malt vinegar makes a great gastrique or simple salad dressing mixed with olive oil and fresh herbs white vinegar is a kitchen workhorse aside from cooking with it it's commonly used as a cleaning product but it's also used a lot for cooking it's not fancy and it's harsher on the palate than other vinegars but its clean neutral sharpness also makes it a perfect base for adding your own accents such as honey herbs spices and mustards to create vinaigrettes marinades or pickling liquid you'll taste white vinegar in ketchup where it perfectly balances the overall sweet and savory flavors of the tomato and it's summer perfection when zipping up a creamy potato salad apple cider vinegar is a pantry essential it works particularly well in pork dishes for marinades or added to a chutney with recipes that include apples or cabbage or in a smoky sweet barbecue sauce it's great in vinaigrettes and can be substituted for red wine vinegar in a pinch a few quick squirts will also add balance to your soup or stew made from fermented apple cider this vinegar adds a fruity slightly sweet flavor to whatever it's added to it's light brown in color and can be found as a clear liquid or in a cloudy unfiltered version they are interchangeable in cooking the unfiltered is more likely to be unpasteurized or organic and some people prefer to use unfiltered simply because it's less refined a relative newcomer to the vinegar market having gained popularity in japan in the mid-1800s to supply the burgeoning popularity of sushi in that country this vinegar created from fermented rice has a mild acidity and slightly sweet flavor this is the mellowest on the list of all the vinegars and is great for seasoning any dish where you need some tang but don't want your vinegar to overpower it when purchasing rice vinegar you'll see regular and seasoned options the seasoned rice vinegar is sweeter due to the added sugar and salt and can even be used as a dip if you have room for only one go for the unseasoned you can adjust flavors as needed per use once you bring this guy into rotation you'll be reaching for it all the time black chinese vinegar also known as chinkiang vinegar has a woody smoky flavor and is traditionally made from glutinous rice or sorghum black vinegar is a common sour component in any number of dishes found in southern china but it's also widely known in the united states as a dipping sauce for dim sum and a common meat marinade flavored vinegars are typically a neutral vinegar such as cider with the addition of a flavor component such as strawberries raspberries peaches tarragon and so on one of the most complex and flavorful vinegars balsamic is also versatile it can be used as an ingredient in a recipe such as marinades soups and braised dishes in full flavored vinaigrettes or reduced to a sauce more refined aged versions can be drizzled over fruit or cheese balsamic's sweet complex flavors
develop as the dark reddish vinegar is fermented from grape must which is freshly crushed grapes unlike wine vinegar which is made from fermented wine traditional balsamic is barrel aged for a minimum of 12 years and up to 25 older vinegars will be more flavorful and thicker as they become further concentrated while aging they'll also be more pricey it's easy to feel stumped staring at a grocery store shelf full of a variety of prices the you get what you pay for wisdom should apply here this doesn't mean that you have to break the budget a younger bottle around fifteen dollars is perfect for day-to-day use in your cooking for the real deal look for bottles labeled aceto balsamico tradizionale as there is no regulation around the word balsamic alone with the vinegar style true balsamic is certified by the italian government and hails from the modena or reggio emilia regions of italy also check the label to ensure you see grape must as an ingredient some cheaper balsamics will add caramel color to regular wine vinegar modena italy is one of only two places permitted by european law to make traditional balsamic vinegar here at the san delino villa they've been brewing up this black gold for the last 200 years from one ingredient and one alone grape juice its coveted sweet syrupy taste depends on two things grapes with a very high sugar content like this trebbiano variety and perfect timing the key is to pick the grapes late in the season when their sugar content is at its highest machines can't be trusted to choose the ripest grapes romano speciali on the other hand has a seasoned eye for the sweetest fruit and he's a connoisseur of fine vines romano and his mate julio take a day to clear the vines of 300 kilograms of grapes to maintain its exclusivity italian growers produce less than 10 000 liters of traditional balsamic vinegar per year and to ensure only the ripest best quality grapes get through most of the work on this premium product is done by hand but manually removing grapes from their stalks would take too long the solution is approaching its 100th birthday the diraspatrice or de-stemming machine as spiral blades spin inside a perforated drum grape juice trickles away through the holes while stems are spat out on the floor the only problem is the juice comes out mixed with grape pulp and pips in the distant past it was crushed under the feet of children but bouncing bambinos have been usurped by this mechanical press in less than an hour julio has transformed 100 kilos of grapes into 70 liters of sweet juice known as must this is where it gets tricky as soon as he extracts the must airborne yeast starts to convert the sugars into alcohol davide leonardi is the third generation of leonardis to run this estate and he doesn't want his grape juice turning into wine davide needs to delay the fermentation process so he fires up the hob to kill the yeast and concentrate the sugar in the must after 24 hours about half the liquid has evaporated and the must has a rich caramel color the key to making perfect balsamic vinegar is getting the sugar content spot on davide can't trust his taste buds alone so to make sure he's right on the money he uses an instrument called a must saccharometer it measures the density of the liquid the denser the liquid the higher the sugar content when the sugar hits 30 percent davide moves on to the next stage so far he's worked hard to kill off the yeast but you can't make balsamic vinegar without bacteria so
next he pours the cooked grape juice into this so-called mother barrel it contains special acetic acid bacteria from the previous year's production they immediately set to work the challenge for any producer is to create vinegar with a complex sweet sour taste the only way to achieve this is to mature the grape must for at least 12 years wine is matured in underground cellars to protect it from temperature changes which would turn it into vinegar which of course is just what davide wants so he matures his grape juice in the attic where seasonal changes vary the temperature and help form vinegar davide's challenge is to concentrate the taste and thicken the consistency he achieves this by transferring the must into a series of smaller and smaller barrels made from different types of wood each wood adds a different flavor whilst openings allow oxygen in to sustain the bacteria as they turn the sugar into vinegar every year more than a tenth of the liquid evaporates left alone the vinegar would eventually solidify so davide tops up each barrel in turn with liquid from the next barrel up then once every winter he decants one liter from each of the smallest most mature barrels ready to be bottled but modena's vinegar consortium won't let it go to the world's high end delicatessens until it gets their seal of approval expert tasters check the vinegar's flavor and aroma and hold it in front of a candle flame to check its color and viscosity if it passes the test they bottle it in specially designed 100 milliliter flasks depending on its age and provenance balsamic vinegar can fetch up to four thousand dollars a liter that's the equivalent of two whole cases of vintage champagne and a couple of bags of nuts pickles are foods preserved in a solution like brine or vinegar these are often vegetables of various kinds but any water-rich food can be pickled including fruits and even meats relishes are condiments generally composed of chopped vegetables with other ingredients potentially added relishes are typically not cooked but raw when the pickling liquid is added southern chow chow is a good example when we say relish in the u.s it usually means specifically a pickle relish pickles are chopped up seasoned and used to flavor other dishes however technically a relish does not require pickles chipotles in adobo are smoked jalapeño chilies canned in a red sauce that typically contains tomato puree and a variety of seasonings such as paprika salt onions garlic chili and oregano they're used for making sauces chipotle mayonnaise and rubs as well as other recipes the smaller morita jalapeño chili is usually used for making chiles in adobo rather than the larger meco chili the term chutney refers to a number of sauces or the dry base for such sauces native to the indian subcontinent forming an integral part of the cuisines of the indian subcontinent chutneys may be realized in such forms as a tomato relish a ground peanut garnish or a yogurt cucumber spicy coconut spicy onion or mint dipping sauce an offshoot that took root in anglo-indian cuisine usually uses a tart fruit such as sharp apples rhubarb or damson plums made milder by an equal weight of sugar usually demerara or brown sugar to replace the jaggery used in some indian sweet chutneys vinegar was added to the recipe for english style chutney which traditionally aims for a long shelf life so that autumn fruits can be preserved for use throughout the year as with jams and jellies and pickles or
else to be sold as a commercial product indian pickles use mustard oil as a pickling agent but anglo-indian-style chutney uses malt or cider vinegar which produces a milder product that in western cuisine is often eaten with hard cheeses or with cold meats and fowl typically in cold pub lunches to the uninitiated fish sauce might seem like an odd concept like soy sauce it's both a condiment and an ingredient and it's full of glutamates that enhance flavor in food but while soy sauce is made from comparatively mild tasting fermented soybeans and grains fish sauce gets its signature flavor from something far more pungent fermented anchovies manufacturing methods vary among producers but the basic process is the same fresh whole anchovies are layered with sea salt and left to ferment in vats for at least 12 months over time the fish breaks down and the salty liquid that forms is collected and filtered before bottling it's strong stuff with an intense aroma but there's a reason this pungent sauce is a critical component of many asian cuisines and is becoming commonly known in american kitchens it boasts a rich savory taste and has a brininess that brings out depth and flavor in everything from dipping sauces and soups to stir fries and marinades black bean garlic sauce is made by grinding salted fermented black soybeans with ginger and other seasonings rather than using whole fermented black beans and hand chopped garlic to make the dish this sauce comes ready to use black bean garlic sauce can be used in stir fries steamed dishes and especially seafood dishes typically when cooking with black beans we like to use whole fermented black beans that said this pre-made black bean garlic sauce is a convenient jarred option that adds additional flavor to a dish ketchup is a sauce that is used as a condiment although original recipes used egg whites mushrooms oysters grapes mussels or walnuts among other ingredients the unmodified modern term refers to a tomato based ketchup tomato ketchup is a sweet and tangy sauce made from tomatoes sugar and vinegar with seasonings and spices the spices and flavors vary but commonly include onion allspice coriander cloves cumin garlic and mustard and sometimes celery cinnamon and ginger the market leader in the united states with over 60 percent of the market share and in the united kingdom with 82 percent of the market share is heinz hunt's has the second largest share in the u.s with about 20 percent in some of the uk ketchup is also known as tomato sauce a term which elsewhere means a fresh pasta sauce or as red sauce especially in wales tomato ketchup is often used as a condiment with dishes that are usually served hot and may be fried or greasy french fries hamburgers hot dogs chicken tenders tater tots hot sandwiches meat pies cooked eggs and grilled or fried meats ketchup is sometimes used as the basis for or as one ingredient in other sauces and dressings its flavor may be replicated as an additive flavoring for snacks such as potato chips mustard is a condiment made from the seeds of the mustard plant of which there are multiple varieties the whole grain ground cracked or bruised mustard seeds are mixed with water vinegar lemon juice wine or other liquids salt and often other flavorings and spices are added to create a paste or sauce ranging in color from bright yellow to dark brown the taste of mustard ranges from sweet to spicy commonly paired with meats and cheeses mustard is also
added to sandwiches hamburgers corn dogs and hot dogs it's also used as an ingredient in mayonnaise and many dressings glazes sauces soups barbecue sauces and marinades as a cream or as individual seeds mustard is used as a condiment in the cuisines of india and bangladesh the mediterranean northern and southeastern europe asia the americas and africa making it one of the most popular and widely used spices and condiments in the world it's also a popular accompaniment to hot dogs pretzels and bratwursts in the netherlands and northern belgium it is commonly used to make mustard soup which includes mustard cream parsley garlic and pieces of salted bacon mustard as an emulsifier can stabilize a mixture of two or more liquids such as oil and water added to a hollandaise sauce mustard can inhibit curdling common types of mustards include dijon originally from france english mustard a notably stronger and thicker mustard french mustard not to be mistaken for french's mustard which is a dark brown mild and tangy sweet mustard actually invented by the colman's company in the united kingdom american or yellow mustard which is very mild and bright yellow from the added turmeric powder spicy brown or deli mustard consisting of coarsely ground seeds with the addition of horseradish beer mustard which as you can imagine has beer added to it whole grain mustard also known as granary mustard honey mustard hot pepper mustard fruit mustards hot mustards spiced mustards and spirited mustards spirited mustards being made with whiskey brandy or cognac and sweet mustards from bavaria sweetened with sugar applesauce or honey this is the origin of today's modern honey mustard as we know it we talked briefly about soy sauce earlier soy sauce also spelled soya sauce in many parts of the world is an east asian liquid condiment of chinese origin traditionally made from a fermented paste of soybeans roasted grain brine and aspergillus molds soy sauce in its current form was created about 2 200 years ago during the western han dynasty of ancient china and spread throughout east and southeast asia where it is used in cooking and as a condiment soy sauce can be added directly to food used as a dip used to season meat or be added to an item for flavor in cooking it's often eaten with sushi noodles and sashimi it can also be mixed with ground wasabi bottles of soy sauce can be found on dining tables in china japan korea and all over the world as common seasonings the taste of soy sauce is predominated by saltiness followed by moderate umami a sweet taste and finally a slight bitterness which is hard to perceive due to the masking effect of the other tastes the overall flavor of soy sauce is a result of the balance and interaction among different taste components the saltiness is largely attributed to the presence of NaCl or common table salt in the brine tamari made mainly in the chubu region of japan is darker in appearance and richer in flavor than its soy sauce cousin it contains little or no wheat wheat free tamari can be used by people with gluten or celiac intolerance tamari is more viscous than its cousin less than 1.5 percent of soy sauce produced in japan is tamari it is the original japanese soy sauce its recipe is closest to the soy sauce originally introduced to japan from china technically this variety is known as miso-damari as this is the liquid that runs off miso as it matures the japanese word tamari is derived from the verb tamaru that
signifies to accumulate referring to the fact that tamari was traditionally a liquid by-product made during the fermentation of miso japan is the leading producer of tamari tamari shoyu is often used for sashimi oftentimes other varieties of soy sauce sold for sashimi are inaccurately referred to as tamari shoyu in japan the label must by law clarify whether or not it is actually tamari tahini is a condiment made from toasted ground hulled sesame it is served by itself as a dip or as a major ingredient in hummus baba ganoush and halva tahini is used in the cuisines of the levant and eastern mediterranean the south caucasus as well as parts of north africa tahini based sauces are common in middle eastern restaurants as a side dish or as a garnish usually including lemon juice salt and garlic and thinned with water hummus is made from cooked mashed chickpeas typically blended with tahini olive oil lemon juice salt and garlic tahini sauce is a popular topping for meat and vegetables in middle eastern cuisine a sweet spread version of this or sweet tahini is a type of halva sweet it sometimes has mashed or sliced pistachio pieces sprinkled inside or on top and is usually spread on bread and eaten as a quick snack wine is a naturally fermented fruit juice the most common form of wine is made from grape juice however wines can be made from peaches strawberries apples and other fruits the winemaking process depends on choices made by the winemaker such as fermentation levels and tannins so let's discuss some of the different types of wine from red wine to white wine and rosés sparkling wine is a wine with significant levels of carbon dioxide in it making it fizzy while this phrase commonly refers to champagne the european union countries legally reserve that term for products exclusively produced in the champagne region of france sparkling wine is usually either white or rosé but there are examples of red sparklings such as the italian brachetto and lambrusco and the australian sparkling shiraz the sweetness of sparkling wine can range from very dry brut styles to sweeter doux or dolce styles the sparkling quality of these wines comes from the carbon dioxide content which may be the result of natural fermentation either in the bottle as with the traditional method or in a large tank designed to withstand the pressures involved this is known as the charmat process or the result of simple carbon dioxide injection in some cheaper sparkling wines port wine from portugal is a fortified wine produced with distilled grape spirits exclusively in the douro valley in the northern provinces of portugal it's typically a sweet red wine often served as a dessert wine although it comes in dry semi-dry and white varietals fortified wines in the style of port are also produced outside of portugal including in argentina australia canada france india south africa spain and the united states however under the european union protected designation of origin guidelines only the product from portugal may be labeled as port or porto while the names oporto porto and vinho do porto have been recognized as foreign non-generic names for port wine originating from portugal as well sherry is a fortified wine made from white grapes that are grown near the city of jerez de la frontera in andalusia spain sherry is produced in a variety of styles made predominantly from the palomino grape ranging from light versions similar to table wines such as manzanilla or fino to
darker and heavier versions that have been allowed to oxidize as they age in barrels such as the darker richer olorosos sweet dessert wines are also made from pedro ximénez or moscatel grapes and are sometimes blended with palomino based sherries madeira is a fortified wine made in portugal's madeira islands off the coast of africa madeira is produced in a variety of styles ranging from dry wines which can be consumed on their own as an aperitif to sweet wines usually consumed with dessert cheaper cooking versions are often flavored with salt and pepper for use in cooking but these are not fit for consumption as a beverage marsala is a fortified wine dry or sweet produced in the region surrounding the italian city of marsala in sicily marsala first received denominazione di origine controllata status in 1969 the european union grants protected designation of origin status to marsala and most other countries limit the use of the term marsala to products from the marsala area wine labels are an important source of information for customers since they tell the type and origin of the wine the label is often the only source the buyer has for evaluating the wine before purchasing it certain information is ordinarily included on the wine label such as the country of origin quality type of wine alcoholic degree producer bottler or importer in addition to these national labeling requirements producers may include their website address and a qr code with vintage specific information wines can be labeled by wine type grape varietal region or place of origin vineyard name trade name or bottler in general old world wines from european and mediterranean areas are labeled based on the area and the producer whereas new world wines from the americas concentrate on the varietal but even those lines are being blurred these days in order for a wine to be labeled by varietal it must contain 75 percent of that particular grape in the united states as you can see from this chart there are approximately 91 red grape varietals and 94 white grape varietals not counting new hybrids being developed even today some of the most popular red grape varietals are cabernet sauvignon merlot pinot noir shiraz or syrah malbec and zinfandel some of the most popular white wine varietals are chardonnay riesling sauvignon blanc pinot gris or pinot grigio chenin blanc and moscato wines are evaluated on four qualities aroma bouquet taste and body beer is made from water hops barley or malt and fermenting yeast there are several different types of beer typically we break beer into two different categories ales and lagers in the ales category we have pale ales brown ales and red ales these are very familiar to us in the united states but we also have porters and stouts lagers are things such as pilsners and bocks if you've ever had a budweiser you've had a pilsner when we select beers as flavoring ingredients there are some factors we need to consider usage from beer batter to marinades beers can be used to add flavor tenderize and add moisture bitterness hops add bitterness the more hops the more bitter the beer this may sometimes require the addition of sugar to the ingredients colors red light brown dark these colors add dimension to the food such as beer batter or marinades and of course flavors we want to match flavors or add a new flavor to the equation so let's discuss some of the distilled spirits that can be used for flavorings gin rum tequila and vodka are some very predominant flavors when
used in cooking we can use these to make a vodka sauce or we can use tequila as a marinade for chicken rum is an excellent marinade and has the taste of the sugar cane and gin gives us the flavors of the juniper berries with an almost pine taste to it brandy is produced by distilling wine and is typically taken as an after dinner drink the term brandy also denotes liquors obtained from the wines of other fruits as well and there are many different types of brandy across the winemaking regions of the world including cognac and armagnac vs or very special designates a blend in which the youngest brandies have been stored at least two years in the cask vsop or very superior old pale is a reserve brand that designates a blend in which the youngest brandy is stored for at least four years in the cask xo or extra old or napoleon designates a blend in which the youngest brandy is stored for at least six years in the cask fruit-based brandies are very popular flavoring components in food one of the most commonly used is calvados or apple brandy if you've ever had things such as french onion soup you've probably had calvados in that soup also kirschwasser which is a cherry flavored brandy framboise which is raspberry flavored poire which is pear flavored and a plum flavored one whose name i'm not even going to try to pronounce from scotch with its peat moss and charcoal flavors to irish whiskey which is a little more well whiskey forward as opposed to its scottish cousin bourbon is an american classification of whiskey traditionally produced in the kentucky region but all in all it is a classification which has specific legal qualifications tennessee whiskey is an example of a whiskey produced in the united states which is not necessarily considered a bourbon because of its legal qualifications rye whiskey is produced using rye as opposed to corn and will have a unique characteristic all of these have a high alcohol content with a sweetness to them which adds to the flavor especially when flambeing liqueurs are made from herbs fruits nuts spices flowers and other flavorings in a base of neutral spirits brandy rum or whiskey typically the flavoring components are added to that to give us the liqueurs cream liqueurs are made with cream they can be traditional cream liqueurs or flavored with various different flavorings crème liqueurs contain no cream but contain additional sugar some examples are cointreau which is an orange flavor creme de cacao chocolate frangelico toasted hazelnuts blue curacao which is also an orange flavored liqueur tinted blue framboise which is raspberry midori which is melon chambord black raspberry and amaretto which is made with almonds such as this luxardo here's how to flambe flambeing can be dangerous but it can also be super easy and i'll show you how to flambe safely here's what you need first and foremost an audience this isn't the kind of thing you do at home all by yourself secondly a good heavy-duty saute pan third some kind of fruit bananas strawberries apples whatever you have and then of course a little bit of butter sugar and of course your fuel some kind of alcohol source and you can't use beer or wine there's just not enough alcohol present your best choice a rum or some kind of liqueur today spiced rum begin with your butter then preheat the pan while you wait for the butter get your strawberries ready flambe fruit is a great way to elevate a simple dish of ice cream or you can add it to a slice of cake or you can just
enjoy it just the way it is one of the keys to flambeing is to listen carefully you need to hear a sizzle sizzle says hot hot means the alcohol will actually evaporate and burst into flames add a spoonful or two of sugar and it's the butter and the sugar and the rum working together that forms the sauce here's how to add the rum safely and light it on fire first take the pan away from the flame tip the front of the pan down and away from you add your rum a shot or so and then back to the flame safely if you're feeling it go ahead and shake it a little bit and this is perfectly practical right now it's reducing after the flames have died down let the sauce just simmer and reduce for a second until it forms a glaze just like that so what if you don't have a gas stove at home can you still do this you sure can the way to do that is do everything exactly the same and then when the time comes use a lighter there are a few guidelines however when using alcohol as a cooking component use quality products garbage in equals garbage out do not cook with wine that you would not drink many cooking wines contain sulfites and added sugars pay attention to the cooking time once the wine or alcoholic beverages have been added it's very easy for these to burn or change the effect of the dish brown foods before adding wine or other alcoholic beverages to the finished dish such as a sauce or stew once you add the alcohol you can no longer get the maillard reaction the browning of those particular foods and finally alcohol and acids in wine may interact with aluminum or cast iron cookware you want to make sure that you're using stainless steel or something that is not going to react with these particular acids so let's summarize and discuss some of our takeaways for the day taste is sweet salty sour bitter and umami flavor is all of these with the addition of aroma and mouthfeel if you can't smell you can't taste herbs come from the green parts of plants whereas spices come from barks roots seeds and so on vegetable oil is a generic term and can mean any number of different oils seasonings enhance the flavor of foods flavorings alter the flavor of foods and salt is the most common seasoning in the world
|
On_Cooking_Lecture_Videos
|
On_Cooking_Chapter_4_Menus_and_Recipes_Part_2.txt
|
in this module in part two we're going to be talking about units of measure and various other culinary mathematics principles the objectives for this module are understand the difference between imperial and metric units of measure in the culinary arts explain how to convert grams to ounces define and describe yield and conversion factor explain how to calculate unit cost describe and explain the yield percent triangle explain how to calculate recipe cost and plate cost and how to set the menu price accurate measurements are among the most important aspects of food production let's look at some of those units of measurement weight weight refers to the mass or heaviness of a substance it's expressed in terms such as grams ounces pounds and tons spring scales rely on coiled springs which can fatigue over time balance scales are great for large measurements but require counterbalances and are not as accurate digital scales are accurate and can be converted from imperial to metric with the touch of a button they come in large and small sizes volume this refers to the space occupied by a substance this is expressed in cups gallons teaspoons fluid ounces bushels and liters measuring cups can be large gallon sized or larger pitchers all the way down to quarter cup sized cups measuring spoons typically consist of a tablespoon and one one-half and one-quarter teaspoons count this is commonly used in purchasing to indicate the size of an individual item count can refer to the each item or the case when ordering by the count it's crucial to specify by the each or by the case there are two measurement systems the u.s system also referred to as imperial uses pounds and ounces for weight and cups for volume and the metric system which is the most common system in the world it's a decimal system in which grams liters and meters are the base units of weight volume and length respectively as you can see from this diagram one is more complicated than the other currently there are only three countries in the world that do not use the metric system although in many cases most of them use a mixture of the metric system and the imperial system including our own country the united states liberia on the coast of africa was originally founded by free black americans so it stands to reason that their constitution is actually very similar to the american constitution and because of that they also emulated our system of measurements myanmar which is also known as burma is between india and china and it was a british protectorate at one point and because of that well they didn't really like the british so they kicked out everything british that they had but today they still use quite a bit of the imperial system and some metric as well okay i have a confession to make i've lived in the united states since 2010 and alexa what's the weather today right now in new york it's 65 degrees with clear skies and sun today's forecast has partly sunny weather with a high of 77 degrees and a low of 61 degrees ah i still don't understand the use of fahrenheit virtually every country on earth uses celsius to measure temperature but the us still uses fahrenheit and for that reason we at vox often get comments like these okay we get it besides the fact that the majority of the world uses it the metric system makes conversions a lot easier the celsius scale even looks simpler it has freezing and
boiling points at nice round numbers 0 and 100 where in fahrenheit it's a bit of a mess and of course this isn't just an issue of aesthetics or weather updates america's unwillingness to switch over to the metric system has had serious consequences in 1999 a 125 million dollar satellite sent to mars disappeared in the martian atmosphere it's a setback to years of work already done in the vastness of space all it takes is one navigation error and this colossal mistake was largely due to a conversion error between us and metric measurements fahrenheit was really useful in the early 18th century at the time no one really had a consistent way to measure temperature but then a german scientist came up with the fahrenheit scale when he invented the mercury thermometer in 1714 to make the scale the most popular theory is that he picked the temperature of an ice water salt mixture as the zero mark he then put the freezing point of water which is higher than the salt mixture at 32 and placed the average temperature of the human body at 96 from there he placed the boiling point of water at 212 degrees in 1724 fahrenheit formalized that scale and was inducted into the british royal society where his system was a big hit as britain conquered huge parts of the globe in the 18th and 19th centuries it brought the fahrenheit system and other imperial measurements such as feet and ounces along with it and fahrenheit became a standard system for the british empire across the globe in the meantime the metric system was gaining popularity during the french revolution it was put in place to unify the country at the national level so by the second half of the 20th century celsius became popular in many parts of the world when many english-speaking countries began using the metric system even america attempted to switch over the change would have been good for trade and scientific communications with the rest of the world so congress passed a law the 1975 metric conversion act which led to the united states metric board that would educate people about the system this created the only metric highway sign in the u.s the interstate 19 connecting arizona to mexico but it didn't go much further than that the problem was that unlike the uk canada or australia the law made the switch voluntary instead of mandatory and of course people resisted the change and the metric board couldn't enforce the conversion so president reagan ended up disbanding the board in 1982 the next nudge toward metrication came when the metric system became the preferred measure for american trade and commerce in 1988 but nothing really stuck with the general public even though bizarre measurements like feet and fahrenheit are not doing them any favors students have to train for two sets of measurements making science education even more difficult and companies spend extra dollars producing two sets of products one for the us and the other for metric there's also an argument for public health according to the cdc about three to four thousand kids are brought to the er due to unintentional medication overdose every year and conversion errors for dosage are partly to blame so it seems like a no-brainer america needs to switch to the metric system to match the rest of the world but it is still struggling to make that change that's because it'll take a lot of time and money and there's no financial proof that it will all be worth it so unless that change is proven to be economically better we're not going to be using celsius anytime soon alexa what's 77
degrees fahrenheit in celsius 77 degrees fahrenheit is 25 degrees celsius to move from imperial to metric and metric to imperial it's important that you have some kind of keystone piece of information let's discuss these for each one for weight an essential keystone is one ounce equals 28.35 grams or one pound equals .45 kilograms this will allow you to move from imperial measurements to metric and from metric to imperial for volume a simple conversion is one fluid ounce equals 29.57 milliliters or one cup equals 237 milliliters you can use this to move from a gallon of water all the way over to a liter of water by manipulating the chain going from a gallon to a cup a cup to a milliliter and a milliliter to a liter
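since these keystones are just multipliers they chain together neatly here is a minimal python sketch using the exact keystone figures quoted above plus the standard fahrenheit-to-celsius formula results are as approximate as the rounded keystones themselves

# keystone conversion constants from this section
GRAMS_PER_OUNCE = 28.35
ML_PER_FLUID_OUNCE = 29.57
ML_PER_CUP = 237.0

def ounces_to_grams(ounces):
    return ounces * GRAMS_PER_OUNCE

def gallons_to_liters(gallons):
    # chain the keystones: 1 gallon = 16 cups, 1 cup = 237 ml, 1 liter = 1000 ml
    return gallons * 16 * ML_PER_CUP / 1000.0

def fahrenheit_to_celsius(deg_f):
    # standard formula, not stated in the transcript but widely known
    return (deg_f - 32) * 5.0 / 9.0

print(ounces_to_grams(16))        # one pound, roughly 453.6 grams
print(gallons_to_liters(1))       # roughly 3.79 liters
print(fahrenheit_to_celsius(77))  # 25.0, matching the alexa example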
recipe conversions are used when scaling a recipe up or down for instance if i have a recipe that serves 10 people and i need to serve 100 people that would be an example of when i would need a recipe conversion we have to take into consideration yield first yield is the total amount of product made from a specific recipe yield also refers to the amount of a food item remaining after cleaning and processing but that's for another time the conversion factor is the number used to increase or decrease ingredients and recipe yields step one divide the desired or new yield by the recipe or old yield to obtain the conversion factor now this may sound complicated but it is the easiest and simplest of all maths we take our new yield divide it by our old yield and that's our conversion factor step two we multiply each of the ingredient quantities in the recipe by the conversion factor to obtain the new quantity old times conversion factor equals new quantity so you can see it's not a complicated procedure if you remember new divided by old so you have the perfect recipe and now you need to feed an army well we can do that what we need to do is figure out our conversion factor this is the equation that we're going to use to convert our recipe for example your recipe has 4 servings and you now want to make 10 servings and to do that we need to take our new servings 10 and divide that by our old servings 4 which equals 2.5 our conversion factor which you will need to convert your recipe here's an easy recipe we're going to convert from 4 servings to 10 so we take each item and we multiply it by our conversion factor of 2.5 and we get a few crazy decimals in there which we need to change to fractions to do that you convert the decimal to a fraction and don't worry if you can't do that just google it so here's what our recipe looks like for 10 people pretty simple eh but wait what if we wanted to go from a 10 serving recipe to a 3 serving recipe it's the same math just a smaller decimal equivalent it all calculates out the same easy peasy remember your conversion factor converting portion size is equally simple and while this may look complicated it is really not step one determine the total yield of the existing recipe by multiplying the number of portions by the portion size step two determine the total yield desired by multiplying the new number of portions by the new portion size now it's simply old versus new right we take our new yield and we divide it by our old yield to get our conversion factor again and then we multiply each ingredient by that conversion factor so here you can see an example of cauliflower soup the recipe yields 1.5 gallons as 48 4 ounce servings we need to produce 72 6 ounce servings so we first figure out what our old yield was 48 times 4 is 192 ounces then we figure out what our new yield is 72 times 6 is 432 ounces then we get our conversion factor by calculating new divided by old 432 divided by 192 is 2.25 that is our conversion factor then we multiply everything in that recipe by 2.25 to give us our new quantities
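the two step conversion factor method translates directly into a few lines of code here is a minimal python sketch using the cauliflower soup numbers from above the soup ingredient amounts themselves are hypothetical placeholders since the transcript does not list them

def conversion_factor(old_yield, new_yield):
    # step one: new divided by old
    return new_yield / old_yield

def scale_recipe(ingredients, factor):
    # step two: multiply every ingredient quantity by the factor
    return {name: quantity * factor for name, quantity in ingredients.items()}

# 48 four-ounce portions scaled to 72 six-ounce portions, as in the soup example
old_total = 48 * 4   # 192 ounces
new_total = 72 * 6   # 432 ounces
factor = conversion_factor(old_total, new_total)
print(factor)        # 2.25

# hypothetical ingredient list for illustration only
soup = {"cauliflower lb": 4.0, "stock qt": 3.0, "cream cups": 2.0}
print(scale_recipe(soup, factor))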
additional conversion problems the following must be considered when converting a recipe equipment used sometimes the equipment we need is scalable and sometimes it is not if the recipe says that we need a mixing bowl well sometimes we may need to go to a bigger solution evaporation rates when you multiply the amount of liquid sometimes you have to take into consideration that the recipe may tell you to simmer for 10 minutes for evaporation but when you increase the liquid amount you may also increase the amount of time it has to simmer to evaporate recipe errors well sometimes we just make mistakes and that does happen from time to time so you always want to double check your recipe and make sure that it's accurate before you proceed time it takes to cook a recipe just like evaporation rates sometimes the time is going to adjust the longer you have to cook something may depend on how much of the ingredients you start with or whether you're going to add ice as part of the liquid for the cooling down process or if you're going to let it naturally cool down all of these factors have to be taken into consideration now let's talk about unit cost we're talking about the price you pay for one specified unit such as a pound a can a gallon a bunch or an each all of these are units of measure as purchased simply put ap as purchased means the condition or cost of an item as it is purchased or received from the supplier before any trimming cleaning or processing is done to it so take for example an apple for one apple purchased the as purchased would be the entire apple including the peel the stem the seeds and the apple meat itself we would need to convert that as purchased cost to unit costs or prices if i take my as purchased cost let's say for instance i bought 10 apples at ten dollars for the 10 apples and i divide it by the number of units it'll give me my cost per unit so if i use my example i have ten dollars worth of apples divided by ten apples that's going to give me one dollar per unit or one dollar per apple let's talk about yield percent many ingredients require cleaning trimming or boning for these products yield is the usable edible portion or ep the fat shells skin or sinew are discarded yield percentage is the ratio of the usable ingredient after cleaning and trimming a lot of ingredients have a lot of different yields so because of that we tend to use what is known as the book of yields this can be downloaded it can be found on the internet and what it allows you to do is look up really quickly what the yield percentage of an item is let's say for instance i need to know what the yield percentage of broccoli is i can look it up in the book of yields very quickly by looking inside and finding broccoli and it will tell me how much of that broccoli is usable the usable part is my edible portion and then my yield percent is the ratio of my edible portion to my overall ingredient edible portion is the amount of a food item available for consumption or use after trimming or fabrication it's a smaller more convenient portion of a larger or bulk unit as you can see from the example in the pictures below i have a tenderloin on one side that has the sinew the fat and the connective material still attached to it including the chain muscle and on the other side i have the cleaned denuded tenderloin with the chain muscle on the side and all the sinew and fat separated so now i have my edible portion which is my usable product my whole beef tenderloin and my chain muscle that's been cleaned on the side these are all considered edible portion the other parts are covered in the next little bit usable trim is the product that is available after trimming off the edible portion but is still usable so in this example the chain muscle even though i may not be able to make a steak out of that chain muscle i still may be able to use it for making kebabs or stew meat or something along those lines so it is usable trim trim loss or simply just loss is the waste product that is no longer usable in any other form and is destined for the recycling trash or compost bin here's an example of a butcher's yield form the reason why we complete these is so that we can calculate our yield percent for further usage if you are using the book of yields you can access a lot of this information within it and compare it to how well you do as a cutter a meat cutter a product cutter of any kind and see if you are on par with everybody else so i start with my primary weight then i have my raw weight if you look at this 6.75 pounds is my cryovac weight my raw weight is 6.61 pounds that means when i open the bag i'm actually losing a little bit of weight some of that is the bag but it's also a little bit of what we call purge the moisture or water that comes off of it once i fabricate it i weigh the usable product that i'm going to use for my primary usage that's going to be for my steaks and i have 3.5 pounds then my secondary usage is 1.32 pounds that's going to be my chain muscle on the side and then my waste weight is 1.78 pounds which is going to give me a yield percent of 52 percent the way i calculate that is coming up so looking back at our yield percent which is our ratio of usable ingredient after cleaning and trimming our ep weight after trimming is the weight we get
after we cut away all the sinew and unusable portions divided by our ap weight which is our original weight straight out of the container or the box put right on the scale we multiply that times 100 and that gives us our yield percent so let's look at the example we noted earlier 3.5 pounds of usable primary trim divided by our original weight of 6.75 pounds multiplied times 100 gives us 51.85 percent or about 52 percent let's look at calculating yield percent and quantity to purchase any time that you have three variables you can use a triangle very similar to this our edible portion quantity divided by our as purchased quantity equals our yield percent our edible portion quantity divided by our yield percent equals our as purchased quantity and our as purchased quantity times yield percent equals our edible portion quantity this is very helpful for determining how much of a product you need to purchase for instance if i know that my yield percent on broccoli is 50 percent and i go to my recipe and i see that i need 10 pounds in my recipe 10 pounds in the recipe would be my edible portion quantity my yield percent is 50 so i take edible portion quantity divided by yield percent equals my as purchased quantity or 10 pounds divided by 50 percent equals 20 pounds and that's how much broccoli i need to purchase i can do the exact same thing with cost that i did with the preceding triangle but you'll notice one difference the ep and the ap are reversed the ap is now on top and the ep is on the bottom this is a key distinction for this particular triangle because trimming decreases the usable quantity of an ingredient the cost of the ingredient must be increased by the amount discarded ap cost divided by ep cost equals yield percent this is an example of a yield test ap cost divided by yield percent equals ep cost and ep cost multiplied times yield percent equals ap cost so if i know how much something cost me say that broccoli i purchased earlier cost me five dollars and i divide it by my yield percent 50 percent now my ep cost is going to be 10 dollars and that's how much the broccoli i'm using in that recipe really costs even though i'm discarding some of the material i still have to take it into consideration
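both triangles reduce to one-line formulas and the flip between them is easier to see side by side here is a minimal python sketch checked against the tenderloin and broccoli numbers from this section

def yield_percent(ep_weight, ap_weight):
    # quantity triangle: ep on top
    return ep_weight / ap_weight * 100

def ap_quantity_needed(ep_quantity, yield_pct):
    return ep_quantity / (yield_pct / 100)

def ep_cost(ap_cost, yield_pct):
    # cost triangle: ap on top, because trim loss raises the real cost
    return ap_cost / (yield_pct / 100)

print(round(yield_percent(3.5, 6.75), 2))  # 51.85, the roughly 52 percent butcher's yield
print(ap_quantity_needed(10, 50))          # buy 20 pounds of broccoli to net 10 pounds ep
print(ep_cost(5.00, 50))                   # 5 dollars ap of broccoli really costs 10 dollars ep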
all these costing methods are leading up to one simple thing how to determine your recipe cost and how to translate that into the price that you charge your customer for recipe costs we start with determining the cost for a given quantity of each recipe ingredient using the costing procedures described earlier then we add the ingredient costs together to obtain the total recipe cost we take our total recipe cost divided by the number of portions that recipe makes and that's our cost per portion once we have our cost per portion we can start to determine our selling price the selling price is determined by the plate cost this is the cost of the food that is served the overhead cost associated with running a business is factored in as well in many cases but a lot of associated costs are factored in elsewhere on the profit and loss statement the food cost percentage is the amount needed to mark up a plate cost in order to achieve a desired profit understanding how to calculate your food cost percentage is essential for every restaurant owner if you're in the restaurant or food service business you understand how important it is to keep your costs under control being able to compare and calculate your food and beverage costs month over month can be really helpful in the overall management of your business so how do we calculate food cost percentage well food and beverage costs are calculated as a percentage of the total volume of sales first calculate the amount that you spend on preparing a dish take all the ingredients that go into that dish and add up the numbers to find your food cost for that item next divide your food cost by the price of that dish let's say for example that it costs you a dollar fifty in ingredients to make a burger and fries and you charge five dollars for that meal a dollar fifty over five is zero point three take that number and multiply it by a hundred and we get a food cost percentage of thirty percent once you have the food cost percentage of items on your menu you'll be better equipped to determine if you need to raise or lower your prices to cover your overhead cost you'll also be able to appropriately change either the menu price or the type of ingredients you use if the prices of the ingredients fluctuate don't forget that every restaurant will have different food costs what it comes down to is determining our food cost or cost of goods sold to determine the food cost of an individual menu item you determine the cost of the plate by adding up all the costs per portion that go on the plate you've got one portion of macaroni and cheese one portion of broccoli one portion of grilled chicken one dinner roll maybe you have a side salad that goes with it and then i also like to factor in the paper and plastic products that go with it even if they don't get it to go i still factor that in because if you don't factor in every to-go order that's a loss that you're going to take then i divide by the menu or selling price then i multiply times 100 to get the food cost percent so again my plate cost divided by my selling price times 100 gives me my food cost percentage we can also use food cost percentage to determine our sale price the first thing we do is determine the total cost of all the components on the finished plate our plate cost then we divide the total plate cost by the desired food cost percentage plate cost divided by desired food cost percentage equals our selling price so let's take a moment to pause on that and talk about it because this is something that as a chef i do quite regularly i take my cost of the plate let's say for instance i have a dessert that costs me one dollar to make and my desired food cost percentage for desserts is 15 percent and i say for desserts because i have a desired food cost for desserts for appetizers for entrees for salads i have a desired food cost for each of those my appetizers and entrees are different my desired food cost for my desserts is 15 percent this is the target i personally like to achieve and my desired food costs for my entrees are anywhere between 30 and 45 percent depending on the product that i'm using chicken is relatively inexpensive so i'm going to go with a 30 to 35 percent food cost on that versus a steak which is way more expensive where i'm going to be more at the 45 percent food cost so when we look at food cost what we're really talking about is the amount of money that we're paying in order to produce one dollar's worth of revenue so for every one dollar i get 45 cents of that steak entree is being paid out just simply buying food and putting it on the plate
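putting the last two formulas into code makes the relationship obvious they are the same equation solved for different unknowns here is a minimal python sketch using the burger and dessert examples from this section

def food_cost_percent(plate_cost, menu_price):
    return plate_cost / menu_price * 100

def minimum_menu_price(plate_cost, desired_pct):
    # divide by the percentage expressed as a decimal
    return plate_cost / (desired_pct / 100)

print(food_cost_percent(1.50, 5.00))           # 30.0, the burger and fries example
print(round(minimum_menu_price(1.00, 15), 2))  # about 6.67, the dessert's price floor before rounding up to 7.95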
so if i take this model plate cost divided by my desired food cost percentage to give me my sale price i take my plate cost of one dollar for the dessert and i divide it by .15 because i can't divide by a percentage i have to turn it back into a decimal and that gives me a selling price of 6.66 dollars well i'm obviously not going to put 6.66 on my menu what i can do though is put 7.95 the selling price that you're going to get from this calculation is a minimum selling price but keep in mind price value perception what people perceive to have value to them will diminish if that price is too high so you always want to keep that in mind you never want to go too high but you also want to price at or above your selling price calculation so i know that's a lot of information this is intended to be an introduction a survey there are whole classes dedicated to things such as this so let's talk about some of the summaries and takeaways for today when to weigh and when to measure are often critical to a recipe's success one ounce and one fluid ounce are not necessarily the same thing if i have an ounce of flour and a fluid ounce of flour they may not be equal we're slowly working toward the metric system by incorporating dual measurements on most of our products if you go to the grocery store today you don't buy a gallon of soda you buy a liter of soda converting a recipe is as easy as dividing your new yield by your old yield new divided by old use the book of yields because a lot of items have different yields and it is impossible to keep track of them all i actually have a book of yields that i've maintained for over 20 years it's got tabs all through it the material in it doesn't really change they just add some new stuff every now and then but you can find all this online and i highly recommend picking up a copy or an electronic copy or at least knowing where to get that information online remember for the epq and the ep dollars triangles the ep and the ap are reversed and six know your food costs be aware that cost will change when the price of the ingredient changes so when the price of meat spikes up your food cost is going to spike up which means you're going to get less of that dollar in return your percentage is going to go up
|
Microeconomics_entire_playlist
|
Normal_vs_Inferior_Goods.txt
|
in this video we're going to discuss the difference between normal goods and inferior goods in economics so we know that a change in people's income is going to affect demand right it's going to affect demand for goods and services going to affect demand for cars and so on but the question is is it going to increase demand or is it going to decrease demand and the answer to that question depends on the good in question is the good a normal good or is it an inferior good most goods are considered to be normal goods and what we mean by normal goods is that if people's income increases then people are going to demand more of that good there's going to be an increase in demand so let's take sports cars for example if people's incomes increase then there's going to be an increase in demand for sports cars so we can think of sports cars as a normal good now if something happens like a recession and there's a decrease in people's incomes then that's going to decrease demand for the sports cars so we can think about it like that but you might be saying oh okay easy people's income goes up demand goes up but it depends because if it's an inferior good then we actually have the opposite effect with an inferior good if people have an increase in their income they're actually going to demand less of the good they're going to start buying something else oh I got a raise at work I don't have to buy ramen noodles anymore they get excited now if there's a decrease in their income like a recession or they lose their job or something they actually increase demand for that good so it seems kind of weird but basically the inferior good behaves in the opposite way of the normal good so I want to give you a couple examples so let's take the demand for chocolate bars and then let's take the demand for potatoes and instead of potatoes you could think of something like ramen noodles if you know what ramen noodles are some kind of cheap good that people buy because it's inexpensive and so we're going to say that potatoes or ramen noodles we'll think about that as our inferior good so this is going to be an inferior good and then the chocolate bars we'll consider to be a normal good and so what does that mean that means that if we've got our demand for chocolate bars and people's incomes increase let's say that people's incomes double or something let's say people's incomes increase dramatically and we want to know the effect on demand for chocolate bars and potatoes well because we know that chocolate bars are a normal good an increase in people's incomes is actually going to increase demand so when demand increases remember that the demand curve is going to shift to the right so we're going to have a new demand curve D2 let's call it and we are going to shift to the right people are going to buy more chocolate bars their incomes doubled and they say hey you know what now I can buy a lot more chocolate I really like chocolate bars now let's think of the same thing a doubling or tripling of people's income something happens somebody got a raise at work people are doing a lot better and now let's think about the demand for potatoes or if you want to think about it the demand for ramen noodles or something cheap the idea is something that is very inexpensive and people would like to buy something that they see as higher quality than the ramen noodles or
something right so when the people's incomes increase they say okay you know what and tired of buying ramen noodles I've had enough of that I want to buy some chocolate bars or I want to buy some steak or something they want to buy something that they think is a lot better the taste better or something like that right not they don't want the ramen noodles anymore just because it's cheap because now their income is increased so what's going to happen is their demand demand is going to decrease for these long long noodles or potatoes or whatever this inexpensive item is so we'll say D - so demand has actually decreased so what we notice is the following day it for a normal good when people's incomes increase the demand is actually it's going to shift to the right because demand is going to increase they're going to demand more of it however when it's some of some kind of good that people see is they don't really want to be consuming it they're only consuming it because it's cheap and they don't have a lot of money or something like that then when their incomes increase then they say you know I don't have to buy ramen noodles anymore I could buy something else and so the demand is actually going to shift to the left
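to make the normal versus inferior distinction concrete, here is a minimal Python sketch where every quantity and income figure is hypothetical. It classifies a good by the sign of its income elasticity of demand, the percent change in quantity demanded divided by the percent change in income.

# a minimal sketch with hypothetical numbers: a positive income elasticity
# means a normal good, a negative one means an inferior good

def income_elasticity(q_before, q_after, income_before, income_after):
    pct_q = (q_after - q_before) / q_before
    pct_income = (income_after - income_before) / income_before
    return pct_q / pct_income

def classify(elasticity):
    return "normal good" if elasticity > 0 else "inferior good"

# incomes double; chocolate bar purchases rise, ramen purchases fall
chocolate = income_elasticity(10, 16, 40_000, 80_000)
ramen = income_elasticity(20, 8, 40_000, 80_000)

print(f"chocolate bars: {chocolate:+.2f} -> {classify(chocolate)}")  # +0.60 -> normal good
print(f"ramen noodles:  {ramen:+.2f} -> {classify(ramen)}")          # -0.60 -> inferior good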
|
Microeconomics_entire_playlist
|
The_Coase_Theorem_and_Negative_Externalities.txt
|
in this video we're gonna discuss the Coase theorem so we've been talking about how negative externalities can create a problem because we've got one person imposing costs on another person but not reimbursing him or her for the harm that's being done and a Pigouvian tax and cap and trade are different ways that we can think about bringing about the socially efficient outcome the socially efficient amount of pollution or whatever the externality is however this guy named Ronald Coase came up with a really innovative idea which is that a negative externality can just be resolved by bargaining between the parties so the person that's creating the externality and the person that's being affected by the externality can bargain and come up with a deal where they basically achieve this socially efficient outcome so they can just work this out on their own but there are two really important assumptions to Coase's argument one is that property rights are clearly defined so if somebody says hey look I'm polluting this river then the other person is able to say hey I have a right to clean water and so forth property rights have to be clearly defined and the transaction costs have to be low and transaction costs I'll define a little more when we talk through the example but basically they're the costs of exchange the cost of coming to an agreement the time and effort it would take to come to this agreement and to do the bargaining and that's going to become evident in our examples to come so let's think about pollution here for a minute let's think about a factory that pollutes a nearby river maybe they're manufacturing steel or some chemical and they dump sludge into this river and there happens to be somebody who lives on the other side of that river and that person likes to fish in that river but the chemicals that are being dumped in the river are killing some of the fish so now this person has fewer fish that they can catch they've been harmed so there's been some harm and what Coase is basically saying is look on the one hand yes this factory is doing something wrong but also this person is choosing to live here and so these two parties if we say okay well what are their property rights assuming that we can define their property rights they can come to an agreement on their own absent any kind of government intervention so let's say that this person's property right is a right to clean water a right to fish so they've got a right to clean water in this river and so now the factory is violating this person's property rights so the person could say hey look I have a right to clean water here and you're violating my right and therefore we have to come to some kind of agreement and so maybe the factory says okay well look we will give you let's say fifty thousand dollars a year for the fact that you are being inconvenienced and so forth now this person could choose to accept that they could choose to reject that and move away any number of things but the idea with Coase bargaining is that these two parties come to this agreement on their own okay now transaction costs are low here and the reason that transaction costs are low is because there's just one person who's being harmed and there's one factory and so they can bargain with one another but what if we have a situation where there was a factory in let's say Los Angeles that was putting off all kinds of pollution in the air and it was creating smog and that's making it difficult for people to breathe and so there are costs being imposed on let's say millions of people right so now it's not just one person that happens to live on the other side of the river now it's millions of people who are being harmed now you could say okay these people have property rights they have a right to clean air and that factory is violating their rights so then those people have standing to say hey look we've got an issue here you're creating this pollution which is violating our property rights we need to come to an agreement but the problem is that in such a situation Coase bargaining wouldn't necessarily work because transaction costs are going to be high and this is what transaction costs are you've got millions of people here and for each one of those people to go and work out some kind of agreement with the factory would be basically almost impossible right because one person might say look if you give me $100 a year I'm happy with it and then another person says well I want $10,000 and for millions of people to individually come and strike some kind of bargain with this factory and to determine how much each of them has been harmed there are just so many people that in such a case Coase bargaining just really wouldn't work
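just as a rough illustration of why the number of affected parties matters, here is a minimal Python sketch in which every dollar figure is hypothetical. It compares the gains from a bargain against total transaction costs that scale with the number of people who have to negotiate.

# a minimal sketch with hypothetical numbers: Coase bargaining can resolve an
# externality when the gain from a deal exceeds the total transaction costs,
# and total transaction costs grow with the number of affected parties

def bargaining_feasible(total_harm, cleanup_cost, n_parties, cost_per_negotiation):
    gain = total_harm - cleanup_cost                 # gains from trade
    transaction_costs = n_parties * cost_per_negotiation
    return gain > transaction_costs

# one fisher across the river: one negotiation, low transaction costs
print(bargaining_feasible(total_harm=80_000, cleanup_cost=50_000,
                          n_parties=1, cost_per_negotiation=2_000))          # True

# smog over Los Angeles: millions of separate negotiations swamp the gains
print(bargaining_feasible(total_harm=80_000_000, cleanup_cost=50_000_000,
                          n_parties=3_000_000, cost_per_negotiation=2_000))  # False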
|
Microeconomics_entire_playlist
|
Cap_and_Trade_using_Marketable_Permits_to_address_Negative_Externalities.txt
|
in this video we're going to discuss cap and trade so cap and trade is a system of marketable permits that is used to address negative externalities so I want to give you a specific negative externality let's talk about pollution and in particular we'll talk about sulfur dioxide emissions so sulfur dioxide leads to acid rain and acid rain can kill fish and insects and so forth so let's say it's killing fish in a nearby stream and that there are people who fish in that stream and catch that fish and so now a cost has been imposed on them right so let's say that the factory that is producing electricity or whatever is creating this sulfur dioxide with their production it's creating acid rain it's killing the fish and imposing costs on the people who do the fishing and so that's the nature of the negative externality and so there's going to be a market failure in that there's going to be more sulfur dioxide produced than what is socially efficient and so cap and trade says well we're going to do a couple of things to stop that first of all we're going to put a cap on the total amount of sulfur dioxide emissions so you might say okay look we're going to have a cap I don't know exactly how sulfur dioxide is measured but let's say 1 million tons is the cap on sulfur dioxide so firms cannot generate more than a million tons of sulfur dioxide in any given year so that's the cap in cap and trade but the second part is you say okay look now that we've done that we're going to set up a market where firms can actually trade and buy and sell permits to emit sulfur dioxide so here's how this would work let's say that we have an industry with three firms we have firm A we have firm B and then we have firm C and we allocate initially we say okay well how much does each firm get of this million tons let's say that firm A is given 200,000 and firm B 200,000 and then firm C 600,000 now these could be allocated based on historical emissions and we say well firm C is a lot larger we should give them a larger credit but this could create issues because somebody might say hey well look that's rewarding bad behavior they were emitting a lot they should be punished and so forth but you could also auction the credits in any event these are the initial allocations of credits so firm A in any given year has credits given to it so it can produce up to 200,000 tons of sulfur dioxide now here's the thing let's say that firm A gets really good they invest in clean technology and they say hey we do a really good job here and we could actually get our emissions of sulfur dioxide below 200,000 tons so we say okay maybe we'll do 100,000 tons and then we can sell the difference right because we got credits for 200,000 but we were able to invest in technology and we only produced 100,000 tons of sulfur dioxide maybe firm C hasn't invested much in technology and so firm C can say well look we would like to buy 100,000 credits because maybe we think we're going to be at 700,000 right so firm C says okay look we will buy credits now you might think hey that's a bad thing because firm C hasn't done a really good job and now they're actually emitting more than they were supposed to but look the total amount of emissions the total cap is unchanged we said as a society we could deal with a million tons and now that's happened and so even though firm C might not have done a good job cleaning up firm A invested in clean technology and had an incentive and said hey look we can actually sell our permits we can make money off of this and so firms can trade them and buy and sell them and so there's an incentive for firms to adopt cleaner technology and so forth and then we also as a society get the peace of mind of knowing that look a million tons is the amount of sulfur dioxide that can be produced and that's it now I want to show you with a graph how this is basically equivalent to using a Pigouvian tax which we talked about before with setting a corrective tax and that'll ultimately get us to the same outcome so let's say we've got price and we've got quantity and let's say that this here is the marginal social cost of pollution and then let's say this is the marginal benefit of pollution I know it's hard to think of a marginal benefit of pollution so you could also think of it as the marginal cost of cleanup or marginal cost of abatement which is like cleaning up pollution but right now we'll just say okay right here where our marginal social cost equals the marginal benefit that should be our optimal that's our socially efficient quantity of sulfur dioxide and we said that was a million tons now what we do is we say hey look we can also look at the marginal private cost to the firm that's creating the sulfur dioxide so their marginal private cost is lower than the marginal social cost because the firm doesn't internalize the cost to the people doing the fishing who have fewer fish to catch and so forth so the actual equilibrium absent cap and trade absent a Pigouvian tax and so forth is going to be we'll call this Q prime and then this one will be Q star so Q prime let's say that it's 1,500,000 tons of sulfur dioxide so this is inefficient right because we're overproducing sulfur dioxide more than the socially optimal amount because the factory did not have any incentive to take into account the external cost to the people doing the fishing and so forth right so we said when we talked about a Pigouvian tax we could actually set an amount here as the tax see this difference so basically the marginal external cost the cost of the externality the cost to the fishermen and so forth could be set equal to a tax and that would bring about the socially efficient quantity so that tax would be a price mechanism this is a way we change the price we force the factory to face the full social cost of its actions and then because we changed the price to the firm it naturally adjusts to the socially efficient quantity however with cap and trade we're doing the reverse we're setting the quantity and saying hey we're going to set Q star at a million instead of saying look we're going to do this tax we're just going to set the quantity which is nice because you can say look we're going to set the quantity based on science maybe the scientists tell us a million tons is the absolute max whereas with the tax in theory you end up at the socially efficient quantity but if you didn't set the tax right there could be issues but in any event with cap and trade we're going to set the quantity and then let price adjust so a tax is a price mechanism and cap and trade is a quantity mechanism with cap and trade we're setting the quantity and then allowing price to adjust through the sale of the marketable permits and so forth but with a Pigouvian tax we're using a price mechanism and then the quantity will naturally adjust as firms change their behavior based on the fact that they're internalizing the social cost but in theory both the corrective tax and the cap and trade system of marketable permits lead to the socially efficient equilibrium
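here is a minimal Python sketch of the permit trade in the example above. The allocations and the 100,000-ton trade come from the example, while treating permits as a simple dictionary is just an illustration; the point is that trading moves permits around but the cap itself never changes.

# a minimal sketch of the firm A / firm C trade described above

CAP = 1_000_000  # total tons of sulfur dioxide allowed per year

permits = {"A": 200_000, "B": 200_000, "C": 600_000}    # initial allocation
emissions = {"A": 100_000, "B": 200_000, "C": 700_000}  # after A invests in clean tech

def trade(permits, seller, buyer, tons):
    # transfer permits between firms; the cap itself is untouched
    permits[seller] -= tons
    permits[buyer] += tons

trade(permits, seller="A", buyer="C", tons=100_000)

assert sum(permits.values()) == CAP                      # the cap is preserved
assert all(emissions[f] <= permits[f] for f in permits)  # every firm is in compliance
print(permits)  # {'A': 100000, 'B': 200000, 'C': 700000}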
|
Microeconomics_entire_playlist
|
Economics_of_Fracking.txt
|
in this video we're going to discuss the economics of fracking which is a really good way to understand how supply and demand works in a real market so first of all fracking stands for hydraulic fracturing and it's basically a way that companies will drill a well into the ground and then they will spray water and sand and other items into shale rock right so there'll be this shale rock formation and they break up the shale rock and they're able to extract oil and natural gas right so this natural gas is important because natural gas can be used to heat people's homes or to power air conditioning and so forth so we're going to have some really interesting economic effects from this increase in supply of natural gas so basically because we have this new technology if we think about the market for natural gas we're actually going to have a shift in our supply curve right so I've got our supply curve here and then we've got our downward sloping demand curve here and because we have a new technology that allows for more production of natural gas our supply curve is going to shift to the right okay so I'm just going to put S2 that's our new supply curve now previously our equilibrium was right here and our old price was here and our old quantity of natural gas was here right so this was the equilibrium amount but now the equilibrium has shifted now the equilibrium is right here see it where supply curve number two intersects with the demand curve right so now we have this new equilibrium so if we extrapolate this out now we see that here we have our new price and our new price is lower see this it's lower than our old price so the price of natural gas is going to go down and then if we extrapolate this here we see our new quantity right so now there's going to be a higher quantity of natural gas that's being produced in the market and consumed right so our price has gone down and the quantity of natural gas has gone up okay now think about this practically coal is a substitute for natural gas potentially right because coal can also be used to heat homes let's just focus on heating homes here right so in the United States which I'm focusing on here because this is where fracking has had a really big impact when we think about the market for natural gas and coal we could use either one to heat people's homes but if we have a situation where all of a sudden this new technology is decreasing the price of natural gas well what is going to happen to the market for coal because if coal is a substitute or we can think of natural gas as a substitute for coal either way it doesn't matter the idea is that now a substitute has become cheaper so a substitute for coal has become cheaper people can use natural gas instead of coal the price of natural gas drops what's that going to do to demand for coal right so with natural gas we had a shift in the supply curve right the supply curve shifted right but with coal we're actually going to have a different kind of shift here's our demand curve and our demand curve is now going to shift left so see here's our new demand curve D2 okay so now we also have a new equilibrium so our new equilibrium is going to be right here okay before it was here but now it's moved here's our equilibrium so now we'll say our new price of coal here's our new price that corresponds to this equilibrium and then we've got our new quantity of coal and you see that the new quantity is lower than the old quantity and the new price is lower than the old price so the price of coal has gone down and also the quantity at where the market clears right this is our equilibrium so what has happened here we've seen that technology has made it basically easier to produce a lot more natural gas right because we've got this new technology where we can get natural gas out of shale rock and because natural gas can be used to heat homes it's displacing some of the coal right because its price is dropping right we've got this big supply of natural gas it's displacing some of the coal that used to be used to heat homes and so now the price of coal and this has all happened since the beginning of the 21st century right this is to give you a time frame in the US so now the price of coal is plummeting and the quantity is plummeting so what has been happening well we've been having bankruptcies of coal firms right
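to see the supply shift numerically, here is a minimal Python sketch with made-up linear supply and demand coefficients. Shifting the supply curve right by lowering its intercept produces exactly the pattern described above, a lower price and a higher quantity.

# a minimal sketch with hypothetical linear curves:
# demand: P = a - b*Q    supply: P = c + d*Q

def equilibrium(a, b, c, d):
    # set demand price equal to supply price and solve for Q, then P
    q = (a - c) / (b + d)
    p = a - b * q
    return p, q

p_old, q_old = equilibrium(10, 0.5, 2, 0.5)  # before fracking
p_new, q_new = equilibrium(10, 0.5, 0, 0.5)  # fracking shifts supply right (S2)

print(p_old, q_old)  # 6.0 8.0
print(p_new, q_new)  # 5.0 10.0 -> lower price, higher quantity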
|
Microeconomics_entire_playlist
|
How_Substitutes_and_Complements_Affect_Demand.txt
|
in this video we're going to discuss substitutes and complements in economics so the idea behind substitutes and complements is that a change in the price of one good can actually affect demand for a different good and it depends on whether the two goods are substitutes or complements so for example let's take a bus ticket you're thinking about taking a trip and we're thinking about the demand for bus tickets but you could also take a train right so you could also get a train ticket so if we're thinking about the demand for bus tickets and there's an increase in the price of train tickets then people will say okay well train tickets are increasing in price I'm going to increase my demand for bus tickets I'd rather ride the bus because the train is getting more expensive right so if we have an increase in the price of a substitute that will increase demand for something like the bus ticket now if there's a decrease in the price of a substitute let's say the train tickets actually became cheaper then that's going to decrease demand for the good in this case a decreased demand for a bus ticket now a complement is kind of like the opposite it's two things that you would normally consume together for example you might think about spaghetti and pasta sauce right so you're going to have pasta and then you're going to have sauce and you're usually going to have them together right you're not just going to have one you're going to have the two of them in combination so if the price of pasta sauce were to increase that would decrease demand for pasta think about it if you went to the store and pasta sauce had tripled in price you would probably buy less pasta and the opposite is also true right if pasta sauce were to become a lot cheaper then you would actually increase your demand for pasta because you're buying the two of them together they're complements now I want to sketch out for you the demand curves just to show you how this would work so let's take a couple of goods here let's think first about coal and then we'll think about the demand for peanut butter but let's start with the demand for coal so what happens if the price of natural gas were to decrease if natural gas became a lot cheaper for whatever reason well natural gas and coal can both be used to heat people's homes so they are substitutes so natural gas is a substitute for coal so if natural gas suddenly becomes a lot cheaper that is going to decrease the demand for coal so what's going to happen as natural gas becomes cheaper people start heating their homes with natural gas instead of coal so the demand curve for coal is going to shift to the left there's going to be a decrease in demand for coal now let's think about peanut butter in the US I don't know about your country but in the United States peanut butter and jelly are commonly consumed together on a sandwich we have a sandwich called the PB&J sandwich PB&J peanut butter and jelly sandwich it's a very common thing that people eat particularly kids so if we think about the demand for peanut butter and we say that there is a decrease in the price of jelly if parents are going to the store looking for something to buy for their kids' lunch and they say wow jelly got a lot cheaper well what is that going to do to the demand for peanut butter well because they normally buy these together they buy peanut butter and jelly at the same time if jelly gets a lot cheaper that's actually going to increase demand for peanut butter so we see the demand curve would actually shift to the right for peanut butter and we'd have a new demand curve here D2 so we see that this is a complement right so we would say that jelly is a complement to peanut butter because people consume them together so we see that the price of a related good whether it be natural gas related to coal or how jelly is related to peanut butter or pasta and pasta sauce when the price of one good changes that can actually affect the demand for an entirely different good
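one way to make the distinction mechanical is the cross-price elasticity of demand. Here is a minimal Python sketch with hypothetical percentage changes: a positive elasticity means substitutes and a negative one means complements.

# a minimal sketch with hypothetical numbers: cross-price elasticity is the
# percent change in demand for good A divided by the percent change in the
# price of good B

def cross_price_elasticity(pct_change_qty_a, pct_change_price_b):
    return pct_change_qty_a / pct_change_price_b

def classify(e):
    if e > 0:
        return "substitutes"  # price of B up -> demand for A up
    if e < 0:
        return "complements"  # price of B up -> demand for A down
    return "unrelated"

# natural gas gets 20% cheaper and coal demand falls 10% -> substitutes
print(classify(cross_price_elasticity(-0.10, -0.20)))  # substitutes

# jelly gets 20% cheaper and peanut butter demand rises 5% -> complements
print(classify(cross_price_elasticity(0.05, -0.20)))   # complements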
|
Microeconomics_entire_playlist
|
How_to_Find_the_Equilibrium_Mathematically.txt
|
in a previous video we talked about how we could find the market equilibrium by graphing the supply and demand curves and seeing where they intersect and then our equilibrium price was the price where the quantity demanded equaled the quantity supplied but there's another way that we can do it we could use mathematical equations to solve for what our market equilibrium is so let's say we're talking about the market for chocolate bars and I've got a demand schedule here and a supply schedule to show the quantity demanded at the different prices between $1 and $5 and so what we did before was we graphed out our supply curve and then we graphed out our demand curve and we found that our equilibrium was right here which is a price of $3 and a quantity of nine chocolate bars so the question is can we find that without having to draw these graphs and so forth and the answer is yes so let's say that our demand instead of thinking about it as a line we could think about it as an equation that represents this line right and the equation that represents the demand curve in this case would be P = (18 - Q) / 3 price is equal to 18 minus the quantity demanded all divided by 3 okay so that equation right here represents this line and then our supply curve is represented by the line P = Q / 3 price is equal to the quantity supplied divided by 3 just to show you how that's true take our quantity supplied let's say it's 3 so that'd be 3 for Q and then we divide that by 3 that gives us 1 and that's a price of 1 okay so I'm not going to talk too much about how we derive these maybe I'll make a different video on that but I just want to show you right now that we can find the equilibrium price and quantity which again was 3 and 9 that was our equilibrium price and our equilibrium quantity that we found using these graphs we can find that with these equations so what we do is we set the equations equal to each other and then we solve so we're going to have (18 - Q) / 3 and we set that equal to Q / 3 and even though this was the quantity demanded and this is the quantity supplied we just use a generic Q because we assume at the equilibrium point the quantity supplied is going to be equal to the quantity demanded right so there are a number of ways you could solve this but let's multiply each side by 3 right so if we multiply this side of the equation by 3 then 3 times Q over 3 is going to give us Q and if we multiply this side of the equation by 3 then we have 3 times (18 - Q) over 3 and now the 3 in the numerator and the 3 in the denominator cancel out and that leaves us with 18 - Q equal to Q and now we can add Q to both sides so we add Q to this side that gives us 18 and add Q to this side that gives us 2Q and now we can divide each side by 2 so we divide the right side by 2 that gives us Q and divide the 18 by 2 that gives us 9 so we know that our equilibrium quantity is going to be 9 but now you might say well what is the equilibrium price how do we find the price what we can do is take this Q = 9 and plug it back into one of our equations so let's plug it into our supply equation that'll be easier so now we're going to have P = Q / 3 and we're going to plug in that 9 for Q so P = 9 / 3 and 9 / 3 is 3 so we see that at our equilibrium which is where our demand equals supply right so we just set the equations equal to each other and solve for Q and then we just plug Q back in so now we see that we have an equilibrium price of $3 and an equilibrium quantity of 9 and that matches up exactly with what we found in our graph here where we have an equilibrium price of 3 and an equilibrium quantity of 9
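the same algebra can be checked in a few lines of Python. Using the sympy library is just one way to do it, an assumption on my part; the demand and supply equations are the ones from above, P = (18 - Q) / 3 and P = Q / 3.

# a minimal sketch: set demand equal to supply and solve for the equilibrium

from sympy import symbols, Eq, solve

q = symbols("q", positive=True)

demand_price = (18 - q) / 3  # P = (18 - Qd) / 3
supply_price = q / 3         # P = Qs / 3

q_star = solve(Eq(demand_price, supply_price), q)[0]  # quantity where Qd = Qs
p_star = supply_price.subs(q, q_star)                 # plug Q back into supply

print(q_star, p_star)  # 9 3 -> equilibrium quantity 9, equilibrium price $3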
|
Microeconomics_entire_playlist
|
The_Effect_of_a_Price_Floor.txt
|
in this video we're going to discuss the effect of a price floor so let's take the market for wheat for example and so I've drawn a downward sloping demand curve and an upward sloping supply curve and we see that before the effect of any kind of price floor the consumer surplus is this blue area here so that blue triangle is the consumer surplus and this orange triangle here is the producer surplus okay so that's before the effect of a price floor and the equilibrium price is $133 let's say per ton of wheat or however you want to measure it so it's $133 per ton of wheat and then we have 58 million tons of wheat for example that are produced and consumed in the US or something like that so here's our equilibrium right here I'll draw our equilibrium point and again the equilibrium price of wheat is $133 per ton now what is going to happen is the government is going to come in and set a price floor of $200 so let's draw the price floor let's put it right here so I'm going to label that P sub F a price floor of $200 and what is happening is this the government is saying look it is illegal to sell wheat below the price of $200 okay so it's just saying it's illegal now technically the government could say and therefore if there's any surplus left over we will buy it I'm not going to talk about that in this video that's actually called a price support right when the government not only says okay the floor is going to be $200 but we're also going to buy up any extra wheat that's a price support we'll make a different video and talk about that later right now in this video we're just saying the government is saying that the price of wheat is going to be $200 it cannot come below this floor okay so when the government does this what are the effects well if we look at a price of $200 and ask how much wheat is going to be demanded and how much is going to be supplied the amount supplied is right here at $200 and you can see that that's higher than the amount demanded by consumers at a price of $200 so let me draw this out and we'll put some numbers to this let's say for example that 100 million tons is the amount that the wheat farmers are willing to supply but the demand is let's say 25 million so then we see that okay at a price of $200 a ton the farmers are like hey we'll supply 100 million tons of wheat we think that's a great price but the consumers are saying look for that amount of money that's higher than what we were expecting to pay we'll only demand 25 million tons okay so that difference between the 25 and 100 million that's a 75 million difference and that is a surplus so there's going to be some extra wheat that is left over and that's why I talked about this idea of a price support and again a price support is basically a price floor plus a guarantee from the government that hey we'll buy this surplus we don't have that here so before we get into the government buying this let's just say the government is not going to buy it so we just have this surplus now not only do we have this surplus we're also going to have a change when we think about our consumer surplus and our producer surplus now when I say surplus here it's actually pretty confusing so just so we don't get confused let's say excess supply let's call that excess supply if you're looking in your economics textbook it might say surplus there it basically means we have extra wheat okay but the reason I don't want to use surplus is because we've got our consumer surplus and our producer surplus and we can think about the two of those added together as our total surplus and that's not extra wheat that's value for consumers and producers right we made another video where we talked about that but that's going to change because now we look and we say hey we're not at the equilibrium anymore people are only demanding 25 million tons of wheat now so what is going to happen is we are going to have our old friend the deadweight loss we're going to have it right here this amount I'm coloring in here so this is all going to be lost value this dark blue spot is not going to be consumer surplus or producer surplus any longer and let me just draw this out here this is our deadweight loss which is a reduction in or lost total surplus okay and when I say surplus again I'm talking about the consumer surplus and the producer surplus when we add them together we said that was the total surplus for society and it used to be that the total surplus was this whole triangle but now this part of the triangle is gone so our total surplus our consumer and producer surplus has been reduced now there's also been a shift because the price floor is higher so there's actually a transfer this amount used to be for consumers but consumers don't get that anymore they're having to pay a higher price the people who are still consuming wheat and there are a lot of people who are not because the price is too high so all of this orange part is now producer surplus this tiny blue triangle is now the consumer surplus so that has shrunk a lot and then the dark blue belongs to nobody that's a deadweight loss so the effect of this price floor on the wheat even though we might have had good intentions maybe the government said hey look we're going to help farmers out we're going to guarantee them a price of $200 and say it's illegal to sell wheat below that what has happened though is that we had that triangle of total surplus and remember we want to maximize the total surplus and we had that whole triangle but now we don't have this dark blue part of the triangle so the total surplus has been reduced and we have an excess supply this extra wheat lying around
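here is a minimal Python sketch of the excess supply calculation. It fits straight lines through the numbers used above, the $133 and 58 million equilibrium and the 25 and 100 million quantities at $200, which is an assumption about the curves' shapes.

# a minimal sketch: linear demand and supply fitted to the numbers above

def qty_demanded(price):
    # line through (price 133, 58M tons) and (price 200, 25M tons)
    return 58 + (price - 133) * (25 - 58) / (200 - 133)

def qty_supplied(price):
    # line through (price 133, 58M tons) and (price 200, 100M tons)
    return 58 + (price - 133) * (100 - 58) / (200 - 133)

floor = 200
qd, qs = qty_demanded(floor), qty_supplied(floor)
print(round(qd), round(qs), round(qs - qd))  # 25 100 75 -> 75M tons of excess supply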
|
Microeconomics_entire_playlist
|
Scarcity_and_the_Fundamental_Economic_Problem.txt
|
in this video we're gonna discuss the concept of scarcity and the fundamental economic problem so the fundamental economic problem is that people have unlimited wants there's an infinite supply of things that people across the world want and unfortunately we live in a world that has limited resources right so we only have so many resources available to satisfy all these wants so the wants are infinite but the amount of resources that we have is finite right so when we talk about finite that's this concept of scarcity there are scarce resources but there's all these wants and needs and desires that people have and how do we satisfy those and that's what economics is largely about trying to figure out how people make choices and trade-offs so let's just say for example that you wanted to live in a world where everyone had electricity at least everyone that wanted access to electricity had electricity and that everyone could eat organic foods and that everyone had really high-quality health care so that with any kind of health issues they got the best health care that everyone was able to get a college degree get a college education and that they could live in a large home with their families and have air conditioning and heat in the wintertime and you say you know what these are the things that are really important to me I want us to just have all these things for all people in the world who want those things now the reality is that the resources that we have available to meet all these wants are limited right and what I mean by that is this so let's take electricity all right so where do we get electricity from well in a lot of countries right now electricity largely comes from coal and increasingly particularly in the United States it comes from natural gas and so we could say well there's a lot of coal but there's not an infinite supply of coal right so coal might last us hundreds of years but it's not an infinite supply of coal and so you say well we rely on natural gas well there's not an infinite supply of natural gas so then one of you might say well hey wait a minute what about a windmill we could get a windmill and now we've got a source of energy where we're not having to rely on a material that we dig from the earth it's just from the wind right but the thing is there's a finite amount of places that you can put a windmill right there are only so many places that we can put a windmill well space on the earth is limited right so space is going to be limited so the idea is that there's just a limited amount of resources and so it's not to say it'd be impossible to get a lot of people electricity and we're getting more and more people electricity each year but to be able to get everybody electricity is a very difficult issue right there are going to be trade-offs because as we use more coal for electricity for example there might be less coal available to make steel for cars or buildings right and if you want everyone to have organic foods well we have a limitation on how much land there is that's available for agricultural production right we have a lot of land but we don't have an infinite supply of land and a lot of the land we do have is not necessarily fit for growing food and then when we want organic foods we might have lower yields and so forth because we're not using pesticides and so on and even when we think about quality health care we can think about how many doctors do we have available how many doctors will we need for everybody even in the most rural areas to have access to really good high-quality health care in terms of a college education we can think about the number of professors and if you think about health care and education it's really not just about the number of doctors and professors it's that we have to pay these people right the doctors need to be paid and where are we going to get the money and the resources to compensate the doctors for providing the health care and so forth for example in the United States health care is almost 20% of the gross domestic product of the US so it's just a limitation in terms of the number of resources whether those resources be people or natural resources or land or things like that we think about everyone having a large home with air conditioning and heat and so forth well it depends on how you build a home but assuming that you use bricks there's just a finite supply of bricks or of wood or land where we can put homes right because if we need a lot of land to do agricultural production well then there's going to be less land available to build homes right and it's not to say that we can't increase the number of wants that we're satisfying definitely over time we can have advances in technology right so new technologies can allow us to satisfy more and more of people's wants but the bottom line is that we're always going to have an unlimited number of wants if we counted up all the different things that all the people in the world want we're going to be trying to satisfy those wants with a limited amount of resources
|
Microeconomics_entire_playlist
|
Positive_Externalities_in_Economics.txt
|
in this video we're going to discuss positive externalities in economics so a positive externality exists when you do something that provides a benefit to somebody else but that person doesn't do anything to reward you for what you've done so you don't get any reward you don't get any payment or anything even though you've provided that person with a benefit so let's say that you live next door to this person and you decide that you want to plant some beautiful flowers in your garden you make these beautiful flowers that are a lot prettier than what I just drew and your neighbor happens to be selling their home so they have a for sale sign up and people come by to look at your neighbor's home and they say wow look at this beautiful property next door I think I would really like to live here this is such a nice neighborhood you have provided some benefit to your neighbor your neighbor has benefited from this they might be able to sell their home for more if you do a good job taking care of your home however you're not compensated by your neighbor you can't go over and say hey look what a great job I did maintaining my lawn I really think you should give me 500 bucks right it doesn't work that way so you've provided some benefit but you haven't reaped any reward for what you've done so let's take an example of education so education is a common example of a positive externality and I want to graph this out so you can see exactly the nature of the externality so let's say that this axis is price and then we've got quantity and we can think about quantity as being let's say the number of college graduates or the number of people graduating from college in a given year so we'll think about that as our quantity of education now we can look and say okay there's a marginal social cost curve that maps out the marginal social cost the cost of providing one additional college graduate to society right there's a cost there there's tuition and so forth but then we can also think about the marginal social benefit or the demand curve you can just think about that as demand and so there's some incremental benefit that we get as a society this is the benefit to all of our society from having more college graduates but when an individual person is deciding hey do I want to become a college graduate or not they are thinking about their own costs and benefits right they're saying okay well look if I decide to go to college I could have higher wages I could have higher income I will have hopefully more knowledge and so they're thinking about the benefits that they're going to get and there's nothing wrong with that they're just thinking about hey what are the benefits in it for me if I decide to go to college because it's going to cost me money it's going to cost me time and so forth so this person is considering their private benefits we can think about increased wages and so forth as their private benefits and now we can map those out as well we can say okay well what is the marginal private benefit to a person of becoming a college graduate so let's say that this is the marginal private benefit this MPB marginal private benefit and you'll see the marginal private benefit at any given level now where would our equilibrium be well in a perfect world our equilibrium our socially efficient level would be where marginal social cost equals marginal social benefit right so this would be the socially efficient quantity of college graduates let's say that it's 10 million people or something let's just throw out a number there now the actual amount produced by the free market would be something less than the socially efficient or socially optimal number of college graduates and let's just say that that's 7 million this is the amount that's produced this is not efficient but it's the equilibrium this is what the free market left to its own devices creates 7 million college graduates and the reason is that when we think about this quantity right here 7 million if we think about price and I know it's hard to think about price in terms of the benefit of a college education and so forth but let's just say we could put a number on it through increased wages and so forth at this quantity there's more marginal social benefit than marginal private benefit but this person when that individual is deciding hmm let me weigh the costs and benefits of going to college they're only thinking about their private benefits they're not thinking about social benefits so the marginal social benefit is going to include the private benefit plus what are called external benefits those are benefits to people other than the person going to college for example let's say that this person goes to college and graduates from college now they're less likely to commit crime right so the more people that graduate college and so forth theoretically the less likely they will be to engage in crime and because they don't engage in crime that's a benefit to them of course they don't have to go to jail or suffer any consequences but it's also a benefit to all of us because we're less likely to be the victim of a crime committed by this person so actually we are better off the more people get educated because it might reduce crime but when this person is making the decision of hey what are the costs and benefits of me going to college they're not thinking about hey other people would be less likely to be the victim of a crime that I commit at some point so they're not considering the marginal social benefit and the social benefit includes both private benefits and external benefits they're only considering their own private benefit the marginal private benefit of going to college and that's why we have this situation where we end up with an inefficient number of college graduates and so it's basically because the marginal social benefit is actually greater than the marginal private benefit for the person but there's no incentive for the person to say hey what about reducing crime for everybody else and so forth they're not thinking about those things and so what's going to happen is that education is going to be undersupplied that's what we're talking about when we say that the socially efficient amount of college graduates would be 10 million but the actual amount is 7 million so we're short by 3 million we say hey education is being undersupplied here and so there are a number of ways that the government could come in and say hey we've got a market failure we need to do something for example they could subsidize education and in fact that's what many countries do and we'll talk about that in videos to come
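here is a minimal Python sketch with made-up linear curves chosen so the answers reproduce the 7 million versus 10 million example above. The market equates the marginal private benefit with the marginal cost, while the socially efficient quantity adds the external benefit on top.

# a minimal sketch with hypothetical coefficients (quantities in millions)

msc = lambda q: q               # marginal social cost per graduate
mpb = lambda q: 14 - q          # marginal private benefit: higher wages etc.
external = 6                    # external benefit per graduate: less crime etc.
msb = lambda q: mpb(q) + external

def crossing(benefit, cost, lo=0.0, hi=20.0, tol=1e-9):
    # bisection: find the quantity where the benefit curve meets the cost curve
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if benefit(mid) > cost(mid):
            lo = mid
        else:
            hi = mid
    return round(lo, 3)

print(crossing(mpb, msc))  # 7.0  -> market outcome, education under-supplied
print(crossing(msb, msc))  # 10.0 -> socially efficient number of graduates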
|
Microeconomics_entire_playlist
|
Law_of_Demand.txt
|
in this video we're going to talk about the law of demand so the law of demand says that all other things held equal if there's an increase in the price of a good or service that's going to cause a decrease in the quantity demanded of that good or service and conversely if the price of a good or service goes down if it decreases people are going to demand more of the good right so if your favorite pair of shoes goes on sale and there's a decrease in price people are going to want to demand and buy more of it right so another way of thinking about it is you could say other things being equal ceteris paribus as the price of a good rises people are going to buy less of it and as the price of a good falls people are going to buy more of it now there are certain exceptions there are these things called Giffen goods and so on but this is such a frequent relationship that we see in actual practice that we actually call it a law we call it the law of demand so I want to show you how we would map this out and I'm going to show you a thing called the demand schedule so let's think about the demand for chocolate bars let's think about your personal demand for chocolate bars actually so I'm hoping you like chocolate and so let's say that you go to the store and the price of chocolate bars is $1 or one euro whatever your currency is you would be willing to buy 10 chocolate bars so that would be your quantity demanded so let me put here this is our quantity demanded at the price of $1 you would demand you would want to buy 10 chocolate bars now if you went to the store and instead the price was $2 you would instead want to buy eight chocolate bars because now you're seeing that hey there's a higher price it's $2 instead of $1 so I still like chocolate bars but I'm not going to buy 10 right so I'm going to buy eight so we can think about well what if the price were to go up to $3 or what if the price went to $4 or $5 and we can map out the quantity of chocolate bars that you would demand that you would want to buy at each price we could go through the different prices and now what we're going to see is that the quantity that you demand is going to go down as the price increases so as the price goes from a dollar to $2 all the way up to five now at five you say look I really like chocolate bars but for $5 a piece I really would only get two of them and so now that we have this what we call demand schedule we can actually create a graph that would show your demand right so we can put P price on the y-axis and then quantity demanded Q on the x-axis and you'll see this very commonly in economics so we can go and say okay at a price of $1 you demand 10 so we'll just plot a little point there we'll say okay that's at one and 10 now at $2 you would demand eight chocolate bars and then at $3 you would demand six chocolate bars and at $4 you would demand four chocolate bars and at $5 you would demand just two chocolate bars now notice something this is your demand curve let's make sure that's a straight line there it doesn't have to be a straight line but in this example it is and so what you see and when you study economics you'll see this someone will say why don't you plot the demand curve and the demand curve is going to be downward sloping right so we have a downward sloping demand curve and what does that reflect that reflects our law of demand and that is that as the price of this good so in this case chocolate bars as the price increases from one to two to three to four to five you are demanding less and less there are fewer and fewer chocolate bars that you are willing to buy so that's the law of demand now we'll talk about in future videos what can happen there could be a change in preferences there could be something that happens that makes you want chocolate bars even more maybe your favorite musician starts eating chocolate and you think ah that's the cool thing to do and we could have a shift in the demand curve and so then we have to draw a new demand curve right and we'll talk about that in videos to come
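the demand schedule above translates directly into a few lines of Python. Here is a minimal sketch that stores the schedule and checks that quantity demanded falls as price rises, which is the law of demand showing up as a downward-sloping demand curve.

# a minimal sketch: the demand schedule from above as a dict of price -> bars

demand_schedule = {1: 10, 2: 8, 3: 6, 4: 4, 5: 2}

prices = sorted(demand_schedule)
quantities = [demand_schedule[p] for p in prices]

# quantity demanded strictly decreases as price increases: the law of demand
assert all(q1 > q2 for q1, q2 in zip(quantities, quantities[1:]))

for p in prices:
    print(f"price ${p}: demand {demand_schedule[p]} chocolate bars")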
|
Microeconomics_entire_playlist
|
The_Economics_of_Airline_Safety.txt
|
in this video we're going to talk about the economics of airline safety. So if you've ever flown on an airplane, you might have wondered how safe the airplane was, and how the airline, or a regulatory agency such as the FAA in the United States, made decisions about how safe the airplane should be. Basically, this is all about trade-offs, because we could add an infinite number of safety features to planes. We could add backups to this or that, we could add extra pilots; we could have ten pilots on the plane just in case one of the pilots has a heart attack, or just for the extra expertise. We could do all these different safety features to try and make flying as safe as possible. But from the firm's perspective, or from the perspective of a regulatory agency, however you want to view it, we're only going to want to add a safety feature if the incremental benefit we get from that additional safety feature outweighs the incremental cost. An incremental benefit is sometimes called a marginal benefit; we make decisions in economics at the margin. So we're going to look at adding an additional pilot: what's the extra benefit and what's the extra cost? If the extra benefit outweighs the extra cost, then we should probably be doing it.

So let's think about it from the perspective of a firm. You've got an airline, and this airline says: okay, look, we could have different numbers of pilots. We could have one pilot, two, three, even four pilots that we pay to be on this plane just in case there's some kind of issue. But each time we add a pilot, going from one to two to three to four, the marginal cost, the incremental cost of having an additional pilot, is going to go up. Let me graph it out to make it a little more visual: this would be the marginal cost curve of adding an additional pilot. Now, why is the marginal cost going up? You might say, okay, we add another pilot, so we have to pay another pilot; that makes sense, but why would the marginal cost be rising? Well, think about a larger plane that normally has two pilots. What if we went to three? Not only do we have to pay an additional pilot's salary and so forth, we might have to expand the size of the cockpit. Maybe the cockpit was only designed for two pilots, so when we go from two to three we have two problems: we have to pay that additional pilot, and we have to make changes to the plane. So we can say that the marginal cost of adding a pilot goes up as we add more and more pilots.

If the marginal cost is going up, we can also think about what happens to the marginal benefit. The marginal benefit of having one pilot instead of zero is huge; you don't want to be on a plane that doesn't even have a pilot. If we had zero pilots, nobody would want to fly at all, because it wouldn't be safe, so going from zero to one there's a huge benefit; the marginal benefit of the first pilot starts way up here. And I know it's hard to think about marginal benefit in terms of dollars, but bear with me. As we add more pilots, the incremental benefit, the marginal benefit from adding an additional pilot, gets smaller and smaller. Going from one pilot to two, there's still a real incremental benefit: if pilot one has a heart attack, it's good to have a second person there to help out, with some extra expertise and so forth. But now we get to the third pilot. We've already got two pilots, so is the third pilot helping that much? You might say, well, maybe it's a really wise pilot and it's great to have three just in case. But now we go to four pilots, and it's like, okay, how many pilots are we going to have on this airplane? We have to do all these different things to accommodate them; the marginal cost is going up but the marginal benefit is going down. We don't want 40 pilots on the plane. That would probably be a very safe plane in terms of expertise and wisdom up front, but the marginal benefit is going down, and we might even think the pilots start having disagreements the more of them there are. So the marginal benefit of adding the safety feature of more pilots goes down while the marginal cost goes up, which means there's a point where the marginal benefit equals the marginal cost, and that's the point we want. If I map this out (I should have drawn it a little better), it probably comes out to maybe 2.3 pilots or something; obviously we can't have that, but roughly it comes out to two or three pilots. Two seems right, because that's what most airlines have, at least on a larger plane. With a smaller plane that maybe isn't even designed with a big enough cockpit, or that only carries a few passengers, you might think about one.

Now, I've framed this all from the perspective of an airline, but some of you might be asking: wait a minute, in terms of safety, how many customers of an airline actually go and check the safety record of the airplane? When you go to buy your ticket, you're probably looking at price. You maybe just assume that all airplanes are equally safe, or you figure that this is regulated. Before you get on an airplane you don't go talk to the mechanic and say, hey, when was the last time you inspected this airplane? I want to make sure it's safe. So you don't really have the ability to check that much. There are some things you can do: if you read online that an airline had a crash, that's obviously a big deal, so the airline certainly has an incentive to reduce the number of crashes. But we might say, well, we might actually want some expertise; we might want a government agency, such as the FAA in the United States, or whatever regulatory agency your country has for airplane safety, to say: look, we need to think about the marginal social benefit. Because from the firm's perspective, they're looking at their marginal private benefit and their marginal private cost; the firm is going to optimize where the marginal private benefit to the firm equals the marginal private cost to the firm. But what's actually optimal from society's perspective is an amount of safety where the marginal social benefit, to everybody, not just the airline, is equal to the marginal social cost of whatever safety feature.
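To make the marginal reasoning concrete, here's a minimal sketch in Python with entirely made-up benefit and cost curves: keep adding pilots while the marginal benefit of the next one still covers its marginal cost. The curve shapes (falling MB, rising MC) follow the discussion above; the specific functions and dollar figures are assumptions, not data.

# Minimal sketch: falling marginal benefit, rising marginal cost (made-up curves).
def optimal_pilots(marginal_benefit, marginal_cost, max_pilots=10):
    """Largest n such that every pilot up to n has MB >= MC."""
    n = 0
    for pilot in range(1, max_pilots + 1):
        if marginal_benefit(pilot) >= marginal_cost(pilot):
            n = pilot
        else:
            break
    return n

mb = lambda n: 1_000_000 / n**2    # assumed: huge benefit for pilot 1, then falling
mc = lambda n: 50_000 * 1.8**n     # assumed: each extra pilot costs more than the last

print(optimal_pilots(mb, mc))      # -> 2 with these assumed curves

With these assumed curves the answer lands at two pilots, matching the intuition that large planes carry two; swap in a regulator's marginal social benefit curve and the same comparison gives the socially optimal level instead of the firm's private one.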
|
Microeconomics_entire_playlist
|
Externalities_in_Economics.txt
|
in this video we're going to discuss what externalities are in economics. So an externality is when you do something that affects the well-being of another person or a company, but you're neither charged nor rewarded for what you did to that person. Externalities can be positive or negative. A negative externality is when you've harmed someone, you've done something to impose a cost on some person or company, and you haven't reimbursed that person; you haven't paid them any money or done anything to compensate for what you did. So let's say it's a company we're talking about, a company that manufactures chemicals, and when they produce their chemicals there's some sludge left over that they just dump in a nearby river. And let's say there's a house near the river, and the kids from that home play in the river and get sick. That company has imposed costs on this family; the children are getting sick. If the company hasn't done anything to reimburse that family, or somehow paid them, then they're creating a negative externality: they're harming someone by what they're doing. As they produce the chemicals, they're creating the sludge that's getting into the river and harming these people, but the people aren't being made whole. Another example: let's say you live in an apartment complex and your neighbor on the other side of the wall really loves to play Britney Spears music at 3:00 in the morning. So they're playing all these Britney Spears songs, Hit Me Baby One More Time and all this stuff, at 3:00 in the morning, and you're having to listen to it. You say, hey, I've got an exam tomorrow and I can't study because this person is constantly listening to this music. So they're creating a cost, they're doing harm to you, but they haven't reimbursed you. Now, if they had said to you, look, I just really love Britney Spears, I need to hear Britney Spears at 3 a.m., and you said, okay, you know what, I'll deal with this but I need you to give me an extra 50 bucks a month toward my rent, and you work out an agreement, that's different. But we're assuming here they haven't done anything to reimburse you; they're not paying you, they're just doing something that harms you, and you're not getting any benefit for it.

Now, a positive externality is where you're doing something that doesn't harm someone but actually benefits that other person. You're doing something good that, as a side effect, helps some other person or people, but those people aren't turning around and compensating you for it. You're helping yourself, and it has this side benefit of helping other people, but those people aren't doing anything for you. For example, when you get a flu shot or another type of vaccine, you're doing something that helps yourself; you're saying, hey, I want to avoid getting the flu, so I'll just get this flu shot. But you're actually helping other people as well, because if you don't get the flu, other people are less likely to get the flu too. The people who work around you and all these other people benefit from what you've done, but they're not compensating you; they're not saying, hey, I'm really glad you went and got the flu shot, here's a dollar. And so what happens is that because you're not receiving the full social benefit, you're just getting your own private benefit, goods with a positive externality are going to be undersupplied. If you were actually paid, if people said, hey, I really like what you did, and you reaped the full social benefit of what you did, you might be more likely to get a flu shot. Or think about your house: let's say you live in a neighborhood with other homes nearby, and you do a decent job maintaining your lawn, you mow it and such, but you don't really spend a lot of time making your house look pretty. Now, if your neighbor is trying to sell their house, they have a for-sale sign up, they might appreciate it if you went out and really did a great job maintaining your home. They would love that, because when people come to see their house, which is for sale, that would increase the value of their home; if the neighboring properties like yours look really nice, that helps them sell, because the neighborhood looks great. But you don't have an incentive to do that. Why? Because you're only considering your own private benefit; you don't benefit if their home goes up in value. So if you do something nice for your home, that would help their home's value, but they wouldn't necessarily turn around and compensate you. That's a positive externality, and for that reason, goods with a positive externality will be undersupplied. In situations where you have a negative externality, like pollution, the good will be oversupplied relative to what is socially efficient or optimal.
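As a rough illustration of that last point, here's a toy sketch with made-up numbers of why a good with a positive externality gets undersupplied: buyers compare only their private marginal benefit to the price, while the socially optimal quantity also counts the external benefit to others. The price, benefit schedule, and external benefit below are all assumptions.

# Toy numbers: private MB of the q-th flu shot falls; each shot also
# gives an external benefit to other people that the buyer ignores.
PRICE = 30           # assumed cost of one flu shot
EXTERNAL_MB = 25     # assumed benefit to *others* per shot

def private_mb(q):
    return 100 - 10 * q

def quantity_bought(mb):
    """Buy shots as long as the next one's marginal benefit covers the price."""
    q = 0
    while mb(q + 1) >= PRICE:
        q += 1
    return q

q_private = quantity_bought(private_mb)
q_social = quantity_bought(lambda q: private_mb(q) + EXTERNAL_MB)
print(q_private, q_social)   # 7 9: the market undersupplies by two shots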
|
Microeconomics_entire_playlist
|
Production_Efficiency_in_Economics.txt
|
in this video we're going to talk about production efficiency in economics. So production efficiency refers to a situation in which the economy is producing so much with the resources it has that it cannot produce an additional unit of one good, let's say food, without decreasing production of another good, let's say steel. For example, here's our PPF curve, our production possibilities frontier, for an economy producing food and steel. We could conceivably produce 100 million tons of food if we produce zero tons of steel, or we could produce 50 million tons of steel and no food. All the combinations along this curve are efficient in production. Let's take one: say right here, 90 million tons of food, and at that level we'd be able to produce 40 million tons of steel. So let's take that point, 90M food and 40M steel, and I want to prove to you that it's efficient in production using the definition. At this point we're getting 90 million tons of food and 40 million tons of steel. Suppose we wanted to produce an additional ton of food, say 91 million. We could, but we would have to decrease production of steel, because 91 million tons of food with 40 million tons of steel would be a point out here, and that point is not feasible. All the points out here are not currently feasible given the resources we have; we only have so many resources, so many people, so much capital and so forth, to produce so much food. So this point, as well as all the other points along the curve, is efficient in production, because at any point on that curve you could not produce any more food without giving up some steel, or alternatively, you couldn't produce any more steel without giving up some food. That's what efficient in production means. Now, all the points on the interior are inefficient, and what I mean by inefficient is the following. Take this point right here, and let's say it corresponds to 70 million tons of food and 40 million tons of steel. That is inefficient because, look, we could increase from 70 million to 90 million, gaining 20 million additional tons of food, without giving up any steel. We'd go from (70, 40) to (90, 40); steel production does not decrease as we get from 70 to 90. But once we are on the curve, on the PPF, that is no longer possible; it's efficient because we've maxed out. So think of efficiency in production as being maxed out: we cannot produce any more of one good without decreasing our production of the other.
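Here's a small sketch of that three-way distinction (infeasible, inefficient, efficient). The frontier below is an assumed concave curve sharing the video's endpoints (100M tons of food or 50M tons of steel), not the exact curve drawn, so the example points differ slightly from the 70/90 numbers above.

import math

def max_food(steel):
    """Assumed concave PPF: 100M tons of food at zero steel, zero food at 50M tons."""
    return math.sqrt(100**2 - (2 * steel) ** 2)

def classify(food, steel):
    limit = max_food(steel)
    if food > limit:
        return "not feasible"              # outside the PPF
    if math.isclose(food, limit):
        return "efficient in production"   # on the PPF: more food costs steel
    return "inefficient"                   # interior: more food costs nothing

print(classify(60, 40))   # efficient in production
print(classify(40, 40))   # inefficient: food could rise to 60 with steel unchanged
print(classify(80, 40))   # not feasible with current resources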
|
Microeconomics_entire_playlist
|
The_Economics_of_Pharmaceutical_Drug_Testing.txt
|
in this video we're going to talk about the economics of drug testing. So most countries have some type of regulatory agency that's in charge of approving new drugs. Let's say a pharmaceutical company comes up with a drug intended to treat cancer. Now this regulatory agency, which in the United States would be the Food and Drug Administration, the FDA, has to make an important trade-off. They're basically trading off how long they spend scrutinizing this new cancer drug: do they spend five years, ten years, fifteen years? They want to maximize safety for the public. If they don't scrutinize drugs at all and just let every drug onto the market, there could be a lot of drugs that end up harming or killing people. So they want to scrutinize this new cancer drug and make sure it's safe. But by the same token, the longer they spend testing the drug, the more they're creating a delay that leads to lives being lost. So either way there can be lives lost: if you don't do any testing at all, you have lives lost due to safety, and if you delay this drug for years, there are people who have cancer and could potentially be cured, but they're not getting the drug due to the delay, so lives are being lost there too. So the regulatory agency has to make this trade-off and figure out what's in the best interest of the public.

I want to sketch this out for you graphically. Let's say that on the y-axis we have the number of deaths, with a hundred thousand right here and a million deaths up here (this isn't to scale, forgive me), and the x-axis is the number of years of testing: one year, two years, three years and so forth. First, think about the deaths due to the drug being unsafe. If we only do one year of testing, it might be that the drugs we approve lead to a million deaths due to safety issues, because we only looked at the drugs for a year and a lot of people end up dying. But as we go to five years of testing, we have an amount here which is less than a hundred thousand, say fifty thousand or twenty-five thousand, some small amount. So the number of deaths due to unsafe drugs decreases as we do more and more testing. However, there's another curve we have to think about: the deaths due to the delay of not getting the drug onto the market fast enough. Let me sketch that out; this curve is deaths due to delay. Again, when I say delay, I mean that the FDA, or whatever agency, didn't approve the drug fast enough, and so while the drug was still being tested there were people dying who might have lived had they gotten it. Now, from society's perspective, what we want to do is minimize the number of deaths. It doesn't matter from society's perspective whether the deaths are due to delay or due to an unsafe drug; society cares about the total number of deaths. So what we can do is add the two up. For example, with one year of testing, to get the total number of deaths we ask: what were the deaths due to delay? Let's just throw out a number and say 50,000. And with one year of testing there's a million deaths due to unsafe drugs. So 50,000 due to delay plus a million due to unsafe drugs is 1,050,000 total. Now I'm going to draw the total; let me choose a color here, let's go with red. The total curve looks something like this. All I did was, at each year (I didn't put numbers for all of these, but you get the idea), add up the deaths due to unsafe drugs and the deaths due to delay, and graph the total. From society's perspective we want to minimize the number of people who die; it's pretty common sense. So where do we have the lowest number of deaths? Right here. Now, I happened to pick a point where it's a really high number, which is kind of depressing, but let's say this is nine hundred and seventy-five thousand deaths. If we draw that down to the x-axis, it comes out to three years of testing. That's basically saying: when we account for deaths due to delay, people dying because they didn't get access to the drug fast enough, and deaths due to unsafe drugs, three years of testing is the amount that minimizes the number of people who die from either cause.

Now here's an issue: that's what's best from society's perspective, but what about the regulator? Like I said, in the US that's the FDA; whatever your country is, think about its regulatory agency and ask what is in their best interest. You might think, hey, they're a government agency, they just want to do what's best for society. Sure, there are probably a lot of people there who feel that way, but by the same token there are a lot of pressures on regulators. For example, when there are deaths due to unsafe drugs, say a thousand people died because some drug got approved and ended up killing people, there's a lot of scrutiny on the regulator: why did you approve this drug, what are you doing, these people need to be fired, we need to get rid of the head of the regulatory agency, and so forth. But when there are deaths due to a delay, it's not quite as publicized. Deaths due to unsafe drugs are more public, there's a recall and everything, and that can lead to more issues for the regulatory agency. So while three years of testing is the socially optimal level, maybe the regulator goes for five, and five ends up being the actual level. So you have an inefficient outcome: five years of testing when the socially optimal level is three. And it's because you have to think about the incentives of the regulator. It's not that they're bad people, or that they don't care about people dying, but they want to keep their jobs, and they're very sensitive to the fact that if they approve a drug too quickly, yes, people are going to be helped, lives are going to be saved by avoiding unnecessary delay, but there's a chance an unsafe drug ends up on the market, and that leads to a lot more publicity and public outcry about whether they're doing their job. So you might have a situation where the actual number of years of testing is higher than the amount that's socially optimal.
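A minimal sketch of that society-level calculation follows. Only the one-year figures (50,000 delay deaths, 1,000,000 safety deaths) come from the video; the other schedule entries are made up, chosen so the minimum lands at three years as described.

# Made-up schedules: deaths from unsafe drugs fall with testing,
# deaths from delayed access rise with testing.
deaths_unsafe = {1: 1_000_000, 2: 500_000, 3: 150_000, 4: 60_000, 5: 25_000}
deaths_delay  = {1: 50_000,    2: 200_000, 3: 500_000, 4: 900_000, 5: 1_400_000}

total = {y: deaths_unsafe[y] + deaths_delay[y] for y in deaths_unsafe}
best = min(total, key=total.get)
print(best, total[best])   # -> 3 years: the social minimum with these numbers
print(total)               # {1: 1050000, 2: 700000, 3: 650000, 4: 960000, 5: 1425000}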
|
Microeconomics_entire_playlist
|
3_Types_of_Economic_Efficiency.txt
|
assuming there's not a market failure such as a monopoly or an externality, we assume that competitive markets are going to produce an efficient outcome; the economy is going to be efficient. But what do we mean when we say efficient, or efficiency? Well, there are three types of efficiency we can think about in terms of the economy: efficiency in consumption, efficiency in production, and product mix efficiency. All three are essential to have Pareto efficiency, and with Pareto efficiency what we're basically saying is that when we're at an allocation that is Pareto efficient, there's no way we could make somebody in the economy better off without making at least one person worse off. But let's get back to our three types of efficiency. I want to start with efficiency in consumption, which you'll sometimes see referred to as exchange efficiency. Exchange efficiency just means that the goods produced in the economy end up in the hands of the people who value them the most. Let's say, for example, that I really like cookies and you really like cake, and there's one cake and one plate of cookies in our economy, and those are the only goods. You should get the cake and I should get the cookies. Now, if it started out where I actually had the cake and you had the cookies, we would just trade, and we would trade to the point where we reach an outcome that's efficient in consumption: because I value cookies more and you value cake more, we would simply make the swap. When we are at an allocation that is efficient in consumption, there is no further scope for mutually beneficial trades, because they've all been exhausted; we've already made the trades. So when we're at the point where I have the plate of cookies and you have the cake, we'd say the economy is efficient in consumption: the people who value the cookies and the cake the most are the people who have them.

Now, efficiency in production means the following: at an allocation that is efficient in production, you cannot produce any more of one good without producing less of another good. Let's say we had an economy with ice cream cones and slices of pizza, and let's say that if we completely focused on producing ice cream cones we'd produce 200 ice cream cones and zero slices of pizza, while if we focused only on pizza we'd have 100 slices of pizza and zero ice cream cones. This curve is called the production possibilities frontier, and any point along it is efficient in production. Let's do an easy example and focus on one point: say we're producing right here, where 200 ice cream cones are produced and zero slices of pizza. If we wanted to produce one slice of pizza, to increase from zero to one, we would have to decrease the number of ice cream cones. At this point we cannot produce any more than zero slices of pizza without going from 200 to, let's say, 198 ice cream cones; we have to give up some ice cream cones to get some slices of pizza. So this point, along with every point on the curve, is efficient in production, because at all those points you couldn't get any more of one good without decreasing the amount of the other.

Now, product mix efficiency, which is also called allocative efficiency, has to do with people's preferences in the economy: the goods that are produced go to the people who actually want or need them. It's different from efficiency in consumption, and I know it sounds similar, but let me give you an example of how it's different (there's also a sketch of the exchange-efficiency idea just after this). Let's say we had an economy that produces left shoes and right shoes. If the economy focused specifically on left shoes and produced no right shoes at all, we would have 50 left shoes and zero right shoes; conversely, if we focused only on right shoes, we'd have 50 right shoes and zero left shoes. Now, producing 50 left shoes and zero right shoes would be efficient in production (and here I'm talking about efficiency in production, not allocative efficiency). Why? Because at that point we could not increase our production of right shoes without decreasing our production of left shoes. So that point would technically be efficient in production, but it wouldn't be allocatively efficient; it wouldn't have product mix efficiency, because who wants an economy where there are only left shoes? It doesn't correspond to people's preferences. The resources need to be allocated to their highest-value use, and the goods produced need to actually correspond to those that people prefer; it can't just be productive efficiency. People don't want only left shoes; they want pairs of shoes, a left shoe and a right shoe. If we were deciding this properly, we'd map out the indifference curve (we can get into this in another video if you don't quite follow), and the point where the indifference curve, assuming everybody in the economy's tastes were the same, is tangent to the PPF is the point where we'd want to produce. Just hypothetically, say that's 35 left shoes and 35 right shoes. That would be a better example of allocative efficiency than an economy of entirely left shoes: 50 left shoes and zero right shoes might be efficient in production, but it's not allocatively efficient, because nobody wants to buy just left shoes; you want to buy a pair.
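Here is the toy sketch of exchange efficiency mentioned above: an allocation is efficient in consumption once no mutually beneficial trade remains. The names and valuation numbers are invented purely for illustration.

# Two people, two goods; each person's (assumed) valuation of each good.
valuations = {
    "me":  {"cookies": 10, "cake": 3},
    "you": {"cookies": 2,  "cake": 9},
}

def beneficial_swap(alloc):
    """Return the pair of people who would both gain by swapping, else None."""
    (a, good_a), (b, good_b) = alloc.items()
    if valuations[a][good_b] > valuations[a][good_a] and \
       valuations[b][good_a] > valuations[b][good_b]:
        return (a, b)
    return None

print(beneficial_swap({"me": "cake", "you": "cookies"}))   # ('me', 'you'): trade!
print(beneficial_swap({"me": "cookies", "you": "cake"}))   # None: efficient in consumption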
|
Microeconomics_entire_playlist
|
Expanding_the_Production_Possibilities_Frontier_PPF.txt
|
in this video we're going to discuss how the production possibilities frontier can be expanded. So let's say we had an economy that produces food and natural gas. If this economy focused strictly on producing food it would produce a hundred million tons of food, and if it strictly produced natural gas it would end up with 50 million tons of natural gas. We know that all the points along this line, our PPF curve, are efficient in production, which means that at any point you pick, you could not produce an additional unit of food without giving up some natural gas, and vice versa; we've talked about that. So this is basically the maximum: the different combinations of food and natural gas we could produce given our current level of resources. Now the question is, how does this ever get bigger? Can we expand this PPF going forward, so that we could, for example, produce more than a hundred million tons of food or more than 50 million tons of natural gas? The answer is yes; there are a couple of things that could happen to shift this PPF outward, letting us produce more natural gas, more food, or more of each.

One way is a technological breakthrough that lets us produce more of one or both of these goods. Take natural gas, and say a new technology comes out; you've heard of fracking, right? So fracking comes along and we say, now we can access reserves of natural gas that previously we couldn't reach. Whereas before, the maximum amount of natural gas we could achieve, with the economy completely devoted to natural gas production, was 50 million tons, maybe now it goes to 75 million tons because of this new technology. We've basically unlocked the ability to produce a lot more natural gas than before: focusing strictly on natural gas, due to the new technology we can now produce 75 million tons, whereas before the maximum, no matter what we did, was 50 million tons.

Another thing that can shift the PPF outward is capital accumulation. What do I mean by capital accumulation? We're basically trading off current consumption for investment. Say that instead of food we have bread, and instead of natural gas we have ovens (I know these numbers are a little weird, but say 100 million loaves of bread is the max we could produce, or 50 million ovens). Remember, all the points along the PPF are efficient in production, but we can choose any combination to be the one we actually produce. So suppose in year one we say, you know what, we're going to focus a lot on ovens, and we choose some point over here with a lot of ovens but not as much bread; we'd have not that much bread, but we'd be close to the maximum number of ovens. So in year one we invest a lot in ovens. In year two, we've now accumulated some capital; the ovens are capital. Because we have more ovens, we can produce more bread than before, so the curve ends up coming out to something like this, and maybe now it's a hundred and ten million loaves of bread. We've expanded: if we focus strictly on bread in year two, we can produce 110 million loaves instead of the hundred million from before, because we've accumulated capital. What's the trade-off? When we do this in year one, we're saying, okay, we're going to eat less bread; that's current consumption. We're trading off consumption and investing in capital instead, and that capital, in this case ovens, allows us to consume more in the future.

Now, there's actually an additional way that isn't really expanding the PPF but would allow you to consume at a point outside it, beyond the frontier, because all these points out here are not feasible given the current level of resources; we can't reach any of them (although if we expand the PPF through capital accumulation or technology, then we can reach some of them). That third thing, which would not expand the PPF but would allow us to consume at a point outside it, is called specialization and trade. If we specialize in producing goods where we have a comparative advantage, say in producing bread or whatever, and then trade with other countries, we can actually consume at one of those points that was previously not feasible. We'll talk about that more in the videos to come.
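As a quick sketch of the outward shift, here's a toy feasibility check. The frontier is assumed linear for simplicity (the video draws a bowed-out curve), with the endpoint numbers stated above.

def feasible(food, gas, food_max=100, gas_max=50):
    """Linear PPF (an assumption): True for combinations on or inside the frontier."""
    return food / food_max + gas / gas_max <= 1

print(feasible(40, 40))               # False: outside the original frontier
print(feasible(40, 40, gas_max=75))   # True: feasible after the fracking breakthrough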
|
Microeconomics_entire_playlist
|
Public_Goods_in_Economics.txt
|
in this video we're going to discuss what public goods are in the context of economics. So a public good is any type of good that is both non-rivalrous and non-excludable. What non-rivalrous means is that the marginal cost of providing the good to one extra person is zero. Take national defense as an example, say Spain's military, and suppose you're thinking of moving to Spain. If you move to Spain, what is the marginal cost of Spain's national defense? It's going to be zero. If you move to Spain tomorrow, it's not as if the Spanish government is going to say, hey, we need to get some more military, this person just moved here. The marginal cost of national defense for one extra person is zero; whether there's one extra person or not, it's still the same cost, and so we call it non-rivalrous. Something that would be rivalrous, for example, is a slice of pizza: if I eat a slice of pizza, you cannot eat that slice, because I just ate it. So it's rivalrous. National defense, or things like a streetlight, thinking at the local community level rather than the national level, are non-rivalrous: if we put a streetlight on the street, any number of people can use it, they can walk under it at night, and if one extra person moves into the neighborhood and occasionally walks under that streetlight, there's no marginal cost; it doesn't cost any more. Now, the second characteristic of a public good is that it is non-excludable, which basically means you cannot prevent anybody from benefiting from the good, from enjoying it. Again, if you move to Spain, they can't say, you know what, if you aren't willing to pay for national defense, then you're not going to benefit from national defense. It doesn't make any sense, because if Spain were attacked, the military would defend the country regardless of whether this extra person was willing to pay or not; there's no way to keep them from benefiting. Think about the streetlight: once you put it in, and someone moves into the neighborhood, there's no way to prevent that person from benefiting from it. Another example of a public good would be clean air. Say there was some invention you came up with that would allow you to clean the air for your neighborhood. Clean air is non-rivalrous: once you've provided the clean air with your new invention, what's the marginal cost of an additional person enjoying it? Zero; it costs nothing, and any number of people in the neighborhood can enjoy the clean air once you've done it. And it's also non-excludable: you can't prevent somebody in the neighborhood from breathing that clean air. So clean air itself is a public good. Now, there's an issue with public goods: they tend to be undersupplied, and the reason they're undersupplied, or not supplied at all in some cases, is something known as the free-rider problem. The free-rider problem is basically somebody saying, hey, if this good is going to be provided anyway, and there's no way you can exclude me from enjoying it, then why should I pay? Why pay for something that somebody else is just going to provide, when there's really no way to prevent me from enjoying the benefits of the streetlight, or the clean air, or national defense? What incentive do I have to pay for that? There are a couple of different solutions, but basically this provides a rationale for government to get involved: one solution is for the government to come in and say, look, we're going to provide this public good ourselves, and then we're going to tax people to pay for it, or charge user fees or something, and that way we can ensure that the public good is provided.
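A tiny sketch just to pin down the two-part definition; the example goods in the comments are the ones discussed above.

def is_public_good(rivalrous, excludable):
    """Public goods are both non-rivalrous and non-excludable."""
    return (not rivalrous) and (not excludable)

print(is_public_good(rivalrous=False, excludable=False))  # national defense -> True
print(is_public_good(rivalrous=True,  excludable=True))   # slice of pizza   -> False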
|
Microeconomics_entire_playlist
|
How_to_Find_the_Equilibrium_using_Suppy_and_Demand_Curves.txt
|
in this video we're going to talk about how to find the market equilibrium by using supply and demand. So in a free market, the price of a good or service is determined by the forces of supply and demand, and the price we reach that way is called the equilibrium price: the price where the quantity of the good demanded is equal to the quantity supplied. I want to give you an example to show you how this works. Let's take the market for chocolate bars and think about supply and demand at different prices. Say at a price of $1, consumers would demand, they would want to buy, 15 chocolate bars, but producers would only be willing to supply 3. We can do the same for other prices: for each price, what is the quantity demanded by consumers, and how much are producers willing to supply? Then we can plot those data points in a graph, and that will tell us where our equilibrium is. Okay, let's do our demand curve first. At a price of $1 the quantity demanded is 15, so we've got 15 at $1. At $2 the quantity demanded would be 12; at $3 it would be 9; at $4 it would be 6; and at $5 it would be 3. This is our demand curve; I'll put a little D so you remember. It's downward sloping because of the law of demand: as the price goes up, quantity demanded goes down. Now let's put together our supply curve, and I'm going to change colors here. At a price of $1, producers are willing to supply 3 chocolate bars; at $2, 6; at $3, 9; at $4, 12; and at $5, 15. Let's draw a line; this is our supply curve, and I'll label it S. So we've got a supply curve and a demand curve, and the two curves intersect right here, at the point (3, 9). So the equilibrium price is P = 3 and the equilibrium quantity is Q = 9. What does that mean? It means that absent any government intervention or other issues, in a free competitive market we would arrive at a price of $3 for our chocolate bars, the price arrived at purely by the forces of supply and demand, with a quantity of nine chocolate bars ultimately made and consumed. Now let's think about: why not a price of $2? We're saying the equilibrium is right here, at the free-market point, but why not $2? Think about it: at $2, whether we look at the graph or go back to our little supply schedule and demand schedule, the quantity demanded is 12 and the quantity supplied is 6. Look at that gap: at a price of $2, people want 12 chocolate bars but only 6 are being supplied, so if we set the price at $2, there would be a shortage. Conversely, if we were to set the price at $4, we would have a surplus, because producers are willing to supply more than what people are demanding; we would have chocolate bars left over. The magic of supply and demand is that the free market ultimately arrives at this equilibrium, where the quantity demanded equals the quantity supplied. And if you look back at our demand schedule and supply schedule, you see that at a price of $3 there are nine chocolate bars demanded and nine supplied; that's our equilibrium in a free market.
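Here's a minimal sketch of reading the equilibrium off the two schedules from the video: scan the prices for the one where quantity demanded equals quantity supplied, and report the shortage or surplus everywhere else.

# Demand and supply schedules from the chocolate-bar example: price -> quantity.
demand = {1: 15, 2: 12, 3: 9, 4: 6, 5: 3}
supply = {1: 3,  2: 6,  3: 9, 4: 12, 5: 15}

for price in demand:
    qd, qs = demand[price], supply[price]
    if qd == qs:
        print(f"equilibrium: P = ${price}, Q = {qd}")   # -> P = $3, Q = 9
    elif qd > qs:
        print(f"${price}: shortage of {qd - qs}")       # e.g. $2: shortage of 6
    else:
        print(f"${price}: surplus of {qs - qd}")        # e.g. $4: surplus of 6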
|
Microeconomics_entire_playlist
|
Allocative_Efficiency.txt
|
in this video we're going to discuss the concept of allocative efficiency. So when we look at a production possibilities frontier, say for food and clothing, we know that every point along the PPF is efficient in production. What we mean by efficient in production is that at any point, say right here with three units of food and four units of clothing, the point (3, 4), we cannot produce more of one good, for example food, without giving up some of the other good. At (3, 4), right here on the curve, it's efficient in production because we couldn't go to four units of food without giving up some clothing. At (3, 4) we can't do any better in terms of producing; we're using our resources to the maximum. We could get more food, but we'd have to give up some clothing. Okay, so all the points along the curve are efficient in production, but which of these points is best for society? Well, that gets into the idea of allocative efficiency. Allocative efficiency says that the goods produced correspond to those actually desired by individuals. Think about a crazy example: a society with a factory that only produces left shoes. We might get to a point where we're efficient in production and so forth, but do people want a society where we only have left shoes? No, we want left and right shoes, we want pairs. So the goods produced have to correspond to those desired by individuals, and we can find that point. We can use calculus, and we can talk about where an indifference curve is tangent to the PPF: where the indifference curve is tangent to the PPF, that's the point of allocative efficiency. But another way to do it, without any math like that, is to look at the marginal cost curve, which we can derive from the production possibilities frontier, and the marginal benefit curve, which cannot be derived from the PPF, but if you have information about people's willingness to pay, we can use the marginal benefit curve and the marginal cost curve to find the point where marginal cost equals marginal benefit, which is the point of allocative efficiency.

So I want to show you how we would superimpose the marginal cost curve on the marginal benefit curve. In a previous video we mapped out the marginal cost; these numbers are all derived from the PPF up here. So we've got our marginal cost curve, and it's increasing, which explains the bowed-out shape of the PPF: resources are not equally productive. We also have our marginal benefit curve, which is decreasing. Why? People like variety: as they get more and more of one good, they start to value the other good more. You get tons and tons of food and you start to say, ah, I'm not getting as much benefit from food as when I had none and you gave me the first piece; that's when you get the most benefit. So we've got decreasing marginal benefit and increasing marginal cost, and we want to know: where is the point of allocative efficiency? What amount of food and clothing would make everyone best off? What we can do is take the marginal cost curve and map it onto the marginal benefit graph. I labeled this y-axis marginal benefit, but let's relabel it marginal cost or marginal benefit, or you could just put a dollar sign, however you want to think about it. I'll keep the marginal benefit curve here and also draw the marginal cost curve, using the data from right here. When we have zero food the marginal cost is one, so that puts us at (0, 1); at one food the marginal cost is two; at two food the marginal cost is three, which gives us another point (I'll put that in a different color); and at three food the marginal cost is four, which corresponds to this point right here. Now I'll draw a line through these: this is our marginal cost curve, and here we've got our marginal benefit curve. You notice that they intersect right here, and what is that point? Two food. So two food is the point of allocative efficiency. And two food corresponds to seven clothing; that's the most clothing we can have when we have two food, the point (2, 7). So this is the bundle of goods we want to produce. The point (3, 4) is also efficient in production, but it's not allocatively efficient; (2, 7) is the most preferred bundle. Remember, the PPF is just a bunch of different combinations of goods that the society could hypothetically produce, and of all those combinations, this is the one people want the most. So (2, 7) is not only efficient in production, it's also allocatively efficient: society would be best off producing two units of food and seven units of clothing. And if we were to draw an indifference curve, it would be tangent to our PPF right here, at the point of allocative efficiency.
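Here's a minimal sketch of that MC = MB comparison. The marginal cost schedule (1, 2, 3, 4) is the one read off the PPF in the video; the marginal benefit schedule is an assumption, chosen only so the curves cross at two units of food as described.

marginal_cost    = {0: 1, 1: 2, 2: 3, 3: 4}   # MC of food, derived from the PPF
marginal_benefit = {0: 5, 1: 4, 2: 3, 3: 2}   # assumed falling MB schedule

# Produce every unit whose marginal benefit at least covers its marginal cost.
efficient_food = max(q for q in marginal_cost
                     if marginal_benefit[q] >= marginal_cost[q])
print(efficient_food)   # -> 2 units of food, i.e. the (2 food, 7 clothing) bundle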
|
Microeconomics_entire_playlist
|
Cross_Elasticity_of_Demand.txt
|
in this video we're going to talk about cross elasticity of demand. So cross elasticity of demand tells you what happens to demand for one good when there's a change in the price of another good, and so it can tell you whether two goods are substitutes or complements. You calculate the cross elasticity of demand by taking the percentage change in quantity demanded of the one good and dividing it by the percentage change in price of the other good. If the number you get is positive, the two goods are substitutes; if it's negative, the two goods are complements. Let's walk through an example. Pretend you're the president of a professional baseball team and you decide to raise the price of hot dogs, which are a popular food item sold at baseball games, by 20%. So you increase the price of hot dogs by 20%, and in response there's a 25% increase in the quantity of hamburgers demanded at the games, but a 40% decrease in the quantity of beer demanded. So the price of hot dogs went up by 20%, and that led to a 25% increase in hamburgers demanded and a 40% decrease in beer demanded. Now we can calculate the cross elasticity of demand for hamburgers with respect to hot dogs, and the cross elasticity of demand for beer with respect to hot dogs. When I say with respect to hot dogs, I mean there was a 20% increase in the price of hot dogs, and we want to ask how that affects demand for hamburgers and how it affects demand for beer. For hamburgers: what's the percentage change in quantity demanded? There's a 25% increase (I'll put a little positive sign there so you know it's an increase) in the quantity of hamburgers demanded, divided by the 20% increase in the price of hot dogs. That equals +1.25. What's relevant here is that this is positive, which means the two goods are substitutes. Think about what this is saying: hot dogs became more expensive, and demand for hamburgers went up. People substituted, saying, I'm not going to buy a hot dog, I'm going to buy a hamburger. When the cross elasticity of demand is positive, the two goods are substitutes. Now let's look at the cross elasticity of demand for beer. What happened to the demand for beer when there was a 20% increase in the price of hot dogs? Demand for beer went down by 40%, so we take -40% divided by the 20% increase in the price of hot dogs, which is -2. This is a negative number, and that means the two goods are complements. When I say complements, I mean that hot dogs and beer, based on this negative number, are typically consumed together. So if you increase the price of hot dogs by 20%, it leads to a decrease in the quantity of beer demanded, even though the price of beer hasn't changed. People buy hot dogs and beer together, so if you make hot dogs more expensive, people are going to buy less beer, and that explains why this number is negative and the goods are complements.
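The whole calculation from the example fits in a few lines; the figures are exactly the ones used above.

def cross_elasticity(pct_change_quantity, pct_change_price):
    """% change in quantity demanded of good A per % change in the price of good B."""
    return pct_change_quantity / pct_change_price

print(cross_elasticity(25, 20))    # hamburgers:  1.25 > 0 -> substitutes for hot dogs
print(cross_elasticity(-40, 20))   # beer:       -2.0  < 0 -> complements to hot dogs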
|
Microeconomics_entire_playlist
|
The_Effect_of_an_Import_Tariff.txt
|
in this video we're going to discuss the effect of an import tariff on a country's total surplus, and I'm going to show you with a graph how it leads to a deadweight loss. Let's take the market for steel in the United States, and say that the equilibrium price of steel is five hundred and sixty-two dollars a ton and the equilibrium quantity is ten million tons, looking just at supply and demand for steel in the US, without any international trade, tariffs, anything like that. We've got our consumer surplus and our producer surplus. Now let's say the world price of steel is $350 a ton. That's cheaper than the equilibrium price in the US, so the US is going to be a net importer of steel. Let me show you how things change when we introduce trade (we haven't introduced tariffs yet). With trade, the consumer surplus expands a lot, because we've got the world price of steel at $350; that's this green line here. The consumer surplus expands, and some of the surplus that belonged to producers before shifts to consumers; it's become blue. So there's a transfer from producers to consumers: producers are worse off, but consumers are better off. But there's also this new triangle, with trade, that is an increase in the total surplus, so the US is better off importing with trade. The number of imports is this amount here, because demand at the world price exceeds domestic supply: US producers are only willing to produce four million tons, but consumers are demanding 16 million tons, so 12 million tons of steel are imported.

Now I want to show you what happens when we introduce an import tariff. Say the US decides, you know what, we don't like importing steel, for whatever reason; these producers are being hurt, and even though the US as a whole is better off from trade because it's gaining this extra surplus, producers have a strong incentive to complain and say, hey, listen, people are being fired and our profits are being eroded. We can certainly understand why they would take issue, even though as a whole the US is better off from trade. So let's say there's an import tariff of one hundred and fifty dollars a ton on steel. What that effectively does is raise the cost of steel in the US: instead of getting it at the world price of $350 a ton, the price is now going to be $500. I'll draw that line in green and label it WP + T, the world price plus the tariff. If someone in the US wants to buy steel on the world market, it's not only going to cost $350; there's also the import tariff of $150, so the real price is $500 a ton: the $350 world price plus the $150 tariff. So what happens? Before, consumers had this whole big triangle; now they're only going to get this smaller one. I'll stick with the same colors and use blue for consumer surplus, so this smaller triangle is our consumer surplus; I'll abbreviate it CS. You can see that this triangle is smaller than the old one, so consumers are worse off. But US steel producers now get this larger triangle, in orange, as producer surplus. Before, with trade, they were shrunk to a tiny triangle, but now they've captured some of the value that consumers used to get. The US producers benefit because more steel is now purchased from producers in the US, since it's become more expensive to buy on the world market because of the tariff. So producers are gaining, but they're gaining from consumers; it's a transfer from consumers to US producers, and that by itself doesn't affect the total surplus.

Now you might be wondering about this remaining area, which before, with trade, was actually part of the total surplus. A couple of things happen to it. First, take this little square-shaped amount: if we draw down from these two points, we'll call this 8 million tons and this 12 million tons, and the difference between them is now the number of imports. So we went from 12 million imports before to 12 minus 8, that is, 4 million imports (I'm making these numbers up to show you). This white square is revenue from the tariff; this is tariff revenue for the federal government. The US government imposes the tariff, so it collects revenue from it. The area of that square is just the amount of the tariff, $150, the difference between $500 and $350, times 4 million, the number of imports. That makes sense: that's the revenue raised by the tariff. But we have something else. Let me make it red: see this tiny little triangle here, and this other triangle here? Those two red triangles are deadweight losses. If you're wondering what a deadweight loss is, if this is the first video of mine you've seen and you've never heard the term before, think of it like this: it's a reduction in the total surplus. Whereas introducing free trade, compared with no trade at all, gave us an increase in the total surplus (remember, total surplus is consumer surplus plus producer surplus; that whole triangle was an increase in total surplus), now we've lost this area. The white part is government revenue, and maybe the government does some great things with it, depending on your opinion of how the government spends money. But the red triangles used to be part of the surplus, and now they're gone: they're not consumer surplus, they're not producer surplus, they're not surplus for anybody. They are a reduction in the total surplus. So even if you're fine with the notion that the government gets this revenue and maybe spends it on job training programs or something like that, we're still worse off with the tariff. Just thinking about the US alone, not even about the other countries that would be selling steel to us, there is a reduction in the total surplus of the US, because these two red triangles used to be consumer surplus and now they're gone.

So why would we ever have tariffs? You might be thinking, hey, if this decreases the total surplus and it's so obvious, why would any government ever impose a tariff? Well, look: the producer surplus increases, so steel producers in the US have a strong incentive to say, hey, listen, we're being hurt by competition from steel producers in other countries, we need this tariff to protect our domestic steel industry and save American jobs, and so forth. But when we think about it on a net basis, we don't care only about producer surplus; we also care about consumer surplus, and on a net basis we're actually reducing our total surplus with the tariff, because although producers are being made better off, it's coming at the expense of US consumers, and so the US as a whole is worse off because of the tariff.
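Here's a sketch of the tariff arithmetic using the numbers as stated in the example (the 8M and 12M quantities are the made-up figures from the graph):

world_price = 350                       # $/ton on the world market
tariff = 150                            # $/ton import tariff
domestic_price = world_price + tariff   # $500/ton once the tariff applies

imports_after = 12_000_000 - 8_000_000  # demand 12M - domestic supply 8M, in tons
tariff_revenue = tariff * imports_after # the white rectangle on the graph

print(domestic_price)    # 500
print(imports_after)     # 4,000,000 tons, down from 12,000,000 before the tariff
print(tariff_revenue)    # $600,000,000 to the government; the two red triangles
                         # beside that rectangle are the deadweight loss

The revenue rectangle changes hands (consumers to government), but the red triangles vanish from everyone's surplus, which is why the tariff shrinks the total even after counting the government's gain.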
|
Microeconomics_entire_playlist
|
The_Effect_of_a_Price_Ceiling.txt
|
in this video we're going to talk about the effect of a price ceiling. Let's think about the market for a college education in the United States. On the y-axis we'll have price, and let's say that's the price of one year's tuition at a US college; on the x-axis we have the quantity, the number of seats in colleges demanded by students. Where demand and supply intersect, we have our equilibrium point, and let's say the equilibrium price is $50,000 for one year's tuition, and at that price 20 million students attend college. Our consumer surplus is this blue triangle, and our producer surplus is the orange shaded triangle. Now, a price ceiling is the government coming in and saying, look, the price cannot go above some level, say $20,000. So this point here is our price ceiling; I'll label it P sub C, and it's $20,000.

This is going to have a number of effects. First, there is going to be a shortage. At $20,000 the quantity demanded (let me mark it in yellow to make it clear) is, let's say, 30 million students, or 30 million seats in colleges, however you want to think about it. But the quantity supplied at $20,000, the number of seats colleges are willing to supply, is only, say, 10 million. That difference is a shortage: at a price of $20,000 a year, 30 million people are saying "I'd like to go to college for $20,000 a year," but suppliers are only willing to supply 10 million seats. The difference between 30 million and 10 million is the shortage.

So what effect does this have on the market? Normally we'd end up at the equilibrium, but with the price ceiling, even though demand is out at 30 million, only 10 million seats will be supplied, and that's what happens in reality: 20 million people are left out. Because we end up at 10 million, we can draw an imaginary vertical line there (let me make it thicker so you can see it), and everything to the right of that line is lost value. Remember that consumer surplus plus producer surplus is the total surplus; we are now losing this dark blue area from the total surplus. There's a special name for that: a deadweight loss, which is simply a reduction in the total surplus.

Now think about our consumer surplus. It used to be the upper triangle, and we've lost part of that; that piece is gone. But the consumer surplus has also grown to include this rectangle here: some of the surplus that used to go to producers now goes to consumers, because the people who do get seats pay a cheaper price. That doesn't make it a great policy, and I don't mean to make this sound like a great thing for everyone going to college; 20 million people are losing out because of the shortage. But the people who do get in, who get those 10 million seats, see an increase in their surplus. Meanwhile the producer surplus has shrunk; look how small this area is now. In some cases the consumer surplus might even grow on net; I haven't given you numbers sufficient to figure that out here, we're just eyeballing it. But bear in mind that we used to have the whole triangle as total surplus, and now we only have part of it: the entire blue shaded region is gone. That's not value for consumers, not value for producers, not value for anybody.

So the effect of the price ceiling is that we've created a shortage (more people demanding seats than producers are willing to supply) and we've reduced the total surplus. Sure, some surplus transfers from producers to consumers, but it's not enough to outweigh the loss. Economists generally agree that price ceilings are a bad idea: they reduce the total surplus, they create a shortage, and so forth. I used the example of a university education here, but you most commonly see price ceilings as rent controls. A lot of cities, particularly in the US, such as New York City and San Francisco, have some form of rent control. The rent control could be a fixed maximum price that can be charged for an apartment, or a rule that the rent can only go up one percent per year; there are lots of different ways to implement rent controls, but generally speaking they are a price ceiling. And when we have a price ceiling, we lose all this value and we create a shortage. When we create a shortage, who is most likely to be able to get around it? It might be wealthier people. We might think we're helping people with the ceiling, but it may be wealthy people who have the money to figure out how to get one of the scarce apartments: can I bribe the landlord, and so forth. So by creating a shortage and reducing total surplus, price ceilings are widely seen as a bad idea for the economy. Here's a small Python sketch of the shortage arithmetic above; the $50,000 equilibrium, the $20,000 ceiling, and the 30-million and 10-million quantities come from the video, while the linear curves are an assumed calibration that reproduces those numbers.
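```python
# Shortage created by a price ceiling, calibrated to the video's numbers:
# equilibrium at $50,000 and 20M seats; at the $20,000 ceiling, Qd = 30M
# and Qs = 10M. The linear curves themselves are assumptions.

def demand(p):   # millions of seats demanded at tuition p
    return 36.667 - p / 3000.0

def supply(p):   # millions of seats supplied at tuition p
    return 3.333 + p / 3000.0

ceiling = 20_000.0
qd, qs = demand(ceiling), supply(ceiling)   # roughly 30.0 and 10.0 million

shortage = qd - qs            # 20 million students left out
seats_filled = min(qd, qs)    # only 10 million seats actually trade
```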
|
Microeconomics_entire_playlist
|
Introduction_to_Economics.txt
|
in this video we're going to discuss the concept of economics. Economics is a social science, just like political science, sociology, and anthropology, which surprises some people, because a lot of people think economics is just about money. But economics is actually a science that examines how people make choices. Similar to fields like psychology and sociology, where we study how individuals or groups make choices, in economics we study how people make choices in a context of unlimited wants and desires but limited resources available to fulfill them. What I mean by that is this: we all want things. We want better health care, better schools, to make more money, to live longer. We have all these wants and desires, and yet there are limited means to fulfill them. We might want, for example, for everybody in the world to have electricity, to be able to just flip a switch and have lights and heat in their home. The question is whether we have the resources available right now to provide all that electricity, and clean water, and so forth, for all the people in the world. You might say there's plenty of water, but there's not an infinite amount of water, and there's not an infinite amount of coal or natural gas. There may be a lot of some of these things, and you might argue there's enough that we could find a way to allocate them to all the different people and make everybody happy, but the fact is that resources are limited: there is a finite supply. Economics is about how we deal with that fact, and how we make choices to satisfy people's wants and desires.

That's going to involve trade-offs. Say we want everybody in India to have electricity. Somebody says, okay, to do that we're going to need to produce some coal. Somebody else asks what will happen to carbon dioxide emissions, so we think about carbon emissions. Somebody else says we need to think about these people's quality of life: isn't it really more important that they get electricity? We're weighing all of these things, and again, it's not just about money; it's about people's standard of living. It's even about families. There's an entire branch of economics called family economics, started by Gary Becker, where we think about how families decide how many children to have, and whether to invest in the education of those children: do they educate all of their children, or just pick the smartest one? That might seem immoral to you; you might think people should always choose the best health care and should educate every child as much as they possibly can. But again, resources are finite, and some people are in situations where they have to make difficult choices, and economics is about studying how people make those decisions. The fundamental economic problem is that we have all these wants and desires but finite resources to satisfy them, so let's study how people make those choices.

Ultimately we can divide economics into two branches. It's a lot more complicated than this; we could talk about things like behavioral economics, where we ask what happens when people don't behave rationally, when they don't do what's in their best interest, and we'll talk about that. But there are two primary branches you'll hear about, and you'll have individual courses in each of them: microeconomics and macroeconomics. With microeconomics, we examine the behavior of individual economic actors. By individuals I don't just mean a person, although that's very often the case; it could also be a firm. We could look at a corporation and see how it behaves. So when I talk about individuals, think of people, firms, even societies, but generally we have individual economic actors and we want to know how they behave. We might ask how people respond to a change in price, or how they respond to a tax. For example, say we increase a cigarette tax: how does that affect the behavior of people who smoke? How does it affect children who haven't decided whether to smoke yet; will they be less likely to start? So we can think about how a tax would affect individuals' behavior. Microeconomics looks at individuals and firms and asks how they behave in their environment. With macroeconomics, we look at aggregate economic indicators, things like gross domestic product, which we'll talk about in videos to come. When we say aggregate, we mean the entire economy. If we look at the gross domestic product of the United States, that's all the goods and services produced by everyone living in the United States. We're not asking about this individual person or that individual group of people; we're aggregating, looking at GDP, at inflation, at large-scale measures that can be tracked at the country or state level, and asking how those economic indicators change over time and how different things affect them. We'll talk a lot more about microeconomics, macroeconomics, and the different exciting fields of economics in the videos to come.
|
Microeconomics_entire_playlist
|
Change_in_Supply_vs_Change_in_Quantity_Supplied.txt
|
in this video we're going to discuss the difference between a change in supply and a change in quantity supplied. They sound similar, but they're different. When there's a change in price, so the price of a good or service goes up or down, that leads to a change in the quantity supplied, which is a movement along the supply curve. Let me show you. Say we've got our graph with an upward-sloping supply curve; call it S1. If there's a change in the price of this good, we just move from one point along the supply curve to another point on the same curve; we do not draw a new supply curve. However, if there's a change in something other than price (say an earthquake affects the supply of the good, or the price of factors of production changes, or a new technology increases supply), then we have a change in supply, not just a change in quantity supplied, and that requires a whole new curve: a shift of the supply curve. If there's an increase in supply, the supply curve shifts to the right and we draw a whole new curve, S2; conversely, a decrease in supply is a shift to the left.

Let me show you an example to make it a little easier to understand. Say we have a supply schedule for the market for coffee, where the price is in dollars per pound and the quantity supplied is in millions of pounds. At $1 per pound, producers are willing to supply 3 million pounds of coffee, and so forth, and we can graph our supply curve from that. Now, what would happen if the price decreased from $5 to $3? We can see there's going to be a change in the quantity supplied from 15 to 9: at $5 a pound producers were willing to supply 15 million pounds, but now that the price has fallen, they're only willing to supply 9 million. On the graph, we go from the point at $5 to the point at $3: we have moved along the curve. We haven't drawn a new curve; we just moved along it, because there was a change in price holding everything else constant. We're assuming nothing changed except the price.

Now let's pretend something happened that changed supply itself, something other than the price. Let's say it's the weather: a hurricane or tsunami wipes out coffee plantations in Costa Rica. If it wipes out a bunch of plantations, there's going to be a decrease in the supply of coffee. This has nothing to do with the price (it may have an effect on the price, and we'll talk about that), but we're not saying producers reacted to the price going from $5 to $3 and moved along the curve. Something happened other than price, and now we basically need an entirely new supply curve. Here's the new supply schedule (the one above was the schedule before the natural disaster). Our new supply curve looks like this: at a price of $1, the quantity supplied is now zero, so that point is down here, matching the new schedule. I'll call this curve S2, and you see that we have shifted to the left. So notice: with a change in quantity supplied, we move along the curve without drawing a new one; with a change in supply, we actually shift the curve. Why do we need a whole new curve? Because at each price, the quantity producers are willing to supply is now different than before. Before, at a price of $1, the quantity supplied was 3 million; now, at $1, it's zero, and so forth: at each price there's a new quantity supplied, so we need a whole new curve. (Here's a tiny Python sketch of the distinction, using the coffee schedule; the post-disaster quantities other than the zero at $1 are assumptions for illustration.)

Now, if you were to ask what happens to the price of coffee, we would need to draw a demand curve. Here's a generic downward-sloping demand curve, no data points, just to illustrate. Our original equilibrium is where S1 and D intersect; give it a little space and call the quantity Q1 and the price P1. Our new equilibrium is where S2, the new supply curve, intersects demand, giving us Q2 and P2. You see that the price has increased and the quantity has decreased, which makes sense: if a bunch of coffee plantations are wiped out, that decreases the supply of coffee, there's less coffee, and it gets more expensive. Again, this price effect is different from what I said earlier. When I say something other than price leads to a change in supply, I'm not saying a change in supply has no effect on price. What we're thinking about is what is leading to the change we're asking about. In the first example, the price decreased from $5 to $3 and we just moved along the curve: a change in quantity supplied. In the second example, something other than price changed (the weather), which decreased supply, and by drawing the new supply curve we were able to see that the price of the good would ultimately increase.
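```python
# Change in quantity supplied (move along one schedule) vs. change in
# supply (a whole new schedule). Prices in $/lb, quantities in millions
# of pounds. s1 is the video's schedule; s2's values are assumed except
# for the zero at $1, which the video gives.

s1 = {1: 3, 2: 6, 3: 9, 4: 12, 5: 15}

# 1) Price falls from $5 to $3: same curve, new point.
print(s1[5], "->", s1[3])                        # 15 -> 9

# 2) A hurricane wipes out plantations: every price now maps to a new
#    quantity, so we need an entirely new schedule (a curve shift).
s2 = {p: max(q - 3, 0) for p, q in s1.items()}   # assumed uniform loss
print(s2[1])                                     # 0, matching the video
```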
|
Microeconomics_entire_playlist
|
Economics_of_Housing_Vouchers.txt
|
in this video we're going to talk about the economics of housing vouchers. Some countries use housing vouchers as a way to ensure that all families can have access to affordable housing. In the United States, for example, we have a program called Section 8, which provides vouchers to low-income families to make sure they can get housing. The way Section 8 works is this: for a given neighborhood, you assess the fair market rent, and the tenant contributes 30% of his or her income toward that rent. Say the fair market rent for a neighborhood is $1,200 and 30% of the tenant's income is $400. We subtract the $400 the tenant pays from the $1,200 fair market rent, which leaves a shortfall of $800 the tenant can't afford, and the Section 8 housing voucher covers that remaining amount. The $800 is a subsidy from the federal government: $400 of the rent comes from the tenant and $800 comes from the subsidy. That's how housing vouchers work, at least in the US.

Now I want to show you the effect this has on supply and demand in the market for housing. Let's say the initial quantity of housing in an area is 5,000 units at the equilibrium where supply and demand intersect, and the initial price is about $700 a month. Now we introduce the vouchers. Vouchers make it easier for more people to obtain housing; that's the goal. We want people who couldn't afford housing before (maybe they're homeless, maybe they were doubled up living with relatives) to get housing, or maybe to be able to move into a new area, and so on. So we're going to increase the demand for housing; that's exactly what we're doing here. Here's our new demand curve, D2: we had the initial demand curve D1, and we're shifting the demand curve outward to D2. We get a new equilibrium, and the new quantity is higher; let's say it's 6,500 (I'm just pulling these numbers out of a hat). So the new quantity of housing is higher than the initial quantity before the vouchers. Hey, it worked; that's what we wanted, to get more people into housing, and this makes sense. Now what happens to price? From the new equilibrium, extrapolate over to the y-axis, and we've got the new price of housing. We can see immediately that the new price is higher than the initial price before the voucher; let's say it's $900 a month, and 900 is higher than 700.

So what has happened here? We introduced vouchers, making it easier for people to get housing, which increased the equilibrium quantity of housing because the demand curve shifted to the right. We also increased the price, because more people are now entering the market for housing. If the price didn't go up, we'd have a shortage: landlords say, there are more people wanting housing now that they've got vouchers, so we're going to increase the price. That's what happens. Now, you might be wondering whether this is a good thing or a bad thing: we intended to make housing more affordable, and now the price is going up, so what is going on? I'll leave the judgment to you, but here's a justification for why it might not be a serious problem. Affordable housing can be viewed as a situation with a positive externality. I have another video on positive externalities if you want to check it out, but here's the basic idea. When families are able to obtain affordable housing, certain benefits follow. Take people who couldn't afford stable housing before: maybe they were in a homeless shelter, or living with relatives, or in a rough, high-crime area, and now we get them into a better housing situation. If they have children, the children might be more likely to graduate from high school; graduates tend to have higher lifetime earnings and to be more productive workers, so we get a more productive workforce; they're less likely to engage in crime; and they're going to have lower health care costs. There are a number of benefits, because with affordable housing the family may be out of a bad area, not worried about being shuttled around from homeless shelter to homeless shelter, and the children can focus in school.

But here's the issue, and this is the nature of the positive externality: when a family is making the decision whether to buy housing, they consider only their private benefits. All of us weigh our own private costs and benefits when we decide to do anything, whether to buy a car, take the train, and so forth. We can map those private benefits out as a demand curve; I've labeled it MPB here, the marginal private benefit curve, and alongside it we've got the marginal social cost curve (or just think of it as marginal cost, however you want to think about it). The market equilibrium quantity of housing demanded would be Qm; I'll just throw out some numbers and say that's 8,000 units of housing at a price of, say, $750 a month. But the nature of this positive externality is that the marginal social benefit curve is higher, further to the right: the marginal social benefit of people being in affordable housing is higher than the private benefit to that family. Here's why. Think about the benefits we discussed, the higher graduation rate, the lower crime, and so forth. They help the family in the housing, clearly; they're better off if their child graduates from high school. But so are the neighbors. Everyone is better off if the child graduates from high school and doesn't commit crime. Society enjoys these benefits, not just the private individuals making the decision whether or not to get housing. So the socially efficient level of housing would actually be out here; I'll call it QE, the socially efficient level, with a price PE. The socially efficient equilibrium, where we consider the costs and benefits not just to the individual family but to all of society, would be at that point, and we're not there.

So what can we do about that difference? We can have a subsidy. We can subsidize the families to encourage or enable them to invest in housing, and as we get them into housing, we move toward our socially efficient equilibrium. The Qm and Pm point is the equilibrium if we just let the free market work, but we have a market failure; we have a market failure whenever there is any type of externality, and in this case it's a positive externality, because the individuals making the decision whether to buy housing are not capturing all the benefits: there are benefits that accrue to all of society. We want to get to the efficient equilibrium, and by subsidizing the family we can get there. As you've seen, it increases the price, but we're getting to the socially efficient, or optimal, level of housing. As a quick sketch of the Section 8 arithmetic at the top of this video, here's the rule in Python; the function name and the back-calculated income are mine, not the program's.
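```python
# Section 8 voucher rule from the video: the tenant pays 30% of income
# toward the fair market rent, and the voucher covers the shortfall.

def section8_subsidy(fair_market_rent: float, monthly_income: float) -> float:
    tenant_share = 0.30 * monthly_income
    return max(fair_market_rent - tenant_share, 0.0)

# Video's example: $1,200 fair market rent and a $400 tenant share,
# which implies a monthly income of about $1,333.
print(section8_subsidy(1200.0, 400.0 / 0.30))   # -> 800.0
```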
|
Microeconomics_entire_playlist
|
How_to_Calculate_Price_Elasticity_of_Demand.txt
|
in this video we're going to discuss the price elasticity of demand. Price elasticity of demand is basically a ratio that tells us how responsive the people demanding a good are to a change in the price of that good or service. For example, say we're talking about ice cream, and I tell you that a 10% decrease in the price of ice cream would lead people to increase the quantity they demand by 50%. You take the 50%, divide it by the 10%, and ignore the fact that one of the changes is negative (we know that if price goes up, quantity demanded goes down, and conversely, so we ignore the sign), and you get 5. Now you might be thinking: what does this number tell us, and why do we care? Well, a number greater than one means demand is elastic: consumers are really responsive to a change in the price of the good, and if there's a decrease in the price, they're really going to increase the quantity demanded. If it's less than one, demand is inelastic, meaning customers aren't as responsive. Maybe it's something like a gallon of milk, where people say, I really need milk, so even if the price goes up, I'm not going to change how much I demand. A nice thing about price elasticity of demand is that it's a unit-free measure. If we tried to think in terms of slope, the slope would change depending on whether we measured the price of ice cream in dollars or in cents, and we'd have these issues of what unit something was measured in and whether the numbers are comparable. Elasticity is not a slope; it isn't measured in any kind of unit. We just take the ratio of the two percentages, and that tells us how elastic the demand for the good is.

Let me give you a slightly more complicated example to show you how to calculate this in a real-world case. Say you run a movie theater, the current price of a ticket is $9, and at that price people buy 20,000 movie tickets. You're considering a price change and want to know what would happen if you increased the price by $2. You talk to somebody who really knows the movie theater industry, and they say that if you increase the price to $11 (that $2 increase), the quantity of tickets demanded will fall to 15,000. So an increase of $2 leads to a decrease of 5,000 tickets. You could just look at revenue: at $9 × 20,000 you originally had $180,000 in sales, and now at $11 × 15,000 you've got $165,000 in sales. You might be thinking, I increased the price by $2 and yet somehow my sales went down. So let's calculate the price elasticity of demand.

I know we already talked about the ratio, but it's a little more complicated here, because you won't always just be told the percentage change in quantity demanded and the percentage change in price; if I simply tell you those, the calculation is straightforward, but here we have to work them out from these numbers. I'll use blue for the numerator. The percentage change in quantity demanded is the change in quantity divided by the average quantity; let me explain what that means. The change in quantity is 20,000 minus 15,000, which is 5,000. We divide that by the average quantity during the period, which is (20,000 + 15,000) / 2 = 17,500. So 5,000 / 17,500 is 0.286 rounded; multiply by 100 to convert to a percentage, and we have 28.6% as our percentage change in quantity demanded. (In the ice cream example earlier, I just told you this number, but this is how you calculate it from real numbers.) Now we calculate the denominator, the percentage change in price, and it's the same idea: the change in price divided by the average price. The change in price is 11 minus 9, which is 2, and the average price is the original $9 plus the new $11, divided by two: 20 / 2 = 10. So we have 2 / 10 = 0.2, which multiplied by 100 is 20%. So our price elasticity of demand (PED) for this $2 price increase is 28.6% divided by 20%, which rounds to 1.43. Remember we said that a number greater than one means elastic demand, and 1.43 is greater than one, so at least within this price range, the change from $9 to $11, demand for these movie tickets is relatively elastic: our customer base is very responsive to a change in the price. Here's the same midpoint (average) calculation as a small Python function; the helper name is mine, but the numbers reproduce the movie-ticket example above.
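```python
# Midpoint formula for price elasticity of demand: percentage changes
# are taken relative to the average quantity and the average price.

def price_elasticity_of_demand(p1, p2, q1, q2):
    pct_change_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_change_p = (p2 - p1) / ((p1 + p2) / 2)
    return abs(pct_change_q / pct_change_p)   # sign ignored by convention

ped = price_elasticity_of_demand(p1=9, p2=11, q1=20_000, q2=15_000)
print(round(ped, 2))   # 1.43 -> greater than 1, so demand is elastic
```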
|
Microeconomics_entire_playlist
|
Income_Elasticity_of_Demand.txt
|
in this video we're going to discuss how to calculate the income elasticity of demand. Income elasticity of demand measures how your demand for a good or service changes as your income goes up or down: as you become wealthier or poorer, what happens, for example, to your demand for candy? We calculate it with this formula: the percentage change in the quantity demanded (say, the percentage change in the quantity of candy you demand) divided by the percentage change in your income. So say your income goes up ten percent, and you have a corresponding fifty percent increase in the quantity of candy you demand: your income elasticity of demand is five. And if you're wondering how to calculate the numerator and the denominator: to get the percentage change in quantity demanded, we take the change in the quantity demanded divided by the average quantity demanded, and to get the percentage change in income, we take the actual change in your income divided by your average income. I know it's a little abstract, so let's do an example with some actual numbers.

Say you make $200 a month working as a clown at children's parties, but then a YouTube video of you getting hit by a car while wearing your clown suit goes viral. People feel bad that this clown was just out walking and got hit by a car, they send you donations, they set up a GoFundMe site, and now your income has increased to $350 a month. You've become wealthier, and that's going to change the quantities you demand of certain items. So think about the food you eat: rice, pasta, and chocolate bars. When your income was $200 a month, you demanded 10 bags of rice, 8 boxes of pasta, and 2 chocolate bars. Now that your income is $350 a month, you're demanding only 5 bags of rice (your demand for rice is actually going down; you're eating less rice), two additional boxes of pasta, and 13 more bars of chocolate. You can already see what's happening: your income nearly doubled, and you're eating less rice, more pasta, and a lot more chocolate. What we can do now is calculate the income elasticity of demand for each item. I'll abbreviate it as the percentage change (a triangle, delta) in quantity demanded, divided by the percentage change in income, which I'll abbreviate Inc.

For rice, the percentage change in quantity demanded is the actual change in demand, which went from ten to five, so negative five, divided by the average quantity demanded, which is (10 + 5) / 2 = 7.5. That gives −5 / 7.5, or −66.7% once we multiply by 100 to go from a proportion to a percentage. In the denominator we have the percentage change in income: we went from 200 to 350, so the income went up by 150, and we divide that by the average income, (200 + 350) / 2 = 275, which gives 54.5%. That percentage change in income will be the same for each of these items, because your income change is the same for every item; what changes is the percentage change in quantity demanded. So for rice we get −66.7% / 54.5% = −1.22, and we say that demand for this good is inferior: an inferior good means that as your income goes up, you actually buy less of it.

Now, with pasta we had an increase, so we can tell right away it's not going to be an inferior good, but we still need to calculate it with our formula. The percentage change in quantity demanded is 2 (our change in demand) divided by the average quantity demanded, (8 + 10) / 2 = 9, which is 22.2% after multiplying by 100. In the denominator we again have 150 / 275 = 54.5%, because the income change hasn't changed. So 22.2% / 54.5% = 0.41, and we say pasta is a normal good: when your income goes up and you become wealthier, you buy more of the good, and we say that's normal. It's normal and inelastic, because a 1% increase in your income increases demand for the good by less than 1%; it's 0.41.

Now the chocolate. Wow, chocolate went from 2 to 15, so you're a clown who really likes chocolate; we're going to have a big change here. The actual change in chocolate bars demanded is 13, divided by the average quantity demanded, which is (2 + 15) / 2 = 8.5, and again we take 150 / 275 in the denominator. So we have a 152.9% increase in the numerator and 54.5% in the denominator, which calculates out to an income elasticity of demand of about 2.8. So chocolate is a normal good, but a normal elastic good.

Let's do a quick review. For the rice, we said it's an inferior good, and even if you didn't already know that people buy less rice as they become wealthier and start buying fancy things like chocolate, you could tell just by looking at the income elasticity of demand: it's negative, and any time the income elasticity of demand is a negative number, the good is inferior. The pasta is a normal good because the income elasticity of demand is positive, but it's inelastic because it's positive and less than 1; whenever the income elasticity of demand is positive but less than 1, the good is normal and inelastic. Chocolate is different: a 1% increase in income brings more than a 1% increase in the quantity of chocolate demanded. Your income goes up a little, and your demand for chocolate goes up a lot. Its income elasticity of demand is 2.8, higher than one, so chocolate is a normal, elastic good. Here's the whole comparison as a short Python sketch; the function and the classification helper are mine, but the inputs and results match the numbers above.
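```python
# Income elasticity of demand (midpoint formula) for the three goods.

def income_elasticity(i1, i2, q1, q2):
    pct_q = (q2 - q1) / ((q1 + q2) / 2)   # % change in quantity demanded
    pct_i = (i2 - i1) / ((i1 + i2) / 2)   # % change in income
    return pct_q / pct_i                   # keep the sign

goods = {"rice": (10, 5), "pasta": (8, 10), "chocolate": (2, 15)}
for name, (q1, q2) in goods.items():
    e = income_elasticity(200, 350, q1, q2)
    kind = ("inferior" if e < 0
            else "normal, inelastic" if e < 1
            else "normal, elastic")
    print(f"{name}: {e:.2f} ({kind})")
# rice: -1.22 (inferior), pasta: 0.41 (normal, inelastic),
# chocolate: 2.80 (normal, elastic)
```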
|
Microeconomics_entire_playlist
|
Negative_Externalities_in_Economics.txt
|
in this video we're going to discuss what a negative externality is in economics. A negative externality occurs when you do something that imposes costs on another person without reimbursing that person for the harm you've done. Say you play the radio really loud at night and make it hard for your next-door neighbor to study for an exam the next morning, and they get a bad grade: you've imposed costs on that person by your actions. And it's not just people who can create negative externalities; companies can do it too, with things like pollution. Say I happen to live near a lead smelting company, and the tailings from the lead create pollution that contaminates the soil and can also contaminate water. There's a river nearby with fish in it, and I like to go fishing in that river, but the lead smelting process ends up killing some of the fish, so I can catch fewer fish. The smelting is imposing a cost on me (there could also be health costs and so forth), but the bottom line is that the lead smelter is taking an action, producing some good, that creates a cost for me, and I'm not being reimbursed or consulted or anything like that.

I want to graph this out to show how a negative externality would play out in a free market. Put the price of smelting lead on one axis and the quantity of lead smelted on the other, and map out the downward-sloping demand curve, which is also the marginal social benefit. You might be thinking, what is the social benefit of smelting lead? Well, we use lead in batteries and things like that, so there is some benefit. From a societal standpoint we can also look at the marginal social cost, so let's say that's our curve here. The optimal amount of lead to smelt, from the standpoint of society, occurs where the marginal social benefit equals the marginal social cost. We'll call that the socially efficient, or socially optimal, amount of lead; say it's 100,000 tons (I have no idea if that's a realistic amount, but say 100,000 tons smelted is optimal).

Now here is the nature of the externality: the lead smelting company's cost is lower than the marginal social cost, because the company only considers its marginal private cost. By private cost I mean the cost to the smelter itself, to that company. The company isn't necessarily thinking about what's happening to me and the fish I catch; it's just looking at its own firm's marginal cost of smelting an additional ton of lead. Its marginal private cost is lower than the social cost, and the reason is that the social cost includes two things: the private cost, the cost to the lead smelting company, plus what's called the external cost, the cost to other people in society aside from the smelting company. The marginal private cost does not include the external cost. What does that mean? It means that in a free market, the amount of lead actually smelted will be some larger quantity, Q; let's say 130,000 tons. The reason is that when the lead smelting company decides how much lead to smelt, it looks at its private cost, the marginal cost to the firm of smelting an additional ton, and it produces where marginal private cost equals marginal benefit. That is the equilibrium, the amount produced by the free market, and it's higher than the social optimum. From a social standpoint, where we consider all the costs and benefits to everybody, not just that particular firm, there's a lot more lead being smelted than is optimal. We're smelting too much lead, and the reason is that the smelting company, in making its decision, considers only its own private marginal cost, not the costs imposed on people like you and me, which causes it to produce a quantity of lead higher than what would be socially efficient. This is a market failure: a negative externality. There are several ways to address it. One is a Pigouvian tax, which is a corrective tax; there are also marketable permits, or cap and trade, which you've probably heard of; and then there's something called Coase bargaining, which we'll talk about in the videos to come. Here's a minimal Python sketch of the over-production logic; the 100,000 and 130,000-ton quantities are from the video, while the linear curves and the constant $60 external cost per unit are assumptions calibrated to reproduce them.
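```python
# Free-market vs. socially optimal output with a negative externality.
# Quantities in thousands of tons; all curve parameters are assumed,
# chosen so the free market smelts 130 while the social optimum is 100.

A, b = 300.0, 1.0      # marginal social benefit:  MSB(Q) = A - b*Q
c, d = 40.0, 1.0       # marginal private cost:    MPC(Q) = c + d*Q
external = 60.0        # constant external cost per unit (pollution harm)

q_market = (A - c) / (b + d)               # MSB = MPC        -> 130.0
q_optimal = (A - c - external) / (b + d)   # MSB = MPC + ext  -> 100.0

# A Pigouvian tax of `external` per unit raises the firm's private cost
# to the full social cost, moving output from q_market to q_optimal.
```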
|
Microeconomics_entire_playlist
|
Change_in_Demand_vs_Change_in_Quantity_Demanded.txt
|
in this video we're going to talk about the difference between a change in demand and a change in quantity demanded. A change in price, holding everything else constant, just changing the price of a good or service, leads to a change in the quantity demanded of that good or service, and that's a movement along the demand curve. If we have our standard downward-sloping demand curve, we're just moving from one point on it to another: as we decrease the price, we increase the quantity demanded, with P on one axis and Q on the other, and we stay on the same curve. Now, a change in something other than price (for example, people's incomes, tastes and preferences, the price of a substitute or complement good, and so forth) leads to a change in demand, which, although it sounds similar to a change in quantity demanded, is not the same thing. We're no longer talking about a movement along the demand curve; we're talking about an actual shift. Say consumers' tastes and preferences change so that they increase their demand for this good: demand curve one shifts to the right, and we have a new demand curve, D2. An actual shift of the demand curve is what we mean when we say there's been a change in demand. If someone says demand for oil is increasing, they're talking about a rightward shift of the demand curve; if we're talking about simply what would happen if we changed the price but nothing else, that's a change in quantity demanded, and there's no shift, just a movement along the curve.

Let me show you an example to make it a little easier to understand. Think about the market for pink jeans, and say our demand schedule looks like the following: at a price of $100, people demand 12,000 pairs of pink jeans; at $200 they demand less, 9,000 pairs; and so on, all the way up to $500 (I'll just say 12, 9, and so forth, in thousands). We can graph this out as our demand curve. Say hypothetically the price was $500: at P = 500, the quantity demanded Q = 0. But what if the price were to change to $300? We see that at P = 300, Q = 6. What we've done is simply move along the demand curve, from one point to another; we haven't drawn a new curve or shifted anything, because there was simply a change in price. But what if something other than a change in price happened, say a change in preferences? A famous musician is seen wearing these pink jeans, and consumers say, hey, I want some pink jeans too. Now there's a change in consumer preferences and an increase in demand for pink jeans.

So now we've got a new demand schedule, where at each price consumers are demanding more of the good: at $100, consumers used to demand 12, and now they demand 15. If that's the case, we draw a new demand curve, D2 (our first demand curve was D1), and what has happened is that we have shifted the demand curve to the right, because there has been an increase in demand. So think about what has happened here. When we were changing just the price, there was a corresponding change in quantity demanded: we just moved along the curve; we didn't create a new curve. But when there's a change in something other than price (a non-price change: preferences, people's income, and so forth), that's when we have a change in demand, and we're not just moving along the curve; we're creating an entirely new curve, shifting it either to the right or to the left depending on whether there's an increase or decrease in demand. The reason the curve shifts when there's a non-price change is that at each price point there is a new quantity demanded, so we have to create an entirely new curve. So when something other than the price changes, we say it's a change in demand, but when only the price changes, all else constant, we just have a movement along the curve, and it's a change in quantity demanded. Here's the same distinction as a quick Python sketch, using the pink-jeans schedule (quantities in thousands of pairs); the shifted schedule's values are assumptions except the 15 at $100, which the video gives.
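```python
# Change in quantity demanded (move along one schedule) vs. change in
# demand (a whole new schedule). d1 is the video's schedule; d2's values
# are assumed except the 15 at $100.

d1 = {100: 12, 200: 9, 300: 6, 400: 3, 500: 0}

# 1) Price falls from $500 to $300: same curve, new point.
print(d1[500], "->", d1[300])              # 0 -> 6

# 2) A celebrity wears pink jeans and preferences shift: every price
#    now maps to a larger quantity, so we draw an entirely new curve.
d2 = {p: q + 3 for p, q in d1.items()}     # assumed uniform shift
print(d2[100])                             # 15, matching the video
```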
|
Microeconomics_entire_playlist
|
Simultaneous_Change_in_Demand_and_Supply.txt
|
in this video we're gonna talk about what happens when there's a simultaneous change in both demand and supply. Four different things can happen with a simultaneous change: demand and supply can both increase together, they can both decrease together, or they can move in opposite directions, with demand increasing while supply decreases, or demand decreasing while supply increases. Those are the four scenarios, and each has a different effect on the equilibrium quantity and the equilibrium price, which I've summarized here. Basically, if you have an increase in demand and an increase in supply (people want to buy more of the good and suppliers want to supply more of it), then the equilibrium quantity is definitely going to increase, but the effect on the equilibrium price is indeterminate: we don't know what will happen, because we need to know which effect is stronger. We have two different things going on. If we just had an increase in demand with no increase in supply, the demand curve would shift to the right and we would have an increase in the price. Think about the opposite: an increase in supply by itself, with no increase in demand, shifts the supply curve to the right and we would have a decrease in the price. So the increase in demand by itself causes an increase in price, but the increase in supply by itself causes a decrease in price, which is why the effect on price is indeterminate. I've laid out the different situations, and in each case we either know definitively what happens to the quantity while the effect on price is indeterminate, or we know what happens to the price while the effect on the quantity is indeterminate.

Now, this is all pretty theoretical, so I want to draw an example using the first case, an increase in both demand and supply; once I sketch it out with the graph, it'll be a little easier to understand. Take the market for wind energy, which could be produced with windmills, and say two things happen. First, advances in technology make it easier and cheaper to produce wind energy; because it's cheaper to produce, that causes an increase in supply. Second, consumer preferences are changing so that people now prefer renewable energy: they prefer wind energy to energy produced by coal or something like that, so there's going to be an increase in demand. We're going to have both: a shift of the demand curve to the right and a shift of the supply curve to the right.

Let me show you on a really generic graph. We've got price and quantity, a downward-sloping demand curve, and an upward-sloping supply curve I'll call S1, which shifts to the right; call that S2. Our initial equilibrium is where S1 and D1 intersect. The demand curve is also going to shift to the right, to D2, so our new equilibrium is where S2 and D2 intersect, just as our original equilibrium was where S1 and D1 intersected. Now let's draw the new equilibrium. What happened to our equilibrium quantity? Initially it was Q1, and now we're at Q2: the equilibrium quantity has increased considerably. When we have an increase in supply and an increase in demand, we know the equilibrium quantity is going to go up; that's guaranteed. Now what's the effect on price? Take a look: the price is basically unchanged in this example. But it doesn't have to be; it's indeterminate, because I could draw this differently. The increase in demand pushes the price up, and the increase in supply pushes it down, so if the increase in demand is weak and the increase in supply is really strong, the supply shift's effect in depressing the price would outweigh the increase in price from the demand shift.

Again, that's kind of theoretical, so let me give you an example of exactly that. The graph above was a generic scenario where the increases in demand and supply were about the same size; now let's say the increase in supply is really strong but the increase in demand is weak, and by weak I just mean not as large an effect. I'm not using numbers here; I just want to go through this quickly. So here's our demand curve, call it D1, and here's S1; the initial equilibrium is where they intersect, giving us Q1 and P1. Again we're going to have a simultaneous increase in demand and supply, but I'm going to draw the increase in demand as small: D2 is shifted to the right of D1, so it is an increase in demand, but the shift isn't as wide as in the earlier graph. Now let's do a big increase in supply, giving us S2. Our new equilibrium is going to be where S2 and D2 intersect, right here; that's our new equilibrium, so we plot these points. Here's a small Python sketch of why the quantity rises in both cases while the price effect depends on the relative shift sizes; all the curve parameters below are made-up numbers, since the graphs here have no units.
here so we're at Q 2 is our new equilibrium quantity and P 2 is here so now you see P 2 so the new this p2 is less than p1 so Q 2 is greater than Q 1 so we have an increase we know that right remember we said with an increase in demand and supply at the same time there's always going to be an increase in quantity in the equilibrium right so we know that that's not changing based on which effect is strong or weak or right whether the supply of the change or demand change but now look at the price before we said well we don't really know what's gonna happen but now because this change shift in demand was was kind of weak it wasn't as strong as the shift in supply the effect of the shift in supply which by itself if we had a change increase in supply would decrease price the equilibrium price that effect now that is the stronger effect so that decrease in price is like it's winning the battle Lu consultant so to speak and think of that increase in supply and its effect on price is stronger than the increase in demand because the increase in demand there's just this little shift here there's just the it's a shift but it still low but then look at this shift of the supply curve okay so conversely if we had done something where we had a really strong increase in demand and then a really weak you know kind of small increase in supply then the opposite thing would have happened then we would have actually had an increase in price
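To make the indeterminacy concrete, here is a minimal sketch in Python using made-up linear curves (none of these numbers come from the video): equilibrium quantity rises in both cases, while the direction of the price change flips with the relative size of the two shifts.

```python
# Demand: Qd = a - b*P, Supply: Qs = c + d*P -- hypothetical linear curves.

def equilibrium(a, b, c, d):
    """Solve Qd = Qs for the equilibrium price and quantity."""
    p = (a - c) / (b + d)
    return p, a - b * p

p1, q1 = equilibrium(a=100, b=2, c=10, d=2)  # initial market
p2, q2 = equilibrium(a=105, b=2, c=50, d=2)  # small demand shift, big supply shift
p3, q3 = equilibrium(a=140, b=2, c=15, d=2)  # big demand shift, small supply shift

print(f"initial:             P={p1:.2f}, Q={q1:.1f}")  # P=22.50, Q=55.0
print(f"supply shift bigger: P={p2:.2f}, Q={q2:.1f}")  # P=13.75, Q=77.5 (quantity up, price down)
print(f"demand shift bigger: P={p3:.2f}, Q={q3:.1f}")  # P=31.25, Q=77.5 (quantity up, price up)
```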
|
Microeconomics_entire_playlist
|
Law_of_Supply.txt
|
in this video we're going to talk about the law of supply so the law of supply states that if there's an increase in the price of a good or service there's going to be a corresponding increase in the quantity of that good or service that is supplied and conversely if the price goes down if there's a decrease in the price then there's going to be a corresponding decrease in the quantity supplied so let's think about ice cream so as the price of ice cream rises ice cream suppliers are going to be willing to supply more ice cream but if the price of ice cream were to fall then ice cream suppliers would be willing to supply less and less of it as the price goes down and we're holding everything else constant all else equal ceteris paribus we're just looking at what happens to the quantity supplied when there's a change in the price of the good or service so it's a similar assumption to the law of demand only it's working in the opposite way because if you remember from the law of demand we said that if there's an increase in the price of a good then consumers are going to demand less of it right but with suppliers it works in the opposite way if there's an increase in the price suppliers are willing to supply more of the good or service so let's think about the market for chocolate bars and we can put together something called a supply schedule where we look at different prices and we say okay at this price what would be the quantity that producers would be willing to supply so when we're talking about quantity we're talking about quantity supplied so at a price of $1 or 1 euro let's say the chocolate bar producers wouldn't produce any chocolate bars at all they say hey we're not going to produce any so the quantity would be zero but if the price were to go up to $2 then they say you know what we would supply three chocolate bars at that price if the price is at $3 then they'd be willing to supply six chocolate bars and so we see that as the price goes higher and higher and ultimately gets up to $5 the quantity supplied goes higher and higher so at a price of $5 they're willing to supply 12 chocolate bars whereas if the price were only $1 they wouldn't supply any chocolate bars at all and so that reflects the law of supply we say that as the price of the chocolate bars is increasing the quantity that producers are willing to supply is also increasing as well so we've got an increase in the price that leads to an increase in the quantity supplied and now what we can do is we can create a graph and with this graph we can create a supply curve where we plot out all the points for our supply schedule and we can see how it looks graphically so let's take the price of $1 and we always have price on our y axis and quantity on the x axis and in this case remember quantity is quantity supplied at a price of $1 the producers are willing to supply zero chocolate bars so zero it's right here at the origin now at a price of $2 they're willing to supply three chocolate bars at a price of $3 they're willing to supply six chocolate bars and at a price of $4 they're willing to supply nine chocolate bars and then if the price goes up to $5 they're willing to supply 12 chocolate bars so now what we can do is we can draw a line through the graph now in the real world the supply curve is not always going to be a straight line right but here in this case it is and what this is reflecting is the fact that this curve is sloping upward that is telling us that we've got the law of supply in action right as the price gets higher and higher then the quantity supplied is also going higher and higher so we have an upward sloping supply curve and so we would label this s or we could label it s1 or something like that now we can have things that happen that change something other than price for example if there were to be a change that made it a lot cheaper to manufacture chocolate bars for example if labor or some other input some factor of production for producing chocolate bars became cheaper we could actually have a shift where the supply curve shifts to the right and so then we would have s2 or conversely if let's say a factor of production becomes more expensive we could have a shift to the left and so we'll talk about these shifts and the things that can cause a shift in supply in the videos to come
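As a quick sketch, the chocolate-bar supply schedule above can be written as a function; the linear formula Qs = 3 × (P − 1) is my own fit to the video's numbers, not something stated there.

```python
def quantity_supplied(price):
    """Chocolate bars supplied at a given price (law of supply: rises with price)."""
    return max(0, 3 * (price - 1))  # producers supply nothing at $1

supply_schedule = {p: quantity_supplied(p) for p in range(1, 6)}
print(supply_schedule)  # {1: 0, 2: 3, 3: 6, 4: 9, 5: 12}
```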
|
Microeconomics_entire_playlist
|
How_to_Graph_a_Change_in_Supply.txt
|
in this video I'd like to show you how to graph a change in supply so a change in supply can occur for a number of reasons it could be that there's been a change in the number of suppliers maybe new suppliers have entered the market it could be a change in the price of factors of production maybe labor became cheaper or so forth it could be that there was an advance in technology for example fracking dramatically increased the supply of natural gas or there could even be a change to the weather for example if a hurricane were to hit Florida it could reduce the supply of oranges and when we say that we have an increase in supply what we mean is that the supply curve is going to shift to the right so let's say that we have our standard upward sloping supply curve so I'll call that s1 and then something happens that increases the supply what that means is that the curve is going to shift to the right and we're gonna have an entirely new supply curve conversely if the supply were to decrease what that means is that the curve is going to shift to the left and then we would have a new supply curve to the left so I want to show you an example to make it a little bit easier to understand let's say that a tsunami hits the country of Costa Rica and destroys a number of coffee plantations I hope that doesn't happen but let's say that it did we could go and look at the quantity of coffee supplied before so this is our supply schedule before the tsunami hits at different prices we could look and see okay what was the quantity that producers were willing to supply so let's say at $1 a pound the coffee producers were willing to supply 3 million pounds of coffee and so forth okay so we could put this together and we can actually map out our supply curve which we've talked about in previous videos so we've got our supply curve here and now what we can do is we can say okay well what's gonna happen now that the tsunami wiped out a bunch of coffee plantations is that going to increase or decrease the supply of coffee well it's pretty clear in this example that it's going to decrease the supply of coffee right because a bunch of coffee plantations got wiped out so we're going to have a decrease in supply and that means that the curve is going to shift to the left but I want to show you why it's gonna shift to the left so if we look at each price so let's look at one dollar for example before the tsunami the quantity that producers were willing to supply was three million pounds of coffee but now at one dollar the producers are only willing to supply zero coffee right there they're not willing to give any coffee at all and at a price of two dollars a pound before they were willing to supply six million pounds of coffee and now they're only willing to supply three million so you see at each price the amount that they're willing to supply has decreased now that we've had this decrease in supply now that these coffee plantations have been wiped out so what does that practically mean that means that we're gonna draw an entirely new supply curve here and so when we say that it shifts what we mean is we have a new supply curve and I'm gonna call that s2 and I'm gonna show you how I came up with that and why it's shifted to the left let me show you here I put a little arrow so you know that it shifts to the left this is the new supply curve and the reason it shifts to the left is because if we look at for example a price of one dollar at a price of $1 now the quantity supplied is zero so we're gonna be right here it used to be that at a price of $1 the quantity supplied was three right here but we had to move it over because there's been this decrease in supply and so now we've had this shift in supply so now you might be wondering for example what would happen to the price of coffee so what we would need to do now is draw a demand curve now I don't have a demand schedule here that I've put together I'm just gonna draw a generic demand curve and let's just say that that's our demand so now what we can do is we can go so our original equilibrium would have been right here because we've got our demand and then we had s1 that was our original supply curve before the tsunami so our price I'll call this p1 about there around $3 and then let's say here would have been our quantity so we'll call that q1 so we were originally here at our equilibrium but now we look at s2 the new supply curve because the tsunami hit we have a new supply curve and the same demand curve the demand curve doesn't change so now the new equilibrium is here so now let's see what happens so now q2 is the new quantity in equilibrium and p2 is the new price so what we see is that this decrease in supply has actually increased the price so the equilibrium price has increased and the equilibrium quantity has gone down
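Here is a minimal sketch of the tsunami example with a hypothetical demand curve (the video only draws a generic one): the supply curve's quantity falls by 3 million pounds at every price, and with demand unchanged the equilibrium price rises while quantity falls.

```python
def solve(a, b, c, d):
    """Demand Qd = a - b*P, supply Qs = c + d*P; returns equilibrium (P, Q)."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Supply before: Qs = 3*P (fits $1 -> 3M lbs, $2 -> 6M lbs from the schedule).
# Supply after:  Qs = 3*P - 3 (3 million pounds fewer at every price).
# Demand is hypothetical: Qd = 12 - 3*P.
p1, q1 = solve(a=12, b=3, c=0, d=3)
p2, q2 = solve(a=12, b=3, c=-3, d=3)
print(f"before: P=${p1:.2f}/lb, Q={q1:.1f}M lbs")  # $2.00, 6.0M
print(f"after:  P=${p2:.2f}/lb, Q={q2:.1f}M lbs")  # $2.50, 4.5M -- price up, quantity down
```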
|
Microeconomics_entire_playlist
|
The_Efficient_Provision_of_Public_Goods.txt
|
we've been discussing how public goods can lead to a market failure because of the free rider problem which basically asks why would people want to pay for a public good if they believe it's going to be provided anyways and there's no way to exclude them from enjoying that good so the question is if we're going to have the government come in if the government's going to play the role and provide the public good and then tax people to fund it the question is well what is the efficient quantity what is the socially optimal the socially efficient quantity of that public good in question how do we determine it let's say for example that we're talking about street lights so we're thinking of installing some street lights in a neighborhood and we're trying to figure out what is the socially efficient number of street lights well the rule is that we want to do the number of street lights where the marginal social benefit is equal to the marginal social cost now you might be thinking hey we've got an issue here people don't buy public goods they're not like private goods it's not like we're just going to ask people in the neighborhood hey why don't you go out and buy your own street lights and so forth so how do we figure out what is the total marginal social benefit well one way we could do it is we could basically ask each individual or try and figure out hypothetically what would be the marginal benefit to each individual how much would they pay for one additional unit one additional street light under different scenarios so say okay if the quantity is x how much would person a pay for one additional street light and so then once we know the preferences of the people we can put together a demand curve for each person who lives in that neighborhood and then we can sum those demand curves together and we put together what's basically called the collective demand curve and then we map out that collective demand curve and that's going to be the marginal social benefit and then where that's equal to the marginal social cost where these two curves intersect that's going to be the efficient quantity of the public good in question so for example street lights so I just want to plot this out for you so it's a little less abstract let's say that we're using the example of street lights and let's say this is in thousands of dollars here I have no idea how much a street light actually costs but let's say it's thousands of dollars so 1,000 2,000 3,000 and let's say that the marginal cost curve is here it's basically $5,000 that's the marginal cost of a street light and we've got two people in this neighborhood let's pretend so we've got person one their demand curve is this white line right here and here's how I'm going to plot it out for you so at a quantity of zero at the origin there this person their marginal benefit from an additional street light would be six now if we're at a quantity of one then person one their marginal benefit of an additional street light would be five and so that's how I plotted out this line here I just basically went and said okay what would be the marginal benefit at different quantities and when you get to where there's six street lights at that point for person one the marginal benefit of another street light is zero and then I've got person two here as the red line so to come up with the collective demand curve think about it as the total marginal benefit for all of this society of two people here this neighborhood what we do is we add each together so that might look a little hard to do but if you think about it let's go with a quantity of zero we add the marginal benefit of each person so for person two it would be three that's their marginal benefit and then for person one it would be six so 6 + 3 is going to give us 9 so I'm just going to fill in right here at zero when we have zero street lights the marginal benefit of providing an additional street light is nine now assuming there's one street light the marginal benefit would be 2 + 5 so that would be seven and then if we were at two street lights the marginal benefit of one more would be one right here for person two and then four for person one so 4 + 1 is five so we will be right here at five and then when we get to three at this point it gets a little weird because person two no longer has any marginal benefit so the curve is basically just going to come in with person one because the total marginal benefit is just person one which is three and then at four street lights the marginal benefit would be two and then at five it would be one and at six it would be zero at six neither of these two people gets any more marginal benefit from an additional street light so we can go and map this out and I apologize for my drawing skills in advance and basically this green curve is the collective demand curve so this is collective demand and you could think of that as the marginal benefit and remember we said how do we determine the optimal the socially efficient level of this public good of street lights we're going to do it where the total marginal benefit equals the marginal cost right so where the marginal benefit equals the marginal cost that's our socially efficient level and so we see here's our marginal cost curve right there it's in purple and then here's our marginal benefit curve so right here is where the two curves intersect see this purple line and then the green line right here they intersect that's where marginal benefit equals marginal cost and we say well what is the quantity there the quantity at that point is two so what that means is that the socially efficient amount of street lights the amount where the collective marginal benefit equals the marginal cost for this neighborhood would be two so the socially efficient level of street lights would be two
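The vertical summation in the street-light example can be sketched in a few lines; the marginal-benefit formulas below are my own linear fits to the numbers read off the video's graph.

```python
def mb_person1(q): return max(0, 6 - q)  # person 1's marginal benefit of one more light
def mb_person2(q): return max(0, 3 - q)  # person 2's marginal benefit of one more light

MARGINAL_COST = 5  # $5,000 per street light

for q in range(7):
    # public good: everyone enjoys the same light, so benefits sum vertically
    print(f"q={q}: collective MB = {mb_person1(q) + mb_person2(q)}")

# Efficient quantity: the largest q at which collective MB still covers marginal cost.
efficient_q = max(q for q in range(7) if mb_person1(q) + mb_person2(q) >= MARGINAL_COST)
print(f"socially efficient number of street lights: {efficient_q}")  # 2
```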
|
Microeconomics_entire_playlist
|
Scarcity_vs_Poverty.txt
|
in this video we're gonna discuss the difference between scarcity and poverty because scarcity and poverty are not the same thing these two concepts are not equivalent so if you remember the fundamental economic problem is that people have unlimited wants and desires but unfortunately in the world that we live in there are limited resources available to fulfill those wants right there's limited natural resources there's limited land and so forth there's only so much there are finite resources that are available to satisfy all of people's wants and so we say that we have a scarcity problem and scarcity refers to this inability to fulfill all these wants and desires that people have resources such as land are scarce now take the issue of energy for example if we think about electricity air conditioning things like that and let's say that you wanted everybody in the world even in the most rural remote regions to be able to have access to air conditioning to cheap energy for electricity and so forth right and we're striving toward that but the bottom line is there's limited resources available to meet all those wants even if you think well hey we'll use solar power and then we've just got an unlimited supply of energy from the sun right but then there's the question of solar panels how are we going to provide solar panels to everyone in the world hopefully one day we might be able to do that but right now the resources we have are limited and the more resources we use to make solar panels the fewer resources available to make other things right so we've got this world with limited resources there's only so many resources to go around it's scarce a world of scarce resources now with the example I just mentioned you might be thinking well this has to do with poverty because it's really a question of these people in these rural remote regions they don't have the money to pay for electricity for energy or to pay for air conditioning and in a lot of cases those people may be living in extreme poverty and so really you might be thinking this is just a question of poverty that scarcity and poverty are the same thing because you know people in wealthy nations maybe you think they don't face scarcity that it's really people who are poor below a certain income threshold who are the ones facing scarcity and I want to kind of dispel that notion and show you that even wealthy people or just people in general can face scarcity so let's just say that you're really really wealthy you have billions and billions of dollars and you want to become the world's greatest guitar player but you also want to be the world's greatest tennis player and the world's greatest chess player you're very very ambitious and you also in your spare time happen to manage a daycare center right so you're running around you've got all these children that you're managing in the daycare center meanwhile you're trying to learn how to play guitar and play chess and stuff on a computer or something so you're doing all these different things and no matter how wealthy you are no matter how much money you have even if you could hire a private tutor to get Garry Kasparov to help you learn how to play chess and a famous guitar player to help you learn to play guitar and so forth the bottom line is there's only 24 hours in a day you only have so much time available so we could say that you face a scarcity of time right so even though you might be wealthy and have billions and billions of dollars you only have a limited amount of time right so you have a scarce amount of time there's only so much you can do and you're not gonna live forever right if you were gonna live forever maybe you could go and do all of these things but because you face a scarcity of time you have to make trade-offs you have to decide okay if I'm gonna spend an hour practicing guitar that's an hour that I cannot spend practicing tennis or chess or maybe managing the daycare center and we're going to talk about trade-offs in economics in the videos to come
|
Microeconomics_entire_playlist
|
How_to_Graph_a_Change_in_Demand.txt
|
in this video I want to show you how to graph a change in demand but first it's good to understand why demand can change and so I just want to give you a few examples so demand could change if people's income increases or decreases if there's a change in people's tastes and preferences or if there's a change in the price of a substitute or a complement for the good so for example let's say that the price of natural gas is decreasing if the price of natural gas is going down then that's going to affect demand for coal and the reason is that coal and natural gas can both be used to heat people's homes and so they're substitutes for one another so if the price of natural gas is going down natural gas is becoming cheaper and so then people will demand less coal and so what does it mean to say that there's an increase in demand well basically if there's an increase in demand the demand curve is going to shift to the right and if there's a decrease in demand it's going to shift to the left so let's just say here's our graph and we've got our demand curve it's downward sloping so this is d1 and now if demand increases if we say there's a change in demand and demand increases what we're saying is that this is going to shift to the right and we're going to have an entirely new demand curve conversely if there were to be a decrease in demand it would shift to the left and we would have a demand curve that would be over here I want to give you an example that'll make it a little bit easier to understand so let's say that there's a very very famous musician in your country who begins wearing pink jeans right so they start wearing pink jeans everywhere they wear pink jeans to the Academy Awards to the Super Bowl everybody sees them wearing pink jeans and so they say hey I want to wear pink jeans too you could see where this would affect people's tastes and preferences so because this is a very famous musician this increases demand for pink jeans so I want to show you what demand is for pink jeans before this happens before the musician starts wearing these pink jeans we say that here's our quantity demanded this is our demand schedule so at different prices we can see what is the quantity of jeans demanded so at $100 let's say there are 12 units or 12 pairs of pink jeans or 12,000 however you want to think about it there are 12 demanded at a price of $100 so at $100 we'd be right here at 12 okay and then at a price of $200 the amount demanded would be 9 so that would put us right here and so this is our demand curve right here before the musician starts wearing the pink jeans but again we said that there was an increase in demand so what does that mean that means that this curve is going to shift to the right so how is it going to shift well at each of these prices now there is a higher quantity demanded so at $100 before they demanded 12 pairs of pink jeans but now it's 15 at $200 it was nine pairs demanded now it's 12 so it's increased afterwards after this musician has changed people's preferences so now they're demanding more pink jeans at every single price so now we need to shift our curve to the right and so what we're going to notice is that if we look at a price of $500 in the past they demanded zero so we ended up right here but now they're demanding three pairs of jeans so that's going to put us right here and actually if we were to map out all the new quantities demanded that would give us a new demand curve that is going to look like this so there's our new demand curve and I'm going to call that d2 so we remember that this is the second demand curve we had d1 was our first demand curve now look this has shifted to the right you see how we have shifted the demand curve so this is reflecting that we have had an increase in the demand for pink jeans right this is the market for pink jeans now you might be wondering you say okay well that's easy enough we have an increase in demand we shift the curve to the right but you might be thinking well what happens to the price of the pink jeans and so forth so to understand that what we're going to have to do is draw a supply curve I don't have all the quantity supplied and everything I'm just going to draw a generic curve just to give you an idea of how it would affect the price of the pink jeans so here's our supply curve I'll just call that s now our initial equilibrium was right here okay so we'll say that that is q1 and then here is p1 that's our initial price somewhere between $200 and $300 now because we have a new demand curve because we've now shifted to the right remember supply isn't affected supply just stays the same but we have a new demand curve and so now there's going to be a new equilibrium so let me change colors again the new equilibrium is going to be here right this point right here where the supply curve and d2 intersect so now we're going to have q2 should be about right here and then we're going to have p2 somewhere between $300 and $400 so now what you see is that p2 is greater than p1 what does that mean that means the price of the pink jeans has gone up and now again we're talking about the relative price relative to the average price of other goods and services pink jeans have become more expensive and we also see that q2 is greater than q1 see that and that means that basically demand has increased for pink jeans people want to buy more pink jeans because they saw that famous musician wearing the pink jeans and now they want some and then that has increased the price of the pink jeans as we have shifted our demand curve to the right
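A small sketch of the pink-jeans market: the two demand lines are fit to the video's schedule, but the supply curve is entirely hypothetical, so the exact equilibrium numbers are illustrative only; what matters is that both price and quantity rise.

```python
def demand_before(p): return 15 - 0.03 * p  # fits $100 -> 12, $200 -> 9, $500 -> 0
def demand_after(p):  return 18 - 0.03 * p  # fits $100 -> 15, $200 -> 12, $500 -> 3
def supply(p):        return -4 + 0.04 * p  # hypothetical and unchanged by the shift

def equilibrium(demand):
    # For these linear curves: intercept - 0.03*p = -4 + 0.04*p  =>  p = (intercept + 4) / 0.07
    p = (demand(0) + 4) / 0.07
    return p, supply(p)

p1, q1 = equilibrium(demand_before)
p2, q2 = equilibrium(demand_after)
print(f"before: P=${p1:.0f}, Q={q1:.1f}")  # ~$271, 6.9
print(f"after:  P=${p2:.0f}, Q={q2:.1f}")  # ~$314, 8.6 -- price and quantity both rise
```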
|
Microeconomics_entire_playlist
|
Gains_from_Exports_How_Countries_Benefit_from_Free_Trade.txt
|
in this video we're gonna talk about how a country can gain from exporting goods or services through international trade so let's take for example the market for copper in Chile so let's say that there's a market with no international trade at all we're just looking at Chile by itself so the equilibrium price of copper is $4,000 a ton in Chile without any kind of international trade right so we just say we've got our downward sloping demand curve our upward sloping supply curve and this is the supply in Chile and the demand in Chile for copper okay so we've got our equilibrium here we've got a price of $4,000 a ton and then we've got 2 million tons of copper as the equilibrium quantity now we've got our consumer surplus which is this blue triangle right here and then we've got our producer surplus which is this orange triangle right here I don't know if you can see that orange triangle so that's it now if we were to add those two together that would be our total surplus so our total surplus is this entire triangle right here the blue and the orange that's the total surplus for Chile with the market for copper now let's introduce the world price so the world price of copper is five thousand four hundred and forty dollars a ton now you'll see that that is higher than the equilibrium price of copper in Chile which is four thousand dollars a ton so because the world price is higher than the price in Chile Chile will export copper right so producers of copper in Chile have an incentive to say hey we have a comparative advantage in producing copper if we go out and look at countries other than Chile we can get five thousand four hundred and forty dollars a ton for copper so Chile is going to be a net exporter they're gonna export copper okay so now I want to show you how consumer surplus producer surplus and total surplus are going to change when we introduce the idea of trade and allow Chile's copper producers to trade on the global market so we said that the world price was five thousand four hundred and forty dollars a ton so we're gonna put that here it's gonna be higher remember it's higher than 4000 okay so I'll just call this w sub p that's the world price and that is 5,440 and it's up here because it's higher than the equilibrium price of 4000 right so what is gonna happen is we're gonna have some shifts when we look at the consumer surplus and the producer surplus so consumer surplus and I'm gonna keep the same colors here so I'm gonna go with blue again consumer surplus right here which I'm just gonna call CS this is consumer surplus so now it's just this tiny triangle so see before it was this big triangle and now it has shrunk consumer surplus has shrunk okay because now some of the consumers in Chile are having to pay a higher price for the copper so they're not demanding as much copper but here's the thing though there is a huge benefit to producers their surplus used to be this triangle here but now they not only have this but they also get this part which used to be part of the consumer surplus this part right here and then this part here is brand new surplus nobody had that before that's like if we drew a line right here this amount that was not consumer surplus or producer surplus before so that is new so basically there is a shift when we introduce the idea of trade and we allow the copper producers to export copper on the world market there's gonna be a shift some of the consumer surplus is going to go to the producer surplus and when we think about shifting from consumers to producers that doesn't change the total surplus right remember because total surplus is just going to be the consumer surplus plus the producer surplus so if we're thinking about whether the country is better off or worse off we care about new portions right in this triangle right here I don't know if you can see that let me show you this right here this part this is new that is new surplus and so Chile as a whole when we consider the gains to producers and the losses to consumers right because consumer surplus shrunk and here we've got I'll just put PS that's producer surplus when we think about the net effect this triangle here was there before but now we've got this new area it goes to producers but hey total surplus has increased and why is that the case let's think about it so what is happening at a price of five thousand four hundred and forty dollars a ton well let me kind of extrapolate this down so before the equilibrium quantity was two million tons of copper but now let's say here I'm gonna call this 1.5 and I'm gonna show you why these are relevant in a minute and just here that's 2.5 now at this world price of 5,440 the copper producers in Chile are willing to supply this is our supply curve right for Chile 2.5 million tons of copper right that's right here they're willing to supply 2.5 million tons of copper but at that price 5,440 Chilean consumers say look we'll only demand 1.5 million tons of copper right here so this gap because we have international trade that we're allowing in this example that gap is going to be exports so we can just take the 2.5 minus the 1.5 and that is going to give us 1 million so there's 1 million tons of copper that is being exported because at this higher price now remember what does the higher price signify it's that Chile has a comparative advantage relative to the rest of the world in producing copper right so they can go out on the world market and say hey we can sell this for 5,440 now Chilean consumers are saying look at that price we're only gonna demand 1.5 million tons but the producers are saying hey we'll supply 2.5 million tons but it doesn't matter because it's not like there's gonna be a surplus of copper left over because of the world market right this world price so the Chilean producers can say hey we'll go out and get this price we'll go out and sell that copper on the world market and so they end up exporting 1 million tons of copper to other countries and so we end up with this net benefit where we get this whole new triangle which is producer surplus but it's increasing the total surplus and as a whole Chile is better off because it's exporting this copper
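As a back-of-the-envelope check on the Chile example, here is a sketch with linear curves fit through the video's numbers (autarky at $4,000 and 2 million tons; at the $5,440 world price, demand of 1.5M and supply of 2.5M tons); the curves themselves are my own fit, not from the video.

```python
WORLD_PRICE, AUTARKY_PRICE = 5_440, 4_000

def qty_demanded(p):   # tons; line through (4000, 2.0M) and (5440, 1.5M)
    return 2_000_000 - (500_000 / 1_440) * (p - 4_000)

def qty_supplied(p):   # tons; line through (4000, 2.0M) and (5440, 2.5M)
    return 2_000_000 + (500_000 / 1_440) * (p - 4_000)

exports = qty_supplied(WORLD_PRICE) - qty_demanded(WORLD_PRICE)
print(f"exports: {exports / 1e6:.1f} million tons")  # 2.5M - 1.5M = 1.0M

# With linear curves the net gain in total surplus from opening to trade is the
# triangle between supply and demand from the autarky quantity to the world price.
gain = 0.5 * exports * (WORLD_PRICE - AUTARKY_PRICE)
print(f"gain in total surplus: ${gain / 1e9:.2f} billion")  # 0.5 * 1M * $1,440 = $0.72B
```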
|
Microeconomics_entire_playlist
|
Consumer_and_Producer_Surplus.txt
|
in this video we're going to talk about consumer and producer surplus so consumer surplus is when a consumer's willingness to pay for a good or service is higher than the price that they actually paid so let's say that you play the guitar and you find this guitar on eBay that you really really want and you'd be willing to pay up to $800 for this guitar but in the auction you end up actually getting the guitar and the actual price that you pay is $550 so you would have paid up to $800 but it only got bid up to 550 you were the winning bidder and so you see here that your willingness to pay exceeds the price that you actually paid and if we were to subtract 550 from 800 we would get $250 so we would say that you have $250 of surplus all right your benefit exceeded the price that you paid by $250 okay so now for producer surplus we're just thinking about the amount that the producer receives minus the cost of producing the good so let's say that there's a producer that makes drum kits right so they make drum kits for people to play the drums in a band and let's say that it costs them $300 to produce a drum kit but they sell the drum kit for 700 so now we see that their minimum cost that they needed to break even would have been $300 but they actually get $700 for the drum kit so we would say that the difference there which you can think of as profit or however you want that is producer surplus of $400 and so we can actually calculate the consumer surplus and the producer surplus for all the consumers and all the producers in the market right in this example I just gave the consumer surplus of one person we just talked about you buying a guitar on eBay but we could think about all the different consumers and producers in the market and we could calculate the surplus for both of those groups and then if we were to add the consumer surplus and producer surplus for everybody then we would have the total surplus that's basically produced by having transactions occur in the market okay so let me give you an example now let's say we've got our demand curve and this is not just one person's demand for whatever the good is in question let's just say that we're talking about surfboards so let's say that this is the market for surfboards and we're not just talking about one person's demand for surfboards we're talking about everybody so this demand curve tells us all the different amounts that would be demanded at different prices okay and then we've got our supply curve that's the amount that the producers of surfboards would be willing to supply at different quantities for different prices right so we've got our equilibrium right here and I've got a video on supply and demand if this is new to you but when the supply curve and the demand curve intersect that point is our equilibrium so I'm just going to put that's our equilibrium right there so in a free market if we just trace this out we have a quantity of let's say 100,000 surfboards being produced and then we have a price of $800 that's our equilibrium price and our equilibrium quantity now we can see that at this equilibrium price of $800 there were some people in the market who would have been willing to pay more than $800 right they would have paid more than $800 to get a surfboard for example right here at this point let's just say that that would be a price of $1,000 so there are people who would have paid $1,000 but they got the surfboard for the equilibrium price of $800 so each of those people got $200 of surplus and so how do we calculate all of the consumer surplus because some people would have been willing to pay 1,000 some people would have been willing to pay 850 etc but no one would have been willing to pay 1,200 or more because we see at a price of 1,200 the quantity demanded is right here at zero at a price of $1,200 nobody's demanding anything so really it's all this area here that is our consumer surplus that triangle is our consumer surplus so it's the area below the demand curve because again this is our demand curve right so it's all the area below the demand curve you see that but above the price here's our price $800 okay so this whole triangle is our consumer surplus and we can take the area of that we can actually calculate the number because we can see and let me get rid of the thousand here so I don't confuse you the points that we really care about if we're thinking about the area of a triangle so we'd say here is our consumer surplus it's going to be equal to 1/2 times the base times the height of the triangle okay and our base is going to be this amount here that distance and that's 100,000 minus 0 which is 100,000 so we're going to have 1/2 times 100,000 times the height and the height is just going to be 1,200 minus 800 which is 400 if you multiply all this out you get $20 million as the consumer surplus so that's our consumer surplus and now with the producers this supply curve we can think of as the minimum supply price right that's the minimum amount at each point given a certain quantity what is the minimum price that the suppliers would demand so here we want all the points below the price of $800 but above the supply curve okay so we want everything here this whole area is going to be our producer surplus I'll just label that there that's our producer surplus and so similarly we can calculate the producer surplus because again in this case it's just a triangle there'll be cases where we have some kind of weird situation going on and you might have to divide it up and calculate it differently it won't always be this easy but in this case it's just half times base times height so we're going to have 1/2 times the base which is again 100,000 times the height here of this triangle which is 800 minus 400 which is again $400 so in this case it just so happens to work out that the producer surplus and the consumer surplus are exactly the same $20 million but we're going to talk about all kinds of instances where the producer surplus and the consumer surplus are not the same we're going to have things like a quota on imports or a tariff or different things that are going to change where we have maybe some transfer where some of the consumer surplus is actually transferred to producers and then we're going to have some cases where we actually lose some value and nobody gets it right we'll have a situation where nobody's getting some section and we'll call that a deadweight loss and so basically if we want to add up the value being created by this market we can add the consumer surplus and the producer surplus together we can add these two amounts and that's $40 million so $40 million is the total surplus and now why is this relevant why do we even care what the surplus is because we can think about things like a tariff we can think about different things that a government can do and then we can say how does it affect the surplus because if there's some kind of government action like a price ceiling or something like that that comes in and takes away some of that surplus then we can think about wow we've lost value as a society and then we can also think about who are the winners and losers of a policy is it the consumers are they paying more are the producers losing and so forth and we'll talk about all that a lot more in the videos to come
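The surfboard arithmetic condenses to two triangle areas; this sketch just restates the video's numbers.

```python
def triangle_area(base, height):
    return 0.5 * base * height

EQ_PRICE, EQ_QTY = 800, 100_000
DEMAND_CHOKE_PRICE = 1_200  # price where quantity demanded hits zero
SUPPLY_MIN_PRICE = 400      # price where quantity supplied starts at zero

consumer_surplus = triangle_area(EQ_QTY, DEMAND_CHOKE_PRICE - EQ_PRICE)  # $20,000,000
producer_surplus = triangle_area(EQ_QTY, EQ_PRICE - SUPPLY_MIN_PRICE)    # $20,000,000
print(consumer_surplus + producer_surplus)  # total surplus: $40,000,000
```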
|
Microeconomics_entire_playlist
|
The_Production_Possibilities_Frontier_PPF.txt
|
in this video we're going to discuss the production possibilities frontier which is also known as the ppf so the ppf is simply a graph that shows all the different combinations of goods and services that an economy could produce given its current level of resources so let's take Japan and let's say that Japan has a choice between producing two goods it can produce cars it can produce trucks or it can produce a combination of the two right so Japan has to decide well how many cars and trucks do we want to produce let's say that Japan produced zero trucks right so trucks are here on the x axis so zero trucks would be right here and that would correspond to a level of 50 million cars so basically this is saying if Japan focused simply on producing cars and just completely said forget about trucks we're not even going to produce trucks just cars then Japan would be able to produce 50 million cars right here now you might say well why can't they produce 60 million why can't they produce 70 million because Japan is limited given its current level of resources right Japan only has access to so much steel to so much rubber remember when we talked about scarcity right so resources are limited there's only so many resources so it's basically saying look if Japan strictly focused on making cars the max that they could produce is 50 million cars okay now there are different combinations obviously Japan doesn't have to just say well we just want to do cars or we just want to do trucks they can have different combinations so basically all the points along this curve are different combinations of cars and trucks that Japan could produce for example let's say that they were going to produce 40 million cars and so that brings us to this point right here and let's say that that also corresponds to 40 million trucks so hypothetically this combination right here would correspond to 40 million cars and 40 million trucks so they could have 50 million cars and zero trucks that's right here but they say you know what another combination we could have is 40 million cars and 40 million trucks so we'll call these combination a and combination b or you can think of point a and point b so to go from point a to point b what are they giving up well they're giving up 10 million cars because they're going from 50 million to 40 million right they're going to produce 10 million fewer cars that's this difference right here but what are they getting in exchange well they're getting 40 million trucks right so if they go like this they can say wow we're getting 40 million trucks but we're only giving up 10 million cars and so that's the marginal cost of going from a to b the marginal cost would be 10 million cars so 10 million cars would get you 40 million trucks okay so you can think about that in terms of marginal cost or opportunity cost remember when we talked about opportunity cost so the opportunity cost of going from point a so let's say this is point a here to point b is that we're giving up 10 million cars but we're getting 40 million trucks right so now all these points along this curve are efficient in production and we're going to talk about all this more in future videos but we say that these points are efficient that means that at any of those points along the curve Japan could not produce an additional unit of either a car or a truck without giving up one of the other right so right here at point b for example they've got 40 million cars 40 million trucks they could not produce 41 million cars and 40 million trucks if they wanted to go to 41 million cars they would have to decrease their production of trucks right so when we say it's efficient in production we're saying that you couldn't produce more of one good at that point without giving up some of the other so let's go back to point a right here so at point a Japan cannot produce an additional car beyond 50 million it could not do that right it's just not able to do it so basically you can think about this in terms of opportunity cost and then we can think about the marginal cost now the marginal cost as we go along this curve will be increasing and when I say that the marginal cost is increasing what I mean is that look for us to go from 50 million cars to 40 million cars we give up 10 million cars but we get 40 million trucks but now let's say we want to go to 50 million trucks now we have to give up 40 million cars right so we gave up 10 million cars to get 40 million trucks but now the incremental cost the marginal cost to go from 40 million trucks to 50 million trucks to go from here to here we now have to give up 40 million cars to get to where we're producing an extra 10 million trucks and so we say that the marginal cost is increasing and that's why we have the bowed-out shape of the ppf the ppf looks like this and it doesn't look like just a straight line if it looked like this for example that would be just a straight line and we'd basically be saying that there is no increasing marginal cost the marginal cost is constant okay and that's possible but usually it's the case that it's bowed out like this because we say that as you produce more and more of one good as we produce more and more trucks some of the equipment that we have was better suited for producing cars and we start repurposing that to produce trucks and it gets more and more difficult right so it's harder at those extremes so basically the marginal cost is increasing and that results in this bowed-out shape of the ppf now I had said that it's efficient in production at any of these points along the curve now the points beyond the curve outside out here those points are not feasible it means that given the current level of resources we cannot get to any of these points no matter what we do even if we decrease our truck production to zero we cannot produce 51 million cars it's not possible at this point right so we just can't do it so those points are not feasible and then the points along the curve are efficient as we discussed now the points inside the curve are inefficient so that means that if we're at an inefficient point we're either wasting resources or we're misallocating resources right so let me give you an example let's say that we were looking at this point right here and let's say that this point corresponds to 30 million cars and 40 million trucks so let's take a look at 30 million cars and 40 million trucks now when we say this point is inefficient what it means is that look at 30 million cars and 40 million trucks we already know it's possible with our level of resources to produce 40 million cars and 40 million trucks right at point b here so we can produce 40 million of each so why would we produce 30 million cars when we could actually increase without giving up anything else right without giving up any trucks we could produce an extra 10 million cars we could go from here to here we can move that way without giving up any trucks so why wouldn't we be doing that right and so we say that that point is inefficient and the reasons might be that again maybe we're wasting resources maybe we're not using all the labor and capital that we have available to us now what economies will do over time is try and expand the ppf outward right so let's say there was a new technology that made it easier to make trucks then the curve might look something like that right we expand it outward and now maybe instead of producing the max 50 million trucks maybe it's 80 million right so technology and so forth and then also when we engage in trade that allows us to actually consume outside the curve at a point that's not feasible in production and we're going to talk about all of this more in the videos to come I just wanted to give you a general overview of the ppf
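A tiny sketch of the Japan example: only the three (trucks, cars) points come from the video, and the rising cars-per-truck cost between them is the increasing marginal cost behind the bowed-out shape.

```python
# (trucks, cars) combinations from the video, in millions
POINT_A = (0, 50)
POINT_B = (40, 40)
POINT_C = (50, 0)

def cars_per_extra_truck(frm, to):
    """Opportunity cost, in cars given up per additional truck, along the ppf."""
    return (frm[1] - to[1]) / (to[0] - frm[0])

print(cars_per_extra_truck(POINT_A, POINT_B))  # 10/40 = 0.25 cars per truck
print(cars_per_extra_truck(POINT_B, POINT_C))  # 40/10 = 4.0 cars per truck
```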
|
Microeconomics_entire_playlist
|
Government_Paternalism_vs_Libertarianism.txt
|
we've been talking about how the government can intervene with public policy when there's a market failure but we haven't asked the question yet should the government intervene with policy and so in this video I want to talk about two different views paternalism and libertarianism of thinking about the answer to this question so paternalism is sometimes kind of pejoratively referred to as having a nanny state but what it is is basically saying in certain situations the government might know best it might know better than citizens what's good for the citizens so just to give you an example so for the government to require that you wear a seat belt when you drive a car so where I live I'm required to wear a seat belt if I'm driving a car and if I'm not wearing a seat belt then I can get a fine I can get a ticket and have to pay money to the government as punishment for not obeying the rule that I'm supposed to be wearing a seat belt now not wearing a seat belt dramatically increases the chances that I would die if there's an accident right so this this is in a sense this is a good thing for me that the government is requiring that I wear the seat belt right so there's good intentions and so forth so you might just think hey obviously this makes sense and this should be a rule and there should be a fine but there's a few issues with that and and so one of the issues is what if when the government enacts its policy that policy doesn't actually work and and so you might be thinking hey how could this be a problem with requiring people to wear seatbelts how could that possibly be an issue but there's a couple of things that you want to think through with the policy whenever the government enacts a policy it might change the way people behave for example if people wear seatbelts research suggests that they actually drive faster so people drive faster when they're wearing a seatbelt why because they're less afraid of the risk of of what would happen to them if there is an accident it's a there's actually a whole literature on this it's called risk compensation so Bates because they now have a seatbelt they feel a little more safe and so they feel comfortable driving faster you see the same thing with anti-lock brakes so if you have an anti-lock brakes that helps you stop your car faster so if you have abs brakes people will actually tailgate more they'll be more likely to tailgate and drive closer to the bumper of the person in front of them if they have anti-lock brakes this isn't to say of course that just because people drive faster when they're wearing a seatbelt that we shouldn't require seatbelts or anything like that I'm not taking a position here I just want you to understand that sometimes when you implement a government policy you might have some unintended consequences and so also we need to think if the government is going to be intervening in the economy or doing things then the government can open itself up to be influenced by special interest groups right so people with a lot of money or with it can come in and basically you know by politicians in you know get influenced get access they can do all kinds of things and then change the policy that ends up getting enacted so then it's not so much the government is doing something because it knows best for the people but it's actually enacting something because these special interest groups have paid bribes or campaign contributions or so forth so once policy once you open the door of a policy you also open the door to special interest 
potentially. Another thing to think about, more philosophically, is that when the government comes in and says you need to wear a seat belt or else pay a fine, you could view that as the government subverting, or undermining, people's individual freedoms. On some level we should be free to do what we want with our lives, provided we're not doing harm to other people. By the same token, you can absolutely see the good intentions behind requiring seat belts. But this idea of individual freedom leads into another way of thinking about the question. Instead of the paternalistic view, that the government in some cases knows better than we do how to live our lives, there's libertarianism, where the logic is that the government should stay out of people's lives and respect their freedom. The idea is that we are free moral agents and our freedom is paramount: even if the government thinks it knows best, at the end of the day we are free and should be able to choose how to live our own lives without government interference. That's a brief sketch of libertarianism, but there's an issue with it too: what if your exercise of freedom affects somebody else? We might not think about this much with the seat belt example. Say there's some guy named Tommy, and Tommy doesn't feel like wearing a seat belt, and Tommy gets into an accident and dies because he wasn't wearing one. You could say that's Tommy's problem: Tommy made that decision, Tommy had the freedom to do so. We get into similar territory with things like whether Tommy is allowed to drink an extra-large soft drink, which some cities have proposed banning. All of those things, you might say, affect only Tommy. But what if Tommy's decision, his exercise of freedom, actually affects somebody else? Let's say Tommy lives in a house with a river behind it, and when Tommy does the oil changes on his car, he pours the used motor oil into the river. He doesn't use the river, it's convenient, and it's too far for him to drive into town, so he dumps the motor oil directly into the river. Now suppose there's a whole neighborhood of houses near the river, and some of the neighbors' children play in that river, and the children get sick because they're playing in water with motor oil in it. Tommy owns all his property, and let's even say he holds the waterfront rights. Tommy is exercising his freedom, but what about the freedom of those children to play in the river and not get sick? What we call this is an externality, specifically a negative externality. I'll have another video on negative externalities, and they can be
positive too, but in this case Tommy is exercising his freedom while imposing costs on somebody else, and he's not reimbursing that person for those costs. Now, this isn't to say that libertarianism is garbage and we have to throw out the whole idea and let the government get involved; there are different ways of dealing with this. One way is a government policy called a Pigouvian tax, which I'll talk about in another video: basically, you tax the activity that's generating the externality. But there's another way that doesn't require government policy. An economist named Coase came up with the idea of Coase bargaining: if both parties, this family and Tommy, have clearly defined property rights and transaction costs are low, then under those assumptions they can work out an agreement. Maybe the family says, look, we really don't like you dumping the oil in here, it's a problem, and Tommy says, I'm sorry about that; I'll pay you $25 if, when you go into town, you take the oil and dispose of it for me. So there are ways to respect people's freedom and still reach a socially efficient outcome. Just bear in mind that there's no clear-cut answer when we're thinking about government intervention with public policy; there are several different ways to look at it. On the one hand we want to respect people's individual freedoms, but on the other hand there can be problems when we ignore things like externalities or the existence of public goods and free-rider problems, things we'll talk about in videos to come
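To make the Coase bargaining logic concrete, here's a minimal Python sketch. The $25 disposal payment comes from the example above; the $40 damage figure and the assumption that paying for disposal is the cheapest fix are hypothetical, added just for illustration.

```python
# A minimal sketch of the Coase logic. The $25 disposal cost is from the
# transcript; the $40 damage figure is an assumption for illustration.

damage = 40          # yearly harm the dumped oil imposes on the neighbors
abatement_cost = 25  # cheapest way to stop it (pay someone to haul oil to town)

def outcome(right_holder: str) -> str:
    """With low transaction costs, dumping stops iff damage > abatement cost,
    regardless of who holds the property right; rights only decide who pays."""
    if damage > abatement_cost:
        payer = "Tommy" if right_holder == "family" else "the family"
        return f"Dumping stops; {payer} pays (at most ${damage}, at least ${abatement_cost})."
    return "Dumping continues; stopping it costs more than the harm it causes."

print(outcome("family"))  # family holds the right to a clean river
print(outcome("tommy"))   # Tommy holds the right to dump
```

The punchline of the Coase theorem shows up in the output: the efficient outcome (the dumping stops) is the same under either rights assignment; the assignment only changes who pays whom.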
|
Microeconomics_entire_playlist
|
Common_Resources_Common_Goods_in_Economics.txt
|
in this video we're going to discuss common resources in economics. A common resource is a resource with two characteristics: first, it's non-excludable, and second, it's rivalrous. Let me give you an example to explain what those terms mean. Take the ocean. It's very difficult to exclude people from using the ocean: if you're a country and you say, hey, we think there's a problem with overfishing, it might be difficult to prevent people from going into the ocean and fishing. That's what we mean by non-excludable. Rivalrous means that if somebody catches a fish out of the ocean, that's one less fish somebody else can catch. There isn't an unlimited number of fish in the ocean; there's a finite number, and if I catch one, that's one less fish that you or somebody else could catch. I want to talk about this in the context of lobstering. Say you want to catch lobsters for a living. The more boats you have, the more lobsters you can catch. The catch per boat will decrease as the number of boats increases, but even so, adding boats lets you catch more lobsters in total, so you'll keep increasing the number of boats until the return per boat equals your marginal cost. We can think of that as setting your marginal private benefit equal to your marginal cost; that's the decision rule you'll follow. But think about it: when you add more boats, you reduce the catch of the other boats. There are other people out trying to catch lobsters, and each additional boat you put in the ocean decreases the lobsters available to the other boats. Adding another boat imposes a cost on those other fishermen; it's a negative externality. I want to graph this out and show you the nature of the externality. Put price up the vertical axis (you can think of it as the value of the fish caught) and quantity on the horizontal axis, where the quantity is the number of boats that go out to catch lobsters. I want to show you the return to having a boat and how the marginal private benefit differs from the social benefit. First the marginal cost: let's say it's simply the cost of a boat, a flat line. Then say the average return to having a boat slopes downward; we can think of that curve as the marginal private benefit, and the reason I keep calling it the private benefit is that it differs from the social benefit, which I'll talk about in a minute. That downward-sloping curve is the average return per boat. Now here's why we have an externality: the marginal social benefit, which I'll label MSB, lies below it. The social benefit is the benefit when we think about society in general. This private
benefit is what you consider if you have a lobstering company: when you're deciding whether to get another boat, you look only at your own return per boat, how many more lobsters you'll catch with the extra boat, and you compare that to the marginal cost of a boat. So in a free market we end up where the private benefit curve crosses marginal cost; that's the equilibrium number of boats. Let's say it's a thousand, so there are 1,000 lobster boats. But from society's perspective, the socially efficient or optimal number of boats is lower. The free market will not produce the efficient number of boats, and the reason is the gap between the two curves. Your company says: I'll stop putting boats out there when the cost of doing so equals the benefit to me, because you're not going to keep getting boats if the benefit is lower than the incremental cost. But the equilibrium the free market produces is higher than the socially efficient quantity, call it Q*. From a social perspective, weighing all the costs and benefits to everyone in the country, it would actually be best if there were just 600 boats. The reason the socially efficient number is lower than what the free market produces is that when you decide to keep getting boats until your incremental benefit equals your incremental cost, you're not considering the cost to other lobstermen. You're not thinking: hey, by getting another boat out there, I'm just catching lobsters that would have been caught by the other boats anyway. You don't care about that, because you're thinking about your own return per boat; you're looking out for your own self-interest, and that's fine, but we need to understand there's a problem here. With a common resource there's now an incentive to overuse it, because people keep putting more and more boats out there. In this example there are 400 additional boats, the 1,000 minus 600; we have nearly double the number of boats we should have from a socially efficient perspective. That can lead to overfishing and deplete the lobster supply, and we've seen this happen in the real world, where fish stocks or lobster stocks are depleted. It's basically because of this situation: you have a common resource where it's difficult to restrict people from going into the ocean, but at the same time they create costs when they do. Each person gets an incremental benefit from having an extra boat, but they're imposing costs on the other people with boats as well, and so
we're going to overproduce: we have too many boats, the quantity is too high. There are a number of ways you could deal with this. One way that gets proposed, though some people don't like it, is privatization. If you privatize the common resource, the owner of the resource has an incentive to restrict access and make sure there's no overfishing; this is the argument that a privately owned lake, rather than a government-owned one, will be managed so it isn't fished out. Another way is issuing licenses for lobstering and the like: now people have an incentive not to completely deplete the fish or lobster stock, because the license itself has value. We'll talk about these methods in the videos to come, but I just wanted to give you a quick introduction to the idea of common resources, how they lead to overuse, and how the free market can end up with the resource being depleted
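Here's a minimal numeric sketch of the boats example, assuming a linear average return per boat, AR(n) = a − b·n, and a constant cost c per boat. The functional form and parameter values are my assumptions, not from the video (the 1,000 and 600 above were illustrative numbers); one consequence of the linear form is that the efficient fleet comes out at exactly half the open-access fleet.

```python
# A minimal sketch, assuming a linear average return per boat:
# AR(n) = a - b*n, with a constant cost c per boat. The functional
# form and parameters are assumptions chosen for illustration.

a, b, c = 110.0, 0.1, 10.0   # hypothetical parameters

# Open access: boats enter until the average (private) return equals cost.
n_open_access = (a - c) / b                 # solves a - b*n = c

# Social planner: maximize total surplus n*AR(n) - c*n, which means
# setting the MARGINAL social benefit a - 2*b*n equal to c.
n_efficient = (a - c) / (2 * b)             # solves a - 2*b*n = c

print(f"open-access fleet: {n_open_access:.0f} boats")   # 1000
print(f"efficient fleet:   {n_efficient:.0f} boats")     # 500
```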
|
Microeconomics_entire_playlist
|
Introduction_to_Microeconomics.txt
|
in this video we're going to discuss the subject of microeconomics. Microeconomics is concerned with studying the behavior of individual economic actors, and "individual" doesn't just mean one person: it can mean families, workers, communities, investors, companies, firms. We look at behavior: how do firms make investment decisions? How do individuals decide how much peanut butter and jelly they want to buy? How do people come together and form markets, how do companies and individuals decide on a price, and are these markets socially efficient? Will they produce the level of pollution and other things that is optimal from society's perspective? We'll end up with these things called externalities, which we'll talk about, where the free market does not produce the optimal level of some product or service. We'll also look at how these individual economic actors form industries, particularly with companies; we'll think about oligopolies, monopolies, and competitive markets, and how companies come together and form these industries. And we'll look at how the government's actions, such as imposing a tax, can affect all of these individual economic actors. Let me give you some examples of different microeconomic topics. At the household level you might ask: do households buy more peanut butter if the price of jelly decreases? We'll talk about things such as substitutes and complements; jelly is something that naturally goes together with peanut butter in a lot of countries. At the family level, setting aside money and buying things, we might think about the decision to have more or fewer children: as families' incomes increase over time, will people start having fewer children, or will they say, well, now we have more income, we can have more children? We can even think about substitutes for children, and so forth. From the firm's perspective, we can look at companies and say: if the government were to cut taxes, say get rid of the corporate income tax, will firms respond by hiring more employees? Will they invest that money in more machinery, just keep the money as cash, or pay it out as a dividend to their shareholders? With consumers, we can think about how they would respond to a tax. For example, if the government were to impose a tax on yachts (a yacht is just a luxury boat), how would consumers respond? By consumers we mean the buyers, the people who would normally buy the yacht. It's going to depend on whether their demand for yachts is elastic or inelastic, and we'll talk about what all those things mean, but basically the effect of a tax can depend on the type of good: if you tax milk, people might respond differently than if you tax a luxury item such as a yacht. We can also think about industries in general, not just individual firms: how does the
structure of, say, the agricultural industry affect something such as the price of corn? We're going to talk about all these topics and a lot more in the videos to come
|
Microeconomics_entire_playlist
|
Elasticity_of_Supply.txt
|
in this video we're going to talk about elasticity of supply. Elasticity of supply is a measure of how responsive the quantity supplied of a good is to a change in the price of that good: if the price of oil goes up by 5%, what happens to the percentage change in the quantity of oil supplied; how do oil producers respond? We calculate it by taking the percentage change in the quantity supplied and dividing it by the percentage change in the price of the good. Similar to the price elasticity of demand, if the result is greater than one, we say the elasticity of supply is elastic; if it's between zero and one, we say it's inelastic. Let me give you an example and show you how to calculate the elasticity of supply. Say we're talking about the market for natural gas, and the price of natural gas increases by 10%. In response, firms increase production (more fracking or whatever they're doing) to get more natural gas, and as a result the quantity of natural gas supplied increases by 15%. Now let's calculate the elasticity of supply with those figures: we take the percentage change in quantity supplied, 15%, and divide it by the percentage change in the price, 10%. That gives 1.5, so we'd say the elasticity of supply is elastic, because it's greater than one; it means a 1% change in the price leads to more than a 1% change in the quantity supplied. Now, the elasticity of supply depends on a couple of things. One is the substitutability of resources: when firms see the price increase, how easy is it for them to turn around and produce more natural gas? They might not have the resources to do so, or it might be really difficult; they might have to switch resources from something else they were doing in order to produce more natural gas. So it depends on the good in question and how easy it is to substitute resources. The other thing is the time frame, whether we're talking about the short run or the long run. If the price goes up 10%, it's not necessarily easy to say, okay, the very next day we'll increase natural gas production by 15%; maybe that takes a huge investment in resources and machinery, and maybe in the long run supply would be more elastic than in the short run. So again, it depends on the good or service: we have to think about how easy it is to shift resources to change production in response to the price change, and how easy that is in the short run versus the long run
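As a quick sanity check, here's the natural gas calculation in Python; the 15% and 10% figures are the ones from the example above.

```python
def elasticity_of_supply(pct_change_qty: float, pct_change_price: float) -> float:
    """Elasticity of supply = % change in quantity supplied / % change in price."""
    return pct_change_qty / pct_change_price

# The natural gas example: quantity supplied rises 15% after a 10% price rise.
e = elasticity_of_supply(15, 10)
print(e)                                   # 1.5
print("elastic" if e > 1 else "inelastic") # elastic, since 1.5 > 1
```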
|
Microeconomics_entire_playlist
|
Inefficient_Production_in_Economics_how_some_economies_misallocate_or_waste_resources.txt
|
in this video we're going to talk about inefficient production. Say we're looking at an economy that can produce two goods: food and steel. If it focuses entirely on food, it produces 100 million tons of food; if it focuses entirely on steel, it produces 50 million tons of steel. All the different points along this curve, our production possibilities frontier (PPF), are different combinations of food and steel, and as long as we're on the curve we say the combinations are efficient in production, or that we've achieved production efficiency, meaning we couldn't produce an additional unit of one good without decreasing production of the other good. For example, say we're at the point with 90 million tons of food and 40 million tons of steel. For us to go to 91 million tons of food, we would have to give up some steel; we can't reach a point outside the curve, that's not feasible. But you might be wondering about all the points in the interior. Points outside the PPF we know are not feasible: we can't reach them with the current level of resources we have. But what about the points on the interior? Aren't those feasible, you might ask? The answer is yes, they are feasible; we could reach any of them. But we don't want to: we want to be on the curve, because that's what's efficient in production. You might be thinking, why do we even care? Take the interior point with 40 million tons of steel and 60 million tons of food. What's wrong with that point? Well, without giving up any steel, we could move to the frontier point with 90 million tons of food and the same 40 million tons of steel. Why would we take 60 million tons of food and 40 million tons of steel when, without giving up any steel, we could get an additional 30 million tons of food? Obviously the frontier point is better, and that's why all the points along the curve are efficient in production. So you might be wondering: how would we ever end up at an interior point? It's certainly possible, yet it seems foolish; why would an economy choose a combination inside the PPF? There are a couple of reasons a firm or an economy could end up with inefficient production, and one of them is that resources are being misallocated. All the interior points are reachable with the resources we have in the economy, and by resources we mean people, machines, and so forth: all the things we use to produce steel and food. We have some level of resources, and the curve bounds how much food and steel we can produce given that level. But if resources are being misallocated, if they're not being used wisely, then we
could end up producing not at a point that's efficient in production but at one of those interior points, which are inefficient. I want to give you an example of inefficient production, of this misallocation. China in the late 1950s embarked on a program called the Great Leap Forward, and one of its goals was for the government to increase steel production. The government set quotas, so a lot of people, including farmers, had quotas for the amount of steel they needed to produce. What they did in some cases was actually melt down farm equipment to get scrap metal, so they could say, hey, we've got some steel here. What they were really making was pig iron (you can look that up if you're curious), and in a lot of cases the pig iron wasn't even usable; there were problems with it. You can think of this as a misallocation of resources, and millions of people ended up starving because there wasn't enough food. So you can think of the economy as sitting at an inefficient interior point: if, instead of melting down the farm equipment, they had used it to produce more food, maybe they would have had 90 million tons of food instead of 60 million, for example. (I don't know how accurate those numbers are; I picked them out of a hat.) The idea is that the resources weren't being used in the right way. We want to use each resource in the best way possible: if we have a bunch of farm equipment, that equipment is best suited to producing food; it's not well suited to being melted down into pig iron so we can try to increase steel production. That's a misallocation of resources, and it can put us at an inefficient point of production. Another cause is resources simply being wasted. When we think about resources being wasted, come back to what resources are: people, machines, buildings. In the United States, for example, there are a lot of vacant buildings, particularly in urban areas; in metropolitan areas you'll see vacant buildings or vacant land, land not being used for agricultural purposes (no food being grown there) and not being used for commercial purposes, just buildings and lots sitting there due to de-industrialization. Those are resources being wasted. You can even think of people being wasted: if there's really high unemployment, there are a lot of people who could be doing something, but for whatever reason the economy isn't using them. So you end up at some point inside the PPF where, theoretically, based on the resources you have, you could get to a point on the PPF: without giving up any steel you could actually get more food, or
vice versa. Because resources are being wasted, not all the resources are being used, or they're not being put to their best use; and in terms of misallocation, you can end up at one of these inefficient levels of production
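Here's a small sketch that classifies points as on, inside, or outside the PPF, using the food and steel numbers from the example above. One assumption to flag: the video never specifies the frontier's exact shape between its stated points, so this linearly interpolates between them.

```python
import numpy as np

# Frontier points taken from the example (plus the intercepts), as
# (steel, food) pairs: (40, 90) is ON the frontier, (40, 60) is interior.
steel_pts = np.array([0, 40, 50])
food_pts = np.array([100, 90, 0])

def max_food(steel: float) -> float:
    """Food output on the frontier for a given steel output. Linear
    interpolation between the stated points is an assumption."""
    return float(np.interp(steel, steel_pts, food_pts))

def classify(steel: float, food: float) -> str:
    frontier = max_food(steel)
    if food > frontier:
        return "not feasible (outside the PPF)"
    if food == frontier:
        return "efficient (on the PPF)"
    return f"feasible but inefficient: could add {frontier - food:.0f} food for free"

print(classify(40, 90))  # efficient (on the PPF)
print(classify(40, 60))  # feasible but inefficient: could add 30 food for free
print(classify(45, 90))  # not feasible (outside the PPF)
```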
|
Microeconomics_entire_playlist
|
6_Types_of_Market_Failures_in_Economics.txt
|
in this video we're going to discuss several different types of market failure. First off, a market failure is a case where the free market, left to operate on its own, produces an allocation of goods and resources that is not Pareto efficient. Remember from before: a Pareto efficient allocation is one in which we could not possibly make anyone better off without making at least one person worse off. With a market failure, we're in a situation where we could conceivably make a person better off without hurting anyone else; if we're in that situation, markets have failed, because letting free markets operate on their own hasn't produced a Pareto efficient outcome. Some people would say that implies a role for government intervention: the government could come in and implement a policy that gets us to the Pareto efficient outcome. Let's talk about some examples. First, an externality is the classic case of a market failure. There are two types: negative externalities and positive externalities. Let me start with a negative externality, because I think it's a little easier to understand. Take pollution. Say there's a factory that makes steel, and the process leaves outputs other than steel, some leftover chemicals, which the factory dumps into a river. There's a house up the river where children play in that water; they don't realize chemicals are being dumped into the river, they play, the children get sick, they have to go to the hospital, and their parents are paying medical bills. What is happening here is that one party, the steel company, is imposing costs on another party, the children and their family, without reimbursing the family. Now, if they had worked out some kind of deal, say the company said, look, we're going to be dumping chemicals in this river, so we'll give you $20,000 a year for the inconvenience, and the family said, we'll take that, heartless though it may seem, then it's not an externality. What's crucial is that the company, or individual, or whoever, is imposing a cost on another party without reimbursing them. A positive externality is different: think of something like a vaccine. With a vaccine, one party is giving a benefit, not a cost, to another party, but isn't reaping all of that benefit. Instead of vaccines generally, let's say specifically that I get a flu shot. When I get the flu shot, all the people who work around me, or people I encounter, have a lower chance of getting the flu because I got it. So I've given myself a benefit, but I've also given them a benefit, and they don't come pay me or say, here's a dollar, thank you for getting that flu shot. Because I don't reap all of the benefits that accrue to other people, I'll be less likely to get a flu shot; if I received not just the benefit to myself but all of the benefits that accrue to society from my getting that flu shot, I might be more likely to do
it. So when you have a positive externality, the good in question, whether it's a vaccine or whatever, will be undersupplied; and when you have a negative externality, the good in question, the steel or whatever is being produced that's causing problems, will be oversupplied, relative to what would be the socially efficient outcome. So that's externalities. A public good is another situation that can produce a market failure. The term public good means something very specific: a good with two qualities, non-rivalrous and non-excludable. I know those terms might be a little obtuse or hard to understand, so let me give you an example to make it easier. Take national defense, the military, and take the country of France (I'm not going to be able to draw France, that's a terrible picture, but let's say that's France). France is a country with a military. Now say one person from Spain decides to move to France. When that person moves to France, they are not making it harder for anybody else in France to get national defense; they're not interfering with the enjoyment that people who already live in France get from France's national defense and military. Compare that with something rivalrous, say an apple or a fish: if I eat an apple or a fish, that's one less apple or fish that somebody else could eat, so those goods are rivalrous. When this person moves to France, they're not preventing anyone who lives in France from enjoying the benefit of national defense; that's what non-rivalrous means. Non-excludable means that if this person moves to France, it would be very hard to say: our military will defend everybody except this person; if we get attacked, this person is on their own, but the military will protect everyone else in France. These two properties of a public good create an issue called the free-rider problem. Because of these two characteristics of national defense (and there are other examples), the person moving to France has no incentive to pay any money for national defense, unless you tax them and force them to, which means the government is getting involved. Voluntarily, this person would not pay for the public good; they will free ride. The reason is that they don't have to pay: you can't exclude them from getting national defense. If other people are paying for national defense and there's a military and France gets attacked, this person will enjoy the benefits of having the military whether they paid any money or not. Because of the way public goods are, they will either be undersupplied, so you won't have enough of them, or there won't be any supply of the public good at all. And so people would say the government can come in and play a role by taxing people, this person and everybody else, and forcing them to pay for the public good. Now, you will also have a market
failure when you have a monopoly. A monopoly is a situation where a single firm by itself supplies the entire market: there's just one firm you can buy the good or service from. What this really is is a failure of competition, a lack of competition, and here's why it's a problem. Under perfect competition, firms are what we call price takers: they have to take the price as given, no single firm can influence it, and firms end up setting price equal to the marginal cost of the product. But when there's just one firm, a monopolist, the monopolist can say: I have some influence; how much of this good I produce affects the price, so in a sense I can set the price by choosing how much to produce, and I will set the price higher than marginal cost. Now, the monopolist can't just pick an outrageous price that no one will pay, because of the elasticity of demand; we'll talk about that in the video on monopolies. For right now, understand that the monopolist can set a higher price than would be achieved were there competition, were there perfect competition. That's good for the monopolist, who as the producer captures more of the gains, but consumers lose out, and on a net basis society will be worse off. Because of the monopoly, the lack of competition and the higher price it can charge, we get a market failure. Sometimes you'll also hear the term incomplete market, some kind of issue with the market itself. This could be a number of things, but I want to give one example of a situation where a market wouldn't develop, or wouldn't completely develop: an insurance market. Insurance markets sometimes have problems, and the reason an insurance market could have a hard time developing in certain instances is something called adverse selection, along with another thing called information asymmetry. I'm going to explain each of these; they're very important concepts, and I'll talk about them more specifically in an entire video on the topic. But here's the insurance case. Let's say a healthy person says: you know what, I'd be willing to pay a premium of $200 a month for health insurance. And the insurance company says: wow, that's wonderful, we could actually provide that insurance for $150 a month. The insurance company could hypothetically make a profit of $50, so the market would develop, right? In a normal competitive market there'd be a market here, because there's profit to be made: people's willingness to pay is higher than the cost of actually producing the insurance product. So you'd think there'd be a market, but there's an issue. When the insurance company offers a $150 monthly premium for health insurance, the people who come to sign up might not be healthy. You might have some healthy people who say: hey, I would have been willing to pay $200 and I can get it for $150, I'll come get the insurance. But you
might have really sick people wanting to sign up for the insurance as well. There's nothing wrong with sick people wanting insurance, but maybe when the insurance company came up with that premium, it was assuming the customers would be of average health. The people who need insurance the most are the sick, which is an issue called adverse selection, and if mostly sick people sign up, the insurance company says: hey, we'd be losing money if we charge this premium, we'd better raise it. So they set the premium at $225, but then maybe some of the healthy people say: I was willing to pay $200, but I'm healthy and I don't want to pay $225, and they leave, and now you're left with an even higher proportion of sick people. The underlying problem is what's called information asymmetry: you know more about your own health than the insurance company does, and that's a problem for the insurance company in trying to figure out how to set your price. There are ways insurance companies can address this, offering deductibles and things like that, which I won't get into here, but it makes it difficult for the market to develop even though some people's willingness to pay is higher than the cost of the product. Sometimes you'll also have, not an incomplete market, but incomplete information in the market, where the market on its own does not supply enough information to consumers. That could be an issue with lending, say payday loans, where people don't really understand the interest rate they're paying, or with pharmaceuticals: if you go to get some prescription drug and you don't know what's in it, you might have some kind of an issue. So people say the government should come in: people don't really understand what's in drugs, it's very complicated, so the government should force these companies to provide a label that explains what's in the drug and what the side effects are, because the market on its own is not going to supply that information and consumers aren't savvy enough to really understand pharmaceutical drugs on their own. Also, sometimes people will say there's a market failure when you have just really, really high unemployment or inflation. Some countries in the 20th century, Germany, Zimbabwe, had inflation so high it was ridiculous, and some countries right now have unemployment higher than 25%; the United States during the Great Depression had unemployment around 25%, with people living in hobo camps. People would say that if there's some kind of serious macroeconomic problem where that many people want to find work and can't, that might be indicative of a market failure itself
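To see the adverse-selection spiral mechanically, here's a toy Python simulation. The $200 willingness to pay and the $150 starting premium are from the example above; the pool of applicants and their expected costs are made-up numbers chosen so the unraveling actually happens.

```python
# A toy adverse-selection spiral. The insurer reprices to break even on
# whoever remains in the pool, which drives out the healthiest buyers.

# (willingness_to_pay, expected_cost_to_insurer) per applicant:
people = [(200, 120), (200, 130), (210, 150),   # healthier applicants
          (260, 300), (300, 350)]               # sicker applicants

premium = 150.0
for _ in range(5):
    pool = [(wtp, cost) for wtp, cost in people if wtp >= premium]
    if not pool:
        print(f"market unravels: nobody buys at {premium:.2f}")
        break
    avg_cost = sum(cost for _, cost in pool) / len(pool)
    print(f"premium {premium:6.2f} -> {len(pool)} buyers, avg cost {avg_cost:6.2f}")
    if avg_cost <= premium:
        break                  # insurer breaks even; the premium can stand
    premium = avg_cost         # reprice to cover the (sicker) remaining pool
```

With these numbers, each repricing pushes out the healthiest remaining buyers until nobody is left, which is the sense in which the market fails to develop.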
|
Microeconomics_entire_playlist
|
How_to_Graph_the_Production_Possibilities_Frontier_PPF.txt
|
in this video we're going to discuss how to graph the production possibilities frontier, also known as the PPF. Let's say that you and a group of friends are stranded on a deserted island after a plane crash, and you have to make decisions about how much food to collect and how much clothing to produce; your production decisions are about food and clothing. You have a number of possible combinations: you can produce four units of food and zero units of clothing, you could produce three units of food and four units of clothing, and so on. Remember, the PPF is just a graph of the different combinations of goods or services that could conceivably be produced: along the curve are all the maximum combinations of food and clothing we could have. So let's graph this out. I'm going to draw a simple graph with food on the y-axis and clothing on the x-axis, and I'm making the x-axis a lot longer because clothing is going to go from 0 to 10. Food goes from zero up to four units, so we mark one, two, three, four on the y-axis; that's how you map one of the goods onto an axis. Then we map out clothing from 0 to 10 on the x-axis, which is why I've drawn a much longer line there: one, two, three, four, and so on (I know this isn't to scale; forgive me). Now we just plot the points. Four units of food goes with zero units of clothing, so we plot the point (0, 4) in (clothing, food) terms, right on the vertical axis. Then we have (4, 3): three units of food corresponds to four units of clothing. Two units of food corresponds to seven units of clothing; as we give up some food, we're able to get more clothing. One unit of food goes with nine units of clothing, and if we produce no food at all and just decide to starve, we could have 10 units of clothing, and wouldn't we be happy. Now we can draw our curve through those points. This is our PPF, our production possibilities frontier. Each point on it tells us the maximum amount of goods we're able to produce given our current level of resources; remember, we have limited resources, there are only so many people in our group, and maybe only so many coconuts or so much food on the island, so we have to make trade-offs. Now think about what happens as we move along the curve, starting from zero units of food. For us to go
to one unit of food, how much clothing do we give up? You can look at the table or the graph. From the table: if we produce no food and say, hey, we're starving, let's produce some food, then going to one unit of food costs us one unit of clothing. So the marginal cost of going from zero to one unit of food is one unit of clothing: we give up one clothing to get one food. But to get the second unit of food we give up two units of clothing. To get the third unit of food, going from two units to three, we give up three units of clothing (7 minus 4). And we give up four units of clothing to go from three units of food to four, because clothing falls from four to zero. So: one clothing for the first food, two for the second, three for the third, four for the fourth. That's why the PPF, as we've drawn it, has this bowed-out shape; it bows out because of increasing opportunity cost, or increasing marginal cost. When we're producing no food at all and dedicating everyone on the island to clothing, getting the first unit of food doesn't cost much clothing, only one unit; but the last unit of food costs a lot of clothing. Why might that be the case? Because resources are not equally productive. We might have somebody in our group who's a professional tailor and really good at producing clothing, while other people don't know anything about making clothing. If our entire group is doing nothing but making clothing and we say, let's get one unit of food, we don't give up much clothing, because maybe the person who switches to collecting food was terrible at making clothing anyway but is good at collecting food; so we give up only one unit of clothing for the first unit of food. But as we go along the curve, the marginal cost of each additional unit of food increases: to go from three to four units of food we give up four units of clothing, because by then maybe even the tailor, whose specialty is making clothing and who is much better at making clothing than at collecting food, has to be pulled off clothing production. So we see this bowed-out shape. It doesn't have to be bowed out: if resources were equally productive, say you could always substitute one unit of food for one unit of clothing, you would just have a straight line. For example, if we could produce either four units of clothing and no food, or no clothing and four units of food, we could have a straight line, giving up one unit of food for one unit of clothing and vice versa,
so hypothetically the curve could even look like that; the PPF could take a number of shapes. But in an economics textbook you're basically going to see the bowed-out shape. And because I had 10 units of clothing and only four of food, my graph looks a lot more stretched out than what you'll see in an econ textbook, where you'll probably see a simple graph with a nice curve. Again, that curve reflects the fact that we're generally not on a straight line, because resources are not equally productive: as we produce more and more of a good, as we produce more and more food, getting that last unit of food costs us a lot more clothing than the first unit did, and we'll talk about that more in the videos to come
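Here's a short Python sketch that reproduces the graph and the marginal-cost arithmetic using the exact combinations from the video (matplotlib is assumed to be available for the plot):

```python
import matplotlib.pyplot as plt

# The video's production combinations, as parallel lists of (clothing, food):
clothing = [0, 4, 7, 9, 10]
food     = [4, 3, 2, 1, 0]

# Marginal (opportunity) cost of each successive unit of food:
# the clothing given up to gain one more unit of food.
for i in range(len(food) - 1, 0, -1):
    lost_clothing = clothing[i] - clothing[i - 1]
    print(f"food unit {food[i - 1]}: give up {lost_clothing} clothing")
    # prints 1, 2, 3, 4 clothing -- the increasing marginal cost

plt.plot(clothing, food, marker="o")
plt.xlabel("units of clothing")
plt.ylabel("units of food")
plt.title("Production possibilities frontier (bowed out)")
plt.show()
```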
|
Microeconomics_entire_playlist
|
Cap_and_Trade_vs_Carbon_Tax.txt
|
in this video we're going to discuss the difference between a cap-and-trade policy and a carbon tax. Both of these policies are intended to address a negative externality: a situation where the firm's marginal private cost is lower than the marginal social cost of the carbon it's producing. There's a negative externality because the firm is producing a lot of pollution but isn't internalizing all of the social costs borne by people like me and you. The firm should be producing where the marginal benefit equals the marginal social cost; that would be our efficient level of carbon emissions. However, the firm is overproducing, producing at a higher level, because it's not internalizing the external cost of its carbon emissions. Cap-and-trade and the carbon tax are both intended to reduce the amount of carbon and get it to the socially efficient level, but the carbon tax uses a price mechanism: it creates a tax equal to the external cost, adjusting the price, and then assumes firms will naturally adjust the quantity downward because they now face a higher price. With cap-and-trade you're not setting the price; you're setting the quantity. You say: we're going to set the socially efficient quantity and cap emissions at that level, Qe, and then the price adjusts naturally. So the carbon tax is a price mechanism that leads to the efficient quantity, whereas cap-and-trade is a quantity mechanism that ends up yielding the efficient price; in each case we're trying to get to the socially efficient level of carbon emissions. But there are pros and cons to each, and I want to discuss the pros and cons of cap-and-trade and of the carbon tax. With cap-and-trade, the nice thing is that you can set the quantity based on science: you could say, look, scientists have told us that 200 million tons of carbon is the efficient level (or whatever the efficient level is, I have no idea) and set that in advance. With the carbon tax, you set the price and hope emissions will fall to the efficient level, but they might not: you might have set the price incorrectly, and firms end up producing more carbon emissions than you thought. With cap-and-trade, the advantage is you get to set that quantity, and you can do it based on science. And as things change over time, as the cost of abatement (the cost of cleaning up and reducing carbon emissions) changes, perhaps becoming cheaper and cheaper, you can adjust the number of permits you give out to firms for polluting: if we allowed this firm 50 thousand tons of carbon, but abatement has become cheaper, now we'll only allow it 25 thousand tons this year. Another nice thing is that the marketable pollution permits you're giving out create a unit of exchange that can go beyond just your country; ultimately it could be harmonized internationally, with credits traded on an international basis, and we can imagine a world in which we have that under cap-and-trade. Also, if you allocate the credits based on an auction, if you say, okay, we're going to
auction off the credits rather than just give them out, then you can actually raise money, raise revenue. But there are some drawbacks to cap-and-trade. One is that if you don't use auctions to allocate the credits, you have the issue of determining the initial allocation. We might say: firm A produced 200 million tons of pollution last year and firm B produced 600 million tons, so we'll give firm B more credits. But is that fair? What if firm B produces a lot more pollution because it hasn't invested in clean technology, while firm A produces less carbon precisely because it has been investing in clean technology? If you give more permits to firm B on the grounds that it needs them more, you're really rewarding bad behavior. And then there's the issue of politicians getting involved in allocating the credits: maybe they're pressured to give certain credits to firms that back them, and so forth. Also, you have to set up a market: with cap-and-trade we're setting up a market where firms can buy and sell credits, so now you've got to regulate the market, and there could be issues with it; the market could be manipulated, and so forth. One major drawback of cap-and-trade is that it can't be used for all types of pollution. Think about somebody's car: for your vehicle, it's easier just to have a rigid rule, an emissions standard, rather than setting up some cap-and-trade market where you and your next-door neighbor buy and sell emissions credits from one another; that would just be too difficult, there are too many people with too many vehicles. The same goes for things like heating your home: it's hard to think about setting up a cap-and-trade system for that. Now, the carbon tax is easier to administer in the sense that you don't have to set up a market; you don't have to create a market and say, here's this market we're going to set up where people can buy and sell; you just implement a tax. Of course, it's not easy to get a tax passed through legislation, and there will be arguments over what the tax rate should be, but it's easier to administer than setting up an entire market. It also avoids the volatility of having a market, which firms in particular will like. Another nice thing about the carbon tax is that it can generate a lot of revenue, billions of dollars, that can be used on other programs. So not only does the price mechanism of the carbon tax aim to deliver the socially efficient quantity of carbon emissions, but you also raise money that could be invested in R&D for new types of emissions-reducing technologies, or spent on early childhood education; you get that money on top of (hopefully) the socially efficient quantity of carbon produced. Now, some drawbacks of the carbon tax, and the major one is that it doesn't actually cap the emissions level. Remember, we said that in theory, by using a price mechanism, adjusting the price and forcing firms to internalize the external cost of carbon emissions, it
will adjust to the efficient quantity however there's no guarantee right if you didn't set the tax to the right level or did any number of things would happen and you end up with more emissions more carbon emissions than what is socially efficient and so the cap-and-trade has the nice thing of leaders say look I've just kind of set it at 20 million tons or whatever the amount is and then it's not going to exceed that cap but with the carbon tax you're using this price mechanism and ideally theoretically you can get to that that's socially efficient quantity but is there's no guarantee and then also you're not going to even though it's easier to administer the tax that than it is to set up an entire market for cap-and-trade and stuff there's still the issue of legislative battles of people fighting over what the tax rate should be who should be exempt and so forth so really the important thing to remember is that both the carbon tax and cap-and-trade er are both trying to bring about the socially efficient quantity of emissions and in theory and they can both do that but the question of which one is better might depend on the particular context right for sulfur dioxide for example they actually have set up basically a cap-and-trade and it's work that's worked pretty well but in some other cases you might say look it's a bit too difficult to administer cap-and-trade we should set up a carbon tax so the ease of administration can dictate which system would be better because ultimately they're equivalent in terms of what they're trying to achieve
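a minimal sketch of the symmetry described above, with entirely hypothetical numbers (the marginal-benefit curve, private cost, and external cost below are assumptions, not from the lecture): under a tax set at the external cost, firms choose the efficient quantity; under a cap set at the efficient quantity, the permit price settles at the external cost.

# hypothetical linear example of tax (price) vs. cap (quantity) mechanisms
MB = lambda q: 100 - 2 * q   # marginal benefit to firms of the q-th ton emitted
MPC = 20                     # marginal private cost per ton (assumed)
MEC = 30                     # marginal external cost per ton (assumed)

# unregulated: firms emit until MB(q) = MPC  ->  100 - 2q = 20
q_unregulated = (100 - MPC) / 2                 # 40 tons: too much

# carbon tax (price mechanism): set t = MEC, firms emit until MB(q) = MPC + t
q_tax = (100 - (MPC + MEC)) / 2                 # 25 tons

# cap-and-trade (quantity mechanism): cap at the efficient quantity;
# the permit price adjusts to MB(cap) - MPC
cap = 25
permit_price = MB(cap) - MPC                    # 30, the same as the tax

print(q_unregulated, q_tax, cap, permit_price)  # 40.0 25.0 25 30

either instrument lands on the same point; they differ in which variable is fixed by policy and which is left to adjust.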
|
Microeconomics_entire_playlist
|
Pigouvian_Taxes_in_Economics.txt
|
in this video we're going to discuss the concept of a Pigouvian tax in economics. remember that a negative externality can create a market failure: one person imposes costs on another person without reimbursing them for those costs. say you have two roommates, one a smoker and one who doesn't like smoke or is worried about secondhand smoke. the smoker, by smoking, imposes costs on the roommate, and if the smoker isn't reimbursing or paying the roommate, and they don't have some kind of agreement, we get what's called a socially inefficient outcome, otherwise known as a market failure. a Pigouvian tax is a corrective tax set equal to the marginal external cost. that's a little wordy, but the marginal external cost just means the cost to people other than the smoker. the marginal private cost is what the smoker weighs ("if I smoke this, x, y, and z may happen to me"), while the external costs are the costs to everybody other than the smoker, for example the roommate. so if you set a corrective per-unit tax, say on every pack of cigarettes the smoker smokes, equal to the external cost per unit (the cost the roommate incurs for every pack smoked), you can actually bring about the socially efficient outcome, where the smoker smokes the socially efficient number of packs. I want to graph this, but let's use a different example: traffic congestion, which can also lead to a negative externality. when there's traffic congestion in a city, an additional person deciding whether to drive to work or take public transportation will weigh their private costs, fuel and so on, but they won't think, "if I start driving, that's an additional car, and congestion gets worse, so other people take longer to get to work." people think about their own commute time and their own cost of fuel, not other people's commutes. so let me graph how this leads to a negative externality and what role a Pigouvian tax can play. instead of a quantity of "traffic congestion," let's measure miles driven and set the Pigouvian tax on miles driven; basically, the more miles people drive, the more traffic congestion you're going to have. so the quantity axis is miles driven, and we've got our marginal social benefit curve, which we can think of as the demand curve for miles driven. there is some social benefit from people driving cars and trucks: they need to transport food, there are commercial purposes and private purposes, people need to go to the hospital, and so on. but the more people drive, the less and less social benefit each additional mile has: when you're thinking, "I'll just drive to the convenience store instead of walking," there's not much social benefit there. we can also map out the marginal social cost curve. the marginal social cost equals the private cost to the person weighing the decision to drive plus the external cost, the cost to everybody else in terms of increased traffic congestion and so forth (there can be other costs to driving more miles, like increased vehicle fatalities, but let's focus on congestion to make things easier). now let's map out the marginal private cost of the individual deciding whether to drive; they're just thinking about their own costs, the cost of fuel and so forth. you can see there's a difference between the marginal social cost and the marginal private cost. absent any Pigouvian tax, the equilibrium will be where marginal private cost meets marginal benefit; call that quantity Q'. this is a socially inefficient outcome, but if free markets reign and we do nothing, this is the equilibrium number of miles driven; let's just say it's 100 million miles. now, what's socially optimal? from a social standpoint, we want the quantity where marginal social benefit equals marginal social cost, considering everybody, not just that one individual; that's where the marginal social benefit curve intersects the marginal social cost curve, and that is the socially efficient equilibrium. but that's not where we are, unless we have the Pigouvian tax. what the Pigouvian tax does is close that gap: we set a tax equal to the vertical difference between marginal social cost and marginal private cost at the efficient quantity. let's say, for example, that marginal social cost is $75 there and marginal private cost is $40; the tax would be the difference, $35. I'm just throwing numbers out, I don't know if any of that makes sense in reality, but that's the amount of the Pigouvian tax. it's a corrective tax, and what you're doing is forcing the individual to say, "I can't just think about my private cost, because now this tax has been added on." the tax is a per-unit charge for the cost you are imposing on other people. the government steps in and says, "there's a market failure; we're going to charge individuals for that external cost, so they consider not just their private cost but also the external cost." you could charge per mile driven; $35 per mile sounds ridiculous (maybe I should have chosen much lower numbers), but you get the idea: you charge for the activity that generates the externality, and that per-unit Pigouvian tax forces the person to internalize the external cost. now they're not just weighing their own costs and benefits; they're also taking society into account. there's another thing to think about: what's called the double dividend, or double benefit, of the Pigouvian tax. Pigouvian taxes have been applied to a lot of things; we already talked about cigarettes and traffic congestion, and people even talk about using them for things like capital volatility. there are a lot of different proposals for using Pigouvian taxes to bring about a socially efficient outcome. the second benefit, the double dividend, is that we're also raising tax revenue: the shaded rectangle on the graph, the tax per unit times the quantity, is tax revenue (I know it's a little hard to see, I made this a little ugly, I apologize). so now the government could say, "with this tax revenue we'll provide wider lanes on the highway, which will also help reduce traffic," or give it to early childhood education, or whatever it wants. the idea is you're getting two benefits: you're bringing about the socially efficient outcome, and you're also generating tax revenue that can be used for some other purpose. the Pigouvian tax is a similar idea to the carbon tax you might have heard of, but with a difference. if somebody produces steel, and the process of producing steel generates a lot of carbon or greenhouse gases, a Pigouvian tax would be levied directly on the steel output itself, whereas a carbon tax is what's called an effluent tax: you're taxing the emissions rather than the actual output. so it's a little bit different, but a similar thing. now, sometimes people will say, "Pigouvian taxes are nice, but to set the tax we need to know the marginal external cost, we need to figure that out, and if we don't know it, how can we really set the tax?" and some other people say, "if we know the external cost, then we also know the optimal quantity, the socially efficient quantity, and if we know that, why don't we just set the quantity and say that's the total amount of miles driven, or carbon, or whatever, that we're going to allow?" that basically leads into the idea of marketable permits, otherwise known as cap-and-trade, and we'll talk about that in the videos to come.
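a minimal sketch of the graph above with hypothetical linear curves (the slopes and the constant $35 external cost are assumptions chosen only to echo the lecture's $35 tax): it computes the unregulated quantity, the efficient quantity under a tax equal to the marginal external cost, and the double-dividend revenue.

# hypothetical Pigouvian tax example: miles driven
MEC = 35.0                            # marginal external cost per mile (assumed constant)
msb = lambda q: 100.0 - 0.5 * q       # marginal social benefit (demand) per mile
mpc = lambda q: 0.5 * q               # marginal private cost per mile

# private equilibrium: MSB(q) = MPC(q)  ->  100 - 0.5q = 0.5q  ->  q = 100
q_private = 100.0                     # too many miles driven

# socially efficient quantity: MSB(q) = MPC(q) + MEC  ->  q = 65
q_efficient = 65.0

# a tax t = MEC makes drivers face MPC(q) + t, so the market lands on
# q_efficient, and the tax also raises revenue (the double dividend)
tax = MEC
revenue = tax * q_efficient           # 2275.0

print(q_private, q_efficient, revenue)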
|
Microeconomics_entire_playlist
|
How_to_Graph_the_Marginal_Benefit_Curve.txt
|
in this video we're going to discuss how to graph the marginal benefit curve. when we built our production possibilities frontier earlier, we were identifying the maximum amounts of food and clothing that an economy could produce given its currently available resources. we used an example where we were stranded on an island and had to decide with our group how much food and how much clothing to produce. we plotted the points and got our little production possibilities frontier; the curve shows all the different combinations, the maximum amounts of food and clothing we could produce with our current resources. we said that every point along the curve is efficient, and what we mean by efficient is, for example, at the point (3, 4), where we're producing three units of food and four units of clothing, we could not produce a fourth unit of food without giving up at least one unit of clothing. all the points in the interior of the curve are inefficient; we don't want to be there. now the question is: if every point along the curve is efficient, how do we decide which one is best? we have all these different combinations; do we want this point, or this one, or that one? we decide by looking at consumers' preferences, which brings us to what we'll call allocative efficiency. when we constructed the PPF we were talking about efficiency in production: at each point on the curve we couldn't produce an additional unit of food without giving up at least a unit of clothing. every point along the PPF is efficient in production. allocative efficiency says there's some point along the curve that corresponds to people's preferences, the bundle of goods they want the most. we might make guesses: for the bundle of zero food and ten clothing, even though we could produce it, people probably aren't going to be happy in a world with zero food. if you know how to use indifference curves, you could draw the indifference curve that is tangent to the PPF; that's a little more advanced and we'll talk about it later. I want to show you a simpler way, using marginal benefit. if we know people's marginal benefit, their willingness to pay for an additional unit of food, and we know the marginal cost curve, which we derived from the PPF in our last video, then we can find the point where the marginal benefit is equal to the marginal cost. where does the marginal benefit of a unit of food equal its marginal cost? that's going to be our socially efficient, optimal level of food and clothing. that's a little abstract, so let me show you. if you haven't watched it, we made a video where we looked at the marginal cost from the perspective of producing food: going from zero units of food to one unit has a marginal cost of one; going from one unit to two has a marginal cost of two; and so forth. we have this increasing marginal cost curve, and that explains why the PPF has its bowed-out shape: resources are not all equally productive for each good. some people are better at producing clothing than food, and if we have everybody producing clothing, we're not getting much benefit from that last unit of clothing and we're giving up a lot more in terms of food. we went through all that before. now I want to show you the marginal benefit curve, which I'll abbreviate MB. marginal benefit is willingness to pay. we can ask the people on the island, "if you had zero units of food, what would you hypothetically be willing to pay to go from zero to one unit of food?" and let's say they answer: five units of clothing. so at zero food, the marginal benefit, the willingness to pay for an additional unit of food, is five units of clothing, and we plot that point. now, bear in mind: this information cannot be derived from the PPF. we got the marginal cost curve from the PPF, but we cannot get the marginal benefit information from it; if you're in an economics class, somebody has to give you this information. continuing: when we have one unit of food, the incremental benefit of one more unit is four units of clothing; at two units of food, we'd be willing to give up three units of clothing; at three units of food, two units of clothing. this is based on people's preferences. so now we can draw the line, and you see that it is decreasing. that's interesting, because when we made the marginal cost curve, marginal cost was increasing, but the marginal benefit curve is generally decreasing. the idea is that people like variety, so as you get more and more of a certain good, an additional unit is not as valuable to you. so we've got this decreasing marginal benefit curve: when people have no food at all, they'd be willing to pay a lot to get some, but by the time they have three units of food, they'd only give up two units of clothing to get an extra unit of food. now here is where this becomes important, because this is how we decide which point we want along the PPF: we look for where the marginal benefit equals the marginal cost; that's the point where we want to be. so now we need our marginal cost information. remember that when we had two units of food, the cost of one extra unit was three, and looking at the marginal benefit, when we're at two units of food the marginal benefit is also three. so at two units of food, marginal cost equals marginal benefit. think about why this is the efficient point. suppose we went to three units of food instead: at three units, the marginal benefit would be two and the marginal cost would be four, so the marginal cost would actually be higher than the benefit. that's why the bundle we prefer the most, even though every point is efficient in production, is two units of food, and that corresponds to seven units of clothing on the PPF. so of all the different points along the PPF, we'd prefer two food and seven clothing; that's the most preferred bundle, and if we had an indifference curve it would be tangent to the PPF right there. at that point we're not only efficient in production, we're also allocatively efficient.
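a minimal sketch using the numbers from this lecture: tabulate the marginal cost and marginal benefit of each successive unit of food and find the quantity where they meet.

# marginal cost and benefit of the NEXT unit of food at q = 0, 1, 2, 3
mc = [1, 2, 3, 4]   # clothing given up (from the PPF)
mb = [5, 4, 3, 2]   # willingness to pay in clothing (from preferences)

efficient_q = next(q for q in range(4) if mb[q] == mc[q])
print(efficient_q)  # 2 -> the bundle (2 food, 7 clothing) on the PPF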
|
Microeconomics_entire_playlist
|
Elastic_vs_Inelastic_Demand.txt
|
in this video we're going to discuss the difference between elastic and inelastic demand. when we talk about elastic and inelastic demand, what we're really talking about is price elasticity of demand, so let me give you a quick review. price elasticity of demand is a measure of how responsive consumers are to a change in the price of a good or service, and we calculate it with this formula: the percentage change in the quantity demanded of the good or service divided by the percentage change in its price. if the formula gives us a number greater than one, we say demand for the good is elastic, which means customers are very responsive to a change in the price of the good or service. if the price elasticity of demand equals one, we say demand is unit elastic: if the price goes down by 1%, the quantity demanded goes up by 1%, and 1% divided by 1% is 1. however, if the formula gives us a number less than one, we say demand for the good is inelastic, meaning customers aren't that sensitive to a change in the price. let me give you an example. pretend you run a grocery store and you're thinking about increasing the price of milk, or of a candy bar, by 10%, and you want to know what would happen in each scenario. take milk first. if you increase the price of milk by 10%, you predict that quantity demanded will decrease by 2%, and you want to know the elasticity. the percent change in price goes in the denominator and the percent change in quantity demanded goes in the numerator; and remember, it doesn't matter that it's negative, you ignore the sign, since we know an increase in price decreases quantity demanded. that gives us 0.2 for the price elasticity of demand for milk, and because 0.2 is less than one, we say demand for milk is inelastic. customers are not that price sensitive, at least for this price change: a 10% increase in the price of milk doesn't make people say, "that's it, I'm not drinking milk anymore," and cut quantity demanded by 50 or 60%; quantity demanded only goes down by 2%. they're relatively insensitive to the price of milk; they may see it as a necessity, something very important. however, if you increase the price of the candy bar by 10%, you predict quantity demanded will decrease by 30%. so for the candy bar, the numerator is the 30% change in quantity demanded and the denominator is the 10% change in price, and 30% divided by 10% is 3. because 3 is greater than one, we say demand for this candy bar is elastic. that means that if we were to do some kind of price change with this candy bar, we should expect our customers to be very responsive. when demand for a good is elastic, customers react strongly to a change in the price: you increase the price by 10% and quantity demanded doesn't fall by just 10%, or by less than 10% as with the milk (which only fell 2%); it falls by 30%. maybe people don't see a candy bar as much of a necessity as milk; maybe they see it as a luxury, or there could be any number of reasons. but the point is we can calculate the price elasticity of demand for each good, and the number alone tells us a lot. even if you didn't know the good was milk and you just saw a price elasticity of demand of 0.2, you could say: that's less than one, so demand for this good is relatively inelastic, and a price change isn't going to have much effect on the quantity demanded. and if you didn't know the other good was a candy bar but saw an elasticity of 3, or 5, or 10, anything higher than one, you'd know demand for that good is elastic, so consumers are very responsive to a change in the price of that good.
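a minimal sketch of the calculation just described, using the milk and candy bar numbers from the example.

# price elasticity of demand = |% change in quantity| / |% change in price|
def price_elasticity(pct_change_qty, pct_change_price):
    # signs are ignored by convention: a price rise lowers quantity demanded
    return abs(pct_change_qty) / abs(pct_change_price)

milk = price_elasticity(-2, 10)    # 0.2 -> less than 1: inelastic
candy = price_elasticity(-30, 10)  # 3.0 -> greater than 1: elastic

for name, e in [("milk", milk), ("candy bar", candy)]:
    label = "elastic" if e > 1 else ("unit elastic" if e == 1 else "inelastic")
    print(name, e, label)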
|
Microeconomics_entire_playlist
|
Gains_from_Imports_How_Countries_Benefit_from_Free_Trade.txt
|
in this video we're going to talk about how a country can gain from importing goods through international trade. let's take the market for steel in the United States, and assume that before any international trade the equilibrium price of steel in the US is $562 a ton. we've got price on the vertical axis, our downward-sloping demand curve for steel in the US, and our upward-sloping supply curve of steel in the US. that gives an equilibrium price of $562 a ton and an equilibrium quantity of 10 million tons of steel. we have our consumer surplus, the blue triangle, and our producer surplus, the orange triangle, and the total surplus is the entire triangle: consumer surplus plus producer surplus. that's the situation without trade. now let's introduce the prospect of trade, because there's a difference here: the world price of steel is $350 a ton. if we were thinking only about the US with no international trade, the equilibrium price would be $562 a ton, yet the world price is $350 a ton. because the world price is cheaper than the equilibrium price in the US, the US is going to be a net importer of steel; it would be cheaper to buy it from suppliers on the world market. so I've shown you how the market looks without trade; now let's see what happens when we introduce trade. our domestic equilibrium price is 562, but the world price, which I'll label WP, is 350, so I'll draw a horizontal line at 350. with trade, the consumer surplus and producer surplus are going to shift a bit. the consumer surplus is now the much larger blue area above the world-price line, which I'll label CS, while the producer surplus shrinks to the tiny orange triangle below the world price; we're thinking about producers in the United States here, so I'll label it PS for US producers. notice several things. one is that the consumer surplus for the US has increased considerably, and it's increased for a couple of reasons. part of it is a shift from producers to consumers: going back to the no-trade picture, the area I'm marking in red has been shifted from producers to consumers; it used to belong to producers as producer surplus, but now it's consumer surplus. that's just a transfer, and it doesn't affect the total surplus. what does affect the total surplus is the new triangle between the demand curve, the supply curve, and the world-price line; all of that is new surplus that's been added, so the total surplus has increased. now, US steel producers are worse off; they have a much tinier sliver. but we don't care only about what happens to producers; we care about the total surplus, consumer plus producer surplus, and the total surplus has increased. producers obviously won't be happy about that, but it's a fact. the gap at the world price is the amount of imports, so let me put some numbers to it to make it easier to understand. at $350 a ton, US consumers demand 16 million tons of steel, but US steel suppliers are only willing to produce 4 million tons. so at the world price, the demand for steel in the US exceeds the amount US suppliers are willing to produce: consumers demand 16 million tons, producers supply 4 million, and the difference, 16 minus 4, is 12 million tons of imports, bought on the world market from countries outside the US that have a comparative advantage in producing steel; that's why it's cheaper to buy steel on the world market. so that gap between 16 million and 4 million, the 12 million tons, is imported steel, and we see there's an increase in the total surplus. US consumers do great; look how huge the consumer surplus is now, that whole triangle. and even though producers are left with just a tiny sliver and US steel producers lose out (which gives them an incentive to lobby and say, "maybe we want a tariff or something like that"), the US as a whole is better off, because this new area of surplus has been created by imports. when we consider the cheaper prices to consumers and so forth, the US as a whole is better off by importing steel.
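a minimal sketch putting the video's numbers together: imports are the gap between domestic demand and domestic supply at the world price, and (assuming the curves are linear, which the video draws but never states) the new surplus from trade is the triangle between them.

# numbers from the video: autarky equilibrium at $562/ton; at the $350
# world price, US demand = 16M tons and US supply = 4M tons
world_price = 350.0
imports = 16.0 - 4.0                      # 12 million tons bought abroad

# with linear curves, the new surplus is a triangle:
# base = imports, height = autarky price minus world price
gain_from_trade = 0.5 * imports * (562.0 - world_price)
print(imports, gain_from_trade)           # 12.0 and 1272.0 (million $, ~$1.27B)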
|
Microeconomics_entire_playlist
|
Equity_vs_Efficiency_in_Economics.txt
|
in this video we're going to discuss the difference between equity and efficiency in economics. when we say that an allocation of goods in the economy is efficient, what we're really talking about is Pareto efficiency, which means we're at a point where no one can be made better off without making at least one person worse off. if there's no way to change how the goods are divided that makes someone better off without hurting at least one other person, we're at a Pareto efficient allocation. however, a Pareto efficient allocation might not be an equitable allocation, and by equitable we really mean fair: is the way the goods are distributed in society fair to all the people in society? let me give you an example using the market for cars. say we live in a small community where a hundred cars are produced, and the community consists of Jay Leno and everybody else; everybody else is all of us, and then there's Jay Leno. the curve I've drawn is the set of Pareto efficient allocations; we can think of it as a production possibilities frontier, the set of all allocations that are efficient in production. but suppose we happen to be at the point where Jay Leno gets all one hundred of the cars and everyone else gets zero. because that point lies on the PPF, it is efficient in production: we could not give everyone else a car without hurting Jay Leno. there's no allocation where Jay Leno still gets a hundred cars and everyone else gets two or three cars; at this point, the best that can happen without hurting Jay Leno is that everyone else gets zero cars. so although that's efficient, we'd say it's not equitable, it's not fair. people would say, "wait a minute, why does Jay Leno need a hundred cars? I know he has a big garage, but what about everybody else?" so you might say, let's have the government come in and force Jay Leno to transfer some of his resources. call the initial point X, and say we'd really like to get to point Z over here, which would be efficient in production too. but as part of the transfer, let's say we end up somewhere inside the PPF, at a point Y. point Y is inefficient in production, and the reason is that we're wasting some resources. suppose we attempt to transfer six cars: the government says Jay Leno will get ninety-four and everyone else will get six, but through the process everyone else ends up with only four or five cars. you say, "wait, I thought the government transferred six?" this problem is sometimes referred to as the leaky bucket, and it's an issue with government transfers. the reason is that when you do a transfer, you change people's incentives. think about it in terms of income and the number of hours people work, instead of cars: when you transfer income from someone who's wealthy to someone who's not as wealthy, you've weakened the wealthy person's incentive to earn. I'm not saying they'll stop working, but you've changed their incentives, and as you transfer resources away from somebody, you change their incentives. that's why we say taxes are distortionary: unless they're lump-sum taxes, most taxes are distortionary in that they change people's incentives. the people whose income (or cars) is being transferred away now have a weaker incentive to work and earn money, and the people receiving the transfer also have less of an incentive to work. now, that's not to say we shouldn't have any transfers; the incentive problem doesn't mean we just let Jay Leno have all hundred cars. this really gets to the heart of public sector microeconomics: the equity-efficiency trade-off. we say, "it's not fair for Jay Leno to have a hundred cars, we know that, but how many cars are we supposed to transfer? what will the loss be? how will it change people's incentives? what's the best way to achieve this? do we try to transfer fifty cars, and if so, what happens? how much equity do we need? do we want everybody to have exactly the same number of cars?" some people would say that's too extreme, that now we're getting into communism or socialism. so this is a trade-off you should be aware of, and there are a couple of ways to think about it. some people say: instead of thinking about this transfer business, think about how to grow the pie. picture the economy as a pie, or a pizza, with different slices, and ask whether there's some way to make the whole pie larger. with our PPF we're assuming technology is fixed, but that's just in the short run; in the long run technology is not fixed, and we can come up with new technology and so forth. if we make the pie larger, we might end up in a situation where Jay Leno has a hundred and fifty cars and everyone else gets fifty. wouldn't that be great? it's kind of a win-win: it's still somewhat inequitable, and you can still ask why Jay Leno has or needs so many cars, but we've helped both groups in the long run. so some people say we need to grow the pie, because if we can grow the pie, everybody is better off. other people say that's nice in theory, but we might have difficulties growing the pie, or there are people struggling right now, so we need to reallocate, to figure out a different division, because at the moment a huge part of the pie is going to Jay Leno and only a small part to everybody else. so we might say we need to reallocate: we need to do some transfers to get there. but again, once you do that, you change people's incentives, and you might end up at an inefficient allocation. and that's the heart, the crux, of the equity-efficiency trade-off.
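a minimal "leaky bucket" sketch of the transfer story above; the 25% loss rate is an assumption I'm adding for illustration, standing in for administrative costs and distorted incentives, and is not a figure from the lecture.

# hypothetical leaky-bucket transfer: some of what is taken never arrives
def leaky_transfer(from_holdings, to_holdings, amount, loss_rate=0.25):
    received = amount * (1 - loss_rate)   # part of the transfer leaks away
    return from_holdings - amount, to_holdings + received

leno, everyone_else = 100, 0
leno, everyone_else = leaky_transfer(leno, everyone_else, 6)
print(leno, everyone_else)   # 94 and 4.5 -> 1.5 cars leak out of the bucket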
|
Microeconomics_entire_playlist
|
Individual_Transfer_Quotas_ITQs.txt
|
in this video we're going to discuss how individual transferable quotas can be used to prevent overfishing. we've discussed how common resources tend to be overused, and if we take the ocean as an example, fishermen have an incentive to keep adding boats beyond the point that is socially optimal. the reason is that when a fisherman decides to add an extra boat to increase the amount of fish he catches, he's also reducing the number of fish caught by other boats, but he doesn't take that cost into consideration; he only considers his own private costs and benefits. so we have a negative externality that results in overfishing: we catch more fish than is socially efficient, which can reduce fish stocks and create serious problems for this renewable resource. individual transferable quotas (ITQs) try to solve this problem, and the best ITQ systems have three main properties. first, each quota entitles the holder to a specific number of a specific fish. it might say: this particular quota allows you to catch 100 cod, and that's what you get for that quota; you might hold 500 quotas, or a thousand, but each quota covers a specific number of a specific fish. second, when you add up all the quotas given to all the fishermen, you get the socially efficient quantity of fish. again using cod: say the government decides that one million cod could be caught in a year without creating a problem; then the quotas handed out to everyone should not add up to more than one million, because that's the socially efficient level (hypothetically; I have no idea how many cod can safely be caught). third, the quotas can be bought and sold on open markets; with ITQs you're basically creating a market. say one fisherman is given the right to catch 5,000 cod, and he invests in his boats to the point where he has really low costs in terms of catching fish; he's really good at it and can do it cheaply, while some of the other boats haven't made those investments. this fisherman could say, "I'd really like to be able to catch 7,000 cod," and go purchase ITQs that allow him to catch an additional 2,000 cod. the other fishermen, who haven't invested in the technology to catch cod efficiently, can sell him quota for 2,000 cod. so the transferability of the quotas gives everyone an incentive to invest in technology, and that's a real key of the system: the quotas can be bought and sold on the open market. now the question is how you initially allocate these individual transferable quotas: who decides how many quotas this fisherman gets, and that fisherman gets, and so forth? one way is to auction them off: you hold an auction and the highest bidders get the ITQs. people come and bid for the quotas, which raises revenue for the government, and that revenue could be used on any number of things: early childhood care, or repairing and maintaining the boat docks, and so forth. but you could also base the initial allocation on the historical catch: the people who caught the most cod in the past, who did the most fishing and have the most boats, get the most ITQs to start with, and the people who didn't catch as much cod get smaller quotas. then new entrants, say somebody who wants to get into the cod-fishing business five years from now, would have to purchase quota from somebody who already holds ITQs; new entrants to the market buy from existing holders. so that's another way of doing it. now, there are some challenges with ITQs. they've been used in a number of countries, and several have had a lot of success keeping fish stocks at a sustainable level, so that overfishing doesn't deplete the stocks for future generations. but there are some issues, and one is a thing called bycatch. picture a boat out there throwing a net into the water; it catches any number of fish, but some of them aren't the target. say the crew is trying to catch cod and they pull up a shark. they say, "we didn't mean to catch this thing, this thing looks nasty, and we don't have any quota for sharks. if we bring this shark back to the dock we'll get a fine, because we have enough quota to bring the cod back but no ITQs for sharks. so let's just throw it back in the ocean and release it." but the shark might die after being caught and pulled into the boat in the net; if you throw it back, there's a good chance it'll die, and they actually have a phrase for that: release mortality. if release mortality is really high, you're creating waste: they caught this fish they weren't intending to catch, and it could have been used (maybe for shark fin soup or something), but without an ITQ they throw it back and it dies anyway. so bycatch is a real issue. another challenge with ITQs is something called high-grading. because of the nature of the quota system, where you can only land so many fish, or so many pounds, or however it's measured, you have an incentive to keep the larger fish and throw back the smaller ones. so the net comes up with fish of different sizes, some small and some quite large, some kind of fat, and the crew keeps the fat ones and throws back the tiny ones, the juveniles, the skinny fish. those discarded fish become waste, and we're back to the issue where a premium is put on the larger fish while the smaller ones, which have already been caught, suffer release mortality and are essentially dumped in the ocean. people suggest a couple of ways to deal with these challenges. one is to simply ban discarding fish: you can't throw anything back, so if you catch a shark and don't have an ITQ for it, I guess you're paying a fine. another option is to allow boats to pay for a temporary overage: the crew can say, "we happened to catch a shark and didn't have any ITQs for it, so we're going to purchase a temporary overage," buying an extra quota or paying some amount other than a fine. you set up a system where unintended catch can be covered by buying a little extra credit rather than taking a penalty, and that reduces the amount of waste while, by the same token, not forcing the fisherman to be penalized for something he didn't intend.
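a minimal sketch of the market property described above, with hypothetical boats and quota amounts: transfers move quota toward the boat that values it most, while the total allowed catch never changes.

# hypothetical ITQ transfer: the cap binds no matter who holds the quota
quotas = {"efficient_boat": 5_000,    # invested in gear, low cost per fish
          "other_boat": 7_000}

def transfer(quotas, seller, buyer, amount):
    assert quotas[seller] >= amount   # can only sell quota you actually hold
    quotas[seller] -= amount
    quotas[buyer] += amount

transfer(quotas, "other_boat", "efficient_boat", 2_000)
print(quotas, sum(quotas.values()))   # efficient boat holds 7,000; total unchanged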
|
Microeconomics_entire_playlist
|
Specialization_Trade_and_Comparative_Advantage.txt
|
in this video we're going to talk about specialization and trade so specialization and trade is a way to consume outside of the ppf right so let's say we've got our ppf here for an economy where you can produce food or bicycles and we think about being limited with our current level of resources to all the points along that curve right we can't currently produce outside here these points are not currently feasible given our current level of resources but by specializing in producing a good in which your country or economy has a comparative advantage right if you have a comparative advantage in one good you can specialize and produce only that good for example food and then trade with another uh country to get the bicycles or something you don't have a comparative advantage in producing and you can actually consume at a point beyond the ppf right so you're not you're not expanding the ppf so it's not like when we get new technology or capital accumulation we're not expanding the ppf uh what you're doing is you're you're specializing in something where you have a comparative Advantage you're trading with another economy and and you're getting to consume at one of those points that isn't otherwise feasible so I want to give you an example so let's say that you are stranded on an island you're stranded on a deserted island after a plane crash with only your economics professor exciting I know so you can collect two coconuts per hour right you have to climb trees or whatever you get two coconuts an hour or Andor you could collect half a liter of water let's say there's a spring and you can go and collect some fresh water right so you for one hour's work you could either collect two coconuts or half a liter of water now your professor can also collect coconuts and water but he or she can only collect One coconut per hour and also half a liter of water per hour so you each can get half a liter of water per hour but if you decide to use that hour to collect coconuts you're going to do a lot better job you have an absolute advantage in producing coconuts you can prod prod two coconuts an hour your professor can only produce One coconut an hour so let's pretend that you each worked for 8 hours that you thought you know what we we really need to be working eight hours and so that's what we're going to do and you after eight hours let's just say that you end up you spend four hours collecting coconuts right so you get eight coconuts and then you spend the other four hours producing water so four * half a liter of water per hour is 2 L of water and then your professor uh also spends 4 hours doing coconuts and four hours doing water so it ends up with four coconuts and 2 lers of water this is just one combination of goods it doesn't have to be this way right you could spend six hours doing coconuts and and and two hours getting water but I'm just trying to give you an example of a combination of goods that you and your professor could produce right now your production possibilities Frontier would look like the following and it's it's actually going to be linear in this example usually a ppf we we see it having that familiar curve right because resources are not generally equally productive in all uses but let's just say here we've got this linear ppf right so for example if you produced uh zero lers of water you could produce 16 coconuts right and if you produced zero coconuts then you could produce four lers of water Etc and so each red point along here along the ppf is efficient in production meaning 
that at any given point let's say right here which is the point we were talking about we talking about you produce eight coconuts you collect two Co eight eight coconuts and 2 lers of water right so we'll say that's that's 82 right here so that point is efficient in production what does that mean that means that you cannot given your current level of resources produce for example uh let's say a point let's say the point 10 two let's say 10 coconuts 10 coconuts and 2 lers of water you can't currently produce that right this is not feasible this is not feasible on their current level of resources but let's think about the following you might might be able to trade with your professor and and actually get to that point 10 to right you might think that's crazy how can I consume at a point beyond my ppf those points aren't feasible well that's where this idea of comparative advantage becomes important so your opportunity cost and I just abbreviated that OC here so that's opportunity cost your opportunity cost of producing One coconut is .25 L of water so think about it right you produ you collect two coconuts an hour right two coconuts an hour and so for you otherwise you could spend an hour collecting half liter of water right so two coconuts two coconuts or 0.5 water right so if we divide this by two we divide that by two right and then divide this by two that gives us 0.25 right because we want to know the opportunity cost of producing One coconut for you right so it's .25 L of water so to produce One coconut you give up a/4 liter of water that you could have collected right instead now your opportunity cost of producing one liter of water is four coconuts so let's think about it like this for you to get one liter of water would take you two hours right because you get half liter of water per hour so it would take you two hours and in two hours you produce you get two coconuts an hour so you could get four coconuts so the time that you could have spent collecting one liter of water could have been spent collecting four coconuts all right that's the idea so we can look at your opportunity cost producing a coconut is 0.25 lers of water and producing a liter of water is four coconuts now it could do the same thing for your professor right your professor's opportunity cost of producing One coconut because he or she does one coconut an hour and half a liter of water an hour so to produce One coconut you're giving the professor giving up half a liter of water right so 0.5 L of water is the the opportunity cost of getting one coconut and conversely the opportunity cost of one liter of water is going to be two coconuts right because it takes your professor to get two hours to get one liter of water because they do half a liter of water per hour and to for them to get two in two hours they could have collected two coconuts because they collect One coconut per hour so we look at these opportunity costs right we compare them and we see that for you you have you collect um the Coconuts Cost You Less right coconuts Cost You Less it's only quar liter of water for you to go get a coconut that you're giving up your professor's giving up half a liter of water right so you have a comparative advantage you have a a comparative advantage I just put comp Advantage you have a comparative advantage in coconuts in coconuts right you are it costs you less to go get a coconut right now for your your opportunity cost to get water is for coconuts you're giving up because you're really good at producing coconuts right 
you're really good at climbing those trees, and so to get one liter of water you're giving up four coconuts, whereas your professor is only giving up two coconuts to get one liter of water. So your professor has a comparative advantage in collecting water. And that might seem weird to you. You might think, hey, we both collect half a liter of water an hour, so why does my professor have a comparative advantage? Well, if your professor collected more water per hour than you, then you'd say your professor has an absolute advantage — but that's not what matters. What we care about is comparative advantage, and what we're noticing here is that even though you both collect water at the same rate of 0.5 liters per hour, you each have a different opportunity cost of producing water. Because you are really good at producing coconuts, you're giving up a lot more to produce the water. Okay, so that's comparative advantage. Now what we want is for each of you to specialize where you have a comparative advantage, and then you can trade and each be better off. So for example, let's say you take your whole eight hours and just do coconuts. You would end up with 16 coconuts, because you do two coconuts an hour. Your professor, meanwhile, just collects water — you take care of the coconuts. So your professor spends eight hours at half a liter of water an hour and ends up with 4 liters of water. Now comes the trade. There are a number of trades you could do here, but let me give you the one we were talking about: could you consume at the point (10, 2), where you have 10 coconuts and 2 liters of water? For that to happen, you could give six of your coconuts to your professor, and your professor could give you 2 liters of water. So now let's look at you. You just gave away six of your 16 coconuts, so you have 10 coconuts. You had no water, but your professor gave you 2 liters, so now you have 2 liters of water. And if we think about your professor: he or she received six coconuts, had 4 liters of water but gave you two, and so now has 2 liters of water. So your professor is at six coconuts and 2 liters of water. You're at (10, 2); the professor's at (6, 2). So let's go back — remember (10, 2) and (6, 2). Now, it had seemed like your point wasn't even possible, because we already went through this and said that the point (8, 2) is on the PPF — that's what's efficient in production. So in a way, by specializing and trading, you were actually able to consume at a point beyond the PPF. You basically say, okay, look, I have a comparative advantage in producing coconuts — I do a much better job at it than the professor — so I'll just specialize in producing coconuts. The professor has, not an absolute advantage, but a comparative advantage in producing water, so they'll just produce water, and then we'll trade. So again, assuming you each worked eight hours, you could have a situation where you have 10 coconuts instead of eight and the same 2 liters of water — you're no worse off on water and better off on coconuts — and your professor now gets six coconuts and 2 liters of water. So you're each better off by specializing and trading, and you're able to consume at a point beyond your production possibilities frontier.
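To make that arithmetic concrete, here is a minimal Python sketch of the example above. The production rates are the ones from the lecture; the variable names and the script itself are mine, just for illustration:

```python
# Gains from trade, using the production rates from the example above.
you = {"coconuts": 2.0, "water": 0.5}    # you: 2 coconuts/hr, 0.5 L/hr
prof = {"coconuts": 1.0, "water": 0.5}   # professor: 1 coconut/hr, 0.5 L/hr

# Opportunity cost of one liter of water, in coconuts given up.
print(you["coconuts"] / you["water"])    # 4.0 coconuts per liter
print(prof["coconuts"] / prof["water"])  # 2.0 -> professor has the
                                         # comparative advantage in water

# Each specializes for 8 hours, then you trade 6 coconuts for 2 liters.
hours = 8
you_c, you_w = you["coconuts"] * hours, 0.0   # 16 coconuts, no water
prof_c, prof_w = 0.0, prof["water"] * hours   # no coconuts, 4 liters

you_c, prof_c = you_c - 6, prof_c + 6         # hand over 6 coconuts
you_w, prof_w = you_w + 2, prof_w - 2         # receive 2 liters back

print((you_c, you_w), (prof_c, prof_w))       # (10.0, 2.0) (6.0, 2.0)
```

Both of you end up at a bundle that neither could have produced alone in eight hours — that is the gain from trade.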
|
Microeconomics_entire_playlist
|
How_to_Graph_the_Marginal_Cost_Curve_using_a_PPF.txt
|
in this video we're gonna talk about how to graph the marginal cost curve when you have a production possibilities frontier. So we used the following data in a previous video to create a production possibilities frontier, and we assumed that you and a group of friends were stranded on an island, and all you could produce was food and clothing. So you had to make trade-offs and think about how much food or clothing to produce, and we graphed the PPF — we plotted each point. For example, if you were to produce four units of food, that would mean zero units of clothing; three units of food would give you four units of clothing. So these are the combinations, and we plotted them all out to make our production possibilities frontier. And then we introduced the idea that there's an increasing marginal cost, and that actually explains why the PPF usually has this bowed-out shape. A PPF could conceivably just look like a straight line, but instead we said the PPF usually has this bowed-out shape due to the fact that not all resources are equally productive. So basically, let's start where we have zero units of food produced and ten units of clothing produced. We're all starving and we say, hey, wouldn't it be nice if we had food? So we decide to produce one unit of food. Going from zero to one unit of food, we give up one unit of clothing, because clothing goes from ten to nine. So we'd say the marginal cost — the incremental cost — of going from zero units of food to one unit of food is what we give up: one unit of clothing. That's the marginal cost of producing that first unit of food. But the marginal cost changes; it's not constant. If the PPF were a straight line, it would be constant, but we said the whole reason we've got this shape is that the marginal cost is increasing. So let's think about it: we have one unit of food, and now we say, you know what, it'd be nice to have two units of food. We produce one extra unit of food, and now look — we have to give up two units of clothing, because if you go back to the numbers, clothing goes from nine to seven. So now the marginal cost is two. Okay, well, what if we wanted a third unit of food, going from two units to three? That extra unit of food now costs us three. It's costing us more and more — and when I say marginal cost, I mean the amount of clothing that we're giving up to get that extra unit of food. Think about it: when we started with zero food and went to one, it only cost us one unit of clothing, but now we've got a marginal cost of three. And it gets worse, because when we go for the max — from three units of food to four — we're getting one extra unit of food but giving up four units of clothing. See how this distance keeps getting longer and longer? We started out here where it's small, and it got longer and longer; that's because the marginal cost is increasing. The marginal cost of going from three units of food to four is four units of clothing. When we have no food at all, giving up some clothing to get one unit of food doesn't cost us much — it doesn't cost us much at all. So we can actually put together a little table: when we have zero units of food, what's the marginal cost of producing one more unit of food, in terms of clothing given up? When food goes from zero to one, we said the marginal cost was one. Then with one unit of food, going to two, the marginal cost was two, and then three and four — so we can fill those in. And now we can graph this little table we've made, and that'll tell us something about our marginal cost, which you can probably already see just from looking at the numbers. So I've got food here on the x-axis and the marginal cost on the y-axis. When we have zero units of food, the marginal cost of producing one unit of food is 1, so (0, 1) will be right here. At one unit of food the marginal cost is two, so that puts us here; at two units of food the marginal cost is three, so we go right here; and when we have three units of food, that last extra unit of food will cost us four units of clothing. Okay, so this is our marginal cost curve, and you see that the marginal cost is increasing — it's an increasing marginal cost. And why is that? Again, this is the idea that resources are not equally productive. What I mean by that is, let's return to our example. We're on an island — a group of us, stranded due to a plane crash — and we have to think about how much food and clothing to produce. Now, some people are going to be better at producing food and other people are going to be better at producing clothing. So let's say there's somebody on the island who is a tailor, and let's say there's somebody on the island who was a chef. When we're producing zero food and ten clothing, that means even the chef is being asked to make clothing — everybody's making clothing. So if we go to one unit of food, how much clothing do we have to give up? Well, if we say, okay, let's have the chef go make some food — maybe the chef wasn't very good at making clothing anyway — then we only really give up one unit of clothing. So our marginal cost is one when we want that first unit of food. The marginal cost is low, because that was somebody who maybe wasn't even good at making clothes to begin with. But as we get further and further along, to make that last unit of food — to go from three to four — now we're asking the tailor, who makes clothes for a living in the real world, to make food. So now we're giving up four units of clothing to get that final unit of food. That explains why we have an increasing marginal cost: as we move along the curve, not all resources are equally productive. If we could trade off resources equally — say, hypothetically, that we were in a world where everybody is equally good at making food and equally good at making clothing — then the PPF wouldn't look like that. It wouldn't have that bowed-out shape; it would be a straight line.
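As a quick check on that table, here is a small Python sketch (my own illustration, not anything from the video) that computes the marginal cost of each extra unit of food straight from the PPF combinations:

```python
# PPF combinations from the example: (units of food, units of clothing).
ppf = [(0, 10), (1, 9), (2, 7), (3, 4), (4, 0)]

# Marginal cost of each extra unit of food = clothing given up to get it.
for (f0, c0), (f1, c1) in zip(ppf, ppf[1:]):
    print(f"food {f0} -> {f1}: marginal cost = {c0 - c1} clothing")

# food 0 -> 1: marginal cost = 1 clothing
# food 1 -> 2: marginal cost = 2 clothing
# food 2 -> 3: marginal cost = 3 clothing
# food 3 -> 4: marginal cost = 4 clothing  (increasing -> bowed-out PPF)
```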
|
Microeconomics_entire_playlist
|
The_Effect_of_a_Price_Support.txt
|
in this video we're going to talk about the effect of a price support. A price support consists of two things: you have a price floor, and you have a promise from the government to purchase any excess supply. So let's take the market for wheat, for example. Let's say that before any price support we just let the market forces work, and we've got an equilibrium price of wheat of a hundred and thirty-three dollars and an equilibrium quantity of fifty-eight million tons of wheat. We've got our consumer surplus here in blue, and then our producer surplus is this orange triangle. If we were to add these up, this whole triangle here would be our total surplus — consumer surplus plus producer surplus. Okay, so now let's say that the government comes in and says, look, we're going to institute a price floor of two hundred dollars. So the price of wheat has to be at least two hundred dollars; it can't go below that. We'll label it P sub F — our price floor of two hundred dollars for wheat — so it's illegal, against the law, to sell wheat for less than two hundred dollars. Now, in addition to that, the government does the second thing, where it makes a promise and says: look, if we have any excess supply — if farmers make too much wheat, or people aren't wanting to consume all the wheat that is produced — then the government agrees to purchase that excess supply. Okay, so now when we look at what is going to happen here at a price of $200, we see that the amount supplied by the wheat farmers is going to be right here, and that's going to exceed the amount demanded over here by consumers. Let me just put some numbers to this: let's say the amount demanded is twenty-five million tons of wheat, and the amount supplied over here is a hundred million tons of wheat. So we see that we have excess supply right here. Sometimes your economics textbook will call that a surplus of wheat, which is perfectly correct, but I don't want to use the term surplus here. I'm going to use the term excess supply, because I don't want to confuse it with consumer surplus and producer surplus — we're talking about different things. Excess supply means that the wheat farmers produced a lot more wheat — a hundred million tons — than what consumers were willing to buy, which is 25 million tons. Okay, now, if we just had the price floor and we didn't have the government promise to purchase the extra wheat, we would end up here, because consumers are only demanding twenty-five million tons; basically this amount here would all be a deadweight loss, and so forth. That's what happens with just a price floor. But because we have this government promise to purchase that excess supply, we're actually going to see that the producers are going to get this area here — and let me change colors, because the producers are going to get a lot. That's all going to become part of producer surplus. But in addition to getting this new area, the producers are going to get some of what the consumers used to have. So the consumer surplus is going to shrink, and the producer surplus is going to grow a lot. You see why wheat farmers would be really happy to get all of this: the wheat farmers are getting some of the consumer surplus. Now the consumer surplus is that small triangle, and all this orange — that big orange triangle — is for the producers. Okay, now let's think about this. It looks like the total surplus has gone up, because even though consumers lose out and just have this tiny triangle, you might be thinking: hey, the total surplus used to be this triangle — let me make sure you can see that — but now we've added this triangle here. So you might be thinking, hey, this price support is actually a great thing; we've increased the total surplus. But don't forget something: the government has promised to purchase all the extra wheat. So we're going to have a situation where there's a cost to the government. And who is the government? Who's paying for this? Taxpayers, right? So we can think of this entire amount — this big rectangle — as the cost to taxpayers; that's going to be the cost to the government. And we can actually calculate it. First, let me just calculate this small triangle here, in case you're wondering what the gain is. That gain is 2.5125 billion. The way I got that was one half times the base times the height: 1/2 times 75 million times (200 minus 133). So, in case you're interested in the math, that's the area of the triangle right there that the producers have gained. The producers also gain some of the consumer surplus, but that transfer doesn't affect society — producers win and consumers lose, and the total surplus is unchanged by it. If we just think about the increase in the total surplus, we've got that amount — and this is billions, by the way. Call that the new triangle; that's how I think about it. Now, if there were no cost to the government at all, then yes, this would be an increase in total surplus. But not only do we have this triangle — we're also going to have this whole area here. I don't even know if I can color all this in; I'm going to miss some spots. But this whole area is the cost to the government — the cost to taxpayers — of buying all the extra wheat. And we can calculate the area of this rectangle. Okay, so what we're going to do to calculate it is take 75 million — and you might be wondering how I got this 75 million. The 75 million is the amount of the excess supply: the hundred million tons that are actually supplied, minus the 25 million tons that consumers actually want. So the extra is 75 million tons; that's the excess supply of wheat — all the wheat that was supplied that consumers didn't want. Okay, so that's 75 million, and then we multiply it by the price floor of two hundred dollars, because the government guaranteed that price to the wheat farmers. And that comes out to the ultimate cost to the government — let me change colors here — of 15 billion. I hope you can read this number; I know it's a darker color. So if we think about the gain and the loss here, what happened is this: the producers gained. They gained some of the consumers' area as well, but that's a transfer — producers win and consumers lose, and total surplus is unchanged by it. Before we factor in the cost to the government, there's a gain of 2.5125 billion; that was this amount here. Okay, they gain that, but then that area, plus all of this area — the entire rectangle — was lost as the cost to the government, to taxpayers, of 15 billion dollars. So actually, you could look at it and say: okay, we gained 2.5125 billion, but we had to pay 15 billion in taxes to be able to fund this. We can take the net difference: 15 billion minus the 2.5125 billion, so that amount is 12.4875 billion. That is the lost value from doing this price support, and that's why price supports can be a very, very bad idea.
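To keep the arithmetic straight, here is a short Python sketch of those calculations, using the demand and supply numbers assumed in this example (the script is mine, just for illustration):

```python
# Price support in the wheat market, with the numbers assumed above.
p_eq, p_floor = 133.0, 200.0            # equilibrium price, floor ($ per ton)
q_demanded = 25e6                       # tons demanded at the floor price
q_supplied = 100e6                      # tons supplied at the floor price

excess = q_supplied - q_demanded        # 75 million tons the government buys

# New producer-surplus triangle: 1/2 * base * height.
gain = 0.5 * excess * (p_floor - p_eq)  # 2.5125 billion
# Cost to taxpayers: the excess supply bought at the floor price.
cost = excess * p_floor                 # 15.0 billion
net_loss = cost - gain                  # 12.4875 billion

print(f"gain: {gain/1e9} B, cost: {cost/1e9} B, net loss: {net_loss/1e9} B")
```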
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_3_Singleshot_Multidomain_Camera.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ROARKE HORSTMEYER: I'm a new student with the Camera Culture Group. But for a little over a year I worked at a company called MITRE — one of them's up the road, the other one's in DC. And we on and off worked on this project, myself with a guy named Gary [INAUDIBLE] and then Marc Levoy, who is from Stanford. And together we eventually put together a nice project, but it took a bit. So 2009, 2008 is kind of the general time frame we did this. But this is going along the lines of capturing multi-domain or multi-dimensional data — spectral data or polarimetric information or high dynamic range — with your camera, but doing it in a single snapshot. So we'll discuss that in a little bit of detail. But first, just some background, giving you an idea of what I mean when I say multi-domain. So we know there are a ton of pixels. Ramesh showed some slides of the increase in the number of pixels per sensor and whatnot. And the thrust of this research was finding a new way to use those pixels besides just spatial resolution. We think there's plenty of spatial resolution out there. What about color, or some other interesting properties of light, to capture? Some of those include polarization. Here a group used a polarimeter to distinguish a man-made object — this is metal — from some plants. The way it works is that metal reflects more polarized light than a plant does, which reflects only a little bit of polarized light, and they pick out objects like that. I'll show some examples of what we did later, I think. Anyway, so you can see cool effects like [INAUDIBLE] in glasses or transparent media. And then, obviously, what's known to photographers is that it helps — I forget the word — cut haziness in images of water, because water reflects polarized light and so does the sky. So you get a clear blue sky with a polarizing filter in conventional photography. You can also do multispectral imaging, which is capturing very specific spectral information instead of just red, green, and blue, which are huge bands in the frequency dimension. These are very narrow-band images. These were both captured over time. So this one shows an aerial view — they're often used in airplanes to capture information about foliage and such; the USGS is using them a lot. And then this is an example of detecting blood oxygenation. So this is an image of a thumb next to another thumb. One has a rubber band wrapped around it; the other one doesn't. So you can obviously tell the difference. And then high dynamic range imaging, which I think we talked about. So we're going to capture all that in a single snapshot. And basically, when you want to do that, you're presented with this huge dimensionality mismatch, which is true of all camera capture techniques. You just have this two-dimensional sensor. And it's small and it's flat — soon it might not be flat, but right now it is. And you also have the time dimension to work over. But you have these seven or more, arguably, dimensions of information out there. You have a four-dimensional light field, which I guess you'll learn more about — some of you might know about it. You also have the temporal dimension, the changing of whatever scene you're looking at.
And you have spectral and polarization information. So there are a lot of different ways that people have come up with to capture this, and I'll talk about some. One of the most direct is just doing it over time. So this is a picture of one of the first color motion picture processes. It's called Kinemacolor. What they did is they just put a rotating green and red filter in front of a regular old movie film camera, and it captured sequential images. Then when they projected it — it didn't have blue for some reason, I don't know why — they just projected it through the same rotating color filter. And your eyes couldn't distinguish the fast frame rate, so it looked like a color picture. And then some things have been done with hyperspectral imaging. The way a push-broom hyperspectral imager works is it's a two-dimensional sensor: one dimension is used to capture spatial information, and the other is used to capture spectral information. And you put it up on a plane, and it flies, and it sweeps out the second spatial dimension. So over time you're capturing this three-dimensional data cube — two spatial dimensions, but also capturing spectrum [INAUDIBLE]. It's a little bit different. And then this is very similar to what RK was talking about, but there are tons of different ways to do that with spatial encoding. So you can work over time; you can also work over space. The Bayer filter image — I think this is the exact image that you guys talked about. People have also done it with polarization filters. And then I could also talk about assorted pixels, which was from [INAUDIBLE] group, where they mix and match neutral density and polarization and spectral filtering all over a sensor. The problem with that is it's extremely hard to fabricate. And once you make it, you're stuck with it: you can't very easily take off your lens and peel off your filters and then put another one on. It's very sensitive to alignment and stuff. So other people have tried capturing multispectral information, or other types of information, with just many cameras. You guys saw the ProFusion camera we had on the first day. This is a very similar, simple design called periodic or [INAUDIBLE], which we also learned about, where you just put different color filters over each little camera and you can get a single snapshot from all the cameras. The problem with this is there are issues of registration and alignment with the cameras, and it's expensive — that camera costs $10,000. So it's a good method, but our method is a little bit different, I guess. The third way — I think the coolest way — of doing this is with a method called code-division multiplexing. There are only a couple of examples of this. But basically, you computationally capture a three-dimensional data cube on two dimensions and then computationally try to estimate the third dimension. So you're trading off computational power and also the error associated with your estimations. But one example of this is CASSI, which was developed at Duke. It's a single-snapshot spectral imager similar to what we do. What they do is they put a coded mask in an intermediate image plane, and it shifts the data cube in three dimensions, and they can estimate many color channels from a single image. And a similar, very famous example is called CTIS — it stands for, I think, computed tomographic imaging spectrometer. It was invented in Arizona. And what they do is they use a very, very special, cool holographic plate.
And with that, you can take an image, and the holographic plate makes this very interesting spectral dispersion, which they know exactly, so they can reverse the spectral dispersion to get a full spatial image but also have all this color information. The problem with it is that the hologram they use is very sensitive to the directionality of light, so it has a very, very narrow field of view. So basically they have a really cool movie of some guy lighting a lighter and you see all this spectral stuff — but that's because the field of view is just that one little lighter. But it's really cool. So, just the basic idea of our approach — I won't get into too many details; I'll go over this kind of quickly. You have a camera, and you're capturing two spots on an object — let's pick the red spot. At the center, the rays are both integrated at the sensor, so you can't tell which ray came from which part of the lens. If you misfocus your sensor, you can tell once again: you know this ray is not integrated with that one, so you know it's coming from this spot on the lens, and likewise for that one. But when you misfocus something, everything just blurs together — there's no spatial separation of the rays, and it's hard to distinguish them unless you're doing astronomical imaging, where the stars are surrounded by black and you have a chance. What we did is we put a pinhole array on top of the sensor. And that has a couple of interesting properties. It's very similar to a light field camera — I won't really get into it. But basically, what it allows you to do is take your Bayer filter that you originally had at the sensor and just move it up into the lens. So now, instead of having a repeated Bayer filter pattern over your focal plane, you just have one filter, and you can stick it in your lens. And what each of the pinholes is doing is effectively imaging this aperture. So you can think of each of these as a little pinhole camera, and all each of them can see is this filter. So each pinhole creates an image of the filter, but it's also still capturing the information about the object. So for example, if you put a red filter in this part of the aperture and a blue filter there, the red filter will attenuate the blue ray, so you'll figure out it's red there. But this is a grayscale sensor — only one of the sensor areas will light up, and the same with the [INAUDIBLE] area under the other pinhole. So that's the general idea. But the trade-off is that you now have a reduced spatial resolution. The spatial resolution of your output image is going to be given by the number of pinholes you have in your pinhole array. And your filter resolution — it's not really color resolution anymore, because we're going to put different types of filters in here — is going to be given by the number of filters you stick in your aperture. And there's an interesting trade-off there, which I might have time to talk about. I won't really talk about the specifics. Basically, since we have a digital image now — on a film setup this wouldn't work at all, because you can't really cut out the little pieces of different spectral areas and stitch them together; it'd be really difficult, though you could reverse-project it, which people have done — we just take different areas digitally. So let's say — and this is just a piece of an image, which I'll show you later. Each of these is not projecting well.
Each of these white boxes is the area where the filter array is being projected. So there's a 3 by 3 filter at the center one, I think — it's very non-dense filtering. So let's say this is a green filter from this area: you can take all these green filters, stitch them together, and make the green image. And let's say this is the red filter here: you can do the same for the red image. And you just do that digitally; it's just like a lookup table, very straightforward (there's a small code sketch of this step at the end of this transcript). And this is what our setup looked like. We just had a regular lens — it's a Nikon lens, it was $100, and they're really easy to take apart. So if any of you ever wants, for your project for this class, to put stuff in the aperture of a lens, come to me. I can teach you how to do it in five minutes, and it'll take you 30 seconds each time to undo the lens and put it back together. But we put color filters in there. You can't really see them that well, but we just put in these little color filter discs. And then I'll show you some results from something I did this summer: I put a spectral filter in the aperture, which changes our design to a snapshot spectral imager. And then here we have our [INAUDIBLE]. This is the CCD. You just sandwich a pinhole array right up against the CCD, and the cover glass over the CCD, which is about a millimeter thick, dictates the distance between the two. So here are some example images. This is just a raw data image, and here's a zoomed-in version — you can see each little pinhole region close up. And then you can do some [INAUDIBLE] division, which is just normalizing the image. Basically, we wanted to use a lenslet array, which is almost the same thing as a pinhole array — I could show you some slides from Stanford where they have the lenslet array — but that's just really expensive, and once you get it, it's fixed. Pinhole arrays you can print off; you can vary the distances between pinholes and the sizes of the pinholes. So it's a much easier way to test things when you're doing an experiment. But they have imperfections, obviously, especially from printing, so that's why we did that division — to sort of get rid of those imperfections. And this was done on a grayscale sensor, so there's no color information from the sensor itself. But we put six filters in the aperture for this picture: a red, green, and blue filter, and then three polarizers. So we took an image — the image is of a big TV back here, and there's a color chart here and a book, and then a polarizer sheet over here. But you can see we've got a color image, which approximates the color of the scene pretty well. Yes. AUDIENCE: When you say "printed", why do you say "printed"? ROARKE HORSTMEYER: So there's this company called Pageworks; it's actually in Cambridge. And, what's the process called? I'm blanking. AUDIENCE: Because it's just really just [INAUDIBLE] ROARKE HORSTMEYER: So it's on a transparency. It's just like printing a transparency. AUDIENCE: OK. ROARKE HORSTMEYER: Yeah. PROFESSOR: OK. It's just easy to make. And the contrast isn't great, but that's because there is [INAUDIBLE] ROARKE HORSTMEYER: Yeah, yeah. AUDIENCE: [INAUDIBLE] ROARKE HORSTMEYER: No, it wouldn't have. So the pinhole sizes are about 50 microns. So I don't think the laser-- AUDIENCE: [INAUDIBLE] shrink. AUDIENCE: So that's what we were trying to do last year. AUDIENCE: And then, what? AUDIENCE: Well, it does work. AUDIENCE: OK. AUDIENCE: For what we wanted to do, we needed still much more. AUDIENCE: So it's sort of-- OK.
AUDIENCE: But, yeah, it worked. You print on a [INAUDIBLE] and then-- AUDIENCE: And it sort of-- it shrinks and it-- AUDIENCE: It's not uniform but-- AUDIENCE: OK. ROARKE HORSTMEYER: Yeah. So yeah, you just print it out on a transparency. But you can't just do it on a normal printer; you need a high-resolution printer. Well, we did, because we needed 25-micron resolution — 720 dpi, I think. But anyway, so in one image we get a color picture, but we also get a degree-of-polarization picture. So you can see the TV's polarized, because TVs have polarizers in them. And you can see down there, there's a polarizer down there over a resolution chart. So that was a simple little demo. Then we did one with 16 filters. So basically I bought every filter I could from Edmund that was cheap and put them in a filter array. So we have red, green, blue, yellow, magenta, cyan. I got an IR filter. I got a bunch of different polarizers at different angles to get a precise degree-of-polarization measurement. And then I put in three neutral density filters. And so these are the 16 images you get from one single image; you just separate them with a lookup table. So you can see up here is the color information — it's kind of hard to see from the projector, but the spots are changing color; the checkerboard pattern's changing with the color. The near-IR is really dark, and I've got a lamp here, which is saturated in some of them, but you can see the resolution chart in the near-IR. With the polarization filters, you can see the TV screen changing, how bright it is and so on. And the neutral density ones are just darker. AUDIENCE: [INAUDIBLE] ROARKE HORSTMEYER: What's that? AUDIENCE: [INAUDIBLE] ROARKE HORSTMEYER: Yeah, yeah. AUDIENCE: But then you stack them on top of each other. ROARKE HORSTMEYER: No, they're not stacked on top of each other. AUDIENCE: Oh. ROARKE HORSTMEYER: So here are the filters. It's hard to see on a projector, but here you can see it's just this little 3 by 3 filter array. So basically it's just literally a flat 2D array of filters, and I put it right on the aperture stop. You have about this much room on the aperture stop — a little bit bigger than a quarter or so — so you can put as many filters in that space as you can fit. AUDIENCE: So this also decreases resolution. ROARKE HORSTMEYER: Yeah, definitely. [INTERPOSING VOICES] Definitely. So your spatial resolution is going to be decreased at least proportionally. If you did it perfectly and mapped one pinhole image of each filter to one pixel, it would be decreased by the number of filters. So I have 16 filters; my spatial resolution is going to be at most 1/16 of the original spatial resolution. That's a critical point. So these are very low spatial resolution images. AUDIENCE: The effect is very similar to what they are-- ROARKE HORSTMEYER: Exactly. So it's the same thing as a Bayer filter: you have a red, green, and blue filter, so it's a third. AUDIENCE: But you put this on the lens, right? So it's not on the sensor. So how do you know which pixels are actually affected by those filters? ROARKE HORSTMEYER: Yeah, so each image-- AUDIENCE: Because if you go back to the slide where you show, OK, this part of the [INAUDIBLE] ROARKE HORSTMEYER: Yeah. AUDIENCE: So you can actually know exactly which pixel was affected by-- ROARKE HORSTMEYER: Yeah, so I'm sticking a pinhole array right over the sensor. So the pinhole array is going to create multiple images. Each one of these dots is a pinhole image, essentially.
And we zoom into that. I'm highlighting one pinhole area with this white box. And I know, just from a priori knowledge of how I put the filter array in there, that this center filter is a red filter, the one on the upper left is a green filter, and the one in the upper right is a blue, for example — because I put the filter array in there. So now, under each pinhole, ideally I would have nine pixels, and I would know which filter each pixel corresponds to just from the orientation of how I put the filter array in there. AUDIENCE: And the [INAUDIBLE] has to be fixed for this to work, right? ROARKE HORSTMEYER: Yes. So I glossed over that; I skipped that slide. But yes, you basically have to be imaged at infinity. You can't change the focus. Once you change the focus, you're basically crossing those rays in front of the sensor plane, and that destroys your detection diversity. AUDIENCE: So the scene has to be planar. ROARKE HORSTMEYER: Yeah. AUDIENCE: And it has to be [INAUDIBLE] ROARKE HORSTMEYER: The scene has to be planar. And Lambertian — well, not technically Lambertian; it has to emit the same polarization or spectral information. AUDIENCE: But they should be the same. ROARKE HORSTMEYER: Along all the same rays. AUDIENCE: Did you try removing the effect of the reduced spatial resolution by trying to demodulate? Because, see, the pinhole array is basically a [INAUDIBLE] which has a modulation function on it. So if you demodulate it on a computer once the image has been captured, you can actually regain the spatial resolution reduced by the modulation. ROARKE HORSTMEYER: I don't really know what you mean by modulation. Basically, since you have filters in your aperture, those are attenuating light. AUDIENCE: Operating a function over the image. ROARKE HORSTMEYER: Yeah, but you're also mixing in all this diversity. So now you have all this spectral and polarization diversity mixed in from the different filters, which is going to change depending on your spatial location. So if I have a polarized object up here, it'll be modulated by one polarizer and not by another. PROFESSOR: It's just like there [INAUDIBLE]. You can take off some sophisticated [INAUDIBLE] information. But it wouldn't just be hard [INAUDIBLE] ROARKE HORSTMEYER: Yeah. OK, I'll just try to finish up. So, just an example of using those 16 images. People ask, well, why do you want 16 images of all these different things? So I tried to come up with a clever example. Basically, you can create a color image where you have this huge saturated area from the sun or from a lamp or whatever, and you can just make a quick HDR image using the three neutral density filters on top of the color information. So now you can see there's a resolution chart back there. And then what you can do is take a degree-of-polarization measurement using the five polarization filters, and that'll find things that are reflecting or emitting polarized light — for example, the man-made object example I showed you before; you can pick out which objects are man-made. And you can also pick out things emitting near-IR information with the IR filter. Well, I guess plants emit more near-IR than people, but if it were extended into the IR range, you could identify people — though you would need a better sensor than a regular CCD. So you combine all those — I did it with 12 of the 16 filters — and I created this sort of phony region-of-interest, extended-dynamic-range color image.
And you can see there are some errors, obviously. That's because, when I was trying to find polarization, for example, a low-intensity area isn't going to give any real information about polarization; it's just going to be a dark area. Things like that cause error. And also, from the different perspectives, you can see here that there's a bar sticking out. That's because, as things get closer, the error associated with having different perspectives on an object becomes apparent. So it only works — or works best — for planar objects or objects toward infinity. So the next example — I just did this over the summer for a quick paper. Instead of putting in a filter array, we put a spectral filter in the aperture, down here. And what this does is act essentially like a grating: it splits up the incoming light into all the different wavelengths. It went from 400 to 700 nanometers, so across the color spectrum, and it had a resolution of about 10 nanometers, which was pretty good. And they're cheap — they're like 200 bucks — and they're small enough to fit in an aperture, so they're really useful for doing experiments with cameras. So I just took a picture of some crayons. This is a reduced-resolution image of some crayons, and by picking out certain pixels, you can get a roughly 25-sample spectrum across the 400 to 700 nanometer range. So you can see this is magenta — that's these red and blue parts, but not the green — then teal, orange, blue. So yeah, I have one more example of that. This is the same crayons, different pictures. I noticed when I was doing the experiment that if I had the lights on, or if I had another light on, I would just get really different results. And I was scratching my head — why does that happen? Well, it's obviously because fluorescent lights emit very different spectral information. I can't really tell, but these lights versus these fluorescent lights have very different spectral distributions. I didn't really label them, but these are definitely just fluorescent lights — or maybe one is the ceiling fluorescents and one is a desktop fluorescent light. So you can see the sharp peak through this; most pixels just have a sharp peak. Our eyes just integrate over all of that, so we don't really notice it; it just looks like light. But if I just put on a regular desk lamp, like an old-fashioned bulb, you get this distribution. So these are all from the same exact pixel, but I was just changing the lights in the room. So it was pretty interesting. PROFESSOR: [INAUDIBLE] the light coming in as a modification of the incident-- ROARKE HORSTMEYER: Incident lighting and the color of the-- exactly. Exactly. So, yeah, that's it. Any questions? AUDIENCE: How many [INAUDIBLE] do you have? ROARKE HORSTMEYER: From this? Roughly 25. It just depends on your pinhole array pitch. Another thing — I really want to redo this and make a better filter, because the filter I used is just a one-dimensional filter. It's generally used for laser experiments: you hit the laser on it, and you can tilt it in different ways and select a specific wavelength if the laser is more broadband than 10 nanometers. So it was only one-dimensional. It would be cool — much better and more efficient — if I had a two-dimensional filter so that I could use the whole area. But yeah, roughly 25. Any other questions? AUDIENCE: So those graphs were made out of 25 [INAUDIBLE]. ROARKE HORSTMEYER: Yeah, yeah, yeah.
AUDIENCE: [INAUDIBLE] in your experiment, so each pinhole was making an image of the-- ROARKE HORSTMEYER: Of the filter, yeah. AUDIENCE: --of the filter? How many of the pixels in your experiment were just wasted on-- ROARKE HORSTMEYER: On the border? Yeah. A lot. A lot. So I started with, I think, a 10-megapixel sensor, and my spatial resolution was around 300 by 300 for each of the images — that's for something like 16 filters or nine filters. So a ton is getting thrown away somewhere. And each filter is not being imaged to one single pixel; it's being imaged to a 3 by 3 or 4 by 4 array of pixels. So I mean, it's incredibly wasteful, but it's just an idea. You could really fine-tune it and probably make it more efficient. PROFESSOR: I think the big advantage of this is that you can do all of those things while putting a filter in the lens, and not have to put it on the sensor. So someone was asking, can you change those kinds of things? Yes, you can just take it out and put in a new one with a different spectral response if you need it. That's what's interesting. ROARKE HORSTMEYER: Yeah. PROFESSOR: Can you say something about the diffraction issues? Do you have any? ROARKE HORSTMEYER: Yeah, a little bit. So pinholes are really pretty inefficient, pretty bad at resolving light, which you guys might learn about. So lenses — everything — have an associated point spread function. I don't know if you've heard that term, but it's basically a blur spot size. Pinhole blur spot sizes are more of a geometric projection of the size of the pinhole. You want to let a lot of light through with a pinhole, but depending on the size of the pinhole, you can just imagine light passing straight through the pinhole and not changing its direction. So that light will have a blur spot size roughly the size of the actual pinhole. In my case, I was using 50-micron pinholes, so the blur spot is going to be 50 microns, which is relatively large, because a pixel is about 10 microns. That's why each filter was imaging to about five pixels. If I had used the lenslet array, it would have been much, much better — diffraction effects and whatnot would have been much less. PROFESSOR: I think we're going to be talking about using the lenslet array in other projects. ROARKE HORSTMEYER: And another interesting thing is that the point spread function in a camera — the blur spot in a camera — is given by the shape of the aperture. So if you're taking a picture and you go out of focus, the things out of focus have a circle, a little blur spot; that's because the aperture is a circle. With a square aperture, they would have a square, and so on. But even when you're in focus, the little blur spot is given by the shape of the aperture. And I was putting all these crazy filters in the aperture, and each filter is going to change the point spread function depending on the color of the scene. So basically, across my image, I was getting many different types of point spread functions. Some were strange shapes because of the different properties of the objects in the scene. So there are very different problems to try to fix or analyze because of all the filters in the aperture. PROFESSOR: So I have just a couple more points I want to make before we end. Right at the start of the class, I had these four or five things that we wanted to improve about cameras, and one of them was improving the spatial resolution. So in the case of film, it's sort of ambiguous.
It's hard to define what resolution is for film, but for sensors — for digital — it's usually just the number of pixels. And it would be hard to imagine how you can increase the information you're getting from a fixed number of pixels. But there are techniques for doing that, and the most popular ones are what's called super-resolution. I'm not going to go too much into the details of what super-resolution is and how you do it, but just to give you an intuition: the idea is that you take an image with your camera fixed at one position, then you move the image sensor by a fraction of the pixel size and take another image. Since you've moved it by a fraction of a pixel, the information that you've captured is actually different from the first image, and you can combine the two images together to get a higher resolution image. There are, obviously, issues with that, one of which is that you have to move the sensor by less than a pixel size, which is usually one or two microns, and it has to be very precisely controlled. So you find this in really high-end cameras — [INAUDIBLE] cameras which do this, big medium-format cameras. And there are also fundamental limits to how much you can do using this. You can usually not go beyond a 3x or 4x, or sometimes even just a 2x, increase in resolution. Anything beyond that is what they call hallucination: you may have created an image which is 100x the resolution, but it doesn't have any information in those higher frequencies. So that's super-resolution. Panoramas over time: that's just taking images, scanning like this, and stitching them all together to create a large panorama. You get a big high-resolution image, and in doing so you also get a wider field of view. So recently this technique was adopted by people at Microsoft Research and the University of Konstanz, where they built a device that essentially scanned the whole scene and took a whole bunch of images. This one was created out of 800 images that were captured over a period of a few hours. And then they found correspondence points and essentially just stitched them together to create this one large image — 1.5 gigapixels, I think, or 3.5 gigapixels. And just to give you a sense of how much information is there, you can actually see the person sitting inside that crane, whatever that is. And there is a website — I think it's called GigaPan — that sells this device now. And Microsoft gives you the software, I think it's called HD View, which you can use both to view these images and to generate your own. You buy this thing, you put your camera on it, it sits there and takes images for a few hours, and then you just stitch them all together. There's also a husband-and-wife professional couple who've been doing this for a while using film photography. They have this large-format film camera that they take out to shoot these huge gigapixel images, and they have their own custom scanners, so they scan them and get a digital image out of it. And a number of other people have done this kind of thing. AUDIENCE: But if this is a consumer device, how-- I imagine this actually requires the camera to move around a lot. PROFESSOR: So the thing they have is something like this. If you can see it, it just rotates and takes multiple images; it doesn't have to move at all. And the one that they're selling has a very cute shutter-release button too, so you can just use any camera.
And as long as you can just move the arm and have it so that it can pretty much [INAUDIBLE] button. And it will go out, and you can, I think, program it to say how many images you want and so on. So either you can just create a panorama, or you can create a very, very high-resolution panorama like that. Some of the challenges in this that I just want to point out: there's a huge variation in intensity. And this is something that we discussed — the high dynamic range issues. When you're moving the camera and doing all of this, you could have parts of the image that are much, much darker than other parts, or much brighter. Then how do you stitch them together? What kind of exposure issues do you have to take care of? Because if you just do auto-exposure, then one part of the image may be brighter than the other, and it might not line up — you might not get the alignment right, or you might have issues with finding correspondences. And all of that is taken care of by the software from Microsoft called HD View. Another way of doing high resolution — this is again going back to those three basic things. The one I just talked about was epsilon over time, but you could do epsilon over sensors: you can have 500 cameras and just take lots of images at the same time. So this is similar to the camera array we saw earlier, but the difference is that these cameras are all looking out parallel to one another. In the previous case, they were all looking at something in between, so there was a lot of overlap; in this, the overlap is much less. And what you can do with this is take all of these images, find the correspondences, stitch them all together, and get something like this. This is very, very high resolution, and it has lots of information. But there are also issues of geometric and color calibration, and also high dynamic range issues, which I briefly discussed earlier. So this is what you get once you fix all of those. For some reason, it's really dark here. And again, it shows you can really zoom in and get a high-resolution image. I don't think this is anywhere close to [INAUDIBLE] because this is a much older project. And again, similar to the assorted pixels, they actually have different exposures for different cameras, so you can combine all of that information together to get not just a high-resolution image, but a high-resolution, high-dynamic-range image. The last thing I want to talk about is increasing the temporal resolution. So far we've seen how we can increase the spatial resolution, the dynamic range, and the depth of focus [INAUDIBLE], and the other thing that remains is the temporal resolution. You can now buy cameras that do 1,000 frames per second or even more — you just hit the shutter and it takes a whole series of images. But this is from a few years ago, where what they said was: instead of taking one camera with a very high frame rate, what if you took multiple cameras with more reasonable frame rates and then combined that information together? And sure enough, they came up with their own camera array — this is also from Stanford. I think they triggered the cameras in a way that they could later combine the information.
But again, you need to calibrate the images and also color-correct them and so on. It's just a [INAUDIBLE]. You can see these sorts of artifacts as the thing moves, because the calibration is imperfect. And each of these is a relatively low-resolution camera, I think about six [INAUDIBLE]. So that's basically it. That's epsilon photography: it's how we enhance film-based photography. The idea is to modify the exposure settings, spectrum or color, focus, camera, scene, illumination — basically anything that is a parameter of the camera — and take multiple images over time, or over sensors, or over pixels. And the end result is you get a better camera. And as we'll see in the next class and future classes, this is not the most interesting part of computational cameras. It's just something that's good to know about, and there's still research going on on many of these topics. But it's furthering — making even better — cameras the way cameras have been for more than a century, rather than coming up with cameras that do new things. I guess that's it.
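As an aside on the pinhole-array camera discussed earlier: to make the lookup-table extraction step concrete, here is a minimal NumPy sketch of the demultiplexing Roarke described — gathering each filter's samples into its own low-resolution image. The geometry here (an n-by-n image of the filter array under every pinhole, perfectly aligned to the pixel grid) is an idealization I'm assuming; the real system had to calibrate the pinhole positions and dealt with blur across several pixels per filter.

```python
import numpy as np

def demultiplex(raw, n=3):
    """Split a raw pinhole-array capture into n*n per-filter images.

    Idealized geometry: every pinhole projects an n-by-n image of the
    filter array onto the sensor, aligned to the pixel grid, so inner
    pixel (i, j) of every pinhole tile sees the same filter.
    """
    h, w = raw.shape
    ph, pw = h // n, w // n                      # pinholes per axis
    tiles = raw[:ph * n, :pw * n].reshape(ph, n, pw, n)
    # Gather inner pixel (i, j) from every tile: one image per filter.
    return {(i, j): tiles[:, i, :, j] for i in range(n) for j in range(n)}

# Example: a fake 900x900 grayscale capture and a 3x3 filter array.
raw = np.random.rand(900, 900)
images = demultiplex(raw, n=3)
center = images[(1, 1)]   # e.g. the red filter, if it sat at the center
print(center.shape)       # (300, 300)
```

With a 900-by-900 capture and a 3-by-3 filter array this yields nine 300-by-300 images, in line with the roughly 300-by-300 output resolution mentioned in the Q&A.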
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_6_Lightfields_part_2.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: The final project is a very critical part of this class, so I'll emphasize this. The assignments are simple and straightforward; if you're struggling with them, talk to me, or [? Ankit, ?] or Professor [INAUDIBLE], or Professor [? Oliveira, ?] and we are here to help you. But the final project is really, really critical. The project has to be novel. It has to be something nobody has done before — or at least, nobody's done it the way you are doing it. So the problem statement has to be novel, its execution has to be beautiful, and for its impact, you should show some results: you should show that you are solving it, or that it's possible to evaluate what you have done. And we have multiple stages to get you prepared for this final project. You have to come up with at least three ideas for a final project, and this is very critical. If you go to the wiki on the Camera Culture group page, we have a whole section on how to come up with great ideas and how to brainstorm and so on, so I encourage you to look at that. On our [? Stellar ?] page we also have a whole presentation on how to come up with new ideas — there are six ways of coming up with new ideas. And you should start setting up a meeting with me or any one of these people in the next week. If you're in the building, it's easy; you can just catch me. Usually between 5:00 PM and 6:00 PM is the best, because there are no officially scheduled meetings then — you can just come to my office and we can chat. If you're not taking this class for credit and you're a listener, I would really appreciate it if you would pitch an idea for a project, and we'll have an opportunity to do that on October 30. So just come up here and say, I have this crazy idea; if somebody wants to work on this, let's team up. After discussing these three ideas, I, or [? Ankit, ?] or our other two mentors will help you narrow down to maybe the top two ideas, or maybe the top one idea, and then I want you to send a very simple email with these five things. All right? And on November 6, we will do a three-minute presentation so everybody in the class knows the kind of problem you are attacking. Maybe there's some synergy between multiple projects — you can help each other find software, or talk to people, or get some equipment. And then final proposals are due two weeks before the actual final presentation, and by then you should have some initial experiments. Remember, your final assignment is due on November 13, so you can't start thinking about your final project after the final assignment is due — you need to start thinking now. And at this stage, it doesn't have to be completely hashed out; that's why you're in the class. We'll help you think through it. And then December 4, the class is finished. You finish really early because, as I said, we meet on Fridays, and the Thursday after that, December 10, is the last day of classes. So we are one of the first classes to finish. Unfortunately, November 27 is the Thanksgiving break, which means we don't have a class just before our final projects to discuss and do other things. But I will be available throughout that time, and we can help you.
If you need equipment — cameras, software, whatever — we can try to help you. Any questions on final projects? And Mike was saying that it might be good to know what students did last year as final projects, and how they decided, and so on. Mike Hirsch is going to come and talk today about his BiDi screen, and that would be a good way to ask him how he got started and how they conceived the paper. So today we're going to talk about lightfields — just finish up the lightfields, and then talk about cameras for HCI. So we said there are basically three ways of capturing a lightfield. Can somebody tell me, in two sentences, what's a lightfield? AUDIENCE: In real cameras? RAMESH RASKAR: In a real-- inside a camera? What is it? How is it represented mathematically? A representation — just a representation. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: So how many dimensions does it have [INAUDIBLE]? AUDIENCE: Four dimensions. Four. RAMESH RASKAR: It's four-dimensional. Although the world seems to be two-dimensional, [INAUDIBLE] actually four-dimensional. And where do we get those four dimensions from? We have a lens, and we've got a sensor. So in flatland it's just two-dimensional [INAUDIBLE]. Where are the two dimensions coming from? Well, the x dimension is easy — that's one dimension. And the other dimension is? AUDIENCE: Through [INAUDIBLE]. RAMESH RASKAR: [INAUDIBLE], that's this dimension. So if you connect any two points, that indicates the direction of the ray. And in flatland, [INAUDIBLE], all of ray space is two-dimensional. In the real world, [INAUDIBLE] it's four-dimensional, because your sensor is going to be X and Y, and your lens is going to have U and V. So that's the lightfield that we care about. And why do we care about lightfields so much? What's so unique about a lightfield? AUDIENCE: If we capture the lightfield, we have captured almost everything that we can capture. RAMESH RASKAR: Exactly. It's a complete representation of the light that's entering the lens. So anything that you could ever imagine doing — changing focus, changing zoom, changing aperture size — all those things are already captured in this four-dimensional [INAUDIBLE], and from there you can do anything you want. And if you cannot do it with the lightfield, you cannot do it by mechanically changing camera parameters. So it's a very, very powerful representation. And as we saw earlier, the lightfield also represents the [? wavefront, ?] both phase and [INAUDIBLE]. So given that this is so important, it's striking that photographers never thought about capturing the lightfield — [INAUDIBLE] capturing light. And that's why this is computational — a pure computational camera — because we're trying to understand this relationship. And this is what allows us to build cool camera toys, and also come up with [INAUDIBLE]. So given that, what are the three ways we can capture this? One is using a lens [INAUDIBLE] that we saw earlier [INAUDIBLE]. Right now, at this pixel, a ray from here and a ray from here converge, and any variation in the radiance along those rays is lost. We want to make sure that we can capture this ray and this ray individually. So what are some ways we can do it? Instead of putting the sensor here, we can put the sensor further back, and then [INAUDIBLE]. And the light can [INAUDIBLE] here [INAUDIBLE] for the lighting. [? Inputs ?] here.
And it'll be mapped [INAUDIBLE] different pixels. So you have theta equals 0, theta equals plus 4, and theta equals minus 4. Then I have captured-- for this given x-- let's say this is 1, 2, 3, 4, 5. Then I have captured the ray 3 comma minus 4. AUDIENCE: Minus 4. RAMESH RASKAR: And I've captured 3 comma [INAUDIBLE]. So I have captured all [INAUDIBLE]. But of course, you won't have what? AUDIENCE: Resolution. RAMESH RASKAR: You won't have resolution. Because for x equals 3.5 it's just opaque. So that light is lost, not captured. So let's say I have 900 pixels here, and I'm going to chop it into plus 4 to minus 4. How many pinholes can I put here? The total resolution is the x resolution times the theta resolution. In a traditional camera, 900 is just your x resolution. But now we're going to also try to capture the angular variations in the same number of pixels. So we know that our angular resolution is 9, which means that our image is only going to be 100 pixels-- only 100 pinholes. And for each pinhole I'm going to get an image of 9 pixels. And from this 100 times 9 image, which is 900 pixels, I can create a lightfield where the spatial resolution is 100 and the angular resolution is 9. And what's the disadvantage? One is that you lose resolution. [INTERPOSING VOICES] AUDIENCE: Light. RAMESH RASKAR: You lose light, because all the light that's going through the opaque area is completely lost. It's like looking through multiple pinhole cameras. It's like looking at the world through [INAUDIBLE] holes. So most of the light is lost. In this case, out of nine pixels, one pixel is open. Then only 1/9 of the light is being captured. The other 8/9 of the light is lost. So that's a big problem. But still, conceptually, this is very key, because you can say, I want to capture spatial radiance as well as angular radiance. So I'm just going to chop my world into 100 pinholes and 9 angle bins, and I'm going to capture nine pixels per pinhole, and so on. It's a very clean, simple model. But in the real world, this is very inefficient. It's just like using the pinhole model for a camera-- clean conceptually, but in the real world, [INAUDIBLE]. Now-- AUDIENCE: So let me ask you this. Can you really construct this in such a way that, as you go to the end of the-- I mean, your x. Aren't you wasting part of the nine spatial samples that you have? RAMESH RASKAR: You mean at the top or the bottom? AUDIENCE: Yeah. Because maybe you can't actually have this [INAUDIBLE] opening over the entire set. Yeah, exactly. RAMESH RASKAR: As long as this is your sensor, and you can create a pinhole that can capture light from different directions, you're OK. There's not much you can do. I think what you may be thinking about is, this pinhole, at some angle, will become effectively opaque. You won't be able to see through it. Is that what you're saying? AUDIENCE: Yeah. What I'm saying is that maybe, through this particular angle, the pinhole doesn't span the whole nine pixels you have behind it. RAMESH RASKAR: That's a good point. But you are leading me to the next question, which is, this spacing here. What is magical about this spacing? Because if you don't have the right spacing, you'll get those problems.
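The budget in this example can be checked with a few lines of arithmetic; this is just a sketch of the numbers quoted above, not anything from the lecture's materials.

```python
# 900 sensor pixels split into pinhole cells of 9 angular samples each
sensor_pixels = 900
angular_samples = 9

spatial_samples = sensor_pixels // angular_samples   # number of pinholes
light_efficiency = 1.0 / angular_samples             # one open cell out of nine

print(spatial_samples)       # 100 -> the image is only 100 pixels wide
print(light_efficiency)      # ~0.11 -> about 8/9 of the light is thrown away
```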
What needs to happen in the spacing so that you actually capture all these 900 samples? Yes? AUDIENCE: [INAUDIBLE] they really should not overlap [INAUDIBLE]. RAMESH RASKAR: Exactly. Exactly. So if you think about the blob that's coming through from top to bottom, there'll be some loss here. If you see the blob that's coming through here, these blobs are just barely touching each other. So here we have theta equals plus 4 down to minus 4, and then the next one is theta equals plus 4 again. So it's flipped [INAUDIBLE]. Yes. Then you go from minus 4 to plus 4 [INAUDIBLE]. Now, if I move this pinhole further out or back, you're going to get problems, right? Let's say I move this pinhole a little forward and I just take one pinhole, and everything else is the same. Are the blobs larger? If I place the pinhole exactly 9 pixels away, the blobs just barely touch. If I move it, the rays will slide, and this blob here will start interfering with this blob here. So that's one issue. What happens if it's too close? Instead of moving further away, it gets too close? AUDIENCE: No spaces. RAMESH RASKAR: There will be a space between those blobs. We get a blob here and a blob here, and some pixels will be missing. So this is a very important point. And this is related to the numerical aperture, or the f-stop, of the lens. So the f-stop of the lens-- again, pardon all this terminology. I don't like it. But the way to think about it is, the f-number is simply the ratio of the distance to the sensor to the diameter of the lens. So it's a diameter, and your distance to the sensor, and the relationship between the two is just the distance over the diameter. And remember, the diameter is in the denominator, which means that as you have a larger lens, the f-number actually goes down. So let's take a concrete example. Let's say my lens has a diameter of 25 millimeters, and my focal length-- the distance-- is about 50 millimeters. Then the f-number is what? 50 over 25, which is 2. Does anybody know the standard f-number series? 2, 2.8, then there is 4, and so on. So as you can see, if I make my diameter half-- so instead of 25 millimeters, I have 12.5 millimeters-- then what do I get here? I get 50 divided by 12.5. So the f-number of a 12.5 millimeter lens with a 50 millimeter focal length is 4. From 2 we jump to 4. What should we do to jump from 2 to 2.8? This is where the [INAUDIBLE]. But maybe some of you have forgotten [INAUDIBLE]. Yes? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: That's how you go from 2 to 4. But how do you get 2.8? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: So what's the square root of 2? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: 1.414, something like that, right? As far as photographers are concerned, it's 2.8, because we like to think about nice round numbers. And this becomes an even bigger problem once you go further [INAUDIBLE]. Anyway, where are we getting this square root of 2? When you go from 25 to 12.5, the amount of light that's collected across this distance is reduced by what factor? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: A factor of 4. The diameter is decreasing by a factor of 2, but the area is decreasing by a factor of 4. So four times less light is coming in.
On the other hand, if you want to go from 25 to some other diameter so that you get half the light-- you divide 25 by the square root of 2, whatever that is-- 17.7, about 18. Let's say it's 18 point something; I'm being imprecise. So when I go down by a factor of square root of 2 in diameter, I get half the light. And that's what photographers mean when they say go down by one f-stop. So when you go down by one f-stop, are you going from 2 to 3? Are you going from 2 to 4? What are you doing? It's such an imprecise notion. Going down by one f-stop means going from 2 to 2.8. It's completely unnecessary [INAUDIBLE]. So just get rid of this terminology altogether. And if you really want to think about it-- the reason why it's worthwhile thinking about the ratio of the diameter to the focal length is the amount of light. And can I tell you why that's the case? So let's say I have a 25-millimeter lens with a 50-millimeter focal length. Or let's say I have a 5-millimeter lens with a 10-millimeter focal length. This system and this system have the same exact f-number, which is 2. Here it's 50 by 25. Here it's 10 by 5. So both of them have the same f-number. What's constant between the two? This angle here. This angle is the same in both systems. And this is critical, because when you are looking at the world, you want to say, at a given pixel, over what angle am I collecting light? And that's a very important factor when you think about how many photons you want to capture, because the angle subtended by the light determines how much you collect, and it certainly helps to think in terms of ratios of angles. Other than that, [INAUDIBLE] terminologies [INAUDIBLE]. What happens after 4? What's the next f-number? AUDIENCE: 5.6. RAMESH RASKAR: 5.6. What do you think 4 times square root of 2 is? AUDIENCE: 5.64. RAMESH RASKAR: It's 1.414 times 4. So it's about 5.66. Getting more and more imprecise. And as you can imagine, as you go further along-- this is the same confusion as whether a megabyte is 1,000 kilobytes or 1,024. Actually, it's even more confusing than that. And I heard a very interesting story at [INAUDIBLE]. When the economy was not going so well, in the contract they said, we will charge you this much for each megabyte or each gigabyte. For a kilobyte the difference is very small-- about 2%. But as you go larger, gigabyte to terabyte, the difference between the power of 10 and the power of 2 is actually significant. So this woman wakes up one day and says, you know what, I'm going to charge people by the power of 10, not the power of 2. [INAUDIBLE] And the same kind of problem is creeping in here. The square root of 2 is 1.414. It starts getting more and more confusing as you go. And then, as you know very well, after 5.6 you start jumping. You have 8, and 11, and 16, and 22, and so on. It just gets too confusing. So just ignore the f-stop scale. It's the most confusing thing. [INAUDIBLE] Photography school [INAUDIBLE]. So what we're trying to do here is, our angle here is decided by this ratio, diameter to focal length. And the angle here is, again, a ratio of the blob to this distance. So let's call this c1.
And let's call this c2. So what you want is the ratio of c to d to be equal to the ratio of the lens diameter to its distance. And if that is matched, then all your blobs are going to just barely touch each other. If it's not matched, then either you'll get overlap or you'll get gaps. That's the basic math behind a lightfield camera. So it's very clean conceptually, but as we know, it's going to block the light. And [INAUDIBLE] even talk about [INAUDIBLE]. Has anybody here done pinhole photography? So what are the problems you've faced in your pinhole photography? AUDIENCE: It's just, there's so little light [INAUDIBLE]-- RAMESH RASKAR: So little light, so-- AUDIENCE: --exposure time. RAMESH RASKAR: Exposure time is very long. And? And the image quality? AUDIENCE: It's blurry, because you could never [INAUDIBLE] the small holes. RAMESH RASKAR: Exactly. The image is blurred because of diffraction. So if you're viewing through a pinhole, light comes in, and it actually doesn't go in a straight line. It actually bends a little bit. And because of that, a single point in the world doesn't map to a single point on the sensor. It maps to a small blur. And this is very similar to the analogy of a water hose. If you have a water hose, and it's open, water comes out, and you always get the same thickness, the same width of the water flow, right? But as you start closing this down, eventually the water just starts spraying out. When the size of the opening in the water hose becomes comparable to the molecules of water, it starts spraying out. And that's diffraction, as well. And we have the same principle here. In very simple words, we have photons coming in here with a certain wavelength. What's the wavelength of green light? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: 500 nanometers. Remember these numbers. They're very, very useful. So 400 to 700 nanometers goes from blue to red. So if the light is 500 nanometers, which is 0.5 micrometers, and your pinhole is about 1 millimeter, which is 1,000 micrometers, then you're probably OK. But as you go to a millimeter and below-- which is where your cell phone cameras are, unfortunately-- you start getting to 500 microns, which is half a millimeter. Then your wavelength starts becoming comparable to the size of your opening, and you start getting this diffraction. So the relationship between the size of the opening and the size of the wavelength decides the diffraction. And the angle is very easy. If I say the aperture is a, this angle here is going to be on the order of lambda over a, in radians. We won't be talking about it too much in this lecture. It's an indication of how quickly the light is spreading out. So in the worst-case scenario, when you have the hole the same size as the wavelength of light-- let's say you created a pinhole whose width equals 500 nanometers, or 0.5 micrometers-- the angle is about 1 radian. If you're wondering, 1 radian is about 57 degrees. So this cone will be 57 degrees. And even if you do the calculation for something that's 1/100 of a millimeter-- that's 10 micrometers-- the angle is still pretty wide. That's what you need to know.
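Here is a small Python sketch that pins down the arithmetic from the last few exchanges, using the lecture's example numbers; the diffraction line is only the rough order-of-magnitude estimate quoted above, not a rigorous formula.

```python
import math

# f-number: distance to the sensor (roughly the focal length) over lens diameter
def f_number(focal_length_mm, diameter_mm):
    return focal_length_mm / diameter_mm

print(f_number(50, 25))      # 2.0 -- the 25 mm diameter example
print(f_number(50, 12.5))    # 4.0 -- half the diameter, a quarter of the light

# the familiar stop series is just powers of sqrt(2), rounded for convenience
print([round(math.sqrt(2) ** k, 1) for k in range(9)])
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]

# rough diffraction spread through an opening of width a: about lambda / a radians
wavelength_um = 0.5                    # green light, 500 nm
for a_um in [1000.0, 10.0, 0.5]:       # 1 mm, 10 microns, wavelength-sized hole
    theta = wavelength_um / a_um
    print(a_um, theta, theta * 57.3)   # radians, then degrees (1 rad ~ 57 deg)
```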
And so when camera makers are selling you lenses-- really crappy lenses on a mobile phone camera-- but giving you a sensor resolution of 5 megapixels and 10 megapixels, it doesn't make sense, because your image will be blurred. You don't need that many. So the other way to think about that is, can your camera capture all this? AUDIENCE: Mm-hmm. [INAUDIBLE] RAMESH RASKAR: You should just take photos later on. If your cell phone camera has a typical aperture size of, say, 2 millimeters, then light comes in and it spreads out. And let's say it spreads out over about 20 micrometers. The pixel of a camera is about 5 micrometers per pixel. So if the blur is already 20 micrometers, it doesn't make sense to have a pixel that's that small. You should just have pixels that match. If someone's selling you a 10-megapixel camera with a 5-micron pitch, they should just give you something that's much lower resolution than that, because the numbers don't make any sense. But again, this is a typical marketing gimmick camera makers will use, where they will sell really high resolution sensors, although clearly you cannot really capture that high an effective resolution. Skip. And there's a new trend. Camera makers have kept selling more and more megapixels, and now they have started selling cameras with lower resolution. I think the latest Canon has lower resolution than its previous version-- I believe the G11 went down to about 10 megapixels. But at least the trend is changing, because they realized it doesn't make sense to just keep boosting the megapixels when the aperture of the lens stays the same. So recently, I was working on a project where they wanted to capture 50 gigapixels. 50 gigapixels-- 50 times 10 to the 9. A usual camera is a few times 10 to the 6 pixels. And as the megapixels increase-- as you can see from this ratio here-- the size of the lens required increases correspondingly. And that's why expensive cameras have much bigger lenses. AUDIENCE: [INAUDIBLE]. I had a question. Wouldn't it be much more effective to come up with a measure [INAUDIBLE] size of the camera [INAUDIBLE]? RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: And the camera makers [INAUDIBLE]. AUDIENCE: Yes. I mean, of course it's a marketing thing. [INAUDIBLE] RAMESH RASKAR: Yeah, yeah. AUDIENCE: [INAUDIBLE]? RAMESH RASKAR: Yes, there is. There are all these measures called modulation transfer function, and space-bandwidth product, and so on. And if you want, we can discuss that later on. It's not as simple as what we're doing here. But the question of whether we can get more resolution, or whether this is the best resolution-- those are the laws of physics. You can have a lot of pixels, but [INAUDIBLE]. So the other way to think about this is, how can you create a lens that creates a spot that keeps up as you start decreasing your pixel size? And it's funny how nature works. There are always limits as we get closer to certain regimes. If you think about the density of water, it decreases as [INAUDIBLE] temperature. At some point [INAUDIBLE] diffraction, or [INAUDIBLE]. It seems like nature puts some stuff in the way. So we're talking about resolution.
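A sanity check of the cell-phone numbers just quoted, as a sketch; the 10-megapixel figure is only an assumed example.

```python
# if the blur on the sensor is already ~20 microns, a 5-micron pixel pitch
# oversamples it, and the extra megapixels buy no real resolution
blur_um = 20.0
pitch_um = 5.0
print(blur_um / pitch_um)      # 4.0 pixels span a single blur spot

megapixels = 10.0              # assumed sensor, for illustration
effective_mp = megapixels / (blur_um / pitch_um) ** 2
print(effective_mp)            # ~0.6 MP of genuinely resolved detail
```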
We're talking about the ratio of the diameter to the length. There's a very minor and very subtle point here. I'll just make it for those of you who are thinking about effective focal length and effective f-number. Which is-- although we're defining it by the diameter to the focal length, the sensor is never exactly at the focal length, because that's really only the case when you're looking at infinity. When you're looking at something closer, the sensor is-- say the focal length is 50 millimeters-- at 50-plus, like 51 or 52, maybe. But again, that's small enough that we can ignore it a lot of times. So we were talking about pinholes. Then we started talking about how pinholes aren't great because they lose light. Exposure ends up too long, and [INAUDIBLE]. And all those problems are going to appear here as well. It's just that what we have created here is an array of 100 virtual cameras-- 100 virtual cameras where each camera has a resolution of 9 pixels. We've converted one big camera into 100 cameras, each with 9 pixels. And in the real world, it would be 100 by 100 cameras, each with a resolution of 9 by 9 angular samples. That's what makes it interesting. From one camera [INAUDIBLE] 100,000 angles. And as we saw last time, you can do it at home. So just like in regular cameras, we don't use pinholes-- we use lenses. You want to replace each of these pinholes, now, with lenses. And that's why we call it a lenslet array. So I'm just going to draw on top of this with a different color. Tell me if it gets too complicated. I'm just going to draw the lenslets. And it's going to do almost the same task. We know that at the center of the lens, light goes straight through. Because remember, a lens is made up of a set of prisms, and the middle one is just a sheet of glass. So when light goes through the center, nothing changes. When you use a lens instead of a pinhole, we can do the same thing. But there are two, actually, very special constraints we have to achieve to make this happen, because it's a lens and we're trying to form an image. When we're focusing light in the camera, what we're really doing is forming an image of the main lens on the sensor. You remember, we had this concept of the image plane-- a point in 3D maps to a corresponding location on the sensor. And here, what you want to do is create a lenslet which forms an image of the main lens at just the right size. You have to choose its focal length and its distances in such a way that if I put a point here, I get a sharp image of the point here. And each region here has to map to 1 pixel exactly. So that makes it extremely difficult, because now we're almost at a microscopic scale. And you have to create these lenslets at extremely high quality, so that an image of a point at about 50 millimeters is formed at some very small distance, usually about 1 millimeter-- or even about 500 microns. So this becomes very challenging. This is a very specific constraint. There's exactly one plane in which I can put the sensor. If I put it too far, the image of the lens on the sensor will be blurry. If I put it too close, again, the image will be blurry. And that very special constraint makes building a lightfield camera out of lenslets extremely challenging and very expensive.
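The placement constraint can be read off the thin-lens equation; here is a minimal sketch, with the 0.5 mm lenslet focal length and 50 mm main-lens distance taken as assumed example values.

```python
# thin lens: 1/f = 1/d_o + 1/d_i, so the sensor must sit at d_i behind the lenslet
def image_distance(f_mm, object_distance_mm):
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

# a lenslet of 0.5 mm focal length imaging a main lens 50 mm away:
print(image_distance(0.5, 50.0))   # ~0.505 mm -- why the placement tolerance is so tight
```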
So let's say you-- by the way, in all these things we talk about, you can put the lenslet a little bit further out and change its focal length so that this one is still imaged here. But conceptually, it's the same as the pinhole. So if you put the lenslets over here and the ratio of this to this is not matched to this to this, then you'll get overlap. And if you put it too close and change the focal length accordingly, you'll get dark spots [INAUDIBLE]. So we still have to worry about the standard set of issues. And there's this additional thing you have to do, which a lot of people forget-- which is, there's some point here out in the world [INAUDIBLE]. And what we're trying to do is, this point is in focus over here. And then we're going to chop this into nine [INAUDIBLE], theta plus 4 down to theta minus 4. And the one that goes to the center gets [INAUDIBLE]. That is very focused. And what we have done is we have captured the lightfield of [INAUDIBLE]. If I want to create a lightfield, I [INAUDIBLE] differently over here. The question is-- remember, we made this claim that if I capture this 4D lightfield, I can do anything I want. I can focus here. I can focus there. I can change the aperture size. I can change the focal length. I can do anything I want if I capture that lightfield. So one question to be thinking about when you are doing this assignment is, if you capture the lightfield for one particular plane, how do you recreate the photo, either by refocusing it or [INAUDIBLE]? What's the difference between this situation and the assignment that you're doing? Because with cameras-- maybe last time we realized that the lens is what? It's an array of prisms. I can chop the lens up. You can take the lens and you can chop it up, and you can treat it the same as these pinholes, with a prism corresponding to each of them-- in the center, just a sheet of glass. It's a pinhole plus a prism. When you take a picture with a camera array, unfortunately, you're going to have to do that computation. All you have is a set of cameras. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Thanks. All you have is a set of cameras. You don't have this prism. And mathematically, you're going to shift the image, which is equivalent to the prism. So you're going to take the images and you're going to shift them by plus 2 pixels, plus 1 pixel, 0 pixels, minus 1 pixel, minus 2 pixels. And that's the same as putting in a set of prisms. Is this analogy clear? So once we have done all this, there are two classic ways of thinking about capturing lightfields. Again, 1908 is when this idea started coming up [INAUDIBLE]. But the practical solutions came much later. Remember, back then, it [INAUDIBLE]. Meanwhile, a third solution came up just two years ago for capturing lightfields. And we're going to look at that for the next few minutes and then switch over to [INAUDIBLE]. Anybody have thoughts on some other ways of capturing the lightfield? Yes? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Excellent. Excellent. There are lots of ways to go around this. So one very interesting idea these guys came up with was, they said, I don't want to put anything close to the sensor-- just a traditional camera. And I'm going to do this part up here.
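As an aside on the camera-array remark above, here is a minimal shift-and-add sketch of that computation, with made-up data and illustrative names; shifting each view before averaging plays the role of the missing prisms and synthetically focuses on one depth.

```python
import numpy as np

def refocus(views, shift_per_view):
    """Average the views after shifting view u by (u - center) * shift_per_view pixels."""
    acc = np.zeros_like(views[0], dtype=float)
    n = len(views)
    for u, img in enumerate(views):
        s = int(round((u - n // 2) * shift_per_view))
        acc += np.roll(img, s, axis=1)   # horizontal shift stands in for a prism
    return acc / n

views = [np.random.rand(64, 64) for _ in range(9)]   # stand-in images from a row of cameras
in_focus = refocus(views, shift_per_view=1)          # the shift amount selects the focal depth
print(in_focus.shape)                                # (64, 64)
```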
Because all I care about is this pixel, I want to know how each of these rays contributes-- what is the radiance of each ray? So all I will do is block this part of the lens, and that will give you this direction for each pixel. Then I take a second photo, but I will block everything except this region. And if I take nine such photos, then instead of getting 100 times 9, what will I get? Nine photos, each with a resolution of 900 pixels. So we get 900 times 9 this whole time, with a single camera taking nine exposures. So this is what they did. And they came up with better [INAUDIBLE], but [INAUDIBLE]. But you can see it's not a completely new idea, because it's using the same basic concept [INAUDIBLE]. You have something of your own? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Yeah, OK. What else? Think about that document online, that presentation about how to come up with new ideas. One way to think about it-- think about all the cameras you know, all the [INAUDIBLE]. On a camera, you have aperture, exposure time, focus, focal length, moving the camera, wavelength. Think about all the ways you can use those parameters to capture the scene. [INAUDIBLE] This is called [INAUDIBLE]. And you know what it means. Yeah? AUDIENCE: What if you-- so you're shooting out of the [INAUDIBLE]. RAMESH RASKAR: Why does it have to be a plane? AUDIENCE: Yeah. RAMESH RASKAR: It could be something else. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Exactly. What shape should it be? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Like on some kind of a concave surface or a sphere. AUDIENCE: A similar [INAUDIBLE]. RAMESH RASKAR: Remember, a CAT scan machine works on the simplest [INAUDIBLE]. In a CAT scan machine, it is [INAUDIBLE]. Or even thinking of doing some non-interfering optical communication-- [INAUDIBLE] interested in this. You would do the same thing. Rays are coming from different directions [INAUDIBLE]. We'll put the array of angles [INAUDIBLE]-- a bigger project. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: So let's think about this third solution, which is a relatively simple idea to explain, actually. What we're going to do is not use lenslets, but use a mask. I'll show you how this works, and then I'll explain how it works. All I'm going to do is place a mask-- instead of a pinhole array, we're going to place a mask that has certain special properties. So here, it's a pinhole array. Here, it's going to be, basically, some [INAUDIBLE]. It's going to have some strange effect. You can bring this [INAUDIBLE] and put it [INAUDIBLE]. And if you do that, it turns out, you'll get pictures that look like this. And if you photograph something out of focus, you get this really strange [INAUDIBLE]. For the part in focus, it looks like the light has been [INAUDIBLE] a little bit, but it's otherwise the same. But for out of focus, it's this really strange [INAUDIBLE]. And I'll summarize how it's computed and then come back and explain why it works. All we're going to do is take a traditional camera photo. You could take any image, in general, and just take its Fourier transform, which is what JPEG compression does as a first step. You realize that most of the energy in this Fourier transform is in the low frequencies-- conventionally you put the low frequencies in the center.
So most of the energy is in low frequencies-- low spatial frequencies. So in the center, you have the DC component, which is the average of all pixels. And then you have the first frequency, which is how many cycles you can fit in a dimension, and so on. Most of the energy is over here. If you place this very high frequency mask where the pinhole array was, and then take the Fourier transform, it looks really, really strange. It has significant energy in high frequencies, as well. And those of you who are used to looking at oscilloscopes or radio frequencies and so on-- if you just capture the radio waves, all the radio stations, and look at the spectrum, it will also look something like this. It will have some carriers in the middle, and the signal will get modulated around each carrier. So a 99 megahertz station is transmitting its audio around its carrier, and other stations are transmitting their audio around theirs, and so forth. So this is what the spectrum will look like if you capture any kind of broadcast signal. And something similar is happening here. And I'll come back to this and explain how this example works. So what we're going to do eventually is take the full Fourier transform, which looks like this, and we're going to reshape it. We're going to take this 2D spectrum and make it into a 4D hypercube. And we're simply going to take its inverse Fourier transform and recover all those images that you would have captured if you had placed the camera at different positions. It's as if we have taken the lens and split it into 81 different cameras. As you can imagine, from these 81 cameras-- for every one of those slices on the lens, we have captured an image that's only 200 pixels by 200 pixels. But we have created 81 such pictures. And so here you are just seeing some of those different pictures. But using this 81-camera array-- the box I showed you last time was a 5 by 5 camera array, and you can think of this as an 81-camera array. But we didn't build a new device. It's just an ordinary SLR camera we didn't modify; we just placed the mask [INAUDIBLE]. And suddenly, for two extra dollars, we now have 81 cameras-- but of course, each of them has much lower resolution. AUDIENCE: What's the advantage of this mask over the pinhole-- or pinhole mask? RAMESH RASKAR: So the question is, what's the benefit of this type of mask over a pinhole array? What are the disadvantages of pinhole arrays? AUDIENCE: More light is going to [INAUDIBLE]. RAMESH RASKAR: Right. So it's more light, because almost 50% of the light will go through. Previously, we know the pinhole problem-- very little light goes through. What's the second problem? Yeah? AUDIENCE: Because the image you get through a pinhole [INAUDIBLE] is not [INAUDIBLE]. RAMESH RASKAR: Exactly. You get diffraction with pinhole arrays. Even here, you have some diffraction [INAUDIBLE]. AUDIENCE: So it's basically [INAUDIBLE]. And that's not very-- RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Well, [INAUDIBLE]. What you capture looks almost nothing like an image, except for those very strange effects called [INAUDIBLE]. AUDIENCE: So are you saying you're applying the inverse transform to small sections of this spectrum? RAMESH RASKAR: No, not really. Not really. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: We'll take this full 2D image.
And we're going to reshape it and create a 4D hypercube. So let's make it simple. Let's say the image was only 1D, just this part of the line here. If you take its frequency transform, you will get some 1D frequency transform that looks like this. So let's go step by step. The frequency domain-- this is a little bit of a digression. We'll come back and talk about Fourier techniques in a couple of lectures, and this will become more clear at that time. But basically, we have some variation in x and some variation in theta. We saw there that it was 100 by 9. In this case, it's 200 by 9. And our sensor, however, is just 1D in this case. So although we want to capture 200 here and nine here, we cannot capture that much from the sensor. So what we basically do is take that 1D sensor, and it's going to capture different parts of the signal. And then we're going to reshape it. We're going to chop this part over to here, chop this part over to here. This one stays in the same place. Take this part over to here. And this part goes over here. And now, from a 1D signal, you have created a 2D spectrum. And it's not really visually clear, I'm sorry, because of the color scheme. And from that, we can recover-- AUDIENCE: So you have all the coefficients for the entire image. RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: The whole coefficient set. Yes? AUDIENCE: What about the aliasing? RAMESH RASKAR: Sorry? AUDIENCE: The aliasing. RAMESH RASKAR: Frequency domain aliasing? AUDIENCE: Yes. RAMESH RASKAR: So we'll talk about that. If you think about this particular problem where we have overlap or undershoot, there's aliasing there too, because we're trying to capture more signal, but we don't have enough bandwidth here. And the same exact problem appears here as well. So you're going to assume that the scene doesn't have frequencies beyond that. So we're [INAUDIBLE] and must [INAUDIBLE]. AUDIENCE: But doesn't it mean that you lose [INAUDIBLE] data [INAUDIBLE]? RAMESH RASKAR: You can only-- so you capture only 200 pixels in the world. AUDIENCE: Even if the spectrum is not [INAUDIBLE], it's very simple. You need to sample [INAUDIBLE]. RAMESH RASKAR: You have to hope that your spectrum looks something like this. If your spectrum actually has lots of energy up here and down there, then, yes, you'd get artifacts. AUDIENCE: And can you sample at a much lower frequency than the actual signal? RAMESH RASKAR: Repeat that? AUDIENCE: Can you sample at a much lower frequency than the actual signal? RAMESH RASKAR: Using what? Using how much [INAUDIBLE]? AUDIENCE: No, just simply low frequency-- for this center one. RAMESH RASKAR: Yeah. That's exactly what happens in JPEG. You take whatever 4-megapixel image, and then you're able to represent it as a half-megapixel image, because you take only the lower frequencies and represent them faithfully. And you take the higher frequencies, but you don't represent them exactly. That's how JPEG handles frequency content.
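As a loose illustration of the bookkeeping just described-- not a faithful decoder; a real implementation demodulates around the mask's carrier frequencies and handles scaling carefully-- chopping the 1D sensor spectrum into tiles and stacking them yields the 2D space-by-angle spectrum, whose inverse transform is the lightfield.

```python
import numpy as np

sensor = np.random.rand(1800)                   # stand-in 1D sensor reading (200 x 9 samples)
spectrum = np.fft.fftshift(np.fft.fft(sensor))  # its 1D Fourier transform

tiles = spectrum.reshape(9, 200)                # chop into 9 spectral tiles and stack them
lightfield = np.fft.ifft2(np.fft.ifftshift(tiles)).real
print(lightfield.shape)                         # (9, 200): 9 views of 200 pixels each
```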
AUDIENCE: So you assume-- RAMESH RASKAR: But we cannot do that optically, for real light. AUDIENCE: Yes. RAMESH RASKAR: You can do it in software. So here's a method to do that-- not the filtering, but the remapping-- in optics. AUDIENCE: So you still assume it's band-limited? RAMESH RASKAR: Yeah, exactly. So if the world actually had a checkerboard, an extremely high resolution checkerboard, then you'd have problems. It's a similar effect in software when you photograph really fine structure-- you just take the picture and you see these aliasing artifacts. Or if you take a picture of a fence, [INAUDIBLE]. So it's the same problem. You cannot get rid of it unless you do [INAUDIBLE]. I won't go into detail. AUDIENCE: Ramesh, do you have to apply this to each channel, the [INAUDIBLE]? RAMESH RASKAR: Yeah. You have to do it for every color channel. That's right. And from there, you have captured it. So let's look at the intuition of how exactly this works. There's a [INAUDIBLE] explanation that many of you have seen, and I'm going to give you a very simple, intuitive reasoning instead. So let's come to this problem of replacing a pinhole with this new type of mask, which we call a heterodyne mask. So here we have a pinhole-- and this part, again, a pinhole. And we have nine rays coming in, theta equals plus 4 down to minus 4. And then we do the next one over, and this one's doing the same thing again. And by using this trick, you're not going to get more resolution. You're still going to miss some information in the [INAUDIBLE]. It's just that we don't lose the light. We just lose those spatial frequencies. So, a small digression-- we're going to talk about something called Hadamard multiplexing. And the problem is really easy to explain in this setting. Let's say I give you several bags with different weights, weight 1 to weight 9, and I tell you to weigh them. And at the end, I want to know what the weight of each of the bags is. You could take the weighing scale, put each of the bags on one at a time, and you have a solution. What are some other ways you can do it? If you don't want to do one at a time, what can you do? Sorry? AUDIENCE: Do all of them. RAMESH RASKAR: Do all of them? But that doesn't let you decompose it into nine. You must make nine measurements. I'll give you that. You must make nine measurements. One choice is, you just put one bag on at a time, and you have nine measurements. But let's say the weight is from 0 to 100-- I'm sorry, the weight itself is 0 to 10. And let's say your scale doesn't work very well at the extremes. The scale works pretty well in the middle of its range, but it doesn't work so well when the weight is too low or too high. So what you want to do is stay in the middle range. And cameras work the same way. If the light is too low, or the light's too high, the camera cannot figure out where exactly it is. But if the pixel values go from 0 to 255, then between, say, 50 and 200, the camera works nicely, in a straight line. When it's too dark or too bright, a camera cannot handle it as well. And a similar analogy applies here. So one solution is to do one at a time. The other solution is I can put a group of them.
Let's start with a smaller number, three. Let's say I have three of them. I could put one at a time, or I could put two at a time. So I can do w1 plus w2, then w2 plus w3, and then w1 plus w3. That's three measurements, and from the three measurements, I can figure out what each of these weights is. And you can actually write this down as a linear system. So let's say when I put the first two on, my measurement is m1, then m2, and this one, m3. And all we have done here is we have said, I'm making measurements m1, m2, and m3. What I would like to know, actually, is the weights-- w1, w2, w3. But the way I have measured it is, the first one is w1 plus w2 and no w3. The second one is w2 plus w3 and no w1. And the third one is w1 plus w3. And we just solve the system. And you can tell from looking at it that the matrix is invertible. So for three, this seems like a very good solution, because now we are staying somewhere in the middle range. We are not at the bottom of the scale's range. And if you put all three of them together at the same time, you might go too far over here, as well. We stay in the middle because we take the sum of two at a time. So this is very convenient. Hadamard multiplexing is basically the same concept. Instead of three, we have 9 or any number of measurements. I'm going to basically take about half of them randomly-- I take about half of the bags, put them together, take a measurement. Then I take them apart, again take some other half, take another measurement, and so on. So if I have nine of these, I will create a matrix that looks something like this. My measurements are m1 through m9. And what will we have here? What's the matrix? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Sorry? AUDIENCE: Nine [INAUDIBLE]. RAMESH RASKAR: Nine by nine. And this time, instead of putting a single 1 in each row, we've got to put some random entries here. Let's step back. If I just put one bag on at a time, what will this matrix look like? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: It will be just a 1, and everything else 0, with the same thing going along the diagonal. Instead, out of the nine entries, I will have four or five 1s placed randomly-- 1, 0, 1, 1, 0, and so on. That's one sequence. And then I'll take the next one-- another such sequence. I have nine such sequences, each with four or five 1s. And this is how [INAUDIBLE] sequence [INAUDIBLE]. So instead of taking one measurement at a time, you take a linear combination-- in this case just a sum-- of about half the bags. And the benefit of that is that you're staying in a range where the scale behaves well. Now let's map this analogy to where we're going. Using a pinhole is like doing one measurement at a time. You're only measuring one ray at a time. Using a lens alone is like putting all the bags on at the same time. And what we will do now is place a mask where only about half the rays are coming through. So you're going to be summing half the input variables, and measuring those. Continuing the analogy, I'm able to recover what the individual variables are.
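The weighing scheme is just a linear system m = A w; here is a minimal sketch of the three-bag version with made-up weights.

```python
import numpy as np

A = np.array([[1, 1, 0],     # m1 = w1 + w2
              [0, 1, 1],     # m2 = w2 + w3
              [1, 0, 1]])    # m3 = w1 + w3
w_true = np.array([2.0, 5.0, 3.0])
m = A @ w_true               # the three scale readings

w = np.linalg.solve(A, m)    # invert the mixing to recover the individual weights
print(w)                     # [2. 5. 3.]
```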
So let's focus on this one slab here. If you just focus on one slab here, light comes in [INAUDIBLE]. This one's just blocking this light here and here. What I will do is block it from certain directions. So let's say I will make this-- actually, that doesn't look that interesting, so let's go to [INAUDIBLE]. I'll make it 1-1-1-0-1-0-0-- roughly half ones. What's going to happen is, for a certain part of the image-- if you look at the pixel here-- this is my lens, this is my pixel. If I shoot the ray here, it's blocked. But if I shoot the ray here, it goes through. If I shoot the ray here, it's again blocked. So what I'm getting at this pixel is not the sum of all the rays, as with a conventional lens. If I go to the next one, I get some other combination, because for this one this ray was blocked, but for this one it goes through. Same here, but now this one's blocked, and so on. Because of the displacement, the combination that we're getting is different every time. And once we have taken the linear combinations, which is what we have here, about half the light goes through. But what we have measured are the linear combinations. We have these measurements; we don't have the original radiances. Simply inverting the system will recover these intensities. And there you go. We're back to a traditional lightfield. So that's a very intuitive way of thinking about how we can use Hadamard multiplexing with masks. So we now get half the light. [INAUDIBLE] In this case, we just have [INAUDIBLE]. So we account for the masking and the actual camera is ready to go. Unfortunately, you have to do a lot of computation on this equation for the whole image. The image is 16 megapixels. This matrix could be 16 million times 16 million if you do it in a brute-force fashion. But of course, there are simplifications based on the structure that make it really, really fast. So we won't go into the inversion, but [INAUDIBLE] say that it's possible [INAUDIBLE]. Yes? AUDIENCE: Can you [INAUDIBLE] other one or is something [INAUDIBLE]? RAMESH RASKAR: That's a very good question. So what I showed you-- I kind of cheated. I showed you a mask that looks like a cosine. And what we realized just in the last year is that this and cosines are one and the same thing. So if you take a bunch of cosines of different frequencies, what we're doing is projecting on different carriers, and so on. If I do cosines, and I make them sharper and sharper-- if I place all of them together, you start getting a peak in the center. You get a bright spot, and away from it, you get a dark spot. The sum of all of these, as you can imagine, ends up being something like a spike, then a dip, and then some repeating variation. And it turns out we can place these cosines in such a way that we actually get a binary pattern. And that really takes a lot of work. We'll talk about how you can actually get binary masks [INAUDIBLE].
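For intuition about the sum-of-cosines remark, here is a tiny sketch with illustrative frequencies only: adding a few harmonics concentrates energy into a bright peak with shallow regions elsewhere, which is what makes thresholding toward a printable, near-binary pattern plausible.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 512)
mask = sum(np.cos(2 * np.pi * k * x) for k in (1, 2, 3, 4))  # four harmonics
mask = (mask - mask.min()) / (mask.max() - mask.min())       # normalize to [0, 1]
binary = (mask > 0.5).astype(int)     # crude threshold toward a binary mask
print(round(mask.max(), 2), binary.sum())
```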
Because printing a binary mask is easier-- it's more convenient than printing a mask where the transmittance is changing continuously. AUDIENCE: And when you phrase it as a linear system, there is no-- I mean, no sense in blocking half the light or so. I mean, partially, with a partial transparency-- but you could just go binary anyway. RAMESH RASKAR: Yeah, exactly. If you go binary-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: You're still going to lose half the light. AUDIENCE: Yeah, yeah. RAMESH RASKAR: [INAUDIBLE] some of this. AUDIENCE: Yeah. But I'm saying, instead of using [INAUDIBLE], say 0.75 or so. I mean, [INAUDIBLE] RAMESH RASKAR: That's a great point. So let me rephrase what this fellow over here is saying. He's saying, instead of ones and zeros, which means the sum of all this is still half of the total light, why not make this 1 and 0.75, and so on? So you still have the variation, but most of the light is going through. And it's actually a great topic for research as to what exactly this pattern should be. For us, binary-- ones and zeros-- was very convenient. And as you play with that parameter space, you get many, many different interesting solutions. Some of them are geared more towards getting the photo, and some of them are geared more towards capturing the lightfield. So for example, if this is all ones, then you just get your photo. If this is all ones, that means every point is transparent, which means there's not much [INAUDIBLE]. If you put all zeros, that means no light can go through. If you use a [INAUDIBLE], then it's a random [INAUDIBLE]. But if you use anything in between, then you're optimizing [INAUDIBLE]. Yes? AUDIENCE: I guess one [INAUDIBLE] equations [INAUDIBLE]. Because holding [INAUDIBLE]. They say others [INAUDIBLE]. You [INAUDIBLE] in the Fourier [INAUDIBLE]. And you told me it's very easy. And it's very easy to figure out what kind of [INAUDIBLE]. RAMESH RASKAR: Right. Exactly. AUDIENCE: And I wonder what [INAUDIBLE] similar effect [INAUDIBLE]. RAMESH RASKAR: So I really would like to do that in the optical domain, but we don't have that pleasure. The light is [INAUDIBLE], and we can only do blocking and unblocking. So if you had a way to sense the image first, assess the image, and change your mask so that you can choose the right parameters for the scene, then what you're saying is true. You get a smarter compression. You get to look at the signal before you compress. AUDIENCE: But I thought you could [INAUDIBLE] it's not focused on the image. The image won't have any high [INAUDIBLE]. But [INAUDIBLE] and you cannot capture frequencies like that. RAMESH RASKAR: [INAUDIBLE] AUDIENCE: Exactly. And it doesn't [INAUDIBLE]. But [INAUDIBLE] we can [INAUDIBLE]. RAMESH RASKAR: Exactly. So what [? Deena ?] is asking is-- if I know that the scene is limited to 200 pixels and 9 variations-- so let's say, for the photograph, what I'm going to do is put a photograph here and then capture its lightfield. And I assume 200 pixels, and I have nine views for each of those pixels. Then what I've done is built the correct system. But let's say somebody gave me a photograph which actually had 400 pixels-- or, let's say, 300 pixels, and then only six views. I don't know that a priori. There's no way to know whether it's a 200-pixel photograph or a 300-pixel photograph. I will continue to decode it as a 200-pixel photo.
But the problem will be, because this scene has higher frequencies, you'll get aliasing-- it's the same question you're asking. And you have no way of knowing [INAUDIBLE]. So the typical solution in any signal processing, in any device that does sampling, is you prefilter. You reduce aliasing by blurring or cutting off high frequencies so that they will not masquerade as low frequencies. In the optical domain, unfortunately, that's very difficult to do. [INAUDIBLE] that will convert something that's 300 pixels to 200 pixels purely in optics. You can do it in software-- you can downsample or you can smooth in software. But it's not that easy to do in optics. At least we don't have a solution. Maybe you will come up with a solution. And once you do that, then you can [INAUDIBLE]. AUDIENCE: So it says here that if [INAUDIBLE]. And so I'm thinking about what will happen to-- the Fourier transform is a complex number. And you have the phase and the magnitude. Are you capturing both, or are you only capturing the magnitude? RAMESH RASKAR: Only the magnitude. So the other way to think about that is, as I was drawing these cosines-- instead of ones, zeros, and [INAUDIBLE], what you're doing is really a projection. So the very first one we have is the first cosine. The second one is the next cosine, and so on. And we have these nine cosines-- four with positive phase, four with negative phase, and the middle one is just the DC term. That's why I chose this particular pattern. And it turns out, instead of using cosines, we can do the binary [INAUDIBLE]. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: That's the keyword [INAUDIBLE]. AUDIENCE: [INAUDIBLE] because now we [INAUDIBLE]. And are all these Fourier transforms different? Or are some of them [INAUDIBLE]? RAMESH RASKAR: No, no. They're all completely different. They're going to be [INAUDIBLE]. AUDIENCE: So [INAUDIBLE]? RAMESH RASKAR: Because, remember, I'm taking a cosine and it's [INAUDIBLE]. So basically, again, I have nine unknowns and nine measurements. And I can come up with any of these signatures-- I can mix them, a bunch of ones and zeros with some missing, or I can do some kind of cosine projection. I can do [INAUDIBLE]. AUDIENCE: From the way you described putting the mask there, the way this matrix is formed, it seems like the second row is probably a shift of the first row. RAMESH RASKAR: It could be, because [INAUDIBLE]. So that's the [INAUDIBLE]. AUDIENCE: Because then the inversion can probably be easier, because you know the pattern of the matrix. And it's not a brute-force inversion. Maybe you can [INAUDIBLE]. RAMESH RASKAR: Exactly. So all the beauty and elegance of the effort comes in choosing the right type of mask and codesigning a decoding scheme that doesn't amplify the noise, and so on. You had a question? AUDIENCE: I was going to ask [INAUDIBLE]. RAMESH RASKAR: That's the same question that [? Deena ?] was asking. Anything beyond 200-- from 201 to 300-- shouldn't dominate the picture. I like to think of it in the frequency domain, because [INAUDIBLE] and because [INAUDIBLE]. AUDIENCE: [INAUDIBLE] from a little while back. Can't we use something with [INAUDIBLE] arrays [INAUDIBLE]? RAMESH RASKAR: So it turns out [INAUDIBLE] whether you think about them as 4D ray space or you think of light as wavelengths [INAUDIBLE].
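The audience's observation about shifted rows can be made concrete: a matrix whose rows are circular shifts of one pattern is circulant, and the FFT diagonalizes it, so inversion becomes a pointwise divide. A minimal sketch follows, with an illustrative pattern whose spectrum happens to have no zeros (a real design would verify this).

```python
import numpy as np

c = np.array([1.0, 1, 0, 1, 0, 1, 0, 0, 1])   # first-row mask pattern (illustrative)
w_true = np.random.rand(9)

# measurement = circular mixing of the unknowns by the shifted pattern: m = C w
m = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(w_true)))

# circulant systems invert as a pointwise divide in the Fourier domain
w_rec = np.real(np.fft.ifft(np.fft.fft(m) / np.fft.fft(c)))
print(np.allclose(w_rec, w_true))             # True
```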
AUDIENCE: Yeah, I was just thinking I could [INAUDIBLE]. RAMESH RASKAR: [INAUDIBLE] down from UCLA? AUDIENCE: Yes. [INAUDIBLE] RAMESH RASKAR: Right. But he's using diffraction for that. And we'll come back and talk about this. So a lot of you are wondering, where does this all fit in? This seems like a lot of math, a lot of algebra, for something where you could just put in a pinhole and be done with it. So right here, I am going to explain to you how the same exact technique can be used in very different ways in two very different scenarios. AUDIENCE: So just a quick question on that. Have you already compared the results using this cosine mask with the binary ones? RAMESH RASKAR: Yes. Yes. In fact, here, let me show you. AUDIENCE: Because in theory, they should be equivalent. But in practice, maybe some high frequencies that you introduce with the binary mask come in-- and then you have to truncate at some point. RAMESH RASKAR: In simulation, they're identical. AUDIENCE: Yeah. RAMESH RASKAR: But you're right, because what happens is, when you print these cosine masks, your intensities are actually not reproduced very well as a cosine. It actually varies something like a step function, because of the digital levels of the printing process. So that ended up creating a lot of trouble. AUDIENCE: And for the binary one as well, because then you're going to have some sharp transitions that, in the Fourier domain, may go all the way to high frequency. RAMESH RASKAR: Exactly. The conditioning of the system suffers if the transitions are not right, and then your model is incorrect. So the calibration is very key. But the one benefit is that, because we're now using masks rather than lenslets, you don't have to position the optics so that an image of the lens is created as a sharply focused version on the sensor. In other words, that very, very strong constraint-- that we must put things at exactly the right distance so that the image of the lens is formed on the sensor-- we don't have that anymore. Even if there's some misalignment, there's some tolerance in the placement of the mask. Because all we're doing is taking the values we want to measure, mixing them around, and getting new measurements. That gives us our freedom. So if you go back to the analogy of nine bags-- rather than a very specific combination of those nine bags, it will be a slightly different combination of the nine bags. My mixing matrix may not be the most efficient, but it would be good enough to do a reasonable job of recovering the unknowns. So that gives you a lot of flexibility. So now I'm going to show you how we can use this heterodyne technique. AUDIENCE: So, Ramesh, it seems that the binary mask may introduce some diffraction artifacts, right? RAMESH RASKAR: They do. They do. Are you going to talk about that? GUEST SPEAKER: I'm not sure I got to that. That's fine. You can see the results, and then we can talk about [INAUDIBLE]. RAMESH RASKAR: Yeah. And just-- you can go slow, because we just talked about it [INAUDIBLE]. GUEST SPEAKER: OK. So I--
MIT MAS.531 Computational Camera and Photography, Fall 2009. Lecture 6: Recent Research -- BiDi Screen.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Yeah, and again, just go slow, because we just talked about this [INAUDIBLE]. MATT HIRSCH: So this is a project that-- this presentation actually came from SIGGRAPH this August. I'm going to skip around a little bit. PROFESSOR: You might want to say just a couple of sentences about-- Mike had a question about how class projects start in this class, and how they evolve. And I said you'd be a good example of how it started. So just say a couple of sentences about how you started. MATT HIRSCH: Shall I start with that? PROFESSOR: Yeah. MATT HIRSCH: OK. So I guess I had taken Ramesh's class in the spring term before your first class here, and had been really interested in camera work. And so Ramesh and I had been thinking about projects after that class. And over the summer, I kind of tried to develop some ideas that might lead to a thesis. And so coming into this class, when I took it, I had the intention to hopefully develop some project that I did in here into some thesis work. And I guess that is a good place to start, maybe, if you're interested in really developing a project beyond the scope of the class-- to have some other good motivation to do that, like a thesis. But yeah, then when I did the final project for this class, I started thinking about some of the work that Ramesh had presented, and working with a postdoc-- or a [INAUDIBLE] who's soon to be graduating, who has done some work with Ramesh back at [INAUDIBLE]-- and really got excited about it, and so ended up developing that into my thesis. And that's where this project came from. PROFESSOR: It's a great example. As you'll see, it's a great example of beautiful theory, beautiful implementation, and impactful implementation-- those three [INAUDIBLE] that we have: novelty, execution, and impact. It's surprising he didn't win the best [? project ?] award, even by a popular vote [INAUDIBLE]. MATT HIRSCH: Well, we had some really cool-- have you shown them the projects from last year? PROFESSOR: Yeah, just very briefly. But [INAUDIBLE] we should talk over them more. MATT HIRSCH: Yeah, so I guess I'll just get started with this. The goal here is to think about new ways of interacting with a thin-screen device. Imagine if your screen could not only support on-screen touch interaction, but also this type of off-screen hover gesture interaction that we think about here at the Media Lab in a lot of contexts. And so you see in that brief example where I'm able to lift up my hand and interact in free space right in front of the screen. And here's another example where I'm manipulating an object, and I select it by touching the screen in traditional mode, but can pull my hand away and rotate it around like Luke Skywalker. PROFESSOR: Use the laser pointer. MATT HIRSCH: OK. So this is kind of inspired by three emerging areas in HCI and camera research. One of them is this new class of light-sensitive displays. You can mention if you've covered some of this stuff, and I'll skip it. PROFESSOR: Not how it works, so this is perfect. MATT HIRSCH: You all know how LCDs function. They're basically a matrix of transistors.
There's a couple of companies that are taking that transistor matrix and embedding a single extra transistor that's optically sensitive into the matrix so that the net result is the entire LCD is a large-area optical sensor. And they're using these for touch interactions. So that was one inspiration. And then the second, of course, is depth cameras, which you may have covered a little bit. There are a couple of different techniques, but the upshot is that these cameras not only produce an RGB image of a scene, but a map of the scene where, for each pixel, you have a measurement of the distance from the camera to some object. And the third, I guess, is this sort of ubiquitous multitouch display, which has been popularized by Jeff Han here and the CNN wall, and of course the iPhone and a lot of other consumer electronics devices. PROFESSOR: [INAUDIBLE] other people, that makes a good use right now. MATT HIRSCH: So we're kind of inspired by the ability to so easily interact or so intuitively interact with information on the screen and thinking about, can we take that one step further? So what if you could combine all of these things, basically, and be able to, because it's an LCD, build it all into a thin package, but also, because it's a depth camera, be able to track hands out in front of the screen? So that's where we started. The benefit of that, of course, would be that you can bring this depth sensing ability to all sorts of consumer electronics types of devices that don't have it or it wouldn't even be possible to think about today, like an iPhone or a laptop. So to give you a brief overview of the results-- I think Ramesh may have covered some of this. This is basically a light field, and you've probably seen-- PROFESSOR: We just covered it today. MATT HIRSCH: Yeah. PROFESSOR: [INAUDIBLE] MATT HIRSCH: So this is what we were able to capture. And there, you see a synthetically refocused image. You guys, I think you just did your-- PROFESSOR: That's our synthetic aperture. MATT HIRSCH: --synthetic aperture project. So this is one application of that, where we're basically taking the set of images captured by the light field, synthetically refocusing them, and extracting a depth. So just to give you some of the-- I guess to think about how to adapt one of these optical touchscreens, you know, it works a lot like a document scanner, where you have this array of pixels without lenses. An object that's touching that layer of pixels can have a sharp image made because you have a one-to-one correspondence between a point in the scene and a point on your sensor, just by virtue of their being so close together. But of course, when you pull your hand away from that sensor, you no longer have that one-to-one correspondence. Rays can travel from this object to many different pixels, and so you get a blur. So I guess our approach, then, is to think about a way we can basically bring that one-to-one correspondence back without using any kind of lenses. And so what we do is separate the sensor by a small margin from the display and then display one of these types of masks that Ramesh was just describing to you guys. So in this case, what we're considering is using the LCD as both a display device for the user to see the images, like a typical LCD screen, and also as a device to create one of these masks that Ramesh was describing in order to encode the scene in a way that we can decode in software. So here's an idea of what that device might look like.
You have your LCD screen here, and some distance behind it, you have a sensor layer. And out here, you have objects. And you can actually decode-- the vision is that in this thin device, you could then decode this object, process the imagery, and then re-display it on the screen, maybe in a modified way, or in the case that I'm describing, to interact with the screen gesturally. And so this is a kind of interesting device. And I like to think about the pinhole analogy, because it's very intuitive. Just looking at that mask, it's not quite clear to me what you get just from an intuitive sense. But the pinhole makes a very easy case to think about. If you imagine tiling those pinholes all across the aperture or all across the LCD, you get, basically, many tiny cameras covering the screen. And each of those cameras has a slightly different perspective of the scene. And putting those together, that's basically a light field that you're capturing right there. And the interesting thing in thinking about this is when we-- this is not a normal type of camera that you're used to using to capture pictures of birthday parties or whatever. It's going to produce a pretty strange-looking image because, if you think about how you would image an object out in front of the device, first of all, it produces an orthographic image. There's no perspective here. Let's say this is one of my pinholes, right? If I want to image something that's off to the left of my device, what I do is I take maybe the pixel on my sensor that's over here for each of these tiled pinhole cameras. And that ray basically goes out into the scene and is projected out in a straight line. So there's no perspective. It's all parallel rays. And the other interesting thing to think about is, you can see that the resolution actually does decrease. Imagine this blue is the size of my pixel. I can project that pixel out into the scene to see what I'm measuring, and you can see that as an object gets further away, my pixel, relative to the size of the object, is increasing. So just a brief tangent there. But I'll just show you a couple more ideas of how this might go. You might think about being able to navigate spaces by just moving your hand in free space. And because it's a-- think-- because it's optically sensitive, you can think about-- where is it? You can think about doing a demo like this, where I'm actually taking a real-world flashlight and projecting it into a virtual scene. So I'm taking real light and mapping the light field that I capture into a virtual world. This is an interesting mixed reality. PROFESSOR: Is it the end? MATT HIRSCH: Yeah, it seems a little-- you can see there's a hand here holding a flashlight. And that's actually shining light into this virtual world. So I guess maybe I'm going to skip over this a little bit. But you can think-- there are lots of ways to accomplish this, or accomplish parts of what I'm describing, that don't involve using the method that I'm describing. But if you look at the entire package-- putting it in a thin device, being able to capture both touch and gesture-- I think it becomes a pretty compelling idea. So I think Ramesh has described a little bit about light fields before. I'm just going to cover the basic ingredients that I've found very helpful to understand the theory behind this. And if I'm covering something that you've already seen, you can stop me. But in a light field-- let's just imagine the 2D case-- when you have a ray, the basic idea is you want to parameterize this ray. 
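Before going on to the light field parameterization, here is a minimal numeric sketch of the tiled-pinhole geometry just described: rays are parallel across pinholes (orthographic), yet one pixel's footprint in the scene still grows linearly with distance. All dimensions are illustrative assumptions, not the prototype's actual specs.

```python
import math

# Illustrative numbers only -- not the BiDi prototype's actual dimensions.
pinhole_pitch_mm = 5.0   # spacing between the tiled pinholes on the LCD
separation_mm = 2.5      # gap between the mask (LCD) and the sensor layer
pixel_mm = 0.25          # sensor pixel size behind each pinhole

# Each pinhole camera covers a cone of half-angle atan((pitch/2)/separation),
# and a single pixel subtends atan(pixel/separation) of that cone.
half_fov_deg = math.degrees(math.atan((pinhole_pitch_mm / 2) / separation_mm))
pixel_angle = math.atan(pixel_mm / separation_mm)
print(f"per-pinhole half field of view: {half_fov_deg:.0f} degrees")

for z_mm in (100, 300, 500):
    # Rays are parallel across pinholes (orthographic), but within one
    # pinhole a pixel's footprint still grows linearly with distance.
    footprint = z_mm * math.tan(pixel_angle)
    print(f"at {z_mm} mm: one pixel covers ~{footprint:.0f} mm of the scene")
```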
You want to describe a set of rays in a new space. So I have a ray that has some intersection with a sensor plane. And it intersects with an angle. So if I just plot the point where it intersects and the angle at which it intersects in this new space, this is a light field. And you can see, if I have a whole set of rays, that creates a sort of line in the light field space over there. And then if I have-- oh, well, this is actually important. If I-- oh, that's weird. I imported this from PowerPoint, so not all of these-- OK, well, it's important to note that one of these is what a real sensor measures. If you take a real sensor in the real world and just expose it to light without any lens or anything in front of it, it integrates rays from all directions. And so what you're actually measuring in the light field space is one of these lines. So this is what a sensor array might look like. Now, if I think about the frequency domain picture of this space, I'll have some light field here that I'm taking projections through, basically. And that light field has some spectrum in the frequency domain. And as I mentioned, these lines are projections through the light field. So there's something called the Fourier slice theorem, which basically says if I take a projection through a function in this domain, I'm actually taking a slice through my spectrum in the frequency domain. So I guess the important thing to grasp here is that if I am actually measuring this with my real-world sensor, in the frequency domain, I'm actually measuring only what's along this axis. So this is the only thing accessible to me with a real-world sensor. And of course, what I actually have over there is that whole spectrum. And so the question is, how can I access that data? So the next important thing to keep in mind when trying to understand this is the skew property of the light field, which basically says, if I'm going to plot a light field, I can either look at it from this perspective or from this perspective. So I can say, as my ray travels through free space, I'm going to just plot the light field that it creates over on the right side there. And you can see a kind of interesting effect where, as I add rays, a straight line from one position in space becomes a skewed line in another position. And then, as you may recall from physics or signal processing theory, if I perform a convolution between an array of delta functions and some other arbitrary function, I get a tiled version of my arbitrary function. So I'll tie this all together, I promise. Just try to keep these things in mind as we go. And then I guess-- I don't know if Ramesh has used the term spatial heterodyning, but I think it's a kind of cool historical note that heterodyning is a word that comes from old radio broadcasts. And when we say-- it was really a technique that multiplied a voice signal by a high-frequency signal in order to transmit it. It basically shifted that voice signal up into a radio spectrum that could be broadcast. And this is really what we're doing when we send a ray through a mask. We're actually multiplying that ray by some frequency pattern. Spatial frequency, in this case, instead of time, but similar principle. And so, to bring this all together now, imagine I create a mask that has a transform that looks like this. It's a series of delta functions. And because I'm multiplying it in the primal domain, in the frequency domain, I'm convolving it.
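As a minimal flatland check of that multiply-in-primal, convolve-in-frequency picture, the sketch below modulates a band-limited stand-in signal by a cosine carrier and shows the spectrum shifting up around the carrier, exactly the radio heterodyning analogy. The signal and carrier frequencies are arbitrary choices for illustration.

```python
import numpy as np

n = 1024
x = np.linspace(0, 1, n, endpoint=False)

# A band-limited "light field" stand-in: a few low-frequency components.
signal = np.sin(2 * np.pi * 5 * x) + 0.5 * np.cos(2 * np.pi * 9 * x)

# A cosine "mask" acting as the carrier (its transform is a pair of deltas).
carrier = np.cos(2 * np.pi * 120 * x)

spec_before = np.abs(np.fft.rfft(signal))
spec_after = np.abs(np.fft.rfft(signal * carrier))

print("peak bins before modulation:", np.argsort(spec_before)[-2:])
print("peak bins after modulation:", np.argsort(spec_after)[-4:])
# After modulation the energy sits near bins 120 +/- 5 and 120 +/- 9: the
# baseband spectrum has been heterodyned up around the carrier frequency.
```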
So I'm convolving my mask spectrum with the light field spectrum that I want to measure. And so what you get is a tiled version of that light field spectrum. And remember that skew property. I've offset my mask from my sensor a little bit. So the mask-- well, the light field that I've created is actually going to be skewed by the time it reaches my sensor. And so now you see the really cool, insightful part of all of this is that if you look at what's on the fx axis down there, you can see, along this axis, I get different portions of the spectrum. So I've now created a way to measure pieces of that spectrum on my real-world sensor. So I can just rearrange those things and reconstruct a good portion of my light field back. So the important thing to keep in mind, or one important thing to keep in mind, which I guess goes towards some of the discussion you guys were having before I began speaking, is that you really have to make sure these light field spectral copies are band limited so that they won't interfere with each other. And I guess that speaks to Ramesh's point from earlier. So in terms of building this actual prototype, it's all fine to do this in theory, but we want to actually do it in practice and see if what we're doing on paper really makes sense. So what we really want is, remember, an LCD separated from a large-area sensor by a small margin. But that's a really difficult thing to get. These things will be out there in the consumer market in the near future, but they're not right now. So what we actually ended up doing was taking a couple of cameras and simply imaging a diffuser. Much like the movie screen shows you a slice of the light passing through a certain space, a diffuser will just show us optically what we would like to measure electronically in this plane. So here are the actual cameras that we used, and here's one prototype. I have, actually, a slightly newer one now, but the LCD was sitting in this screen, and this is a diffuser. And then I'll just quickly run through the software pipeline that we wrote. I won't spend too much time dwelling on this, but the basic idea is we want to time multiplex between displaying an image for the user to see and illuminating that image from behind, and then switching to the mask mode where we don't want to illuminate it because we don't want to interfere with our measurement. And we want to actually capture the data that is being modulated by that mask. And so I'll just play a couple of videos of the data from different portions of that pipeline. So here, you see the actual [INAUDIBLE] code that we display on the screen. PROFESSOR: [INAUDIBLE] this one doesn't look like cosine masks. It's a real binary mask. MATT HIRSCH: Yeah, this is the binary mask that Ramesh was mentioning. And it actually turns out-- if you recall from my theory description, the only thing I mentioned about the mask was that it has a transform that's a series of deltas. So it turns out, if you tile any code, you can get a transform that ends up being a series of delta functions. Those functions will have different weights, but it's kind of the Fourier series effect, right? And-- PROFESSOR: If you tile anything, you'll get [? deltas. ?] MATT HIRSCH: Yeah. And I guess the reason that this mask was chosen was that it actually is sort of optimal in terms of light efficiency. So this mask allows 50% of the light to pass through, which is pretty remarkable considering we're reconstructing an image without a lens.
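The demultiplexing step that recovers the light field from those tiled spectral copies can be sketched roughly as below. This is a hedged approximation of the idea, not the authors' actual pipeline: the tile counts and sizes are assumptions, and the fftshift bookkeeping a real implementation needs is simplified.

```python
import numpy as np

def decode_light_field(sensor_img, n_ang=(4, 4)):
    """Rearrange spectral tiles of a mask-modulated sensor image into a
    light field of shape (theta_y, theta_x, y, x)."""
    H, W = sensor_img.shape
    ty, tx = n_ang
    ny, nx = H // ty, W // tx  # spatial resolution per tile

    spec = np.fft.fftshift(np.fft.fft2(sensor_img))
    tiles = np.empty((ty, tx, ny, nx), dtype=complex)
    for i in range(ty):
        for j in range(tx):
            # Each spectral tile is one slice of the 4D light field spectrum.
            tiles[i, j] = spec[i * ny:(i + 1) * ny, j * nx:(j + 1) * nx]

    # Invert over all four axes to return to the primal (ray) domain.
    return np.fft.ifftn(np.fft.ifftshift(tiles)).real

# Stand-in sensor data; a real capture would come from behind the mask.
lf = decode_light_field(np.random.rand(400, 320))
print(lf.shape)  # (4, 4, 100, 80)
```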
Just for comparison, the pinhole allows something like 1% to 2%, depending on the size of the pinhole you used. And I think the cosine mask had 18%? PROFESSOR: 18%. And there was a question out there about difference with a pinhole and-- in this sense. That's between 1% and 50%. MATT HIRSCH: Yeah. So the data that our sensor captures, and if you were sitting behind the screen, this is what you would see, basically, if a hand is moving around here and touching the screen, hovering over it. And you can see the high-frequency noise, or it looks like noise, that the mask creates. And from that, we can decode this light field. PROFESSOR: [INAUDIBLE] were 20 pixels, you think? [INAUDIBLE] MATT HIRSCH: Yeah, so it's a 20 by 20 light field angularly, and then each little tile there is about 100 by 80 pixels. So you can see many views of that hand moving around, basically. And then from that light field, as I mentioned, for each frame, we get this stack of images, or we can refocus that light field at a number of depths. PROFESSOR: It's doing the refocusing, which is your first part of assignment, in real time from these 400 images, like 20 by 20? MATT HIRSCH: Yeah. PROFESSOR: Yeah. MATT HIRSCH: Yeah. And then once we have a refocused image, for each-- basically, we have a whole stack of images. We traverse that image and use a method called depth from focus where we basically look at the contrast in each of those images at each pixel. And from that, we get a depth map. And that's the basic ingredient into all of those interactions down there. PROFESSOR: So that's for extra credit, the depth from focus in your [? SIM. ?] MATT HIRSCH: So I had some videos in there, but they don't work in there. PROFESSOR: Any questions? MATT HIRSCH: So yeah, I think that's probably good [INAUDIBLE], right? PROFESSOR: Yeah, that's what I thought. Yeah. Yes? AUDIENCE: Are you doing this in real time? Because [INAUDIBLE] complication [INAUDIBLE]. MATT HIRSCH: Well, no, luckily, computers are very fast. And actually, there are a couple of even free Fourier transform libraries. One is called FFTW. And it optimizes itself to your hardware, and it can run very fast. AUDIENCE: Was this made by-- was it in MATLAB, or-- MATT HIRSCH: No, no, this was written in C. But it-- PROFESSOR: It's running in real time. You can interact with it in real time. MATT HIRSCH: Close to real time. AUDIENCE: Six frames in a second. [INAUDIBLE] MATT HIRSCH: So the demo runs at about 20 frames per second. PROFESSOR: It's not 60 frames per second. MATT HIRSCH: Yeah, we're hoping to improve that. AUDIENCE: How about latency from hand movement to actual movement? MATT HIRSCH: It's one frame or so. AUDIENCE: We're on a different system. MATT HIRSCH: Yeah, computers are very fast. The key is really to pick a Fourier transform size that can be broken down into small prime factors, because you can implement that very quickly. But as long as you do that, it'll run real fast. Kevin? AUDIENCE: What's the slowest part of [INAUDIBLE]? MATT HIRSCH: The slowest parts are the parts that I had to write. I mean, there are a lot of tools. Like, OpenCV is a great tool for working with graphics in real time. There are Fourier transform libraries. Things that are slow on a modern computer are memory accesses. And so I end up having big sets of data that I have to remap. For example, you measure 2D data from the camera, right? But then you have to work with 4D data, and you have to remap it in a way that your Fourier transform library can understand it.
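Here is a minimal sketch of the refocus-then-depth-from-focus idea just described, under assumed shift scalings and a simple Laplacian contrast measure; the real pipeline's details differ, and the input below is stand-in data rather than a real capture.

```python
import numpy as np

def refocus(lf, d):
    """Shift-and-add refocusing of a light field lf[ty, tx, y, x] at a
    depth parameter d (pixels of shift per unit of angular offset)."""
    ty, tx, H, W = lf.shape
    out = np.zeros((H, W))
    for i in range(ty):
        for j in range(tx):
            dy = int(round((i - ty // 2) * d))
            dx = int(round((j - tx // 2) * d))
            out += np.roll(lf[i, j], (dy, dx), axis=(0, 1))
    return out / (ty * tx)

def depth_from_focus(lf, depths):
    focal_stack = np.stack([refocus(lf, d) for d in depths])
    # Second-derivative magnitude as a simple per-pixel contrast measure.
    lap = np.abs(np.gradient(np.gradient(focal_stack, axis=1), axis=1)) \
        + np.abs(np.gradient(np.gradient(focal_stack, axis=2), axis=2))
    # Pick the depth of maximum contrast at each pixel.
    return np.array(depths)[np.argmax(lap, axis=0)]

lf = np.random.rand(4, 4, 100, 80)  # stand-in; the talk's was 20 by 20 views
depth_map = depth_from_focus(lf, depths=[-2, -1, 0, 1, 2])
print(depth_map.shape)  # (100, 80)
```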
And so that remapping is actually one of the longest steps. It's just simply reshuffling things. And I guess one of the most challenging practical problems here is literally just synchronizing everything. While computers are very fast, they're also not very reliable in terms of timing. So things can just happen whenever they happen. And especially with rendering things on a video card and trying to understand exactly when they're going to show up on a monitor, there are a lot of different and variable delays in that that are difficult to account for. So that's something I'm still working on. PROFESSOR: That's the kind of final project we want to see. AUDIENCE: [INAUDIBLE] question. Why is the diffuser a key part of this device? MATT HIRSCH: The diffuser is just our stand-in sensor. PROFESSOR: Yeah. I mean, this is the key part, right? The camera of the future will not look anything like cameras today. Your LCD screen, a 15-inch screen, is actually your camera in the future. It's just that right now we don't have it. AUDIENCE: So you're using this as your sensor and then actually imaging the thing and then working from the image that you get. PROFESSOR: It's a shortcut for now. AUDIENCE: OK, I see. Yeah. PROFESSOR: But in the future, the whole thing will be a camera. So if somebody wants to take this concept further, by the way, what will you do when the camera is 15 inches wide, but you only get an image when you touch it? It's like a document scanner. You take anything away from it, you just get a blurred thing. So if anyone wants to think about that further, I'll be very interested-- MATT HIRSCH: So they're selling these devices now that-- there's a laptop on sale in Japan that has a trackpad that's made from one of these optical LCDs. So this is a very near-term technology. In a couple of years, it'll be everywhere. So it's something cool to think about using. AUDIENCE: I have another question. What's the range of this device? Like, if I were to soften [INAUDIBLE]. MATT HIRSCH: Yeah, that's a good question. This is a very big parameter space. There are a lot of variables to change. So the prototype that we built, we optimized for about 50 centimeters in front of the screen. But you can think about basically changing that separation between the screen and the sensor and changing the pixel size of the screen and sensor. And all of those variables will allow you to control the range of depth that you can measure. PROFESSOR: And before we do it on a device like this and just do however and so on. That [INAUDIBLE] small form factor. MATT HIRSCH: Yeah, we just need to buy one of those laptops. PROFESSOR: Yeah. MATT HIRSCH: Yeah. PROFESSOR: Then the parameters will be very different, because then you don't expect to interact [INAUDIBLE]. So just to give some context for the class, of course, you did not do this whole thing in real time. You looked at only the capture, is that right? MATT HIRSCH: Yeah, that's right. And in fact, for the class, I started without an LCD because there are a lot of challenging hardware issues with getting an LCD to work like that. So what we started with was simply a printed mask. And there's a great resource right here in Cambridge called PageWorks, who can print very high-resolution masks. I think that's what you use for your [INAUDIBLE]. PROFESSOR: So he did a static version first, for the class, and did a proof of concept [INAUDIBLE] theory and the static prototype. And then, in the two months after that, towards the SIGGRAPH deadline, he did all these things.
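Going back to the point above about picking Fourier transform sizes that factor into small primes: a quick timing check makes the effect visible. numpy's FFT stands in here for the FFTW library mentioned in the talk, and exact numbers will vary by machine.

```python
import time
import numpy as np

def time_fft(n, reps=20):
    """Average wall-clock time of a length-n FFT over several repetitions."""
    x = np.random.rand(n)
    t0 = time.perf_counter()
    for _ in range(reps):
        np.fft.fft(x)
    return (time.perf_counter() - t0) / reps

# 4096 = 2^12 factors into small primes; 4099 is prime, forcing a slower path.
print(f"n=4096 (2^12):  {time_fft(4096) * 1e6:8.1f} us per transform")
print(f"n=4099 (prime): {time_fft(4099) * 1e6:8.1f} us per transform")
```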
Unfortunately, the paper was not accepted despite all the great work and all the results. That tells you how high the bar is. MATT HIRSCH: Luckily, we got to SIGGRAPH anyway. So [INAUDIBLE] a school in Louisiana. AUDIENCE: Then you've got to buy the laptop, right? MATT HIRSCH: That's right. PROFESSOR: But it was cool enough that when he presented it at SIGGRAPH in New Orleans, he won the second best paper award [INAUDIBLE]. MATT HIRSCH: It was a poster. PROFESSOR: [INAUDIBLE]. So now he's a [INAUDIBLE]. MATT HIRSCH: I don't know what that means, but-- PROFESSOR: Eventually, [INAUDIBLE]. Cool, any other questions for Matt? So this is an example of how you think about image formation and image processing in higher dimensions. It opens up a completely new space. Don't think of a camera as something that takes the 3D world and maps it to a 2D image and all you can do is fiddle around with pixels. There's a lot more going on if you start thinking about the whole package. So let's take a short break, and we'll come back and talk about other HCI applications of cameras in general, not just [INAUDIBLE] cameras.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_9_Cameras_We_Cannot_Picture_a_survey_of_the_computational_imaging_field.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: And he has done just a terrific, terrific amount of work. He was also [INAUDIBLE] program manager where he led the effort that we have studied in the class, the origami lens and many interesting projects. So today he's going to talk about something that hopefully gets us thinking. It's just at the right time. It's the middle of the semester, and we have learned about a lot of ideas. And all of us enjoy talking to Ravi about where the field is going and how we can classify and understand the field itself of computational imaging and computational cameras better. So looking forward to it. RAVI ATHALE: Thank you. Thanks, Ramesh. It's a pleasure to come here and talk about this thing that all of us love so much-- cameras. And I worked a lot on this type of camera that we cannot picture. So anyway, I sent the slides to Ramesh ahead of time to make sure. I didn't know how to calibrate where to go, but he said that these slides are just perfect, so I trust his judgment. And I wanted this to be not just a seminar or lecture, but a lot more interactive. So we can go through some of these very quickly if I see from your faces that, why is he telling us these things, but be that as it may. So history of camera-- I think this was meant more for a general audience, so I really focused a lot on history and how we are at a dramatic breakpoint, how that breakpoint is likely to impact the future, and things like that. So interesting historical milestones that you may or may not have covered in the class. Let's go with that. But this is one thing, of course, now, if you need to be told about this, you all are carrying cameras. If you live in London, you probably get photographed about 30 to 40 times during a typical workday as you step outside, and all these things. I just wanted to point out to-- I don't know how many of you will know what that image is. In order to know what that image is, you need to focus on the date, April 19, 1995. That date has a special significance. It was the Oklahoma City bombing, and this is the truck. This is the Ryder truck that carried the bomb. And this is a picture from a security camera in the bank across the street. And this image has been used by many, many people to justify the work on high dynamic range imaging. Everything inside is completely underexposed. Everything outside is completely overexposed. As a result, this image, by itself, is pretty much useless. So things come in two sizes-- too big and too small. Similarly, this is the tape. This is a very interesting thing, and I also had another image of, somewhere in Iraq, an IED going off and blowing up, and some person holding the cell phone. And so I'm sure you're aware that, over the last few years, any significant news [INAUDIBLE] that happened in most places around the world, the first reports and the first images to come out of that were from cell phone cameras. So everybody is a reporter. So anyway, one of the interesting things is this business about natural observation, gaps in the leaves projecting images of a partial solar eclipse. So our ancestors must have observed these things and said something is going on. So exactly what is going on, and how can we replicate it? How can we understand what's going on over there?
This is another very beautiful photograph. I'm sure you have seen a waterdrop acting as a lens, forming an image. So just before there was any technology, before there was even a formal modern scientific process, our ancestors, a couple of thousand years ago, since they didn't have the distractions of cable TV and cell phones and all that, they had a lot of time to really very carefully observe nature and observe it over a long period of time from many different angles. So an outcome of this was the observation of pinhole images going back several thousand years, and I believe-- at least I understand that this was the first comprehensive study on optics, Alhazen. And this is the drawing of a pinhole camera. I think, Ramesh, you have probably told them about the origin of the word-- RAMESH RASKAR: I have not. Go ahead. RAVI ATHALE: You have? RAMESH RASKAR: I have not. RAVI ATHALE: Oh, you have not? Camera, the literal meaning of the word, is room because the first camera was indeed a room with a hole there, and the image projected back there, so you can trace it or something like that. Now, what is interesting is, fast forward 1,000 years, and the world's largest pinhole camera was constructed in July 2006. The El Toro Marine Corps Air Station in Irvine was decommissioned. They wanted to preserve it, so they took an airplane hangar, closed the door, and drilled a hole the size of [INAUDIBLE] tennis ball or something. And on the backside, had a photographic film eight stories tall and 1/3 the length of a football field. And here it is. This is the image of the base upside down projected on the back wall. This is the photographic emulsion, and there were a lot of numbers as to how many gallons of developer solution they needed and basically throw it over a barrel and take a broom and spread it over and develop it. And I don't know where that image is now. I think it is in California, some art museum somewhere in Los Angeles. The point of that is, over 1,000 years, and exactly the same operating principle. It's just a room, except a much, much larger room, and taking a picture of a much, much larger area. STUDENT: Right, you do make a good point there. They also [INAUDIBLE] how much time they had to spend to make the [INAUDIBLE]. RAMESH RASKAR: Yes, indeed. RAVI ATHALE: The first optical instrument is spectacles, and I think that really, the point-- this is an interesting point. Spectacles are glasses. It really is a mechanical packaging solution. So people realized that having these specifically shaped lenses improves your vision, but how do you do that? How do you make it convenient for you to wear? So people realized you have two ears, and they're shaped in this manner. And there is the bridge of the nose, so three-point support like this. Of course, then there were all the elaborations of the [INAUDIBLE] glasses sitting or the monocle that you can screw into your eye. But this is basically-- 1270 is the-- taking these optical phenomena and constructing a useful instrument that can be portable, and that is used. Beyond that, almost a 300 year gap between a single lens optical instrument and a compound optical instrument-- microscope and then telescope and everything. You might have known this is the 400th anniversary of the Galilean telescope revolution, and this is the diagram [INAUDIBLE] multiple lenses. So going forward, the next history is photography.
This is the first photograph ever by Niepce, 1827, and it's basically the [INAUDIBLE] sensitive media in various forms, and it just evolved. One of the big milestones was this sort of glass. It was coated on paper and rolled and other things. So there's another tidbit-- Eastman Kodak. What does Kodak mean? Anybody know? Eastman wanted a word that was a very crisp word, and that was very positive, forward-leaning kind of a word. And so apparently, Kodak is the sound that the shutter made when you push the button. [STUDENTS LAUGHING] So Eastman Kodak was not named after George Eastman and his colleague Kodak. But the slogan was, you push the button, and we do the rest. And I think the $1 Brownie camera-- if you put it in historical context, that was around the same time as the Model-T, maybe a few years earlier, 10, 15 years earlier. But it's taking essentially really high technology and using some innovation to bring it down to the masses, so everybody can use it. So you can say that the Model-T and the Brownie camera were the first age of modern consumer technological culture. Moving on, three-color emulsion, Kodachrome, stacked with doping and everything, and the Polaroid instant camera. Of course, in 1945, the definition of instant was a little bit different. 30 seconds to two minutes was considered to be instant. And this, I was intrigued. This, of course, I found it on Wikipedia. But Maxwell, whom we don't normally associate with doing experimental work, but this was the first color photograph. RAMESH RASKAR: Do you know how it was captured? RAVI ATHALE: Three separate exposures, and then stack them. Three separate films through three separate filters. RAMESH RASKAR: But how did he put them together? RAVI ATHALE: I guess aligned it by hand. RAMESH RASKAR: After developing it? RAVI ATHALE: Yeah. And of course, Abraham Lincoln was the first president to be photographed, and lots of beautiful photographs of the Civil War. RAMESH RASKAR: The photograph still looks crisp. RAVI ATHALE: Yeah. I don't know. I don't know if it was subsequently touched up. STUDENT: They used the light field camera. [STUDENTS LAUGHING] RAMESH RASKAR: Actually, the light field camera was invented back in 1908. RAVI ATHALE: Yeah. STUDENT: It looks like it's a long exposure maybe because the trees in the background are kind of-- RAVI ATHALE: Yes. [INTERPOSING VOICES] RAVI ATHALE: I think-- STUDENT: Using a pinhole. RAVI ATHALE: All exposures were long. But-- Another historic milestone that captured [INAUDIBLE] display that [INAUDIBLE] so mechanical scanning and all kinds of other things, CCD. Just, I gave this seminar about a month and a half ago, and two weeks later, the CCD Nobel Prize was announced. And I don't know-- many of you are probably aware of the controversy whether these people really deserve the Nobel Prize or not because they were working on shift register memory. And it was somebody else who said, hey, it's light sensitive. Anyway, but most of these things, there is always a controversy-- including MRI and all kinds of other things. But this is kind of interesting. Here. Look at this, the first Fairchild camera. So I think 88 pounds. 0.01 megapixels, 23 seconds. And actually, if you look at the date, December '75. That's not that far back. RAMESH RASKAR: Yeah. RAVI ATHALE: That's not that far back at all. Then the subsequent digital camera [INAUDIBLE] Sony [INAUDIBLE], quarter megapixel DCS 100, 1 megapixel, $20,000. And '97 was the first transmission of a picture from a cell phone and cell phone camera.
And this last one is very interesting. Very recently, Kodak stopped production of Kodachrome film, so if anybody has film cameras, well, you better start stocking up on old film. And 2006, the 10 megapixel cell phone. I think with this 10 megapixel cell phone camera and all that, instead of a cell phone with a camera, this is a camera with a cell phone. So the bulk of it is probably the optics and everything else. Optics, mostly, that goes with it. You have heard of Moore's law? There is something called Hendy's law. Empirical observation. How many pixels per dollar? And that's, of course, a log scale, and this is a linear scale in years. From this DCS 460, 100 pixels per dollar, to all the way here. And we will probably-- I don't know, this is still an old graph. A megapixel camera, I think $5, and I think the goal of many of these companies is a megapixel camera for a buck. Everything-- the lens, the focal plane array, the PC board, and everything ready to plug in. RAMESH RASKAR: With the wafer level cameras. RAVI ATHALE: With the wafer level cameras. RAMESH RASKAR: It's already about $1. RAVI ATHALE: Yeah. RAMESH RASKAR: Less than $1. RAVI ATHALE: So megapixel per dollar, so it's already out there somewhere. And this is the picture gallery, so the [INAUDIBLE] 100, Samsung. This is OmniVision's focal plane array, 2.1 by 2.3 millimeters. And this is a very interesting thing. [INTERPOSING VOICES] For endoscopy, video endoscopy, 1.8 millimeter diameter includes lens, the focal plane, the cable for sending power, sending the signals back. Not great resolution, 326 by 382, but if something needs to be stuck inside your body, the smaller the better, and 1.8 millimeter is definitely, definitely nice. This chip is the one that's used in the pill camera that you swallow to image your insides. I thought of something-- that imaging case study, studying the inner space versus studying the outer space. So the camera pill and the Hubble Space Telescope. RAMESH RASKAR: Both are shaped like a pill. RAVI ATHALE: (LAUGHING) Both are shaped like a pill. RAMESH RASKAR: Just a little. RAVI ATHALE: Both are shaped like a pill. This was actually in the MONTAGE program that I started at DARPA. That was one of the motivations that, over the past 30, 40 years, the displays have become thinner, from bulky CRTs to thin, flat panel displays. But the cameras, the aspect ratio of the cameras, have not changed in the past however many thousand years. They are always tube-like, and so the goal was, can you squash the cameras? Can you make a camera that is five millimeters thick, but the effective focal length is 50 millimeters, and the light gathering power also [INAUDIBLE]? And there are various tricks that you can play. The origami lens that you probably studied is one of the examples. Multi-aperture is another example. But one of the things about-- and I'm not sure whether this presentation has the technical details or specifications. But it's very interesting, if you look at the specifications of, say, Hubble, a pill camera, or take one of the modern lithography lenses, I don't think I have a picture of that, but the modern lithography lenses are a marvel of technology. That these things-- getting-- you remember how much-- they weigh several thousand pounds. RAMESH RASKAR: [INAUDIBLE] RAVI ATHALE: Obscenely heavy, a huge number of elements in them, and they cost multiple millions of dollars.
And the reason is that it's absolutely diffraction limited performance over a fairly wide field of view with zero distortion and all the aberrations, everything corrected. Now, multi-scale imaging-- broad, wide, and deep. Now everybody knows gigapixel imaging-- since the inauguration, it has really taken off. There are so many images like that, having the wide panorama and finding the details over here somewhere. The eagle. And so can we get here from where we are right now, and in a realistic size, weight, cost thing? Can we get a gigapixel camera? Can we get a 10 or 100 gigapixel camera? Very wide field of view. Very high resolution. These are some of the challenges that will be addressed, and not by doing brute force. So it's an interesting prospect, and it's an interesting direction. RAMESH RASKAR: But in terms of the cameras that individuals will carry? Or this will be installed, and then you have access to it, so that you can [INAUDIBLE] of interest, you can capture what you want? RAVI ATHALE: This is kind of an interesting thing. You can imagine an individual, like special operations soldiers, carrying binoculars with the standard binocular form factor that will be equivalent to a gigapixel. But there are no gigapixel displays. Human vision doesn't have the space-bandwidth product for a gigapixel. So within that gigapixel, you should be able to [INAUDIBLE] and zoom in and look at the details. There is an ongoing DARPA program-- I don't know what phase it is at-- some cognitive technology for warfighters, blah, blah. It was colloquially called Luke's Binoculars. So in Star Wars, Luke has these binoculars, and you're looking at it. And it could put a cursor-- that's an alien there, or that's that. It was primarily focused on the processing based on neuromorphic algorithms or something like that. So the idea will be, if you can design the camera or an imaging sensor that can capture a gigapixel worth of information, and if you have this back end, this neuromorphically inspired processing, that can cue the soldiers to look here, there, and the soldier can appropriately focus there. In terms of what kind of things that will be really useful for a warfighter right now, that could be one of the things. And then of course, you have platforms at various levels, from micro UAVs with about a nine-inch wingspan, to Predators, to all the [INAUDIBLE] space-based imaging. And the field of view, and the pixel resolution requirements, and everything-- it spans all over the space. So I'm switching gears a little bit, and the previous part was, you can say, an anecdotal or event-based view of the history. Now, we'll go into a little bit of a technical angle of looking at history. So I think, first, one of the questions-- and this also gets into where we should be headed anyway. So imaging sensors, or you can say sensors in general, why do we do this? What is it, and why do we do this? So questions we ask, and this is something-- these are very generic questions. And we constantly are asking these questions, who, what, where, when. That's the sensing front end part, and how and why is the analysis and exploitation part. So when you reduce it down to its essence, all we care about is answering these questions. When it comes to using imaging as spatial sensors, not imaging to take photographs of your vacation, where you can view this and recapture the experience.
That's a different world, and in some sense, computational photography and computational imaging, that's where-- when I say computational imaging, you are looking at imaging, you are considering that as a spatial sensor. A spatial sensor means a system that is designed to answer these questions. Identifying various constituent elements in the scene, specifying their location and their time. Now, within this particular realm, you have two different sensory modalities. One, you can say proximate sensor, and the other is standoff sensor. So proximate sensor, I want to know what is the texture of a surface. I put my hand on the surface and move around. I say, is it rough or smooth? Is it uniform? What is the granularity? In other words, your sensors are spatially co-located with the object that you are trying to measure. And the other thing is your sensors are separated by some distance. Now, that standoff is a malleable soft concept. Say, for example, take the magnetic hard drive disk head. So for a magnetic storage, the heads will be floating, I don't know, a few microns or a few hundred nanometers above the disk surface. Whereas when you go to the optical disks, the optical head is floating two millimeters away from the surface-- that's a huge distance. So that's a standoff for writing and reading, whereas a standoff, if you're talking about a Hubble Space Telescope that's looking at the farthest galaxy in the universe, that standoff distance is however many billion light years away. But the point is, where you're making the measurements is not co-located with what it is that you are trying to measure. If you are doing standoff sensing, it has to involve wave propagation. No ifs, ands, or buts. And that's almost contained in the definition of wave motion. Wave is defined as a phenomenon that carries energy and information over distance without material transport. So that's the difference between a soundwave and wind. Wind will carry leaves with it. Soundwave will not carry leaves with it. Both involve movement of air molecules, but soundwave is capable of carrying energy and information over a distance without a bulk transport. And then of course, you have the electromagnetic waves, then you have seismic waves, all kinds of waves. But nonetheless, wave motion-- it is at the heart of any kind of a standoff sensing activity. So when you have wave, and you have standoff distance, you have wave propagation that involves diffraction. And that diffraction essentially scrambles the spatial organization of signals. So in other words, this particular scene, there is ambient light that is reflected off your face as individual spots on your face. And so right there, if I put a photodetector and move it around your face, it can measure the reflectance of the surface. If I move that photodetector over here, and if I put a photographic film or a CCD without any optical element, I'm just going to get basically a uniform exposure. All that spatial organization is lost. So we effectively, when we are doing the standoff sensing, we have two parts. One, you can call it the source encoding. That is the phenomenology by which the objects in the scene impress information about themselves on the electromagnetic radiation, whether it's by emission or reflection or absorption or scattering. And the full domain of the electromagnetic wave-- the amplitude and wavelength and polarization and everything-- is used to encode that information. And the other is channel encoding or channel distortion.
That is the result of the propagation of that wave between the objects and your entrance pupil, or where you're entering. So these are the two things. So when we talk about processing, we are talking about removing the channel distortion or undoing the effects of diffraction. So what are the components of an imaging sensor? And I'm sure you have done this in this class. So there is the front end element that operates on the electromagnetic wave or the optical wavefront directly. Then there is the transduction that takes that electromagnetic energy and transduces it into some other form. In photographic film, it was chemical. In CCDs and CMOS, it's the electrical energy. And then there is the storage, display, processing, and exploitation. So these are the three stages, and I'm using the [INAUDIBLE]. This color is not that strong, but blue means biology and pink means technology. So in the pre-prehistoric period, all these elements were biological. The front end element was the optics of your eye. The transduction was your retina, and the subsequent processing going into [INAUDIBLE] went all the way up to the prefrontal cortex. But some recognition or some processing exploitation took place. It was probably before language, so we don't know in what terms pre-prehistoric primates thought. So then we come forward to prehistoric and historic period down to cave paintings. So there, the storage and display "technology," in quotes, was introduced. Everything else is still biological, but capturing and storing that experience-- and beyond this prehistoric and historic period, you can imagine the technology getting more sophisticated, but the principle did not change. And it stayed that way through the prehistoric and historic period. Now, as we go forward to pre-industrial period, and as I said, the optical elements were introduced. So the optics of the eye, or the front end element of the entire sensing chain, was augmented by technology. And what were the consequences? You could see farther, and you could see smaller. Industrial period was basically the invention of photography. So in addition to the eye, the transduction element and the storage and display element was now technological. What are the consequences? We could expand into the invisible spectrum. Now, one has to remember, without photographic film, x-rays would never have been discovered, because [INAUDIBLE] figured out that there was something going on because he had this film that was [INAUDIBLE], and still-- something happened. It exposed. So invisible spectrum, then the second thing is time sequence recording, allowing the analysis of very fast or very slow events. Time lapse photography and other things, so detailed study of motion that went well beyond the human time scale. However, everything was still non-real time. You still required chemical processing, and processing and exploitation was still by humans. So as we move forward to the modern era, 20th Century, it really is a revolution in imaging sensors, where the processing and exploitation is still primarily by humans, but that slowly is moving into a completely automated image exploitation system. And I just thought this is an interesting example. I don't think the Pee Wee soccer league has a goal line camera yet or something to decide whether somebody was offside or whatever, but it's really a matter of time. And why is it a dramatic break from the past?
Real time acquisition, no chemical processing, real time processing, and [INAUDIBLE] scalable manufacturing, at least as far as sensors are concerned. And now, with the wafer level camera and new designs for front end optical elements, that scalable manufacturing is also moving towards optics. And let me see, this-- I don't know. Did you show this [INAUDIBLE] in the class? RAMESH RASKAR: No, I haven't. RAVI ATHALE: As I was working on some of this presentation, I found this very interesting article from Nathan Myhrvold-- people know about Nathan Myhrvold, and his-- I think he was employee number 15 or something at Microsoft. And from what I understand, his primary contribution at that time was the idea of having a standalone company that exists solely for the purpose of developing systems software and making money on that. So his contribution was the business model for Microsoft, and he's currently either the most famous or most reviled person in Silicon Valley for having started a company, Intellectual Ventures. And again, he's trying to do a business model, that a company can exist solely for the purpose of inventing things, never commercializing, never productizing, but essentially filing the patents and sitting on them, and either licensing it or suing people who they think violated the patent. It's a very interesting business model. So obviously, he knows something about these things. He made this comment that cameras will also change form. Today they are film cameras without the film, basically like the early automobiles were horseless carriages. So his point is that cameras of the future will surprise us just as much. And as I was making this comment, somebody said, oh what the hell is he talking about? The automobiles are still the same as in 1910. 100 years later, nothing has changed. It still is internal combustion engine and transmission and wheels. Well, yes and no. So now, we thought that we'll take this analogy a little bit forward and say, so horse drawn carriage, horseless carriage, to the family station wagon. Up to here, you can say really, it's more sophisticated, but nothing has changed. But then you say, what else has happened? And automotives, tractors, [INAUDIBLE], a tank, to that [INAUDIBLE] that's allowed to split the lid. And this, of course, you recognize as Stanley, the autonomous vehicle. So if we say that film cameras-- filmless cameras-- I really love this picture because both of them are cameras. Identical. This-- these have a film backplane. That's a CCD backplane. So where is it going in the future? What are the future directions? And at least our projection is where it's going in the future is specialization and [INAUDIBLE], that the same concept, of course, there is some energy source, and there is some mechanism to convert that energy into mechanical motion. So in that sense nothing has changed. But if you look at this and this and this and this, each one is designed for a different purpose. Each one is specialized for that particular purpose. So similarly, cameras-- right now, a camera is a camera that takes pictures, whether it's the Hubble, or whether it is your cell phone camera or anything, or the pill camera. The structure and function, so far, hasn't changed at all. And that is where we feel a dramatic break coming in the future because of the flexibility of the technology, because of very specialized applications, very specialized constraints for things.
So why shouldn't there be-- you can say, just like there are ASICs, Application Specific Integrated Circuits, why shouldn't there be Application Specific Imaging Systems, ASIS? So [INAUDIBLE] autonomy-- an imaging sensor that decides, on its own, without any manual intervention, where to focus, how to interrogate the scene? And this is sort of reworking the biological inspiration. Cameras today are really replacing the human eye, trying to form a similar kind of representation. Whereas if you go to the biological world, there's a vast array of different types of imaging sensors. It's probably not even proper to call them imaging sensors. They are more like spatial sensors. The most primitive form, an earthworm with a light sensitive skin that simply is able to distinguish there is more light on this side versus that side and can steer itself accordingly. And then a variety of techniques for the insects and other creatures, and these, their spectral response, the processing, the functionality just exquisitely evolved for the specific evolutionary needs of the animal for their survival. And I don't know if I really-- I think I would like to sort of-- the optics is now getting there. [INAUDIBLE] plastic lenses, diamond [INAUDIBLE], microfabrication, wafer scale assembly. So the grind and polish technique for making optical elements really is, now, relegated to very high quality specialized optics. And processing embedded in the cameras, that's the first step right now-- red eye removal, face detection, smile detection. And this is, of course, illumination. Right now, we are still using this dumb flash that just goes off. So spatial temporal coding of illumination in order to enhance your ability to extract information from the scene. Different spectral regimes for exploiting that information. And this is some picture gallery of new optics. That's the Kodak microlens array, folded optics, diamond turned optics, and free-form multi-scale conformal optics. RAMESH RASKAR: What do you use it for, the last one? RAVI ATHALE: I'm sorry? RAMESH RASKAR: What is the use of the last one? RAVI ATHALE: I don't know if this is any use. [STUDENTS LAUGHING] STUDENT: It's cool. STUDENT: It is cool. RAVI ATHALE: Yeah, this is just a technology that's there. You saw, in Charlotte, they had this machine there. They can do this. RAMESH RASKAR: They can etch away optics using it. RAVI ATHALE: I think it's a six degrees of freedom machine for turning, and then you can do that on a conformal surface. And these, I think-- RAMESH RASKAR: Just going back to your previous comment, so everything is moving in a direction where the things are more programmable, more specialized, but the basic laws of physics are not going to change. You're still going to convert photons to electrons and so on. And so are there some-- if you want to build an aircraft, you're going to use Bernoulli's principle. If you're going to build a vehicle, it's Newton's third law. It's just going to use those laws of physics. RAVI ATHALE: Yeah. RAMESH RASKAR: Is there something fundamental about imaging that we should be challenging? Or should we continue with the same laws of physics? RAVI ATHALE: And what laws of physics? I think it may not be like challenging laws of physics, but what are the laws of physics that we can bend, or that have various qualifiers? So one of the things here, resolution is limited by the wavelength of light.
Yes, as long as you are in the far field regime, as long as you are in the linear regime. You violate one of these, and you can do whatever you want. If you use a proximity near-field scanning optical microscope, you can get as small as you want. But if you go nonlinear, with these fluorescence microscopes, you can get 50 nanometer resolution with visible light. So those are some ways in which you can really carefully look at what we consider to be a fundamental limit imposed by physics and say, what is the fine print, and can we violate that? So that would be very interesting to look at. RAMESH RASKAR: Yeah, but I mean, can we start using our own bodies as optics? Or can we start using biological principles to solve the five W and one H problems? I mean, in the film world, we actually used the chemistry to solve a lot of the problems. Now, somehow we have forgotten chemistry, and we are more back to the physics now. RAVI ATHALE: Yeah, I think-- I don't know. One of the things I wondered, can we have a three dimensional detector, a volume detector? And if we can store it, what will it record? How will we read it out? How will we process it? Some people have talked about directly recording the coherence function instead of just measuring the intensity. And you can imagine that now-- of course, you can wave the wand and say nano, and everything goes away or everything is different. But to some extent, that may be true, that if you are able to control the material at deep, deep, subwavelength level, you can do some pretty ridiculous things. So that's the direction that we haven't really explored. And another interesting thing is that, if you have the ability to manipulate matter and control it at deep subwavelength level, then it's really incumbent on you to abandon your other cherished notions about, oh, a spherical lens forming a well focused image on a focal point. Why are you sticking to that part of it? So if you really fully want to exploit the ability to manipulate the electromagnetic wavefront in very novel ways using very novel materials, you should also start thinking about what are we going to measure, and how are we going to measure it? So I don't know if I answered your question. RAMESH RASKAR: Yeah, I mean, it's an open question. [INTERPOSING VOICES] RAMESH RASKAR: Yeah, I don't think there's a clear direction. But I mean, the chart that you showed earlier of different stages where biological-- I mean, not biological, but human involved, and then we got more and more technological. Now, everything that was blue has become pink. RAVI ATHALE: Yeah. RAMESH RASKAR: So what's next? RAVI ATHALE: I think maybe just what kind of technologies? I mean, you can think of some of the things like, can we think of self-assembled imaging sensors that are not lithographically driven or that are not mechanically assembled? Yeah. I don't know what else. RAMESH RASKAR: It's like the discussion on the singularity. It's almost easier to say it's coming tomorrow, but when you go back, it seems like a lot of stuff has to happen before we reach that point of-- [INTERPOSING VOICES] RAVI ATHALE: And many of the turning points in technology are turning points only in retrospect. RAMESH RASKAR: Yes. RAVI ATHALE: So you say, ha, that was the day the world changed. You didn't realize it at the time, but-- I mean, the 40 year anniversary of the internet, and there was a story about what was the first message sent? First message was "log in." And after L-O-G, the computers crashed, and they had to reboot.
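A back-of-envelope check on that "fine print": in the far-field, linear regime, the smallest resolvable feature is roughly lambda divided by twice the numerical aperture, so the 50 nanometer figure quoted above must come from bending the linearity assumption rather than from better conventional optics. The wavelength and NA values below are typical textbook assumptions.

```python
# Classical far-field resolution limit: roughly lambda / (2 * NA).
wavelength_nm = 550            # green light, a typical assumption
for na in (0.5, 0.95, 1.4):    # dry objectives up to oil immersion
    print(f"NA={na}: classical limit ~{wavelength_nm / (2 * na):.0f} nm")
# Even at NA=1.4 the classical limit is ~196 nm, about 4x the 50 nm
# achieved by the nonlinear fluorescence techniques mentioned above.
```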
And so they asked the person from UCLA, so what did you do after you sent the message? Oh, I went, had a burger, and went to sleep, because they didn't know that it revolutionized anything. It's only in retrospect. And these are some interesting examples just to say application specific imaging, and this is one example from a surgeon at the Naval Medical Center. They're using the three CCD camera on a laparoscopic tower, and basically, his point is that, when you are doing surgery, it's really important to know which is the artery and which is the vein. You treat them very differently. And you start off, say-- I mean, hey, color coding wires. Unfortunately, in the body, it's not that straightforward. So taking these three colors and then doing the processing, and using many of the things that you saw here-- there is a very distinct spectrum for hemoglobin and oxygenated hemoglobin. And so by doing the processing, you create that map, and you project it back. You show it. Veins are colored blue, and the arteries are colored red. Now, here in this particular case, the surgeon looks at the monitor and says, ah, this is the one that I should be careful about, or whatever. But then some of the things-- I don't know. How many of you have seen this thing called Vein Viewer? Have you seen the product? I don't know if the wireless connection is good enough. We'll see. It might be working. Yeah. [INTERPOSING VOICES] RAVI ATHALE: You have seen that? And if you look at the technology, it is just absolutely simple technically. It's 780 nanometer light illumination that penetrates the skin and is absorbed by your blood. And so you can see the veins deep inside, and the beauty of this is a camera is up there, and so is a DLP projector. And the processor basically scales and aligns the images so that the result is projected back exactly aligned like that. But the use is phenomenal. RAMESH RASKAR: Yeah, it's one of the best forms of augmented reality. RAVI ATHALE: Exactly. RAMESH RASKAR: So I always show it in my augmented reality presentations. RAVI ATHALE: Yeah, and I think the rest of the thing may be-- I was talking to Ramesh. This is Doug Hart's company, Brontes Technology, for the 3D. I think you saw-- RAMESH RASKAR: We saw it-- RAVI ATHALE: You saw that, and [INAUDIBLE] chair side oral scanners. I think there was a video. Did you see that video of the doctor manipulating in the-- RAMESH RASKAR: Did it show the video? I don't remember. [INAUDIBLE]? I forget. RAVI ATHALE: I don't know. It's not that important. You know what it is-- RAMESH RASKAR: Yeah, it's amazing how simple it is. RAVI ATHALE: Yeah, and I think this is one of the things that maybe-- and the refocus imaging, of course. You don't need to, right now, know anything more about it. But in some ways, when you talk about application specific cameras, the key is understanding the nature of the problem and finding the simplest solution to that specific problem. And one of the examples, you saw that Vein Viewer; all of you know the pulse oximeter that is clipped to your finger. It's relatively new-- I think maybe 20, 25 year old technology. Again, if you look at the technology, it's absolutely dirt simple technology. Two different LEDs looking at the differential absorption through your finger to measure the oxygenation level of the blood. And before, they had to draw the blood and send it for analysis, and by the time it came back, maybe bad things happened.
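As an aside for readers of this transcript: the two-LED differential-absorption idea just described can be sketched in a few lines of Python. This is a minimal illustration, not a medical algorithm; the function name, the AC/DC estimation, and especially the calibration constants on the last line are placeholders of mine, not values from the lecture or from any real device.

import numpy as np

def spo2_ratio_of_ratios(red, infrared):
    # red, infrared: photodetector traces recorded through the finger
    # while the red and infrared LEDs are lit (1D float arrays).
    def ac_dc(signal):
        dc = np.mean(signal)          # steady absorption: tissue, venous blood
        ac = np.std(signal)           # pulsatile part: arterial blood
        return ac, dc
    ac_r, dc_r = ac_dc(np.asarray(red, dtype=float))
    ac_ir, dc_ir = ac_dc(np.asarray(infrared, dtype=float))
    r = (ac_r / dc_r) / (ac_ir / dc_ir)   # the "ratio of ratios"
    # Real devices map r to SpO2 through an empirically calibrated curve;
    # the linear constants below are illustrative placeholders only.
    return 110.0 - 25.0 * r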
But this one is now-- you see it in all the TV shows; anywhere, the first thing they do is clip this on your finger. And I don't know how much it costs-- RAMESH RASKAR: It still costs $1,000. I don't know why. RAVI ATHALE: Because somebody's making a lot of money. That's why. I think-- STUDENT: They have a patent. RAVI ATHALE: Yeah. But that's some of the research projects that I was involved in monitoring. To talk about, can you do a spatial map of oxygenation levels using these multi spectral cameras and then processing? And the doctors got really excited for applications like burn assessment, whether it's a first or second or third degree burn. The treatment differs dramatically, and instead of relying on the doctor's visual assessment of what degree burn it is, you can take a camera that can do that blood oxygenation map. Or you can map the retina, and for a lot of the diabetic retinopathy and other diseases, the precursors are in what the oxygenation level of the blood is [INAUDIBLE]. So for these kinds of things, just like the distinction between arteries and veins or the Vein Viewer and all that-- you don't need to invent new nanotechnologies or something. You need to understand in depth what the problem is and what the human factors are. Again, this thing, Vein Viewer-- the cool thing is not just that IR LED. You can imagine everything else the same except that image being shown on a TV, and it would be nowhere near as dramatic if the doctor had to manipulate the needle here while looking at the monitor. It's a human factor, right there. So that's really where-- so this is the last slide. So just a little bit of science fantasy, not just fiction. A personal imaging assistant-- something that, just like you carry your pen every day, or your wallet or credit card or your cell phone, the personal imaging assistant-- what all would it do? Health care. Checking for sunburns, or if you get scraped, hey, is it getting infected, this wound? You have an attachment? You check the ear infection of your children or whatever. Look at your tongue. Are you getting an upset stomach? Wardrobe matching, of course-- color and styles, especially. What kind of makeup will go with this, and other things. Hygiene, cleanliness of surroundings, presence of bacteria, water/food safety, quality of what-- The last one, it's really getting a little bit science fictiony, but maybe not, given that SixthSense thing. So you're wearing these glasses, or this personal [INAUDIBLE] assistant with a headphone, and somebody walks-- Hey, Ramesh, how are you? Somebody, this person, [INAUDIBLE] It's like the politicians who have these assistants at their elbow who whisper in their ears who is the person walking in. And discerning mood-- that's again very science fictiony. Can the image tell you that? This guy is very skeptical. He's not buying anything [INAUDIBLE] of course, taking pictures, and how it basically exploits the electromagnetic domain to the hilt. Combine the processing right there and have an adaptive-- one analogy that people make: right now, it's a sensor with a total one-directional flow of information from the sensor to the processor onward. How about going back? I heard somewhere about human visual pathways, from the retina to the lateral geniculate nucleus to V1 and the inferior temporal cortex and all that. Everybody analyzes that feedforward and what kind of features are extracted. And I heard that from V1 to [INAUDIBLE], there is a huge amount of feedback connection. What's the role of that?
I don't think the neuroscientists know yet. But that kind of a thing-- an imaging sensor that analyzes the scene and says, now, next, I should look in this direction, in this spectral domain, at this place or something. And in a few of these adaptive steps, it can learn a hell of a lot about what is out there. So last thing, unobtrusive or most covert form factor, and it's literally part of getting dressed. Just like you carry your wallet, your cell phone, carry your imaging assistant that can do all of these things for you. RAMESH RASKAR: But what form factor? Is it in the clothing or is it-- RAVI ATHALE: That's interesting-- in your eyeglasses, an eyewear kind of product. Or, this morning at [INAUDIBLE], I was looking at some of these glossy magazines for the Army, and I noticed that all the soldiers in Iraq and Afghanistan, their helmets had something there. And I asked, what is this? They said, of course, it's a video camera, so it's part of standard military equipment. So I don't know how unobtrusive you can make it. Is it in the clothing? Is it something that you clip on your shirt, or is it here? Hey, you are in the Media Lab. You figure it out. So well, that's it. Thank you. RAMESH RASKAR: Any questions for Ravi? And we can make your slides available? RAVI ATHALE: Yeah. RAMESH RASKAR: OK. STUDENT: Can I ask a question? RAMESH RASKAR: Yes. STUDENT: So there is this problem in biometrics where you want to capture-- where you want to recognize a person at a distance, and maybe they are pushing programs of iris at a distance or face recognition at a distance and [INAUDIBLE] things like that. And there are two problems in this. One is the atmosphere between the subject and you, and the other is the subject-- he does not want you to recognize him. He's uncooperative. What would be your ideas in terms of how our image sensors should go to capture that problem? Because if we have a really good image, then we can run our recognition algorithms, and then we can focus more on the recognition aspect rather than dealing with issues like atmospheric issues and stuff like that. RAVI ATHALE: That's a very broad question. It also touches upon some very sensitive topics, as you can imagine. All I can say is that that program announcement is public. Right? RAVI ATHALE: That is-- STUDENT: [INAUDIBLE] not the only one that's interested. [INAUDIBLE] RAVI ATHALE: There are all kinds of agencies interested-- [INTERPOSING VOICES] RAVI ATHALE: Yeah, there is a program announcement called BEST, Biometric Exploitation, sponsored by-- STUDENT: Biometric exploitation? RAVI ATHALE: Science and Technology. Out of this agency called IARPA, the Intelligence Advanced Research Projects Activity. It's a multidimensional problem, and there are all kinds of ideas that are proposed. Hopefully, within a few months, the contracts will be awarded. We'll find out. But I think, somewhat going beyond that, I think really in imaging sensors-- and this is a debate that we are constantly having. Is it necessary to form an image? And if you don't form an image, if you just map directly into features-- number one, can you do that? Number two, should you do that? Is it advantageous in some form or fashion? It applies to biometrics as well. Right now, for iris recognition, there is the Daugman algorithm. So that is geared upon you sampling the iris at a certain resolution in a certain way. But if you tie the front end acquisition to the exploitation, could you get to a more robust, simpler system?
That's an interesting question, but it's a little difficult for sponsors to launch in that direction, because there is too much risk, and if something changes on one side, the whole thing may collapse. But that's an interesting point. STUDENT: Yeah, I think the point I'd make is that, as long as you continue to rely on collecting a high quality image to do this, I think doing it at a distance is going to pose a challenge. And the Daugman algorithm is a perfect example. You start out with an image that's several hundred kilobytes, and you end up with a template that's a couple hundred bytes. So obviously, you don't need all the data that you collected in that high resolution image to do iris recognition, so why do you need to start with that image in the first place? And if you could develop a compressed representation, if you could sense that compressed representation directly, would it lead to a simpler system? Would it lead to a simpler and more robust system? That last question is very tricky to answer, and the three of us-- [INAUDIBLE] when he was with us, the four of us [INAUDIBLE]-- have gone over this over and over, and we still haven't come to a conclusion. STUDENT: Thanks. RAVI ATHALE: Sure. RAMESH RASKAR: Good. So hopefully, as you already saw, a lot of the project plans are also not about standard cameras, but about exploiting specialization and autonomy and trying to change a lot of the rules here. RAVI ATHALE: And as a matter of fact, for many of the projects we looked at each other and said, hm, should we be talking to these people? STUDENT: I'm going to come back and visit. RAVI ATHALE: Yeah, exactly. RAMESH RASKAR: Yeah, welcome-- you're welcome to sign up as a mentor for the class. So we have several mentors for the class, and you're welcome to sign up for that. So excellent. So next week we have the midterm exam. It's open book, open laptop, open internet, open everything, so don't study for it. And it will be mostly about drawing diagrams and explaining things and problems that will make you think. And eventually, after that, we'll be studying animal eyes, which is [INAUDIBLE] 13th of November.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_4_Computational_Illumination_dual_photography_relighting_Part_2.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, let's get started. So I think what's interesting about light is that it's used in so many interesting ways, whether it's programmed, whether it's not programmed, how it interacts with the world, [INAUDIBLE] there's direct bounce, multiple bounces, different wavelengths, modulation, time, and space. It's a lot of fun. Like, for example, do you know how a TV remote control works? AUDIENCE: IR pulses. PROFESSOR: IR pulses. It's optical, not RF-- the LED of the remote is sending a code, an optical code basically-- thanks-- over time to the photosensor on the TV. Now, why does it work in broad daylight? AUDIENCE: [INAUDIBLE] spectrum. PROFESSOR: It's a different spectrum. That's one benefit. That's not enough. AUDIENCE: Just looking at the differences between light, like peaks-- [INTERPOSING VOICES] AUDIENCE: --actual-- PROFESSOR: [LAUGHS] Actually, you can shine it on the ceiling, and this will work. AUDIENCE: Just the variation in time. AUDIENCE: It has time. So it has the last-- AUDIENCE: Yeah, looking at peaks [? currently. ?] [INTERPOSING VOICES] PROFESSOR: Sorry, the last one? AUDIENCE: It has less energy than [INAUDIBLE] difference of pulse or something. PROFESSOR: Almost. What else [INAUDIBLE]? AUDIENCE: The modulation. AUDIENCE: Then the filter-- PROFESSOR: It's using modulation. So it's actually running at 40 kilohertz. So when it's sending a 1, it's sending something at 40 kilohertz; when it's sending a 0, it's not sending anything. And so the AC component, which is the carrier, is 40 kilohertz. And then the signal is 1 or 0. So it can be read in the presence of ambient light, because ambient light is mostly DC. AUDIENCE: But it can't-- well, so yesterday, I was watching TV, and all of a sudden my remote-- or the cable box stopped working. And I thought it was [INAUDIBLE] one of those, and it was in the [? wrong mode ?] or-- nothing worked. And then I noticed the TV light was blinking. And I noticed-- I was sitting on the other-- on the TV remote, and it was just blasting IR light. But the cable box was just ignoring it, like it couldn't figure out-- PROFESSOR: Because you were pressing too many keys. AUDIENCE: I was holding one button on the other remote without knowing it. And-- PROFESSOR: So two different remotes were conflicting with each other. AUDIENCE: --remotes, yes. PROFESSOR: Exactly. So it's just a simple principle that we always associate with the TV remote. But can that also be used for photography or imaging? So the signal peaks-- the photodetector on the TV is decoding the signal. But that's basically a single pixel. Imagine if every pixel in the camera was made out of that photodetector that's decoding the 40-kilohertz signal. OK. AUDIENCE: What about like a-- like in your ear, you have hairs that vibrate at different frequencies, do you have-- PROFESSOR: What's the analogy there? AUDIENCE: Oh. Like, so, if you had pixels that were listening for light-- PROFESSOR: Uh-huh, at a particular wavelength, right? AUDIENCE: Like listening for [INAUDIBLE] light. PROFESSOR: So imagine you're trying to build a camera. So right now, I have exactly one pixel, which gets a signal that comes in at 40 kilohertz, and then 0, and then 40 kilohertz, and then 0, and so on.
So thinking in terms of communication, we have a carrier, and we have a signal on that. The 0s and the 40 kilohertz-- [INAUDIBLE] amplitude modulation and [INAUDIBLE] signal around that. That's how you think of it in communication. And in the case of a remote control, you send 40 kilohertz, nothing, 40 kilohertz. It's as simple as that-- a very simple signal. Now on your-- so you have your [INAUDIBLE]. If you use a [INAUDIBLE] image [INAUDIBLE] instead of one pixel, imagine every pixel in a camera is able to decode the signal. It takes the 40-kilohertz signal as a reference carrier, and then it all becomes just 1, 0, 1, 0. That's all [INAUDIBLE]. Now imagine if I could build a camera so that every pixel has that property. So I'm going to put a camera here. The [INAUDIBLE] pixel here can decode 40 kilohertz and just pick up what is [? illuminated ?] at 40 kilohertz and ignore what's in the room. So in a typical room like this, [INAUDIBLE] you know that this is sunlight. So sunlight, you know, is a huge DC, and then a TV remote is giving a little bit of signal. And then all the photodetector does is-- it's just frequency selection [INAUDIBLE] here. And this is the signal, and it ignores all the DC. Now, can I create a camera where every pixel becomes the same as that, and now I can shine the room with my remote so that the whole scene is being [? flooded ?] by 40 kilohertz, and in bright daylight this scene will appear as if it was lit only by this flashing LED and nothing else? Is that clear? AUDIENCE: So there's a lens in here, right? PROFESSOR: Sorry? AUDIENCE: There's a lens in this case? PROFESSOR: Yeah, there's a lens and all that. It's just a typical camera with a sensor and so on. This point is being focused here and so on. It's the same thing. It shows that the light I'm having here is [INAUDIBLE] 40 kilohertz and [INAUDIBLE] 40 kilohertz and so on. AUDIENCE: But can you build cameras which operate at 40 kilohertz, like that [INAUDIBLE] per second? PROFESSOR: [INAUDIBLE] you could. So we're not going to get cameras that look like this. It will happen as the silicon improves and so on. Of course, there are always people that [INAUDIBLE] 40 kilohertz. Now imagine somebody gives me a flashlight that actually runs at [? 50 ?] kilohertz. And this one runs at 40 kilohertz. And this particular pixel actually captures the signal across 40 and 50, and in software, it can decide what's the [INAUDIBLE] at 40 and what's the [INAUDIBLE] at 50. What did we just [? do ?] here compared to assignment number 1? Two flashlights on at the same time. I want to know how the signals [INAUDIBLE]. This one and this one. So this is A and this is B. And the image I'm getting is A plus B. But in software, I can decompose it and say which part of the image-- which intensity came because of A and which intensity came because of B. So in software, I can tune between this light source and that light source, just like on your car radio you can tune between the 99 megahertz station and the 88 megahertz station. So we will tune that on the camera. And once we have that, imagine cinematography. You can put all kinds of lights through the movie, and then go into Photoshop and change any light, any color, any intensity. Again, beautiful light [INAUDIBLE]. AUDIENCE: But again, lots of data [INAUDIBLE]. PROFESSOR: Yeah, but it doesn't [INAUDIBLE]. [LAUGHTER] [INTERPOSING VOICES] And they'll be happy to clear [INAUDIBLE]. [LAUGHTER] So there's a lot more to come.
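Before moving on, here is a minimal sketch of the per-pixel lock-in idea just described: a weak source modulated at a 40-kilohertz carrier is recovered from under a large DC ambient term by multiplying against a reference carrier and averaging. All the numbers here (sample rate, amplitudes, duration) are illustrative assumptions, not specifications of any real sensor.

import numpy as np

fs = 400_000                              # samples per second at one pixel (assumed)
t = np.arange(0, 0.01, 1 / fs)            # 10 ms of samples
f_c = 40_000                              # carrier frequency of the modulated source

ambient = 5.0                             # big, mostly-DC sunlight term
source = 0.2                              # weak modulated illumination we want
carrier = 0.5 * (1 + np.sin(2 * np.pi * f_c * t))   # on/off-style modulation
pixel = ambient + source * carrier + 0.01 * np.random.randn(t.size)

# Lock-in step: multiply by the reference carrier and average. The DC
# ambient multiplies a zero-mean sinusoid and averages away; only the
# component in phase with the carrier survives.
demod = pixel * np.sin(2 * np.pi * f_c * t)
estimate = 4.0 * np.mean(demod)           # scale factor for this modulation scheme
print(estimate)                           # close to `source`; ambient rejected

With two sources modulated at two different carriers, the same pixel stream can be demodulated twice, once against each reference, which is exactly the software tuning between light source A and light source B described above.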
So every time you think about how light interacts with the world, ask, how can I use that for imaging? AUDIENCE: Is it crazy-- is that kind of how sonar works, or are they just like [INAUDIBLE]? PROFESSOR: What's weird is I can take sound and create images like-- AUDIENCE: You meant sonar. PROFESSOR: Sonar. AUDIENCE: Boop. Yeah, yeah, yeah. [INTERPOSING VOICES] AUDIENCE: Or LiDAR. PROFESSOR: LiDAR-- yeah, all those methods are basically using the same principles [INAUDIBLE]. It starts, it bounces, it has certain properties in terms of presence or [? absence, ?] position, color, space modulation, time modulation, and all these things. AUDIENCE: Just a little [? silly ?] kind of question-- who was the first person to do computational photography? PROFESSOR: Steve Mann right here [INAUDIBLE]. Steve Mann and Roz Picard were the first ones to use the term "computational photography," although they used it in a very specific context, for high dynamic range imaging. And then later on, very important people, pioneers in the field such as Shree Nayar, and Marc Levoy, and so on. They were [INAUDIBLE] even before the term was around. Actually, when you look at all these papers and presentations, I would say over half of them are just because of those two guys. AUDIENCE: Because when you talk about this, it's also when NASA explores planets, you also think of [INAUDIBLE], what does it mean, [INAUDIBLE]. PROFESSOR: Exactly. I mean, what we're talking about is really communication concepts. It's 100 years old. So a lot of concepts kind of get borrowed. And [INAUDIBLE], you couldn't think of decoding a particular hertz signal, right? And we have moved to digital only what, 15, 20 years ago? And in astronomy, all that math and all the techniques that we use in communication become possible in our world. So this is kind of a-- because at the same time, when you are in the communication world, the signals don't have very high dimension. It's usually a two-dimensional signal-- a number of stations, and a frequency range-- basically a two-dimensional signal. Every radio station is transmitting audio. And audio is one dimensional. And so it's a two-dimensional signal that's in our world, and we capture that on our antenna as a one-dimensional signal over time, and we decode that and recover the [INAUDIBLE]. So usually it's not very high dimensional. And even if it had high dimension, it's multi-scale. So if I'm sending 500 channels on a fiber, that's just 500 separate signals. They are not intermixed like we have here. So although [INAUDIBLE] similar, the problems in imaging are more complex. The high dimensionality [INAUDIBLE] problems [INAUDIBLE]. But research is all about fusion of the dissimilar. So if you can learn ideas from communication, and optics, and quantum computing, and signal processing, you put all that together and mix, and you can create magic. And almost every project we see has some element of magic. And that's what makes it very exciting. AUDIENCE: So 40 kilohertz seems quite fast. But it seems like maybe we don't actually need to be quite that fast. PROFESSOR: Yeah, yeah. AUDIENCE: Yeah. So I'm just trying to think, what would the-- how slow could you go and still, basically, eliminate the DC component? I mean, what about the fluorescent lights that are at 60 hertz or-- PROFESSOR: Yeah. I mean, [INAUDIBLE] like, what, 25 kilohertz to remove the flicker?
But yeah, you could use-- I mean, if the camera's at 60 hertz, you could just use a 60-hertz strobe and turn it on in every odd frame and off in every even frame. And that alone will allow you to do this subtraction. The only problem is that if you do a pure subtraction, you're going to subtract two really large quantities, two large numbers. So in the first image, you have sun plus [? flash. ?] And the second image is just the sun. And this one is very, very small compared to this. You're subtracting two large numbers and expecting to recover the contribution because of the [? flash. ?] Is that [INAUDIBLE]? AUDIENCE: Sounds like an error accumulation problem. PROFESSOR: Yeah. But that's [? exactly ?] the problem communication [INAUDIBLE] solves. The carrier and the signal riding on it in free space over large distances are so tiny that they use really clever coding mechanisms, so that your imaging [INAUDIBLE] increase [INAUDIBLE]. So I just want you to think very broad. I know many of you here have very interesting backgrounds in communication, and chemistry, and interaction, and so on. So try to make the best of that. So temporal modulation, actually, is not used that effectively right now in imaging. [? Certain ?] projects, I'm not going to go into detail, but they're on that wiki that I sent you. So please add more information there. Add your own experiences, some of the things you are mentioning, some of the projects you're mentioning. Please go and add all those things to those wikis. All right. So sometimes, you can't control the illumination, but you can just exploit natural illumination. OK. So here's a project from Washington University in St. Louis. And what they did was they took webcam images all day long at given times of the day. So on the x-axis, you have time of day. So it's dark in the night, then daytime, and again dark. And on the y-axis, you have day of the year. So those are how many-- yeah, I guess just day of the year. I don't know after how many days each was sampled. If the top is the 1st of January and the bottom is the 31st of December, what can you say from this data set? AUDIENCE: Winter has shorter days. PROFESSOR: Winter has shorter days, which means where is this camera? AUDIENCE: In the northern hemisphere. PROFESSOR: It's in the northern hemisphere, right? And you can probably say more: if we just had the ratio of the shortest day to the longest day, that would tell you the latitude, because when you're on the equator, the longest and shortest days have equal length. But as you go away from that-- there is already a lot of data embedded in this natural illumination. So this project is really beautiful. They did all kinds of interesting things. So they have hundreds of static-- thousands of static cameras, variation over the year, over a day. They put all that together. They can do really interesting things. So it turns out, in traditional lighting, in a typical scenario, light is linear. What does it mean? It means that if I have a scene and I light it with a particular brightness, a particular intensity of light, I get a certain brightness. If I make my light twice as bright, everything will become twice as bright. As simple as that. This is not true at all intensities of light. When you go to really, really bright light, it's not true. The world starts behaving in a nonlinear fashion. If you have your speaker on your synthesizer, if you pump twice the power to your speaker, do you always get twice the loudness?
AUDIENCE: It only increases 1 decibel. AUDIENCE: It begins to saturate. PROFESSOR: It tends to saturate. And eventually, you'll run into nonlinear behavior. And the same thing is true for light as well. But as far as sunlight is concerned, and for the kind of world we are involved with, everything is linear, so we don't have to worry about it. And because everything is linear, mathematically it can all be expressed as just linear transforms, and linear algebra, and so on. That's why a background in linear algebra is very useful when you're doing any imaging work. So they did some very simple things. Like, they took all these images and just did a PCA, Principal Component Analysis. And that allows them to figure out the haze, and cloud, and the orientation of the surfaces. So this essentially lists-- and I believe they can figure out that this building is facing one way versus this building, and so on, without even analyzing and doing any sophisticated computer vision, just from the sequence of images. And then they can segment the scene. This is something close, and this is very far away-- they can encode that. And they can even figure out where a webcam is, its latitude and longitude. And Robert [INAUDIBLE] told me that they can do-- just based on the sunrise and sunset data set that we saw earlier, they can localize with 50-mile accuracy. And if you have some [? seed ?] cameras where you know the locations, then you can interpolate and go down to about 25 miles. And in addition, if you have satellite imagery, so you know how the intensity is changing, then you can do around 15 miles. And then the people at CMU, such as Srinivasa Narasimhan and Alyosha Efros, recently did a paper where they just look at a patch of the sky. And if you look at a patch of the sky in broad daylight, it always has a gradient. And depending on where the sun is, the gradient has a particular orientation in x or y, the intensity ramp. And that actually localizes the direction of the sun. So now they can look at webcam images and click on the part that shows the sky, and they can localize the cameras down to, again, a few tens of miles. I forget exactly what the numbers are, but it's pretty fascinating. And they're not even using polarization. If you use polarization, it can get even better, because the sky is highly polarized. AUDIENCE: I have a question about that, actually. Do normal digital camera sensors or film or anything, do they have any polarization dependence at all? PROFESSOR: An ordinary sensor doesn't. But you can always put a polarization-- AUDIENCE: You can always put a polarizing filter, but there's really no correlation [INAUDIBLE]? PROFESSOR: As far as I know. Yeah. Even the human eye does not have very strong sensitivity to polarization. But there are some results-- and if you talk to Matt Hirsch, he claims he can see polarization. He even has experiments where if you see one way, you see one color, and if you see the other way, you see a different color. He's shown it to me dozens of times, but I don't see the difference. But he's been able to recruit a lot of people to say yes, they see it. And there are very few animal eyes that can actually sense polarization. There are some underwater creatures that can do a pretty good job, for instance. So again, they can do the encoding of that-- how far other things are, orientation of surfaces. So here, you can see that this orientation is different from that orientation. How would you figure that out, by the way? AUDIENCE: Shadows.
PROFESSOR: Shadows and sunlight, because some faces will be lit [INAUDIBLE] than others depending on the time of day. So you don't have to process it in an individual manner. You just throw it in a big matrix, [INAUDIBLE] PCA, and [INAUDIBLE]. OK. So let me-- we saw this example last time. So I'll skip that. Let me switch to light fields and talk about our assignment. All right. So light fields. It's one of the most important concepts we're going to learn in this class. And again, realize that the appearance of the world is higher dimensional, not two dimensional. You have a 3D world. You project it on a 2D image; clearly a lot of information is lost. Now, if you build the so-called plenoptic function-- which is, what is the set of all things we can ever see? It was a name, actually, given by Ted Adelson, a professor here, in the early '90s. Then it turns out it's a very high-dimensional world. If I stay in one place and think about the bubble around me, I have the azimuth and elevation of every direction-- just the bubble. On Google Street Map, you have a bubble for every location. That's two dimensions. And then there's time and wavelength, different colors and over time. So that's four dimensional. Now, I can put these bubbles in different places. And every bubble can be placed at an x, y, z. So there are three additional degrees of freedom. And if you can capture all that information, then you can recreate a movie from any viewpoint at any time at any wavelength. But it's extremely high dimensional. This is seven dimensional. So the world is actually seven dimensional. And if somebody built this magical device, it would have-- if we make a [INAUDIBLE]. Now we're going to simplify that. And let's say, OK, for all these bubbles shown in the blue, all the rays are emanating, and if I think of any point in the world-- and for now, we're going to ignore time and wavelength-- it becomes five dimensional, from seven to five, because we ignore time and wavelength. I can take a point in 3D. And from that point in 3D, I can think of a direction. And the direction is only 2D, not 3D. Why is that? x, y, z for position-- AUDIENCE: Theta [INAUDIBLE] PROFESSOR: But only theta, phi for angle. Why is it not three dimensional? AUDIENCE: What would you use the third dimension for? PROFESSOR: Because the roll along the ray does not really matter. So you have yaw, pitch, and roll, but the roll can be ignored, because the intensity is the same even if you have roll. So it's only five dimensional. But then if you have an occluder here, then the intensity of this ray is different from the intensity of this ray. If you have no occluder, then the ray intensity remains the same. So now, actually, you can go down to just four dimensions rather than five. The space of all lines in 3D is actually four dimensional. If you want to express all the rays, it's just four parameters-- a, b, c, [? d. ?] There are four [INAUDIBLE]. Now, you can simplify that further for the camera world, where we're going to assign the plane of the sensor, and the plane of the lens, and so on. So that's what we'll see very briefly. So let's say there's a light field in this room. Rays are traveling from light sources, bouncing around everywhere. If I just cut a plane in midair, I can parameterize that plane with x- and y-coordinates. And for every point on the plane, I have, again, theta and phi. So this becomes four dimensional. And that's what we're showing here. The position is s and the direction is theta.
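For readers following along, the dimension counting narrated above can be written compactly. This is only a restatement of the argument; the symbols are the conventional ones, not necessarily those on the slides.

% Plenoptic function: everything one could ever measure about light (7D)
P(x, y, z, \theta, \phi, \lambda, t)
% Fixing time t and wavelength \lambda leaves 5D. With no occluders,
% radiance is constant along a ray, so rays suffice: the space of rays is 4D.
% Two-plane parameterization, common in computational photography:
L(u, v, s, t)
% Flatland version used on the board: position and angle only.
L(s, \theta)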
So often, we will think about flatland. So we'll just think about the plane of the screen as opposed to the 3D world. So in the 3D world, we have x, y, and theta, phi. But in flatland, we have just position and angle. So it's just two dimensional. So that was the one-plane parameterization, where you have position and angle. And another common way to think about that-- another common way to parameterize the light field-- is the two-plane parameterization, where you have one plane that has a position, and a second plane that, again, has a position, and a ray that connects those two. The coordinates of those represent the ray space. So this is the two-plane parameterization. And this is very commonly used in computational camera and photography. So let me jump ahead a little bit, because of the time left, and explain how we're going to do it for our assignment. So remember, we're going to create an effect where we'll put a whole bunch of cameras-- take an array of cameras like this-- and be able to see through [? occluders. ?] And the effect is relatively straightforward. We're going to do so-called synthetic aperture photography. We're going to create an artificial aperture to be able to see through [INAUDIBLE]. So if you have a point that's in focus versus a point that's out of focus, the green point will create a very bright spot. The red point will create a blurred spot. That means its intensity will be correspondingly reduced per pixel. And if you stop down the aperture, what will happen is that the green spot will become slightly dimmer, because less light is reaching the sensor, but the red spot will also blur in a smaller region. If you go in the opposite direction and have a really, really large aperture, then the green spot will be very bright, because a lot of light is being captured-- and that will be over here-- but the red spot will be highly blurred. Now, building such a large aperture is very challenging. So what we're going to do is create that using an array of cameras, like this. And it's the same as synthetic aperture radar, where they use an array of antennas to create, effectively, a much larger antenna. Again, analogies with communication, and RF, and so on. So again, we're going to subdivide this lens into multiple apertures as opposed to one large aperture. And that will be effectively created with a set of cameras like this. And then if you sum the images from each of those apertures, that's the same as creating an image with this very large lens. And a different point in 3D will correspondingly create a different image. That's it. [INAUDIBLE] OK. So how does this work? How are we going to create an effect where something that's out of focus effectively is going to be completely blurred? We saw that even with the aperture of my eye, which is only about 6 millimeters or so, if I put an object really close-- if I put a needle in front of me-- it gets completely blurred, and I can see the world through this eye as if it basically doesn't have an impact. And that's the same effect we're going to see. So you take an array of cameras, or a camera at different positions, and collect, say, 25 photos. If you simply take those 25 photos and sum them up, what will happen? So if I just take a camera-- and for simplicity, we're just going to [INAUDIBLE].
If I have, say, five cameras this way, and I point at [INAUDIBLE], this coordinate in each of these cameras is going to be [INAUDIBLE], because it's [INAUDIBLE]. It's like when you're driving and you're looking at the moon: it always appears to be at the same position. If you're looking at a [INAUDIBLE] which is very, very far away, its coordinate in the camera is going to be the same [INAUDIBLE]. So if I just take these five photos and add them up, sum them up, I basically get the same exact [INAUDIBLE], because I [INAUDIBLE]. Now let me [INAUDIBLE] something that's nearby. This coordinate in the top camera will be above the point at infinity. But in the bottom camera, it will be below the point at infinity. So now, if you have summed these five images up, this point will be completely blurred, because its coordinates in each of the five images are very different. On the other hand, what's at infinity will be [INAUDIBLE]. And that's exactly what's happening when you focus. You're basically taking an image from every part of the lens and summing it up [INAUDIBLE]. Here we're doing it in software. And of course, mathematically, we're not going to sum it up as is. We can shift each image and sum. So if I wanted to focus here, what I would do is I would take this picture and keep it as is, and I'd take this picture. And I know that from here to here, this one is shifted left by about 5 pixels. I'll shift this whole image by 5 pixels and then add it. I'll take the next one. And I know that's going to be about 10 pixels [INAUDIBLE]. I'll shift the whole image by 10 pixels and then add it. And the bottom one, I have to shift by 20 pixels and then add. If I do that, then this point will be in sharp focus. And the point at infinity will be completely blurred. And using this very simple shift-and-add mechanism, we're able to focus very close [INAUDIBLE]. Is this clear? AUDIENCE: I have a quick question about this. PROFESSOR: Yeah. AUDIENCE: Does this set some minimum focus distance? PROFESSOR: It could. It could, yes. AUDIENCE: Or what [INAUDIBLE] minimum focus distance does it need? PROFESSOR: The field of view is what sets it-- for example, if I get really close, then these cameras can see the sky, [INAUDIBLE]. So that's [INAUDIBLE]. AUDIENCE: So is that the only thing, though? Is it just the field of view that [INAUDIBLE] focal point? PROFESSOR: Mainly the field of view. The resolution [INAUDIBLE] as well. But mainly the field of view. Otherwise, this technique can focus on [INAUDIBLE]. It can even focus beyond infinity. So first [INAUDIBLE]. [LAUGHTER] Because you can-- you're sort of adding them up. I can add them up in the reverse direction, minus 5 pixels, and focus at infinity and beyond. So this is all we're going to do. But as you'll realize in your assignment, there are a few things to be learned. Here, I just chose some numbers, and shift by 5 pixels, and add. What will end up happening in your case is you'll realize that if you put these cameras and take the pictures and then figure out what the distances should be, what the projection of these points is going to be-- if you don't use some meaningful numbers, you'll never get [INAUDIBLE], because your parallax, which is the distance-- the change in coordinates as you switch from one [INAUDIBLE] to the other-- is either too large or too small. And it's really easy to do by just kind of eyeballing it-- but eyeballing it not with the camera, with your own eyes.
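Here is a minimal sketch of the shift-and-add refocusing just described, assuming grayscale or color frames from a camera slid uniformly along one axis. The function and parameter names are mine; a careful implementation would pad or crop at the borders instead of letting np.roll wrap around.

import numpy as np

def refocus(images, shifts_px):
    # images:    list of HxW or HxWx3 float arrays, one per camera position
    # shifts_px: per-image horizontal shift in pixels, proportional to each
    #            camera's offset and to the chosen focus depth, e.g.
    #            [0, 5, 10, 15, 20] to focus on a near object (matching the
    #            5/10/20-pixel example above), all zeros to focus at infinity,
    #            negative values to "focus beyond infinity"
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, s in zip(images, shifts_px):
        acc += np.roll(img, shift=int(s), axis=1)   # translate, then accumulate
    return acc / len(images)                        # average = synthetic aperture

Points whose parallax matches the chosen shifts add up coherently and stay sharp; everything else, including an occluder in front, is averaged away, which is what lets you see through it.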
So you can just stand in one place and see, if you move by 10 centimeters, do you eventually see the point behind? And in the case of the Stanford project, that was a really challenging example, with a set of trees and people behind them. You don't have to choose something that complicated. You can choose some set of objects in the front and then some painting in the background. If you want to set up the scene-- the best would be really to just put a pencil first, in the [INAUDIBLE], or a [? fence, ?] and then there's a painting in the background. And then, if from any single camera this painting is occluded, by taking multiple photos, you can see it. I can do it on a table. You can do it outdoors, for example. There are some trees here. You can see through them. Choose your situation, and you'll be able to answer this. And there will be more instructions on the website. AUDIENCE: Are we allowed to do it all computationally? PROFESSOR: Pure software? AUDIENCE: Yeah. PROFESSOR: Just OpenGL, you mean? Absolutely, yeah. Yeah. But I mean, where's the fun in that? You are perfectly welcome to do that, yeah. AUDIENCE: What's the tolerance for the parallelism? PROFESSOR: So you want to be as close to parallel as possible. But-- we are not going to discuss it here, but as you know, once you have-- for example, if you misalign this camera and collect your [INAUDIBLE], then you know that this image to this image is just a pure [? homography, ?] a single [INAUDIBLE]. So you could just fix that mathematically if you wanted to. But you should just avoid that for this assignment. Try to keep it parallel. Just put it on a ruler and slide it. And I'll give-- there's more information on the website.
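Since the misalignment remark above may matter in practice, here is a hedged sketch of the mathematical fix mentioned: estimating a homography from a few corresponding points and warping the misaligned view into the reference frame before the shift-and-add. The function and variable names are mine; OpenCV is one common way to do this, not something prescribed by the lecture.

import cv2
import numpy as np

def align_to_reference(misaligned_image, pts_src, pts_dst, ref_shape):
    # pts_src, pts_dst: at least 4 corresponding points (e.g. corners of a
    # checkerboard seen by both cameras), float32 arrays of shape (N, 1, 2).
    H, _ = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC)
    h, w = ref_shape[:2]
    # Warp the misaligned view into the reference camera's frame; the plain
    # shift-and-add of the assignment then applies unchanged.
    return cv2.warpPerspective(misaligned_image, H, (w, h))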
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_2_Modern_optics_and_lenses_raymatrix_operations_context_enhanced_imaging_Part_1.txt
|
NARRATOR: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: --this easy. And it's mostly to get you set up: how Stellar works, how to set up your own website, make sure you have a camera that you can use, make sure you have a way to transfer your images to your program. And as I said, it's completely up to you what environment you want to use. You want to use Java, Flash, C++, Photoshop, Basic, Japanese-- whatever language you want to use for your programming is perfectly fine. But make sure you show me your own images, not images captured from somebody else or from the internet, but something that you have taken yourself, some photos you have taken yourself. Any questions about the assignment? [INAUDIBLE] for the [INAUDIBLE], for 531, we have four assignments, so three more after this. And for 131, we have two more assignments after this. While this is getting set up, I'm just passing around a UV light demo. Who has it right now? STUDENT: I do. RAMESH RASKAR: OK, so why don't you demonstrate the concept to everybody first? So when it goes around, they know what to look for. STUDENT: OK. RAMESH RASKAR: You want to show the magic trick. You want to show [INAUDIBLE]. STUDENT: All right, so you see almost nothing, and then we leave it alone. There are actually a lot of things that do this too. My driver's license, the Massachusetts driver's license, has the Massachusetts State seal. [INAUDIBLE] RAMESH RASKAR: So there was this [INAUDIBLE] paper in 2007 on how to create ultraviolet inks, inks that respond to ultraviolet light, in that color. STUDENT: Yeah, definitely the core technologies are older, but this is color. Yeah, you actually see it in many passports and visas in particular-- especially, the smaller the country is, the more colorful the UV image of their visa is. [INAUDIBLE], but people love to see red, and green, and purplish color. But I've actually never seen, basically, full color before. That's pretty neat. RAMESH RASKAR: Yeah, and also, if you go to a club, they use UV light. So if you don't use the right detergent on your clothing, or if you have a stain that you think isn't visible in normal light, and you go to a place where you want to be hip-- STUDENT: I actually have a UV LED on my key. RAMESH RASKAR: Oh, so before you go stand in the line, you check yourself. STUDENT: Yeah. RAMESH RASKAR: Yeah, you are too cool for this place. So yeah, if you become an expert at playing with ultraviolet light and you like to have this in your hand all the time, there are lots of jobs. You could be the guy who checks at the entry at the airport, at the security line-- you know, he's checking-- or you could be a bouncer [INAUDIBLE]. STUDENT: Or a banker, right? A lot of the dollar bills have a UV strip. RAMESH RASKAR: Exactly. So the theme for today is going to be lighting and different kinds of illumination. And hopefully, it will make you think. For today, the budget is, I think, less than $5,000, and last year, I think it was a little over 16k. And we'll show you interesting things you can do with illumination. [INAUDIBLE] Is it getting set up? Again, let me show you some other toys. So my collaborator-- if you can just hold the book and show it to everybody.
This is the book that my co-author Jack Tumblin and I are coming out with, called Computational Photography, and the way Jack likes to describe the scenario today for cameras and photography is: it's a lion who has been shackled for a long time. And if you just let this lion out of the cage, it doesn't know what to do. It just stands there and just looks around. Even an unshackled lion doesn't know how to exploit the freedom, and I think it's the same situation with photography today. We have gone from film to digital. But even today, when we think about cameras, they're trying to mimic-- they want a digital camera to look like a traditional film camera. Even cell phone makers, who have complete freedom in how to create cameras and form factors, are trying to create an experience that's somewhat similar to a film camera. They have a shutter at the same location and expect you to hold it with two hands. Very few of them allow single-handed interaction. And I just want a camera that I can squeeze, or I can tap, or I can shake, and then after five seconds, it takes a photo. It can do all these other things, but it still wants us to do this two-handed interaction. But everything stays the same. The button is off center, so you cannot even hold it with one hand. You have to hold it with two hands so you can press the shutter and so on. So this is how the world works, you know? You always are dealing with the legacy of what came before. And a really classic example, something that makes me really squirm: look at this beautiful camera that came out from Nikon in the mid '90s. And this was one of the first kind of digital cameras that professionals thought it's OK to be seen with, because they had this whole debate about, oh, digital images will never catch up with the resolution of film. These are the same people who still hold onto the LP, saying, see, these are digital, and they don't sound the same as [INAUDIBLE]. But anyway, this camera, which is supposed to be digital-- if you cut it open, what you realize is there is still a place-- this is a digital camera-- there's still a place for a film cartridge. All they did was remove the part where the film goes and slap on a digital sensor. Extra electronics at the bottom, added some electronic connectors-- but why bother? Because people still like the same form factor and all that. There's still a place for film in it. It's just mind blowing, and this is how it works. So a lot of times, people say, wow, a billion people have cameras, and what is going to change about it? Hopefully, through this class, you'll realize it's just a wide, wide open world. There are certain fields that peak-- there's a lot of innovation and research, and then they mature, and there's not much more that's exciting about those fields-- but there are certain fields that just continue to grow. And imaging is just one of them. But just look for the special cartridge here. And if you look sideways, this is where the film [LAUGHS] wraps around. So you can take pictures of this for accurate notes. I just want to spend the first few minutes going through some of the fast forward preview items we didn't get to cover last time and then come back and talk about this topic.
We were here, somewhere, last time, where you can create these cameras where, instead of having an out-of-focus blur that expands like this, you can create lenses where, when it goes out of focus, the point spread function actually rotates. This is a rotational point spread function. And again, this is just a teaser. So we will study this in detail. And this is very powerful, whether it's for photography, or for scientific imaging, or real-time [INAUDIBLE] applications and so on. And then we discussed the duality of particles and waves. And we'll be discussing that a little bit. And then we'll talk about this: new types of cameras that are being designed, where you can create a 35-millimeter lens, including the sensor-- the whole package can be built with a thickness of just about 5 millimeters. And so they're building these lenses where light comes in from the annulus, around the edges of the lens. And light actually reflects around, and the image is formed in the center. And this is how many of the Sony-- what's the Sony [INAUDIBLE] cameras? AUDIENCE: The Sony [INAUDIBLE]. RAMESH RASKAR: [INAUDIBLE], yes, works as well, where the sensor is actually in the-- for the Sony cameras, the lens is over here, but the sensor is actually all the way back at the bottom. So light reflects around, and the image is captured at the bottom. That's why the cameras are so thin. So the dream of a flat camera is something we'll be looking for. And again, by making it flat-- I mean, the new iPod Nano is, what, 0.25 inches thick? And it still has a camera. And that's because they're using crappier and crappier lenses now-- in short, just a straightforward design where you have a lens and, behind that, you have a sensor. If instead you have a lens with the addition of these concentric reflectors, and the image forms in the middle, then, effectively, you have a 35-millimeter lens with an optical path that's folded in. So we'll be looking at a lot of these interesting designs that I'm sure you will see in products coming up. And then we'll look at, again, some really interesting lenses. A traditional lens is some form of a convex or concave lens, but the new lenses are just flat. Their front and back profiles are just flat. So it's very easy to stack them, easy to put them in modern devices. And the way they work is not by changing the thickness of glass, but by changing the refractive index. So the refractive index is high in the center and very low at the edges. And by doing that, you're effectively creating a lens. And that's actually how a lot of flatbed scanners are built, as well, so we'll be studying that. Very briefly, photonic crystals and how they will change imaging. [? Then in ?] photography, we saw the picture last time-- the study of how [INAUDIBLE] photography can be done, polarization, underwater. And there are really interesting cameras that are coming up. Let me see if the lights will play along. [CLICKING] --can see the [INAUDIBLE]. Yes, slightly better. So these are polarization cameras where, instead of adding a polarizing filter in front of the lens, the polarization filters are actually on the Bayer mosaic. A typical camera, as you will study, will have very tiny RGB filters in front of each pixel. So every pixel has a red, green, or blue filter. And here, actually, they have different orientations of polarization: vertical orientation, horizontal orientation, +45- and -45-degree orientations for polarization.
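As an aside for readers: the four filter orientations just listed are exactly what is needed to estimate the linear polarization state at each pixel. Below is a minimal sketch of that computation; the names are mine, and the inputs are assumed to be four co-registered images demosaicked from the polarization mosaic.

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    # i0, i45, i90, i135: float images measured behind 0, +45, 90, and
    # -45/135 degree polarizers, all the same shape.
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90                             # 0-vs-90 preference
    s2 = i45 - i135                           # 45-vs-135 preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)           # angle of linear polarization
    return dolp, aolp

The per-pixel angle and degree of linear polarization are what feed the surface-normal estimation described next.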
And from that, it turns out, under certain lighting conditions, such as sunlight, light that's reflected from vehicles or from faces is partially polarized. And they can look at how the light is polarized in these four different directions and, from that, estimate the orientation of the surface, or the surface normal. And from that, they can create a 3D model. So it's pretty exciting. They're getting lots of money from the government for detecting vehicles in complex backgrounds. So this might come in a consumer camera. It might come for scientific imaging. And there are really interesting sensors. I think we talked about this very briefly last time, [? compressive ?] sensing. We'll spend a lot of time discussing [? compressive ?] sensing. There's a lot of hype about it. We'll try to understand where it works, where it fails, and what's the power of [? compressive ?] sensing, and where it can be exploited. Some other bizarre photos you may have seen are the photo finish photos in sports photography. So this photo looks like an ordinary picture. Here's the winner of this, I believe, 100-meter dash. But if you look at the people behind, they have really strange [INAUDIBLE]. Now, look down here. Anybody have a guess what's going on? Here's another one with even more distortions. Look at the leg of this one. And that's the finish line. This is the photo finish picture in sports photography compared to [INAUDIBLE]. Yes? AUDIENCE: Because the sensor is scanning, line by line. RAMESH RASKAR: Mm-hmm. Exactly. The whole photo is not taken at a single instant. The photo is taken, actually, one line at a time. As you can see, all of them-- at least when the race is finishing, because the shoulders-- the moment you cross with the shoulders is what matters. So as you can see, when they finish, they all have their bodies leaning forward, right? And, clearly, not all of them are finishing at the same instant. So this particular line, for example, is captured first, and then the next one, and the next one, and the next one. So this whole picture was captured over a whole second, but it's actually running at 2,000 frames per second. So in one second, you're capturing 2,000 lines, and they're simply placed together to construct this picture. And this is very useful, because the judges, the referees, can just look at this picture and figure out exactly when every runner crossed the finish line. And it's a nice summary of the finish of the race-- not the finish at any given instant, but the finish at every instant. So, if you look at the background, you'll see that there are no vehicles. There are no big signs. Same with the [? track ?] here. And that's because we're capturing the same exact line in the camera, just capturing the same one again and again. So imagine taking a photo, throwing away all the pixels except the center line, taking another picture after 1/2000 of a second, again just keeping the center line, and just putting it together-- so, very interesting photography. And I believe I saw a product on-- maybe it was an iPhone app or one of the mobile phone cameras, where they're creating this new, fun photography where, instead of taking the photo in a single instant, you release the shutter and it actually does a very slow rolling shutter. So it's actually a video camera that exposes the first line, then the second line, then the third line, and so on, for every frame. So I can release the shutter, and I can simply turn in front of it. And what you will see is, at the top of the picture, my forehead is being seen.
In the middle is the side of my face, and at the bottom, it's the back of my head. So these are very beautiful pictures that can be created with this. But in this class, maybe you can convert a video camera into some other interesting projection of this x, y, [? t ?] volume, right? Because a video camera is x, y pixels, and we are capturing them over time. So it's a three-dimensional data set. And a final photo is actually some 2D projection of that. And this is one type of projection-- one way of projecting it-- but we'll study how there are some other ways in there for scientific imaging or for artistic photography. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: For sure. AUDIENCE: Do you know what type of lens they use? RAMESH RASKAR: They usually use standard lenses, because-- even if you're looking at a very narrow line of view directions, you still have to form an extremely high quality image. So you cannot use a [INAUDIBLE] [? vertical ?] lens or anything like that. So they still use a traditional lens for that. AUDIENCE: Is this the main difference between a video camera and standard cameras? RAMESH RASKAR: In terms of its-- I think there are subtle differences when it comes to electronics, of course. So, for example, a camera like this on a cheap device is actually a video camera. Any cheap digital still camera is actually a video camera. And when you release the shutter, it takes the next frame and captures that as the real photo and gives it to you. But it's constantly running as a video camera, and that's why, on the viewfinder, you can see the video of what's being captured. So even if they're selling it as a digital still camera, and even if it doesn't have a video mode, it's actually a video camera. And then there are the purists who build SLRs and so on. And they say, no, no, it is a digital still camera, and you must see through an optical viewfinder. But, just like the people who made the camera we saw, those people are a minority. And all this snobbishness about, when you have an SLR camera, you must have a mirror, is also gone, fortunately. So that's the fundamental difference between a still camera and a video camera. But there's still an issue of bandwidth, and that comes down to storage and processing. How quickly can-- even if you're exposing the whole sensor-- so, in particular, you might have a digital camera that's six megapixels, but the video is only less than a megapixel. And that's mainly because of bandwidth, and storage, and so on. But it's just an electronics issue, so there's no reason to-- in the future, it will all be fused into-- once the bandwidth and storage issues become not so critical, they'll just fuse into one single device. There are already cameras, I believe, where, once you release the shutter-- you can be in the video mode, and then you release the shutter and just store some of those frames. So that's straightforward. So, in terms of optics and image formation, there's no difference. There are some other differences, of course, in terms of noise. So you know that, for a digital still camera, when you release the shutter, usually a mechanical shutter opens and closes to integrate the light for a finite duration. But, clearly, for a video camera, there's no mechanical shutter opening and closing for each frame. So why don't they just do that for digital still cameras?
And why not get rid of the mechanical shutter altogether and just do an electronic shutter? And, again, purists want that sound, ka-chung, ka-chung, but there's absolutely no reason to do that. And, nowadays, it's true that a lot of cameras have focal-plane shutters that sometimes are mechanical, and sometimes they're just electronic. AUDIENCE: Is there absolutely no reason to have a mechanical shutter? RAMESH RASKAR: So, exactly. So sometimes you have issues like just thermal noise. So if you're taking a picture that's 15 seconds long, then you don't know how the camera's going to behave, how much noise is going to collect over 15 seconds. So a camera maker has a very good model of what the noise will be if you take a very short exposure, like under a second. But if it's a 15-second long exposure, depending on what conditions you are in, how warm your hand is-- all these things are going to change the noise properties of the sensor. So, typically, they take two pictures. They'll take a 15-second long picture with the shutter open. And then they will take a 15-second long exposure with the shutter closed and then subtract the two images to get rid of the noise, because there could be noise that varies over the frame of the sensor. So, every once in a while, it's good to have completely dark conditions created mechanically. But, again, that will change over time. I'm sure they'll come up with solutions where you don't have to do this extra measurement to figure out the noise in the system and do the subtraction. I mean, in the communications world, right, you have to deal with noise all the time, and you don't always transform [INAUDIBLE]. So I'm sure we can come up with some intelligent encoding. Maybe every fifth pixel is being used for measuring noise, and other pixels are being used for capturing the photo. So there will be some interesting encoding mechanisms there, I would imagine. The CityBlock Project-- that's Gaurav Garg. He was my intern. And this was a project that started with Augusto-- I forget his last name. Augusto Roman, I think. And they started this project when they were grad students at Stanford, did an internship at Google, where they came up with an idea of mounting a camera on a truck and just driving in Palo Alto and, again, using a very similar idea of just taking the central column of every video frame. So if you can imagine, you take thousands of pictures, take only the center column of each image, and just put it together. Then you have basically created this orthographic camera. It's a panorama that just goes on and on for the facade of a street. And they applied some other interesting algorithms. Because if you have [? trees, ?] or people, or cars that are moving or in front of the part that you care about, then, because of the parallax between multiple views, you can eliminate them, as well. So they put all those things together. That was their CityBlock Project, and it had very interesting distortion artifacts and so on. But it was beautiful, and what Augusto tells me is that this is what was pitched for Google Street View, as the representation. Because, right now, it's all discrete, right? You jump from one bubble to the next bubble. And what he tells me is that the reason why they didn't choose what he did in his thesis, and they went for this traditional bubble model, is, apparently, in user studies, they realized that people get really confused when you show them pictures like this.
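The dark-frame trick described above is easy to state in code. A minimal sketch, assuming both exposures are available as registered float arrays; the function name and the clipping choice are mine:

```python
import numpy as np

def dark_frame_subtract(long_exposure, dark_frame):
    """Remove slowly varying thermal / fixed-pattern noise.

    long_exposure: e.g. a 15-second shot with the shutter open
                   (scene plus noise).
    dark_frame:    same duration with the shutter closed (noise only).
    The noise varies across the sensor, so subtract pixel-wise.
    """
    cleaned = long_exposure - dark_frame
    return np.clip(cleaned, 0.0, None)   # negative residuals are noise
```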
People are more comfortable with the bubble interface-- jumping to the next bubble, and then looking around, and then jumping to the next bubble, and looking around. It's odd to me because I would have been more comfortable looking at something like this. AUDIENCE: Yes. RAMESH RASKAR: But this is what they have right now. Of course, the new Google Street View system actually has a lot of additional sensors. They have GPS. They have a compass. They have a LiDAR that's actually taking 3D images of all the cities, so if a Google truck is going by, cover your eyes because they're shooting lasers. [LAUGHTER] I'm just kidding. It's eye safe. But they have 3D models, and the reason why the 3D models are not available to us, on the browser, is because they haven't figured out how to use this data-- tera-- petabytes of data-- and how to transfer that and make it available in a streaming fashion. But, again, when that problem is solved, we will see 3D models that are exploited for all kinds of interesting purposes. And this year, I was on the SIGGRAPH committee. And there were just a ton of papers on what you can do with street-level imagery, because this is becoming a very hot topic right now, which is good and bad. If it's already hot, it's probably not worth pursuing because there are too many people doing it. But, for a course project, it's perfectly fine to do, and you might be able to come up with a great idea for a SIGGRAPH paper. We'll be studying a lot of work on motion deblurring. This is actually a picture we took in Kendall Square. And we have cameras now that can take highly blurred, motion-blurred images and recover a sharp photo in post-capture. So, last time, we saw focus deblurring. This is motion deblurring. And the way it works-- and here's another example where you might have an aerial imagery scenario, an aircraft that's flying sufficiently low, and with a sufficiently long exposure, you'll get a blurred photo. But, again, using this technique I'm going to describe in the next slide, you can do motion deblurring for cars. And the basic idea was, instead of taking a photo where, when you release the shutter, you open the shutter, keep it open for, say, 100 milliseconds, close the shutter, and get one photo-- instead of that, you release the shutter, and then you open and close it multiple times. And at the end, you still get one photo, but it has stopped and started the integration of light multiple times in between. OK? And, of course, if you do that with a mechanical shutter, you'll probably void the warranty, and the camera will be unusable very quickly. So, instead of a mechanical opening and closing of the shutter, we used an LCD, actually, a ferroelectric LCD that becomes transparent or opaque. When the LCD is opaque, you block the transmission of light, and when it's transparent, the light goes through. And that encodes the motion of the object. So if you have a point light, with a traditional camera, if I move the light very fast, you will see a streak in the photo. But with this camera, you'll see a coded streak. You'll see a dash, dot dot, dash, dot in the scene. And that just preserves high-frequency information in the scene, and, as we'll study later, it allows you to reconstruct the original image and recover the sharp features. And we'll study this in the frequency domain. We'll study this using linear algebra. And we'll also study this using pure [INAUDIBLE].
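A toy 1D simulation of that coded-exposure idea, under stated assumptions: a random binary code stands in for the carefully chosen broadband code used in the real technique, and a random scanline stands in for the image. With the all-open box shutter, the deconvolution amplifies noise badly; with the fluttered shutter, it typically stays stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 32                   # scanline length, blur length (made up)
signal = rng.random(n)           # 1D stand-in for a sharp scanline

box_code = np.ones(k)            # ordinary exposure: shutter always open
flutter_code = rng.integers(0, 2, k).astype(float)  # open/close pattern
flutter_code[[0, -1]] = 1.0      # keep the ends of the exposure open

def blur_matrix(code, n):
    """Linear map from the sharp signal to the blurred observation:
    each output sample sums the signal only while the code is 1."""
    A = np.zeros((n + len(code) - 1, n))
    for i in range(n):
        A[i:i + len(code), i] = code
    return A

for name, code in [("box", box_code), ("flutter", flutter_code)]:
    A = blur_matrix(code, n)
    blurred = A @ signal + 0.01 * rng.standard_normal(A.shape[0])
    recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
    print(name, "max reconstruction error:",
          round(float(np.abs(recovered - signal).max()), 4))
```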
So those of you with different backgrounds, hopefully, will be very comfortable with many of these concepts. Now, this is something-- this is a very worrying trend for sensors. So, in 1994, pixels were about nine microns wide. OK? A human hair is about 50 microns, so it's about 1/5 of the width of a human hair. And the way you sense the color is you put a filter, green, blue, green, red. And that's the Bayer mosaic. And we'll study this in detail later. But over time, the size of this pixel is shrinking. Because people want to buy a phone that has five megapixels, but it has a really tiny focal length. So the sensor is shrinking. And the camera makers want to produce a lot of sensors from a given wafer. So you have a wafer of a certain size. You want to slice and dice it and then put it into each of the cameras. The more you slice it, the more pieces you will get, of course. And so, the logic is, if you keep shrinking these pixels, they can keep shrinking the size of the overall image sensor. So, right now, we are down to 2 microns, 1.5 microns. I just saw a paper from Sony, yesterday, where they're claiming 0.9 microns. And what's the wavelength of visible light? AUDIENCE: [INAUDIBLE]. AUDIENCE: 0.4 to 0.7 microns. RAMESH RASKAR: 0.4 to 0.7 microns, so let's write this down because we'll be talking about this all the time. So this is 50 microns-- that's the width of your hair. This is 10 microns, 1 micron. And visible light is here: 500 nanometers, or 0.5 times 10 to the minus 6 meters. Blue is here. Green is here. The [? amber ?] is here [INAUDIBLE]. Now, the pixel itself is 0.9 micron. OK? And the wavelength of visible light is right there, or really, really close to it. And this is a challenge, as we'll study later, because of the limitations of diffraction and other laws of physics. It's getting very challenging to do this [INAUDIBLE] with these types of sensors. And, again, we'll study how people are thinking about this in a very traditional way. And some other teams are actually trying to exploit these sensors in completely unique ways. Now, there's a new design where people are claiming that, by using a pixel that's less than 1 micron-- by the way, compared to this pixel-- let's say this is 10 microns, and this is 1 micron. The amount of light one pixel will capture is reduced by what factor? AUDIENCE: 100. AUDIENCE: 100 [INAUDIBLE]. RAMESH RASKAR: By a factor of 100, right? So, with my phone, in 1994, if everything was the same, if I could take a picture in 100 milliseconds, now how long will my exposure time be? It will be 10 seconds, right? But, clearly, over the last 10 years, the exposure time of a typical photo hasn't changed. Otherwise, you'd have camera shake because of the jitter in your device. So, clearly, the technology has improved: the pixels have 100 times less area to capture light, and, still, we're able to get photos with roughly the same exposure. So, in a way, the light gathering capability has improved by a factor of 100. Not really, but it's about that order of magnitude. And there are a lot of software tricks that are being played to compensate for the noise in such tiny sensors. So we'll study this issue quite a bit. One common theme toward imaging is the two things you need. You need a lot of light. You'll always want a lot of photons to take a good picture. And number two is you would like to have negative light, and this may sound strange.
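Written out, the pixel-shrinkage arithmetic from a moment ago, with the lecture's round numbers:

```latex
\frac{A_{\text{old}}}{A_{\text{new}}}
  = \left(\frac{10\,\mu\text{m}}{1\,\mu\text{m}}\right)^{2} = 100,
\qquad
t_{\text{new}} \approx 100 \, t_{\text{old}}
  = 100 \times 100\,\text{ms} = 10\,\text{s},
```

all else being equal, since the exposure time needed for a fixed photon count scales inversely with the light-gathering area of a pixel.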
But people who work in radio and other fields are blessed with negative energy, while people who work in optics always have to worry about creating negative light. And as we go through the class, you'll realize that if somebody invents negative energy for light, it's like the invention of zero, to just represent nothing. AUDIENCE: Negative what [INAUDIBLE]? RAMESH RASKAR: Negative light. AUDIENCE: Negative light. RAMESH RASKAR: Yes, a photon that has a negative energy. [LAUGHTER] You have an idea? Think about it very hard. Yes, you'll get multiple Nobel Prizes. [LAUGHTER] All right. And then there are some really interesting biological creatures that we'll be studying, animal eyes. So a dragonfly or a krill has these compound eyes that-- this is the simulation of what this guy is looking at. He's producing thousands of images, probably not of that good quality, and they're used for very interesting applications. So we'll study that, as well. So there's the project called Tombo, in Japan, which, I believe, stands for-- I mean, it stands for thin observation module by bound optics. But the word tombo means shrimp in Japanese. What does the word tombo mean in-- AUDIENCE: Dragonfly. RAMESH RASKAR: Sorry? AUDIENCE: Dragonfly. RAMESH RASKAR: Dragonfly. Sorry, dragonfly. So it's a nice play on the dragonfly. I should remember this. It's right on the slide before. And so, again, you have a single sensor. And you have multiple tiny lenses, and this is placed really, really close to the sensor. And the idea is that, if you have an object and you have plenty of tiny lenses right next to the sensor, then, again, it'll form thousands of images. And from that, you may be able to do something interesting. [COUGH] We'll also look at time-of-flight cameras. We saw, last time, one of the [? 3DV ?] cameras that uses time of flight to compute depth. So, how fast does light travel? 3 times 10 to the 8th meters per second. But what does that translate to-- AUDIENCE: One [INAUDIBLE] nanosecond. AUDIENCE: One foot-- RAMESH RASKAR: One foot per nanosecond. And sound? AUDIENCE: One millisecond. RAMESH RASKAR: One foot per millisecond. AUDIENCE: OK. RAMESH RASKAR: Just good numbers to remember. And so we'll study how these cameras work. And it's quite possible that these types of cameras will be available in really cheap devices. I'm talking about devices that are less than $100. So Microsoft's Project Natal is likely to have a depth-sensing camera very soon. And Sony EyeToy is also likely to have one. In fact, Richard Marks, who is a kind of spiritual leader of EyeToy, has told me multiple times that they will come out with a 3D camera anytime now. And they were testing it. They just want to make sure the cost is low enough to make it happen. So this is a very exciting time for imaging. And those of you who think, oh, we already have a billion cameras-- how much is it going to change? You'll be extremely surprised what you will see, just in the next two to three years. And the 3D cameras are also being used in TV studios, where you may want to insert some virtual objects in the scene. With traditional [INAUDIBLE], you just replace the background. Maybe you have a blue screen, and you replace the background. But, here, you can put something in front of and behind the person, with the right occlusion order. So this really is nice. We'll be spending a lot of time on cameras for human-computer interaction.
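The one-foot-per-nanosecond figure is all a time-of-flight camera needs to turn a round-trip time into depth. A toy sketch; the constant, names, and the example value are mine:

```python
FEET_PER_NANOSECOND = 1.0   # light travels about one foot per nanosecond

def depth_feet(round_trip_ns):
    """Time-of-flight range: the pulse travels out and back, so the
    one-way distance is half the round trip."""
    return FEET_PER_NANOSECOND * round_trip_ns / 2.0

print(depth_feet(20.0))     # a 20 ns echo implies roughly 10 feet
```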
Human-computer interaction is a topic of a lot of interest. And we'll look at different types of cameras: cameras looking at people, cameras looking at fingers, such as frustrated total internal reflection (FTIR), how an optical mouse works, and [INAUDIBLE], and so on, different types of motion capture, [? V, ?] and so on. And we'll also look at which types of camera [INAUDIBLE] are interesting, and some of them are just very 20th century. OK? And this is very interesting because if you go to places like SIGGRAPH emerging technologies, where people are really combining the latest generation of algorithms and hardware, you'll see, over time, which projects are interesting and which projects are just stupid and boring. OK? And there are some really, really common ones, and for those of you who are doing this for projects, it's perfectly fine. But if you're thinking about using it for research, then think about it. This is something you should avoid because it's been done to death. The most common one, I will say, is where something moves and the music changes. [LAUGHTER] Just get out of the business. All right? Another one is you're going to write some HCI application that's going to depend on detecting some skin color or something. It's not going to work. OK? You might be able to show it as a demo. But as research, it's not going to work when the lighting changes, or the orientation changes, and so on. Another very common excitement is, I'm going to take a light, put a stereo camera, track it, and create a 3D path out of it. Done to death. There are things you can buy for $100 that'll do it for you. And when there's already a product, it's not worth doing research around the same problem. Again, segmenting a finger or face by putting some kind of a glove or something on for segmentation. Some artistic interactive displays, where I can be behind it and change something. Don't worry about it. A lot of people know how to do it. And the problem with a lot of these demos, as you'll see, is that you can build this demo and you'll be able to get some people excited about it. And that will give you this positive reinforcement that, oh, this is the kind of stuff I should be doing. Because you can impress some people all the time. [LAUGHTER] --and you can impress all the people for some time, [LAUGHS] but not all the people all the time. And, of course, the word was not 'impress' in the original quote. Just remember that. So that's the problem. Let's see. [INAUDIBLE] So what we're going to do in this class is-- AUDIENCE: What's the solution? [LAUGHTER] RAMESH RASKAR: Yeah, exactly. Exactly. Well, [? you'll have to ?] attend the whole semester to see. AUDIENCE: [INAUDIBLE] the [INAUDIBLE] camera lens. RAMESH RASKAR: Right. AUDIENCE: What's the current status of the gaze estimation [INAUDIBLE]? RAMESH RASKAR: Right. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Yes, we'll talk about that, as well, in the class. And there are some really good solutions. But I think gaze tracking is still a pretty challenging problem. So I think that's worth still exploring. And, again, this is no offense to anybody. If you're doing it as a class project, it's perfectly fine. But, everyone, let's just be honest. When we are acquaintances, we praise each other's work, and when we're friends, we criticize. And we're just going to be very, very honest about it and see what we can learn and what we can do next, right? So the solution is to [INAUDIBLE] change the game.
You want to try to build things that are robust, that do the kinds of things that nobody else can do, and that allow you to do things that are not possible with these cameras. All right? So we just want smarter sensors, smarter processing. And what you create should be just magic, not in terms of its application, but in terms of its basic building blocks. And, hopefully, through this class, you'll realize there are hundreds and hundreds of solutions that you could be using, instead of using some cheap camera that's available. And just because you have the SDK for it, just because the device is available, you're using it. Let's get away from that, and let's try to build something that's unique and new. OK? Any questions on this one? All right. AUDIENCE: Question? RAMESH RASKAR: Yes? AUDIENCE: So if you want to build stuff like the [INAUDIBLE] camera or [INAUDIBLE] like changing the motion of the shutter and stuff like this, what components are available? RAMESH RASKAR: That's a great question. So let me just repeat [? Vena's ?] last question. If you really want to do all these things, what are the components that are available? Can I just slap together a sensor, and a light source, and all this electronics? The answer is, it's not always that easy, unfortunately. I wish Canon would just come out with a LEGO for cameras and just sell that. And I'm sure they'd sell millions of those, because then you could create your own things. But, unfortunately, that's not available. At the same time, what I've seen, especially at the Media Lab, is that people are extremely innovative. And they go and pick pieces from different places. I think [? Dan ?] has been looking into that. And SparkFun has been very supportive. And they sell camera modules for $3 to $9 now. You can buy a full-fledged camera for $3. Unfortunately, there's not much you can change about it. But then you can go to companies like Point Grey and buy a little bit more expensive solution, maybe $500 to $1,000, and they'll give you more access to it. So, I mean, all the projects that we do here, we are not building our own sensors. We are not building our own processing chip. We're putting pieces together from different places. And we are definitely on the bleeding edge. So if you want to build something unique, there is no off-the-shelf solution available-- you can't just buy a kit, right now, to do it. I know [? JB ?] has been interested in building a-- what do you call it? What kind of camera, an open-source camera, or-- AUDIENCE: Yes. And I'm actually working with a company in New York City that [INAUDIBLE] camera [INAUDIBLE]. RAMESH RASKAR: Excellent. AUDIENCE: But it's very slow. And it's still [INAUDIBLE]. RAMESH RASKAR: Right. AUDIENCE: It's not this small module that you can program in environments and everything. RAMESH RASKAR: Right. AUDIENCE: It's obviously a good start to go to, to think of [INAUDIBLE]. RAMESH RASKAR: Right. Yes, so there are a lot of efforts in that. So I'm glad-- good to hear. I'm glad to hear about your work. And at Stanford, Professor Marc Levoy and his group have also been proposing an open-source camera architecture, and they also ran into the same issue of how to get some of the chip makers, and lens makers, and all that excited to put this together. So they're just getting started, and I hope that movement will continue. We have a lot of our own plans here, which we'll be disclosing in coming weeks.
And it should be possible to-- There was a time when it was very cool to hack around with your car, right, in the '70s and '80s. You were a cool guy if you fixed your car and put on new tints. Now people say, OK. And then there was a time when it was very cool to build electronics, build cool electronics, do something with robots. People say, that's OK. I can just buy a kit. And what's going to happen now is we're going to get into the physics of these things, not just the mechanical engines and the electronic hardware. We're going to get into the physics of it, whether it's UV light, whether it's chemical elements, or whether it's the sensor. And the next generation of kids who want to be cool are going to build things that just create magic. So I'm really excited about this whole area. In fact, there's a group at Columbia University, Professor [? Nayar's, ?] and he's putting together a-- I forget the name of the project, but it's also kind of a LEGO for cameras. And that's going to be a lot of fun. They're trying to create a whole high school curriculum based on cameras, so it's very exciting. So we'll look at how Jeff Han's project was developed. And remember, his paper came out only in 2005. That's the very first time he disclosed it, and after that, as you know very well, it's everywhere. John King is using it on CNN, to see how Obama is doing versus McCain, and it's just everywhere. A beautiful piece of technology, a very old idea that's used in lots of other environments, and we'll study that. AUDIENCE: Can I ask you a quick question? RAMESH RASKAR: Yes. AUDIENCE: Yes, so if this is a camera, are all touchscreen displays essentially cameras? RAMESH RASKAR: That's a great question. That's a great question. So this one, in a way, is still using a traditional camera. When John King is playing on CNN with all the [INAUDIBLE], what you don't realize is that, behind him, he requires a lot of space to put a camera and a rear projection screen. Right? It's not just walking up to some [INAUDIBLE] and playing with it. AUDIENCE: I mean, so if I have two pieces of glass that I'm pressing on and I'm detecting which wires are actually touching each other, I've never thought of it as a camera before. So I'm thinking about it right now. And maybe it's kind of like a [INAUDIBLE]. RAMESH RASKAR: Certainly. That's the way you should be thinking. A camera is not a two-dimensional sensor. It's zero-dimensional, which is a point; one-dimensional, a line or a curve; two-dimensional; three-dimensional. You'll see eight-dimensional sensors. AUDIENCE: But we usually think of cameras as sensing light, not sensing pressure. RAMESH RASKAR: Mm-hmm. But you can convert pressure into light. So Kimo-- I don't know if he's here-- will come and give a talk about how his device works. And what they basically do is they have created this surface. It looks like Jell-O. And they have built some really, really beautiful demos. I believe it's going to come in a couple of weeks. And what this Jell-O-like device does is, I can put my finger on it, and it transfers that into a highly visible impression. And then, in very simple words, it takes a photo of that, with changing lighting direction. And they can capture millimeter- or even micrometer-scale detail in the objects they take photos of. So, yes, you always need a transducer. A camera is converting photons into electrons.
But you could have other transducers, pressure into electrons or pressure into photons. Yes. AUDIENCE: I think what he's talking about is [INAUDIBLE]. AUDIENCE: Well, I mean, but a camera is sort of some generalized-- we sometimes think of any kind of sensor that can give visual information as a camera. RAMESH RASKAR: Right. Maybe the other way of phrasing it is, what sensors can give you visual information? And then, yes, whether it's resistive, or capacitive, or inductive, it still could give you some geometric information. Although we won't be studying them as such. We'll be studying more based on whatever happens to photons. But, then, going beyond that, how can you create something out of a thin screen? And this is [? Miparg. ?] That was his class project, last year, in this class. And he started thinking about it in the class, and he built an initial prototype for his final project. And this, of course, has now become a SIGGRAPH paper, part of his master's thesis. He won the student research competition this year, just out of this class, last year. And we'll study how that works. And then we'll study things like this, an [? Anoto pen, ?] which has a grid of dots that are slightly displaced with respect to the centers of their grid positions. And every block of, I believe, six by six is unique. And just with this grid of six by six, you can create sufficiently many unique codes that, if you created paper with this code and just laid it out, it would cover half the land area of the US. So you can print many, many, many papers with these codes printed on them. And the pen has a camera that looks at this six-by-six code and figures out its unique location in a coordinate system that could span 1,600 kilometers by 1,600 kilometers. So it's really unique. And then, this way, we know where the pen was, on which page, and at which x-y coordinate. And this is a way to basically record the strokes that you [INAUDIBLE]. Yes, [? Jim? ?] AUDIENCE: So [INAUDIBLE] 200 people, and more than half are lawyers. There are 250 patents on this. So it's interesting to see the context of this. So it's very difficult, actually, to be able to go beyond that [INAUDIBLE] or what [INAUDIBLE]. RAMESH RASKAR: Right. AUDIENCE: So [INAUDIBLE]. RAMESH RASKAR: But I think [INAUDIBLE] has a license. AUDIENCE: Yes, yes. You have a license, but you cannot go beyond, especially in terms of hardware, what-- RAMESH RASKAR: Exactly. AUDIENCE: --[INAUDIBLE] maybe you would like to [INAUDIBLE] things. RAMESH RASKAR: Right. AUDIENCE: So [INAUDIBLE]. RAMESH RASKAR: So maybe that should be a project, how to get around the patent. AUDIENCE: [INAUDIBLE]. [LAUGHTER] RAMESH RASKAR: How to invent technology that can [INAUDIBLE] a patent. [INAUDIBLE]? AUDIENCE: This is it. RAMESH RASKAR: Oh, you have one. Ah, excellent. Do you mind passing it around? AUDIENCE: No. RAMESH RASKAR: Yes, just pass it around. All right. And then we'll talk about [? Bokode, ?] which is a project with [INAUDIBLE] and Professor [? Hiura, ?] who was here last year. And the idea is, how can we exploit properties of cameras for objects that are very far away? How can we add intelligence to the world, so that the world is more compatible with the billions of cameras people are carrying?
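Backing up to the pen's dot pattern for a second: a hedged back-of-the-envelope on its capacity. The actual encoding is proprietary, so the four-positions-per-dot assumption is mine, chosen only to show the order of magnitude:

```python
positions_per_dot = 4            # assumed: each dot nudged one of 4 ways
dots_per_window = 6 * 6          # the six-by-six window the pen sees
unique_windows = positions_per_dot ** dots_per_window
print(f"{unique_windows:.2e}")   # ~4.72e+21 distinct six-by-six codes
# At sub-millimeter dot pitch, even a small fraction of this code space
# tiles a coordinate system of millions of square kilometers, which is
# the scale behind the "half the land area of the US" figure.
```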
And I won't go into the details of Bokode, but the basic idea is to take a point that's in sharp focus-- something that is a size of three millimeters by three millimeters-- take an out-of-focus photo [INAUDIBLE], and convert a circle of confusion into a circle of information. And we'll look at motion capture solutions for HCI, some other solutions we have built here for inverse optical motion capture, and some other HCI devices. So we'll spend a lot of time on that. So, an announcement here. [? Shahram Izadi, ?] who's one of the leaders in using cameras for HCI, is giving a talk, actually, on Monday-- this changed-- on Monday at 4:00 PM, I believe, in the [INAUDIBLE] Room. He's the inventor of-- If you're familiar with Microsoft Surface, which is a tabletop surface with a projector and camera underneath, he built a variant of that, where they put in a screen that is actually not diffuse, but switchable. It switches between a diffuse screen and a transparent screen, electronically. So, in one frame, you're projecting the image on it, and you can see it on the tabletop. In the next frame, it switches to become completely transparent. And the camera underneath can see the world through the screen, and it can detect gestures on top. And then, in the next frame, it goes back to being a diffuser. So he'll be talking about that. He just received a TR35 Award from Technology Review. So he's here in town for that. And he'll give this presentation on Monday at 4:00 PM. I believe I sent out an announcement, but I'll send out one more. And our own Pranav Mistry, who built this beautiful SixthSense display, also got a TR35 Award. So, for those of you who are not familiar with his work, really great HCI projects can lead to TR35 awards. [LAUGHTER] All right. Then we'll spend quite a bit of time talking about scientific imaging and computational imaging in the sciences. And this is something that I've heard a lot of times: new instruments lead to new discoveries. And in the 20th century, the most important instrument was the computer, right? And what we might see in the future, in my biased opinion, is that the most important device will be some really important imaging mechanism. We don't know what it will be, but it could be some permutation or combination of what we are studying here. So computational imaging, as we mentioned, has just transformed our world. I mean, if you think about just Nobel Prizes-- my background is in computer vision and graphics, and there have been no Nobel Prizes in computer vision or graphics, and not even Turing Awards in computer vision or graphics. Pretty sad. But if you think about imaging, there have been tons of Nobel Prizes for purely imaging mechanisms, and we'll be studying them: phase contrast microscopy, a lot of CT scanning, and MRI, and so on. I don't know why important fields like graphics and vision are not getting as much attention, because we are solving very important problems, as well. But maybe it's not being pitched right, or there's something more there. Anyway, so we'll study computational imaging, in terms of medical imaging, astronomy, applied physics, and biology. And a lot of the ideas are [INAUDIBLE] applicable across different fields, whether it's photography, HCI, computer vision, and so on-- so tomography, confocal microscopy, and so on. All right. So let me switch over to this topic.
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009 | Lecture_2_Modern_optics_and_lenses_raymatrix_operations_context_enhanced_imaging_Part_2.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: All right. So let me switch over to this topic. And let's see some cool demos. So this particular project that Kimo Johnson and [? Ted ?] Adelson built, called the retrographic sensor, also won the best demo award at CVPR. So we saw this chart last time. The industry will be selling 2.5 billion sensors per year within the next year or two. And this is the forecast made in 2006, 2007. So it might have changed. And if you see the integration of cameras in mobile phones, a natural question is, will we even think about our digital still camera as a standalone entity in coming years? And if you think about the wristwatch, I think most of us don't wear wristwatches anymore. Knowing what time it is is not a difficult problem anymore. I don't need to spend a lot of money, or tether myself to an additional device, to know what time it is. And will we be in a similar situation with imaging? Will we even have a standalone camera for any purpose? So I just want to take a couple of minutes to see what opinions you have. According to Nokia, more than 50% of the people who bought the N95 got rid of their point-and-shoot camera. And Nokia sells about a million phones every day. They sell 400 million phones a year. So they sell more than a million phones every day. These are staggering numbers. Which is the largest camera company in the world? AUDIENCE: Nokia? RAMESH RASKAR: Sorry? AUDIENCE: Nokia [INAUDIBLE]. RAMESH RASKAR: Nokia, right. It's not a traditional camera company. And the same answer for, which is the largest computer company in the world? AUDIENCE: Flextronics. RAMESH RASKAR: Oh, yeah, Nokia doesn't build anything of their own, of course. They OEM everything. But still, they're the largest camera and computer company in the world as well. So, all right. When do you think the camera as a standalone device will completely disappear? AUDIENCE: It'll never completely disappear. RAMESH RASKAR: And by completely, I mean less than 0.1% of the population carries it, basically. It's like asking whether the film camera will disappear. And I believe it's gone. AUDIENCE: Oh, but they're going to be gone in different ways. So camera and digital cameras are going to be gone. And then, they're going to be everywhere. And film cameras are gone in that nobody uses them. AUDIENCE: What? RAMESH RASKAR: I'm sorry [INTERPOSING VOICES] [LAUGHTER] RAMESH RASKAR: I'll give you an invented thing. You don't want to answer the film camera question. You'll love it. But when will the digital camera disappear altogether? You don't have anything against digital cameras, though, do you? AUDIENCE: Well, no, I don't have anything against digital cameras. But it depends on your definition of a digital camera. If you're just talking about point-and-shoot cameras-- RAMESH RASKAR: A standalone camera-- a device that's built specifically for the purpose of taking photos. AUDIENCE: So if you're not a professional photographer-- for the average consumer, it's probably just going to disappear maybe in 10 years? RAMESH RASKAR: 10 years. AUDIENCE: Five years? RAMESH RASKAR: All right. And does anybody think it'll happen in less than 10 years? Let's take someone over there.
AUDIENCE: Yeah, I think around five [INAUDIBLE]. RAMESH RASKAR: In five years? There'll be no standalone consumer digital cameras. AUDIENCE: Yeah, I think in just a couple of years, because if you think about it-- RAMESH RASKAR: In two years? AUDIENCE: Yeah. RAMESH RASKAR: Wow. [LAUGHTER] Let's call all the camera companies. [LAUGHTER] [INTERPOSING VOICES] AUDIENCE: --she's already switching over to emphasizing video. And then also, SLRs are starting to do video also, because they're not just normal cameras anymore. RAMESH RASKAR: Right. But I meant, video cameras are still cameras, the same thing. I'm just saying, will we have standalone devices whose only purpose is to-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Yes? AUDIENCE: There are cameras that allow you to upload your [INAUDIBLE]. RAMESH RASKAR: Right. AUDIENCE: And there's no reason why it just wouldn't be on a computer, the actual [INAUDIBLE] device [INAUDIBLE]. RAMESH RASKAR: Exactly. AUDIENCE: So basically, it's converting both the cameras [INAUDIBLE], not just the acquisition. RAMESH RASKAR: Right. AUDIENCE: And the other [INAUDIBLE]. RAMESH RASKAR: Right. You're right. It's going to be some kind of fusion. But is the camera going to become a phone? Or is the phone going to become a camera? AUDIENCE: [INAUDIBLE] [LAUGHTER] RAMESH RASKAR: I think in the back. AUDIENCE: I would say a century. RAMESH RASKAR: A century. Wow. AUDIENCE: Yeah, the same way that computers brought us the paperless office and the end of books. And we no longer have a stage, theater, or cinema. We've been predicting that for a long time [INAUDIBLE]. RAMESH RASKAR: So that's a very interesting comment. What's unique about paper or watching movies that may or may not apply to a device? AUDIENCE: But in response to what you said, the need to capture images won't disappear. But the fact that there is going to be one device whose sole purpose is to capture images will disappear. And it will be a device that's either a phone, and captures this, and [INAUDIBLE], and [INAUDIBLE]. AUDIENCE: Yeah, the way we print things has changed. But we still have [INAUDIBLE]. [INTERPOSING VOICES] AUDIENCE: That's why you're doing the image, though, because you're shooting stuff on a screen because you want the image quality. So maybe you [INAUDIBLE] enough or simulate it that [INAUDIBLE]. RAMESH RASKAR: Yeah, but it is a hypothetical scenario. If I can buy a phone whose image quality is sufficiently good compared to a standalone still camera-- that's the question. Yeah, assuming that the image quality is going to be good enough. Let's ask somebody down here. AUDIENCE: Sorry, go. RAMESH RASKAR: Yeah. AUDIENCE: Yeah, I actually really agree with what he said. I actually think it'll never really disappear, for a couple of reasons. I think-- RAMESH RASKAR: 0.1% of the population. AUDIENCE: All right. So I actually will be curious today how many people carry phone cameras. I'm curious. RAMESH RASKAR: One billion people carry a camera in their pocket. AUDIENCE: But so one reason, I think, is that there's not only image quality. There's the ergonomic interaction. AUDIENCE: Yes. AUDIENCE: And I think the reason why a lot of people carry DSLRs is not just for the image quality, but for having things available, and buttons, and having human hands that are certain sizes, and people [INTERPOSING VOICES] RAMESH RASKAR: Sometimes, it's heavy enough and something to look cool with and so on.
[INTERPOSING VOICES] [LAUGHTER] AUDIENCE: So-- RAMESH RASKAR: Is it for function or just for [INTERPOSING VOICES] AUDIENCE: No, the heaviness does add a function, because it stabilizes the image a little bit. AUDIENCE: Yeah, but I also think that there's something else too. I'm actually unconvinced in general that devices converging is necessarily good. Friends of mine that I tell this to say that I'm an exception, because I carry two cell phones. And I think I'm carrying three cameras right now. RAMESH RASKAR: Yeah, two cell phones will never converge. [LAUGHTER] AUDIENCE: Well, the amount of phones that have two SIM cards [INTERPOSING VOICES] [LAUGHTER] RAMESH RASKAR: [INAUDIBLE] AUDIENCE: No, I think there are certain advantages to having, actually, redundancy in devices and things. So I definitely agree that digital cameras might have WiFi connectivity or other things. But I suspect that a lot of people will carry separate things: one whose primary purpose is imaging, and a separate thing itself. RAMESH RASKAR: Right. AUDIENCE: And then, they also have a camera in it. RAMESH RASKAR: Well, there will always be a film camera in [INAUDIBLE] house. [LAUGHTER] Even 20 years from now. But we're talking about 0.1%. So let's take a quick vote and move on. Let's do the reverse. So we'll say, two years from now, how many of you think a standalone imaging device will disappear? Kevin? No? All right. We have one. AUDIENCE: No. RAMESH RASKAR: Do you want to say very quickly why? Because you-- [LAUGHTER] AUDIENCE: It would be-- so far, the image quality has been the problem. RAMESH RASKAR: Right. AUDIENCE: There's just not quite enough reliance on the sensors that you can fit into a phone. But I think people are finding ways to get around that with software and with optics. RAMESH RASKAR: Right. Oh, by the way, this-- AUDIENCE: And with the sensor, actually. There's a whole new generation of sensors coming out that allows you to [INAUDIBLE]. RAMESH RASKAR: Remember, right now, standalone imaging devices are already only about 10% of the total. This is the optical mouse, which is not really a camera. We can ignore that. But mobile phone cameras, [INAUDIBLE] cameras, gaming, and so on. So it's already 10%. And we're talking about 0.1%. So let's move on. And just, let's take a quick poll. So two years, we have only [INAUDIBLE]. AUDIENCE: Quick question, I guess. Is that even a decent metric? Would it be better to say, how many photos are taken? Because a lot of people carry cell phone cameras, but are using the [INAUDIBLE]. My mom doesn't use her cell phone. RAMESH RASKAR: Right. AUDIENCE: But she has one. AUDIENCE: Yeah. RAMESH RASKAR: The next class, we'll talk about, how long will it take before photos disappear? But-- [LAUGHTER] --that's the next conversation. [LAUGHTER] So it's like asking, how many people write with their own handwriting? It's the same situation we'll see very soon. But that's for the next class. So two years, we have only one. Five years? Three or four. 10 years? 15 years? So I guess almost everybody because-- and never? Wow. Wow. All the power to you. AUDIENCE: Let me say one additional comment. We're thinking about this population here. But in most of the world, people cannot go and afford a separate camera. They're going to buy a phone. RAMESH RASKAR: A similar point. An excellent point. AUDIENCE: [INAUDIBLE] 1%. RAMESH RASKAR: Yeah. AUDIENCE: It doesn't change that much. RAMESH RASKAR: When the remaining five billion come on board. AUDIENCE: Right.
RAMESH RASKAR: Yeah, that's a good point, actually. AUDIENCE: So in camera phones. RAMESH RASKAR: Right. Yes, Jeremy? AUDIENCE: Yeah, I have a prediction. I think that people will still have many devices, like an Apple device. And if you look at Steve Jobs, you have the iPhone with a camera and the iPod Touch with a camera. RAMESH RASKAR: Right. AUDIENCE: Now they have an iPhone without [INAUDIBLE]. RAMESH RASKAR: Right. AUDIENCE: So it's interesting. RAMESH RASKAR: That's just a marketing gimmick. But-- AUDIENCE: Yeah, but it's interesting to think of marketing in this respect. And I think the point is they [INAUDIBLE]. RAMESH RASKAR: Right. AUDIENCE: Some people can [INAUDIBLE] device. RAMESH RASKAR: Right. AUDIENCE: And the mobile phone makes sense because communication is really [INAUDIBLE]. RAMESH RASKAR: Yeah, very soon, you'll be able to buy a burger with a camera in it. [LAUGHTER] That's your-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: What's the toy called for chicken? The happy toy. AUDIENCE: The Happy Meal. RAMESH RASKAR: The Happy Meal. [LAUGHTER] And you'll know what you ate and how many calories [INTERPOSING VOICES] [LAUGHTER] RAMESH RASKAR: It's a lot. But-- [INTERPOSING VOICES] AUDIENCE: It will trace the burger through your digestive system? RAMESH RASKAR: Yeah, it will show you how happy you were and-- AUDIENCE: And then, when you get it out, it's [INAUDIBLE]. AUDIENCE: Great. RAMESH RASKAR: Right. [LAUGHTER] AUDIENCE: Yeah, in this graph, the number [INAUDIBLE] has also increased, in roughly the same proportion as the number of mobile phones. RAMESH RASKAR: Exactly. It's mostly mobile right now. AUDIENCE: Yeah, but see, the network of digital cameras is really part of the [INTERPOSING VOICES] RAMESH RASKAR: It's not growing at the same rate, unfortunately. So-- AUDIENCE: [INAUDIBLE] you can't buy a phone without it. It's hard to find a phone without a camera. So it is unfair to look at that. RAMESH RASKAR: But that's the whole point of this class. I think we are at a stage where people are saying, oh, cameras are cheap enough. By the way, do you have an idea of how cheap a camera is today? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: For a device maker, how much does it cost to add a camera? If they wanted to attach a camera, how much does it cost? AUDIENCE: $1 [INAUDIBLE]. RAMESH RASKAR: $1. Anybody else? AUDIENCE: [INAUDIBLE] $5. RAMESH RASKAR: It's $0.20. AUDIENCE: What? [CHATTER] RAMESH RASKAR: So when I said it will come with your burger, I'm not joking. All right? Yeah, of course, it doesn't have power. It has electronics. It has optics. It has sensors. And it has a compression engine. All right? It's $0.20. So things are changing rapidly. And to answer your question-- OK, you cannot buy a device without a camera in it anymore-- the problem is, right now, we don't have enough services that go on top of those cameras. But imagine if people in this room say, wow, a billion people have a camera. Let me try to see if I can build something that really exploits that. Or let me see if I can change the game a little bit, add a few extra features-- maybe add a thermal sensor to it, maybe add UV light to it, whatever it is. Maybe I'll make it into a camera that, instead of $0.20, now costs me $0.23. It'll completely change the game. It's the same way we get addicted to our devices and just can't live without them anymore.
Hopefully, cameras will play a similar role, that you will be forced to use the camera, and it won't be just an additional feature that never gets used. So-- AUDIENCE: Did you hope this or [INAUDIBLE]? RAMESH RASKAR: About what? AUDIENCE: That you'll be forced to use a camera? RAMESH RASKAR: Not forced, but you'll be encouraged. How about that? [LAUGHTER] It's like saying, are you being forced to use a credit card? AUDIENCE: Yeah, that's what I actually felt like [INAUDIBLE]. RAMESH RASKAR: Yeah. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: That's another discussion I'm going to have [INAUDIBLE]. [LAUGHTER] All right. So, taking notes. As you can see, a lot of our discussion will not be on the slides. So Sam's taking the notes today. And the slides will be online, so you don't need to capture what's on the slides. But you do need to capture the information that isn't there. For example, JB had a comment on a new camera, or the other comments in the back. So you want to capture that in the notes, plus anything I mention. And also, summarize some of the question and answer sessions. And we'll have some demos and [INAUDIBLE] pictures. And I'll do some doodles on the board, and take a photo, and so on. And we'll have one student assigned for each class, if you're taking it for credit. And I encourage you to just use a laptop, because you can, one, take notes faster, and you can also look up some information very quickly. And then, you can work with me over the weekend. And in a day, we'll post it on Stellar. Is everybody familiar with the Stellar website for MIT? How many of you know about the Stellar webpage? Everybody is familiar? And if you don't have access-- if you're not taking the class for credit, then you may not-- I think you should be able to access it. It's open to anybody in the world. So you can even send it to your friends, and they can look at just the slides. They won't get a lot of the other information, of course. But if you cannot access it, just send me an email, and I can explicitly add you as a guest on the Stellar website. The one benefit of being added explicitly is that, sometimes, the announcements go only to the people who are taking the class for credit. It could be about exams or other things. And if you want to participate in that, then you'll get those emails as well. So assignment one was about lighting. And that's what we'll be talking about today, very different ways of capturing light. Does anybody have questions? Because there are a few things you have to do. You have to submit your source code. Or if you used a GUI-based approach, Photoshop or something, then you need to show me a lot of intermediate results, input and output images. The images should be captured by yourself, with your own camera. For the first assignment, maybe you can use a cell phone camera if you don't have one. But see if you can buy a cheap camera that has a manual mode. You can buy a pretty good camera for less than $150. I recommend the Canon A series. It usually has a manual mode. And another benefit of the Canon series is that there's a very good SDK, so you can control the camera from your laptop or from your PC. So I recommend buying the Canon A series. And then, you must create a webpage where you will host your solutions for your assignments. And then, you should send me a link. What you upload on Stellar is just a link. You're not going to upload all the images and so on.
And then, it's perfectly fine to use software you find online. It's perfectly fine. Don't use your colleagues' software in this class, or software people wrote in this class last year. But you can go online, go to a MATLAB repository, and if they have a function for doing x, y, z, it's perfectly fine to use it. And then, we have a Flickr group page. I believe you got an email from me about the Flickr group page. And all your final results-- sorry, input and final results-- will be on the Flickr page so everybody can see them. And we can write comments about each other's work. And this will be a good trial run towards our final project, where we will be doing a lot of commenting on each other's work, giving a lot of feedback, and critiques, and so on. So three things: Stellar, your own webpage, and the Flickr page. Any questions? All right. So this was simple. And if you have more time, you probably want to do in your assignment something that looks like this, where you can do some beautiful lighting. All right. So some of the assignments, I think we already discussed, will look like this. And let's start with some interesting ways you can think about illumination. So here's a photo from One Kendall Square. This is [INAUDIBLE] here and so on. And even if you have been to Kendall Square, this photo-- it's somewhat difficult to understand what's going on. You don't even know how many stories this-- I guess it's the [? Draper Labs ?] building-- has, for example, from this photo, and so on. So imagine if, magically, you could convert this photo into this photo, from here to here. What are some tricks you can use to do that? AUDIENCE: Wait 12 hours? [LAUGHTER] RAMESH RASKAR: [INAUDIBLE] AUDIENCE: [INAUDIBLE]. There's Photoshop's Shadow/Highlight feature. RAMESH RASKAR: Yeah, you can try to stretch the [INAUDIBLE]. Or you can try to take a really long exposure photo. But yeah, this is really, really dark. There's almost nothing there. AUDIENCE: [INAUDIBLE] another photo, you can [INAUDIBLE]. RAMESH RASKAR: Exactly. Who's saying that? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: All right. Good. So you just cheat. This was actually taken when I was working at MERL. And this would be from my office, actually. It's a much nicer building than the office I have right now. [LAUGHTER] AUDIENCE: This campus is nice. RAMESH RASKAR: Sorry? AUDIENCE: This campus is nice. RAMESH RASKAR: Yeah, I like the brick texture. It's beautiful. [LAUGHTER] And just take the photo after 12 hours, in the morning. Actually, I took a photo at 6:00 AM when there were very few cars on the street. And somehow, you want to fuse these two photos into a photo that looks like that. So the static parts are captured from daytime. And all the dynamic parts, with the cars, and the Christmas lights, and so on, are captured from the nighttime. So you want to make some intelligent decision about which pixel comes from where. AUDIENCE: The color temperature is off. RAMESH RASKAR: And the color temperature is off. And you ought to make some artistic decisions about that. And you'll always have some issues, like this one. I think this guy-- I waited all night long, but he was part of my class. He just didn't move. So you can see that. [LAUGHTER] You can see that. So it's a context-enhanced image. And we're really exploiting natural illumination to capture a new photo. Yeah? AUDIENCE: So how was this [INAUDIBLE] to you what [INAUDIBLE] algorithm or-- RAMESH RASKAR: Yeah, it has to be an algorithm. So this is what you can do, just one approach.
But maybe in your assignments, you'll come up with another approach. In this case, you can just compute the scene contrast. You can just take a five-by-five window and find pixels that have very high variance. And you can create a stencil like this that says, oh, all these pixels look interesting. And this part here, clearly, there's nothing. So I should not take that from the nighttime image. And then, I can use the stencil. The black pixels will come from daytime. And the white pixels will come from nighttime. And if you just combine those two using that stencil, you get a reasonable image already. But it will look extremely ugly, because the transition from here to here, from nighttime to daytime, will not look very natural. So what can you do? Any solutions? AUDIENCE: Blur the mask [INAUDIBLE]. RAMESH RASKAR: Blur the mask. You can do that. But then, the edges of the building will get very fuzzy, because it looks like part of the building was in the daytime and part of the building was in the nighttime. AUDIENCE: Could you just start-- could you just say how you made the mask again? RAMESH RASKAR: This mask? AUDIENCE: Yeah. RAMESH RASKAR: So in the simplest case, you can just ask whether the pixels are bright enough. That is, imagine zero to 255: anything over 150, mark it as [INAUDIBLE]. That's the simplest thing. But it's not so good. A slightly better way of doing that is you go in a window of, say, five-by-five pixels. And if all the 25 pixels are the same, then maybe it's not very interesting. Here, all 25 pixels are the same. Or in this region, all 25 pixels are the same. But if there's enough variation between them-- if I just take the variance or the standard deviation of those 25 pixels and it's high enough-- it looks like there's some information in those 25 pixels in a five-by-five window. And you mark the center of that five-by-five window as something that's interesting. Yep? AUDIENCE: One thing you could do for [INAUDIBLE] the buildings is that you would shrink the components that are not [INAUDIBLE]. RAMESH RASKAR: So you can use some graph cut approach and find some connected components and so on, like that? AUDIENCE: [INAUDIBLE] a little bit, you could have taken many successive photos and just, between them, detected motion or [INAUDIBLE]. RAMESH RASKAR: Great idea. Great idea, especially for finding something that's moving. [INTERPOSING VOICES] RAMESH RASKAR: But you still have to take some daytime images to fill in the static parts that are dark. AUDIENCE: Yeah. RAMESH RASKAR: That's right. In fact, you have to do that for cars, because for some cars, the center of the car is too dark, but it's still important to us. So this is what simple processing would do. But maybe you want to do something smart. So this is just a zoomed-in version of that. It's trying to capture-- all these windows did not have lights on, you see here. But there's one window with a light on. I think this is zoomed in on the [? Biogen ?] building, the Amgen building. And it captures that. But then, it creates this ugly artifact around that. So you want a very nice transition between the two. So the solution for that is actually called gradient domain fusion. And that's just a fancy way of saying that, instead of worrying about absolute intensities, I'm going to worry about differences between every pixel. So I'm going to take pixel x plus 1 and pixel x and find the difference between them. And I'm going to blend those.
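A minimal sketch of that variance stencil and the hard cut it produces, assuming the two registered exposures are float arrays; the window size and threshold are tuning knobs I made up, not values from the lecture:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def interest_mask(night_gray, window=5, threshold=20.0):
    """True where the local window of the night image has high variance,
    i.e. where something interesting (lit windows, car lights) is."""
    mean = uniform_filter(night_gray, window)
    mean_sq = uniform_filter(night_gray ** 2, window)
    variance = mean_sq - mean ** 2
    return variance > threshold ** 2

def naive_fusion(day, night, mask):
    """The hard stencil cut: white pixels from night, black from day.
    This is the version that looks ugly at the transitions."""
    return np.where(mask[..., None], night, day)
```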
So this is what would happen if you just blend the intensities. And this is what would happen if you blend the differences. Now, if you blend the differences, which is just the forward difference, then the image that you get will also be made up of forward differences. So to go from a forward difference back to the signal, we must do integration. And in 1D, it's very easy. And we'll study that later. But in 2D, it's quite challenging. So it's basically doing a 2D integration from all the forward differences and putting the two together. Again, I'm not going through the details. We'll study this more. It turns out you can create very aesthetically pleasing fusions or blendings between different parts. Yeah? AUDIENCE: What if-- this is just an example of night and day. What if you multiply the images and then normalize them? RAMESH RASKAR: You could try different things. AUDIENCE: Because the dark parts would come through. And the light parts from the day would also come through. And they might actually merge a little bit [INAUDIBLE]. RAMESH RASKAR: Right. So, Nina, these are some of the things-- it could work in one scenario, but not the other. You're right. And this is exactly what I want you to do in the assignment. The assignment gives you a very simple and very easy problem. But instead of taking pictures under two different flash and [? table lamp ?] conditions, you could take pictures at two time instances, one in the daytime, one in the nighttime, or under two different polarizations, or anything, and then fuse them. Instead of fusing color channels, you can fuse intensity channels. Or you can fuse the forward-difference channels and so on. One great piece of software-- it's on the website-- that I would like you to use for testing is called HDR Shop. And it's available for free. It's from a group at USC led by Paul Debevec. And I highly recommend it, even if you have Photoshop. Even if you're doing MATLAB assignments, you can download HDR Shop. And it allows you to do some [INAUDIBLE]. You can sample images [INAUDIBLE]. You can subtract images. You can do forward differences. You can do volume densities and all those things. So it's a really good piece of software for doing arithmetic on images. And the other benefit is that the arithmetic is done in floating point. It's not trapped [INAUDIBLE]. So let's say you want to multiply two images, and the intensities are at 200 and 200. That's 40,000 [INAUDIBLE]. Photoshop won't support that, but HDR Shop will. [INAUDIBLE] So, sometimes, even before you write a MATLAB program, you can very quickly do some visual inspection with this and [INAUDIBLE]. Another thing a lot of people forget about, when you're doing assignments or projects that involve images: don't work on the whole image. Work on a very tiny part of the image. Just take a 100-pixel-by-100-pixel crop-- at the original scale, but just a crop. So you have some two-megapixel images. Don't try to run your code on a two-megapixel image. It might take forever. Just take a very tiny piece from a very interesting part, and run all your code on that. And when you feel that all your procedures are working [INAUDIBLE], then you can go back to maybe some larger [? original ?] image. So when I was working on this project, actually, I was very inspired by surrealism-- art surrealism-- where René Magritte and others came up with The Empire of Light.
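Returning for a moment to the gradient-domain step: in 1D, the integration he mentions is just a cumulative sum. A sketch on one scanline, with made-up function and variable names (2D needs a Poisson-style solve, which the class covers later):

```python
import numpy as np

def blend_gradients_1d(day_row, night_row, mask_row):
    """Blend forward differences instead of intensities, then integrate.

    day_row, night_row: float arrays for one scanline of each exposure.
    mask_row: boolean, True where the night image should dominate.
    """
    g_day = np.diff(day_row)
    g_night = np.diff(night_row)
    # Pick each gradient sample from whichever image the mask selects.
    g = np.where(mask_row[:-1], g_night, g_day)
    # Integrate: anchor the first sample, then cumulatively sum.
    start = night_row[0] if mask_row[0] else day_row[0]
    return np.concatenate([[start], start + np.cumsum(g)])
```

Because the seam now lives in the gradient domain, the reconstructed scanline has no abrupt intensity jump at the mask boundary.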
So when I was working on this portrait, actually, I was very inspired by surrealism, where René Magritte and others came up with The Empire of Light. I'm sure a lot of you have seen this painting, where he created this very stark, disconcerting composition. You have bright daylight here, but the bottom part of the image is a nighttime scene. And he played a lot with this juxtaposing of disconcerting elements. And the question is, can photography create something that is surrealistic--not always photorealistic, but surrealistic? And when we come to a discussion about whether photos will survive in the future, this is exactly the question that painters asked themselves when cameras emerged. There was a lot of interest in creating photorealistic paintings. And when the camera came around, they all scratched their heads and said, wow, we need to move on and do something else. We cannot create photorealistic paintings anymore. And then we had pointillism and surrealism. And now it's just abstract--we don't even care about representing geometry, and reflectance, and so on. And the question is, should cameras worry about capturing intensity, and shapes, and geometry? When I check on Flickr, there are hundreds of images of buildings like this. What's so great about taking yet another photo of yet another building? What I want to create is images that are somehow more expressive and more real. So that's another debate, whether photos [INAUDIBLE]. I personally, by the way, have completely given up on cameras. I barely take photos when I travel and so on. It's just one of those things. I'm well beyond the stage of taking photos. And I rarely get impressed with beautiful photos. These look nice, but I never get impressed with that. And hopefully, the remaining six billion people will also start challenging the notion of the camera as a device that captures photorealism. People will challenge that notion just like the painters did during the Renaissance and [INAUDIBLE]. So any thoughts, [INAUDIBLE]? AUDIENCE: Huh? RAMESH RASKAR: Any thoughts? [INTERPOSING VOICES] AUDIENCE: You've never seen a photo done by a photographer--let's say, Richard Avedon or Ansel Adams--that you really liked, that [INTERPOSING VOICES] RAMESH RASKAR: It's beautiful. But I'm not impressed. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Can you say it louder? She said, what about Ansel Adams and so on? Yeah, back then-- [INTERPOSING VOICES] RAMESH RASKAR: --when they did it, it was impressive. But now, if somebody shows me something as good as Ansel Adams, it's great, but it's not interesting. AUDIENCE: Well, you should look at some of Richard Avedon's more recent works. He does some pretty creative things. There's one where he used a skeleton in some of his photos. And it's very serious. RAMESH RASKAR: Yeah, so the content sometimes is impressive. AUDIENCE: Right, yeah. RAMESH RASKAR: But not the art of photography. Comment back there? Somebody had a comment back there? All right. AUDIENCE: I had one [INAUDIBLE]. I'm not sure you can separate the technique from the content. So this is really a distinction. But I think they can come together, creating and influencing each other. RAMESH RASKAR: You definitely need better and better tools and technologies to be able to create something that's really expressive, really beautiful and creative. But there was a time when photographers could impress us based on their techniques alone. The guy with the biggest camera was called a professional photographer--not the guy with the best eye, or somebody who has the patience, and aesthetics, and all of those things. And hopefully, now we have separated the two.
Just because the guy has the best gear doesn't mean he's a great photographer anymore. And that's what I mean by-- yes? AUDIENCE: There's a great book on that topic as well, going back to old master techniques. It's called Secret Knowledge, by David Hockney. I don't know if everybody else has read that [INAUDIBLE] all the different [INTERPOSING VOICES] RAMESH RASKAR: Can you just repeat? Is it David Hockney-- AUDIENCE: David Hockney. It's called Secret Knowledge. And it goes through all-- he worked backwards, based on weird artifacts in their paintings, to figure out how they would make weird optical devices that could create the original artwork. And that goes back to the same concept of technique, and artists, and where those two [INTERPOSING VOICES] AUDIENCE: So is he from Ricoh? RAMESH RASKAR: What's that? AUDIENCE: From Ricoh? RAMESH RASKAR: What do you mean? What? AUDIENCE: The company Ricoh, he's from that. RAMESH RASKAR: What? David Hockney. AUDIENCE: The painter. RAMESH RASKAR: Yeah. AUDIENCE: He's a famous painter. RAMESH RASKAR: Oh, sorry. Who's the guy from Ricoh who did all the same analysis? But this is the one where they looked at-- AUDIENCE: It's an interesting read if anybody's interested in the history of how these optical techniques played into the role of artists [INAUDIBLE]. It's controversial [INTERPOSING VOICES] RAMESH RASKAR: Yes, exactly. I think we're talking about the same thing. It's very controversial. And he claims that the painters knew about perspective and so on even before the Renaissance. AUDIENCE: Yeah, there's a lot of interesting backwards scientific discovery that he did based off artifacts in perspective. AUDIENCE: Yeah, I think it's David Stork. RAMESH RASKAR: David Stork [INTERPOSING VOICES] AUDIENCE: --Ricoh. RAMESH RASKAR: Exactly. AUDIENCE: Yeah. RAMESH RASKAR: Is that the same thing we're talking about? AUDIENCE: Different but maybe related? AUDIENCE: I think they do similar things. David Stork writes about optical projections for paintings. So it's probably similar. RAMESH RASKAR: So David Stork and David Hockney. All right. My laptop is recording again. That's fine. So yeah, I think that's the great challenge we have: the same way the camera made photorealistic paintings boring and basically put that breed of painters out of business, what will be the next imaging paradigm that does that to this camera--that makes these cameras boring, obsolete, and out of business? Because there's only so far you can go with a device that records light and tries to be realistic, with a good signal-to-noise ratio, converting photons into electrons. That's not what humans do. We don't see with our eyes at all. We just record with our eyes, and we see with our brain. And right now, there is very little separation in our imaging devices between sensing and seeing. But the whole concept behind computational photography is that these two processes are completely separate. They can be decoupled. You can sense in some crazy way, with some crazy mechanism, and then use a lot of computation to create images or visual experiences that are very different. And if you look at-- Dan in my group and others have put together this really beautiful digital totem in our area. And it's constantly streaming the most interesting images on Flickr.
And when you stand in front of that, you will realize that if anybody says the goal of a camera is to mimic a human eye, and that's what makes it a good camera, that notion will be challenged--because most of those photos which are tagged interesting by thousands of people on Flickr are actually photos of what you don't see with your eyes. One has some really crazy focus. One has very different color response curves. One has very different zoom, and so on. Basically, people think it's interesting when it's not what you see with your own naked eye. And that's the goal-- right now, people are taking cameras that were built to mimic the human eye and pushing them to create things that don't mimic the human eye. If you take that further, why not start building imaging platforms that have nothing to do with capturing a 2D perspective photo? And that's what we'll see real soon. So I'm looking toward an imaging platform that can just create abstract visual art. I don't know what that will be. But go ahead. AUDIENCE: I guess this is more of a question or a topic for your discussion. But what do you imagine is the fundamental photography, or class of images, that makes it lasting, as opposed to the art tied to technology, which is faddish? In the past year or so, [INAUDIBLE] photography. And they [INTERPOSING VOICES] [LAUGHTER] RAMESH RASKAR: Right. AUDIENCE: So it's entering the realm of kitsch. RAMESH RASKAR: Right. AUDIENCE: Or even in HDR, you see a lot of-- it was in the beginning. And now, it feels like it's run its course. RAMESH RASKAR: Yes. AUDIENCE: But there's still photography in the traditional sense of capturing what the human eye sees that is fundamental in some way, which is separate from the technique, things like [INAUDIBLE]. So I guess the question that I have is, what is the fundamental of the photographic medium? And in terms of technology, what would be the fundamental technological ability, as opposed to separating that from kitsch? RAMESH RASKAR: I agree. I agree. I think the camera, to me, is just a sensor. It's just a transducer. And people say, wow, even if the camera is going to change, fundamentally it's still going to convert electrons into photons and photons into-- sorry, photons into electrons, and electrons into pixels, and so on. The laws of physics are not going to change anytime soon. But how we use it is going to change altogether. And this notion that once you take a picture, that's set--use the best possible optics and sensor to capture an image that you can look at later--is changing. So it will shift. I'm glad you brought it up, because that's the very next thing I was going to show. This is going before that; I'll come to this. Some other interesting images are time-lapse mosaics. So again, I sat in my office and took these time-lapse pictures. On the left side, it's daytime; on the right side, it's nighttime. And all I'm doing is just picking up strips of the images from different times of day. And you get annoying artifacts, because each strip is at some other exposure. But using a gradient domain technique, you can create a very smooth transition. AUDIENCE: [INAUDIBLE] cop car. AUDIENCE: Yeah, cop cars and-- RAMESH RASKAR: Yeah, exactly. It still has those problems. So I don't have as much patience. And I'm only doing it sitting in my office.
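Here is a minimal MATLAB sketch of such a time-lapse strip mosaic. It assumes a folder of registered, equal-sized frames named frame001.jpg, frame002.jpg, and so on--those names and the vertical-strip layout are illustrative assumptions. Hiding the seams would then be a job for the gradient-domain blending discussed earlier.

```matlab
% Time-lapse mosaic: vertical strips sweep from the earliest frame (left)
% to the latest frame (right). Frames are assumed registered and equal size.
files = dir('frame*.jpg');
n = numel(files);
first = im2double(imread(files(1).name));
[h, w, c] = size(first);
mosaic = zeros(h, w, c);

edges = round(linspace(1, w + 1, n + 1));   % partition columns into n strips
for k = 1:n
    f = im2double(imread(files(k).name));
    cols = edges(k):edges(k+1) - 1;
    mosaic(:, cols, :) = f(:, cols, :);     % left = early, right = late
end
imwrite(mosaic, 'mosaic.jpg');   % seams remain; blend gradients to hide them
```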
Again, it's just a tool that photographers could use to create images that have a very different notion than what your eye would see if you were sitting there in an office. Of course, hopefully, this can be done in Times Square or some really interesting places. And by the way, this one is also catching on now on Flickr--a lot of people are trying to do this. We did this in 2002. But-- AUDIENCE: So in lots of examples we saw here, the final result is an image, like a final master in sound, where you do all the equalizing and there is one sound. But lots of techniques now could also be interactive. You could actually go and [INAUDIBLE] those will be made. It goes through every part of it, because the final goal is to understand it. There is this idea that there is something complex about who [INAUDIBLE]. And it could be interactive as well. RAMESH RASKAR: Right. AUDIENCE: So in one sense, the [INAUDIBLE] goal is always to create a new static image or media. The interactive part would be so interesting. RAMESH RASKAR: Yeah, I think you're pointing to a very important, again, fundamental constraint we have put on ourselves: that an imaging device should eventually create a 2D photo. AUDIENCE: Yeah. RAMESH RASKAR: And that's very limiting, extremely limiting. And we will be spending a little bit of time near the end of the semester talking about display technologies, because unless there is corresponding innovation, and research, and products, and services in displays, it doesn't make sense to spend a lot of time on sensors. So yeah, the two have to go hand-in-hand. And there are some exciting directions there. The last chapter of our book talks all about these other parallel developments that are going to change how we-- so for example, we have 3D displays now. AUDIENCE: Because even in the simple 2D display, there is this project called Photos Projector. Maybe you know it. That's also in this space, where you can find any [INAUDIBLE] in time. So it's another way of exploring this. It's [INTERPOSING VOICES] RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Exactly. AUDIENCE: And by giving the controls to the user, it will be used to empower him. RAMESH RASKAR: Right. AUDIENCE: Then just one final comment, representation [INAUDIBLE]. RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Exactly. So navigating through photos is a great way. But at the same time--I love this process--at the same time, we want to think about something that scales. AUDIENCE: Yes. RAMESH RASKAR: And build it so people can use it. AUDIENCE: Yes. RAMESH RASKAR: So that's another challenge. So let's shift. I'm glad you brought it up. This is the latest fad on Flickr. I think, yeah, the HDR fad is reaching its-- it's bottoming out now. But this is tilt-shift imaging, where this looks like a tabletop toy scene, but it's actually taken from hundreds of meters away. This is in Rajasthan, the city of Jodhpur, the Blue City. And that's even when you haven't used a tilt-shift lens. The basic principle is this: instead of keeping the lens, the image plane, and the plane of focus parallel to each other, you tilt the lens with respect to the sensor, and then it turns out the plane at which things are in focus--the plane of focus--also tilts. This is based on the Scheimpflug principle. We'll cover it in detail when we come back to optics and light fields. But I just wanted to bring this up because it's fun. And you can play with it. All right.
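A common software fake of this miniature look--distinct from an actual tilted lens--is a blur that grows with distance from a chosen in-focus band. Here is a rough MATLAB sketch; the input file name and the blur parameters are illustrative assumptions, and it uses imgaussfilt from the Image Processing Toolbox.

```matlab
% Fake "miniature" tilt-shift: blur grows with distance from a focus row.
img = im2double(imread('city.jpg'));      % assumed high-vantage-point photo
h = size(img, 1);
focusRow = round(0.6 * h);                % illustrative choice of the sharp band

out = img;
for r = 1:h
    sigma = 4 * abs(r - focusRow) / h;    % 0 at the band, stronger at the edges
    if sigma > 0.2
        blurred = imgaussfilt(img, sigma);  % blurring the whole image per row is
        out(r, :, :) = blurred(r, :, :);    % wasteful but keeps the sketch short
    end
end
imshow(out);    % boosting saturation usually helps sell the toy look
```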
So let's talk about some other things. Illumination. You can use it in very interesting ways. Here's a project, I believe from Georgia Tech, where they created an anti-paparazzi flash. [LAUGHTER] As you can see, the way it works is the paparazzi will take a photo of a celebrity coming out. As soon as the flash goes off, it's detected by some electronic device, and it will blast more light so that what the paparazzi gets in the camera is a blown-out photo. How does it work? Let me make sure I explain this right. It turns out a traditional camera--especially a cell phone camera--is actually a retroreflector. I'll explain that in a minute. So if you take a flash photo, especially of a cell phone camera, the camera appears as a bright spot in the flash photograph. So this device here is going to be a camera with a ring of LED lights, which is this one here. And right next to it is going to be a projector. So this is mounted in the ceiling: a camera with LED lights, and a projector. Now these LED lights are on all the time, and a paparazzi's camera will appear as a bright spot, because it's a retroreflector. And this projector, which is right below the mounted camera, will then turn on some pixels that are pointed toward this paparazzi. So basically, a very bright spotlight is being put on him. At the same time, the rest of the scene is going to stay dark, because only those pixels of the projector are being turned on. And that's why his camera, when it takes a picture, will just see this really bright source that's blowing out the rest of the scene. Yes? AUDIENCE: What about surveillance cameras? We don't want to blind them? RAMESH RASKAR: So surveillance cameras are-- what's the question? AUDIENCE: I-- [LAUGHTER] If, for example, we want surveillance cameras to just monitor that region, and then we have this device there that blinds every camera that it finds, then probably we are not capturing any information. RAMESH RASKAR: Yeah, exactly. All right. So the point is, can you-- [INTERPOSING VOICES] RAMESH RASKAR: --defeat surveillance cameras [INAUDIBLE]? Yeah, we will learn a lot of interesting ways you can beat traffic cameras or those cameras in this class. But that will all happen at the very end. [LAUGHTER] [INAUDIBLE] So what are the ways a surface can reflect? Let's look at those. Can you guys all see this over here? [INAUDIBLE] If you have a traditional surface, like a wall, light comes in and gets reflected in all directions--you've got a diffuse [INAUDIBLE]. If you have a mirror, then light comes in and gets reflected symmetrically around the normal. But most surfaces, like this surface here, are somewhere in between: light comes in, and it doesn't reflect in a uniform way, but it doesn't behave like a mirror either. So you get a bright spot, but you also get some other [INAUDIBLE] and so on. Is that clear? So diffuse: light comes in, gets scattered in all directions. Mirror: it goes in one direction. And a shiny surface is somewhere in between. Then there are other surfaces where light comes in and reflects back in the same direction--from here, light comes in and reflects back again to the same place. And this is retroreflective. And this is a very useful material that you can use in many cases. Anybody know where this is used?
[INTERPOSING VOICES] RAMESH RASKAR: The back of your backpack [INAUDIBLE]. That's the reflector of the [INAUDIBLE]. AUDIENCE: [INAUDIBLE] system [INAUDIBLE]. RAMESH RASKAR: Oh, that's right. [INAUDIBLE], yeah. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: [INAUDIBLE] on the same principle as the camera and the signal [INAUDIBLE]. And human eyes also retroreflect--we'll see what happens there. So this is the principle of retroreflection. As for how it works, there are multiple ways you can make it happen. You can just have a so-called corner reflector. If I put two mirrors at a right angle, then no matter which direction light comes in, it goes back in the same direction. It comes in from here, and it goes back the same direction; it comes in from here, and again it goes back the same direction. So a corner of mirrors is a very useful retroreflector. In fact, if you go to many shops--especially tiny shops which put mirrors on their walls to make the space look much larger with [INAUDIBLE] reflections--and you stand at the corner of such a shop and look at the corner, you will see your own mirror image. Which is strange, because there's a mirror here and a mirror there, and you're standing here. You'll see your own reflection here in this mirror, and you'll also see your own reflection there in that mirror. But you'll also see one image of yourself right in the corner. And as you move around, this image will move with you, and that image will move with you, but the corner image will always stay in the corner. It's retroreflection: whenever I look at the corner, I always see myself. So I'm surprised we don't have mirrors like this in our houses, so that I don't have to actually go in front of the mirror. I could be anywhere in the room, and I would still see myself no matter what I do. So you could build a mirror like that. It's just that you [INAUDIBLE]. Yes? AUDIENCE: Can you use it also for measuring [INAUDIBLE] laser? RAMESH RASKAR: Yes. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Exactly. [INTERPOSING VOICES] RAMESH RASKAR: [INAUDIBLE] AUDIENCE: Very good angles [INAUDIBLE] for [INTERPOSING VOICES] AUDIENCE: Yeah. RAMESH RASKAR: Yeah, so that's one way of doing it. Yes? AUDIENCE: Is [INAUDIBLE] reduction also required? Because retro-- RAMESH RASKAR: Very good point. Very good point. So [INAUDIBLE] is like [INAUDIBLE], you see-- and the color red is because the chemicals are [INAUDIBLE] AUDIENCE: [INAUDIBLE] also, [INAUDIBLE]. RAMESH RASKAR: Huh? AUDIENCE: That could [INAUDIBLE]. RAMESH RASKAR: Exactly. Now both of those are another way of doing it. And a fun property of this one: you know how people say that in a mirror you get confused--you move your left hand, and your right hand appears to move. With the corner reflector, that doesn't happen. You can move this hand [INAUDIBLE]. You can move this hand, so your image is not flipped [INAUDIBLE]. Is there a shop around here with mirrors like this? [INAUDIBLE] All right. So the other kind uses beads. If I just take a shiny-- this is the last [INAUDIBLE], actually. Light comes in, and it bends because of refraction. Then it reflects at the back, because of course we need a reflection--there's a refractive index change here [INAUDIBLE]--and then it goes back out. So that's how a glass-bead-based retroreflector would work. And then, of course, you'll have a bunch of these, so you can [INAUDIBLE]. All right? And this is exactly how a rainbow works. So you see a rainbow after you've had rain.
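Before the rainbow details, a quick sanity check of the corner-reflector claim above: in 2D, two perpendicular mirrors send any ray back the way it came, because one mirror flips the x component of the direction and the other flips the y component. A three-line MATLAB check:

```matlab
% 2D corner reflector: two mirrors at a right angle negate any ray direction.
Mx = [1 0; 0 -1];              % reflection off the horizontal mirror (flips y)
My = [-1 0; 0 1];              % reflection off the vertical mirror (flips x)
d  = [cosd(25); sind(25)];     % an arbitrary incoming ray direction

d_out = My * (Mx * d)          % equals -d: the ray returns the way it came
```

The same argument with three mutually perpendicular mirrors gives the 3D corner cube used in bike reflectors and laser ranging.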
This is how a rainbow works. If you ever see a rainbow with maximally vivid colors, what conditions are you satisfying? This is for the sun, and this is for the high humidity here. What's the [INAUDIBLE] condition [INAUDIBLE]? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: [INAUDIBLE] AUDIENCE: [INAUDIBLE], yeah. RAMESH RASKAR: So this is the sky. The sun is just setting. You are here. And you're going to see it exactly in the opposite direction, here. So the rainbow is here. It's very common: you go to some place with a waterfall, and you want to see the rainbow. It can be a challenge--sometimes you see it, sometimes you don't. All you have to make sure is that the sun is low enough; then you can see the rainbow. Because you can imagine, if the sun is pretty high in the sky, you'll never see a rainbow, because it's occluded by the surface of the Earth. But when the sun is low enough, you will see a rainbow. On the other hand, if you're on a mountain, then even in the daytime you could see the rainbow, if the [INAUDIBLE]. The lower points still allow you to see the retroreflection. And as for why the rainbow of colors: because the refractive index is different for different wavelengths [INAUDIBLE]. So that's why we see [INAUDIBLE]. So it's all fascinating [INAUDIBLE], retroreflection. So why does the human eye give you red eye, or why does a cat have these bright spots? It's the same reason. All you need is: light goes in and comes back in the same direction. So now, if you have, let's say, a camera--and the human eye is almost similar [INAUDIBLE]--and there's a bright spot, a bright light, and I'm standing right next to it: light goes in, and it gets focused at a point. And as you can imagine, the sensor actually has a shiny [INAUDIBLE] in front of it--if you look at a sensor, you'll realize it's pretty shiny, because [INAUDIBLE]. Light goes in and reflects. And no matter where the light comes in from, because of the geometry of the lens, it can go back to the same spot. Now, if the point is out of focus, you do not see it, because [INAUDIBLE]. But if the point was in sharp focus on the sensor, it will reflect back, and it will come back to the same place. And this is the same reason why, for a human eye, again, in a situation like [INAUDIBLE], it reflects and comes back in the same direction. [INAUDIBLE] So given what I've told you, what's the strategy for removing red eye? What should you do? AUDIENCE: Move the flash [INAUDIBLE]. RAMESH RASKAR: Move the flash away. Or if you have a cheap camera, where you-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: [INAUDIBLE] camera. Because you're not looking at the camera, the image that's formed is made of light [INAUDIBLE], and so you're not retroreflecting back to the camera. AUDIENCE: So the anti-paparazzi system will also blind everybody [INAUDIBLE]. RAMESH RASKAR: If it was-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: If the-- [LAUGHTER] If the ceiling-mounted light was omnidirectional, then that would matter. But because the projector constrains light to only [INAUDIBLE] points [INAUDIBLE] this point. [INTERPOSING VOICES] RAMESH RASKAR: Yes. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Or this camera. We don't want to [INAUDIBLE]. AUDIENCE: I think the point JD was bringing up is that human eyes are also retroreflective. So if you look at a spot, it will potentially flash you. RAMESH RASKAR: Is that what you're saying? AUDIENCE: Yes.
RAMESH RASKAR: Yeah, so-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: It's true. It's true. So this is among the risks. [LAUGHTER] So just don't go there with [INAUDIBLE]. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: So also, you know that [INAUDIBLE] are a very technical environment. Usually, [INAUDIBLE] have more problems with [INAUDIBLE] don't have a problem with [INAUDIBLE]. And that's just because of the pigment and so on. But you're right that if the system works really well, it will also blind all the [INAUDIBLE]. [LAUGHTER] AUDIENCE: So if you have a mask, something to put on your lens, then this will, obviously, [INAUDIBLE]. RAMESH RASKAR: Oh, yeah, that's the next [INAUDIBLE], how to beat [INAUDIBLE]. AUDIENCE: And [INTERPOSING VOICES] AUDIENCE: You can use the flash. [LAUGHTER] Use a camera with a high ISO [INAUDIBLE]. RAMESH RASKAR: Yeah, there are always lots of ways to get around it. If you want to have a very nice discussion on how to change your license plate so that the-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: --high-speed-- whatever, the-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: --speeding cameras will not be able to see it, you ought to buy me a beer. [LAUGHTER] All right? AUDIENCE: Yeah. RAMESH RASKAR: You're up for it? AUDIENCE: Yeah. RAMESH RASKAR: You're over 18? [LAUGHTER] AUDIENCE: Yeah, 21 [INAUDIBLE]. RAMESH RASKAR: I did not say which age. All right. Good. So before we take a short break and go into more discussion of illumination, I just want to give you the final projects you could be working on. Remember, we are really here to do something that's cool as well as novel. Many times, I'm going to bring up a list of projects that are not cool. All right? And again, no offense to somebody who might be working in a similar field. But let's just be honest. Let's do something nice. So here are some suggestions for the types of fun projects you could be doing. Beautiful user-interaction devices are always fun. You could start with a 2D sensor camera. Or maybe use a 1D camera--a photo-finish camera--which can, by the way, be created from a flatbed scanner. You can buy a flatbed scanner for under $100, and it's actually a multi-thousand-frame-rate camera: you just hold it in one place, and if you walk in front of it, you'll basically create a photo-finish camera. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Yeah, there are a lot of issues. But we'll help you on that. We'll help you on that. Yeah, like Dino was saying, it's not something you can just buy and plug in. You have to get your hands dirty in [INAUDIBLE]. Or just use photodetectors, single-pixel cameras--you can do a lot of interesting things with them. You can include some interesting illumination. Capture the invisible: it's always my fascination, how can we capture something that cannot be seen with the naked eye? So maybe build some tomography machine that can see inside the body, or some structure, like 3D scanning or fluorescence. And we'll see that experiment here just in a minute. And this is a question you were asking earlier: are there cameras in other domains--electromagnetic, or audio, or resistive? How about an audio camera, or a magnetic camera, or a capacitive camera? We won't be discussing those a lot in the class, but I think it's a great way to do a final project. A thermal IR camera--we can help you. Maybe a thermal IR camera that detects emotions. A multispectral camera: cameras that can distinguish between two very similarly colored objects.
There are a lot of people interested in distinguishing camel from sand. So can you create some mechanism so that two very similar-looking objects can be completely distinguished? There's a lot of market in the golf business, where people want to spot their golf ball against a green background. So you just put this-- you know what they do? Yeah, they sell it for hundreds of dollars. All you need is a blue filter. If you put a blue filter on the grass, which is green, it looks black. But your ball, which is not green, stands out. So you have a black background, and anything that's not green will really stand out. There are lots of other businesses [INAUDIBLE]. Illumination. [INAUDIBLE] photography, I think we saw earlier--a lot of fun. All kinds of strobing. Nonimaging elements such as gyros, GPS, interaction between two cameras--maybe there is some lighting communication or Wi-Fi communication between them. Optics is always a lot of fun: camera arrays, light fields, coded aperture. Maybe try to mimic the vision of one of the animals, or study them--bring some worms, or bring some cats. Cats' eyes, by the way, are really beautiful mechanisms. And we'll have a whole lecture on animal eyes--I think the seventh or eighth lecture. So we'll study that. And time. Time-lapse photos--there's so much information in time-lapse photos. I wish I could go on Google Earth or Google Street View and it would show me a time lapse. It can't be that difficult. They'd have to scale their database only by a factor of 24, which is not that much more, I think. But it'd be nice to see the same place in the daytime and nighttime, and what kind of traffic it has, and what kind of people it has. Right now, it's very static in its appearance. Somebody had a very nice idea for time-lapse photos for Google Street View, which is-- sometimes, you don't need a video of what's going on there, because the world, at least the man-made world, usually has only a few discrete states. If there's a traffic light, it just switches between three different colors. If there's a drawbridge on the river, it's either in this position or that position. There are a lot of these discrete things that happen in the man-made world. So it'd be nice if in Google Street View you could see all those discrete states, discrete situations, in [INAUDIBLE]. You can create this direct-global separation--I think we saw that earlier. Create new types of cameras that are over 10-- these are all suggestions for final projects. Mimic animal eyes. Play with photonic crystals; they're very easy to make now. Photonic crystals, in simple words, are ordered arrays of materials that have different refractive indices. In the simplest case, it could be just glass and air. So if you can create micron-scale holes in glass, it behaves in very interesting ways. So maybe somebody will play with that [INAUDIBLE] photography and [INAUDIBLE]. AUDIENCE: Ramesh? RAMESH RASKAR: Yeah? AUDIENCE: You say it's very simple. How do you do that? RAMESH RASKAR: By simple, I mean-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Yeah, the idea is fairly simple. Actually, when photonic crystals were invented at Bell Labs, they literally took CNC machines to metal and just punched holes. AUDIENCE: You can get access to that, or [INAUDIBLE]? RAMESH RASKAR: We have a microelectronics laboratory. I think Quinn knows about it. And we can-- I think there's some charge, but I can easily take care of the funding for that. And I think you pay a fixed one-hour rental for it. But if it's a research project, maybe we can even get around that.
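Going back to the golf-ball trick above: you can simulate a blue filter in software by keeping only the blue channel, where green grass is dark and a white ball stays bright. A minimal sketch; the file name and the 0.7 threshold are illustrative assumptions.

```matlab
% Simulate the blue-filter trick: green grass goes dark, a white ball stays bright.
img  = im2double(imread('green_with_ball.jpg'));  % assumed photo of grass + ball
blue = img(:, :, 3);          % roughly what a blue filter passes
ball = blue > 0.7;            % grass reflects little blue; a white ball reflects a lot
imshow(ball);                 % bright blobs are candidate balls
```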
So in fact, you can now buy nanomaterials and photonic crystals online. You can get two-micron beads and six-micron beads-- sorry, two-nanometer beads and six-nanometer beads online. Very toxic, a little bit dangerous, but there are certain varieties that you can just use. So it's a lot of fun to play with that. And here are some sample projects from last year's class. In [INAUDIBLE] photography--again, this was the only undergrad in the class, and he won the Best Project Award from Nokia. Jessie from mechanical engineering did a camera array for particle image velocimetry. Now it has become his PhD thesis, and he's applying for hundreds of thousands of dollars in grants. The bidirectional screen was a project by Matt--Matt Hirsch. Again, it has become his master's thesis, a SIGGRAPH paper, and so on. Kirmani, in my group, just looked at the theory--he didn't have time to build anything at that time--of looking around a corner, which is, again, a paper this year. Somebody is building a tomography machine, and so on. Lots of cool things. As you can imagine, last year the emphasis was a little bit more hardware-intensive, because the people who were taking the class were interested in that. But this year, we have a more diverse crowd here. So I'm expecting projects in just art or photography, beautiful synthesis of images, real-time HCI systems, maybe some different types of scientific imaging, maybe a new microscope--anything of that variety. And we'll spend a lot of time together. We have four or five mentors assigned who will be working with you on the final projects. All right. So let's take a short break. During the short break, we are going to show you some cool fluorescent demos here. And then we'll start back in about 10 minutes to talk about other types of imaging. So here's a big question for photography. In the 1930s, we had big cameras--the guy had to get under the cloth to take a picture. AUDIENCE: That's actually a press camera. It's pretty small. And people [INTERPOSING VOICES] RAMESH RASKAR: This is more recent. But I'm guessing you have cameras that are bigger than this? AUDIENCE: Yeah. RAMESH RASKAR: And there's nothing wrong with having large cameras [INAUDIBLE]. And that has shrunk down to something like this, and this--$0.20 with all the electronics, and processing, and so on. But when you think about lighting, not much has changed. It has definitely not reduced in size. It has definitely not reduced in cost or convenience and so on. And in a way, what really distinguishes consumer photography from professional photography is not the camera anymore, but the lighting. So this is a challenge for our community, we who like to think about making everything programmable and easy to use: the same revolution has not happened yet for illumination. So we're going to explore that a little bit in today's class, then come back and talk some more in the next class. When it's all about cameras, we try to make them very smart with the lenses, and different sensors, and new processing. But the light source is still a flash. Maybe it has fancy umbrellas and so on, but it's mostly a flash. So what we want to do is replace that with more programmable illumination, in a very high-dimensional way--controlling the illumination field in geometry, in time, and in color. All right?
So, some of the earliest examples--and this whole field is called computational illumination; we're going to use computing even in the lighting. Maybe the pioneer was Edgerton--Doc Edgerton, right here. Very famous examples: a bullet going through an apple. And, Santiago, you have this one or-- AUDIENCE: The three balloons. RAMESH RASKAR: The three balloons, yeah. So [INAUDIBLE] has beautiful, beautiful pictures. Are they in your dorm room? AUDIENCE: I've seen one. RAMESH RASKAR: The same one? Yeah, great. So, beautiful pictures. But they were not captured with smart cameras. They were captured with smart lighting. You have an ordinary camera, where the shutter is open for a certain duration, but then you have a flash that freezes the motion. And in this particular case, you have a sequential strobe that captures this whole circle. Nothing smart about the camera--really smart strobed lighting. And of course, Doc Edgerton came up with chemical processes to create very short-duration light sources. But now we have LEDs and solid-state devices; we can control light down to a few nanoseconds or picoseconds. All this can be done very easily in your house. So again, this is what distinguishes consumers from professionals. And we'll look at this step by step. This will be a more technical discussion about what parameters of light we can change. So let's think about it: what are the things we can change about light? One is clearly the brightness. What are some other things we can change? AUDIENCE: Color. RAMESH RASKAR: Color. AUDIENCE: The collimation. RAMESH RASKAR: The cone. AUDIENCE: [INAUDIBLE] polarizing. RAMESH RASKAR: Polarization. AUDIENCE: Coherence length. RAMESH RASKAR: Going a little bit more exotic, yes. AUDIENCE: Diffusion. AUDIENCE: The direction. RAMESH RASKAR: The direction, yes. AUDIENCE: Color. RAMESH RASKAR: Maybe it's not even a single cone, but just a projector--different directions have a different intensity. AUDIENCE: Time. RAMESH RASKAR: Time. So strobing, or duration, or synchronization with the captured image. AUDIENCE: Relative fades. RAMESH RASKAR: Relative fades, of course. AUDIENCE: The lenses on your lighting [INAUDIBLE]. RAMESH RASKAR: Or lenses, exactly. Don't think of your flash as just a light source; put fancy optics or masks in front of it to control, again, direction and so on. AUDIENCE: You can put [INAUDIBLE] inside the object [INAUDIBLE]. RAMESH RASKAR: Yeah, you can change the environment--not just illuminate the environment, but change it by putting some interesting lighting inside there. So this is, by the way, another great way to come up with new ideas. You say, this picture really bothers me--what can I do? And if you want to do research in this area, you ask yourself this question: what are all the ways I can control the lighting parameters? And then you go through them one by one and see how you can attack them. And we'll see examples of exactly how that has happened over the last several years. So the simplest one is, I can have light or no light--the simplest [INAUDIBLE]. The next parameter is duration--how long it's on--or its brightness and so on, its position, its color, using it as a projector, especially angle, modulation in space, modulation in time, and sometimes just natural light, just time lapse, which we saw. All right? So we'll just look at a few projects to get us motivated.
And we saw this one last time: a multiflash camera, where we're really trying to solve a new problem. If I want to tell you what's inside my car, I can just take a photo and send it to you. But when car companies want to do the same thing, they hire artists. So the question is, why do we hire artists to draw something that can be photographed? Yes? AUDIENCE: They might know what's more important or less important [INAUDIBLE]. RAMESH RASKAR: The semantics are very important, yes. Yes? AUDIENCE: [INAUDIBLE] reflection, so like edges. RAMESH RASKAR: So yeah, reflections are annoying, and they don't really add any information. And edges just feel more important. AUDIENCE: In that picture, the central elements are highlighted. RAMESH RASKAR: Exactly. The hand is brighter than the rest of the car. Is that what you mean? AUDIENCE: Yeah. RAMESH RASKAR: Yeah. AUDIENCE: Some unnecessary wires or [INAUDIBLE] can be-- [LAUGHTER] RAMESH RASKAR: Yeah, all these things--it's just clutter. It's not critical to explaining what's in the car. So yeah: unnecessary shadows, clutter, too many colors, as opposed to highlighting the shape, marking what's moving, and using very simple, basic colors. Yes? AUDIENCE: [INAUDIBLE] strategy [INAUDIBLE]? [LAUGHTER] AUDIENCE: [INAUDIBLE] RAMESH RASKAR: There are some other names for it. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Not for [INAUDIBLE]. So imagine if you can just build a camera that gives you an image like this, black lines on white. We are not trying to put the artists out of business. But we are trying to create tools so that they don't end up spending time rotoscoping and drawing the lines. They get all the line maps very quickly, and then, beyond that, the creative process starts. So get rid of this tedious task of finding the edges, and marking, and so on--all the magic scissors and so on. Of course, the first solution would be to just take the photo and try to find intensity edges. It turns out it doesn't work very well. And this is the problem you face in HCI all the time: you have some gestures with your hand, and you might have some challenging background lighting. Then the typical method is to maybe do some edge detection, some color detection, and you just hope that the lighting is just right when you're showing the demo, and that you have controlled the background just enough so that everything works out. Or you can be smart about it and distinguish reflectance edges, which are edges between different materials, from depth edges, which are real geometric edges. So these here are all intensity edges, and these are all just geometry edges. Imagine if I could take a photo of this and get this out. It would be much easier for me to write an interactive system based on it. AUDIENCE: Well, this is really a computer-vision-based question, actually. The example with the car [INAUDIBLE] maybe a professional rendering, because a human being knows what to abstract [INTERPOSING VOICES] AUDIENCE: --to normally emphasize [INAUDIBLE]. RAMESH RASKAR: Yes, there are multiple goals. So one goal is a diminished reality-- [LAUGHTER] --trademarked by-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Sorry? AUDIENCE: [INAUDIBLE] the internet. RAMESH RASKAR: All right. So [INAUDIBLE], who's a scientist in our group, has a very interesting way of putting this. He says--and it's well-known--
Photography is the art of exclusion: de-emphasizing things--the photographer makes a decision about what to de-emphasize, either by using focus or by cropping and so on--so that the photographer can convey to the viewer what's most important. But what we're doing computationally now is adding another level of exclusion. In this case, we're excluding the intensity edges and just keeping depth edges, and all these other things. So computationally, we extend the concept of exclusion. One use could be nonphotorealistic rendering, for creating cartoons. One could be HCI. And there are lots of other things--right now, this is being used in Mitsubishi products for some really bizarre things. So we saw the idea last time: you get the slivers of shadows, and by analyzing the shadows, you can compute the depth discontinuities. Some people like to call them silhouettes. Some people like to call them occluding contours, or shape boundaries. But those terms are not so precise; depth discontinuities are what we're really talking about. So when I take a photo, the depth of this pixel and the depth of the pixel on the other side of this screen edge have a discontinuity. And that's really what we're talking about--as opposed to silhouettes, where it's usually about just the external outline. If I put my hand over here, this is still a depth discontinuity, from my hand to my body, so I would like to capture that. And shape boundaries is also not very precise. So depth discontinuity is really what we're talking about. And imagine if this were in the Sony EyeToy or the Microsoft Natal. If you have a depth camera and I put my hand really close to my body, it cannot distinguish the depth, with respect to the camera, of my body versus my hand. But with this camera, I can still detect that there's a shadow created there. AUDIENCE: How about that laser from-- RAMESH RASKAR: Sorry? AUDIENCE: The laser projector camera? RAMESH RASKAR: Yeah, so actually, the Microsoft Natal uses this PrimeSense sensor, which is not time of flight, but based on triangulation. But anything that senses depth, as opposed to depth discontinuity, will have the same problem: as you get closer and closer, you'll not be able to detect that. AUDIENCE: When you were testing it with a piece of paper, it seemed pretty good. RAMESH RASKAR: It was very good. Yeah, it was very good. So if it's a really good depth-sensing camera, then you're all set. But it has poor resolution, and it's a much more complex device. So again: shadows to the right, shadows to the left, and so on. But then, look at the shadows. They are not just one pixel wide--depending on the depth difference, they could be very wide or very narrow. So you want to create an output that's exactly one pixel wide. So how does this work? Before we go there--I guess, is there intensity in this? These are [INAUDIBLE]. And we can just put that together and create a cartoon in real time. In fact, we demonstrated this. So yeah, this is another example: my old '93 Honda Civic. This is what you will get if you use an intensity edge detector, and this is what you get from the four flashes. You will see the Honda sign here; it's barely visible there. That's because the Honda sign has a relief--it's a height field. It's geometric, rather than texture. And all these scratch marks and rusting are just texture; they do not contribute to the geometry. And if you try to locate the spark plugs or the dipstick, they're barely visible here.
But again, here, you can see it right there. So the shapes are what we're really looking for. And we've got the four input images--just four images. To the naked eye, they look identical to a naive observer. But look at the shadows: here, the shadows are to the right of these pipes; here, the shadows are to the left of these pipes; here, the shadows are at the top; and here, the shadows are at the bottom. And by analyzing the shadows, you can find the depth edges. Yeah? AUDIENCE: In practice, honestly, farther away--say, from 12 feet away--it's not as effective, because your flashes are pretty close to the [INTERPOSING VOICES] RAMESH RASKAR: Right. Exactly. So we'll talk a little bit about what parameters you have to choose to get optimal performance. And yeah, there's definitely a sweet spot where the system works. Some more examples here. And I just want to describe to you very quickly the geometric parameters that make it happen. But we were showing this live at SIGGRAPH. You can stand in front of this camera, where every fourth frame had a different flash going off. You stand in front of it, and it's a cartoon. And I'm a big fan of the A-ha video from the 1980s, "Take On Me." [LAUGHTER] You remember that? AUDIENCE: Yeah. AUDIENCE: Wow. RAMESH RASKAR: I thought I was too old to appreciate that. So in the A-ha video, there's this very beautiful effect where on one side of a glass screen, the world is a cartoon, and on the other side, it's real. And when we were showing this--it goes on for four days--after the end of the first day, people started saying, oh, this reminds us of the A-ha video, the "Take On Me" video. And I said, yeah, that's really true. So we got really sick of those comments. And on our demo, instead of calling it-- we had some really boring technical name. We said, this is the A-ha demo, the "Take On Me" demo. And we had it in big letters on a huge screen. And then, on the second day, the curator of the show came to me, and she introduced me to this guy. And she said, this is--I forget his name--David Patterson. This David Patterson, do you know him? I said, I don't know him. He was the music video director for A-ha. [LAUGHTER] And so I said, wow, that's great. We were really inspired by your video. He looks up at the screen--our demo was inside a closed booth, and they were showing the live output of it on the big screen. And he said, what are you showing there? I said, this is live. We can go inside. This is being computed in real time. And he said, that's impossible, because when we were doing it in the mid-80s, every frame took them a whole day-- [LAUGHTER] --to rotoscope and so on. And so we were excited: Patterson is here. So he comes inside. He acts very cool. He looks right there. He looks at the images. And we get very excited. We take pictures with him. He leaves. But after half an hour, he brings his son with him, and he starts explaining it to him--his son is two or three years old. He said, wow, remember in the 1980s, I worked on this video? These guys are doing it in real time now. And we said, welcome, we're pretty happy you brought your son. You must be excited. After a couple of hours, he brings his wife. [LAUGHTER] And then it just goes on all day long. He just keeps bringing people to show them how this works. So that was a high point of that [INAUDIBLE]. So how does it work? You have a camera.
And you have your flash, which is shown as P here. And then, because of the parallax between the lens and the light source, you're going to cast a shadow of this object. If the flash is to the left, the shadow is going to be cast on the right. And if you trace the ray from the light source to the object to the shadow, it's going to be projected in the image as this particular vector. Now, what's interesting is that it turns out we need at least three light sources. You don't have to use four, but you need at least three. So let's say you have three light sources, P1, P2, P3, and the image of the light source is denoted by E. The shadow, it turns out, lies along a so-called epipolar ray--for those of you who are familiar with epipolar geometry. And as long as we make sure that for any given silhouette, any given depth edge, there is at least one light on one side of that edge and at least one light on the other side of that edge, we can guarantee that in at least one image you will see a shadow, and in at least one image you will not see a shadow. So out of four, in our case, at least one of them will have a shadow, and at least one of them will not. By guaranteeing that, we can analyze the four images and compute how that works. So here's a very simple example. The flash is to the left, so the shadow is on the right. Here, the flash is on the right, so the shadow is on the left. And all you do is take these two images and apply the max operator--take the max at every pixel. And this is something you can do with HDR Shop. I don't know if you can do it with Photoshop. Can you just take two images in Photoshop and find the max of the two? At every pixel, I want to compare the two pixels and take the maximum of the two. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: You can do that? AUDIENCE: Well, I know that there are filters where you can do that with one layer and the layer behind it and [INAUDIBLE]. I don't know if it's exactly that. RAMESH RASKAR: Yeah. AUDIENCE: But [INAUDIBLE]. RAMESH RASKAR: Exactly. But nobody cares about doing nice, simple mathematical operations there. So HDR Shop is great for these things. All these things, you can do in HDR Shop in 15 seconds [INAUDIBLE]. AUDIENCE: Another [INAUDIBLE] introduced it. But for [INAUDIBLE] image processing, ImageJ is really nice-- RAMESH RASKAR: ImageJ? AUDIENCE: ImageJ. It's Java-based and all open-source. So if you want to take a look at the code, it's all there. RAMESH RASKAR: Excellent. AUDIENCE: And also, you can script all of these functions over a lot of images using JavaScript-- RAMESH RASKAR: Excellent. AUDIENCE: --which is really useful. RAMESH RASKAR: Just send a link to the class or to Sam so it'll go in the notes. AUDIENCE: Cool. RAMESH RASKAR: ImageJ. I know there's OpenCV and all those things, so I'm sure you'll use them. In MATLAB, again, it's a one-line command--max of a and b, where a and b are matrices. Basically, one line of code to take the max of two images. In Photoshop, maybe 30 minutes. So all you do is take the max of the two, and then take the ratio of each image to the max image, which I call normalizing. And if you divide, all the texture goes away, and only the shadows are left. Wherever there was no shadow, you'll get a value of one--the ratio is one. Wherever there's a shadow, you'll get a value that's close to zero. It's not always zero, because there's some other lighting.
And now, all you have to do is take one scan line here--this is how you plot it--and you can see the shadows very clearly. Same here. In the left image, just scan from left to right, and wherever you see a jump from a lit area to an unlit area, it has to be a depth edge. In the right image, you scan from right to left, and wherever, again, you go from a lit area to an unlit area, that's a depth edge. So in every one of those four images, you'll be able to find depth edges in different orientations. And then we can take a union over all four images, and that gives you a full depth edge image. So in this case, for example, all these internal edges are completely ignored, and all you see are the silhouettes. And in MATLAB, that's all there is: 15 lines of code and no parameters. No tuning required--there's no constant used anywhere. So it's extremely easy to use. You can implement this. You can use it. I'll give it as an option for one of the class assignments, so you have a choice to use this. Any questions on this one? Yes? AUDIENCE: So you said the outline, the boundary, is only a one-pixel line. But if you actually look at more than one pixel--depending on how thick the shadow is--can we get more depth [INAUDIBLE]? RAMESH RASKAR: That's a very good point. So depending on how far things are--here, for example, the shadow is wider than here, and that's because the depth difference is larger or smaller--you're right: the thickness of the shadow actually tells you a little bit about the depth difference. Not the depth itself, but the depth difference. AUDIENCE: So you get more 3D information out of it. RAMESH RASKAR: So you could get a little bit of 3D information as well. Yeah, that would be nice. AUDIENCE: [INAUDIBLE] capture? So [INAUDIBLE] like a machine in the shop. RAMESH RASKAR: Right. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Yes. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: They just use a laser scanner, usually. AUDIENCE: Yeah. RAMESH RASKAR: Yeah, unfortunately. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: You can just capture the silhouette. But remember, we are not capturing depth; we're only capturing depth edges. AUDIENCE: Yes. RAMESH RASKAR: It's like saying, I'm not capturing intensities; I'm just capturing intensity edges. AUDIENCE: But [INAUDIBLE]. RAMESH RASKAR: So this work has really taken off. There have been lots of papers based on this particular technique. People have tried to do [INAUDIBLE]. Some people have tried to do some-- it's still open; it's very new, only some three, four years old. And people are trying different colored lights. So there's a lot you can do. Question up there? AUDIENCE: Could you actually capture depth? Or at least the difference between different edges, so you can study the [INAUDIBLE]? RAMESH RASKAR: You could, if you can process the shadow width. But remember, it's not going to be as robust, because if you look at this plot, you have to estimate how wide the shadowed region is. So if you can do that robustly enough, then yes, you could do that. In certain conditions, maybe it's possible. For skin, it's very easy, because the shadow color is not going to interfere. If something in the scene was black, that could be misdetected as a shadow. But with this method, even if the object is black, it doesn't matter--you'll still get very nice edges. But if you try to estimate the shadow width, then you have a little bit of [INAUDIBLE].
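For concreteness, here is a minimal MATLAB sketch of the pipeline just described, reduced to a left/right flash pair (the top/bottom pair is handled the same way). The file names are assumptions, and the 0.5 step threshold is an illustrative simplification--the actual method, as he notes, is essentially parameter-free.

```matlab
% Multiflash depth edges from a left/right flash pair.
L = im2double(rgb2gray(imread('flash_left.jpg')));    % assumed registered inputs
R = im2double(rgb2gray(imread('flash_right.jpg')));

M  = max(L, R);                  % per-pixel max image
rL = L ./ max(M, eps);           % ratio images: ~1 where lit, ~0 in shadow
rR = R ./ max(M, eps);

% Left flash casts shadows to the right: scan left to right for lit->unlit steps.
h = size(rL, 1);
edgesL = [rL(:, 2:end) - rL(:, 1:end-1), zeros(h, 1)] < -0.5;
% Right flash casts shadows to the left: scan right to left.
edgesR = [zeros(h, 1), rR(:, 1:end-1) - rR(:, 2:end)] < -0.5;

depthEdges = edgesL | edgesR;    % union; add top/bottom flashes for all orientations
imshow(depthEdges);
```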
AUDIENCE: Again, all of these techniques assume that you have one [INAUDIBLE] image, which could have [INAUDIBLE]. You can-- then this one-- RAMESH RASKAR: Hey, now you're thinking like a researcher. We thought about all the ways you can change the lighting parameters, and now you're saying, in addition to that, can we change camera parameters and do something more? That's the thinking I want you to have in this class. AUDIENCE: Yeah. RAMESH RASKAR: Always take an idea, x, and think about how you can do the next x. And I posted some slides on how to come up with new ideas on the Stellar webpage: if you're inspired by an idea x, how do you come up with the next idea? There's a systematic process for how you can come up with great ideas. And the same slides also tell you, if you come up with a great idea, how to decide whether to pursue it or not. So I'm curious about your comments. It's just a work in progress; I have been putting it together over the last several months. So again, that allows you to do this in real time. What are some other things people are doing? You take a flash photo: the person is very brightly lit; the background is not so well lit. You take a no-flash photo. What has changed between the two? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: She didn't blink. Remember, this part of [INAUDIBLE] flash. But anyway, that's a different point. What information can you recover from this? AUDIENCE: You can separate the foreground from the background. RAMESH RASKAR: You can separate foreground from background, because the foreground is well lit, but the background doesn't change. If I just take the ratio of the two images, it will look something like this if I do one minus the ratio. The background hasn't changed, and the foreground has changed dramatically. So this particular paper is called flash matting--as simple as that. That was the idea for a SIGGRAPH paper in 2006. They went a little bit beyond that and said, if we just take pure ratios, then you cannot get the strands of the hair and so on. So they used some gradient domain techniques and some graph-based techniques to solve the matting equation for what's foreground and what's background. And they can detect these very fine subpixel features, strands of hair. And then, of course, you can take that and replace the background. It was mostly well lit, so it does pretty well here. There are some small artifacts, but it looks pretty good. So just by changing the light--the presence and absence of light--you can create some really interesting [INAUDIBLE] or interactions. AUDIENCE: It's the same technique with a flashing [INAUDIBLE] video. RAMESH RASKAR: That's a great idea. So you could do alternate frames, and it won't be disturbing for the user, and you can just distinguish foreground from background. So, very simple examples; we'll progressively become more and more complex here. But let's stop here for today. And we'll talk about all these other cool things next time.
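As a postscript to the flash/no-flash discussion: below is a minimal MATLAB sketch of the crude one-minus-ratio foreground cue described above. The file names and the 0.3 threshold are illustrative assumptions; the actual Flash Matting work goes further and solves a matting equation to recover soft, subpixel edges such as hair.

```matlab
% Crude flash/no-flash foreground cue: 1 - (no-flash / flash).
F  = im2double(rgb2gray(imread('flash.jpg')));     % flash photo
NF = im2double(rgb2gray(imread('noflash.jpg')));   % no-flash photo, same pose

cue = 1 - NF ./ max(F, eps);   % ~0 in the unchanged background,
cue = max(cue, 0);             % large where the flash brightened the subject
fg  = cue > 0.3;               % hard threshold: a coarse foreground mask
imshow(fg);
```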
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_5_Recent_research_Retrographic_Sensing_for_the_Measurement_of_Surface_Texture_and_Shape.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. [SIDE CONVERSATIONS] MICAH KIMO JOHNSON: OK. Let's get started, again. [SIDE CONVERSATIONS] I'm going to tell you guys about a recent project that I did, which was presented at CVPR this summer. It was a paper there and a demo, and I also was part of Siggraph Emerging Technologies, and it was pretty well received there too. AUDIENCE: [INAUDIBLE] MICAH KIMO JOHNSON: We're starting. AUDIENCE: OK. MICAH KIMO JOHNSON: The main idea is a technique for capturing geometry. I have here-- this is our little pocket demo, which I'll pass around, but it's a really simple idea. This is a piece of gel with a painted skin, and if you press something into it, like your finger, it looks like it's been painted. So I'll pass this around. You just take turns pressing your finger into it, looking at it. It's pretty simple and pretty cool, and what I'll tell you about in this talk is how we can actually take advantage of this simple device to measure surface geometry. Now this has been a pretty long-standing problem in vision: how can you capture geometry? How can you get more than a 2D image? For example, how do you get 3D shape? And I have some references here, probably not so interesting for this class. Basically, for these different techniques, they are thinking more generally: how do I capture geometry of somewhat general objects? And I'll show you, this has some very specific properties. It's only going to be able to capture surface texture and surface geometry, but we still think it's pretty cool. For example, it can capture geometries of objects like this. We have an Oreo cookie here, so the surface of it has some interesting detail. This is a little glass trinket. It's like a pin. And a lot of existing methods that either use lasers or use structured light have problems with glass, so that's a difficult type of object to scan. This is a metal drawer pull. It's just a small piece of metal, but techniques that use cameras and structured light also have problems with metal because of the reflectance function. And we can even scan the surface geometry of something as fine as a $20 bill, so I'll show a lot of different results. Here's the main idea, again. You have some of this clear gel, called an elastomer, a thermoplastic elastomer. We put some paint on top; when you press something in and view it from the other side, it looks like it's been painted. And when you get the pocket version, you can see that yourself. So we can use standard computer vision techniques to reconstruct the surface geometry, and I'll tell you about that in the rest of the talk. But first, what is this material? Well, it can be any kind of clear gel. We use thermoplastic elastomer mostly, which is a common rubber that people haven't heard of, but it's the same kind of rubber that's in your shoes or in a lot of different products you buy. But you can also use silicones and other rubbers. It's just important that it's clear. And we bake it in the oven. We melt it down into whatever shape we want, and typically, we want it to be flat on top. But you could make it into the shape of your face because you can use this stuff to make molds-- so just some interesting things we've done making it into other types of shapes.
But typically for this purpose, we want it flat. We engineer a specific paint to put on top, and you can see these are four different sensors, four different paints. And the properties of the paint kind of change the types of geometries you can measure. I'll maybe talk about that a little more. But what we do is we put this into a photometric stereo setup. There's some gel on the top. This is a box that I made. And you've got a camera beneath it. And then inside, there's a Tupperware bowl with three lights, red, green, and blue, from different directions. Let me show you actually a video. As part of the Siggraph presentation, I actually made a few videos, and let me play them because you can see it in action and see how it works. This is a real-time capture, so this is the top of the box. You're zoomed in. You're looking at my finger pressing into it. On the right is what the camera sees. This is looking at the bottom of the gel through glass, from underneath. We have three colored lights around it, red, green, and blue, at different directions. And imagine what each of the individual color channels looks like in this image: if you just look at the red channel, it looks like a shaded image with light coming from this direction. If you just look at the green channel, a shaded image with light coming in that direction; same for the blue channel, a shaded image with light coming in this direction. So by using three different colored lights and an RGB camera, we can have three separate shaded images. They're gray-scale images, but they're three different light directions. And it turns out that that is enough to constrain the surface normal and the shape of the top of the object. I can talk a little bit about the math after this, but just to give you an appreciation for what the demo is like-- so this is what the camera is seeing as I move my finger around, but we can do a reconstruction of the geometry in real time. So I'm spinning the viewpoint, and this is all captured geometry, and another camera feed just to see the interaction. This is what it looks like when you brush your teeth. And this is the reconstructed geometry of the toothbrush and its bristles. And so this is the demo that we did at Siggraph, and also I just did it at [? ITCP ?] last week. And you can see the Oreo-- interesting properties. It's high resolution. It's interactive, so you can get dynamic 3D information. Now this is after the Oreo. I noticed there were crumbs on the sensor, so you can see the geometry of the crumbs as they roll underneath my finger. And this is Ted Adelson's pulse. He's put his wrist down on the sensor, and it's reconstructed all of his skin texture and also the pulse that's moving. And finally, just for fun, the surface of the bill, and this is all done in real time. I have to press it with a flat plate, and you can see-- maybe not those of you who are looking at an angle, but if you're looking at it straight on, you can actually see the raised printing. So it's got a lot of different properties. I said real time, high resolution, but it's not a traditional 3D scanner because you're not going to be able to scan your face or something with a lot of depth-- can't imagine pressing your face down into the gel to get full 3D. We don't claim it's a 3D scanner anyway. It's really 2.5D-- a height map in real time. So let me go back to the presentation. How does it work? Well, why do you need three different light directions?
Why don't you just have one light and try to reconstruct the shape with one light? Well, that's called shape from shading. And in computer vision that's considered one of the traditional hard problems. So we don't want to have to attack a hard problem, especially if we want this to happen in real time. We use photometric stereo, which is actually a very old technique too, first proposed by Woodham in 1980. And it says, if you have three lights-- and this is a Lambertian sphere lit from three different directions. So you can see the lights are in each of the color channels, red, green, and blue. They provide constraints on the surface normal. We'll talk more about that. Here's the three shaded views, just so you can see. The Oreo's pressed into the sensor. This is an actual picture that I took with it pressed in the sensor: the red channel, the green channel, and the blue channel. And you can see the lights coming from different directions. AUDIENCE: Why don't you have broad sources? They're not-- MICAH KIMO JOHNSON: Yeah, they're actually sort of little area light sources, and we found that we get the best looking results in terms of detail by using specular paints, which have a lot of contrast, with area light sources. So there's a lot of things you can fiddle with. You could go to point-light sources with a Lambertian or a very diffuse paint, and that's better at reconstructing a lot of depth. But to get fingerprints and details on the surface of a bill, we found that it works better to have glossy paint and broad lights. But, yeah, all sorts of things you can fiddle with. Actually, let me just go into a little bit of detail on why you need three lights, just because I think it's interesting. So again, as I said, if you only had one light, you'd have to solve the shape from shading problem. Why is this hard? So here's a shaded sphere with a single light direction. This is Lambertian, so it's a diffuse sphere. And let's say you know that the intensity value of a certain pixel is 0.9. Well, here I've drawn an isophote on the sphere, so 0.9 here. But in fact, the orientation that corresponds to 0.9 is ambiguous, because all of these points have a normal that makes the same angle with the light direction. And that's how a sphere shades in the diffuse case. So just knowing that the shading at a particular pixel is 0.9 doesn't tell you anything about the orientation of the surface underneath. But by having three light directions-- if it's 0.9 in green, 0.8 in red, and 0.7 in blue, or something-- you can pinpoint: you have the intersection of these types of isophotes. And you can pinpoint the orientation from these three lights. So it's really pretty much that simple, where you look at the colors in each channel, and you build a lookup table through calibration. So we calibrate by pressing this grid of spheres into it. We know the size, and we know the geometry because it's a sphere. It's very simple. You press it in, and it looks like this. And already you can see that color denotes orientation. If you see a pure green pixel, it means the orientation is something like this. If you see a pure red pixel, it means your orientation is down like that, and blue is over like this. And then if you see a mixture of colors, it means it's some mixture in between those orientations. So you can build a mapping from color to surface normal, so it's a 3D lookup table. We have red, green, and blue indices, and for each entry you get the gradient, or the surface normal.
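For intuition, here is the textbook Lambertian version of the three-light solve as a Python/NumPy sketch. It assumes known unit light directions and a diffuse surface; the actual sensor uses the measured color-to-normal lookup table described above, precisely because its glossy paint is not Lambertian.

    import numpy as np

    def normals_from_rgb(img, L):
        # img: H x W x 3, one colored light per channel.
        # L: 3 x 3, rows are the unit light directions (from calibration).
        # Lambertian model: I = L @ (albedo * n), so albedo * n = inv(L) @ I.
        h, w, _ = img.shape
        I = img.reshape(-1, 3).T                # 3 x N intensity stack
        g = np.linalg.solve(L, I)               # 3 x N, g = albedo * normal
        albedo = np.linalg.norm(g, axis=0) + 1e-9
        n = (g / albedo).T.reshape(h, w, 3)     # unit surface normals
        return n, albedo.reshape(h, w)

Three measurements per pixel are exactly enough to fix the two degrees of freedom of the normal plus the unknown albedo, which is why one shaded image is ambiguous but three are not.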
So again, if we see color 10 in red, 200 in green, and 3 in blue, we go to the lookup table, and we get out that gradient, something like that. So you get a different color, you get a different gradient. So then we solve a Poisson equation to reconstruct the surface from its gradients. Ramesh mentioned that before. It's fairly common and used in a lot of different problems. But here are some results. So these are the results from the CVPR paper. This would be really challenging to do with laser scanning or traditional photometric stereo. But when you press it in, the skin of the gel essentially paints the object. Everything you press into it looks like it's been painted. So we can reconstruct the geometry, and you get-- this is a head-on view and a view from a different angle, and it's very direct and straightforward. So again, here are these objects that I showed before. And you can see we've captured a lot of the details. Now you don't get color information because you're essentially painting it when you press it into the gel. But in a lot of cases for graphics, just having bump maps, having the texture, is useful. And finally, the $20 bill-- so for the CVPR paper, I did a $20. I stepped it up a bit for the video and did a hundred, but they both work pretty well. So that's the height map that we've measured. And now you can render, and you can change the viewpoint. And you can see that we've captured quite a bit of detail. And that's just the security strip [INAUDIBLE]. AUDIENCE: Going to get into counterfeiting then? MICAH KIMO JOHNSON: Yeah, yeah. You could actually send this to a 3D printer. So recently, we showed this, like I said, at Siggraph Emerging Technologies, and we took sort of a graphics slant, showing how you can measure surfaces. But this could be used for a lot more. So we just made a second video, and I talk a lot in this one, but let me just fast forward to the interesting ideas. OK. So in this video, we're going to look at how we can change the properties of the gel to measure different things, perhaps beyond the limits of the original system. So how can we get even higher spatial resolution? How can we change the sensitivity of the system? And can it be made into a smaller device? Because everyone asked us when they saw this big box that I had-- well, you know, they said, that seems really bulky and cumbersome. Does it have to be that big? So we've made a smaller version, but let me step through some of these results. So spatial resolution: this is an oregano leaf pressed into a very thin piece of this sensor. So it looks like the oregano leaf has been painted. And you can see a lot of detail with your naked eye like this, but then you can put it under a microscope too, and zoom way in. And you can see a lot of these structures, and I don't know the technical terms for them. But you've got these little pieces here, these hairs. We estimate that, I believe, some of these structures are 100 microns. And then at the tip of the hairs, that's perhaps on the order of just a few microns. So you can really see a lot of detail just by pressing this gel into it. AUDIENCE: Wait. Is this the 3D reconstruction or just-- MICAH KIMO JOHNSON: This is not a 3D reconstruction, but the same principle could apply. It's just a different piece of gel, and we just took a picture. If we put the three lights in and calibrated, we could get the 3D geometry out of this.
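The Poisson step mentioned at the start of this passage -- going from the looked-up gradients back to a height map -- can be sketched with a standard FFT-based least-squares integrator (Frankot-Chellappa style). This assumes periodic boundaries and is one common way to solve it, not necessarily the solver the authors used.

    import numpy as np

    def integrate_gradients(p, q):
        # p = dz/dx, q = dz/dy; returns the height map z that best fits
        # them in the least-squares sense, solved in the Fourier domain.
        h, w = p.shape
        u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                           np.fft.fftfreq(h) * 2 * np.pi)
        denom = u**2 + v**2
        denom[0, 0] = 1.0                       # avoid divide-by-zero at DC
        Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
        Z[0, 0] = 0.0                           # absolute height is unconstrained
        return np.real(np.fft.ifft2(Z))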
Calibration might be a bit challenging, and getting the lights configured, but in principle, this is no different from what we did before. So then we looked at sensitivity. A lot of these are just Ted Adelson's results. He's the one who came up with the idea of the gel and this. And he's been fiddling with these things in his basement. I did the 3D reconstruction stuff. But I didn't even know he was thinking about all these different ideas, and he sent me these videos. So this video is his car driving over a piece of this sensor. So you think, how did he do this? Well, first of all, why he did it, I don't know. But the idea is that the gel that a lot of people have played with in our demo is soft to the touch. So you touch it, it deforms, and you can see your fingerprints and things like this. But you can get these rubbers in a variety of strengths, a variety of elasticities. So he went out and got one that was as hard as a tire itself. And then he bought one of these ramps that you use to change your oil, so you can drive a car up on it. He cut a hole in the top, put some of this gel down, put a mirror underneath that had a camera, and then drove his car over it. You can see these are some pebbles stuck in the tire. This is the view of the tire. Now this is not reconstructed 3D geometry, but again, it could be that. You could actually measure tread wear or something like that. It just shows that the system can handle 2,800 pounds of car on top of it. But on the other side, this is him pressing, just poking at it with a hair. So this is a very soft piece of gel, and he's just touching it with the end of a hair. Or on the other hand, this is some soap bubbles viewed through clear glass. So this container was clear glass with soap bubbles inside. But then he's put sensor material on the other side, and you can see that just the pressure of the soap bubbles is enough to deform the gel. So on the one hand, we had a car on top of it. And with the pressure from the car, we could get geometry there. On the other extreme, we have soap bubbles, and you can see the geometry, and you can even track how they form and how they change. So there's a wide range of sensitivity to do anything in between. This is somewhat disturbing. So in talking to-- and he sends this in his email, and I open it up, and it plays. It's, like, disturbing. And talking to technology licensing, they thought, wouldn't this be interesting for medical imaging? So there are some exams, like prostate exams, that are done by a doctor putting their finger in, feeling-- so they're touch-based. And then they write down what they feel, but there's no record of that. There's no image that they take, nothing like that. This is a touch-based sensor that can capture an image of what it touches. So in theory, you could build it small enough to capture the geometry of internal tissues. But rather than make something that we would put internally, Ted just licked it to show internal tissue being measured. Let's see. We've got, oh, yes, worms. These are worms on the surface of the sensor. And looking from the top, you can't really tell how the worms move very easily because they're translucent. You can't see the details, so it's not so obvious what they're doing. But at the same time, he was recording underneath. So this is the view from the sensor. And you can see that as the worm expands and contracts, you see these protrusions here, these structures that stick out and grip. And I learned they're called setae.
And perhaps people who are interested in studying worms might be interested in seeing or measuring how they expand and contract and how these little structures come out. So we can just see it visually, much clearer, because we've essentially painted the worm. And finally, to address the question of can we make it smaller, Ted made a compact version. And let me explain this one a little bit. This is a dental camera, so it's somewhat like a toothbrush. But inside here is a camera and a light. It's very small. And so on top he built a lens and a piece of sensor material. You can see that. It has a little bit of memory, this one. But when he touches it, it's the same principle as what we've done before. It's got some of the sensor material. But now it's in this handheld device, and it could be made even smaller if we just engineered smaller lights, smaller cameras. This was just something off the shelf that we bought, a dental camera, so it's pretty small. And we can show that you just touch it to things, and you can see the texture and geometry. And with three lights, we could do our same trick of photometric stereo and actually measure the geometry and reconstruct it. But this one is kind of fun to play with. You can see this might be useful for skin texture. So one of the last remaining challenges of doing realistic human rendering is getting this microstructure of skin-- the wrinkles, the pores, all the things like that. And people have built some very expensive domes to try to measure a lot of the geometry, but they can't get down to this level of detail. We can get it locally, but we can't get the low resolution. We can't get the actual structure of the face, so maybe combining these techniques could yield something interesting. So here's stubble. You can see the stubble actually has quite a bit of geometry, more than you might think, and his hair. That's basically the touch-sensor system. Does anyone have any questions? AUDIENCE: Yeah, I do have a question. So if you use that little lookup table to get some information and get the direction in the wrong findings, but you may have several problems separately or from just a different path, and you need to straighten that [INAUDIBLE] MICAH KIMO JOHNSON: Right, so-- [INTERPOSING VOICES] AUDIENCE: How would you get that right? You can say, OK. How would you distinguish this [INAUDIBLE] MICAH KIMO JOHNSON: You're relying on the ability to measure orientation accurately. So we're measuring orientation, the surface normal. And we've verified that for relatively shallow orientations, meaning orientations that basically face the camera, we're pretty accurate. As you get to steep orientations, we become less accurate. And we could employ other techniques to have more accuracy there, like maybe more lights, more views. But in general, we're not getting steep normals accurately. So we reconstruct the depth from the normals, so there might be some bias. Let's say the surface had a dip like this. If you misestimate this normal, you might only reconstruct it like that, so you can get errors like this. But if you're mostly interested in high-resolution detail, this device is pretty good. If you need to get those low-resolution depth estimates accurate, you might need more views or more lights than what we're using. AUDIENCE: What is the state-of-the-art in touch sensors that you maybe drag over something, that aren't camera-based? I mean, is there anything in that? MICAH KIMO JOHNSON: Yeah, I think the state-of-the-art is frustrated total
internal reflection. That's something which-- RAMESH RASKAR: [INAUDIBLE], NYU? MICAH KIMO JOHNSON: NYU. But [INAUDIBLE] to it because he was on it. RAMESH RASKAR: Now you're talking about the-- MICAH KIMO JOHNSON: Oh, there's the Microsoft Surface, but then there's this new-- RAMESH RASKAR: No, no. Ken Perlin has a new force-based-- MICAH KIMO JOHNSON: Oh, it's force-based. RAMESH RASKAR: --multi-touch sensor, which he has shown in-- AUDIENCE: Is it like gel? RAMESH RASKAR: Yeah, the gel one, and then the total internal reflection one is purely optical. MICAH KIMO JOHNSON: I was going to ask about that. AUDIENCE: Is there anything where you directly measure? I can imagine a line of points where you can measure the height as they get pressed, and you drag your finger across it. Is there anything like that? RAMESH RASKAR: How would the height change help you? AUDIENCE: Like where you're directly measuring the height instead of inferring it based on images. AUDIENCE: Oh, yeah, there is. The [? Soundscape ?] demo. MICAH KIMO JOHNSON: I think that Ken Perlin's one does measure pressure, so that you can get pressure variation directly, I think. [INAUDIBLE] But it's low-resolution. So I haven't seen anything that's high-resolution like ours, with fingerprints. RAMESH RASKAR: They can do a [? palm, ?] but they cannot do [? ridges. ?] MICAH KIMO JOHNSON: All right. RAMESH RASKAR: Any other questions for Kimo? MICAH KIMO JOHNSON: Yeah. AUDIENCE: How do you come up with the ideas? RAMESH RASKAR: So what was the question? MICAH KIMO JOHNSON: How do we come up with the idea? So I think it's a couple of things. Ted has young children, and he was fascinated with the idea that touch is actually a very important sense to infants. They're touching things to their lips-- [INAUDIBLE] a lot of sensory organs that they're touching. And touch is relatively unstudied compared to vision and other things. And one of the reasons is that there isn't a good touch sensor, something that measures touch the way skin does. It deforms when something presses it. And so he thought, well, how can I build a touch sensor, something that deforms like skin? I think at the same time he was having foot problems, so he was just separately buying these gels to make insoles. And then these two things mixed in his head, and he thought, oh, I'll make the gel. RAMESH RASKAR: Did you pass this around? MICAH KIMO JOHNSON: I did. Yeah. RAMESH RASKAR: I was fortunate enough to have one for myself. But if anybody wants to do a project on this, you can collaborate with-- [INTERPOSING VOICES] MICAH KIMO JOHNSON: Or if you want to actually play with the demo in person, we have it set up in our lab. RAMESH RASKAR: Or he could be a mentor if you want it to be a final project in the class. AUDIENCE: So [INAUDIBLE] have some idea in the space? It can-- MICAH KIMO JOHNSON: We are thinking of what the next generation of this will be, and we're shooting for a Siggraph project based on this. RAMESH RASKAR: All of our [INAUDIBLE], that's why the class is in the fall. And the final projects are due the 4th of December. We have one and a half months to watch. Thanks, Kimo. MICAH KIMO JOHNSON: Yeah.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_9_Computational_imaging_a_survey_of_medical_and_scientific_applications.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. DOUGLAS LANMAN: So today's talk is on computational imaging, which, as you'll see, there's a lot of inspiration you can take from this work. And I'd just like to mention a lot of these slides came from Marc Levoy's SIGGRAPH course. So if you want any more detail, go to his website, and you can actually see the original course material for this. So the basic idea is that science is really driven by our technology. This is not surprising to anyone in the room. The classic example Marc Levoy put here is that Leeuwenhoek plus microscope equals microbiology. And so really, in some sense in the sciences, you're waiting around for new technology to make new discoveries. We're waiting to put satellites up. We're waiting for birds to stop ruining the Large Hadron Collider. Technology is what we're waiting for. And so the question Marc Levoy asks is, what's the most important instrument in the last century? Arguably, in terms of science, what would some people put up? If you could only choose one thing to take with you on a desert island where they do science, what would you take? RAMESH RASKAR: Microwave. AUDIENCE: Computer. DOUGLAS LANMAN: Exactly. This is not news to anyone, that the computer is really arguably the greatest scientific device. And so what can we really do with this? RAMESH RASKAR: You know, they asked people in the general population which is the most important technology that has changed their life. And they said the remote control and the microwave. DOUGLAS LANMAN: The microwave, or a remote control microwave, is the greatest product. So Professor Horn has said-- and this, of course, is very similar to the ideas of this class-- that computational imaging is simply imaging methods in which computation is inherent in image formation. And so when we look at medical imaging and scientific imaging, really, this is the key story in the 20th century. And so in some of this talk, I'll show you what the modern way is to take this kind of image using computed tomography. And I'll show you how they did that 100 years ago, before they had computing power-- how they did that using mechanical computing. And so I think that's an interesting process, to see how computation has really revolutionized how we take pictures in medicine and science. And so if we look out well beyond the box we're in, which is optics-- most of what we talk about in computational photography is this wavefront coding, light field photography, holography, things you've looked at in your homework. But if you open the box up a little bit, these same ideas really come from a much broader field. And so the first part I'll talk about is medical imaging. I think the key story in medical imaging is tomography. Really, that's the central problem. How do you take a cross-sectional image of an object non-invasively? And you can do that in two ways. You can use transmission or reflection tomography. We'll talk about both. And then, that story is repeated throughout the sciences. Once they understood tomography, they said, OK, we can solve all of these problems. We can look at geophysics. The exact same methodologies can be applied there, which is really what's happening in optics now.
In the late '90s, they found this link between tomography and optics. And then all of a sudden, you have light field photography. Geophysics did that. About in the '80s, they discovered that link and they invented all of these methods for unobtrusive measurement underground. And then in applied physics, we'll talk about when the problems become much more difficult. For light field photography, we look at ray optics. Once we start considering coherent optics or scattering tomography, that's when the problems really get interesting. And I think for those of you working on final projects, really looking at scattering tomography and its link to light field photography is where there are some exciting ideas. And hopefully, some of you are already thinking about that for your projects. And then, we'll jump away from tomography. Again, I'm going to talk about all these things. But then, we'll move to something else, which is, in biology, the story isn't tomography, necessarily. It's, again, how can you take a cross-sectional picture? But the tricks are very different. So we'll talk about confocal microscopy, deconvolution, and deep learning, which some of you have probably seen already. And the final area I'll end up on is astronomy, which is really where most of my work comes from. Many of these ideas, especially in the biological sciences, involve refractive optics. And then, we'll look at how you can image without refraction. And so again, for X-rays and gamma rays, you can't build refractive optics. And so then, you get into the ideas of coded aperture imaging and interferometric imaging. And then, the rest of this is stuff you've already seen. How do you build panoramas? How do you take light field photographs? And so this is a great example of medical imaging and the idea of tomography. And so the problem statement is very simple. You want to non-invasively image inside of a living object. And so of course, if any of you have been to the CT scanner at the hospital, you might have had them produce images like this. So the question is, how can you do this without simply slicing open a person and taking a photograph? And so in general, this involves basically taking the last 100 years of mathematics and solving an inverse problem. We'll talk about what that inverse problem is. Then, there are other ways. Instead of inverting a system, we can, of course, use endoscopes, thermal imaging, microscopy to just simply directly measure inside the body. And I think the key thing here is to notice that the mathematics behind tomography, behind all of these devices, is essentially the same. CT scanners, MRI machines, PET scanners, ultrasound-- they all use the same underlying mathematics. The only difference is the wavelength of radiation, to zeroth order. And of course, the other main areas are EEG and MEG. And so if you think about designing the non-invasive medical device, the first problem you have is ionizing radiation. CT scanners use X-rays, whereas MRIs use radio waves. So that's the first major benefit. But really, the other problems to think about are, again, minimizing the invasiveness-- not inserting devices or probes-- and also improving temporal and spatial resolution, and finally, making these things very inexpensive for the developing world. So now, let's just jump into what tomography is. Well, it's a Greek word, Greek origin. And it's simply the word tomos, "a section" or "cutting," plus graphy, as in photography-- taking a picture. So it's a cross-sectional picture of an object.
So if you have some volumetric function in 3D, tomography will take a slice of that 3D function and return a 2D image. That's our goal. The question is, how exactly can we do that non-invasively? And it's a very simple idea. The idea is-- this is what I'm describing-- known as transmission tomography. There's also reflection tomography, but we'll get to that later. So the idea of transmission tomography is we have some density function, let's say our human body. And it absorbs electromagnetic radiation. And so we're going to put a single point source in the scene, like an X-ray source, have it emit isotropically into the world, and put a film plane sensitive to X-rays on one side of the object. And our hope is that some wavelength of radiation that we can find will transmit ballistically through the object. So there'll be little scattering, only absorption. So along a given ray from the source to our film plane, the electromagnetic radiation-- the photons, if you will-- will simply travel through the object. And the only thing that happens is that they're attenuated somewhat by the object. That's our assumption. Now of course, in the visible wavelengths, that's not true for human bodies. You can't do this. And so that's why they chose X-rays, because they're easy to generate, easy to build detectors for. And for the most part, human bodies satisfy these scattering assumptions. And so if you look back at CT scanner history, if we go back 30 to 40 years, really, you have two technologies competing against each other. But the basic idea is the same. We're going to have some volumetric function, like our body. And we're going to take many projections of it. Because we're trying to estimate a three-dimensional function, we need to make a well-posed system to invert. So we need to somehow sample the variation. So a single picture will only take a two-dimensional projection. But then, if we move the source to a new position, we'll get a new projection. If we take a full set of projections, we hope that there's enough data to invert that system. And we'll see how you do that. But really, before they invented the simple fan-beam tomography, there was actually a competing idea. The reason it became popular is that the mathematical inversion is actually easier. And the idea is that instead of creating an X-ray point source, we have some cathode tube that generates X-rays. And then we collimate them, say with a series of lead fans. And then, we get a set of parallel beams. So this is the parallel-beam tomography problem, which is really the easiest one to solve mathematically. But now that we have plenty of computing power, it's very easy to solve the other system, which is easier to build mechanically, which is to simply have point X-ray sources rotating around the patient. So that's our data set. It's simply a set of projections. So now, we have a mathematical problem. It's very simple. There's some underlying density function. And so again, a ray is going to pass through this density function. You evaluate the line integral. And that will be the absorption. And our task is to invert that set of line integrals to recover the density function. So if you just take the case of the simple parallel-beam projection, we have an X-ray source that's collimated, a film plate on the opposite side, and it rotates 180 degrees around the patient. And that gives you what's known as a Radon transform. Maybe this is sounding familiar to a lot of those in the room.
So this Radon transform, again, is simply the projection of the density function along a given direction. So this is the data set that a CT scanner will give you, for the most part. And then if you compare that to the fan-beam projection, where you can imagine putting a point source in the corner and then rotating it, for the most part, it looks almost identical. It's just this slight nonlinear transformation of the parallel-beam projection. So now that we have ample computing power, it's very easy to resample this data set to be equivalent, and then use the inverse Radon transform to solve. But we still don't know how to invert this problem. If I give you this image, what algorithm can you apply to recover the original image? So that's really where the insight was. And so I recommend-- RAMESH RASKAR: Let's make sure everybody's up to speed on the forward process. Is everybody clear how you go from the first image to the second image? AUDIENCE: Wait. This one to that one? DOUGLAS LANMAN: Yes. RAMESH RASKAR: First one to second, that's the easiest one. DOUGLAS LANMAN: Is that clear? I mean, just mechanically, it's very easy to simulate. Imagine this line is our collimated X-ray source. So it's just emitting parallel rays along the y-axis. And this is our film plane. And so if I just simulate the light going through here along a given ray, I perform the line integral of this function. That gives me the absorption along that ray. So a given ray would be some rotation angle and position. RAMESH RASKAR: So the first column of that middle image-- DOUGLAS LANMAN: Is the-- RAMESH RASKAR: --is the first image. DOUGLAS LANMAN: --parallel-beam projection. And then we'd rotate both of these slightly, say by 90 degrees. That would bring us already here. We take another projection. Now, we just get the series of projections as a function of angle. RAMESH RASKAR: Do you have a movie, by any chance? DOUGLAS LANMAN: I don't have a movie in this one, no. Yeah, we'll get to it at the end. I probably should put that earlier. And so then, the fan-beam projection is very simple to simulate, as well. Instead of a line source, we have a point source in this corner, or in the center here. And we have a set of fan beams going out. And then, we rotate those too, which is why you get this nonlinear transformation. So that's our input data set. Now we're trying to invert the system and go from one of the right images back to the original image. RAMESH RASKAR: And this is a common problem in a lot of inverse imaging situations, right? Because as Doug said, you don't want to slice the patient. So all your observations are from outside, but you want to infer what's inside. So from external observations, you want to infer what's internal to the patient, or to an object, or to an oil well. DOUGLAS LANMAN: Right. So now, we'll go into the math. There's really one nice result. And again, if anyone's working on this, you should really look at Slaney and Kak. They have a really nice book that's online. I think it's out of print now, but the whole book is online, and these figures are there. But really, the core idea behind this-- and you've probably seen this in your own homework in light field photography. I think you did refocusing, I'm guessing? RAMESH RASKAR: Mm-hmm. DOUGLAS LANMAN: I don't know if you did Fourier-domain refocusing, or just-- RAMESH RASKAR: No, just [INAUDIBLE]. DOUGLAS LANMAN: OK. So you did the shift-and-sum refocusing. And it turns out, this is the first example.
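The forward process just walked through is easy to simulate. A minimal Python/NumPy sketch, assuming a 2D density array; each column of the output is one parallel-beam projection, matching the "first column is the first photo" convention used in the discussion:

    import numpy as np
    from scipy.ndimage import rotate

    def sinogram(density, angles_deg):
        # Rotate the density and sum along y: each sum is the set of line
        # integrals for one collimated source orientation.
        cols = [rotate(density, a, reshape=False, order=1).sum(axis=0)
                for a in angles_deg]
        return np.array(cols).T                 # detector position x angle

    # e.g. sino = sinogram(phantom, np.arange(0.0, 180.0, 1.0))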
Again, Marc Levoy originally put these slides together. And he had a very nice paper-- I think it was 2005, the thesis. But the idea was that for your light field refocusing, you did shift-and-sum, which is computationally expensive. Using the Fourier projection-slice theorem, you can actually improve the running time of that algorithm. You'll probably see that once we get to this. Let's just consider the tomography problem again. We have this density function in 2D. Let's just analyze this in 2D to begin with. So we have some density function of x and y. So this is our absorption function. And then, we're going to take a projection. Again, this is a parallel-beam projection along some angle. And so if we're just projecting along y, then the projection along y is simply the integral of this function over y. Does that make sense? So we're projecting this function onto the x-axis. Right? That's our first step. That's obvious. Now, the question is, let's look at a Fourier-domain analysis of this problem. So let's take the two-dimensional Fourier transform of our original density function. And so you can write that. It's very simple. Everyone remembers this. The Fourier transform is the function of the two frequency parameters. It's simply the original function times this complex exponential, which is the sum of the coefficients in each direction. And here's the key insight. What we'd like to do is somehow relate the properties of this Fourier transform to a slice in our primary domain. And so this is really the neat trick behind the Fourier projection-slice theorem. Let's just look at this 1D slice along the kx frequency axis in the Fourier domain. And so that's just given by substituting 0 for ky into our previous expression. So essentially, this kx and ky can be thought of as a vector. And we're just selecting the one along x. And so here, you see that our y term drops out. And what this gives us is the Fourier transform along the kx-axis, just a 1D slice. So the next step is really where all the magic happens. It's just regrouping terms in that integral. So if you look at this, you can see that it's separable. And so we can do the y integration, regroup it, and then do the Fourier transform. And so now, you can just substitute the first expression, and you get this very nice result. So can anyone interpret this? You see what this means? I can explain it to you, but it'd be nice to see if people are following. RAMESH RASKAR: So remember, we started with the first row is p of x, which means it's the photo taken along the x direction. If we put a sensor, we get the p of x photo. And then, you go through the whole math. The second equation is the Fourier transform. The third equation is the Fourier transform along-- is it along the y-axis? DOUGLAS LANMAN: Along, dropping it. So it's the Fourier transform along x. RAMESH RASKAR: Yeah, along x. And then, what you get back at the bottom is the p of x term, which was the original photo that you started with. DOUGLAS LANMAN: Right. So looking at this expression, there's now this relationship between the frequency domain, the Fourier transform of our density function, and the original function. Do you see what that relationship is here? What is this saying? AUDIENCE: The slice is just the Fourier transform of the original photo. DOUGLAS LANMAN: Exactly. Very good. So the Fourier projection-slice theorem-- this is the key insight.
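The theorem is also easy to verify numerically, which may help if the derivation went by quickly. In this sketch the density is just a random array, the projection is taken along y, and the matching slice is the ky = 0 row of the 2D transform:

    import numpy as np

    f = np.random.rand(128, 128)                # any 2D density function

    proj = f.sum(axis=0)                        # project along y onto the x-axis
    lhs = np.fft.fft(proj)                      # 1D transform of the projection
    rhs = np.fft.fft2(f)[0, :]                  # ky = 0 slice of the 2D transform

    assert np.allclose(lhs, rhs)                # Fourier projection-slice theorem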
It's that if I take a 1D projection of my density function and take its Fourier transform-- a 1D Fourier transform-- that's equal to a slice of the 2D Fourier transform. So we have this insight. So how could you use this to now solve our problem? Let's go back. Do you see how you could use this to now go from this image, which is a parallel-beam projection, back to the original density? Can anyone describe the algorithm roughly? AUDIENCE: Just take the inverse? DOUGLAS LANMAN: Take the-- what do you mean? Of? AUDIENCE: --of all the p of x's. And then reconstruct all the slices of-- DOUGLAS LANMAN: OK, so-- so here's your image. What's the first step? AUDIENCE: Take the line, one line. RAMESH RASKAR: Every column here is a p of x. DOUGLAS LANMAN: OK. Great. AUDIENCE: And take all of them, take the inverses, inverse of the p of x, inverse for the transform. DOUGLAS LANMAN: For the Fourier transform. AUDIENCE: Yeah. DOUGLAS LANMAN: OK. AUDIENCE: And then, you will get kx. DOUGLAS LANMAN: Yes. AUDIENCE: And then, keep rotating all the kx's to generate [INAUDIBLE]. DOUGLAS LANMAN: That's absolutely correct. Great. And then the last step? AUDIENCE: [INAUDIBLE]. DOUGLAS LANMAN: Exactly. So let me just summarize that up. That's great. That's exactly what they do. So the algorithm is very simple right now. We're going to get a slice for each angle. So you simply take the projection, take its 1D Fourier transform, and initialize an accumulator that's all zeros. Assign the Fourier transform, then rotate your source and receiver. Take a new slice, 1D Fourier transform, assign it, and you're going to build up this whole space. After a full 180-degree rotation, you'll have populated this entire volume. And you're going to inverse Fourier transform and get everything back. So I think I have a picture here of how that works. Yeah. So you just described this algorithm correctly. So now you can see basically that procedure I outlined. At each step, we're going to populate our accumulator. Once it's full, we can inverse 2D Fourier transform, and we get our image. So what are some problems you can see? This is the textbook-- and there's the textbook again-- solution to the inverse Radon transform. So what are some limitations you can already see, practically? If you built a CT scanner that used this algorithm, what would be some of the limitations? RAMESH RASKAR: The resolution of the angles at which you can take-- DOUGLAS LANMAN: Exactly. So you see that the problem here is we need to fully populate this transform. We need to fully populate our 2D Fourier transform before we inverse transform, or else we'll have missing data and artifacts. Does that make sense? And so this insight is really at the core of tomography. You can see a lot of the limitations. So the first one you mentioned is resolution. You mean angular resolution, I think. So if we built a CT scanner that took a projection every five degrees, or every 20 degrees, we'd have lots of zeros left in this data set. RAMESH RASKAR: If we take every 20 degrees, you're filling up this image only at every 20 degrees. It's 20, 40, 60, 80, and so on. DOUGLAS LANMAN: So you can imagine the gaps will be huge. Near the origin, you'll be OK. Your low-frequency reconstruction will be all right. But your high-frequency detail will be lost, or unknown. You'll have many zeros out here. So that already implies that CT scanners need to have a very dense set of rotations. Or, you need to have some prior about the scene. But that's one problem. What's another problem you can see?
So if I built a CT scanner, for instance, that could only scan a limited set of angles, not a full 180-degree rotation, then you'd also have a lot of missing data here. And so that's the second. Having sufficient angular sampling is one problem with CT scanning. The other is limited baseline. If I only rotate over a small set of angles-- because maybe in whatever system I'm using, it's difficult to rotate over a full 180-degree rotation-- then I won't be able to get a complete reconstruction. So really, keep in mind those two limitations of CT scanning. You need to have sufficient angular density and sufficient baseline. And they actually map directly to problems with light field cameras. But then, let's just step back again and think about the [INAUDIBLE]. So, again, our algorithm is we rotate our source and receiver around the object. If it's fan-beam or parallel-beam tomography, you end up with the same thing, where you take the Fourier-- the 1D-- transforms, and you assign them to the frequency domain. And we may have some zeros. But there's a third problem, which I didn't describe. And you can see it here. Again, the algorithm was we initialized an accumulator to all zeros. We take the 1D Fourier transform, accumulate it, then rotate, add the next one, add the next one, add the next one. You can see it right here with the opacity. I made all of these equal, and you can see we're building up this really high density in the center. Does that make sense? And so when I inverse Fourier transform, essentially, my low-frequency terms have been boosted. And so we'll lose edge contrast if we don't do anything here. So the third problem we identified with the inverse Radon transform is that you can't simply evaluate this accumulator and inverse transform. So here's an example. So if you only take it every 30 degrees, you can see, versus 5, versus 1 degree. There's a huge difference in the density of angular sampling. But then, come back. We have this problem where we get this hot spot in the center. And so if you look at this, it goes as the inverse of the radial frequency. So as I go higher in radial frequency, I don't have this problem as much. And so the solution-- it's up here, but does anyone want to tell me what the solution is to solve the hot spot? AUDIENCE: [INAUDIBLE]. DOUGLAS LANMAN: Multiply by the inverse, right? So that inverse is interesting. So the direct inverse of this function is simply omega itself, the absolute value of omega. So as a frequency-domain filter, it's simply saying multiply by a gain which is proportional to your radius away from the center of the image. So what kind of filter is this? AUDIENCE: High-pass. DOUGLAS LANMAN: A high-pass filter, right? So the algorithm is telling us we're multiplying by this high-pass filter, which means what we're doing in the primary domain is sharpening. Does that make sense? We're convolving with a sharpening filter. And so that adds some other problems. Basically, this filter amplifies high-frequency noise. Right, so in general, in CT scanning in the '70s and '80s, basically, they examined apodizing this filter-- having it trail off at some frequency, so you don't arbitrarily amplify high frequencies. So again, this is the textbook solution. But it contains all of the ideas of CT. And certainly, how they apply to light fields-- all of these ideas are there, as well. In light field refocusing, you need to have these filters to prevent aliasing. In light field refocusing, you need a baseline to sample your light field, et cetera.
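The |omega| compensation described above is a one-liner per projection. A NumPy sketch -- a pure ramp, with no apodization, so in practice you would roll it off at some cutoff to avoid amplifying high-frequency noise:

    import numpy as np

    def ramp_filter(projection):
        # Multiply the projection's spectrum by |omega| to undo the 1/r
        # density pile-up at the center of the Fourier accumulator.
        freqs = np.fft.fftfreq(projection.shape[-1])
        P = np.fft.fft(projection)
        return np.real(np.fft.ifft(P * np.abs(freqs)))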
So for those of you working on final projects, you should really try to think about the connection with tomography. RAMESH RASKAR: Are you going to talk about the teeth? DOUGLAS LANMAN: Oh, yeah. Sorry, I almost missed that. So the whole assumption that we made at the very beginning is that we could find some wavelength of light, or electromagnetic radiation, that travels on a ballistic trajectory through our object. Right? So our object becomes semi-transparent and simply absorbs. But that's not true for the human body in the visible spectrum, for the most part. For bones, skin, and various cells, it's for the most part true that X-rays travel on a ballistic trajectory. There is some scattering for bone you need to handle. But can anyone guess what these are here? AUDIENCE: Those are fillings. DOUGLAS LANMAN: Yes. So the problem with fillings is they're metal. And so they, for the most part, scatter and absorb X-rays. And so you'll see we get these artifacts where essentially we have missing data. And so when we reconstruct, we get these halos around the objects. Yeah. AUDIENCE: I think they're called starbursts in the [INAUDIBLE]. DOUGLAS LANMAN: OK. So you get these starbursts, which have to do with the number of samples you took. Because you can see the starburst pattern, like in the 30-degree case. If you have a limited angle, you can actually get starburst patterns. You can see it also in that one on the left. And so then the question is, how do you handle these occlusion functions? And there are some simple and more complicated methods to handle this. But modern CT scanners, for the most part, can now scan fillings using relatively simple algorithms. And so now, putting all this together, we've solved the problem in flatland. So now, we can build a CT scanner to do the volumetric scanning task. It's very simple. We're just going to solve the problem repeatedly in two dimensions. And so how many here have had a CT scan? Anyone? No? Right. The older ones of us have had CT scans, I guess. So I think this is great. RAMESH RASKAR: The volume? We have the volume for this. DOUGLAS LANMAN: So if you go to YouTube-- this is a CT scanner opened up. So maybe I should stop it and go back to the beginning, just to explain what you're seeing. RAMESH RASKAR: Put the audio here. DOUGLAS LANMAN: Oh. Let's see. We'll see. RAMESH RASKAR: The audio is very important. DOUGLAS LANMAN: So the radiologists at the hospital have to take apart the CT scanner periodically and inspect it for damage, because it's essentially a high-quality motor, like a jet engine. So you'll see in a minute why General Electric and other industrial jet engine manufacturers make CT scanners. But this is the CT scanner with the cover removed. And so basically, the patient lies on this gurney. And they're slowly translated through this doughnut. And so you'll see-- it's a little blurry in the still, but this is the X-ray source. It's a point source, so this is a fan-beam projection. And over here, with a bunch of fans on top of them, are the X-ray detectors, because they generate a lot of heat. And so basically, this thing will rotate around the patient as the patient translates slowly. So in the frame of the patient, the X-ray source will move on this helix. And at each orientation, it will take a fan-beam projection. And you can resample that data, if you have a dense enough helix, to be just a set of parallel-beam projections. To each of those, independently apply the inverse Radon transform. That's not exactly what they do.
But if you were to implement this in MATLAB, that would be basically good enough. And so then, you'll get this series of cross-sectional images. Then, you can simply do segmentation and thresholding-- for instance, high-density regions correspond to bone, other properties correspond to blood vessel networks. So that's the basics of the CT scanner. So the radiologists and the techs often have to take this apart to inspect it. And so you'll notice this thing is quite scary with the cover off. And you can listen to what the radiologists are saying as they're inspecting this. So again, it spins up. [SPINNING SOUND] TECH ON VIDEO: Shit. [LAUGHTER] DOUGLAS LANMAN: So if you ever get a CT scan, that's what's happening. But of course, the safest place to be is inside the doughnut. Outside's the problem. AUDIENCE: Outside the room. DOUGLAS LANMAN: Outside the room is a better place. You can see that there are a lot of problems they have to solve, mechanically. So they have-- it's a lot like a hard drive. They have optical encoders here, which track the rotation velocity. And then, they have what's known as slip rings. Because basically, one of the evolutions, believe it or not, was just getting the data off these detectors. And so actually, the bearings are used as a circuit to just transmit data across the bearings. And believe it or not, that was a big insight and a huge patent. Before, they actually could only rotate once, and the wires would get tangled up. So having the slip-ring bearing transfer mechanism was the big evolution that made million-dollar CT scanners possible. And so most of these cost $1,000,000. And as you're watching this, just to point out, again, the story I'm trying to give you is that this very basic idea of tomography and the inverse Radon transform gives you almost all the medical scanning devices that we were so proud of in the 20th century. You know, of course, X-rays have given you CT. If you move to gamma rays, you get SPECT. If you look at positron emission tomography, it's just a different wavelength of light. And again, if we move to ultrasound-- so we become acoustic-- that's really when you get into ultrasonography, which is a reflection-mode version of this. You can reformulate all of these equations to be reflection mode instead of transmission. And that's [INAUDIBLE] but I actually won't go into it. So the takeaway message is: choose your wavelength, apply the inverse Radon transform, and you too can image non-invasively. RAMESH RASKAR: So just as a thought experiment, can you see a relationship between this crazy jet engine and a light field camera? We'll come back to it, but-- AUDIENCE: Sort of. RAMESH RASKAR: --just keep that in the back of your mind. AUDIENCE: Just take the Fourier transform. [LAUGHTER] RAMESH RASKAR: That's it. That's all that is. DOUGLAS LANMAN: That is it. RAMESH RASKAR: And imagine now, the light field camera is compact and something you can carry in your pocket. The question is, can you carry the CAT scan machine of the future in your pocket? DOUGLAS LANMAN: Right. So we'll come back to Ramesh's puzzle at the end and see if anyone's solved it. [LAUGHTER] So before I go on, there's a slide later that I need to explain this for. I only explained the frequency-domain version of this. And so for various reasons, if you don't have a fast FFT-- which you do, of course, now-- there's a purely spatial-domain algorithm.
So remember, our algorithm was to take all of these projections, take the 1D Fourier transforms, populate our Fourier transform, and invert. But you can actually do this without ever taking a Fourier transform. Does anyone see how you'd do that? So it's a purely spatial-domain algorithm. You get the set of projections and you directly reconstruct the density function. AUDIENCE: You just accumulate the projections? DOUGLAS LANMAN: How so? Let's go back to our image. So if you can tell, I overuse the Socratic method. But I like you guys to figure things out, because I already know the answers. So you're given this data. There are no Fourier transforms now. How are you going to get back to this? RAMESH RASKAR: Without the Fourier transform, this time. DOUGLAS LANMAN: Right? This is problem B on the final. Part A was the Fourier transform. Now, you're not allowed. RAMESH RASKAR: On the midterm, you mean. DOUGLAS LANMAN: The midterm, OK. RAMESH RASKAR: Which is next week-- there will be a question on tomography. DOUGLAS LANMAN: Yeah. AUDIENCE: I'm just shooting in the dark. DOUGLAS LANMAN: That's great. AUDIENCE: But I guess I'd take the first [INAUDIBLE]. And since that-- no, wait. Actually, rotation [INAUDIBLE]. So I'll take the first row like this. RAMESH RASKAR: First column. DOUGLAS LANMAN: Row or column? What do you want? RAMESH RASKAR: First column is the first photo. DOUGLAS LANMAN: First column is the first projection. AUDIENCE: Oh. OK, first column. DOUGLAS LANMAN: OK, you've got the first column. AUDIENCE: And then, I'll basically create rays that are representative of those individual vector densities and just shoot them back out. DOUGLAS LANMAN: OK. AUDIENCE: Then, I'll take the next column. And then, I guess it's a slightly different angle. So I'll-- DOUGLAS LANMAN: Just shoot them back. So you're going to start with an accumulator, again, that's all zeros. AUDIENCE: Yes, it's all zeros. DOUGLAS LANMAN: That's your initial estimate. AUDIENCE: And I'm going to take one line of that and basically just fill the entire thing with that. DOUGLAS LANMAN: So you're going to take that value and just replicate it. AUDIENCE: Basically. DOUGLAS LANMAN: OK. AUDIENCE: And then, I'll do that for the second one, but at an angle. DOUGLAS LANMAN: OK? AUDIENCE: And I'll normalize it. DOUGLAS LANMAN: Yep. AUDIENCE: And hopefully, I'll get something that looks like the skull. DOUGLAS LANMAN: OK, that is the algorithm. It's called filtered back-projection. Can you prove why it's correct? [LAUGHTER] Does everyone understand why that would work, even intuitively? Intuition works pretty well here. Imagine I just had a big circle here. It's absorbing 50%. It's just a circle, right? So what I'll see here will just be a big rectangle, right? RAMESH RASKAR: Big cylinder. DOUGLAS LANMAN: Right. At some value, it'll be 50% everywhere inside and 0 outside. Right? So if I use Kevin's algorithm, first, my accumulator will be all-- everything in here will be 50%. Then, I'll turn, and the center will start getting higher. So I'll get that starburst pattern we saw earlier. So intuitively, it works. But mathematically, why does it work? And I think, again, now if you use the Fourier transform, it's easy to see. For those of you familiar with the Fourier transform, it's pretty direct. Let's go back to-- so any time you have a Fourier-domain problem, you always use the same tricks. Take slices, or use Fourier transform pairs. And so we know that the slice here is equivalent to the projection.
So if I had some slice, some function-- again, the slice being this value-- and I take the inverse 2D transform of just that slice, what would I get? AUDIENCE: [INAUDIBLE]. DOUGLAS LANMAN: Yeah. It's not easy to see. So give me some-- so if I just start and I have my accumulator-- I hope you can see that. I have kx. I have ky. And I have some slice, we'll say, along kx. RAMESH RASKAR: There's a switch right next to you on the right side. DOUGLAS LANMAN: And this function along-- if I was just to plot it-- I'd have kx versus what we call s of kx. We'll say it's a continuous function. So I'd have some function here. And now, what I'm asking is if I take the inverse Fourier transform, what does this function look like in the primary domain, which is x and y? AUDIENCE: It's constant between one of these other dimensions [INAUDIBLE]. DOUGLAS LANMAN: Right. Because this guy, you can write as some function F of kx times the delta function of what? Of ky, right? AUDIENCE: Yeah. DOUGLAS LANMAN: And we know that the Fourier transform pair of a delta function is-- AUDIENCE: It's all constant. DOUGLAS LANMAN: It's all constant, absolutely. Great. So the 1D Fourier transform is this thing. And then, the values are simply the inverse Fourier transform of the 1D function, here. So we get some different values here. RAMESH RASKAR: Bright in the middle and not so bright at the edges. DOUGLAS LANMAN: So this gets to why Kevin said he may have already known the algorithm. But this is mathematically why we smear. And now if we rotate, we have this one, which now comes in as a new smear along a new angle-- that I've probably drawn the wrong way, but-- and so it's linearity. You can write the total Fourier transform, F of (kx, ky), as equal to the sum over all of your angles of this function F-sub-angle times a delta function oriented at that angle. And then, by linearity of the Fourier transform, we come over here. We get a sum over angles of the inverse Fourier transform of that thing. Linearity of the Fourier transform, that gives you-- does that roughly make sense to everyone? So just by applying properties you already know of the Fourier transform, you can argue why it's correct using the Fourier projection-slice theorem. But the end result is you don't need to take any Fourier transforms. You still need to have that sharpening filter, though. Because you can see that you're going to build up too much weight in the center. But other than that pre-filtering step, that sharpening step, filtered back-projection will solve the problem. So this algorithm I just described, rather than using the inverse 2D Fourier transform, is called filtered back-projection, because the first step is you high-pass filter all of the projections. The second step is to back-project them. You smear them through, as Kevin described. So there are two ways to solve the inverse Radon transform. You can take the 2D Fourier transform, or you can use filtered back-projection. So now, we'll move on to some of my own work I'm just going to plug. So the idea Marc Levoy had in presenting this at SIGGRAPH is to take inspiration from medical and scientific imaging, so we can make better cameras. If we're computational photography through optics people, how can we use this idea? And so one of the limitations Ramesh identified with computed tomography, why we can't put it in our pocket, is that we have a lot of moving parts. We have these X-ray sources moving around the object.
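Before picking up the moving-source story, a compact sketch of the filtered back-projection just described, again in Python/NumPy and with invented names. For brevity the ramp (sharpening) filter is applied here via a 1D FFT, but it could equally be a purely spatial convolution, which is the point of the algorithm:

import numpy as np

def filtered_back_projection(projections, angles, N):
    # 1) high-pass (ramp) filter each projection;
    # 2) smear each filtered projection back across the image.
    ramp = np.abs(np.fft.fftfreq(N))
    xs = np.arange(N) - N / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((N, N))
    for p, theta in zip(projections, angles):
        filt = np.real(np.fft.ifft(np.fft.fft(p) * ramp))
        # signed distance of each pixel along the projection axis
        s = X * np.cos(theta) + Y * np.sin(theta) + N / 2.0
        idx = np.clip(np.round(s).astype(int), 0, N - 1)
        recon += filt[idx]                 # the "smear" step
    return recon * np.pi / len(angles)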
So the question is, how can we remove the moving X-ray sources? And so just as a thought experiment, I'll quickly explain what we did. Basically, this is another version of CT you often find, where instead of having a rotating point source, we simply have a linear array of sources. And then, we have a film plane [INAUDIBLE] X-ray detector on one side of an object. And then, we simply switch these sources at very high speed. Because only one source can be on at a time, or else we'll have two projections overlapping. And so there's a limitation with this design, which is that you have limited angle. If you think about this as a CT scanner, we've only gone over some small baseline. But using modern tomography, that's not such a big deal. You can use limited baseline reconstruction. And so the main limitation with this type of device is how fast you can switch the sources on and off. And it turns out it's very difficult and expensive to do that. So what we'd like is to have an algorithm where we simply have an array of point sources on all the time. But what you'll have is, on the detector, you'll have a linear superposition. You'll have more than one projection overlapping. So now, you have an interesting mathematical problem, which is how you invert those series of projections to get back to your tomographic data set. And as the number of point sources increases, that inversion becomes ill-posed. And so what we did is we inserted a mask here. And so this is basically a lead panel with holes drilled in it. And so now, you can actually optimize this pattern of holes so that the inverse problem is well-posed. So again, our goal is to get the set of projections, but they're all overlapping. So now, we're going to design a mask to invert that system. And so if you want to look this up, we have a SIGGRAPH paper on this called Shield Fields. But that's the basic idea. By putting this mask here, you can invert the system. And if you think about it very simply, it's like a 3D TV. You could put what's called a parallax barrier here. So you just have a series of pinholes. And then, if each pinhole is separated by the size of the image of this array, then you just get an array of pinhole cameras and you could invert that system. But of course, it doesn't pass much light. So you can optimize it and find better masks. So because we don't like working with X-rays, because I didn't want cancer, we built this in visible wavelengths. And so what you can see is this is like a CT scanner built for $100. So we have an array of LEDs, which serve as our X-ray sources. We have an opaque object, a wooden mannequin. And then, we have what simulates a large format sensor. So this is a trick-- maybe some of you are using this in your homework, as well. To make a really large detector, you can use a camera and just a sheet of paper held between glass. So we take a photo from behind. If I was to turn one LED on, I'd see a shadow. To get a picture of a shadow, it's as if I had a giant sensor. So that's a cheap way of making huge sensors. And then, we put a thin sheet of glass here with our high-frequency mask. And so now, we get to the computer vision problem. So that's what it looks like when the lights are on. So you can see the multiplexed shadows. And then, this is the magic pattern we actually used, which you can see why, if you read the paper. But again, if you just think about this as a mathematical problem, we have a set of projections that are all superimposed. We're trying to invert and get the individual shadows.
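One way to see what the mask buys you, as a deliberately cartoonish sketch in Python/NumPy (none of this geometry matches the real apparatus): treat the measurement as a linear system y = A x, where x collects the unknown shadow values and A is fixed by the optics. With everything open, the rows of A are nearly identical and the inversion is hopeless in the presence of noise; a high-frequency 0/1 pattern makes A generically well-conditioned.

import numpy as np

rng = np.random.default_rng(0)
n = 36                                   # e.g., 36 unknown shadow values
x = rng.random(n)

A_open = np.ones((n, n)) + 0.01 * rng.random((n, n))  # no mask: rows alike
A_mask = rng.integers(0, 2, (n, n)).astype(float)     # random binary mask

for A in (A_open, A_mask):
    y = A @ x + 1e-6 * rng.standard_normal(n)         # tiny sensor noise
    x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
    print(np.linalg.cond(A), np.abs(x_hat - x).max())

The optimized Shield Fields pattern is doing a much smarter version of the second case.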
And it turns out by putting this high-frequency pattern in, that inversion becomes well-posed. So I'll leave it to you to think about why. RAMESH RASKAR: You can also-- if you go back to the previous image. You can also think of this as if you're standing in front of an X-ray machine. And every one of those shadows would be the shadow you would get in front of the X-ray machine. And as you move the X-ray source, you will get one of those shots. They will move. Except here, all the sources are on at the same time. And so you are getting this simultaneous projection of the X-ray. So the question is, from this one photo, how can I resolve all the original shadows? DOUGLAS LANMAN: And so here, you can see this is what the data set looks like. So you don't really see anything there. But then, just by inverting that system of equations, you can actually pull out, from that single image, all the individual shadows. So it's as if you have 36 projections all in one image. So the key advantage here is we never strobe the light. So we can record this as fast as the camera can record images. So for really high speed tomography, instead of rotating or strobing the light sources, we can just have the light sources on all the time. We're only limited by the frame rate of the camera. And of course, from these projections, you can use the visual hull algorithm and reconstruct. Again, this isn't exactly tomography, because it's an opaque object. But it's essentially the same idea. We're doing that filtered back-projection. And so you get this reconstruction. You can see, looking at the mannequin, there's this starburst effect we described earlier, where you have this phantom because of the limited baseline. And so again, that just has to do with the algorithm used to reconstruct. If you had some prior on the smoothness of the object, you could take that out. But again, usually, you'd have to strobe those lights on and off. So you'd have to take 36 pictures. Here, we just take one. So that was the inspiration we took from the medical imaging world. And it turns out all of this can be applied to take light field photographs, which is what Ramesh did in 2007, by putting that high frequency mask inside the camera. So you can see all of these ideas link back together. And they all start with tomography. And so now, let's take a quick plug. We have a recent paper where we took this idea in yet another direction. And so the idea is that we're going to build this device. I'm going to step back-- RAMESH RASKAR: Doug, did Matt do a presentation? DOUGLAS LANMAN: Have you already seen this? RAMESH RASKAR: Yeah. I think he showed it. So you can go through it very quickly. DOUGLAS LANMAN: OK. So the basic idea-- RAMESH RASKAR: You can describe the connection between the two. DOUGLAS LANMAN: Yeah. So the basic idea is that we're going to take this and build an LCD screen that can sense depth. And so the trick is pretty simple. Ignore the LCD screen. We're just going to build the exact device I just described. So we have an LCD panel. Instead of displaying an image, we're just going to use it to display [INAUDIBLE]. No big surprise there. Instead of printing the mask, we use the LCD to display the mask. And then, behind the liquid crystal panel, we'll have a diffuser. Inside LCDs, you already have diffusers behind the liquid crystal. And so again, we'll have that giant sensor. We'll have a diffuser, our LCD panel, and some cameras behind it. And then, we'll be photographing the world.
And so it actually turns out that you can capture the light field with that setup. So by just building that setup, using the LCD as a mask, you can capture the light field. And then, you can just time multiplex. On the first frame, we turn the backlight on and we use the LCD to display a picture. On the next frame, we turn the backlight off and use the LCD to display a mask and get the light field, and repeat at very high frequency, so that your eye can't perceive it. And so now, you're getting an LCD panel that can do multi-touch, like you normally see. And then, as the user moves the hand off the table, you can sense depth. And so now, instead of having normal multi-touch, you can also have a z-axis. You can pull things off the table. It was a very simple proof-of-concept demo. But it shows that with just that basic idea, now, we've gone into the HCI direction. Again, Matt Hirsch did all this work. So you should really talk to him to see the demos. And so we're presenting this at E-Tech at SIGGRAPH Asia. So if you have any good ideas on demos you could use with a depth-sensing screen, please email me or Matt. And if you want to implement them, all the better. We'll give you credit when the reporters ask, who was the clever guy who programmed that? So here, you can see just a very simple CAD explorer. You can choose your model, which is a touch interaction. You can move your hand around and it just controls the rotation, translation and scale matrix being applied to that model. But again, what's neat here is you don't see any cameras. There's nothing hidden in the bezel. So to the user, it's surprising that you're getting the light field. And the demo I'm not showing is that you can do light field transfer with this, if you know what that is. Are we doing OK on time? RAMESH RASKAR: I think so. DOUGLAS LANMAN: I can move faster. I think Ankit already presented this project, right? RAMESH RASKAR: Yeah, he presented this one. Yeah. DOUGLAS LANMAN: OK, so I'll just go through this really fast, then. So again, I mentioned that once the computer was invented, tomography became easy. Because they knew the Radon transform and they could invert it. But if you don't have a computer, how can you take a cross-sectional image of a patient? So if I just have an X-ray source and an X-ray film, and I had the patient sitting on the gurney, I could turn the source on. I'd get a projection. But the main problem is I'm really only interested in seeing a cross-sectional image so I can identify a tumor in the brain, or something else. But everything else out of that plane, I don't really care about. And the problem is if I just have this perspective projection, everything out of focus will also be imaged on the sensor. So the question is, how can I get an image just along a slice through my volume without using computation? And so I think Ankit explained this. But 100 years ago, they did this and it's called laminography. And it gives you almost identical results, purely with mechanical means. So the idea is simple. You simply take the X-ray source and you mechanically translate it from left to right. And then, at the same time, you mechanically translate your film from right to left. And so pick some point in the patient that's stationary. You'll have some ray going through that point. And then, if I turn on the next source, we'll have some other ray coming in, as long as I move the sensor so that same pixel is illuminated. Then, that point will be focused, but everything else won't.
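A quick way to convince yourself of this numerically -- a minimal sketch in Python/NumPy, with invented geometry and names: given shadow images taken with the source at different lateral offsets, shifting each image in proportion to the offset before averaging keeps one chosen depth plane aligned (sharp) while everything at other depths smears out. That shift-per-offset ratio plays the role of the film velocity.

import numpy as np

def laminographic_focus(shadows, offsets, gain):
    # shadows: list of 2D projection images; offsets: lateral source
    # positions (pixels); gain: shift per unit offset, which selects
    # the depth plane that stays registered across all images.
    acc = np.zeros_like(shadows[0], dtype=float)
    for img, off in zip(shadows, offsets):
        acc += np.roll(img, int(round(gain * off)), axis=1)
    return acc / len(shadows)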
So it's a clever trick to get a CT scanner without any computers. And so we used this to publish a paper at ICCP. RAMESH RASKAR: Actually, let's go back to that one. There's a minor difference, though, between doing this, achieving a cross-section image-- this method, versus doing the whole spinning and multiple projection thing. What's the difference between the two? AUDIENCE: You don't get as much depth. RAMESH RASKAR: Sorry? AUDIENCE: You don't get as much depth emitted by what-- RAMESH RASKAR: The plane of focus-- we can control that based on how close the X-ray source and the sensor are. So you can get a pretty narrow depth of field. But what happens to stuff that's outside the depth of field? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: It's still going to be blurred. And it will still be part of the image. So you don't really eliminate what's above and below your plane of focus. It's just out of focus, but it's still there. DOUGLAS LANMAN: Right. You can make it strongly out of focus, but your contrast will be lower than in the CT. So Ankit Mohan had a great idea to apply this to photography. And he already explained this to you, so I'll go quickly through this. But the basic idea is if I have my normal thin lens equation, I have some plane in the world on the left, and my sensor on the right, the lens maps those. And as I stop down the aperture, the blur circle for the out-of-focus point decreases. And in the limit, if I just have a pinhole, then I get everything in the world in focus. And so if you look at your iPhone camera, your cell phone camera, the apertures are so small that you're essentially taking pinhole images of everything. And what we'd like to do is take images with a cell phone camera that are as if they have a large lens. Because really, when you pay a professional photographer to do your wedding photos, it's really the blur. The blur is one of the big tricks they have. RAMESH RASKAR: You're paying for the blur. DOUGLAS LANMAN: You're paying for the blur, in a way. You're paying for that big piece of glass and a bit of the talent to focus on the foreground objects so that the background has this really beautiful blur. That's one of the things that you'll immediately notice when you look at a wedding album. It's like, oh, gee. They just blurred the background. Great. So the question is, can we get nice defocus, nice blur, with a cell phone camera? So in a way, we're going to make our camera worse. Right? A cell phone camera takes everything in focus, which maybe is a good thing. But it doesn't have that aesthetic feel of having a nice defocused background. And so the main trick-- and again, I'll credit Ankit with this-- is a nice insight. Well first, let's consider moving the pinhole. Because we just talked about laminography, and that's like moving the X-ray source. So our pinhole here is like our X-ray source. It's our center of projection. And so if you translate a pinhole-- and again, we have two points in the world. They're mapped onto our sensor. Nothing's moving, other than the pinhole. Then, you see that without moving the sensor, you get these two blur circles. But one of them is slightly larger. And so again, in the interest of saving time, you can go through a mathematical analysis. But what's really important here is it's the same idea as laminography applied to X-rays. Basically, this is like our X-ray source and this is like our film plane. And if we translate them at a certain velocity ratio, we can actually put certain planes in the world into focus.
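In equation form, anticipating what gets spelled out next (a sketch; the symbols are my own labels for the lecture's quantities, with $d$ the pinhole-to-sensor "register" distance and $v_s$, $v_p$ the sensor and pinhole speeds, up to which way the ratio is taken -- the paper pins down the conventions):

\[
\frac{1}{z_{\text{object}}} + \frac{1}{z_{\text{image}}} = \frac{1}{f_{\text{virtual}}},
\qquad
f_{\text{virtual}} = d \cdot \frac{v_s}{v_p}.
\]

Sweeping the ratio $v_s/v_p$ then sweeps the plane of focus, exactly as changing a glass lens's focal length would.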
It's as if our patient has moved to the left side instead of the center. Now, we have the X-ray source and the film. And the velocities are now in the same direction, instead of opposite directions. But if you think about this, it's identical. It's identical to the laminography we saw earlier. And so then, what we can do is we can just choose this velocity ratio. And again, if you, say, look at the blue ray here, it'll always move to the same pixel on the sensor. Because we're moving the sensor at just the right speed so that pixel always stays on the blue ray. But the red one is getting blurred. And by changing that velocity ratio, we can then focus at a different plane, a very simple idea. And so if you look in the paper, what we've done is just recreated what a lens does, but over time. And so everyone knows this equation, the Gaussian thin-lens equation. The reciprocals of the distances to the image plane and the object plane sum to the reciprocal of the focal length. Hopefully, everyone knows that. So normally, this focal length is selected by your chunk of glass. You choose some refractive index and some curvature, and that gives you the focal length. Here, you have a virtual focal length. It's very simple. It's the register distance, which is the distance from your pinhole to the sensor, times the velocity ratio. So as we change this velocity ratio, we can make arbitrary lenses. And then, it gives us a virtual F-number. And so then, just to blow through this, this is the prototype we built. So again, you can see the story here. We knew about laminography, which is a medical imaging topic. We said, oh, we want to publish something in computational photography. Let's build an optical version. And here you go. So you have two translation stages. You can see those on the bottom. These are from [INAUDIBLE]. And then, we have a lens and sensor. And then, you can see, this is the type of image. If you stopped down a lens, this is what your cell phone would give you. And then by adjusting that velocity ratio and making it small, we can focus on the foreground, the middle, or the background. And so we're only getting blur in 1D, so there's some more complexity. But that's the basic idea. So you see that path. We start with medical imaging, and we published a paper in-- not SIGGRAPH, but a graphics journal. It's a pretty straightforward algorithm. And so now, just to finish up tomography, applying it to optics, like we did, is obvious. And it's also obvious how you would apply this to other applications. So this is an acoustic version of tomography, because we're just changing the wavelength, in a way. And so the idea is that, say we have some underground region where we're looking for oil deposits, or more noble things, like looking for the skeletons of dinosaurs. And so to find the density function, it's very easy. You just set up a set of explosive charges to generate pressure waves. And they're going to travel, hopefully, if we do things correctly and we model it. If we select things right, then they mostly travel on ballistic paths to a series of microphones. And then, again, we can just treat this as limited baseline tomography and invert that system. And here, at each pixel or voxel, we're reconstructing the velocity. Because the velocity is proportional to the density of the material. And so then, we can reconstruct underground objects using tomography, using explosives and microphones, which is a cool idea. RAMESH RASKAR: So you can do this for anything.
You can do this, not just underground, but this is how ultrasound works, as well, ultrasound scanners. DOUGLAS LANMAN: Right. And so then, Marc Levoy proposed this in the course, which I always liked, which was, if you could convince the Italians that you can set up some explosive charges in their subway-- which they might have concerns with-- and use some microphones, you might be able to reconstruct underground Rome just from that algorithm, at least a rough estimate. And so it's not bad. I mean, if you go out to these archaeological sites, they're huge. And they often discover new sites underground. And so this is another way to compete with other scanning technologies. It's very inexpensive. And so then, as long as I'm doing OK on time, I'll quickly blow through this. RAMESH RASKAR: Yeah. It's fine. Go ahead. DOUGLAS LANMAN: So we started with tomography, which assumes everything travels on a ballistic path. And so now, let's start loosening our assumption on that ballistic path. So the first assumption is, let's allow things to refract. So instead of having an object which just attenuates rays, let's assume we have a lens here that's going to bend rays and diffract them slightly. So we have weakly-refractive media. So we've loosened the prior model of the object. So if you try applying the inverse Radon transform to the data set you gather, it turns out we won't be able to reconstruct this. That assumption that things move on ballistic paths is essential. There's some interesting work recently at SIGGRAPH on Schlieren tomography, where they do manage to do that with certain assumptions. But anyways, if we loosen this to allow refraction, then the algorithm changes. And so this is very clever work. Again, this is in Slaney and Kak, if you're interested. But the idea is, let's illuminate the object. Let's make a parallel-beam setup again. So we'll start with some monochromatic plane-wave. So we have some laser that's generating one wavelength of light. It's coherent, traveling towards our object. And then, it arrives on some detector. And so we get our projection, again. Again, what we're trying to reconstruct now is the index of refraction, not the absorption. OK? So we receive the scattered wave. And it turns out we also need to measure the phase. So you'll have a second reference beam. So we'll essentially be taking a hologram of the object, for those of you familiar. But you can ignore that. Basically, you'll have a plane-wave projecting through the object, creating some scattered field. And so if you run through the math, you get a new result, which is very interesting, where your projection-slice theorem changes. It turns out in your frequency domain, again, we take the 2D Fourier transform of our index of refraction. And in the transform domain, a plane projection, a parallel-beam projection, maps to a curve, an arc in the frequency domain. So now, to do tomography, we could do what we did before. We can rotate the source, or the emitter and detector, around the object. And then, we just have to know what this curved trajectory is, which is just dependent on the wavelength of light. And then, we'll populate our accumulator, as before. Inverse transform, done. Does that make sense, hopefully, to everyone? But there's a really clever trick. Again, no moving parts is a usual theme in these things. To reconstruct this object tomographically, we have to rotate something to populate this accumulator.
But it turns out the clever trick here is that this arc is dependent on the frequency of illumination. So does anyone see, without looking at the thing on the right, what you'd do? AUDIENCE: Yeah. You change the frequency. DOUGLAS LANMAN: Change the frequency. Great. That was a very good insight. So if you just sweep through the frequency slowly, you'll sweep this out. And you'll get at least half of the transform. But what if I told you that was a real-valued function? Do you know what the property is? AUDIENCE: Symmetric. DOUGLAS LANMAN: What kind of symmetry? AUDIENCE: [INAUDIBLE]. DOUGLAS LANMAN: Conjugate symmetric, right? AUDIENCE: Yeah. DOUGLAS LANMAN: So we know that this function is conjugate symmetric because it's a real-valued function. So getting only half of it is enough. Because then, we can replicate it. So again, tomography is basically remembering signal processing. If you remember all those transform pairs, you'll have good luck in your final project, if you decide to use it. And so now, we can take a sequence of images where we just vary the frequency parameter. We'll populate this guy and use conjugate symmetry and inverse transform. And so then, you get a new result, which is almost as fundamental in this field as the Fourier projection-slice theorem, which is that a white light hologram basically can reconstruct the index of refraction of an object. Because we can use superposition, again. We can illuminate with multiple frequencies at the same time, as long as we can resolve them. Then, we'll get all of the data at once. So you basically have a broadband, clean, coherent wave traveling to the object. And from that, you instantaneously get the index of refraction. RAMESH RASKAR: So refraction and diffraction are being used interchangeably here. DOUGLAS LANMAN: Yes. RAMESH RASKAR: Do you know why? DOUGLAS LANMAN: Why refraction and diffraction? RAMESH RASKAR: Yeah. I mean, you're trying to reconstruct something in the presence of refraction using a diffraction theorem. DOUGLAS LANMAN: Right. RAMESH RASKAR: Do you know what's the reason for that? DOUGLAS LANMAN: I don't think I understand this at a deep enough level to-- RAMESH RASKAR: OK. DOUGLAS LANMAN: I mean, fundamentally-- RAMESH RASKAR: I just realized why people [INAUDIBLE] confusing the two. DOUGLAS LANMAN: Yeah. I think-- Yeah, I'll have to get back to you on that. This is getting beyond my knowledge. So this is what a reconstruction looks like of a singular object. So now, our artifacts have this arc trajectory. But again, what is interesting is if you look at that Schlieren tomography paper, it means you could get real-time reconstruction without strobing any lights by just doing broadband holographic imaging of your object. And then, this is why I mentioned the filtered back-projection earlier. That was the frequency domain reconstruction transform. So it turns out there's a purely spatial domain transform for this one, as well. And so this one, I don't know. But the 2D Fourier transform of an arc maps to this strange depth-dependent function you see here. So that's what filtered back-projection becomes. It's smearing that along these paths that become wider as you go in depth. Because that's what the inverse Fourier transform is of this arc pattern. So you can do filtered back-projection, but it turns out it's much more computationally expensive. So they generally do this in the frequency domain, I think. So that was weakly-refracting.
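That symmetry trick from a moment ago is worth one line of code. A minimal Python/NumPy check (random data; odd length just to keep the indexing simple): a real-valued signal's spectrum satisfies F[-k] = conj(F[k]), so the non-negative frequencies determine everything.

import numpy as np

x = np.random.rand(65)                       # any real-valued signal
F = np.fft.fft(x)
half = F[:33]                                # DC plus positive freqs
F_full = np.concatenate([half, np.conj(half[1:][::-1])])
print(np.allclose(F_full, F))                            # True
print(np.allclose(np.fft.ifft(F_full).real, x))          # True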
But now, if we move to putting LEDs against your skin, now, we're talking about things that are strongly-scattering. So we've moved from purely absorbing along a ballistic trajectory, towards refracting a little bit. And now, we're just scattering completely. RAMESH RASKAR: So imagine putting an LED on your finger and you want to see the bone inside your finger with just visible light. DOUGLAS LANMAN: So this is an example you can see. This is a cross-section of our finger, we'll say. And we have photodiodes all around our finger. And then, we put this little fiber optic right against our finger and illuminate it. And so, you'll scatter light all through that volume and you'll measure its intensity, at some points, coming out on the surface. So this is known as diffuse optical tomography, because diffusion is the key process we're trying to invert. And so in all of these cases, we're just creating this inverse system. We're creating something that we then apply the inverse Fourier transform to invert, or what have you. So in strongly-refractive or scattering media, you end up with a very difficult model to invert. It's ill-posed and nonlinear, which means it's very hard. So what they generally do is they use some tricks to bootstrap the inversion. So if you can start with a good initial guess of what the-- here, we're looking, maybe, for optical density. We're looking at how dense the material is. That's the 2D function we're trying to reconstruct in a cross section. And if we had a good initial guess for that, then we can use a forward modeling process, where we perturb that density function and see how well it predicts the values we found. And you put that in an optimization framework. So we do some gradient descent and we optimize our reconstruction of the density function, so that the observations match. The predictions match the observations. That's the general inversion framework you have: some form of nonlinear gradient descent algorithm. But the question is, how do you get that initial guess of the density function? And so you see, it says it right there. You can use some other process that is ballistic that's correlated with your density function. So what they generally do here is they use time-of-flight. They strobe this, record the time it takes to travel through-- maybe not in this specific example, but you can imagine doing this. You turn the light on very quickly, and look at the time delay, and that will give you an initial rough estimate of your density function. So generally, borehole tomography also does this. It uses time-of-flight to invert the process, but also to get the initial guess. And then, this is used a lot in non-invasive medical imaging. So if you don't want to do a CT scan because the patient has had too much dosage of X-rays, you can, for instance, put electrodes on their body, like you see here, and at least get a rough reconstruction-- something not as high quality as a CT scan, but sufficient to make the diagnosis. So here's an example of how you apply diffuse optical tomography for diagnostic purposes. So here, we have two twins, two babies, the left one versus the right one. Again, they're twins. And then, one of them had a-- this is a specific thing-- left intraventricular hemorrhage. So they had a blood vessel rupture in the left hemisphere. And you can see here. So they attached all these electrodes. And they generate these time-of-flight images for the initial guess. And then, they look at the conductivity, and they reconstruct this density function.
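A cartoon of that optimization loop, as a sketch in Python/NumPy: the true forward model in diffuse optical tomography is a nonlinear diffusion simulation and the initial guess comes from time-of-flight, but the skeleton -- predict, compare, descend on the mismatch -- already shows up for a plain linear stand-in.

import numpy as np

rng = np.random.default_rng(1)
forward = rng.standard_normal((80, 40))  # stand-in forward model
density_true = rng.random(40)            # what we hope to recover
measured = forward @ density_true        # the observations

density = np.zeros(40)                   # initial guess (here: nothing)
step = 1.0 / np.linalg.norm(forward, 2) ** 2
for _ in range(2000):
    residual = forward @ density - measured
    density -= step * forward.T @ residual   # gradient of 0.5*||r||^2
print(np.abs(density - density_true).max())  # ~0 for this easy case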
Actually, in this example, they get two density functions. One is blood volume, and the other is oxygen saturation. And so here, you can clearly see the hemorrhage, because this is the twin that has the hemorrhage. This is the one that doesn't. You can see there's a lot of blood volume. That's a good indication of a hemorrhage, but not a great one, because your prefrontal cortex has a lot of blood flowing to begin with. But then, you also can see low oxygenation, which tells you that blood hasn't been refreshed in a while. So it's a big, big pile of blood that's not oxygenated, which means hemorrhage. So this is just an example where a CT scan really wouldn't show this, because the density would be the same. And so by using diffuse optical tomography, you can get a good reconstruction. But because it's diffuse, you're only going to get low-frequency detail. And then, here's another example where you can see the non-invasive part is starting to weaken. This looks pretty invasive to me, because I don't want to take those electrodes off. So here, you can see this is off Wikipedia. But this is just an example I found that I thought was interesting. Say we want to take live CT images of the heart to study some sort of function. We could just coat the patient in electrodes, wire them up, and strobe these electrodes on and off. And the density function we're reconstructing here is conductivity and resistivity, which we can then use to look for various problems-- watch the heartbeat, et cetera. And so this problem is actually very difficult. It's called the Calderón problem. And inverting this system is very challenging, which is why the images are so low-quality. But again, you can see this theme is just being extended and extended. We're taking tomographic-like projections of the data set. But our reconstruction equations are not just the inverse Fourier transform, because of the scattering. And so then, you might think about, well, if we're now going to make a computational photography project out of this, what if I have scattered light and a light field photograph? Then I end up with this problem. So there's your final project-- A plus B equals a project. So now, moving on. We're almost through the talk now, but a lot of this was on tomography. But the general idea of getting a cross-sectional image reappears throughout the sciences. And so in biology, rather than using X-rays, they do this optically, and they just use 3D deconvolution. So this is a very simple idea. Probably, all of you understand it already. But the idea is, say I have some specimen in a microscope that has some depth to it, and I want to reconstruct it in 3D. So how do I do that? Well, the simple observation is to model the image formation process. So what if I just had a single bead in space and I focused on it? Then, I see this image. That would be like the impulse response function for focusing on this plane. And then, if I focus slightly below our point source, I'll see a defocused blur. And as I defocus more and more, I see a larger blur. So this is known as a focal stack. I take my microscope and just translate the specimen slide without changing the optics at all. And I create a focal stack, this three-dimensional impulse response to a point source. So if I look at this as a function of depth-- if I just take this slice through the point spread function, and then I look as a function of depth into the object-- I have this three-dimensional blur, where you can see it goes out with these two inverted cones. And so now, we can model the image formation process.
So if you assume that the specimen you're observing does not scatter significantly, then you can use a linear image-formation model, which is very simple. You have some three-dimensional source function, which is just decomposing your object into a series of point sources. Each point source generates a PSF weighted by whatever the intensity of that point is. And that generates the focal stack I see. Does that make sense to everyone? And again, this is only true under the assumption that scattering is not significant, or else this model is not correct. So generally, what we're doing is we're taking a three-dimensional function and convolving it with another three-dimensional function. And again, you always use the same trick in medical imaging. The first thing you do is take the Fourier transform. So if we take the 3D Fourier transform of this, then everyone knows the convolution theorem. To simulate this in the frequency domain, we take the 3D Fourier transform of the object, the 3D Fourier transform of the PSF, multiply the two, and that gives us the Fourier transform of the focal stack. Everyone follow? So now, if I want to get rid of the blur-- because remember, we talked about laminography. Things out of the plane will be blurred. So I want to take a photograph with a microscope, but I want it to be like a pinhole. I want every single thing to be in focus, so I can look at structures that have some depth to them. But the problem with that is we have the blur. And so then, we just invert this. So we take the 3D Fourier transform of the focal stack, and divide by the 3D transform of the PSF, and we end up with our object, which we then inverse 3D Fourier transform. And that gives us the focal stack all in focus. So probably, most of you are familiar. Who's familiar with deconvolution, has done it before, understands it? Really? Deconvolution is a new concept to everyone? Interesting. OK. Well, I probably should've explained this in 2D, but hopefully, you followed what I was explaining. So if this was 2D, all of these transforms become two-dimensional. That's the basic idea. RAMESH RASKAR: Yeah. So deconvolution is very similar to what you did for your light field assignment, which is shift and add. Shift and add is the [INAUDIBLE]. And imagine if, given all those refocused images, you wanted to go back and construct the light field. That would be a form of deconvolution. And instead of doing it in the frequency domain for the assignment, you did it in the primary domain. You just shifted and added [INAUDIBLE]. Shifting and adding in the primary domain is the same as a projection of convolution. Or, convolution is basically [INAUDIBLE]. DOUGLAS LANMAN: So hopefully everyone understood that to deblur, to put everything sharply in focus-- so if I apply this algorithm to the impulse response itself, what would you expect to see? What would these images become? Maybe that's a check to see if you understood this. AUDIENCE: Won't they all just become the point source? DOUGLAS LANMAN: All become the point source? Close-- 50%. AUDIENCE: [INAUDIBLE] point source [INAUDIBLE]. DOUGLAS LANMAN: So I'm trying to remove the blur. If I had a point source-- again, these images are at different depths. Right? AUDIENCE: Mm-hmm. DOUGLAS LANMAN: But by applying the deconvolution algorithm, I expect to see certainly a sharp point right here. Right? AUDIENCE: Yeah. DOUGLAS LANMAN: But what about if I move just a little in depth? AUDIENCE: [INAUDIBLE] the other way. RAMESH RASKAR: Remember, there's a blur here, right? DOUGLAS LANMAN: Yeah.
AUDIENCE: Right. RAMESH RASKAR: So convolution means blur. And in most of the cases, deconvolution means removing the blur. AUDIENCE: So you wouldn't see anything. AUDIENCE: No, you should see the other image in focus. DOUGLAS LANMAN: Which would you see? AUDIENCE: Why wouldn't you see the image at the other depth in focus? DOUGLAS LANMAN: Because there's just a point in the world. AUDIENCE: Oh, OK. So you don't see anything. DOUGLAS LANMAN: You don't see anything, right. So you get the gold star. And so that checks. Hopefully, if you understand why all of these images look completely dark, except for the center one, then that's deconvolution in general. So then, just to point out-- again, when we're doing this inversion, we're dividing by the Fourier transform of our blur kernel. And so the problem here, if you remember earlier, when we were doing the inverse Fourier transform in the frequency domain: you really don't want zeros, right? So if I divide by a 0, I'm going to have problems. But if you look at our focal stack and look at the blur kernel, we have lots of zeros. And these zeros actually come in due to the numerical aperture of the lens. If the lens sees a very sharp angle, we can make this have few zeros. So since you haven't seen deconvolution before, I think that's probably enough-- just to observe that when you're deblurring, you really need this function to not have zeros. So if you're going to apply the concepts we saw earlier and the computational photography idea, the idea would be somehow to modify the optics, add aperture codes, to make this blur kernel not have zeros. So when you do the inversion, you don't amplify high frequencies [INAUDIBLE]. AUDIENCE: If I understand what you're saying correctly, you're taking multiple images at different depths, right? DOUGLAS LANMAN: Yes. AUDIENCE: So why can't you focus at different depths? Why do you have-- DOUGLAS LANMAN: That's what we're doing. AUDIENCE: So then, if you are taking multiple images, why don't you shift your focal plane to different depths? DOUGLAS LANMAN: That's exactly what we're doing. But it's just like laminography from before. It's a thick specimen. So let's go to a picture. Because pictures are always worth a lot more. So we have this mandible of some insect. We're trying to get a cross-sectional image. But if we don't do anything, if we just focus at some depth in the specimen and it's back-illuminated, you get this halo. Because everything out of the focal plane is blurred. AUDIENCE: Yes. DOUGLAS LANMAN: Right? AUDIENCE: Yes. DOUGLAS LANMAN: So now, I can focus at another depth and generate a new image, which is exactly what a focal stack is. But I'll still have that blur present. So what I'm trying to do is collect that focal stack and then invert that whole system, so that I get, really, what is a cross-sectional image, so that nothing is blurred in it, which is what laminography does, which is what tomography does, in general. So you can see that theme. Something has to be done to remove the blur. And so in this case, what's done to remove the blur is that division by our impulse response's Fourier transform, this deblurring algorithm, rather than using a tomographic-type algorithm. Just to show you the gist, I think, since it seems that deconvolution is a new concept for everyone, definitely go and read the Wiki entry. Because I think that idea has been beaten to death in the computational photography field. So you can definitely get some final projects out of it. You can see how a team can use that.
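Since deconvolution seems to be new to everyone, here is the whole thing in a few lines -- a minimal 2D sketch in Python/NumPy, with the zeros problem handled by the standard Wiener-style dodge (the eps below is an assumption standing in for a proper noise model):

import numpy as np

def deconvolve(blurred, psf, eps=1e-3):
    # Fourier division, regularized: conj(H) / (|H|^2 + eps) instead
    # of 1/H, so bins where H ~ 0 are damped rather than amplified.
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered at (0, 0)
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + eps)))

# toy usage: blur a box with a Gaussian PSF, then undo it
img = np.zeros((64, 64)); img[20:30, 30:50] = 1.0
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
sharp = deconvolve(blurred, psf)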
So here's some nice results. This is from Marc Levoy's work. This is actually a light field microscopy image, but the ideas are the same. You can do deconvolution. You can see it goes from something that a typical microscope would produce-- we have this thick specimen, but it's optically transparent-- to a nice, tomographic image, which we can then volume render. And so if you go to some really expensive commercial packages, this is what you'll see for deconvolution. So you'll go from something on the left, which has all the out-of-focus blur, to some nice, sharp, all-in-focus image of an object with a lot of depth here, so you can study the fine structure. And so then, I think we have a little time left. So let's look at some other tricks they use in biology. So again, the theme in all of these-- laminography, tomography-- is always to remove the out-of-focus blur so we get a nice, cross-sectional image, so we can see the tumors, we can see structures clearly. And so this idea is really nice. And again, I don't have time to show you, but there have been two or three computational photography SIGGRAPH papers, again, from Marc Levoy using this idea. So this theme just keeps coming up. You go to the medical literature, look at what they were doing in the '60s and '70s, figure out how exactly it maps onto a camera, and that's it. You're published. That's basically the algorithm. [LAUGHTER] Although, it's becoming harder to do that, because too many of us use that algorithm for papers. So the idea behind confocal microscopy is one you can be proud of: Marvin Minsky is arguably credited with the invention of it here at the Media Lab, as well. So the basic idea is I have some plane in my specimen that I want to get a nice cross-sectional image of. And I want everything that's out of focus to not contribute anything to the final image. So I can do that computationally by doing this deconvolution. And that's like CT scanning. Or, I can do it mechanically, which is like laminography. There'll be no computation here at all. I'm just going to make sure that the point spread function-- so if I go back to the point spread function, this thing-- if this was just a point, then if you look at this math, if you convolve the point with the object, you just get the object again. So just like laminography, solve the problem mechanically. If we can make this a point-- our impulse response is a point-- then we don't need to do deconvolution. And that's the main trick. And so confocal microscopy does this in a very clever way. We have some secondary light source. Put a pinhole in front of it. Now, it goes through my thin lens. And it goes, gets focused based on the focal length, down onto some point in the plane we care about. So if I was to now take a picture somehow of this plane, then all I'd see would be things that get illuminated by the beam of light. Everything else is not illuminated. So we're already doing OK, because things way over here aren't going to contribute to the blur at all. So that's the purpose of the pinhole. Let's make this clear. Again, let's think about the mandible we saw earlier. If we have some point that's higher in depth than the plane we're trying to focus on, then the light will be spread over a disk. Does that make sense? We're going to take this cut through our cone of light. So as a function of radius, or depth, away from the plane we care about, it's going to fall like what, the brightness of that point? AUDIENCE: Square. DOUGLAS LANMAN: Yeah, [INAUDIBLE]. Great.
So already, if you think about our impulse response we had earlier, it falls off like 1 over r squared, now. Right? That's not bad, but can we make it even sharper? Any ideas? AUDIENCE: Higher numerical aperture. DOUGLAS LANMAN: Higher numerical aperture-- that will spread the energy out. It'll make it somewhat sharper. That's right. So the key trick-- oh, you just saw it. A preview, if anyone saw it. So does anyone think-- so we're only doing things on the lighting side. We haven't considered how we take the picture yet. It turns out, you can use the exact same trick. So to take the picture of this one point on the plane of focus we care about, we're just going to put a big photocell here, a big photodiode. So if I didn't do anything and I put a beam splitter here, this light would just fall on the photocell over a big region, so you'd have low SNR, if you didn't have a lens here. You'd need the lens, of course. So if we put a lens, that focuses on this plane as well. So that collects all the light scattered by this point back onto our photodiode. So if we put our photodiode at this point, we get a nice image. But then, if we took a focal stack, then the points out of focus would have intensity 1 over r squared. But we can do better than that. Because now, we can put a pinhole in front of our photocell. So if you think about this, our image-- say you take this blur disk and you image it. It images to a disk, but then we're just going to take that little part in the center of the disk. So it turns out the imaging side, by putting a pinhole on your photodiode, also gives you a falloff of 1 over r squared. So it actually goes as 1 over r to the fourth now, which is quite strong. So that deconvolution isn't really needed anymore. Does that make sense? And so that's why it's called confocal. You have two pinholes, two focal systems. And they're aligned with each other. But then of course, how are you going to get a picture? You're going to have to scan this thing, which can take a lot of time. So you're going to have to move the pinholes on the light source and the detector in a 2D raster scan over the whole object. So that's really the limitation of this technique. By doing deconvolution, by doing tricks to make the PSF invertible, we can do it all in one shot. But with confocal microscopy, before we had computers, we could do that without needing inversion. So you see the trick, right? And it should be the same trick as laminography and tomography. And so then, I should finish up, right? RAMESH RASKAR: Yeah, that's fine. DOUGLAS LANMAN: So we're getting right near the end, now. So this is used in practice, not quite the way I described. It took a decade or two to commercialize Marvin Minsky's idea. And this is how it's used in biology in practice now. It's called laser scanning confocal microscopy. The idea is basically the same. You have a laser source, a pinhole aperture on both the source and the detector, a photodiode, and a special beam splitter. We'll get to that. What we do is-- really, it's just like CT scans. You don't really just care about the absolute density function. Often, you add some contrast agent. So if you're looking for a pulmonary edema, you inject iodine into the veins. Iodine absorbs X-rays. And so then, you can look at the vein structures and how they evolve over time. So they want to look at specific structures inside cells. They don't just want a gross picture of a cell. They want to enhance certain details.
So what they use is fluorescent beads, usually with some antigen tags, for the biologists. Anyways, I'll skip that. They basically highlight-- they put fluorescent dyes somehow attached to structures they care about, for instance, mitochondria, cell structures, DNA, what have you. And that'll enhance the contrast in the final image. And then, what happens is the [INAUDIBLE] designed to fluoresce at some wavelength which is different than your source wavelength. And then, you have a beam splitter again, but you have a filter on this beam splitter that only reflects the fluorescent wavelength, not the stimulating wavelength. So that improves your contrast even more. So that's the basic trick behind laser scanning confocal microscopy. But again, you have to mechanically scan both the light source aperture and the detector aperture. So this is a slow process. But cellular division is also a slow process. So we can create videos of that. For instance, here, you can see a typical cellular division process, where, I think, they probably accentuated the telomeres. So you can see them separate, I'm guessing. So again, cellular division occurs on a long enough time scale that we can raster scan it without any difficulties and create videos. And then, just to show you how good the cross-sectional image is, I think this is a pollen grain. Here, you can see that this is, again, without any deconvolution, no computation. You're creating this function. And then, you can take a slice of the function. And you can see it evolve on the right, here. These cross-sectional images are very sharp. And again, by using the fluorescent beads, you can enhance the contrast as well. So this is the idea taken to its natural conclusion. This is the commercial product you get at the end. And so now, again, to plug some of my own work-- well, this isn't my own work. But I want to talk just briefly about coded aperture imaging. So we talked about biology, medical imaging, a little bit about geology, and so now, let's not leave the astronomers out. And so what ideas can we mine from the field of astronomical imaging? I think the main takeaway message that's impacting computational photography is the idea of coded apertures. So if I'm imaging in X-rays-- I'm looking for supernova bursts, what have you-- it's impractical to build refractive optics. Really, all you can do is attenuate X-rays. You can build lead sheets, like we did in our project, and block them in various ways. And so then, the question is, how can you image non-refractive wavelengths? So any ideas? So the first idea, of course, is to build pinhole apertures. So say I want to take an X-ray image of the world, but I can't build a thin lens that refracts X-rays. So I have this X-ray, gumballs. And I just take a lead sheet, drill a hole in it. I have an X-ray film. No problem, I get an image, but I have exposure issues, as always. So any guesses about how to solve the exposure issues, as engineers? What would be some tricks? Any ideas? AUDIENCE: Since it's a pinhole, you want something else. DOUGLAS LANMAN: Yeah, Kevin? AUDIENCE: Coded aperture? DOUGLAS LANMAN: Yeah, it's in the title. [LAUGHTER] So what would that mean? What would that mean? AUDIENCE: Choose some pattern that lets in more light, and then deconvolve-- DOUGLAS LANMAN: Nice. AUDIENCE: --the pattern. DOUGLAS LANMAN: Exactly. So if we didn't know about deconvolution yet-- which, it seems, the wiki will tell you about later-- the first step to getting more light is just to make a larger pinhole.
So if you don't really care about the resolution of your scene, you can afford to blur. So if we're just looking at stars, we can blur them. We can make bigger pinholes and then deconvolve that, maybe, even though it's difficult to deconvolve a circular aperture. So that's not a great solution, but we let in more light as a function of the radius squared. But the real solution here is, again, called coded aperture. And the idea is to drill many holes, as Kevin said, in a way that all of the images that overlap are somehow invertible. And you saw this. Hopefully, you remember back to when I was talking about the [INAUDIBLE] project. We put all these holes in front of the plane so that you could invert that system. And so we weren't the first to discover this, at least for imaging of point sources. And the basic idea is that as long as we design this mask appropriately, so that the system of equations is well-posed, we can invert it. And so here, I'll show you something that's not well-posed. This is like the [INAUDIBLE] setup we saw earlier, where we had three X-ray sources on all the time. And those three images were overlapping on the sensor. So if we do this in the X-ray domain, we have three images coming from three centers of projection. They're all just shifted from one another. So you can imagine if I just had two pinholes and some prior on the scene, it wouldn't be too difficult to separate two images. So that would double my light and I'd probably be able to invert that. As Kevin said, that would be easy enough to deconvolve, probably, although it wouldn't be a linear process. So then, as we add more and more pinholes, we let in more light. But that inversion, that system of equations, becomes more ill-posed. Its condition number is much worse. So then, that's where we end up in this field of optimizing coded apertures, which is what our work was on. And so if any of you are considering using masks for light field capture, or other things, you'll very quickly arrive at similar results. And the idea here is to use-- in this case, they generally use something called a MURA code. And it's just designed-- I'll tell you the main fact about the MURA code, which is that its autocorrelation function is equal to a delta function. And so if you think about deconvolution, for those of you [INAUDIBLE] that aren't familiar with it, that should tell you why you can do this. But the basic idea is that the image formation process we can now model as linear. We have a sequence of pinholes of varying sizes and distributions to simulate the image we receive. We can convolve our scaled aperture with our function. This is the image we get on the sensor. And then, believe it or not, to deconvolve, you simply convolve again. Convolve this blurred image, the superimposed image, with the aperture function. And in the absence of noise, you get back exactly the image. And really, this is the key problem of coded aperture, and something you can investigate. There have been a couple of papers from [INAUDIBLE] and others in recent years, where they applied this-- again, astronomy applied to computational photography. And they said, OK, we're going to put coded apertures in cameras. But the key trick is: what's the aperture going to be so that the inversion is well-posed in the presence of noise? That's the paper. So that should inspire you, hopefully, for your final projects. And then, this has also been applied for tomography.
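The "convolve again to deconvolve" fact is worth a toy demonstration -- a minimal 1D sketch in Python/NumPy. Note the cheat: rather than constructing a true binary MURA, it fabricates a real-valued pattern with a flat Fourier magnitude, which has exactly the delta-function autocorrelation the MURA is designed to approximate.

import numpy as np

rng = np.random.default_rng(2)
N = 256
half = np.exp(2j * np.pi * rng.random(N // 2 - 1))   # bins 1..127
spec = np.concatenate([[1.0], half, [1.0], np.conj(half[::-1])])
aperture = np.real(np.fft.ifft(spec))     # real, delta autocorrelation

scene = rng.random(N)                     # unknown scene (1D toy "sky")
A = np.fft.fft(aperture)
coded = np.real(np.fft.ifft(np.fft.fft(scene) * A))           # coded image
decoded = np.real(np.fft.ifft(np.fft.fft(coded) * np.conj(A)))
print(np.allclose(decoded, scene))        # exact in the noiseless case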
And so you can do exactly what our goal in our project was, which is to have no-moving-parts, instantaneous tomography using some of these concepts. The main challenge is doing it when objects are close to the detector. And so that's really the problem that we solved in our work. So that concludes my talk. So thank you for your attention. If you have any questions, I'll be glad to take them. RAMESH RASKAR: [INAUDIBLE]. [APPLAUSE] All right? So light field and tomography-- one and the same thing. Is that clear to everybody? All right, let's draw on the board. DOUGLAS LANMAN: I think that link is the one you're interested in. RAMESH RASKAR: And so similar to [INAUDIBLE] which was all about shifting and adding, [INAUDIBLE] focusing. So you had a bunch of cameras. And you have this [INAUDIBLE] here. And when you want it to refocus on a particular plane, if you just sum up all these images, then you're focusing at infinity. But if you wanted to focus close up, then you would appropriately shift these images before adding. But what is this particular image? It's basically an image of a scene where all these rays are-- if you take a very simplified pinhole [INAUDIBLE] model here, the projection of the scene onto the sensor. When you move it over here, it's almost the same world being projected from a slightly different [INAUDIBLE]. And then using tomography, it's almost the same thing as when you have [INAUDIBLE] sensors. And you have an X-ray source. And you have an object here. And you have a projection of this object onto the sensor. And when it's shifted around and you put the X-ray source here, you are projecting this from a slightly different viewpoint. So in this case, it's as if the object was outside and you're projecting through this pinhole and taking an image on the detector. And by moving this pinhole, in this case here, you're taking different projections of the scene. And over here, you're taking different projections of what's inside. Here, what's outside. And here, what's inside. But the basic principle is the same, which is you are changing the viewpoint and taking the projection of what's [INAUDIBLE]. And as Doug explained, this data set is sufficient. You can express it as a Radon transform. And you can invert that to figure out what's inside. So if this was some material that's simply attenuating and not scattering, then you can figure out the density function. In the case of refocusing, what you were doing was shifting and adding to focus on a particular plane. And that's a form of tomography, or at least, a form of laminography, as we were looking at earlier. Because you're just looking at a slice, which is exactly what the word tomography means: recording of a slice. And the [INAUDIBLE] is identical, at least to the first order, because the projections of your 3D world on a 2D sensor eventually allow you to compute something about the 3D world. From a set of 2D images, you can say something about the 3D world. In light field photography, you mostly care about creating refocused images one layer at a time. But if this was also some kind of an object which had some density of pure attenuation, then again, from all these sequences of images, you can reconstruct what the volumetric representation of this object is. There's one major difference, though, between a CAT scan here versus a light field camera that has an array of [INAUDIBLE]. What's the major difference? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Far field beam-- that's one example.
There's one major difference, though, between a CAT scan here versus a light field camera that's made up of an array of [INAUDIBLE]. What's the major difference? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Far-field beam-- that's one example. In fact, if you have an array of pinholes, it's very similar to parallel-beam tomography. There are two or three variations that are mentioned here. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Go ahead. AUDIENCE: When you have the angular [INAUDIBLE] your [INAUDIBLE] you're losing data? RAMESH RASKAR: You're losing a lot of data, in terms of angle as well as resolution. Here, you would take thousands of positions of the X-ray source to reconstruct this volume. But here, you may have just a few tens of cameras. And also, here, you may have a full 180-degree rotation of your source and detectors. But here, you're only within-- you're limited almost by the field of view and the arrangement of this camera, which is what you see. So with respect to a given point, you might only span, say, 30 degrees or 40 degrees or so, depending on the field of view. And this is [INAUDIBLE]. So you could have that missing component [INAUDIBLE]. So those are, of course, just limitations, in terms of the type of reconstruction we can achieve from light fields. But again, it [INAUDIBLE] how you set up the system [INAUDIBLE]. So that's why you can construct a tomography machine using the light field idea, which we saw as [INAUDIBLE]. And hopefully, we can convert many of these complex concepts of tomography and deconvolution and confocal imaging and all that, and achieve them with principles that we're familiar with, in the visible spectrum, with optics or without optics, and make them available on possibly [INAUDIBLE] for cameras.
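To see the 180-degree versus 30-40-degree contrast discussed above, here is a hedged sketch using scikit-image's Radon transform (the phantom and angle ranges are made-up numbers):

```python
# Parallel-beam tomography sketch: project a 2-D slice at many angles
# (Radon transform), then invert by filtered back-projection.
import numpy as np
from skimage.transform import radon, iradon

slice_2d = np.zeros((128, 128))
slice_2d[50:78, 55:85] = 1.0                     # simple attenuation blob

theta_full = np.linspace(0.0, 180.0, 180, endpoint=False)
recon_full = iradon(radon(slice_2d, theta=theta_full), theta=theta_full)

# Limited-angle case, like a camera array spanning only ~40 degrees:
theta_narrow = np.linspace(70.0, 110.0, 40, endpoint=False)
recon_narrow = iradon(radon(slice_2d, theta=theta_narrow),
                      theta=theta_narrow)
# recon_narrow shows the streaking/missing-component artifacts mentioned
# above, while recon_full is close to the original slice.
```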
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_1_Introduction_and_fastforward_preview_of_all_topics_Part_1.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: All right, everyone, let's get started. I am Ramesh Raskar. And this is-- AUDIENCE: I don't know. RAMESH RASKAR: --MAS 131 and 531, Computational Camera and Photography. And we have, I think, about $50,000 to $60,000 worth of equipment right here. So please hold on to the door-- if something goes on, I really need your help to protect all these things. It's going to be a lot of fun. We're going to see crazy kinds of cameras, crazy kinds of photography, medical imaging, and applications in different domains. I just wanted to show this really beautiful picture that some of you may have seen on blogs. This is a real photo taken by an iPhone camera of the propeller blades on an aircraft. Anybody know what's going on? AUDIENCE: The sensor takes line-by-line images. And by the time it goes to the next line, the next vertical line, those propellers have moved. So it faces the [INAUDIBLE] directions are not exact location-- RAMESH RASKAR: Excellent. AUDIENCE: [INAUDIBLE] separated [INAUDIBLE]. RAMESH RASKAR: Exactly. So when we think about an image, we think about this as some kind of a snapshot, some kind of a progressive photo. But there are two motions going on here. One is the motion of the blade, which is circular, with radial blades. And then there is motion, effectively, of the sampler, which is moving from top to bottom. It's a rolling shutter. There's a very nice animation of that. So this is the rolling shutter: the camera is exposing approximately just one line at a time. And as the blades rotate around, you can see that it's tracing this curve. So this example should show you that we can't take anything for granted when it comes to this modern photography. And this is mostly an artifact of a really cheap sensor that cell phone cameras use. Because of bandwidth constraints, it's much easier to roll and read one line at a time than to read the whole buffer of the camera at the same time. So you get just beautiful artifacts. And one open question is, if camera makers keep cheaping out-- if they come up with even cheaper mechanisms than just one line at a time, where they're exposing in some random sequence-- whatever artifacts they create, how will photographers exploit that to create stunning and beautiful imagery?
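The propeller photo is easy to reproduce in simulation. The sketch below (all parameters are made up for illustration) renders a rotating propeller and reads out one sensor row per time step, which bends the straight blades into the curves seen in the real photo.

```python
# Rolling-shutter simulation: row r of the output is sampled from the
# propeller's pose at time r, so the blades come out curved.
import numpy as np

def propeller(angle, size=200, blades=3):
    """Binary image of thin radial blades rotated by `angle` radians."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    theta = np.arctan2(y, x)
    r = np.hypot(x, y)
    img = np.zeros((size, size))
    for k in range(blades):
        wedge = np.cos(theta - angle - 2 * np.pi * k / blades) > 0.999
        img[wedge & (r < 0.9)] = 1.0
    return img

size, omega = 200, 0.02        # radians the blades turn per scanned row
rolling = np.zeros((size, size))
for row in range(size):        # expose one line at a time, top to bottom
    rolling[row] = propeller(omega * row, size)[row]
```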
More on the scientific side-- let's see, the lighting is [INAUDIBLE] here. Let's see if you can play with that. Yeah, I'll do that. See if there's another switch that doesn't light the back-- no. We're going to learn a lot about computational illumination in this [INAUDIBLE]. That may be OK for now. And we'll switch it back. So imagine you have somebody out here behind a shower curtain and you want to take a photo. And you may be able to come up with a photo where you just see the shower curtain, without the person behind it. But maybe you can also create a photo where you reveal what was behind the shower curtain. And this is a trick that's achieved using a special type of a flash. And as you'll see, we will generalize the flash to very highly programmable illumination. And with that, photography of this kind is also possible. What else? Here's a great example I'd like to show. Imagine you have a scene, and instead of a camera you just have a single photodetector. OK, it could be the light sensor of your SLR camera. Those of you who like to think about these photodiodes-- just one photodiode. And instead of taking a photo, what I'm going to do is turn on one pixel of this projector at a time and record this. If the projector has one million pixels, I'm going to take one million measurements by turning on one at a time. What picture will I get here? Will I get a picture from those 1 million measurements? Anybody? You get a photo that looks like the camera was placed here, at the projector, instead of over here. And it's exactly how a barcode scanner at the checkout counter works as well. The barcode scanner doesn't have a camera. It just has a sweeping laser. In the same apparatus there's also a single photodetector. So as the barcode scanner sweeps over the 1-0-1-0 pattern of the printed code, when it hits a black spot, it doesn't reflect much light, and when it hits a white spot, it reflects much light. And in this way, a barcode scanner, without having to worry about focus and dynamic range and all those issues, can figure out what the barcode is. AUDIENCE: Isn't it important, though, that the barcode scanner is a laser, which keeps the pixels spatially localized? RAMESH RASKAR: Very focused. AUDIENCE: Right. Whereas here if you have an incoherent light source, and you're illuminating a real scene, you'll illuminate the whole scene kind of diffusely. RAMESH RASKAR: That's a very good point. So that's why we have to use a projector. So we are turning on only one pixel of the projector. So in very simple words, it's turning on only one ray in the scene. The projector is not flat-field-- not all the pixels are on at a time. That's why we had to take one million readings while turning on one pixel at a time. So that's straightforward. This is very well known. What are some other things you can do with this particular duality? If you replace the photodetector with a camera, you can do something similar. You can turn on one pixel at a time and take a full photo. So again, I'm going to turn on one pixel of the projector at a time and, in this case, take 1 million photos. Now, once you do that, what I will be able to do is create a relationship between what happens when I turn on exactly one pixel of this projector. One pixel is turned on. A photo is taken. I can measure what happens to this particular pixel of the camera. And in this way, you will create a four-dimensional relationship-- 2D for the projector and 2D for the camera. So a million photos here and a million pixels here. So you're going to have a trillion measurements, 10 to the 12. Now, we can invert that and ask, for a single pixel of the camera, which projector pixels are contributing. So the question to ask is-- I'm going to turn on one pixel at a time. If I turn on just this ray, for example, it will only contribute to, say, this pixel. But if there was some interreflection, it will also contribute to some other pixel, and so on. So you're going to have this global transport of light. And from that you can ask this question: for a given camera pixel, what are the other projector pixels that are contributing to it? And so you can do this inversion, which is relatively straightforward to do.
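Here is a hedged sketch of that measurement: each single-projector-pixel photo is one column of a light-transport matrix T, and everything else-- which projector pixels reach a given camera pixel, and the dual "camera at the projector" view-- is a query on T. The sizes and the simulated scene are toy stand-ins for the million-by-million real experiment.

```python
# Dual-photography sketch: acquire the light-transport matrix column by
# column, then query it. T_true stands in for the physical scene.
import numpy as np

n_proj, n_cam = 64, 64                        # flattened pixel counts (toy)
rng = np.random.default_rng(1)
T_true = rng.random((n_cam, n_proj)) * (rng.random((n_cam, n_proj)) < 0.1)

def capture_photo(pattern):
    """Stand-in for photographing the scene under a projector pattern."""
    return T_true @ pattern

T = np.zeros((n_cam, n_proj))
for j in range(n_proj):                       # one projector pixel at a time
    pattern = np.zeros(n_proj)
    pattern[j] = 1.0
    T[:, j] = capture_photo(pattern)          # that photo IS column j of T

# "For a given camera pixel, which projector pixels contribute?" -> a row.
top_contributors = np.argsort(T[17])[::-1][:5]

# By reciprocity, T transposed renders the scene as if the camera and
# projector swapped places -- the basis of the card-reading trick below.
dual_image = T.T @ np.ones(n_cam)
```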
And then you can do experiments like this: how can you read your opponent's cards from across the table? You can turn on the projector one pixel at a time, look at the reflection in the camera, then turn on the next pixel, and so on. And as long as your opponent allows you to take a million photos of his card, you'll be able to read the card, although it is not directly visible from your point of view. So you can kind of look around an occluder and see what's behind a corner. What's the-- what's the flaw in this argument that you can look around a corner? Yes. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Exactly. There's some device that you have to place that's actually looking at that card. So this is a beautiful project from Stanford. And one of their funding agencies is, of course, the army-- and they would like to know what's around the corner. But of course you have to go and place a projector behind enemy lines to be able to figure out who's out there. So this is the message of this class. Pure digital cameras are extremely boring. And if you look at these two cameras, one of them is digital. One of them is film. Can you really figure out which one is which? Left is digital? Right is digital? OK, and the rest are confused. So, hardly any difference between them: zoom, focus, aperture, exposure, all the same old boring stuff. There's not much extra going on. And if somebody claims that digital cameras have spawned a new art form-- probably not. I mean, you could just take film cameras, scan them, and play with that. They're just faster, better, and cheaper. So when people think about computational photography and computational cameras, they start thinking about, wow, OK, I have this camera that doesn't do a very good job of dynamic range and field of view and so on. So what I'm going to do is somehow improve the performance of this camera. One common way is boosting the dynamic range by taking multiple photos via exposure bracketing, or having a larger depth of field by taking multiple photos via focus bracketing, and so on. So to increase the field of view, you're going to do a panorama and then stitch it together. If you want to increase the frame rate, you're going to play with exposure time and so on. So this is what a lot of people think about as computational camera and photography. I just want to emphasize that this is not what we're going to talk about in this class. These are concepts that are literally easy to understand, and I have a lot of lecture notes and videos online. You can look it up. In a couple of hours, you will get a pretty good overview of all the things that can be done in this space. And if you go to Flickr, there are groups that are just exploring high dynamic range imaging, et cetera. So we are not really going to talk about that, because all these techniques are just trying to improve the performance of a camera. They're not trying to change the game of photography. So generalizing it even further, what we're going to look at is cameras that are not just 2D sensors mimicking film. They could be zero-dimensional sensors, as I said, as in barcode scanners-- and we'll look at time of flight, motion detection. One-D sensors that you see in flatbed scanners or fax machines, or line scan cameras that are used in photo finishes at sports events; 2-and-1/2-D sensors; 3D sensors of a different kind; and very quickly on to 4D and 6D devices that are exploiting deconvolution and tomography in medical imaging or scientific imaging; and also displays that are four-dimensional and six-dimensional.
So earlier we saw that you can look around the corner by placing a device in the line of sight, which is the projector. Well, that was done three or four years ago, 2005. Do we have some new machinery now that will allow us to look around the corner? You're out here-- a meeting is going on inside, say-- and you want to see what's inside through a door that's partially open. Is this possible? Or imagine your cameras that are right now in mobile devices. You have this thing that's shrinking every day, getting cheaper but also shrinking and actually degrading its signal-to-noise performance. But imagine if your whole LCD becomes photosensitive, so every pixel that's emitting on your LCD also becomes photosensitive-- and companies like Sharp and Planar are already doing that. So they have this LCD array where, if you put a finger directly on top of the LCD, you can even look at the ridges on this finger. So there are some very beautiful opportunities. And I just saw Kimo and Ted here. They also have a very exciting project where they can use an ordinary camera to look right down to the resolution of your ridges. Now the question is, if we redefine a camera not to be this perspective device that behaves like a pinhole camera-- but the whole screen is sensing. There's a fusion of sensor and optics. Think about, in the display domain, a CRT. There is an emitter, the electron beam, and then the receiver, which is the phosphorescent screen. There's always a separation between what's emitting and what's receiving. And over time, of course, we have LCDs where everything is fused together. And if you think about cameras, it's the same. You have a lens that's collecting, and then you have a sensor that's receiving. And we haven't questioned that there always has to be some sufficient distance between them. If you have a 35 millimeter camera, you ought to have roughly 35 millimeters between your lens and your sensor. But we can throw away all those constraints and all those assumptions, and we might be able to come up with devices where the lens and the optics and the sensor are all just one thing. And that would be kind of the LCD equivalent of a camera. So this is something we can think about. And these devices are coming, mainly because they want to support touch sensing. But imagine the future, if you have your mobile device, and you can just wave it and take a picture-- and because your sensor is so large, you'll be able to collect a lot of light. And right now it's not even megapixel yet. But this will be a megapixel, 10 megapixels. And you'll be able to multiplex those pixels for color and infrared and all kinds of beautiful things, different speeds, and so on. So that's how we're going to think, in this class, about cameras and photography. And the same with medical imaging. Medical imaging today is very similar to photography in the 1930s. The guy who took the photo was more important than the people whose photo was being taken, sometimes, because those people had to go to the studio and stand still for some time. And he had this one specialized device called a camera. Everybody has to stand still for a few seconds. You take a picture, and then you go home. And after a couple of days, you get your photo back. And that's exactly what medical imaging is all about today. We have to go to a special location to get our CAT scan or ultrasound scan done. And our time is considered not so important.
The guy who's running the machine, he's very important, because we have to make an appointment with this guy. So can we bring-- will medical imaging evolve to a stage where it's like photography today? Clearly, when we take a photo-- if somebody spills a coffee, we take a photo. It's a very casual way of thinking about photography. Can medical imaging evolve to a stage where we can do that? And it turns out we can. There are certain directions, certain computational methods, that we will be able to develop that will get rid of older mindsets of how medical devices are being developed. So, a very brief introduction of where I come from and our group here, Camera Culture. In the past I have worked a lot on [? computational ?] illumination: different types of projectors, creating multi-projector displays, creating virtual reality setups, augmented reality, a lot of work on pocket projectors and augmented reality with pocket projectors, interaction paradigms, work with RFIDs, and so on. And in the last eight years or so, a lot of work in cameras, playing with shutter, aperture, light fields, illumination, wavelength, sensors, and so on. And some of the questions we discuss in our group are these. What will the camera look like 10 or 20 years from now? I already gave you one kind of possible direction, where it might be just a flat screen, or maybe it's just your credit card. But it could even be a retinal display or a retinal sensor. How will the next billion cameras change our social fabric, our social culture? If you think about the internet, it has been transformed by the ability to search. That includes indexing and segmenting and sorting and all those beautiful problems. But image search even today remains an extremely challenging problem. So maybe we can change the game and modify our devices, modify our cameras, modify our displays, modify our storage devices, and so on, so that image search can become simpler. OK, if I magically have a camera that's 100 megapixels, when I take your photo, I can zoom in all the way down to your iris. And if I have an iris detector, then I have very easy identification. And all the photos I take of Dina, for example, are all indexed with her iris code. So that's kind of a very [INAUDIBLE] level way of thinking about simplifying image search. But we'll look at a very interesting example that exploits thermal IR imaging, electric cameras, and so on. What will happen when all this recording is being used not just by Big Brother, but actually for some beneficial purposes in commercial settings? So imagine a scenario something like Google Earth live, where right now you can go to Google Earth, fire up the browser, and you can go to any part of the world and see how it looked about six months or a year ago. But imagine if you can just fire up your browser and go to this location live, as it's happening now. And of course, you can move your slider and go back and forth. So if you have a camera on every street pole, every bus, every taxi, every person, and all that data is being streamed on the network so you can go to any part of the world and see it-- what will that world look like? I'm sure many of you are definitely scared, given all the privacy and security notions of it. But this could become very similar to the security we have, a little bit, in our financial system, our financial transactions.
When I use my card in a restaurant, a bunch of people actually see my credit card number and all the information. The waitress looks at the card. And the owner knows it. The bank knows it. The credit card company knows it. And the government, of course, knows it, and so on. A lot of people know about your financial transactions. But somehow you are completely comfortable sharing your financial data. So similarly, can we create cameras and photography and imaging infrastructure so that if you walk down the street, and all these cameras are looking at you-- anybody in the world could be looking at you live-- you feel completely secure that it's being used only by the right people for the right reasons? And if you don't want them to see you, you have some switch on you that says, I'm completely invisible, and you should be able to walk down the street. Can we create such an imaging infrastructure? Think about high speed cameras and high resolution cameras. Maybe we'll have microscopes and nanoscopes with us that will again change the way we think about medical imaging. And what about movie making and news reporting? As you know, it has dramatically changed over the last three or four years because, again, a billion people have cameras out there. Whether there's an incident in Tibet, or there's a satellite image of what's going on in Burma, or a plane that's about to crash, you get amazing videos. And they're not captured by CNN. They're captured by other people. So how are we going to, again, change this imaging infrastructure, the whole pipeline, to think about the future of moviemaking and news reporting? So overall in this course, we're not going to think purely about software. We're going to think about how we can change the camera, not just use the camera. So that involves optics and illumination and sensors, and motion of the sensor, motion of the optics, different wavelengths, 3D cameras, polarization, probes and actuators, and also priors and online collections, the network. And one kind of theme you'll see is, after years of research in computer vision, one could argue that we have exhausted the bits that are available in pixels. There's still a lot to be done-- even today, with sophisticated computing algorithms, it's challenging. So maybe we can build feature-revealing cameras that will go hand in hand with existing or modern [INAUDIBLE] algorithms, so that we can process photons and create this meta structure for our imaging pipeline. So what we're going to do today is, I'll briefly describe what this course is about and do some introductions. And in the second half, we will come around and do a fast-forward preview of the [? course ?] topics, OK. So in the next few slides, I'm just going to give you a quick rundown of what this course is about, the layout. Here's a nice overview of successful biological vision in animal eyes-- there's a nice paper in Science-- and you can take all the successful biological vision and place it in about eight categories. You have eyes based on shadows, based on refraction, based on reflection; single-chambered eyes or compound eyes, sometimes with apposition, sometimes with superposition; and so on. We'll come back and discuss this more. If you look at the eyes of a scallop, they're based on a mirror, not on a lens. So you have a mirror down here, and the sensor is up here. And light actually reflects from this concave mirror. And the image is formed on the detector.
And the future camera in your mobile device, on a flat sensor, could be based on this architecture. It doesn't have to be based on this single-chamber refractive design. So human vision is in this corner here. It's a single-chamber design with lenses. And all the cameras, at least all the standard cameras that we know of, are in this particular part. But there is a lot to explore. And that doesn't even explore computation. This is pretty much raw imaging that these animals are using. So, film-like, traditional photography: light comes in through the lens. It falls on the detector or the film, and that's the end of the story. That's your photo. And you just transfer it digitally or by chemical processing, and you can see it. A computational camera is a little bit different. We're going to have some crazy optics that's going to think about how rays and wavelengths are manipulated. We're going to have a sensor that's not just mimicking film. It's not going to be a [? flat ?] sensor. It'll have its own geometry and spectrum and so on. And what you see finally will not be just a raw image but will have some reconstruction and computation on top. So that's the computational camera-- that would be what's in the conventional camera. But there's also an element that's outside the camera, which right now is sort of just a flashlight. You're going to have a flashlight with sophisticated modulators that are changing the intensity and phase and polarization of light in different directions, different kinds of additional optics, and so on. So once you have programmable light and a computational camera, we have a framework to really exploit and understand the light transport in the scene. So something interesting is happening in the camera world. We are here, 2008, 2009. About a billion cameras are being sold every year, which is fine. Most people know that. But if you just go back about six years, 2001, 2002, zero cameras were sold embedded in a mobile phone. So we have gone from zero to a billion in just six years. So it's just an amazing time to think about imaging. And this is very much like all the fun in networking and communication back in the '90s, when millions of people were coming on board and thousands of websites were going up. That was computing. And this is the time for visual computing. And because this game is changing so fast in terms of cost, in terms of performance, applications where imaging was considered not the perfect solution are changing. We are seeing cameras being used in some really casual and very strange ways. And here's an indication of where that's going. So where are these cameras? Remember, cameras are not just 2D sensors for photography; they're used in various ways. So we are here, 2008, 2009. The pink here is because of mobile phones. About a billion sensors are being sold. Any guesses about what these other slivers are, the blue one, the green one, and so on? There are a couple of million here. About 100 million here are being sold. I guess [INAUDIBLE]. Sorry. AUDIENCE: Optical mouse. RAMESH RASKAR: Optical mouse, very good, very good. AUDIENCE: Gaming. RAMESH RASKAR: Gaming. So if you think about-- Wii, I think, sold like 30 million remotes. 30 million remotes were sold. And this chart was actually made before that. So that's not even here, but that's going to be big, yes. And of course, there is traditional photography. But I'm glad you got optical mouse, because that's one of the largest markets.
It's basically a very low resolution, usually 20x20 or 32x32 pixel camera that's running at 1,000 or 2,000 Hz. It's a high speed camera that's doing optical flow to figure out where your mouse is, even if you put it on a very clear surface. And then the mobile phones and digital cameras-- and all this worry about Big Brother: think about security. It's a very, very tiny sliver. And if you think about the first three categories here-- optical mice, mobile phones, and digital and video cameras-- these are all personal devices. And this is going to scale with the number of people in the world. So this will easily become 6 billion or 6.5 billion, whatever we have. But these other slivers may or may not grow. Gaming, for example, could still grow very rapidly, because almost every person individually might own a game console. So, very interesting here. If we look around the internet, cameras are being used-- sometimes it's just silly. You may know about this do-it-yourself green screen effects company, U-Start. And they want to be the Guitar Hero of the visual domain. So instead of playing music that's synchronized with some prerecorded data, here they have pre-recorded video segments. And then you can star in this movie. It's pretty big. If you go to their website, a lot of people are uploading their videos with their software and their green screen. All they give you is a camera, a simple camera-- not even a stereo pair, just a simple camera-- and a big plastic green screen. And they have some software for doing screen matting and so on. You may be familiar with this video. Which company is that? I forgot. It starts with M. It's a stereo webcam. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: It's a Japanese company, OK. Some other interesting ones-- look at this one here. The Panasonic beauty appliance-- what does it do? Any guesses? It's a pretty big camera. It's a personal beauty appliance that provides humidification in a very local spot. So you can sit there all day, and it, I guess, maintains the right level of moisture and humidity around you, so your skin won't get very dry. What about this one? The camera is not anywhere near the eyes. The camera is actually on the ear. Can I look inside this? I was reading up on this. It wasn't very clear what the applications are. But you can think of it-- I mean, if you're walking down the street and you want to be safe, something's coming at you, maybe it tells you. Or if there's somebody you don't like walking towards you-- don't turn around. Yeah. And that's going to happen more and more, in more crazy places. I like this GigaPan Epic 100, which is for creating panoramic imagery. It's actually basically a tiny, tiny robotic platform, about $300, for an [INAUDIBLE]. And it has a physical lever that presses on the shutter as it rotates around. It's pretty amazing. And of course Fuji is doing some fantastic work, just starting with the stereo pair. And they're trying to provide a whole pipeline, all the way down to printing on a lenticular screen, so you can enjoy this stereo photo. OK, so after you look at all these cameras, you say, OK, what else is there to do? Because you don't have to do much research to do any one of these projects. You don't have to take a class to build these applications. So let's think about trying to improve the camera just a little bit. All I want is depth per pixel, something humans seem to be doing very well. We can use our two eyes and figure out how far things are. At least we think we know how far things are.
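Both the optical mouse's 2,000 Hz flow estimation mentioned above and two-eyed depth perception reduce to the same primitive: block matching. Here is a hedged sketch for the stereo case (patch size, search range, and the brute-force loop are illustrative choices, not any product's actual algorithm); the disparity it finds per pixel is inversely proportional to depth.

```python
# Block-matching stereo: for each patch of the left image, find the
# horizontal shift that best matches the right image. The same matching
# primitive, run on two consecutive frames instead of two views, is what
# an optical mouse computes in hardware.
import numpy as np

def disparity_map(left, right, patch=7, max_disp=16):
    """left, right: HxW grayscale arrays from two horizontally
    displaced views. Returns an HxW array of integer disparities."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            errs = [np.sum((ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]) ** 2)
                    for d in range(max_disp)]
            disp[y, x] = int(np.argmin(errs))   # best-matching shift
    return disp
```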
There's a lot of prior information. But for a camera, just to get depth is extremely challenging. After 34 years of research in computer vision, you still can't go out there and buy a camera at a reasonable cost which runs at a reasonable speed and gives you that perfect [INAUDIBLE]. It's amazing. So we'll show you what the state of the art is using this multi-thousand-dollar camera. This was donated to us by [INAUDIBLE]. So this is a camera that uses time of flight to [INAUDIBLE]. Oh, we don't use [INAUDIBLE] cameras as much. OK, so this is time of flight. All those LEDs are being-- those of you who have a cell phone camera, just look at these lights through your camera. You'll see all of them are lit up in the IR. And as you can see here-- what is it looking at? Is it [INAUDIBLE]? OK, so the people in the front are marked in green, and somebody in the back over there. Yeah, Martin. So it's giving you some estimate of depth by doing a time of flight calculation. Now, that is the camera. And it works OK. But as you can see, the quality, even in this relatively easy configuration-- most of the objects are diffuse, there is no interference, the environment doesn't have sunlight to overwhelm this active light-- even then, the quality is OK. It's not that great. And I forget the exact cost of this one, but they cost about $10,000, these cameras. Go ahead. AUDIENCE: What is it called? RAMESH RASKAR: This one is Canesta. Yeah, but during the later classes we'll learn about all the different 3D cameras, whether they're based on active light or stereo or structured light, polarization, and so on. So this one is Canesta. The other company is 3DV, which was recently bought by Microsoft. And then there are a couple of companies in Germany who are building 3D cameras. And apparently this market is now being driven by game consoles, because PlayStation and Xbox are all interested in using 3D cameras in the house for gaming, for interacting with gestures. So they want to distinguish this from this, for example, because right now all the games are big gestures-- the hands are away from your body to play the games, EyeToy and Xbox and so on. But the new ones will allow you to do gestures that are more intricate. So there are so many challenges just to get 3D, because we have to use some kind of active light to compensate for ambient light. You have to stitch geometric capture from multiple views. If you have something like marble or skin, it has subsurface scattering. So that's difficult to deal with. You cannot do triangulation. And objects that are diffuse have a diffuse reflection component. If you go to something that's glass or dark, it's just out of the question. So a computational camera should do something more than just capturing a 2D image. And those of you who are here building real-time HCI applications for robotics, anything like that-- there's no camera right now that can deal with objects like this. This is amazing. So we have a billion cameras out there. But none of those cameras can solve this problem. So that's what I want to look at in this class. AUDIENCE: Can I ask a quick question about that? RAMESH RASKAR: Yeah. AUDIENCE: So I'm just thinking back to the traditional film type camera that has autofocus sensors that are contrast based, or a rangefinder type thing. So that in a way gives you depth by moving optical elements. But this doesn't have any moving parts, I guess.
Does that count as a way to try to make a camera that perceives depth by having something that scans physical optics-- RAMESH RASKAR: Exactly. AUDIENCE: --around. RAMESH RASKAR: I think you're asking a very important question. Let's see if I have a slide. So you can think about different ways of scanning in 3D. And this is, by the way, from one of our visiting students, Doug Lanman, and Gabriel Taubin. They have a beautiful course at SIGGRAPH on all the different ways of doing 3D scanning. There are contact-based and then non-contact-based ones. And right now we're talking mostly about active ones, using time of flight and so on. But there are of course passive ones as well-- stereo and motion and so on, based on focus and defocus. That may be a good segue to actually look at this camera here. And Ms. Emily, you want to run it? AUDIENCE: Yeah, [INAUDIBLE] RAMESH RASKAR: Yeah. So this camera is completely passive. It's like a full camera, which has-- you want to talk about it? AUDIENCE: Yeah, basically it's 25 separate images; each one of these is a separate camera. RAMESH RASKAR: Which is on a single chassis. AUDIENCE: Yeah, and there are just 25 of them. But each one's from a slightly different perspective. So you can do things like add all the images together but slightly shift them. So I can focus down at infinity, towards the end of the table. Or I can change the focus simply by shifting or moving the images to focus right there. RAMESH RASKAR: So remember, he's doing all this operation in software. You can take those 25 images, and then in software you can refocus it anywhere you want. And then, again, based on maximum contrast or one of those operators, you could potentially figure out what's in front or what's behind. But of course, if you put in some transparent object, it's going to be quite challenging to figure out where this is. If I just put something that's really flat, then it wouldn't know whether it's in focus or not in focus in the middle of the paper. It might be able to do an OK job on the boundaries of the paper. But in the middle of the paper, it always looks like it's low frequency.
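That "maximum contrast" operator can be sketched in a few lines (a hedged illustration-- the Laplacian-energy measure and window size are common choices, not this camera's actual algorithm). It fails exactly where just predicted: in flat, textureless regions, every focal setting scores the same.

```python
# Depth from focus: per pixel, pick the plane in a refocused stack where
# local contrast (Laplacian energy, averaged over a window) peaks.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, window=9):
    """stack: (num_planes, H, W) array of software-refocused images.
    Returns an HxW map of the index of the sharpest plane per pixel."""
    sharpness = np.stack([uniform_filter(laplace(img) ** 2, size=window)
                          for img in stack])
    return np.argmax(sharpness, axis=0)
```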
So these kinds of cameras are coming. You can buy this camera for about $20,000 from U-Plus. So far we are up to, what, 30-- up to 30K. We did 10, then 20. The next one will be-- we'll get to 60 very quickly, 60K. And if you, again, think about your cell phone camera, the current mindset among camera makers is that they want to shrink it, something that's smaller and smaller and smaller. But if you want to create enough baseline for some kind of focus-based depth extraction, for example, as we're looking at here, we must have some baseline between them. And that's what this camera does. It's still pretty compact. I forget-- I think 6 centimeters total, something like that. It could be on your phone. It does have that much space. So if they would just turn it around and make all the pixels sensing pixels, you have enough baseline from the left edge of this cell phone camera to the right edge to create those effects. But that mindset has to change. Right now, as we will see later in the class in the section on sensors, there is tremendous innovation in how image sensors are being built-- wafer-level cameras, backside illumination, 3D VLSI, and so on. I mean, it's all great, and it's going to help us. But unfortunately all of them have a single-track mind. They just want a higher signal-to-noise ratio-- collect as many photons as you can. And they want to shrink the sensor as small as they can so that they can sell it at a cheap cost. But if you think about it-- if somebody had told you 30 years ago that TVs are really expensive, so what we're going to do is start building TVs that are smaller and smaller because they'll be cheaper and cheaper-- all right, but that's not how it works. People want larger and larger TVs. And they're willing to pay for 40-inch and 50-inch TVs. And there will come a time when people say, it's OK if I have to pay a little bit more, but make my sensor array larger and larger so that it's almost the size of the whole device. Even for a camera like an SLR camera, not the whole camera is sensor. Only a small part of it is sensor. But once we get around this notion that silicon is really expensive-- that we have to shrink it, that we have to take this wafer and slice and dice it into millions of tiny sensors for cameras-- hopefully things will change around, and we won't be watching those tiny TVs, as they had predicted some time ago. So that's kind of where some of these things are. So let me go back-- so getting 3D is very challenging, and that, I would say, is kind of version 0.1 of a computational camera. I mean, it has to sense the three-dimensional world to do anything interesting. And it is being used in other scenarios-- I mean, the [INAUDIBLE] challenge (this is an animation, for movies), where you can build vehicles that can navigate through any kind of terrain, including urban neighborhoods. So the first version of the [INAUDIBLE] challenge actually had a lot of cameras. But the sad aspect, from a camera point of view, is that in the second version-- Stanford was the winner in that one-- in the traditional definition of a camera, there were no cameras on the whole vehicle. There were zero cameras on a car that's built to navigate in a city. It's pretty sad. I mean, if you think about it, a human driver driving through a city-- I would say almost all our input, all our actions, are based on visual information. But cameras are so primitive that, to build this self-navigating vehicle, no traditional cameras are used. Now, of course, I'm taking it to the extreme, because the kinds of devices they did use were similar to range scanning devices. They had laser scanners, LIDAR, using time of flight and so on. And in a way they're capturing information that's sensitive to different directions. So in a way, it's a camera, but it's not a traditional visible-range camera that they're using. It's a detector, a single pixel detector, that's measuring light coming from different directions. So that's one extreme, where cameras are not good enough to self-navigate. On the other extreme, you have lots of people online-- again, this is a slide from Doug Lanman-- building their own 3D scanners. So of course, you can take a Logitech webcam and put some [INAUDIBLE]. You can calibrate that and move it around to create 3D models and a whole bunch of-- we'll look at this during the-- people are using just a wineglass to create laser stripes. So I can take a laser pointer and shine it at a wineglass to create a laser stripe. I don't even have to buy expensive optics. And they just scan it with a cheap DV cam to create 3D scanners. This one is probably the most interesting. They want to create, again, a very accurate 3D model from a LEGO rig. And this is how it works.
You can probably guess it from the picture. There is a milk pot. They want to take a character and scan it. They'll put this character in the milk pot and take a photo from above, doing a very simple segmentation. And then over time, they're going to pour in a little bit more milk, so that the level of the fluid will rise in this rectangular bucket. And they'll continue to take pictures, and they'll create this 3D model section by section. Pretty amazing. AUDIENCE: They did it in New York with humans. RAMESH RASKAR: Sorry? AUDIENCE: They did it in New York with humans. They go in the milk and scan you. [INAUDIBLE] RAMESH RASKAR: Excellent. I would imagine going the other way is easier. Just drink-- just straight up [INAUDIBLE] and drain it. It will go out almost concentrated. AUDIENCE: True. RAMESH RASKAR: But, yeah, this could be a great class project. So here's a slightly different direction that some people are taking. And I really like this work from [INAUDIBLE] at Washington. Some of you are-- OK, let me state the question first and see if any of you has an answer to it. Let's say you go to Rome, and you are in front of the Trevi Fountain. You take a photo, and you don't know anything about the Trevi Fountain and all the hype about it. You want to take a photo and figure out which part of this photo is interesting according to everybody else. So why am I here? Should I be looking at-- if I go to the Old Town Square in Prague, I'm not going to look at a fountain here. I'm going to look at another part of the castle. But somehow when I'm at the Trevi Fountain, I'm not looking at the buildings. I'm looking at the fountain. So how would I determine, without looking in my guidebook, which part is most interesting? Any clues, any answers? AUDIENCE: I don't know if I should answer. RAMESH RASKAR: Go ahead. You're [INAUDIBLE] yeah. AUDIENCE: One thing you could perhaps do is take a consensus over all the photos that have been taken of this object and basically see where the feature tracks overlap. A la [INAUDIBLE], sort of. RAMESH RASKAR: Excellent. So this is actually an offshoot of the photo tourism Microsoft Photosynth project, where you have millions of photos of the same tourist location. And once you have registered them in 3D, then you can just shoot the rays back to see which rays will intersect the pixels. If you just compute kind of a histogram of where the rays are shooting, you'll realize that most of the photos are looking at the fountain. And in this case, the photos are looking at the top part of the [INAUDIBLE]. AUDIENCE: So maybe it's what is popular here. Everything might be a different [INAUDIBLE]. RAMESH RASKAR: You're right, you're right. So the next question is, how can we-- it's like popular versus interesting on Flickr. AUDIENCE: Yes, that's true. RAMESH RASKAR: So here's our question: how can we ask-- how can we answer this question? AUDIENCE: There may be certain features that the human visual system finds particularly interesting, things with high levels of non-repetitive detail or certain shapes or ratios. RAMESH RASKAR: Yeah, certainly. So I think there's a lot to be done. So maybe we can build a camera that directly detects the interesting pattern-- it doesn't care about capturing a real photo, but it does a very good job of finding out if there are patterns we find interesting, whether it's symmetry or repetition or the right scales, the right aspect ratios, and so on. And from the same project-- so this is the Pantheon, again in Rome.
These are all the places from which people take photos. And if I just tell you this is the view of the Pantheon-- and up here is a fountain. People start from here and then go in and roam around and take a picture of this big hole in the Pantheon. So the question is, if I take a picture here at the entrance, and from inside the Pantheon looking out at the door, and a picture from outside, what's the path that would connect these two photos? It's not a straight-line path. I don't have an image. I don't have it. But if you actually have this, again, voting scheme, you'll realize the best way to go from this view to this view is to follow this particular path. So this is data that's been captured from visual media, like photographs, but inherently it's more about geometry. And I would call it-- it's almost non-visual.
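A minimal sketch of the voting scheme behind the Trevi Fountain and Pantheon examples (the track representation is a hypothetical stand-in for real structure-from-motion output): once photos are registered, "interesting" reduces to a histogram over reconstructed points, counting how many photos observe each one.

```python
# Popularity voting over registered photo collections: each feature track
# links a reconstructed 3-D point to the images that observe it.
from collections import Counter

def popular_points(tracks, top_k=10):
    """tracks: iterable of (point_id, image_id) observations.
    Returns the top_k 3-D points seen in the most photos."""
    votes = Counter(point_id for point_id, image_id in tracks)
    return votes.most_common(top_k)

# Example: point 7 is observed in three photos, point 2 in one.
print(popular_points([(7, "a"), (7, "b"), (7, "c"), (2, "a")]))
```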
So that was computational camera; now let's think about computational photography. For computational photography, my collaborator Jack Tumblin at Northwestern and I-- this is how we like to define the two parts. We want to capture the scene, and we want to synthesize the scene. And when we capture, we want to capture it in an extremely rich fashion, so that it's machine readable and the machine can understand what's out there and re-synthesize it. We want to synthesize in a hyper-realistic manner, so that it represents the essence of our visual experience. And within that there are three major themes. One is epsilon photography, which is basically generalized bracketing. So whether I want HDR or a panorama, I'm going to take my camera, do exposure bracketing or focus bracketing or view bracketing and so on, and just create a very nice picture, mostly to overcome the limitations of a camera. That's just epsilon photography. I'm going to change the parameters of the camera within an epsilon neighborhood. And then people think of that as the ultimate camera. But again, we're not going to focus much on this part in this course. The next part-- that's quite interesting-- is so-called coded photography. So, as in the comment we had earlier, I really care about some mid-level features. I want to know if they're symmetric. I want to know if there's a repeating pattern. I want to know where the edges are. I want to know how the regions can be segmented, and so on. And instead of taking multiple photos, such as in bracketing, I just want to take one or maybe two photos which are reversible and, of course, encode the information of the world in my image. And the image may not be a single 2D image. It could be a light field camera where you have 25 images in a single snapshot, or you could have a time of flight camera, where you really can't call it a photo, because it's actually measuring the amount of time it takes for light to travel back and forth in a given direction, and so on. And it would be very useful for scene analysis. So a lot of our course will actually be discussing this part. And the last part is essence photography, which is really to see if we can go beyond these low-level features, pixels, and mid-level features, such as motion and foreground and background and symmetry and so on, to a higher-level understanding-- so that it's not just mimicking the human eye, but it's doing possibly things like this, telling me what's popular, for example. And I claim that only when we have computational cameras and computational photography supporting this will we be able to create new visual art forms. So within that, this is a chart that we'll be talking about throughout this class. I won't go into too much detail right now. But we have certain goals: the mid-level features, low-level features, high-level features, and so on. And we have certain tools. We can capture the raw image. We can capture the incident angle and spectrum, such as UV and thermal IR. We can capture high-dimensional reflectance fields. We have non-visual data such as GPS and identity, metadata, image priors, and so on. And with that we're going to explore this whole space of camera and photography. Of course, high dynamic range and so on is right here. We're not going to spend too much time on that. But we're going to think about how we can insert a virtual object, how you can take a photo and relight it, change the lighting in the scene, and so on. Material editing, for example, from a single photo-- Ted Adelson is here, and he's done a lot of work in this space. And if you look at this chart, you realize that even human vision is not at the top right of this diagram. With human vision, I cannot look around the corner. I cannot see what's inside the body. I cannot tell you what's behind the curtain. I cannot tell you, when I'm in Rome, what's interesting, and so on. So you really want to create this augmented human experience by using these tools and the mechanisms that we have available. So we'll be coming back to this. Just keep this in mind. And a lot of this is actually available in the book that Jack Tumblin and I have published. The PDF of this book will be available throughout the course. And the real book should be out any time. All right. So just a couple more slides, and we'll take a break. Let me skip over this one for now, some of my favorite examples. All right. So we're also going to spend a lot of time thinking about cameras that, again, show what's not seen. And I realized that Mathias is here. He is the founder of Redshift, a thermal imaging company. And you want to say a couple of words? MATHIAS OMOTOLA: Sure. So this was my second startup, and the idea was to build an ultra-low-cost thermal imaging camera and do it using a standard cell phone camera as, actually, the image sensor, and using a little thermally tunable film as a translator between infrared and visible. RAMESH RASKAR: Exactly. And the goal is to dramatically bring down the cost of a thermal camera. So let's bring in our next toy. Will you need some time to set up? GUEST SPEAKER: Oh, yes, I need-- RAMESH RASKAR: OK, yeah, we'll set this up in the meantime-- which will very quickly take us towards the 60K. And you can do some amazing things. So this is not from Mathias's company. Is that a maker or just a-- do they actually make cameras, or are they just integrators? And they can use it in sports analysis. This is in the game of cricket, where you want to figure out if the ball was hitting the bat or the pad, and it's a very difficult call for the referee. But if they have a thermal imaging camera, then the ball here-- if it hits the bat, it will leave a hot spot on the bat. But if it hits the pad, it will leave a hot spot on the pad. So just by looking at an image a couple of seconds after the ball has left the player, you can figure out if the ball hit the bat or the pad. And this is major-- people get really angry, and all the sports analysts will be writing about the referee the next day in the press. This really saves their life. But the goal for Mathias and other companies is to make it available possibly at the same cost as your traditional camera.
I mean, if you think about the cost of a digital SLR, it was $30,000. And now it's about $500. It's just silicon. It's not that expensive. If Canon or Nokia decides to have a thermal camera in every cell phone, they will be available for very low cost, extremely low cost. What is your guess? What would be the cost in five years? AUDIENCE: Five years? It's still $500. RAMESH RASKAR: $500. That's not bad, right? And as you'll see later in the class, and as you'll see in this demo-- we can do this in the break as well. So take your time. It can do some amazing things. All right. So that was computational camera and computational photography. And in this class we're going to look at both aspects. This is how we're going to do it. We have two numbers: 131 for undergraduates and 531 for graduates. The main difference is we have four assignments for the graduate version and three assignments for the undergraduate version. And we have a midterm exam where there will be fewer questions for the undergraduate version. Other than that, the rest of the course is very similar. So in the assignments we're playing hands-on with optics and illumination and sensors and other elements. And we'll have all these toys available for you. We'll have projectors, different light sources, lasers, and things like that to play with. We will also have a best project award. Last year there was exactly one undergraduate student, and he won the best project award-- pretty impressive, pretty impressive. He did an amazing job, because last year we did not have an undergraduate version. It was only a graduate version. The midterm exam will be early November. A big component of the class, and of your work, will be a final project, which should be novel as well as cool. Unlike a lot of other fields, it is possible to come up with an idea that nobody else has even thought about in this field. So you could come up with ideas that are not just incremental but that could be game-changing. And I'll tell you later that three final projects from last year led to SIGGRAPH or ICCP papers. And two of the projects are becoming multi-hundred-thousand-dollar projects. So it is possible, in this class, to come up with ideas that are game-changing. If you are taking the class for credit, you'll also take class notes for the lectures. We'll have plenty of guest talks in class and online discussion. In this class, most of the material is on slides. But starting next week, about half the material will be on slides, and the other half will be just on whiteboards and demos and so on. So if you're looking at a slide, you'll get an idea of the type of material we're covering; but what I realized last year was that most of the discussion was actually not captured in the slides. And if you're a listener-- and I know a lot of you are going to be listeners in this class, which is perfectly fine-- what I would like you to do is participate in discussion. Bring in a new viewpoint. I want to learn from you. I'm glad Mathias is here, for example. He's the world's expert on thermal imaging. So I want all of you-- those who are taking it, those who are just listening in-- to contribute to the discussion and give a new viewpoint. I will not be offended at all if you jump in and give a new perspective or a new reference about what we are discussing. And if you are a graduate student or a postdoc, then I would also like you to spend some time sharing with us one short presentation, maybe a cool idea or some new work you're doing.
But that's what I would expect if you are not taking this class for credit. In terms of the credit breakdown, we have four or three assignments, depending on which version you are taking-- 40%; final project, 30%; midterm; and class participation. That includes discussions, often online, as well as taking the notes. Prerequisites-- let's see if I have-- I'll come back to that, another slide. The emphasis for this class is really on fundamental techniques in imaging. Yeah, it's coming up. It's going to be fun. And in class, as well as in homework, the emphasis is on techniques. And they include all these keywords: signal processing, applied optics, graphics, vision, online photo collections, statistical techniques, electronics, visual arts, and so on. And so in that sense it's not a discussion class. We're going to learn about techniques. All right, there is entertainment here. It's very addictive to play with cameras. OK. So if you can just focus here for a couple of minutes. There are three areas we want to focus on. Just keep this in mind and know this. We're going to focus on photography. We're going to focus on active techniques, real-time techniques. And we're going to focus on scientific imaging. So within photography, we're going to think in higher dimensions. We're not going to think about just HDR or focus stacks and so on. We're going to think about high-dimensional imaging: light fields, thermal imaging, range cameras, and so on. In active computer vision, we're going to think about HCI applications, robotics, tracking and segmentation, and how we can change the game by using feature-revealing cameras. And a lot of concepts from scientific imaging, such as compressive sensing, wavefront coding, deconvolution, tomography, point spread functions, and so on. At first glance, these three areas might look very distinct and have very different techniques. But fortunately they all use very similar principles. And what we realized over the course is that the fusion of these dissimilar ideas-- I mean, tomography and wavefront coding seem very far away from traditional photography or HCI. But you'll realize that many of the problems you may be encountering, and many of the visual arts that you might be interested in, could be impacted by some of these new techniques. So, in terms of the prerequisites: the last time I taught this course, there was a request to be supportive of students with different backgrounds. So this is what we're going to try. We're going to try two tracks, a software-only track and a software-plus-hardware track. The software-only track is for those of you who, just because of your interests, want to do things in a particular way-- and you may be able to use some GUI-based software. In the software-hardware track, you will be doing a lot of programming-- OpenCV, MATLAB, C++, Flash, Java, whatever you like. It doesn't really matter to me. But you'll be doing a lot of programming. And those of you who might be thinking about the software-only track and might be trying to use a GUI-- actually, it will be a lot of work also. And those of you who use Photoshop know very well that if you want to do simple edits to a photo, you take an hour, two hours, sometimes six hours. And you'll realize that sometimes, by writing a small program, you can do that task much faster-- but it's your call. What is helpful but not absolutely necessary is some knowledge of linear algebra, signal processing, image processing. But what's critical is that you should be able to think in 3D.
And this is a skill that's absolutely necessary if you're going to think in higher dimensions. 3D is just a beginning; we'll be thinking about 4D, 6D, 8D, and so on. Now, we're trying to keep the math to the basic essentials. I like to use a lot of diagrams and visual analogies to explain the concepts. But when you're doing your own assignments, you will have to go back to the equations. I might just flash the equation, but I'll try to explain it again by drawing on the board. So it's possible for you to go through the whole class without actually writing down a lot of equations. At the same time, many of these concepts are complex, and they will arrive at a very fast pace. And we'll be discussing a lot of concepts. Now, if you're the kind of person who can just sit back, watch a presentation, and grasp a concept, where you ask questions and it's interactive, this is an ideal class. On the other hand, if you're the kind of person who likes to really look at the math and see what the relationship between the variables is and so on, that's fine as well. So you need to have one of those two skills to be able to do well in this class. Now, the assignments for this class are also going to be a little bit different. As I said, during the class you'll be listening to a lot of advanced and complex concepts. But the assignments are structured in such a way that they have an increasing level of sophistication. So you can do pretty well without much background for up to the first 60% or 70% of the assignment. But to do the last 10% or 20%, you will need a good background in some of these areas. So that's another way of thinking about how you might be able to take this class, even if you don't have a very strong mathematical, linear algebra, or signal processing background. You can do pretty well up to 60% or 70% of it. And again, in the spirit of supporting students with varying backgrounds, what I will do is normalize your performance on those homeworks based on what background you have. So if I know that you don't have a linear algebra background, and you're taking this class, I will think that if you reach 70% of that assignment you have done a pretty good job. So I'll normalize it again by how much you know of the knowledge required for that particular assignment. So you can really pick your level of how you want to do it. And we did this last time, and it worked out pretty well. So we'll see if we can do the same thing. And those of you who are taking the 131 class as undergraduates, please come and talk to me. And we can similarly figure out, based on the classes you have already taken, how we can structure those assignments. For all of the three or four assignments, there's always an option. There are two assignments, and you can pick any one of them. So that gives you, again, the ability to pick an assignment that's appropriate for your level of understanding. All right, so let's see. Any programming environment is fine. One more note: send me an email, [email protected], to put you on the mailing list. And we also have a sign-up sheet that I'll pass around in the break. And, remember, our class runs into the happy hour on Friday. So after the class, we'll all go over to the marriage house and continue our discussion over beer, if you are over 18. No, 21. Is it 21? So that's at 4:30. This is a rough outline of how we're going to proceed. We finish on December 4th. It's funny, because classes end on December 10th. 
And our last class would have been December 11th. So I think having a Friday class with no final exam means you finish really, really early. So the semester ends on December 4th for us, which means final projects. Unfortunately, the week before that is Thanksgiving. So many of you will be thanking me for the delay and the procrastination that led you to work through the Thanksgiving break. Yeah, there's just a list here. All right. So one of the things we will not cover, or will not cover in detail, is the art and aesthetics of photography, the 4343. For software image manipulation, there's a great course by Fredo Durand on digital computational photography which is completely focused on software. So these two classes, his class and my class, are actually very good complements of each other, because in this class the emphasis is more on hardware and optics and sensors and, of course, software. But the emphasis is on the hands-on elements. And I believe Fredo's not going to teach the class in spring. But Will Freeman, who's also a great instructor, is going to teach the class in spring. That's what I hear. I haven't confirmed it. And there are excellent classes in computer vision. I think, Ted, you are teaching a class on scene perception. TED: Shape. RAMESH RASKAR: On shape perception, that's on Mondays, just Mondays. TED: No, we don't know when. RAMESH RASKAR: All right. TED: First class is Monday, [INAUDIBLE].. RAMESH RASKAR: OK, but I already sent an email about Ted's class on the mailing list. So once I hear from you, I'll send it out again. There are a couple of great optics classes. This class is not about optics. You'll be learning a lot of concepts, but the emphasis is not on optics. The emphasis is on imaging and photography. We will not learn about Photoshop. Actually, I don't even know Photoshop that well. So if you're going to do your assignments in Photoshop, please come and tell me. I'll look forward to it. And we won't talk about anything that's included in the instruction manual of the camera. So you will not learn about how to set the exposure and how to change the aperture. I'm happy to do a separate crash course one of the evenings, and we can all sit together and do a course on that. And many people here are super experts with all kinds of cameras, and they'll be happy to do that as well. So there are a lot of resources available if you have those questions. But this class is not really about that. So, as I said, there are a few other classes: I teach a class that's more discussion oriented in spring, the computational photography class, and the optics classes. And Professor Han-- I don't know if he's teaching it in spring. Yeah, I don't know. All right, so any questions about the structure of the class and what you expect? Rose, yes. AUDIENCE: Will you be posting the assignments in advance so we can kind of see what sort of work we will be doing and engaging with during class? RAMESH RASKAR: Excellent question. The question was, will I be posting assignments in advance? What's going to happen is, based on the feedback I receive, some of the assignments will change. But if you look at the OCW page from last year for this course-- which is only an indication, by the way, because of course it's changing significantly this year-- you will see the-- let's see, the projects. You'll see the type of assignments that were given out. It has all the details as well. So four assignments, writing, and so on. So you can already get a sense of the type of assignments that were given. And so, again, the options. There are 4A and 4B, for example. 
So you can choose which one you would like to do. Again, the OpenCourseWare page is for last year's course. And we are recording it for this year. But it won't be up until next fall. So there's a long delay before the material appears. And next year we'll be doing something entirely different again. Other questions? All right, so let's go around very briefly. I realize there are quite a few people. So maybe 30 seconds per person, no more than that, just to get a sense of who's here and why you are here. So before we get started. Are you a photographer? All right, almost everybody. How many of you are videographers? You create videos. It's funny-- why is there a distinction between photographers and videographers? Do you use cameras for computer vision? Pretty good. Do you use cameras for real-time processing, like HCI or robotics and so on? OK. Do you have a background in optics or sensors? OK, all right. So we have a pretty good distribution here. So let's start and very quickly go around-- just your name, your department, and why you're here, just 30 seconds. I'll take this opportunity to say that Professor Oliveira is the top scientist in graphics and vision from Brazil, and he'll be in our group. And we also have Professor Mukaigawa, who is a famous researcher from Osaka University. Both of them, as well as Ankit Mohan, who is a scientist-- those three and some additional people will be the mentors for this class. So if you have any questions, you can come to me, or if you want to brainstorm about projects or ideas or have questions, they will be available as well during the week. Oh, this is amazing, just an amazing set of people: chemistry, arts, communication, vision, HCI. It's going to be fun. Night vision goggles. Night vision goggles. That would have taken us over 100k. All right. So let me just quickly make a couple of announcements, and we'll break. So there's a great opportunity for those of you who are taking this class for credit. There's a conference on computational photography, and this is its second edition. Fredo Durand, whom some of you may know from CSAIL, along with Marc Levoy and Rick Szeliski from Microsoft, did the first one, which was this year in April. And myself, [INAUDIBLE] from Toronto, and Rafael Piestun from Colorado, who is one of the world's experts in wavefront coding-- the three of us are doing the second edition of this conference. And it will be held here in the new building in the last week of March. And the papers are due in November. So if you're doing well in this class, and one of the project ideas is interesting, you could even attempt to submit a paper for this particular conference, or at least a demo or a poster or something like that. So it's right here on campus. And most likely we'll have additional opportunities for those of you not in the peer-review track to show off your work as well. So this is really catching on. A lot of big companies like Nokia and Samsung and Canon and HP have all started big research groups in computational photography and computational cameras. So there's a lot of interest. Another thing we're going to do in this class is-- as I said, it is possible to come up with completely new ideas in this field. It's such a new field because of the intersection of lots of interesting domains. So we're going to learn how to come up with new ideas. And we're going to learn how to write a good paper. We're not going to do it exactly in the class. 
But toward the end of the semester, during discussions with me and your mentors, we will help you actually write a good quality paper. And writing a good quality paper actually has really simple methodologies, which is amazing. We never learn about it in a formal class. So we'll try to do that in this class, because the final project is really important. So just deciding if an idea is worth pursuing is half the battle. And we'll help you do that. You can actually just use Heilmeier's rules, which the military uses to decide whether they should pursue a project. It raises simple questions. If you answer those questions, you can very quickly make a decision whether you should pursue that project. And as I said, last year was extremely, extremely beautiful. There were papers that became SIGGRAPH submissions. Matt Hirsch, who took this class last year, went to SIGGRAPH and also entered the student research competition this year based on the class project he did. And there are two major research teams that are coming out, one in mechanical engineering: some students are starting a multi-hundred-thousand-dollar project based on the class project here, and so on. So we really want you to focus on novel ideas that are cool and publishable. Of course, for those of you who have a design background or an art background, we will also try to think about how your work can be given the right exposure. And when it comes to technical publication, there are some simple rules that you can follow to write a reasonable quality paper. So we'll help you do that. So let me stop there with this image to make you go a little bit dizzy. And then we'll reconvene in about 10 minutes. And in the second half, we will have a fast-forward preview of the whole class. So we'll spend about 2 to 3 minutes on each of the projects. We have 12 classes. We'll spend about 5 minutes on each class, and we will see all the pieces. So in the break, we'll also have the IR camera; you can try that. And Dan [? Sarkis ?] suggested for all of us to build this-- what do you call it? A pinhole camera from just a piece of paper. So you can just take a piece of paper, cut it up, and build a pinhole camera. So if you want to do some projects in that space-- AUDIENCE: Hi. [SIDE CONVERSATION]
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_1_Introduction_and_fastforward_preview_of_all_topics_Part_2.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: So let's start with a thermal IR camera. And as you can see, it has very strange properties. Who are we looking at here? That's you? AUDIENCE: Oh, yeah. RAMESH RASKAR: [LAUGHS] And then, as he turns on his lighter, it triggers the automatic gain mode, I guess. And that's why the image gets very dark. And who's in the picture, anyway? Oh, that's Seth. Excellent. Thought I could recognize you in thermal IR. And if you just put a cold finger on your cheek, you're going to see it. And take it off. Yeah, you can see the mark. Things that are completely transparent in visible light are actually completely opaque. And the focus is-- yeah. That's completely opaque. Actually, if you just put your glasses down, this-- might as well. You won't see, but others will see. AUDIENCE: Yeah, I will see. RAMESH RASKAR: [LAUGHS] So they're completely opaque in-- AUDIENCE: These are actually glass lenses, not plastic ones. There might be some difference. RAMESH RASKAR: OK. AUDIENCE: I don't know. But-- RAMESH RASKAR: So it's really looking at about 8,000 nanometers to 12,000 nanometers, which fortunately is also the wavelength at which the black body radiation of the human body peaks. So you can do some amazing things with this. Chrysler and BMW are thinking about-- or maybe have already put them in automobiles so that you can see things very far away. So one benefit of this, as you can see, is that even if you turn off the room light, the intensity will not change. So if somebody must take the-- it's completely independent of the room lighting, because it's not looking at the visible spectrum that's being emitted by these tube lights. It's a function of just thermal radiation. So at night when you're driving, if there are any animals, or even intruders in your backyard, you can detect them with thermal IR. AUDIENCE: There's a lot of reflection on the table. RAMESH RASKAR: Yes. Yes, because remember, this is a very large wavelength. So at that wavelength, the table's roughness disappears and it becomes highly reflective. Is this-- let's now try to focus on things that count. All the people with glasses look very cool. [LAUGHTER] AUDIENCE: What is the lens made out of? RAMESH RASKAR: So that's a good question, because typically, glass will not transmit thermal IR. AUDIENCE: Right, [INAUDIBLE], yeah. RAMESH RASKAR: Yeah. That's a good demonstration. So a lot of these lenses are made of germanium. And also certain types of plastic can be used for this, but the image quality is not that great. Germanium is very common. AUDIENCE: Does this have different focal characteristics because of such long wavelengths? RAMESH RASKAR: It has 8 to 12-- just in terms of the ratios, it's like 300 to 400. It's the same for 8 microns to 12 microns. So it should be able to focus in the majority of that band. But because you need a larger lens, the depth of field is pretty narrow. AUDIENCE: Yeah. RAMESH RASKAR: So with this, you can build a glass detector. If I look outside the window, for example, it's completely opaque. Can you wave, the last person? Yeah. So it's completely opaque, the glass. So you can build a glass detector. You can take a photo with a visible, regular camera. 
You can take a photo of this. And if it's opaque in one but transparent in the other, then that's glass. So you can do all kinds of interesting things. And again, if Mathias is successful and other companies are successful, not in five years but in six years, it might be in your cell phone. [INAUDIBLE] All right. So let's continue here. Has anybody seen this snake illusion before? Yeah. Does everybody see rotating snakes? AUDIENCE: No. RAMESH RASKAR: No? AUDIENCE: [INAUDIBLE] a head-on [INAUDIBLE].. AUDIENCE: Yeah, it doesn't work [INAUDIBLE].. RAMESH RASKAR: At an angle? AUDIENCE: Mm-hmm. AUDIENCE: Yeah. RAMESH RASKAR: All right. Maybe I can-- how many of you can see it? How many can see and how many cannot see a rotating pattern? So OK. I've heard this before. Because this is different from those really annoying random dot stereograms, the Magic Eye, that a lot of people have trouble seeing. This is based on a different principle. It's exploiting the decay in your retina-- how long it takes for your retinal circuits to basically do center-surround subtractions. So it's a very different principle. So those of you who cannot perceive motion in this one-- do you also have problems with Magic Eye? AUDIENCE: Yeah, I can't see anything. I can't see those Magic Eye ones. RAMESH RASKAR: Yeah, I think Magic Eye is probably-- how many people have problems with Magic Eye? Yeah, I think that's a lot. So it looks like the two effects don't have any relationship with each other. So how can I take a photo of a scene that has motion, print it on a piece of paper, and create an illusion of motion? That's the first problem for your project. AUDIENCE: You could print that out. RAMESH RASKAR: Yeah, but that's only for this particular scene. I want to take a photo of a car moving, or somebody running. And I just want to take a couple of, maybe five, frames of that video and create one photo out of that so that it looks like the person is constantly moving. I think we can put on the lights again. Let me see this. So all right. Here was that question: how can I look around the corner? This is actually a huge project in our group-- how can you look around the corner? And the way we do it is, we actually use some really cheap devices-- a so-called transient imaging camera, where we use the impulse response of a scene. We transmit a signal. Maybe I'll use that one. We transmit a very tiny pulse, which reflects off of the door, bounces around inside the scene, reflects back from the door, and is captured back again at the camera. And by analyzing these multipath reflections, we can figure out what's inside the room by just looking at the door. So the devices are extremely cheap-- we need a femtosecond laser; that costs about $200,000. And we need a photodetector; that costs about $3,000. But we find some cheaper versions. And then, we also need a 10-gigahertz scope. That's about $50,000. So after about a quarter million dollars, you can look at what's around the corner. So maybe not in 5 years, but maybe in 10 years, or 20 years, you will have devices which will allow you to look around the corner. And right now, in our lab, you can do it. And this is going to completely change the way you think about photography, because line of sight is almost a fundamental assumption. We think it's almost-- a lot of us think it's one of the laws of physics that we can only see things that are within the line of sight. But we are not violating any laws of physics. 
If we're violating the laws, you're welcome to report us. [LAUGHTER] But this is possible. So throughout this class, what you'll see is that the laws that you take for granted, the laws that are shackling the way you think about visual capture and visual displays, are just there because somebody taught you that. But you can challenge all those assumptions with modern tools, whether they're sensors, or optics, or modern computational methods. AUDIENCE: What's the point of the femtosecond laser if it has such a short coherence length? You can't really be using the laser. RAMESH RASKAR: So the reason why you need an extremely short duration laser is that if you just-- imagine you build a camera. So those cameras, the time of flight camera that Jay was showing-- it pulses at about 50 megahertz. So that's a 20-nanosecond repetition. And light travels how much in one nanosecond? AUDIENCE: 1 millimeter per microsecond. RAMESH RASKAR: Yeah, but that's too complicated here. How much in one nanosecond? AUDIENCE: About a micron. RAMESH RASKAR: No, much more. AUDIENCE: An inch. RAMESH RASKAR: More than that. AUDIENCE: 100 meters. AUDIENCE: A foot? RAMESH RASKAR: A foot. AUDIENCE: I guess-- RAMESH RASKAR: That's right. That's right. AUDIENCE: Sorry, yeah. RAMESH RASKAR: So a very simple thing to remember: light travels one foot in one nanosecond. And what about sound at room temperature? How long does it take for sound to travel one foot? You know all the numbers. It's 330 meters per second for-- [INTERPOSING VOICES] RAMESH RASKAR: --sound, and say 3 times 10 to the 8 meters per second for light. But that's too complicated to think about. We want some simple rules of thumb. So-- AUDIENCE: 10 milliseconds. RAMESH RASKAR: 10 milliseconds for one foot? It's one millisecond, right? So very easy to remember. Light travels one foot in one nanosecond. Sound travels one foot in one millisecond. Unfortunately, there's nothing that travels one foot in one microsecond. If somebody can come up with a new physical propagation channel that travels one foot in one microsecond, rather than in milli or nano, then you'll see a completely new range of applications. So you have the electromagnetic spectrum-- that's one nanosecond per foot-- and sound, one millisecond per foot. So sound is too slow, and light is too fast, for almost everything we want to do. And so here, we want something that's even faster than traditional light propagation. So in this room, if I just send a beam of light, a very narrow pulse, by the time it goes to that wall and comes all the way back to me-- let's say this is about 20 feet-- it's going to take about 40 nanoseconds to come back. On the other hand, if I just want to see where somebody is within a couple of feet of that wall, then I need to start measuring in picoseconds-- not 10 to the minus 9 seconds, but 10 to the minus 12 seconds. And that's why you need to do this extremely fast to be able to do any-- so a traditional time of flight camera will just look at this door. And that's it. There's nothing more you can do. But if you want to look at the reflections, then you need to be able to resolve time at a much higher temporal resolution. So femtosecond lasers are not something-- they're not exotic. They're used in OCT. They're used in two-photon microscopy. And they're used in a lot of other applications, but still not consumer applications. They're used in medical imaging. 
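To make the one-foot-per-nanosecond rule of thumb concrete, here is a tiny numeric sketch (mine, not from the lecture) of the time-of-flight arithmetic that motivates femtosecond lasers and picosecond detectors:

```python
# Tiny numeric sketch (not from the lecture) of the time-of-flight arithmetic:
# light travels about one foot per nanosecond.
C_FEET_PER_SEC = 9.836e8           # speed of light (~3e8 m/s) expressed in feet

def round_trip_time_ns(distance_feet: float) -> float:
    """Time for a light pulse to reach a wall and come back, in nanoseconds."""
    return 2.0 * distance_feet / C_FEET_PER_SEC * 1e9

# A wall 20 feet away: the pulse returns after about 40 ns.
print(round_trip_time_ns(20.0))                       # ~40.7 ns

# Localizing something within a couple of feet of that wall means resolving
# a few extra feet of path length, i.e. a few nanoseconds -- and finer detail
# quickly pushes you into picoseconds, hence the exotic hardware.
extra_path_feet = 2.0
print(extra_path_feet / C_FEET_PER_SEC * 1e12, "ps")  # ~2000 ps
```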
And once these femtosecond lasers become solid state and easy to carry, we can do that. A lot of LiDAR also happens not in femtosecond, but in nanosecond ranges with very high power. So if you can look around the corner like this, what about looking around a beautiful artifact like this one? If I have a bottle-- where's my bottle? When I, as a human, look at this bottle, I look around it. And I create a mental representation of what this looks like. But if I capture a photo, it's going to be only from a [INAUDIBLE] view. It's trying to mimic my perspective and so on. But my mental representation is actually something like that, right? So how can we build a camera that takes an object and creates rollout imagery? Now, this particular object is straightforward. I can just put it on a flatbed scanner. And I can just roll it. But some other objects are not so easy. This object, for example, which is not completely cylindrical, doesn't have a constant radius around its axis of rotation. So if I just roll it on a flatbed scanner, I will not get that. So maybe I need a special camera, or I can use my existing camera and use some interesting tricks, computational tricks, to create rollout imagery. Another such problem. And I'm sure a camera company would love to have this feature: you have all the boring modes-- Av, Tv, panorama, movie-- and then, you have rollout mode. So that would be fun. And maybe you can do it with an ordinary camera. Or maybe you can do it with a femtosecond laser, if you have a quarter million dollars. All right. So let's do a very fast forward preview of the rest of the class. And here, I'm mostly going to talk about what's the input and what's the output. I may not go into the detail of exactly how this works, because again, these are teasers of what's coming in the class. And what I would like you to think about during this preview is how this applies to some problem you may already be working on, or what are some parallels with things you already know. And again, most of these techniques will be about changing the rules of the game. If you have a project where you're tracking your fingers with a cheap webcam and it's not working because the light changes or a person with a different skin color walks into the scene, there are solutions here. If you're worried about how to track crowds, there are solutions here. You want to see what's behind a glass that's murky and diffuse? The solution's here, and so on. And then, of course, there are really interesting devices that you could use, and new forms of photography. So here's a really simple example of how we can get started. So Paul [INAUDIBLE] in '92-- simple idea. Take an object, turn on a flashlight on the left. Turn on a flashlight on the right. And then, you can form this image by combining these two. How will you do it? This looks like a blue light, and this looks like a red light. Just from these two photos, you want to create this. Yeah? AUDIENCE: Just mix the channels. RAMESH RASKAR: Exactly. Just take the blue channel from here and the red channel from here. That's it. And it creates this beautiful lighting artifact. So that's going to be, actually, assignment number 1-- just a warm-up assignment. All you have to do is take an object, and take two or three photos by moving your light source. And then, mix and match the color channels to create very beautiful color artifacts. And this will help you to get your whole pipeline for a sensor going. You will have your own camera. 
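Since this warm-up maps directly onto a few lines of code, here is a minimal sketch of the channel-mixing relighting idea, assuming two pixel-aligned photos shot from a tripod with the light moved between exposures; the file names are hypothetical placeholders, not part of the actual assignment handout:

```python
# Minimal sketch of the channel-mixing warm-up: combine two photos of a
# static scene, one lit from the left (blue-ish light), one from the right
# (red-ish light). File names are hypothetical placeholders.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left_light.jpg").convert("RGB"))
right = np.asarray(Image.open("right_light.jpg").convert("RGB"))
assert left.shape == right.shape, "photos must be pixel-aligned (use a tripod)"

mixed = left.copy()
mixed[..., 0] = right[..., 0]      # red channel from the right-lit photo
# channels 1 and 2 (green, blue) stay from the left-lit photo

Image.fromarray(mixed).save("mixed_lighting.jpg")
```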
You're welcome to use your regular camera, even a cell phone camera. But ideally, you should start using a camera that has more manual controls. And it will get you set up with your MATLAB, or Java, or Flash, whatever you want to use-- C++, OpenCV. There are a lot of easy ways to do this. But I would like you to set up your environment so that these types of operations are very easy to do. And that will be your assignment number 1. It should be very straightforward. Of course, this is what you would do at home. But if you have a couple of million dollars, then this is how you would do it in Hollywood. So this is a project from Paul [INAUDIBLE] and his group-- a really excellent set of work, where they're building different light stages. And it's just turning on one light at a time, or moving it manually. They have a dome with about 150 lights. And then, they use a high speed camera here, and turn on one light at a time. So they can photograph this actress under 150 lighting conditions in 150 frames very quickly. And then, it can start from the beginning, and so on. That's one way to do it. The other way to do that is-- let's say all you want to do is insert this actress in a scene that may have been shot somewhere else. So she's in LA. And you want to insert her on top of a photo that was actually taken in Milan. Now, if you just take the photo and superimpose it, it looks very fake, because the lighting doesn't match up. So the trick they use is, some guy puts a shiny sphere in Milan and takes an environment map of that courtyard. Then, you feed this image onto these lights. So if the left corner is reddish, the light on the left side of this dome is reddish; if it's yellowish here, the same; and so on. So over all of this 4 pi, you turn on the lights correspondingly. And now, she's bathed in light as if she were in that courtyard in Milan. And now, if you take her photo, cut it out, and superimpose it on that background, it will look more realistic. And they have done videos where they took these shiny spheres-- and Christmas is coming up, so you can pick up your shiny spheres-- and you just take the shiny sphere and move through the courtyard with it. So you're constantly capturing the environment map as you move along. And in the scene, she just stands in one place. But the dome is lit up by the environment map that was captured from these shiny spheres. So in a video sequence, it appears as though she's walking through this environment. At least, she's lit up as if she's walking through this environment. So these are some of the tricks that are being used currently in major productions. The Matrix-- all the movies that came out in the last 10 years or so are using this particular mechanism to create mattes that have correct lighting, matched lighting. And if you don't want to spend $2 million, back to something really cheap-- to create those silhouettes and so on, you can use a Multiflash camera. So here are some cameras that you can buy. This is, I believe, a little more. Yeah, this is a little more. So I think it's about $30. And what it does is-- let me see if it's-- did the lights go off? So when you release the shutter, it-- but I was too cheap to put the film inside of that manually. I'll put that on. I think we're out of battery here, unfortunately. But anyway, when you release the shutter, it takes four photos by exposing one pinhole at a time. And at the same time, because it cannot recharge the flash that quickly, the simple solution was to actually put in four different flashes. So this shutter goes off. 
Then, this light goes off. Then, this shutter goes off. This light goes off, and so on. So it takes four pictures. Now, instead of putting the lights all in one place, if you place them around the camera, you can do something interesting. When the flash is to the left, you know that we get very annoying slivers of shadow in an ordinary photo. Now, if you intentionally place the flash-- so you can see it here. So there are all these slivers of shadow. And that is continuous. And you probably see it in your own photographs. If you place the flash intentionally to the right, then the slivers move to the left. If you put the flash at the top, the shadow slivers are at the bottom, and so on. So by taking these four pictures and analyzing those tiny slivers of shadow, you can figure out where the depth discontinuities are, where the foreground is separated from the background-- not just the whole person and the wall behind them, but also any internal changes. See, for my hand here, it will create a boundary between my hand and my body no matter how close or how far I am from that. So by doing that, you can estimate all the shape contours. And this is the edge map you would get if you had an ordinary camera. And this is the edge map you get with a Multiflash camera. So now, if you have an application where you want to track a hand or track a gesture, instead of taking a standard 2D image, if you take a Multiflash image, you get very clean contours. And from those contours, you can build an HCI application that will perform very well, even in strange ambient light. And it's independent of the color of the foreground object. It only relies on the shadow. So again, if you want to track your hand, you're not dependent on the skin color anymore. So you can play these tricks to overcome the limitations of a traditional 2D camera. Let me skip ahead a little bit here. Another assignment we'll be looking at is this virtual optical bench. It's a very nice toy that Andrew Adams at Stanford put together-- a Flash-based application where you can insert lenses, and occluders, and mirrors, and ray emitters, and so on. And it can basically do a very quick setup, a very quick optical design of a setup. And what we'll do is, we'll start with this. One option is to start with his code, his source code, and modify it and insert a few more optical elements-- maybe a prism, maybe a grating, and so on, OK? This will be one option. And as I said, you'll have multiple options for each assignment. So thinking a little bit more about lenses, one concept that we'll come across quite a bit in this class is the light field. And now, this particular camera that we saw-- the one that Rod was showing us-- is actually a light field camera. But that's made up of an array of cameras-- physical cameras. And what we're going to do instead is, we're going to take an ordinary camera and convert that into an array of virtual cameras. So this is an array of physical cameras. But it's expensive. And so instead we'll take an ordinary camera and convert it into an array of virtual cameras. So this is how it works. In a traditional camera, if the object is in sharp focus, the rays along each of these directions converge on a single pixel. So you get a very sharp image of the point. But any information about the radiance along each of these directions is completely lost. So you get a 2D image. You have a 3D scene; you get a 2D image. So it's flattened. The world is flat. 
A trick you can do, which was actually invented by Ted Adelson and his student Wang, is to try to capture the radiance along each of these directions. So how can you do that? You just displace the sensor a little bit back. And in front of that, you put a microlens array. This is the same microlens array you use in lenticular displays, those displays that change with viewpoint. So if you put that microlens array in, then as you can see, each of these rays is actually incident on a different pixel. And then, you can capture the variation along each of these directions. Now, why would you care about capturing each of these rays? It turns out that the appearance of the world coming through a lens can be completely described geometrically, completely described by a four-dimensional function, which is this light field. And that's a very powerful concept, because if you do capture this full representation, then-- that's all you could ever capture. Once you have this 4D representation, you can manipulate it in many interesting ways. So Ted Adelson, and [INAUDIBLE] at Stanford, who also has a company now called Refocus Imaging, are building these types of cameras with [INAUDIBLE].. Now, how did the Stanford team do it? They started with a medium format camera with a digital back. And on the digital back, they put this microlens array, where the pitch of the microlens is 125 microns. The pixels are about nine microns. So under each square tile, they have about 14 by 14 pixels. And so again, going back here, under each microlens, they have a 14 by 14 array of pixels. So what they're going to do now is, they're going to take the 16-megapixel detector, 4,000 by 4,000 pixels, and have a 292 by 292 pixel array-- sorry, microlens array. Under each microlens, 14 by 14 pixels. So at the end, they have this 16-megapixel image, which after reshaping gives you 292 by 292 pixel images, one pixel from under each microlens, OK? So they have given up a lot of resolution, from 16 megapixels right down to 292 by 292, OK? But with that, we can do some amazing things. We can do digital refocus completely in software. OK? So you have given up a lot of resolution. But now, you have complete control over where you can focus. And as you can imagine-- this is the same question you asked earlier-- from this I can also estimate depth. Because depending on when things come into focus, I can assign a depth to each pixel. So suddenly, from an ordinary 2D sensor, I have a camera which has how many virtual cameras? Here, we have five by five. How many virtual cameras here? AUDIENCE: 200-- AUDIENCE: 14 by-- RAMESH RASKAR: 14 by 14 cameras. What is that? 196. AUDIENCE: But then, each camera is 292 by 292 and-- RAMESH RASKAR: Exactly. It's very low resolution. And so the first complaint is, yes, it gives you all this power, but very low resolution. And the argument nowadays against that is, whether you have a 6-megapixel camera or a 16-megapixel camera, it doesn't really matter. We have reached-- we have diminishing returns after six megapixels. So why not use those pixels for capturing some other information? So that's what makes it extremely powerful. And refocusing is only one. Depth sensing is another. Interaction, dealing with aberrations in the scene-- a lot of interesting things you can do. And again, this is starting with a static 16-megapixel camera. So it's not video rate. But this one is video rate, although it's only 25 virtual cameras. 
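As a rough illustration of how such a plenoptic capture is turned into virtual cameras, here is a sketch that assumes an idealized sensor-- each microlens covering exactly a 14-by-14 block of pixels on a 292-by-292 microlens grid, the numbers quoted above-- and ignores the calibration, alignment, and vignetting issues a real camera has:

```python
# Sketch of plenoptic decoding under idealized assumptions: each microlens
# covers exactly a 14x14 pixel block, and the microlens grid is 292x292.
import numpy as np

N_U, N_LENS = 14, 292
raw = np.random.rand(N_LENS * N_U, N_LENS * N_U)   # stand-in for the raw capture

# Reshape into a 4D light field L[u, v, s, t]:
#   (u, v) = pixel position under a microlens (viewing direction),
#   (s, t) = which microlens (spatial position).
L = raw.reshape(N_LENS, N_U, N_LENS, N_U).transpose(1, 3, 0, 2)

# One sub-aperture image = one virtual camera: fix (u, v), keep all (s, t).
center_view = L[N_U // 2, N_U // 2]                # a 292x292 image

def refocus(L, alpha):
    """Digital refocus by shift-and-add: shift each view in proportion to its
    offset from the center view and average; alpha picks the focal plane."""
    n_u = L.shape[0]
    acc = np.zeros_like(L[0, 0])
    for u in range(n_u):
        for v in range(n_u):
            du = round(alpha * (u - n_u // 2))
            dv = round(alpha * (v - n_u // 2))
            acc += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
    return acc / n_u**2

refocused = refocus(L, alpha=1.0)
```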
AUDIENCE: Does the microlens have any way to read what's in and out in a way so that you can combine the images? RAMESH RASKAR: You could do that. Unfortunately, the precision that you require is micrometer precision. So it's a little bit challenging. But you're right. I'm sure when cameras were designed in the beginning, they had physical apertures that couldn't be changed. And over time, people figured out how to create variable apertures and so on. So creating these dynamic elements is going to be the key for future cameras. And people often ask me, what are the things that you're going to see in the camera next? We already have high dynamic range. Next is color. And we get a lot of journalists asking questions like this-- the future of photography. A lot of popular magazines have a lot of junk in them, including predictions from people who are not credible. So to me, the answer to that question is light fields. That's going to be the next big thing. If you think about the top five features that will appear in a camera, it's exposure, color-- and number 3 is the light field. So we'll see how long it's going to take before we have a full-fledged light field camera as a consumer device. And again, initially, they're going to say, hey, but it's only 292 by 292 pixels. But my guess is that by then, we won't care about the pixel count anymore. So my group was extremely inspired by this work in 2004, 2005. But we thought, this is very challenging to create, because you need a microlens array. So we said, instead of using a microlens array, can I just print a transparency at home and create this light field camera? So that's what we did, and it's called a mask-based light field camera. And this is how you do it. You start with a medium format camera-- in this case, a Mamiya. On the digital back, you just remove the IR filter. And it turns out, there is already some glass on top of the sensor, which is about 1.2 millimeters thick. So you just drop a transparency on top of it, snap back the IR filter, and that's it. For about $2, you can convert a medium format camera into a light field camera. And the design looks something like this. Traditional camera, traditional sensor, but about 1 millimeter in front of it, you have a printed mask. Very cheap. Using that, we were able to convert this 2D camera into something that captures a 4D function. And the concept is actually very similar to radio frequency heterodyning. The reason why you can listen to multiple radio stations on a single antenna is because all those stations are transmitting using either amplitude or frequency modulation. And then, in software in your car, you can tune into any one of those channels and decode any one of those radio stations. And what we're doing here is very similar. We are doing that in the optical domain. And that's why we call it optical heterodyning-- in space, not in time. You have the object; it's forming an image on the sensor. But we're going to take this photographic signal, which is four-dimensional, not two-dimensional, and use this carrier to create a modulated signal. So again, for those of you with a communications background, this analogy will work. And then, in software-- knowing this carrier, we can demodulate that and recover this four-dimensional light field. So it's possible to do it at a very low cost. Sorry, can you hit the lights again? AUDIENCE: This one here? RAMESH RASKAR: Yeah. Maybe the other one. Thanks. 
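Before looking at the results, here is a toy one-dimensional sketch of the heterodyning principle just described-- my own illustration, not the actual decoding code, which operates on 2D sensor images and a 4D light field. A high-frequency carrier shifts signal content into sidebands, and knowing the carrier lets you demodulate it:

```python
# Toy 1D illustration of heterodyning: modulation creates spectral copies at
# +/- the carrier frequency; knowing the carrier lets you demodulate.
import numpy as np

n = 1024
t = np.arange(n)
signal = np.cos(2 * np.pi * 5 * t / n)        # band-limited "scene" content
carrier = np.cos(2 * np.pi * 100 * t / n)     # high-frequency "mask"

modulated = signal * carrier                  # what the sensor would record

spectrum = np.fft.fft(modulated)
peaks = np.argsort(np.abs(spectrum[: n // 2]))[-2:]
print(sorted(peaks))                          # [95, 105] = 100 +/- 5 (sidebands)

# Demodulate: multiply by the known carrier again and low-pass filter.
rec_spec = np.fft.fft(modulated * carrier)
rec_spec[20:-20] = 0                          # crude low-pass filter
recovered = np.fft.ifft(rec_spec).real
print(np.allclose(recovered, 0.5 * signal))   # True, up to a factor of 1/2
```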
And this is a photo that we captured with our mask-based light field camera. If you zoom in, the in-focus parts are actually OK. The out-of-focus part has a really strange encoding because of the high frequency mask. But then, in software-- and this is how the mask looks, the printed mask. And in software, it turns out that by applying the appropriate signal processing framework, we can decode that and recover it. So this is the 2D frequency transform of a traditional photo, where most of the energy is in the [INAUDIBLE]. And after applying this very high frequency mask in the optical path, it actually encodes this information-- in this particular case, in a 9 by 9 grid of windows of the Fourier transform. And this intentional aliasing, or heterodyning, allows you to capture the additional two degrees of freedom. And so the process is very simple. You take this photo, which is about four megapixels. You take its 2D Fourier transform. You reshape that into a 4D function. And then, you take the inverse Fourier transform to create these 81 virtual cameras-- in this case, 9 by 9. This was 5 by 5. This is 9 by 9. And the best way to show that is to see how that photo looks when there is a small parallax between each of the virtual cameras. And again, from these 81 images, you can estimate depth. You can create refocused images and all that-- the same thing that Rod was showing for refocusing, from here to infinity-- yeah, I guess infinitely back and forth. So you can do that with an ordinary camera, with a small change on the [INAUDIBLE]. And all the software is online. And this will not be part of any assignment, but you're welcome to take it up as an assignment or as a project. Maybe we can get the lights back on and start. What are some other things you can do? As I said, in terms of the desired camera features, it's dynamic range, color, and then light field. In terms of color, Ankit Mohan, who is a scientist in the group-- as part of his thesis, he said, let's think about color not as multispectral imagery, but more like an audio synthesizer. If you have an audio system and you're listening to rock, or jazz, or pop, or country music, you tune your bass and treble accordingly. Or maybe you have a profile so that the frequency profile of your synthesizer is appropriate for that particular type of music. The same thing should be possible for photography as well. If you are in the woods, you would like to look mostly at the green channel, to see that the variation of different leaves and all the nature is captured with sufficient variation. Maybe you don't care so much about the other colors there. On the other hand, if you are near an ocean, maybe you mostly care about the blue shades, and so on. So what I would like to do-- and photographers do this all the time. They carry a set of filters with them. And if they are looking at broad sunlight, they put on one type of filter. If they are on a beach, they put on another filter, and so on. What I would like to do is create a knob right on the camera, just like on an audio synthesizer, that says boost red, suppress green-- and create any profile you want. So creating this programmable wavelength profile would be extremely powerful. So Ankit's project basically achieved that, which he calls Agile Spectrum Imaging, or programmable wavelength imaging. It's a very powerful concept. And hopefully, it will appear in cameras as well. You can imagine, this is very useful for medical imaging. When you go to the dentist and they're putting-- what's it called? Enamel? AUDIENCE: What? 
RAMESH RASKAR: To change the color of your tooth? AUDIENCE: The whitening? RAMESH RASKAR: Yeah, the whitening. What is it called? Enamel? AUDIENCE: Bleach. RAMESH RASKAR: Bleach. AUDIENCE: Yeah, yeah. RAMESH RASKAR: And the problem with that is, in the dentist's office, everything looks fine. But you go elsewhere, and somebody takes a flash photo, and for the guy with the fake teeth or bleached teeth-- it looks extremely different. And that's [INAUDIBLE] because the wavelength profile of a flash is very different from the profile of tube lights. So what doctors would like to do is see the neighboring teeth under all different lighting conditions so that they're still matched, for example. And the same is true of-- I did show you the vein viewer, where you want to see the veins. And depending on oxygenated or deoxygenated blood-- the hemoglobin-- you can figure out which veins should be used to poke the needles. And again, that can be looked at in a very narrow wavelength band. In that case, they might know which band. But in different applications, they may not know which wavelength you should be looking at. And so by creating this programmable spectrum camera, you can do, again, very interesting things. So we'll be looking at that. And glare is another challenging problem, right? If you have bright sunlight, there's going to be glare. Sometimes it's for artistic effect. Sometimes it's just annoying. So can you take a photo that has these concentric rings because of glare and either boost the glare, create some cheesy effects-- like a rainbow transition here from blue to red-- or actually suppress the glare-- again, all from a single photo? So it turns out, glare can also be captured using a light field camera. If you have a bright light such as this-- let's see. This is a bright light. And this is some other scene. The bright light will create a sharp photo. But because of interreflection, the Fresnel reflections in the lenses also create a glare effect and contribute light to the wrong part of the image. But again, what we will learn in this class is that by doing this 4D sampling, you have complete control over the lens glare-- certain types of glare. Again, the light field concept for a camera array-- Stanford and Professor Marc Levoy are the world leaders in thinking about light fields and light field cameras. So they built this amazing camera array-- the electronics for it, its optics, and so on. In this case, I believe, about 51 cameras. And then, they can do very interesting things. So here is a scene. We have about 51 cameras looking at the scene behind the bushes and trees. And this is how it looks. Focus on that part. And by doing refocusing, you can see what's behind those trees. So this is just-- you're just doing virtual refocusing in the scene. And by doing that, by using an extremely large aperture, you can see what's behind these bushes. And it's pure refocusing. You can do additional computational techniques to recover what's behind the trees. So a lot of things in computational cameras and photography are really about magic. How can you look around a corner? How can you look behind the trees? How can you look inside the body, and so on? So that's why I like this field. It's like magic tricks. And once in a while, you come up with your own magic trick. And some other times, people show you a magic trick. And different people figure out different ways of achieving the same magic. And that's why it's such a vibrant new field. 
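Here is a rough sketch of the shift-and-add computation behind that see-through-the-bushes demo, under a simplified plane-plus-parallax model with known camera positions; the function and parameter names are mine, and real systems need geometric and photometric calibration that is omitted here:

```python
# Rough sketch of synthetic-aperture refocusing with a camera array, under a
# simplified plane-plus-parallax model. Points at the chosen depth align and
# stay sharp; an occluder at a different depth (the bushes) is smeared across
# the huge virtual aperture and fades away.
import numpy as np

def synthetic_aperture(images, cam_positions_m, depth_m, pixels_per_meter):
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (cx, cy) in zip(images, cam_positions_m):
        # Parallax shift is proportional to baseline / depth.
        dx = int(round(cx / depth_m * pixels_per_meter))
        dy = int(round(cy / depth_m * pixels_per_meter))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(images)

# e.g., for ~51 cameras:
# focused = synthetic_aperture(imgs, positions, depth_m=5.0, pixels_per_meter=800)
```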
The way synthetic aperture works is the same way: if I hold a needle next to my eye-- not poking my eye, but just right next to my eye-- then if I focus on the needle, I'll see it. But if I focus far away, then this needle just becomes blurred. And it does not occlude what's behind. And the same concept applies to synthetic aperture. It's used in radar. It's used in astronomy. All you have is an array of receivers, whether it's antennas, or microphones, whatever it is-- in this case, cameras, an array of cameras. And if you have a very, very large aperture, then a point that was occluding some point behind it ends up being extremely blurred. So it does not impact what's behind it. If you have a very narrow aperture camera, you cannot do that. But you can do that if you have a very wide aperture camera. So we saw that. OK. So what about medical imaging, such as computed tomography? And we're jumping from photography to tomography. But both are recording. One is recording light. The other one is recording slices. It turns out you can use very similar principles. So what's happening in tomography? You have an X-ray source that's emitting in an omnidirectional fashion. And you have detectors here. Should I use this screen, because most people are-- which screen is better for the majority of people? That one? Sorry. AUDIENCE: Yeah, that one. RAMESH RASKAR: All right. I'll use this one. Sorry, Rod. [LAUGHS] AUDIENCE: It's fine. RAMESH RASKAR: This one was convenient for me. You have an X-ray source. And you have an array of detectors. And basically, when you put your head inside this X-ray tomography, CAT scan machine, the X-ray source spins. And the detector moves in the same direction, right? And let me show you a video of how this actually works. And let's start from the beginning. All right. It's been opened up. So you know what's going on. That's what's happening. Imagine your head is inside that. [LAUGHTER] All right? That's what the CAT scan machine is doing. And while they put an eyepatch on and you're just resting inside, it's basically this giant engine. It's totally crazy, OK? It's totally unnecessary. We are in the 21st century. [LAUGHTER] We are in the 21st century. And we're building these devices that are based on 40-year-old principles. It's unbelievable. It's totally ridiculous. What we need to do is completely rethink how this imaging is done and use new computational methods to overcome these totally bizarre, multimillion-dollar devices. So that will be one of the things we'll be learning in this class-- how we can take principles from signal processing, photography, and scientific imaging, and mix and match them to build new things. Borehole tomography-- another very interesting problem. All you do is create your drill holes. You put explosives here. And you put sensors here-- or in this case, a [INAUDIBLE].. And then, you fire off these bombs, effectively. And based on how long it takes for sound to travel to these detectors, it tells you the density of the material-- whether it's rock or oil, or other types of formations. And from that, the oil companies can figure out what oil there is, and where it is. So you can create a 3D map of what's inside. Again, multibillion-dollar situations. Microscopy, deconvolution-- also used in photography, also used in machine vision and computer vision, which we'll talk about. Coded aperture imaging-- an idea that was used in astronomy, because in astronomy, you're looking at gamma rays and X-rays. 
And you cannot build lenses for them to form an image. So you can either create an image of the sky with a pinhole, or you can use a coded aperture so you can collect more light. Now, that idea-- we'll learn about that-- can also be used for photography. So what our group did was, we used a coded aperture in the lens. Instead of having a clear, disk-like aperture, we put a-- you can barely see it. I'm sorry. It's supposed to go this way. [CHUCKLES] You have a coded aperture, which looks like a crossword-puzzle-shaped mask. And with that, you can take a photo, again, which could be out of focus, and then digitally refocus it. So this is the out-of-focus photo. This is the in-focus photo. And you can capture a glint in the eye, or even a strand of hair. Again, a very minor change in the camera. And earlier, we were talking about successful biological vision. If you think about the simplest possible biological vision, it's a single-pixel detector in a worm, right? Just a single pixel. It's in muddy, marshy waters, looking for food or maintaining its orientation. It doesn't need a full-fledged camera; it just has a single-pixel detector. It just knows if there's light or no light, and if there's light, how much there is. So it knows when it's dark, when it's day, when it's night, which way has more light, which has less light. But even that single-pixel detector has some very interesting optics in front. It has this very intriguing shielding pigment in front of it. Can anybody guess what the reason for that is? How does it benefit from having some random pigment that's blocking the light from different directions? AUDIENCE: For orientation? RAMESH RASKAR: For orientation. So if this worm wants to maintain an-- if there's a light source, and if the sensor were hemispherical, then if this worm moves a little bit, there will not be much change. But if it has a very high frequency pigment, then if the worm moves even a little bit, there'd be a big change in the lighting, right? It's as if you have a very high frequency mask and you're looking at the sun. As you move, the light goes up and down. So the worm knows that as long as it's maintaining the same level of light, it's maintaining its orientation. That's all it needs to do. AUDIENCE: Does it also increase the effective dynamic range of its sensor? RAMESH RASKAR: It's possible. AUDIENCE: Like neutral density filters along certain directions, and-- RAMESH RASKAR: It's possible. It's possible. Maybe when it's looking in one direction, it's too bright. So it tries to block that. We don't know. And if you read this book-- there's a beautiful book called Animal Eyes by Land and Nilsson. We have a whole class on animal eyes. I think it's the seventh class, if I remember correctly. And we'll discuss all different types of animal eyes and why they do it, whether it's eagles or land creatures, underwater creatures, worms, and all that. And frankly, most of these biological visual systems are based on hypotheses. And they're verified with very, very scientific, but at the same time very primitive, equipment. And one of the great projects would be to take some of these worms and put them in a controlled lighting setup to really verify if this is how they work. That'd be a lot of fun. I'll provide the worms. Don't worry. [LAUGHTER] So the coded aperture is somewhat similar. Let me skip over this part, because we talked a little bit about how these masks work. All right, wavefront coding. 
This is a concept that was invented by Cathey and Dowski in 1995 for shaping light, or shaping the wavefront of incoming light. So you have a traditional imaging setup. You have an object. You have a sensor. You have some lenses. What they proposed is placing an additional optical layer in between, which is not a lens or a prism, but has variable thickness or variable refractive index, OK? The simplest way to think about that is, the light in the top part might travel at one speed. Remember, if you have glass with a different thickness or a different refractive index, the light is going to slow down. And when it comes out the other side, in air, it can go back to one foot per? AUDIENCE: Nanosecond. RAMESH RASKAR: Nanosecond, very good. But before that, it's going to travel slightly less than one foot per nanosecond, right? If the index is 1.5, it's going to travel at 1 divided by 1.5-- two-thirds of a foot per nanosecond. So anyway, by adding glass of different thickness or different refractive index, each of these rays is going to be slightly out of phase with the others. So when they combine on the detector, they will interfere either constructively or destructively. And from that, you'll form new images, right? So this is how they explained it. And if you read the papers, unfortunately, they are very difficult to understand. And what you'll realize is that instead of going into the math of Fourier optics and so on, in this class we will use very simple ray diagrams and understand how this works in a very visual manner. OK. So basically, what a wavefront coding camera does is-- in a traditional camera, rays converge to a single point, and you get a sharply focused image. So if the sensor goes in and out of focus, you get a large blur. In the case of a wavefront coded camera, you actually never get a sharp spot. What you get is basically-- think of taking a lot of straws, all of them converging to a single point, taking them and then twisting them so that they go out again as straws. But in between, there's a part where the cross-section of all of them is roughly constant, OK? And by doing that, it turns out, for a sufficiently large depth range, the defocus blur is equivalent. And we'll study this in detail-- how this works, and how you can use the same techniques for new types of photography and scientific imaging. Now, this is also a very hot topic in night vision goggles, by the way, where they want night vision goggles such that when you look far away, it's very clear. And again, night vision goggles have very large apertures. And when you look closer-- if you want to read a map, for example-- it should still be in focus. So how do you create a passive device that can focus at infinity and very close up at the same time? And they have been using wavefront coding as well. Wow, I can barely see this. All right. This is a project called "Decoding Depth via Defocus Blur." This is from Colorado, from Rafael [INAUDIBLE] and [INAUDIBLE]. And this is very counterintuitive. If you take a point light-- and I wish I could do this experiment. I can do it here, but it'll take some time. If you take a point light and form an image on this particular guy-- AUDIENCE: Do you want a little flashlight? RAMESH RASKAR: Yeah. Thanks. So as you can see here, that's the image plane, right? As I move it in and out, its shape is going to change-- the spot of light. Now, imagine-- and that's what happens in the bottom part of the image. As you are in sharp focus, you get a small spot. As you go out of focus, you get a bigger disk. 
But with the kind of optics this group designed, when you're in focus, you see two spots, left and right of each other. And when you go out of focus, these two spots rotate one way or the other. If you are closer than the plane of focus, it rotates one way; if you're farther than the plane of focus, it rotates the other way. So this rotational point spread function is a very powerful concept. And right now, they're using it in microscopy for resolving fluorescent beads at nanometer precision. But it could also be used in photography. Nobody knows how to do it. This could be one of your research projects. And we can get some of these prototypes from our colleagues. So it's a lot of fun to play with. Another interesting thing we will be looking at is the relationship between Fourier optics and ray optics. Now, in high school, we were taught that there is particle and wave duality, and a lot of confusion. And in high school, we had to answer questions about which phenomena can be shown with the particle model, and which can be shown using the wave propagation model. And one of the standard answers was, oh, if there's interference or diffraction, then it can only be explained with Fourier optics, not with the particle nature of light. That's too simplistic. It turns out that you can really show that they are duals of each other and explain diffraction, and interference, and all these mechanisms using purely ray propagation. And so with Se Baek Oh-- his name is not here-- and Josh [INAUDIBLE], who is from mechanical engineering, we have a project where we have created a so-called augmented light field, which can actually support all the wave optics effects as well. So the traditional light field that we just described earlier-- which you can capture with an array of cameras and so on-- can do this position-angle representation, the four-dimensional representation. And at that time, I made a claim that if you can capture this 4D incoming function, you have captured everything-- everything-- geometrically that has come through the lens. So it's a complete representation of the light. And some people would say, wow. But then, you're not capturing the phase. And you're not capturing all these other things. But it turns out that using the same exact setup, you are also able to capture phase. The Fourier representation actually includes the amplitude and phase of the incoming wavefront. And so simply by using different mathematical terminology, we have augmented this light field representation to also model wavefront effects. So we'll study that a little later. What about photographs like this? If you look at this heater, it's creating these beautiful streams of hot air. And also, the hot air rising from this lamp. Again, we are trying to visualize what cannot be seen with the naked eye. So this is known as Schlieren photography, which is, again, looking at very minor changes in the optical path. In this case, it's because of heat-- on a hot day on a highway, you see a mirage. But that happens at extremely high temperatures. Now, even at not-so-high temperatures, you can actually capture this mirage effect and create very beautiful photos. So this is a student's photograph. And we'll study that. Polarization is beautiful. You may have used polarization for taking photos of the sky, or on water. But underwater photography can be dramatically improved with polarization. So we'll study that.
We'll also study some new types of sensors-- not just two-dimensional sensors, but these other sensors-- a single-pixel camera, compressed sensing. How many of you have heard of compressed sensing? Right? There's a lot of hype about it. And again, if you read the papers, sometimes it's very difficult to follow what exactly they're trying to say and how it's done. But as you will see in this class, you'll get the whole idea of compressed sensing in less than five minutes. And we'll go through what works, what doesn't work. And actually, Rohit is working on a project which hopefully will show that compressed sensing is actually not such a great idea for imaging. But it is good for something else. And we'll study what that "something else" is when his and Rashad's project reaches a more mature stage. So very exciting ideas there. So the original idea was to create a single-pixel camera. Instead of creating a megapixel camera, you have a single pixel that's going to take a coded combination of incoming light. This may be a scene. It's being focused on this whole reflective mirror array. And you're going to flip these mirrors on and off. So the light you collect will be a product of the scene with this vector. And if you change these flips, you get some other sum. You're going to make linear sums of incoming light, and if you repeat a million such measurements, you can reconstruct a megapixel photo. But the claim of this group from Rice is that you don't have to actually take a million readings. You might be able to get away with only 10,000 or 100,000 readings. So you can capture a megapixel photo with possibly only 10,000 measurements from a single-pixel camera. And we'll look into that-- what part of that statement is true, and what part of the statement requires more analysis. And you'll also see that this can be used in a lot of other situations. So for example, my group has built a strobing camera that can be used for laryngoscopy, where in a laryngoscope, you use very high speed strobing to slow down and visualize the motion of the vocal folds. But we just came up with a new method that is dramatically simpler and uses dramatically less light. So you don't burn your throat when the doctor is looking at your vocal folds. And that's based on compressed sensing. So let me stop here, because we're almost at 4:30. And we'll come back. We'll come back and look at some of these other projects. So these are the kinds of project assignments you will see-- relighting, the first one that's already described. Dual photography, where you can read your opponent's cards. Virtual optical bench, light field capture, high-speed imaging, thermal imaging, multispectral imaging, range imaging, and so on. And then, a completely open-ended final project, which you can choose in any area. This is the first assignment. The instructions will go out on the Stellar web page. Please make sure you have the sign-up sheet. Has it gone around? OK, please make sure your name and email is on the sign-up sheet. If you don't get an email from me by Monday morning, please send me an email, which means that I could not read your email address correctly. So yeah, these will be some of the assignments you'll be doing. And this one is due on September 25. And every class starting from next class, we'll have a volunteer who takes notes for the class and posts them, because a lot of our discussion is going to be on the board and so on. I'll send out specific instructions. So we need a volunteer for next week. You want to do that? AUDIENCE: Yeah.
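A toy version of the Rice-style single-pixel measurement model, with a greedy recovery standing in for the real L1 solvers (everything here-- the sizes, the seed, the orthogonal matching pursuit reconstruction-- is an illustrative assumption, not their implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 5                     # pixels, measurements, bright spots

x = np.zeros(n)                          # sparse "scene"
x[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5

# +/-1 patterns (obtainable by measuring a mirror pattern and its
# complement and subtracting); each row is one photodiode reading.
Phi = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi @ x                              # m single-pixel readings, m << n

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily pick the column that best
    # explains the residual, then re-fit on the chosen support.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

print(np.allclose(omp(Phi, y, k), x))    # 256 unknowns from 80 readings
```

The catch, as the lecture hints, is that this only works when the scene really is sparse in some basis; that caveat is where "what part of the statement requires more analysis" comes in.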
RAMESH RASKAR: What's your name? AUDIENCE: Sam. RAMESH RASKAR: Sam. All right, Sam is going to volunteer for next week. And we'll decide who's going to do it another week. [SIDE CONVERSATIONS] AUDIENCE: [INAUDIBLE]? RAMESH RASKAR: Sorry? AUDIENCE: Do you have a [INAUDIBLE] from the [INAUDIBLE]? RAMESH RASKAR: Probably not, because most of the people are going to be-- many people, not most-- but many people are going to be listening. So as far as I know, it's not going to cross the limit. But if there is, there's a cap. I don't think it's going to reach the cap. I don't [INAUDIBLE]. [SIDE CONVERSATIONS] And we'll see some of these demos next time.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_5_Lightfields_part_1_Part_2.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we're looking at some things here, right-- x and theta and how they can be represented. But you can create some really beautiful images out of that. You can even create-- some of my slides are not on here. You can create some really interesting pictures like this. So this picture is made in that x-theta space. The coordinates are a little bit changed here. This is a slide from [INAUDIBLE]. And it's a robot that Andrew Adams built at Stanford just using LEGOs, and you can do this for your assignment as well. You can build a LEGO-based gantry that can shift and take pictures. For your assignment, you're just doing 1D translation, but you could even do 2D translation. And then imagine if I put a camera here, and I simply translated it. And now we're in flatland. When the camera's here, I will take a picture and put it over here, this much. Then I'll move the camera, and I'll stack those images. So for example, if you look at this particular green object over here, you'll see that it actually creates a line, because when I start translating the camera, it traces that line. On the other hand, this particular white sphere ends up mapping differently, which is really strange. So this is the visualization of the same space that you saw earlier, which looks maybe simple and very boring. OK. So once you start putting in interesting objects, that's how [? your light field looks. ?] And this is [INAUDIBLE]. Here, t and theta are the same as x and theta. And the appearance of this room through this window-- all you've got to do is look at this wall through this window. You cannot go inside. So you can only see it from outside. Everything that can be captured about the scene through this window is represented in this 2D light field. So now if I want to create any refocusing effects, any depth of field effects, all the information is available here. And if I just reduce the pixels from here and take projections and so on, I can create those effects. This will twist your mind around a bit, and then it starts making sense. For those of you who have a computer vision background, this is also known as epipolar plane images, or EPI. OK. So let's go back to this concept we were talking about earlier, where you're going to represent each part of the lens as a different camera. So [INAUDIBLE] subsections [INAUDIBLE], except we know that if we're all focused at infinity, I can just take the sum of all these images-- or the average of those images-- and I'm done. But if I were to focus closer, then I cannot just take the average. I will shift and add. And what the prism is doing in the real world is doing the shifting for you. So let's go back to this example of the discussion of shifting and adding. So I'm going to compare this with the five cameras' pictures. [INAUDIBLE] point here from the first camera. The main focus, here. The second one, it was there. Third one, it was there. Fourth one goes here, fifth one goes here. And that's my pinhole camera. So as you can see, if the point is close by, the coordinates here are not the same. Here it's near zero. And here it's further up from the center, and here it's further down from the center. That's all.
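A small sketch of how such an x-theta (epipolar plane) image can be assembled from the translated-camera stack just described-- the frame list and row index below are placeholders for whatever your gantry captures:

```python
import numpy as np

def epi_from_stack(frames, row):
    # frames: list of HxW images from a camera translated in equal steps.
    # Stacking one scanline per camera gives a slice of the light field:
    # camera position (theta) down, pixel position (x) across. A scene
    # point traces a line whose slope encodes its depth.
    return np.stack([f[row, :] for f in frames], axis=0)

# epi = epi_from_stack(frames, row=120)   # hypothetical 9-frame capture
```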
So when I have to bring these five images together, I need to shift these images so that [INAUDIBLE] overlap on top of each other, so that this point will be in sharp focus. The lens does that for you automatically, because of the prism. If I put a painting here, and there is just a pinhole here-- this is ABC. This will come back as ABC. OK, but this one here would have created an image that looks like this. But because of the prism, it rotates the image. And then this thing shifts down. So they're on top of each other. And because of this part of the prism, it shifts. The image would have been created here, with ABC. We'll shift just a little bit and so on. And for the other one, the image would have been formed here. We shift up. So the prism does the job for you, basically, of shifting these images. So it's a very simple way of thinking about how we can emulate a very large lens with an array of [? cameras. ?] So the top cameras here should shift their images down, and the bottom cameras should shift up. And by doing that, you can focus wherever you want, except that, in the case of a traditional lens, once the photo's taken, you're done. In the case of an array of cameras, you can take those pictures and shift them as much as you want and add them in interesting ways. And that's why light fields are so powerful. So instead of using an array of cameras, we're going to build lenslet arrays or some other fancy optics so that we can directly capture this type of image in a single snapshot. Now, there are three ways of creating-- yes. AUDIENCE: So would these [INAUDIBLE] be able to [INAUDIBLE]? PROFESSOR: This is conceptual. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, you're right. So when you think about the Stanford camera array, you want each of these cameras to have everything in sharp focus, which means it's a pinhole camera. So conceptually, yes, it's a pinhole camera. But in the real world, you cannot have a pinhole camera. So-- AUDIENCE: You still try to keep [INAUDIBLE]. PROFESSOR: You've got to keep the aperture small. But-- and if you use a really cheap camera, like a cell phone camera, they usually have a pretty large depth of field anyway. So they're as close to pinhole as you can get. OK, so basically three ways of capturing light fields-- something that exploits shadows, a pinhole array, 1908; something that uses a lenslet [? array, ?] also early 1900s; and the so-called [? heterodyne ?] method, 100 years later, in my group, OK? So 100 years to come up with a third solution. Think about how you can-- a lens aperture is basically a pinhole with a prism. So I can take each of these pinholes and just-- so you have this, you have this. And the third one would be-- I can just put a pinhole. And in front of this, I can put a prism. In front of this, I'll put a shallow prism. In front of this, I'll just put a piece of glass. Here I'll put a-- all right? So a lens is basically an array of [? pinholes ?] with prisms in front. And as we saw, if you have two corners of these pinholes, they [? measure ?] before the [? good ?] [? luck ?] sensor. But unless there's a [INAUDIBLE] of just bending light [INAUDIBLE]. OK? So this figure is from Lippmann, I believe in 1930. Sorry, this one is Lippmann. This one is Ives in 1933. And he said, OK, if you really want to capture each of the [INAUDIBLE] individually, you can just put a pinhole for that. So [INAUDIBLE]. OK, so if I put a pinhole array there and take a picture-- so now, let's [INAUDIBLE] just a pinhole array.
Then you end up getting a picture that looks like this. And if you zoom in on that, you'll see that under each pinhole, it creates a disk. So remember, we had a lenslet here. Instead of that, we just want to have a [INAUDIBLE]. And then [INAUDIBLE] here's a pinhole array. So that's an example with just a pinhole array. So this is solution number one. Here's our solution number two. And solution number three we'll see in a second. And it turns out, if you think about glare in a scene, like a lens flare, the 2D image might look like that on the top right. But if you zoom in, it turns out, in the 4D space, the flare manifests itself as bright spots in the [INAUDIBLE] image. So you can just go through this image and eliminate all the bright spots, and you can get rid of the lens flare. You know, so you can remove the outliers. Just do some kind of median filtering in a neighborhood, and the lens flare [INAUDIBLE]. All right, so the second way of doing it is by using a lenslet array, which we already saw. So I don't want to repeat that again. And now comes the question of, what happens to points that are in focus versus out of focus? And this is a very key concept. You have to understand it. So let's say I have a lenslet array. And I have a point that's in sharp focus at one of these lenslets. And it's the same situation here. The point is in sharp focus at this pinhole. Mike, we'll go back, and I'll get this here. But listen, this is a red crayon. All the rays are red. And when they come back through this pinhole, this whole disk is red. So that's why, in this part of the image-- where in the original photo, with the original camera, the red crayons are in sharp focus-- the whole disk is red, which means that every part of the lens is getting a red ray from this point. But let's go to a part that's actually out of focus. So I believe this one is over here. So that's on the boundary between maybe a yellow crayon and a blue crayon. So even in the original photo, it's out of focus. It isn't in [INAUDIBLE] focus. And here you can see part of each blob under a lenslet is yellow, and the other part is blue. OK. Hold that up. So we have a situation where it's out of focus. We have a yellow and blue, yellow and blue. But that's actually out of focus. So if I start shooting the rays from that, [INAUDIBLE]. If I just take the one from the yellow-blue boundary, for this part, the light is in focus closer. OK, it's in focus closer. And that's what the image shows. So for the bottom part here, all these rays are going to be [INAUDIBLE]. But this part here is not [INAUDIBLE]. And for the blue part, again, we shoot the rays. And what we realize is that-- because this thing [INAUDIBLE]. If I had a yellow cone coming through and a slightly offset blue cone coming through, and because they were not in sharp focus, they will both contribute to one blob, so that part of the blob is yellow. Part of the blob is-- now, let's go in the-- this one is the [INAUDIBLE]. Let's do the one that's farther away. It's too bad the image [INAUDIBLE]. All right, let's look at this one. [INAUDIBLE] example. So let's just look at the green one and the one that's kind of blackish next to it. So this one was very easy. The blue was to the left of the yellow in the original picture, and even inside each blob, the blue is to the left. But if you see this one, the green's to the left of the dark region. So it has [INAUDIBLE]. So here the left-right ordering is maintained.
Here the left-right ordering is switched. Can anybody tell me what's going on here? Just [INAUDIBLE] exactly, because in front of the focus plane, the ordering remains the same. When you are behind the focus plane, the order is switched. And that's why you cannot simply take this image and reconstruct the original high frequency, high-resolution image. There's a lot of processing required to be able to recover that. So it's a lot of fun to look at these images. There's all this information that's encoded-- the four-dimensional information. OK, so here's our solution. Instead of placing a lenslet array, we're going to place a mask-- not a pinhole array, but a specially printed mask. You can just take a medium format camera, such as [INAUDIBLE]. Just remove the [? IR ?] protective glass. And then simply plop a film on top of the medium-format camera sensor. Put back the protective glass, and you're ready to go, and you have a [INAUDIBLE] camera. So in this case, we are not putting in a pinhole array-- I'm going to go through the slides [INAUDIBLE]. Instead of putting in a pinhole array, we're going to place a different type of mask. We captured that. And one concept that we have to look at before we get there is this concept of conjugate planes-- if a plane is in sharp focus on our sensor, then that plane and the corresponding sensor plane are conjugate with each other. If I put an object closer, then I must move the sensor further back. And those two are conjugate to each other. And that is defined by the lens equation: 1 over f equals 1 over u plus 1 over v, OK? Now, the key concept here is that a lens copies a light field from the outside world to the inside world. What does that mean? Basically, what we were doing is, we were assigning some coordinate system. We called it theta. And we call this one x. This is the two-plane parameterization. Now, there's a little bit of fudging going on here, because theta doesn't really correspond to absolute angle, but to a coordinate on this axis. So theta is 0 here, and it's plus 5 here and minus 4 here. And x [INAUDIBLE]. And this x-theta relationship holds between any point here and any point here if I [INAUDIBLE]. Now, it turns out-- let's say I have two points here, a and b, and two points here, a and b [INAUDIBLE] here. I can also assign a new plane here [INAUDIBLE]. This light field-- which means this x-theta relationship-- is maintained in this theta-x relationship. So if I take a ray here from-- I'm going to assign this also-- let's say we're operating here. And I'll assign coordinates of 1 to 1,000 here, and coordinates of 1 to 1,000 there. Let's look at some real numbers. Then if I shoot a ray from, say, (200, plus 4), I can guarantee, if the planes are conjugate, that the ray will map back to [INAUDIBLE]. It'll map to the [INAUDIBLE]. And of course, you can have 200 mapping to 3 here. 3 will automatically map as well. And that's only because this plane is in sharp focus [INAUDIBLE]. This notion of conjugation means that the light field of this x-theta plane is mapped to this theta-x plane. So I can take [INAUDIBLE] I can shoot all the rays here. If there's a point here, then I shoot rays out of it. The same point will be in sharp focus here, and all the rays will [INAUDIBLE]. OK, if I have two rays that start in the same theta direction, they'll come out of here in the same theta direction, OK, and so on.
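The conjugate-plane bookkeeping from the lens equation, as a tiny helper (a thin-lens idealization; the units are whatever you feed in):

```python
def conjugate_distance(f, u):
    # Solve 1/f = 1/u + 1/v for the sensor-side distance v at which an
    # object at distance u is in sharp focus (thin lens, requires u > f).
    if u <= f:
        raise ValueError("object at or inside focal length: no real image")
    return 1.0 / (1.0 / f - 1.0 / u)

print(conjugate_distance(f=50.0, u=1000.0))   # e.g. in mm: ~52.6
```

Moving the object closer (smaller u) makes v grow, which is the "move the sensor further back" behavior described above.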
So basically, I'm creating an exact replica of the light field here [INAUDIBLE]. And this is possible only when the lens is extremely good [INAUDIBLE]. So it might do a pretty good job-- it wants to do that-- a good job of copying it in the center, but not at the edges. Or it might be good for one [INAUDIBLE], but not for others, and so on. And this notion of copying the light field makes the lens very unique, because we know that the appearance of the world is 4D. It completely describes the appearance of the world. And what the lens has done is, optically, it has copied the appearance of the world faithfully to something that's inside. And now we just need a good-quality sensor to basically take the hologram of what's out there. And I'm using the word hologram to indicate the 4D appearance of the window into the world. So this window that we have here is exactly copied in 4D. So anything that's behind this window is faithfully reproduced over here. Unfortunately, in a traditional camera, you take that window and map it back to a 2D image. So the problem is that the optics is doing its job of going from 4D to 4D, right? 4D to 4D. But the sensor does a terrible job of maintaining that 4D [INAUDIBLE]. But with these three solutions, you can recover this 4D image. So let me just explain this concept one more time, of how the rays look different when something is in sharp focus versus out of focus. And I promise you that after we understand this concept of rays and ray space and all that, everything else-- out of focus and depth of field-- will become extremely easy to understand. So just see how this works out. So now let's see. We have a point here. OK, we can either represent the light field here, or we can represent it here. Doesn't really matter. Now, here what we have is a point that's emitting light in all directions. What does that light field actually look like? There's a point x that's emanating light in all directions. Now we know that the lens is going to make an exact replica of that inside. So when I come here, I have a particular x, and light's coming from all different directions. So what's arriving at the plane is exactly that, OK? From this, how do I form an image? How do I realize that, if I integrate the radiance along each of these rays, I will get the intensity [INAUDIBLE]? So we're going from a 2D world now to a 1D world. In the general case, we go from a 4D world to a 2D sensor-- 4D representation, 2D sensor. Here, we have a 2D light field and a 1D sensor. How would you do it? So with that, I'm just going to take all these values and sum them up. And mathematically, that's just taking a projection. I'm just going to take this 2D world and flatten it to a 1D world, or you could call it a light integral. So imagine I have all these tennis balls here, and I just drop all of them under the force of gravity. The number of balls that land here would be the intensity at that pixel, and [? it's black ?] everywhere else. Now, what happens when something is out of focus? If it's out of focus, then the lens is doing a good job of transferring the 4D here to 4D here. But it did the right job for this plane, not for this one. So how do we represent the new light field? It's [INAUDIBLE] simple. There's just one [INAUDIBLE]. We have a bunch of rays and want to represent them. Previously, the light was reaching only one x point. Now it's reaching an array of x points.
And for each of the x points, there's only one direction in which [INAUDIBLE]. So it's still a line in x-theta space, but slanted. OK, so I have a bunch of different points here. And for each of them, there's one direction from which the light arrives. So there's the notion of shear. The straight vertical line has been sheared into a slanted line. And now, how do we compute the image? How do we compute the intensity on the sensor? Again, the same operation-- imagine we had all these tennis balls here, and we let them drop. If we let them drop, we're going to get intensity that spreads over a set of pixels rather than just one pixel. And that's what you see in the captured photo. When something is out of focus, you don't see a sharp point, but you see a blurred spot. So again, we start from here-- exact copy. For a given x, we have all the thetas; we project; we get a sharp point. That's the intensity in the captured photo. If it's out of focus, the set of rays is represented by a slanted line now. When we project that, we get the integral. And that will be [INAUDIBLE]. Is this clear? So now, when you're thinking about doing the assignment, and you have a set of photos, and you want to do refocusing, you can think of it in multiple ways. If I stack the photos-- OK, so remember, taking a picture with an array of cameras is the same thing. So I'm going to put an array of cameras here. And the camera number is theta. And the pixel coordinate within a camera is x. So I can take the picture from the first camera, and I'll put that as the top row of this. I'll take the picture with theta equals 2. I put it over here. Sorry, theta equals 5. Over here-- 4, 3, 2, 1, 0. If I have nine cameras, I'll put them over here. And if I just want to create a picture that's focused at infinity, I'm going to stack all those pictures and just take the sum along the vertical direction. And that will focus at infinity. If I want to focus closer, then I shouldn't just sum them up as they are. I need to slightly shear them. So I'm going to keep the center camera fixed-- the center camera image as it is. I'll take the theta equals 1 camera and shift it to the left by 1 pixel. At theta equals 2, I'll shift it by 2 pixels, 3 pixels, 5 pixels-- sorry, minus 1, minus 2, minus 3. And the ones over here, I'll shift by plus 1, plus 2, plus 3. If I sum them up, I'm actually focusing on a different plane. So this is the main concept behind refocusing-- it's shifting and adding. And you can think of that within a single lens, or you can think of that using an array of cameras. Is it clear so far? So next time you're taking a picture, think about how much work the lens is doing. It's copying the light field, right? It's capturing the hologram of what's out there. This is using the popular terminology-- it's capturing the hologram of what's out there. It's recreating that hologram very close to the sensor, and all the sensor is doing is just recording that as a 2D image, because the sensor doesn't have the ability to record, in the traditional sense, a 4D hologram. It can only sense a 2D image. And it does all of that at the speed of light. Isn't it great? And that's why computational photography is exciting-- because there's some work you can let the computer do, and there's a lot of work you can just let the physics do for you. But when the physics does it for you, it happens at the speed of light with almost no additional cost.
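Here is a minimal shift-and-add refocuser matching that recipe: keep the center camera fixed, shift camera theta by theta times s pixels, and average. s = 0 focuses at infinity; larger shears focus closer. The wrap-around at the image borders (np.roll) is a shortcut acceptable only in a sketch:

```python
import numpy as np

def refocus(views, s):
    # views: list of HxW images from equally spaced cameras, ordered so
    # the center camera sits in the middle of the list.
    # s: shear, in pixels of shift per camera step.
    n = len(views)
    acc = np.zeros_like(views[0], dtype=float)
    for i, img in enumerate(views):
        theta = i - n // 2               # camera index relative to center
        acc += np.roll(img, -int(round(theta * s)), axis=1)
    return acc / n

# infinity = refocus(views, s=0)   # sum straight down the stack
# closer   = refocus(views, s=2)   # 2-pixel shift per camera step
```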
So doing this co-design of the physical device and the computational device brings in the real power of computational cameras, computational photography. All right, so let me switch to some other things I wanted to show you. By the way, is this all clear so far, this part here? This will be on your exam, remember. This is the fun part. All right, so now that you've understood all these concepts of rays and 4D space, let's see how it impacts-- we already saw out-of-focus blur. But let's see how it impacts some other elements. So let's say you have a point light. You take the photo in sharp focus-- it looks like a point. If you take the photo out of focus, it appears as a disk. And if you look-- if you saw this resolution chart, then it was blurred as well. And here's an example, to answer [INAUDIBLE] question, where we started with a 2D image and now we have another 2D image, but the blurred image was formed, effectively, by this disk. So we took every point, placed a disk around it, summed up all those values, and assigned it over there, which was achieved using the same effect as this. Right? Every pixel in the world, like this one here, is actually contributing to a disk. The other way to think about this: if I just take one pixel here and go up vertically, it's receiving light from different points in the world. And that's how we can specify [INAUDIBLE]. So now we have that. I'll come back to the [INAUDIBLE]. But every once in a while, we can create some really interesting-- and this is called bokeh; it's a Japanese word. What's the right way to say it? [INAUDIBLE] AUDIENCE: In Japan? PROFESSOR: Yeah. AUDIENCE: Bokeh. PROFESSOR: Bokeh. AUDIENCE: [INAUDIBLE] PROFESSOR: All right, good. And, you know, it's the exploitation of out-of-focus blur. But sometimes you can do some interesting things. Instead of keeping the aperture completely open, you can insert a special pattern. OK, so we're going to place this crossword-puzzle-shaped mask in the aperture. And now if I take the photo out of focus, instead of getting a disk, the LED that you saw earlier, out of focus, would [INAUDIBLE] something like this. The same 7 by 7 pattern that we have here ends up appearing. [INAUDIBLE] and this whole pattern [? fits in. ?] So you can do some really interesting things with it. So again, photographers want to take pictures that have really beautiful bokeh. With a really tiny aperture, things are in focus over a large depth of field. But with a very large aperture, the background is completely out of focus. So now you can start playing some really interesting [INAUDIBLE]. You can take a scene which has tiny bright spots in a [INAUDIBLE] fashion. And in your picture, instead of putting the 7 by 7 crossword-puzzle-shaped mask, you can start putting in some alphanumeric [INAUDIBLE]. OK, so I'm going to take three different pictures-- one with this aperture, one with this aperture, one with this aperture. And if you just take an addition of those three photos-- so taking a picture with this aperture, you get a vertical line for every bright spot that's out of focus. For this one, you get a vertical line, vertical line, vertical line at a different position. You just sum them up. Every out-of-focus spot will have a little [INAUDIBLE]. So you can-- I believe there's an animation here. So you can see the letter appearing at all different places. So let's try to understand what's going on here. So that's with a disk aperture, like a traditional aperture.
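The "sum of photos equals the photo through the combined aperture" trick rests on the linearity of image formation, which a toy simulation can confirm (the FFT camera below is a stand-in for the real optics, with made-up hole positions):

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.random((64, 64))

def capture(scene, psf):
    # Toy linear camera: out-of-focus image = scene (circularly)
    # convolved with the aperture-shaped point spread function.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

a = np.zeros((64, 64)); a[30, 30] = 1.0   # one open hole in the aperture
b = np.zeros((64, 64)); b[34, 34] = 1.0   # a second open hole

# One exposure through both holes equals the sum of the two exposures.
print(np.allclose(capture(scene, a + b),
                  capture(scene, a) + capture(scene, b)))
```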
This is the one with a special aperture. And if you want, you can even say "happy birthday, Jennifer," and it would show up there. So if you take a picture of all the candles, every candle will say happy birthday. OK, so how is this taking place? It's [? difficult ?] to explain purely as out-of-focus blur. But think about what's going on over here-- it's [INAUDIBLE]. So we have our lens and, you know, a sensor. You have one candle, or one bright spot. If you're out of focus, it will create a disk here, right? But imagine you started putting in some [INAUDIBLE]. For simplicity, I'm just going to make it 1-0-1-0 [INAUDIBLE]. So I'm going to put a pattern here-- that's open, closed, open, closed. So in this part of the lens, I can go through [INAUDIBLE]. This part of the lens, I'll block [INAUDIBLE]. Over here, I go through. So I get over here, and this part of the lens is blocked. The pattern [INAUDIBLE] here will be the whole system. That's a little blurry. And if I put a different pattern here, we'll get [INAUDIBLE]. And that's how you can [? code ?] [INAUDIBLE]. How do you think about this in the x-theta space? But [INAUDIBLE] is not as simple. There is not just a point, but a whole bunch of things going on here. And all of them are [INAUDIBLE]. If you go to that ray space, what we have is the x space here, [INAUDIBLE] here. And by blocking this part and this part, what I'm saying is that the darkest points have blocked this part here, this [INAUDIBLE]. And this part is open. That's fine. And again, the bottom part is [INAUDIBLE]. So what the optical system is doing here is taking the light from [INAUDIBLE], and it's just deleting all the rays here. It's just blocking all the rays. And then [INAUDIBLE] picture can create a [INAUDIBLE]. That's the [INAUDIBLE]. And then, again, you can play this cheesy trick of taking multiple images with different blur. And because of the linearity of light, which we discussed last time, you can just take the addition of two images to create the illusion that the aperture actually had those three patterns. So instead of putting a 7 in the aperture and taking one picture, you can take three pictures, each with one of those different apertures, and just take the sum of them. And that's the beauty of light and the interaction of light at normal intensities. You can first take the pictures and then take the addition, as opposed to adding it in the physical domain and then taking a picture. Is this good? So it's a fun project if somebody wants to try it out-- you know, create nice bokeh, beautiful patterns. You can also put an LCD in the aperture, so it's not just a film, and you can change the LCD so that depending on the scene, you can put in a different pattern, and you'll get very interesting effects for certain photos. OK, so what's going on with animal eyes? We'll have a full lecture a little bit later on animal eyes. And compound eyes of animals are also very interesting. These are basically arrays of lenses. But there are hundreds and hundreds of [INAUDIBLE]. And this is kind of an artistic rendering of what the creature must be seeing-- just an array of very tiny images, which is not true, as we'll see later. But that's how the rendition looks. So there are projects such as [? TOMBO, ?] which try to mimic this concept to basically reduce the thickness of your camera. So if you have a 35-millimeter lens, it will be about 35 millimeters deep. But imagine if you want to create a camera that's 3 millimeters [INAUDIBLE].
One way of thinking about that is, I can just split my lens into a set of tiny lenses. So let's go back to this diagram over here. I'll have a sensor, and a lenslet array, and a main lens. I'll just get rid of the main lens. That's all I [? have. ?] And the question is, can you do something useful with that? Any guesses-- what you can do, what you cannot do? That's the situation. So there is no main lens out here. Every point in the world maps to every lenslet. We just got rid of [INAUDIBLE]. So if you have 50 such lenses, every point will [? be seen ?] 50 times. But each of the images is actually pretty low resolution. AUDIENCE: Isn't this basically the same as the light-field big rig? PROFESSOR: Repeat that. AUDIENCE: Isn't this basically the same setup as the light-field big rig but smaller? PROFESSOR: Big rig? AUDIENCE: Yeah. The camera rig. PROFESSOR: Yes, yes. OK, so that's a good point, right? So let's say I tell you that you have a 16-megapixel sensor. Now, 16 megapixels-- in flatland, say you have 4,000 pixels. And you can use those 4,000 pixels any way you want. You have some resolution in theta, and some resolution in x. If I don't use a light-field camera, all of them go to x. If I place a lenslet array, and let's say under each lenslet I get 10 pixels, I'm subdividing my lens into 10 [? sections. ?] And my actual spatial resolution is only 400. So I have lenslets going from 1 to 400. And under each lenslet, 10 pixels-- that's how it adds up. So in that sense, my resolution in x is 400 and my resolution in theta is 10. And you can see that in the x-theta space. It's this way. So this is only 10. And this is 400. And it's like this [INAUDIBLE] because of how we form those measurements. [INAUDIBLE] in a traditional sensor, all of them will go exactly to one. There is no [INAUDIBLE]. So I'll have 4,000 pixels with no resolution in theta. In a light-field camera, I could have 400 pixels and a theta resolution of 10. Or I could go in other directions, where maybe this is even thicker and more [INAUDIBLE]. I could do, for example, 200 [INAUDIBLE]. I'll keep going. So I'll start it with something good-- 10, 5, [INAUDIBLE]. If I completely get rid of the main lens-- I'll give you a hint-- what you end up getting is [INAUDIBLE]-- you get 400 in theta, and only 10 in x. So you have flipped from this situation to this situation, because now what you have is, if I have a [INAUDIBLE] here, x, then from every point, there are rays going toward 400 different lenslets. So this one is gone. I can measure light coming out of this point in 400 different directions. But under each, only [INAUDIBLE]. So this is what I like. So the key lesson here is that with or without the main lens, you are basically flipping the resolution in x and theta. If you have a main lens, you get more spatial resolution, but less angular resolution, which is not always ideal. The world doesn't change that much. We change what we want to [INAUDIBLE]. But in certain [INAUDIBLE] scenarios, as we see [INAUDIBLE] and so on, these aren't real. So the lens flips the resolution [INAUDIBLE]? AUDIENCE: My [INAUDIBLE], it depends on the object [INAUDIBLE]. But if the object was really close to the lens, it's getting a higher sampling at the lens. PROFESSOR: Exactly. So-- AUDIENCE: If this location has the object [INAUDIBLE]. PROFESSOR: That's a great point. Did you hear that? I'll just repeat what he was saying.
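The fixed-budget arithmetic in this flatland example, written out (the 4,000-pixel scanline is the number assumed above):

```python
# Splitting a 4,000-pixel scanline between spatial (x) and angular (theta)
# resolution by choosing how many pixels sit under each lenslet.
total = 4000
for per_lenslet in (1, 10, 20, 400):
    print(f"{total // per_lenslet:>4} in x   x {per_lenslet:>3} in theta")
# 4000 x 1 (no lenslets), 400 x 10, 200 x 20, 10 x 400 (no main lens)
```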
So if my point is very far away, I don't need to sample this point in 400 different directions. I think I can sample it in 10 different directions [INAUDIBLE]. And then having the lens [INAUDIBLE], OK? But what Rob is saying is, I can start coming closer. I'll bring the object very close, OK? Those 400 directions are sufficiently wide, spanning [INAUDIBLE]. So you come really, really close, right? And it's almost 1 to 1 here. Then I really want to see all those directions. AUDIENCE: Got you. PROFESSOR: And then the analogy of what happens when you go from here to here is that-- remember, the main lens does a very good job of mapping the 4D light field of the real world into a 4D light field on the inside [INAUDIBLE]. To push the analogy a little bit: if you don't have a lens, that's not true anymore. You have a light field here and a similar light field here for this plane. And so imagine you've got a hologram. And the hologram has exactly 10 directions. So it's a 400-pixel hologram with 10 different directions. If I want to capture that, I should use a main lens here. But if I have a hologram with only 10 pixels, and it has 400 different directions-- I'm so close to it-- then it makes sense to get rid of the main lens, as in microscopy and so on [INAUDIBLE]. So Mark [? Levoy ?] has done some work on light fields in microscopes. He uses different modulations [INAUDIBLE]. And in the example that you saw of the sensor for looking at the aberration of light, there, the main [INAUDIBLE] are expecting to see a point that's very, very far away. But they aren't interested in taking an image of the point with that setup. They're just interested in finding the aberration. If they have a point very far away, and the waves are coming in straight-- [INAUDIBLE] main lens. If they're coming in straight, all the spot images are at the center. And with aberration, the sharp images are offset. So there are many configurations where you can decide on a main lens. And one thought experiment for you would be, what happens if there's more than one lenslet [INAUDIBLE] or more than one [INAUDIBLE] element? So you should look up something called the [? Gabor ?] super lens. It's a very fun concept where you actually put in two lenslet arrays right next to each other. With the right gap between them, you can create one [INAUDIBLE]. And it has very interesting properties because of how the focal lengths of the lenslets combine. A traditional lens [INAUDIBLE] copies the light field from outside to inside; the super lens does a very strange transformation. Think about [INAUDIBLE]. And once you start thinking about the world as not just 2D but 4D, and a photograph as having not just a position coordinate but also an angle coordinate, you'll realize there are lots of other examples where this 4D representation starts making sense. So near the end of the semester, we'll be studying medical imaging and scientific imaging using tomography and deconvolution and so on. And all those concepts-- in a CAT scan machine, they all work on this principle of being sensitive to position as well as angle. So in the case of a CAT scan machine, you have-- on this thing, they have a [INAUDIBLE], sometimes this chamber. And the patient goes in here. In the very first class, we saw how this behaves like a [INAUDIBLE], and there's typically a [INAUDIBLE] [? engine. ?] But what is it doing exactly?
There's a set of detectors and an emitter. And your head is in here. You turn on this X-ray source, and you take this image-- take shots. Then you move the source, and you take new shots. And this is basically capturing the light field of your body using X-rays. To simplify this diagram, imagine this is light. In the case of X-rays, the source is moving, and the sensors are moving as well in a [INAUDIBLE]. But to simplify this diagram, imagine I slap on this set of sensors, and I put the light at different locations and just do that [INAUDIBLE]. I'm going to cast a shadow [INAUDIBLE] here, [INAUDIBLE] here. And this is basically your [INAUDIBLE], and you see an X-ray. And every ray here can be represented in the x-theta space. And then it turns out that there's redundancy in this ray space, because [INAUDIBLE] inside your body, independent of which direction the X-ray comes in, attenuates it by the same factor-- whether it's bone, or muscle, or nerve. So, although your light field is four-dimensional, the inherent data is only two- or three-dimensional. And so you can invert that and recover the opacity at each [INAUDIBLE]. So the same thing can be done with a light field. If you capture this four-dimensional light field, now you can go back and estimate the depth of every point in the [INAUDIBLE]. Typically, you would use a stereo pair-- there are two cameras, and you do some correlation to estimate correspondence and [INAUDIBLE] estimate that. But in your second assignment, what you're going to do is capture the light field, and from the light field, estimate depth. And the way you're going to do that is, you're basically going to line up those images. If the point is in sharp focus at the plane where you're refocusing, all these [INAUDIBLE] here will look the same. OK, but if something is out of focus, there will be a shift in [INAUDIBLE]. So you'll have a shift at some point. And then all these values will be [INAUDIBLE]. And the fact that all the values are the same indicates that you're in focus there [INAUDIBLE]. Yes. AUDIENCE: How would we achieve [INAUDIBLE] focus? Like, focus on a [INAUDIBLE]. PROFESSOR: Yes, so that's a great point. So think about it-- you have points at different depths. And you're saying that in flatland, at the top of the image, we should focus here. In the middle of the image, we should focus here, and so on. So this is my plane of focus-- or rather, a surface of focus. So all you have to do is, for the top point-- there's a little bit more you have to do. But I'll give you a very, very high-level [INAUDIBLE] of how you should do this. For this point-- we have a lens here. For this particular direction, [INAUDIBLE] quite a lot of cameras. For this particular direction, you can just calibrate your [INAUDIBLE]. Put a box 1 meter wide. And I can say, within this box, at the top left, I want the focus to be at the back of the frame; and at the front [INAUDIBLE]. But you just calibrate that. And you can move the camera. And say that all the rays that go here, I'm going to add [INAUDIBLE]. But for the next pixel, I'm not going to add up all those rays. I'm going to add up some other rays. How do you assure that in the next [INAUDIBLE]? You know that for the point here, there'll be some value over here. But for a point here, there is some kind of slope. So if I just wanted to create an image that's-- let's make this very concrete. If we focus at infinity, I know I should just [INAUDIBLE]-- I just sum up everything along vertical lines.
If I want to focus close up, I need to shear this. So I'm going to take this, shear the ray space like that. And I'm going to sum up these values, OK, and store the value. I'm going to sum up these values and store the value. And this means I'm focusing at this point. If I want to focus on a slightly different plane, then-- so vertical is the infinity case, and to focus on some other plane, it's a different slant. So now your question is, how can I do it differently for different pixels? So on the edges, I will just do a vertical projection. In the middle, I will do a slanted projection. And for all the places in between, I will do a line in between. And then I'll come back. For that, we'll get [INAUDIBLE]. And you can imagine all the crazy tricks you can do here. So in one part of your assignment, you're going to look at seeing through things. And for that, what I want you to do is take your [INAUDIBLE] cameras and then create some kind of [? a fence ?] here. So in the real world, I'm guessing that sometimes we'll [INAUDIBLE]. It is a single part. And then there is some book or some painting back here. And you're going to take 16 images. And this is red, and this is green. You'll be able to see through it purely by focusing on the back. So when you focus on the foreground, you'll maybe shift a few pixels. And [? the shift ?] on the background will shift backwards. But as extra credit, what you can do is not just refocus, but also do some analysis of, when you shift and add those pixels, whether all those pixels have the same value or not-- as shown in the sketch below. So if the red object wasn't there and you just focused on the green, all the pixels here would be green. All these values would be green. But if you have some object here, some of these pixels would be red. Yes. AUDIENCE: What is [INAUDIBLE]? PROFESSOR: Here? This is x, and this is theta. Theta is the camera number, and x is the pixel within the camera. And because you're doing this one scan line at a time, every row is independent. So think of this as only the center row of each of the 16 cameras. So going back to red versus green-- for your first attempt, you'll see, you know, you refocus on green, and the object will look mostly green, but reddish in some areas. But then you can do a simple [INAUDIBLE]. Since there are 16 cameras, and only four of them-- so 12 of those values are green, and only four of those are red-- you don't have to sum all 16. You can say, what's the color that's in the majority? And that color is green. So you will not get a reddish tinge to your photo; you'll get rid of the red [INAUDIBLE]. So it's not a simple linear projection-- it's not just the sum of pixels. You're going to pull out the colors that are in the majority. And that will give much better [INAUDIBLE]. And again, this is the second subpart of this assignment. If you don't know how to do it, that's fine. But the main concept you want to learn is [INAUDIBLE]. When you're taking the pictures-- I have plenty of instructions in the assignment-- make sure they're taken at equal distances and you move the camera in a [INAUDIBLE] fashion. Make sure the scene has vibrant colors, so this color [INAUDIBLE] is sufficiently different from [INAUDIBLE] so that [INAUDIBLE]. And before you do any of this, use the data set [INAUDIBLE] to test [INAUDIBLE].
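A sketch of that majority idea: after shifting the views to register the background plane, replace the mean with a per-pixel median across cameras, so the few views that see the occluder are outvoted. The median is a simple stand-in for the majority-color vote described above, not the assignment's prescribed method:

```python
import numpy as np

def see_through(aligned_views):
    # aligned_views: array of shape (num_cameras, H, W) or
    # (num_cameras, H, W, 3), already shifted so the background plane is
    # registered across views. The median over the camera axis rejects
    # the minority of views blocked by the foreground fence.
    return np.median(np.asarray(aligned_views), axis=0)
```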
And just take 100 by 100 pixels [INAUDIBLE]. And just run your code, make sure everything's fine. And then you can go and take the real images. So you're looking for [INAUDIBLE] images. Run your code on those [INAUDIBLE], because if you start with your own images and you're not getting correct answers, you don't know if the problem is with your photos or with your code. And does Photoshop allow you to shift [INAUDIBLE]? AUDIENCE: [INAUDIBLE] AUDIENCE: You can put-- you can just put a whole bunch of data as well. You don't [INAUDIBLE]. [INTERPOSING VOICES] PROFESSOR: Go tinker with that. AUDIENCE: [INAUDIBLE] PROFESSOR: OK, but you can [INAUDIBLE] and do the same thing. And you can just shift, because you can type in the numbers-- shift right, shift right, shift right. And you can also-- so you can do that. Of course, I recommend doing this [INAUDIBLE]. In MATLAB, this is where you find the [INAUDIBLE]-- read images, shift, [INAUDIBLE]. But just because it's easy [INAUDIBLE], don't wait till the last day [INAUDIBLE]. Start today, because the fun part of this is actually taking these images and creating scenarios where you can see through. And, you know, this should go on the Flickr page. And we will comment on each other's work. For the first assignment, only five or seven of the submissions were available on Flickr. So make sure yours is there. The first assignment was all about getting your pipeline up and running. Almost everybody got an A, so you should feel good about yourself. But somebody got an A-plus, and others got an A. So [INAUDIBLE] good job. So yeah, make sure it's on Flickr and all those three places, and feel free to send me an email. You can talk to Professor Oliveira or Professor [INAUDIBLE] about any of this, or also Ankit. And [INAUDIBLE] is also here if you want to talk about 3D scanning and so on. And use the forum on the Stellar website and so on. So today's lecture was very much focused on the theory behind how we think about these problems. And starting next week, we'll be looking at very different applications and different tricks we use in optics. So just go through the slides I posted today. They're conceptual. But it will take you some time to really grasp them. And again, as I said, it might feel somewhat hard to think in the dual space of x and theta, as opposed to just thinking about the image directly. But as you will see later, especially in your projects and so on, this will greatly simplify the way you think about your problems. Cool? Have a good weekend. AUDIENCE: Thank you.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_3_Epsilon_Photography_Improving_Filmlike_Photography.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. ANKIT MOHAN: Hello, everyone. I'm Ankit Mohan. I'm a post-doc with Ramesh. And I'm sorry he's not here today. I'm going to be talking briefly about what's called epsilon photography, or film-like photography-- actually, how we can improve film-like photography. And I'm not sure-- some of this might be very straightforward and obvious to many of you. So if it seems like I'm going too slow, please let me know. Or if you want more detail on any of these topics, again, stop me and ask the question. So before we can try to improve film-like photography, we should understand what I mean by film-like photography. And this is basically what's called the camera obscura model, where you have a pinhole, or a center of projection, and you have rays of light that go through that point and form an image on the sensor or film plane. So what you see over here is-- on the left, this is traditionally how you draw an optics ray diagram. You have the object or the scene on the left. And rays always travel from left to right. And there are people who do hardcore optics who can get really annoyed if you don't follow this model. So it's always a good thing to go from left to right. So you have the scene on the left. You have a center of projection, which is a pinhole, in this case. What I haven't shown here is-- basically, you can imagine a box that's surrounding the center of projection and the sensor. And only a single point allows light to go through. What this gives us is that a single ray from every point in the scene is allowed to go through the camera and forms an image on the sensor. Now, most objects around us are actually diffuse. Technically, this is called Lambertian. What that means is that when you have incident light coming onto an object, it reflects light in all directions. And most objects are diffuse, in that all the rays that come out of a point on the object have roughly the same intensity, whereas the other case would be a specular object, which reflects light in primarily one direction and not in all directions. So because most objects are diffuse, when you have a pinhole camera taking a photograph, it looks very similar to what it appears to the eye. So it captures most of the information coming from the scene. Now, what a lens does is slightly different, in that it actually integrates over an angular extent. So in this case over here, you have rays coming from a point in the scene. But not just one single ray gets imaged on the sensor. You have a whole cone of rays that gets imaged on the sensor. So in this case, all the rays coming from a point in the scene that go through the lens aperture get focused onto the sensor plane. And this is basically how a lens works and how a modern camera works. Now, again, this is very straightforward stuff. But a lens obeys certain properties, in that the distances have to satisfy the thin lens equation-- 1 over f equals 1 over u plus 1 over v. And what this basically tells us-- and I'm going to skip over some of this stuff-- is, I think, the most important thing: if you have a lens, then only one plane in the scene gets imaged onto the sensor exactly.
And there's a one-to-one correspondence between which plane is going to get imaged, based on the focal length and the distance between the lens and the sensor. This was not the case with a pinhole. In the case of a pinhole, everything appears in focus. And you have what's called an infinite depth of field. So unlike a pinhole camera, a camera with a finite aperture lens actually has a finite depth of field. Now, depth of field has an interesting definition. If you look it up in Wikipedia: in the case of film-based photography, it was defined as, when you take a picture of a scene and print it at a certain size, and a standard human observer stands some finite distance from it, can he or she tell the difference between two points-- whether one is in focus or not in focus? So based on that and certain perceptual tests that they did, they came up with this definition of how far you can get from the plane that's going to be in perfect focus and still give the appearance to a viewer that the plane is in focus. So in the case of the digital camera, what it roughly translates to is that when you go away from the plane of focus-- if you look over here-- rays are not going to focus to a point, but they're going to create a disk-like blur. And if the size of this disk-like blur is smaller than the size of a pixel, usually you cannot tell the difference between whether it's in focus or out of focus. So there is this finite region around the plane of focus that's called the depth of field. And it's actually not symmetric. It's usually greater behind the plane of focus and smaller in front. And there's a corresponding depth of focus on the sensor side. So are there any questions about that? Is this obvious stuff? What's interesting is that the size of this depth of field depends on the size of the aperture. So in the case when we had a pinhole, where our aperture size was infinitely small, the size of the depth of field is infinitely large. So everything is in focus. But as we increase our aperture size-- like you can see from here, we went up here-- because the cone formed has a much larger cone angle, the region in which the size of the blur would still be smaller than a pixel becomes smaller. And you have a much shallower depth of field. So this is something that photographers often use when they take pictures like portraits or macro photography: they try to open the aperture, or keep the aperture as wide as possible. And that results in a very shallow depth of field. So only the plane of interest is in focus. Anything behind and in front of it appears like a blur. It has a nice blurry appearance. On the other hand, if you're doing something like landscape photography, you want the tree that's 10 meters from you to be in sharp focus and also a mountain that's five kilometers away to be in sharp focus. So usually, people use a smaller aperture size in order to get everything in focus. So we'll come back to this a little later when I talk about how you can computationally modify the depth of field and things like that. But in general, it depends on the application. The application dictates what kind of depth of field you need. And most cameras give the photographer an opportunity to set the aperture size, which sets the depth of field.
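A toy circle-of-confusion calculator for this thin-lens picture (a standard geometric formula; the sample numbers are made up): a point at distance d, with the lens focused at u, blurs to a disk of the diameter below, and it reads as "in focus" while that disk stays under a pixel.

```python
def coc_diameter(f, N, u, d):
    # f: focal length, N: f-number (aperture diameter = f/N),
    # u: focused distance, d: actual subject distance -- same units.
    # Geometric blur-disk diameter on the sensor for a thin lens.
    aperture = f / N
    return aperture * (f / (u - f)) * abs(d - u) / d

# 50 mm lens at f/2 focused at 2 m: blur of a point at 3 m (in mm).
print(coc_diameter(f=50.0, N=2.0, u=2000.0, d=3000.0))  # ~0.21 mm
```

Raising N (closing the aperture) shrinks the disk for every d, which is exactly the wider depth of field landscape photographers rely on.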
So because there is only a single plane in the scene which is actually in sharp focus, if you use a camera that does not have a pinhole aperture, which is most cameras, you need to be able to select which plane you want to focus on. And that's usually done these days using autofocus. Cameras use different techniques for autofocus. The most common one these days in SLR cameras is phase-based autofocus, which is a really interesting technique that I think was first proposed by Minolta way back in the late '70s. And what they essentially do is form two separate images from different parts of the main lens aperture. What they essentially create is a rangefinder, where the baseline is equal to the diameter of the lens aperture. What that means is essentially doing stereo, or creating one image from one corner of the lens and another image from the other corner of the lens. And looking at those two images, if the scene is in focus, those two images are going to be exactly the same. If it's not in focus, there's going to be a phase mismatch. And by observing the phase mismatch, you can determine which direction the lens needs to move in and by how much. So it's a single-shot focusing technique where, by just getting this one reading, you can move the lens in the right direction and get an in-focus scene. And this is usually very fast because you don't have to constantly keep searching. The downside is that you need special hardware in your camera. And most SLR cameras have this kind of hardware in them. Another technique that most point-and-shoot compact cameras use is contrast-based autofocus, where-- since you have a live view coming from the sensor directly-- you can look at one frame and try to maximize the contrast, moving the lens back and forth until you get the maximum contrast. And since you don't get an estimation of the phase, like in the previous case, it's not a single-shot operation. You usually have to search through the span of the focus settings, find out which one has the maximum contrast, and stop over there. That's where it's in focus. And it's usually slower than the previous case, but you don't need dedicated hardware in order to do this. And most compact cameras use this. Another technique that some of the older film-based compact cameras used was ultrasound- or infrared-based estimation of how far the scene is. And it's something that's not very prevalent anymore. And it's also not very accurate. Another technique that I don't mention here is what's called a rangefinder camera. And usually, that's a separate unit from the main camera. The difference here is that in the first two cases, the autofocus occurs through the lens. So what falls on the image plane is what's used to determine whether it's in focus or not. In the case of a rangefinder camera, there is a separate unit, which basically does this shifting and trying to find when the images are aligned. And it's usually done manually rather than automatically. I think the important point over here is that there's lots of work that went on, even before computational photography came into being, in the area of trying to very quickly, effectively, and repeatably set the focus automatically on a camera. And there's lots of engineering that's gone into that.
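As a rough illustration of the contrast-based autofocus just described, here is a sketch that sweeps focus positions and keeps the one with the highest image contrast. The capture_frame hook is hypothetical, and the sharpness metric (variance of a discrete Laplacian) is one common choice, not necessarily what any particular camera uses.

```python
import numpy as np

def laplacian_contrast(img):
    """Simple sharpness metric: variance of a discrete Laplacian."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def contrast_autofocus(capture_frame, focus_positions):
    """Sweep the focus motor, score each frame, return the best position.
    capture_frame(pos) -> 2D grayscale numpy array (hypothetical hook)."""
    scores = [(laplacian_contrast(capture_frame(p)), p) for p in focus_positions]
    return max(scores)[1]
```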
So focus was the first thing that a camera needs to worry about when it tries to take a picture. The second thing is what's called exposure. And what I'm trying to show here is that the brightness of something in daylight versus something in the dark is widely different. And you have just 0 to 255-- 8 bits, or, at most, 12 bits or 14 bits-- to work with in order to compress all of that information. And usually, this span, or the dynamic range, cannot cover more than two, or three, or four decades at most. And so what needs to be done is to decide what exposure to use on the camera. And so this is a scene that goes from underexposed to overexposed. Overexposure means you let too much light into the camera, versus underexposure, where there wasn't enough light and the image came out dark. So exposure itself is comprised of three things. One is aperture size-- the larger the size, the more light comes in, and the brighter the image is going to be. The second is shutter speed, how long you keep the shutter open for-- if you keep it open for longer, you get more light in and the image is brighter. And the third is the film sensitivity-- in the case of film, how much of the incoming light causes a chemical change; in the case of a digital sensor, it's the analog-to-digital converter gain, which is what is set by the ISO. So these three things together determine what the exposure should be on your camera. So if you set a certain shutter speed, you need to determine what the corresponding aperture size and sensitivity should be before you can take a picture. And once again, older cameras required you to do this manually. Usually, you would have film of a certain sensitivity-- say, ISO 100. You would select the aperture size. And then, based on some rule of thumb or using an exposure meter, you would determine what the shutter speed should be. So this was drastically changed by Nikon in, I think, the mid or late '80s, when they proposed the Nikon matrix metering scheme. And the idea over here is-- so this is what an SLR camera looks like. You have the main lens. You have a mirror. The film plane is back here. The light coming in gets reflected up here into the pentaprism. And inside the pentaprism, it bends around and goes out through the viewfinder into the viewer's eye. But what also happens is that another small mirror reflects light up to the top, where there are these five different zones, I think. And they had a light meter at each of these zones, which was basically capturing how many photons are coming in at that zone. So even before the picture is taken, the camera knows how bright the scene is. And based on that and some heuristics that they came up with, they determine what the correct exposure should be for the given photo. And this was considered a very revolutionary technique back then. It did away with all the various rules of thumb that people had come up with before this in order to estimate a good exposure. And this is what led to the change where you can just use Auto mode on a camera. You can just press the shutter release. And you don't have to worry about either the focus or the exposure. And once again, I'll come back to these things in the realm of computational photography and computational cameras in a bit.
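Aperture, shutter speed, and sensitivity can be folded into a single exposure value (EV): settings with the same EV give the same image brightness. A quick sketch using the standard relation EV = log2(N^2 / t), referenced to ISO 100; the heuristics that matrix metering applies on top of this are proprietary, and the numbers below are just examples.

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).
    Equal EV means equal brightness; aperture, shutter speed, and ISO
    can be traded against one another."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# These three settings all give (nearly) the same exposure:
print(exposure_value(8.0, 1 / 125))           # ~12.97
print(exposure_value(5.6, 1 / 250))           # ~12.94
print(exposure_value(4.0, 1 / 250, iso=50))   # ~12.97
```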
The one last thing I want to touch on is the concept of color in digital cameras. Most digital cameras have what's called a Bayer filter. And it looks kind of like this. So adjacent pixels have different colored filters placed on top of them. And usually, there are two green filters for every red and blue filter. So the image you get is this interspersed blue channel, red channel, and green channel on the same sensor. And then they use demosaicing, or interpolation techniques, in order to recover a high resolution image in color. There are other sensors, such as the Foveon sensor, which do this splitting in depth rather than spatially. So for each pixel, they get a red, green, and blue color value. One more thing I wanted to say over here is that, of the electromagnetic spectrum that ranges from radio waves to gamma rays, it's only a very small portion that we are interested in for photography. It's usually from 400 to 700 nanometers. And this region actually gets split up into these three channels, the three color channels that you have-- red, green, and blue. But the only reason you have these three channels is because of the human eye, which also has a similar three channels. And cameras try to mimic the functioning of the human eye in that sense. But if you look at multispectral cameras-- and I think we'll come back to that in some other class-- you can have a whole number of channels within the 400 to 700 nanometer range. So this is the CIE chromaticity diagram. This is how the human eye visually interprets color. It's on an xy scale. And what you have on the boundary over here are the pure colors, or the primary colors, that correspond to pure wavelengths going from 380 or 400 to 700 nanometers. So anything that lies on this boundary is a pure color, just a single wavelength. That's what a laser or some LEDs would give you. Anything within this is a mixture of various colors. And the interesting property of this color space is that if you take any two points in this color space and you mix those two colors in various proportions, you're going to get a color which, perceptually, lies on the line that connects those two points. And so if you have a triangle like this, which is a color space-- the sRGB color space in this case, which is what most monitors and LCDs use-- and you have color primaries at the three vertices and you mix those color primaries in various proportions, you're going to get a color within that triangle. And by simply varying the weights of the three primaries between 0 and 1, you can go from completely white, which is in the center, to one of the three colors. And that's what the color response curves for the red, green, and blue color primaries look like for film and for a typical digital sensor. What's interesting to note is that even though we've advanced quite a bit from film to digital, the basic technique still remains the same. We still have the same three color primaries. They look almost identical. There's very little difference between them. And that's one of the goals of computational photography: to do away with the baggage that we still have associated with film. And part of this lecture is actually going to go in the other direction and say, how can we improve on that? So the rest of the course is going to be more about how we can get away from film, whereas this lecture is more about how we can improve on film.
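Going back to the Bayer mosaic for a moment: since each pixel records only one color, the missing channels have to be interpolated. Here is a minimal bilinear demosaicing sketch for an assumed RGGB layout; real cameras use much more sophisticated, edge-aware methods.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (H x W raw array)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                          # red sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # two green sites
    masks[1::2, 1::2, 2] = True                          # blue sites
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    for c in range(3):
        samples = np.where(masks[:, :, c], raw, 0.0)
        num = convolve2d(samples, kernel, mode='same')
        den = convolve2d(masks[:, :, c].astype(float), kernel, mode='same')
        rgb[:, :, c] = num / np.maximum(den, 1e-9)  # normalized average
    return rgb
```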
AUDIENCE: In the graph that you have shown, it looks like, with the film, the colors being sensed are more orthogonal than with the digital sensor. You see the blue is leaking into the green and the red is leaking? But there, it seems there's much less leakage. ANKIT MOHAN: Yeah. AUDIENCE: Is it in general true? ANKIT MOHAN: I'm not sure, in this case, why it's like that. And also, note that this is just one film, which is optimized for certain kinds of photography. I think Velvia is supposed to be very good for landscape photography, and sunsets, and those kinds of things. And that's something that you could do with film. You could have a film that's suited for a particular task and has different primaries, whereas for digital sensors, it has to be something that works across the board for different kinds of scenes and things like that. So that could be the reason why it's like that. AUDIENCE: In the previous slide, [INAUDIBLE] two green squares and only one for red and blue. ANKIT MOHAN: Yeah. AUDIENCE: Is that because the eye is more sensitive towards the green channel? ANKIT MOHAN: Yes. I think that's roughly the proportion of the cones in our eye also. And if you look at the relationship between RGB and luminance, green is the one that has the most weight. Yeah? AUDIENCE: [INAUDIBLE] You mentioned the film can be more specific [INAUDIBLE]. Would it make sense-- would it be possible to actually have different kinds of sensors that would be specific for different kinds of photography in digital? ANKIT MOHAN: It's hard for you to change sensors once you have a sensor and it's baked in. AUDIENCE: Yeah, I mean, if we could change the sensor-- ANKIT MOHAN: Yes, yes. And some of the stuff that, I guess, at some point we'll talk about in this course is that there has been work on how to make more flexible digital sensors-- not just digital sensors, but how to make the whole camera more flexible so you can programmatically change those responses. And you could do something of that sort. But it turns out that, for most photography, it doesn't matter that much. And just by doing things in Photoshop, if you have enough bit depth over there, it doesn't matter too much. But it does matter for things like remote sensing, where, even between 400 and 700 nanometers, the spectrum is divided into 30 or 40 different channels, which are almost completely orthogonal. And so, going back to what you were saying, if you look at the response curve of the human eye, even that has a huge overlap. So it's actually quite similar to this one. It's not [INAUDIBLE]. Any other questions? So that was a very quick overview of what I thought would be useful for you to know about film photography in general. And what I'm going to talk about during this class is epsilon photography. And this is a term that Ramesh coined some time back. The goal of epsilon photography is to improve on film-based photography-- not to try and do something new, but to do what we could already do with film, and do it better. And the way it's done in almost all cases is by taking multiple pictures, or capturing more data. So you capture multiple photos, each with slightly different camera parameters. And usually, the parameters that you vary are the exposure settings, the color settings, the spectrum settings, the focal settings, the camera position and the direction in which it's looking, or even the scene illumination. So you change one of these settings.
And you capture a whole number of images. And then you somehow use an algorithm to combine those images together. And you get one image that looks better than any one of those individual images. That's basically what epsilon photography is. And there are a number of ways in which you can do this epsilon photography. And I'm going to go through each one of these one by one. You could take multiple pictures over time-- take one picture, save it, take a second picture, save it, take 10 different pictures, and then combine them together somehow. Or you could do it over sensors. You could have 10 different cameras co-located at the same point and take one picture with each camera at the same time. Or you could do epsilon over pixels-- and I'll get back to that in a minute. Or you could do a combination of all of these. So epsilon over time is the most common. And it's what most photography manuals refer to as bracketing. The idea of bracketing is a little different because, in the end, you end up using just one image. So when you're not sure of what the exposure should be, or you're not sure of where you should focus, you take multiple images with slightly different exposures, or slightly different focus settings or aperture settings. And most cameras have built-in features for doing this. So you just have to press the shutter button once, and it takes five images for you. And then when you go home, you can decide which one is the best and just use that. Epsilon in time is similar. You take multiple images. But then you use some algorithm-- something smart-- to combine those images together and get one resulting image. So the case where it's most commonly used is for high dynamic range photography. And I believe Ramesh talked about this-- in the last class, he mentioned it. So as I was saying earlier, you need to have the correct exposure in order to get the image of a scene. It turns out that, for many scenes, even if you have the correct exposure, you cannot capture everything that the scene contains. Your scene can have very bright parts, such as daylight, and very dark shadow regions. And the contrast ratio of these two can be as high as [INAUDIBLE]. And most cameras would not capture anything with a ratio of more than about 1,000. So one way of getting around this limitation is to capture a number of images, and then use an algorithm to combine all those images together and create what's called a high dynamic range image. And I'm sure we've all heard of this term. If you just go on Flickr and search for "high dynamic range images," you will get millions of pictures that people have captured using this technique-- just capturing a bunch of images and putting them together. And there's been lots of research into how you should put these images together. And it turns out that once you've done all of this, there's a related dual problem, which is how you display that image. And I'll get back to that in a minute. But one way of displaying it is what's called tone mapping. And there are sophisticated algorithms for how you compress a 12-bit or 14-bit image back to an 8-bit image. And there's interesting work in that area, which we're not going to cover in this class.
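Here is a minimal sketch of the merging step for an exposure bracket: divide each frame by its exposure time to estimate radiance, and weight down under- and overexposed pixels. This is in the general spirit of published HDR merging methods, assuming linear, radiometrically calibrated images; the hat-shaped weight is my simplification.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge linear images (list of float arrays in [0, 1]) taken with
    different exposure times into one radiance map (epsilon over time)."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # trust mid-tones the most
        num += w * img / t                 # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)
```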
Another example of epsilon over time is one that I really like. This is, I think-- I'm not sure-- one of the first color images, or it's from the set of one of the first color photographs ever produced. And it was by a photographer whose name I cannot pronounce. He went around Russia in the early 20th century. And he took a whole bunch of photos of people just living their lives-- farming, hunting, just sitting, and things like that. And the way he took these pictures was he would take three images-- one with a red filter placed in front of the camera, then one with a green filter, and one with a blue filter. And then, once he had processed his images and so on, he developed a projector which would project a red image, a green image, and a blue image onto the same screen. So when you were viewing it, you would view a color image. And until about 10 years ago, these were just plates lying in the Library of Congress, which were then digitized and hand-aligned. And now, you can download all of these color images from their website. So this is the very simple case of epsilon in time. You just take three images with three different filters in front of your camera.
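The plates in that collection were hand-aligned, but the alignment is easy to automate. (The photographer he is describing is, I believe, Sergei Prokudin-Gorskii, whose plates are held by the Library of Congress.) A minimal sketch: search over small translations of one plate against a reference channel and keep the best match. Real pipelines use image pyramids and more robust metrics than this raw correlation.

```python
import numpy as np

def align(channel, reference, max_shift=15):
    """Find the integer (dy, dx) shift of `channel` that best matches
    `reference`, by exhaustive search over small translations."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * reference)  # simple correlation
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# e.g., align red and blue plates to the green plate, then stack the
# three aligned channels into one color image.
```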
Another example, which is probably used in this very projector, is that most DLP projectors have a color wheel, which spins in front of the DMD mirror. And part of this-- it's a little hard to see. But I think this is red, green, blue, and then it's green again. And I think this part is just white in order to increase the intensity-- or maybe this is red and this is just transparent. But when you have the red part of the wheel in front of the DMD, you project the red image. Then when you have the green part, you project the green image, and so on. And this happens really, really fast, so that when you actually view the projected image, the eye integrates over time and gives you the full color image. And one way to see this happening is if you take a camera and capture an image with a really fast shutter speed, of about 1 over 1,000-- you can actually sometimes get half the screen green and half the screen blue. Or you can get really interesting [INAUDIBLE] if you try to do that. Now, this won't work if you have an LCD projector, because an LCD projector actually uses a color LCD. And you get all the colors at the same time. It's actually spatially interpolated. It's a spatial sort of multiplexing instead of a temporal thing like this. So this was doing epsilon over time. The next one is doing epsilon over sensors. And this usually means two things. You can either have multiple cameras, or you can just have multiple sensors within the same camera. And multiple sensors within the same camera is what's popularly called a 3CCD imaging system. It's what's used in most high-end video cameras and camcorders. And you have this trichroic prism assembly, where, depending on the wavelength, the rays either just pass through or undergo total internal reflection. And so the red, green, and blue images are formed on three different sensors, which are exactly the same optical distance away from the scene. I think that image over there shows white light coming in from behind the prism, and then the green, blue, and red components getting separated as they go through. And so at the same time, using three different sensors, you can capture the three color channels. So it's similar to putting the three filters in front of the camera, but it happens at the same time. So you can use this for moving objects and so on. And also, the sensor itself usually has a very broad spectral response. So it actually responds to any incoming wavelength between 400 and 700 nanometers. It's only the prism that does the separation. AUDIENCE: So why is this being used? Why is that not being used in digital SLRs? ANKIT MOHAN: It's just big and clumsy. I mean, I think the question I would ask is the opposite: why don't they use Bayer sensors in camcorders? And I'm not entirely sure why. I think this might be something that's just stuck around since the first camcorders were developed. And that was probably before Bayer pattern filters became popular. AUDIENCE: [INAUDIBLE]. ANKIT MOHAN: Right. So probably those edge effects show up more in a video camera than they do in still photography. I'm not really sure why they're still in use. I mean, they're definitely better. They do give a higher resolution, as was pointed out. So recently, Morgan McGuire and others at Mitsubishi Research, which is just across the street, took this to the extreme. And they said, instead of just having three of them, why don't we have eight of them and make a whole tree of these kinds of multiple optically co-located cameras? And so they came up with a very interesting beam-splitter arrangement. And each of these eight cameras is actually optically co-located at the same point. And so the image formed on each of them, if nothing else changed, would be exactly the same. But what this gives you the flexibility to do is that now each of these eight can have different focus settings, for example. They can focus on a different plane. Or they can have a different color filter in front of them. And you can get eight different spectral channels at the same time. And what's shown on the right, I think, is that he shines a laser through the cameras to align them-- to see that the ray of light actually goes into each one of those cameras. So he used this setup-- or a simplified setup-- for doing matting, or defocus matting, where he used, I think, two or three cameras focused on the foreground plane and one focused on the background, in order to do a separation between what's in the front and what's in the background. But it's certainly something that can be used for various other things. It's basically the same concept as multiplexing over sensors. Another way you can do this is simply by using camera arrays. And this was work, I believe, on one of the first camera arrays. And it was done at CSAIL. The difference between camera arrays and a SAMP or 3CCD setup is that this imposes a certain epsilon on your setup: there's always going to be an epsilon in the camera position. You can put other epsilons on top of it. You can have a different filter in front of these cameras, or you can have a different focus setting on each of these cameras. But just by itself, it gives you an epsilon in the camera position. So the cameras in this camera array are not co-located. They're slightly translated from one another. And that itself can be used to give you interesting things, as I'll get back to later in the talk. But this is another way of doing this kind of epsilon.
And Stanford has their own version of camera arrays. And now, you can actually just buy a camera array-- a 5 by 5 ProFUSION camera. We have one upstairs. It's one unit, and it actually has a very well-aligned and precisely calibrated array of cameras. The last one is epsilon over pixels, where different pixels are actually capturing different information. And we already talked about one example of this: the Bayer filter is essentially doing this. Each pixel has a different color filter in front of it. And [? Roarke ?] is going to talk about another technique a little later today, which is a very clever way of extending this and allowing you to do various other things without having to place the filters on the pixel itself-- placing them elsewhere, which is easier to do. So going back to-- whose question was it? Someone asked, can you change the shape of the filters, or the color response? [? Roarke ?] will show you a way of doing that by simply putting something in front of the lens. And then you can have epsilon in multiple axes. So this is a very cute camera that we also have upstairs. It's got four lenses. And so it forms four images on the film-- it's a film camera. And it also has four flashes. The reason it has four flashes is not to fire four flashes at once, but so that it can strobe them very quickly one after the other. So it opens one lens at a time. And when that lens is open, it strobes one of the flashes. So you get four images, which are from slightly different viewpoints and taken at slightly different time instants. And there's a whole website full of creative stuff that people have done with these kinds of cameras. But this is epsilon in time and also in the position of the camera-- so in sensors. Yeah? AUDIENCE: So is this on film? ANKIT MOHAN: This is on film, yeah. AUDIENCE: And so does it give you four pictures on the film? Or are they superimposed? ANKIT MOHAN: They're not superimposed. They're four distinct pictures. AUDIENCE: Right. ANKIT MOHAN: So this is work done by [? Srini ?] some time back, which brings it all together in one nice package, where you can do all of these things together. And it's what they call generalized mosaicing. Are people aware of what mosaicing is, or how you capture a panorama? Basically, if I want to capture a panorama of this whole scene, I will take an image here, move, take an image here, move, take an image here, and take an image here, and then just stitch them together to create a mosaic which has everything in it. So what they came up with is, instead of taking each image with the same settings, they put a filter in front of the camera-- so that's the camera-- some distance between the scene and the camera itself. And this can be a filter which has a ramp, a neutral density gradient, or different spectral bands, or different polarizations, or even different focus. And instead of taking one image at a time, you simply take a video as the camera is rotating. And from the data that you capture, you can get either a high dynamic range image of the whole scene, or a multispectral image of the whole scene, or an image that's focused at different points. So the way to think of this is that when you're taking an image here, for different scene points-- something over there-- I'm going to get the blue channel of a pixel here. But when I rotate it, I'm going to get the green channel of the same pixel.
I rotate it some more, and I'm going to get the red channel of that pixel. So you just do this complete panoramic motion. You have some missing data at the edges. But for anything in between, you'll be able to recover the complete information. That's why they call it generalized, because you could use it for any one of these things. And I think they also built a camera prototype like that which was more portable. And they just put this filter in front of the lens. So we sort of already discussed this one, which is doing HDR capture with multiple images. You just take a whole bunch of images. And you can combine all that information together. And this is doing HDR over time. So that was this one. So I just wanted to take the example of high dynamic range imaging and see how we can do this over time, over sensors, and over pixels. So this is the first one, which is doing HDR over time. This is the second one, which is the generalized mosaicing, which is sort of in between the settings: you just put this filter in front of your camera, rotate the camera, and take a video. This one is using multiple detectors. This is similar to the SAMP or the 3CCD setup that we saw. You have multiple cameras that are optically co-located and take images at the same time with different filters in front of them. So they have different exposure settings on each one of them. And as you can see, each of these areas has had lots of work done in it. So the last one is using what's called assorted pixels. I think that's a more general version of the other two. It's similar to the Bayer mosaic. But instead of having just an RGB Bayer mosaic, they actually had two or three different levels of neutral density filters also placed over the pixels. So each pixel-- this blue pixel is different from that blue pixel in the amount of light that it captures. And so you can do an interpolation in color. And you can also do an interpolation in intensity in order to get a high dynamic range image. And this is essentially what they call assorted pixels. But it's more like a generalized Bayer pattern-- generalized Bayer filtering. And you can put polarization filters on top of it as well. Or you could have other colors than just RGB. And again, [? Roarke ?] is going to talk more about this later. And this was actually work done at Columbia in collaboration with Sony. And Sony actually made a camera that did this. It was only a prototype. It was never sold. But they were able to get this picture from an assorted pixels camera, which has a much higher dynamic range and captures all three color channels at the same time.
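As a rough sketch of the assorted-pixels / spatially varying exposure idea just described: neighboring pixels sit behind different attenuations, each sample is normalized by its attenuation, saturated samples are discarded, and the valid estimates are averaged over small neighborhoods. The 2x2 attenuation tile and the thresholds below are illustrative assumptions, not the actual prototype's layout.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sve_radiance(raw, attenuation_tile=((1.0, 0.5), (0.25, 0.125))):
    """Radiance from a spatially varying exposure capture: each pixel saw
    raw = radiance * attenuation, on a repeating 2x2 attenuation tile."""
    h, w = raw.shape
    tile = np.array(attenuation_tile)
    atten = np.tile(tile, (h // 2 + 1, w // 2 + 1))[:h, :w]
    valid = (raw > 0.02) & (raw < 0.98)        # drop noisy / saturated pixels
    est = np.where(valid, raw / atten, 0.0)    # normalize by attenuation
    weight = valid.astype(float)
    # average the valid estimates over small neighborhoods
    return uniform_filter(est, 3) / np.maximum(uniform_filter(weight, 3), 1e-9)
```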
This is another example of doing this high dynamic range capture over time. And the way this was done is that you place an LCD in front of the sensor which is of a much lower resolution than the sensor itself. You capture the information on the sensor. And you see certain pixels get saturated-- they're too bright. And in the next time step, you actually put a darker patch over those pixels so that you compensate for the brightness. And by iterating through this, you can get an image which has a lower dynamic range on the sensor. But once you combine that with the information that you pumped into the LCD, you can then recover a high dynamic range image. The reason I wanted to mention that is because of this one, which is actually work done by Wolfgang Heidrich at the University of British Columbia in Canada, who gave a talk in our group some time back. And this is technology that's been bought by Dolby. And they're actually using this stuff in Samsung LCDs now. But this is a way of generating a high dynamic range display. And the setup is very similar to the previous one. And it's actually very simple, in that instead of just using a projector and projecting on the screen, which is what a bare projector display does, they have a projector, and then they have an LCD in front of it. So they have two layers of control. And they get the square-- not just twice, but the square-- of the control they had earlier over the dynamic range of what they can display. I think this is a very early prototype over here. So they have two layers of modulation-- one inside the projector itself and one placed over there. But you can also just do it with two layers of actual physical LCDs. And it turns out that the LCD at the back can be of a much lower resolution than the LCD in the front. And just using that, you can get a very high dynamic range. And I think most HDTVs and so on also have this thing where they can dim the backlight. So they get this very high "dynamic contrast," which is sort of confusing. But it's essentially not just modulating the LCD, but also modulating the backlight. That's essentially what this is doing, except not with the whole backlight at once-- the backlight is modulated differently in different parts of the screen. Yeah? AUDIENCE: Well, how come you can use a lower resolution for the front LCD? ANKIT MOHAN: No, not for the front-- for the back LCD. For the front, you still need full resolution, because the back LCD is essentially acting like a backlight. I think they also had a diffuser here, which anyway reduces the resolution of the back LCD. Otherwise, you might get weird edges and so on.
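A minimal sketch of this dual-modulation idea: factor the target image into a blurry, low-resolution backlight layer and a full-resolution front-LCD layer whose product reproduces the image. Taking the square root of a blurred copy of the target for the backlight is a common simple heuristic, not the exact commercial algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dual_modulation(target, sigma=8.0):
    """Split a target luminance image (values in [0, 1]) into a blurry
    low-resolution backlight layer and a full-resolution front-LCD layer
    whose product approximates the target."""
    backlight = np.sqrt(gaussian_filter(target, sigma))   # low-frequency layer
    front = np.clip(target / np.maximum(backlight, 1e-6), 0.0, 1.0)
    return backlight, front   # displayed image ~ backlight * front
```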
So that was a little about high dynamic range. The next thing I wanted to talk about is what we discussed earlier, this concept of focus settings and how we can extend the depth of field. There are many applications where you want a very large depth of field-- like I said, for example, landscape photography. You want the tree next to you and the mountain far away to be in focus. But as we discussed, in order to do that, you need to stop down the lens. You need to have a very small aperture, which means you are going to get very little light coming into the camera. And so your noise goes up. Or you might have things move while you're taking your exposure. So a number of techniques have been proposed over the past 30 or 40 years, especially in the area of microscopy, for how we can extend the depth of field while still keeping the aperture size reasonably large. And there is recent work in the area of light field cameras. And one I didn't write over here is wavefront coding, which I'm sure Ramesh will come back to later, which also allows you to extend the depth of field while still having a large aperture. So the first technique, which is the most interesting one here, is what's called focal stacks. And the idea is very simple. It's basically epsilon over time again. And you take multiple images focused at different planes. So for example, you have this ant sitting under a microscope. When you focus on the foreground, things in the foreground are in focus, but its hind legs and the rest of the body are out of focus. If you focus on the background, the foreground is not in focus. So what they instead did was take a whole series of images. And I'm going to just flip through them. They're focused at different planes. So I think that's about 10 or 12 images. You can capture all of those, and you can do this because the object or the scene, in this case, is static over time. Or you could use the SAMP kind of setup, where you capture all these images at the same time, but each camera is focused at a different plane. And then you combine all of this information together in order to create one image that's completely in focus. And a group from the University of Washington proposed a very interesting and clever technique for how you can combine them-- finding the regions in each image that are in focus. So you do a contrast-based estimation of what parts are in focus. And that's what's shown on the right. And then you do a gradient-domain merging of the various parts together, so the end result doesn't have any weird discontinuities. It looks nice and smooth, and everything is in focus. And I think Ramesh is, at some other point, going to talk about this technique itself in more detail. But what I wanted to mention is mainly the focal stack: you can just take a whole bunch of images focused at different planes, and then you can put them all together in order to get one all-in-focus image. AUDIENCE: Was the analysis done in computer vision? Or did it use [INAUDIBLE] from the camera when the actual picture was taken? ANKIT MOHAN: You mean this data? AUDIENCE: No, the way to combine the images. ANKIT MOHAN: So combining the images was actually a different technique. That's this gradient-based merging technique, where you have stuff from one image and you have stuff from another image that you want to put together. If you just cut out part of one image and put it here, you're going to get weird discontinuities. And the colors are going to be different. But it turns out that if you do that in the gradient domain and then use a Poisson solver to integrate the image back, you are going to get a nice smooth image. And all the error is going to get distributed as noise throughout the image. So that was what the technique originally proposed. And they just used that technique on this focal stack in order to get this. The example they had in the paper-- I'm sure Ramesh is going to talk about this at some point-- was that you have a scene like this. And if you take a picture from here, you might get someone not looking at the camera, or someone caught yawning, or someone just making a bad face. If you take 10 such images, each one of those images is going to have some people who are OK and some people who don't look OK. But there's not going to be one single image that has everyone looking at the camera. So they developed this technique in order to combine all of those images together to get one image that has everyone looking at the camera the way you want it to be, and still look like a picture that came from a camera. And I should have put that in somewhere, but it's called, I think-- yeah, digital photomontage.
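Here is a minimal all-in-focus merge of a focal stack: score local sharpness in each frame (a smoothed squared Laplacian is one common proxy for contrast) and take each pixel from the frame where it is sharpest. The published approach does this selection with graph cuts and blends in the gradient domain with a Poisson solver; this winner-take-all version is just the core idea.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def all_in_focus(stack):
    """stack: list of grayscale images focused at different planes.
    Returns a composite taking each pixel from its sharpest frame."""
    sharpness = [gaussian_filter(laplace(img) ** 2, 3) for img in stack]
    best = np.argmax(np.stack(sharpness), axis=0)     # per-pixel winner
    stack_arr = np.stack(stack)
    rows, cols = np.indices(best.shape)
    return stack_arr[best, rows, cols]
```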
So you can do a similar thing with a light field camera. I don't know if Ramesh has introduced the concept of light fields yet. Has he? AUDIENCE: Yes. ANKIT MOHAN: Yes? So a light field basically captures all the information coming into a camera. And the way it's traditionally done is by putting a microlens array in front of the camera sensor. If you don't understand that, that's fine. I'm sure he'll go into more detail. But what you can get from the light field is a focal stack-- you can extract the focal stack out of the light field, and then do basically what we did in the previous case and extend the depth of field if you want. So essentially, what's important to remember is that the light field can be used to extract the focal stack if needed. So that's another way of extending the depth of field. There was another paper recently, which again I did not put over here, which is interesting-- it was from Sam Hasinoff at CSAIL. Instead of taking one image with one aperture setting, he claims that if you take multiple images with two or three different aperture settings and then combine them, your total exposure time is going to be much shorter, and you're going to get less noise, and so on. So that's yet another way of combining. It's similar to focal stacks-- it's just doing focal stacks more smartly, because a focal stack focuses at every plane, and what he said is that you can find an optimal set of planes to focus at in order to get the best results. So that was extending the depth of field. The opposite problem is how you make the depth of field shallower. And this is something that comes naturally when you use an SLR camera with a large aperture lens. You have a very shallow depth of field if you open the aperture all the way up. And so your main object is in sharp focus, but the background is nice and blurred. But if you use a small point-and-shoot camera, it's very hard to get that kind of an effect, since the aperture size of the camera is usually very small. And so the question is, how can you still use a small aperture camera and get results like the one at the top? There have been three or four papers in this area that attempt to solve this problem. The first one is again from CSAIL, by Fredo Durand and his student. And what they did was-- you start with an input image. And then, from just one single image, they estimate the depth of each point. Sorry, before I go into that: the reason why this is a hard problem is because, firstly, just from this image, it's hard to estimate what's in focus and what's not just by looking at the contrast. And even if you can get that-- even if you know that the foreground is in focus and the background is out of focus-- if you have multiple layers in the background, each one of those layers is going to be more or less out of focus, depending on its distance from the plane of focus. So it's hard to estimate the 3D shape or the 3D structure of this from just a single image. It's much easier to do it from two images. But what this paper, "Defocus Magnification," did was to estimate the 3D structure of the scene from just one image, and then use that 3D structure and the image that was captured in order to increase the defocus, by simply applying a spatially varying blur filter on the image. So you can see the background is more out of focus than in the image that they took over here.
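A toy version of the defocus magnification idea, assuming the per-pixel blur map has already been estimated (which is the hard part the paper actually solves from a single image): re-render each pixel with a blur that grows with its estimated defocus, by blending between a bank of pre-blurred copies of the image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def magnify_defocus(image, blur_map, gain=2.0, levels=8):
    """Amplify existing defocus. blur_map holds the estimated blur sigma
    per pixel; each pixel is re-synthesized with sigma * gain by blending
    between pre-blurred copies of the image."""
    sigmas = np.linspace(0.0, blur_map.max() * gain, levels)
    bank = np.stack([gaussian_filter(image, s) if s > 0 else image
                     for s in sigmas])
    target = blur_map * gain
    idx = np.clip(np.searchsorted(sigmas, target), 1, levels - 1)
    lo, hi = sigmas[idx - 1], sigmas[idx]
    t = (target - lo) / np.maximum(hi - lo, 1e-9)
    rows, cols = np.indices(image.shape)
    return (1 - t) * bank[idx - 1, rows, cols] + t * bank[idx, rows, cols]
```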
And this is not really epsilon photography. I just put this here because it's important to the overall structure. But this is more of an image processing technique than anything else. Another way of doing this is what's called synthetic aperture photography. And this is something that was proposed by Marc Levoy's group at Stanford. But it's something that's more general. And it's been used in radar and so on for a long time. The idea is actually really simple: what you want to do is simulate a large aperture lens, such as the one shown here. However, you don't have the physical resources to create a large aperture lens. But what you can do is create a number of small aperture lenses, and then somehow take the information coming from each of those lenses and computationally combine it in order to simulate a large aperture lens. One way of thinking of this is from a light field camera kind of point of view, or just a camera array. But if you just think of it simply, you can combine all of these rays coming into the lens-- if you can find out what those rays are-- and get what you would have gotten from just this one large lens. And now, if you look at a different point in space, you combine a different set of rays, and you get the intensity corresponding to that point. So what gets done optically with a large aperture lens, you're doing computationally by combining all these rays. So this is what their setup looked like. This is one of the setups-- I think they had five or six different optical configurations. But this one allows them to see-- so they used this in order to focus on something that was behind a bush. So let me see. Yeah, so that's what you would get with just one image. And what's on the right, if the video plays, is-- OK, that's weird. But anyway, you can do this simply by computationally combining the rays. And you'll actually learn how to do this kind of computation-- I think it's in one of your assignments also, probably the next assignment, where you combine information from multiple cameras. And by simply shifting and adding, you can focus on a different plane. OK, so maybe I don't have the video for this. The point is you can focus behind, on a different plane, and anything in front goes out of focus. And since your depth of field is so shallow-- because you have this large synthetic aperture-- everything over here actually goes out of focus and just blurs out. And you can see behind the foliage in this case. AUDIENCE: Do you know if the bushes are moving in time? Does that-- ANKIT MOHAN: I don't think it matters, because it takes just one image from each camera. It's a camera array. So it's not over time. It's over sensors. But that's a good point. You could actually do this by taking a camera, moving it around, and taking multiple images, which is exactly what the next paper does. This is actually work by a professor [INAUDIBLE] who was a visiting professor in our group last year, and his students in Japan. And they generalized this thing: instead of having a fixed, rigid camera array, you can just take a camera, move it around, and take multiple images. And then, using computer vision techniques to line up the various scene components, they're able to get results kind of like that. So this is just with, I think, three or four images that he took by moving a camera and taking random images, without any structure or any sort of calibration or anything. And from that, he's able to focus-- in this case, focusing on the foreground, and the background is defocused. And in that case, focusing on the background, and the clock in the front is in focus. This is also similar to the camera array thing, except you don't need a camera array. You can just take one camera and move it over time. So in this case, if the scene was changing, you would have problems reconstructing it.
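Shift-and-add synthetic aperture refocusing, sketched for a camera array: shift each view in proportion to its baseline offset and a chosen disparity, which selects the focal plane, then average. Points at that disparity align and stay sharp; everything else averages out into blur. The offset convention here is an illustrative assumption.

```python
import numpy as np

def refocus(views, offsets, disparity):
    """views: list of images from a camera array; offsets: per-camera
    (dy, dx) positions in baseline units; disparity: pixels of shift per
    unit baseline, which selects the synthetic plane of focus."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (oy, ox) in zip(views, offsets):
        dy, dx = int(round(oy * disparity)), int(round(ox * disparity))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)
```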
Finally, the last technique I want to talk about is something that I worked on last year, and I'll really quickly try to go through it. I think I might have put too many slides in for this. But the idea over here is to do it more optically rather than computationally. And the basic idea is that instead of keeping the camera lens and sensor static while you're taking an image, you intentionally move them. And that's what we call image destabilization. You move the lens and the sensor synchronously with one another during the exposure. So to give you an intuition for how and why this works-- once again, going back to the image we had in the beginning: if you have a plane in focus, that's plane A, and all the rays coming from the point A get focused on A prime. And the rays coming from B get focused on a point which is a little in front of the sensor. So you get this defocused blur on the sensor. And the size of this blur depends on the size of the lens aperture. If you reduce the size of the aperture, it goes down. If you have a pinhole, then you just get one ray going through, which is what we saw in the beginning. So now, if you take the pinhole and you translate the pinhole over time, then you're going to get a blur over here which corresponds to the motion of this pinhole, for the point B, and similarly for the point A. Now, what's interesting is that if you compare these two blurs, they're not the same size. And the size ratio actually depends on the distances of the points from the pinhole and on the distance between the pinhole and the sensor. So what we do instead is, while moving the pinhole, we also move the sensor-- but we move the sensor such that one of the points actually remains fixed on the sensor, and the other point produces a blur. So now-- I haven't shown the actual motion, but through this sequence of images, point A was always focused on the same point on the sensor. Point B was focused on a different point. And so point B results in a blur, which point A does not. And you can use this sort of setup, where you have the lens and the sensor in two different planes, and the sensor is moving at a different velocity than the lens. And by changing the ratio of the velocities, you can focus at a different plane in front of the camera. So this is an image you would get with just a small aperture lens-- I think f/22 or something-- where everything is in focus. And this is the image you get using our technique, with still the same lens. You get something which is in focus in the middle, and everything in front and behind goes more and more out of focus, by simply translating the camera lens and the sensor over the exposure time. And so the advantage of this is that you can simulate a large aperture lens, or an SLR lens, using a small camera lens. Many of those lenses cost more than $1,000; you can use a [? $30 ?] lens to create a similar effect to that produced by a $1,000 lens.
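Here is a small geometry sketch of why this works, under the pinhole model above. When the pinhole translates by d and the sensor translates by s, a point at depth u shifts on the sensor by d * (1 + v/u) - s, where v is the pinhole-to-sensor distance; choosing s to zero that shift for one depth keeps that plane sharp and blurs the rest. This derivation is my own reconstruction of the argument, so treat the exact expressions as illustrative.

```python
def sensor_shift(depth_u, pinhole_shift_d, sensor_dist_v, sensor_shift_s):
    """Image displacement, on the moving sensor, of a point at depth_u
    when the pinhole moves by d and the sensor moves by s (same units)."""
    return pinhole_shift_d * (1.0 + sensor_dist_v / depth_u) - sensor_shift_s

v, d = 50.0, 5.0                       # mm: sensor distance, pinhole travel
u_sharp = 2000.0                       # the plane we want to keep in focus
s = d * (1.0 + v / u_sharp)            # sensor travel that freezes u_sharp
print(sensor_shift(u_sharp, d, v, s))  # 0.0 -> stays sharp
print(sensor_shift(1000.0, d, v, s))   # nonzero -> blurs over the exposure
```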
But the disadvantage is that you need this motion. And so you need some space around the lens where it can translate, which is not all that hard, because if you look at the space around the lens, most of it is wasted and not really used. OK, at this point, maybe it might be a good idea to just switch to [? Roarke ?] and see what his technique lets us do. I have a couple of other things here. But I think we can do that after his stuff. But I don't know, do you want to break for 5-10 minutes first? AUDIENCE: Yeah. ANKIT MOHAN: Sure, so we can break for 5 minutes. And at 2:45, yeah-- AUDIENCE: [INAUDIBLE]. There is something called crosstalk. And there's optical crosstalk and [INAUDIBLE] crosstalk, meaning you end up having those [INAUDIBLE] side by side. And you can have some photons that actually pass [INAUDIBLE]. ANKIT MOHAN: Do you want to draw that image? AUDIENCE: I mean, if you go back to the [INAUDIBLE] then you can actually-- ANKIT MOHAN: I think what you're saying is that you have [INAUDIBLE]. AUDIENCE: Red and green in there. The photons can actually cross-- yeah, exactly this. And it's going to [INAUDIBLE]. And then also, there is the fact that photons of different wavelengths have different energies. So it might be that one blue [INAUDIBLE]. But we'll set up a wall a little bit and actually [INAUDIBLE]. ANKIT MOHAN: Right. AUDIENCE: So this is another kind of [INAUDIBLE]. And if you like-- I mean, yet another reason for that-- the refraction varies with wavelength. So in fact, if you have a single plane here, you're going to have photons of different wavelengths focusing [INAUDIBLE]. So this is what's called chromatic aberration. So you have to add extra optics in order to focus it. ANKIT MOHAN: Right. AUDIENCE: But people know how [INAUDIBLE] one of the major reasons why-- for high-end applications like [INAUDIBLE]. ANKIT MOHAN: Right, of course, yeah. And also the Foveon sensor has similar advantages in some of these cases. And of course, using a 3CCD sensor or multiple distinct sensors is always going to give you better results than the Bayer filter. It's going to be more expensive, perhaps. But I think the question that you had was, why is it that we use 3CCD for video but not for stills? Why does it matter more for video? And I'm not entirely sure. AUDIENCE: You just get more light. You need more light for video in general. Otherwise-- the Bayer filter functions by blocking the light. ANKIT MOHAN: No, but then even-- AUDIENCE: 3CCD actually splits up the light. So you use all the light. ANKIT MOHAN: That's true, yeah. Yeah, so you've got three times as much light.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_6_Cameras_for_humancomputer_interaction_HCI.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: All right, so that was light waves. And we're going to switch topics a little bit. We saw that if you think about light fields, you can build really cool devices. If you have a camera array, you can do all kinds of crazy things-- of course you can do refocusing and seeing through occluders. But you can also do other things. Like if I have a light field camera-- let's say this one-- which is interesting. There's actually a camera here-- [LAUGHS] --as we're talking about that. Anyway, if this was an array of cameras and you shake it, you typically have image stabilization problems. But if you have a light field camera, then-- OK. If you want to stabilize the image, then you have multiple solutions. You can either put up big mechanical rigs that stabilize your motion to get rid of camera shake, or you can use some optical stabilization techniques to damp out the effects, and so on. But if you have-- sorry? AUDIENCE: For taking many pictures [INAUDIBLE] do all that? RAMESH RASKAR: No, this is a video. AUDIENCE: Oh, video, all right. RAMESH RASKAR: Yeah, this is video. AUDIENCE: Cool. RAMESH RASKAR: So you know, there is Schwarzenegger, and he's jumping in his truck, and there's a camera moving and following him. How do you keep the camera stabilized although it's being shot on very rough terrain? That's classic video stabilization. Maybe that's what I should say-- video stabilization, not image stabilization. But if you have a light field camera, then you can let it shake and still create a very smooth video from that. How would you do it? Remember, there's a five by five camera array. And I'm going to shake it, but I want to create an illusion as if there was a single camera that's traveling through this space in a very smooth manner. Yes? AUDIENCE: [INAUDIBLE] a bunch of [INAUDIBLE] So what you can do is take two frames of video and try to find the closest previous one. RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: That's it. As simple as that. You have 25 cameras, and in the very next frame it was shaken. So the camera from which we are shooting-- let's say the center of the five by five is your actual camera. But in the next frame, that may be too jerky, and some other camera took its place on a straight line. So you can just switch between different cameras and find the camera that looks like it's going in a straight line. Yes? AUDIENCE: So can you switch and you'll see the blur where it's like-- [MAKING SSH-ING NOISES] RAMESH RASKAR: Yes, yes. So you'll do a little bit more. AUDIENCE: Yeah, yeah. RAMESH RASKAR: You'll do a little bit more. But as opposed to doing traditional video stabilization, where you have to shift the image and do some warping or something, here you get very nice, clean views. So this was an ICCV paper this year-- 2009-- the International Conference on Computer Vision. This simple idea-- take 25 cameras and create a stabilized view. Because the way things are going, 25 cameras are going to be much cheaper than buying a video stabilization rig. I mean, again, a camera is under a dollar right now. So you can imagine that in the future, your phone has 25 cameras on the other side. And it can do anything you want, and from that you can create all these effects. And you can do stabilization.
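A toy sketch of the camera-switching idea just described: smooth the trajectory of the desired virtual camera and, at each frame, pick the physical camera in the array closest to that smooth path. The data layout and function names are illustrative assumptions; a real system would also blend or warp between neighboring views.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def stabilized_camera_indices(cam_positions, sigma=10.0):
    """cam_positions: array of shape (frames, 25, 3), the world position
    of each array camera at each frame. Returns, per frame, the index of
    the camera closest to a smoothed version of the center camera's path."""
    center_path = cam_positions[:, 12, :]              # center of the 5x5
    smooth_path = gaussian_filter1d(center_path, sigma, axis=0)
    dists = np.linalg.norm(cam_positions - smooth_path[:, None, :], axis=2)
    return np.argmin(dists, axis=1)    # which camera to show each frame
```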
And, as we'll see, we'll be able to do [INAUDIBLE] come back [INAUDIBLE]. Really interesting things are going to happen. And we'll be at the forefront of that. So we're going to talk about cameras-- cameras looking at people, cameras looking at fingers-- and, when you have a camera, how we can think about the complexity of emitters and receivers, and motion capture. These are all classic problems in cameras for HCI. And this is a chart we saw early in the class about some HCI projects that were actually boring. Because the field of HCI and the field of cameras and multimedia are not that new anymore. And so you have to be very careful. Even at the Media Lab, a lot of great things were invented in the '90s. Back then, very few people did this kind of HCI. So anything you did was likely to be new. You didn't have to worry about it. You just started the project and worked on it. And it was pretty much guaranteed that what you were doing was brand new. But that's not the case anymore. Everybody has access to these technologies and computing power and so on. And if you just start an HCI project, it's much less likely that what you're doing is actually new. So you should expose yourself to the field by going to events and seeing what other people are doing and so on. Maybe your technology is new but your application is not new. Maybe your application is new but the technology is not new. And you have to have just the right mix of all of that to be really impressive. So always remember this-- you cannot impress all the people all the time. I just gave a talk to the first-year RA class-- I guess you were there [INAUDIBLE] class. And one of the things I mentioned was that it's very tempting to work on HCI projects because, first of all, you get instant gratification. And second, especially at a place like the MIT Media Lab, you get a lot of press. People look at it and say, wow, cool, it's so interesting, I wish I had it, and so on. And that is good-- any positive reinforcement is good-- but it can also be very detrimental in the long run. Because if what you're doing is not really new or impactful, you still get this positive reinforcement, because the press says it's interesting, and people who come and see your demo say it's interesting. You think, wow, this is exactly what I should be doing. But be careful. Talk to people. Try to publish it. Expose yourself and subject yourself to some peer review, and then see if it's still worth doing. So submit to conferences-- high-level conferences, not the ones that are organized by your friends or former colleagues-- high-level conferences. And just go to those events and see what's going on. So that's my high-level advice for doing HCI. And what we're going to do is a quick run-through of several technologies. But you're very welcome to explore an HCI demo as your final project-- it's always fun. I mean, I personally love seeing really cool interaction on devices. But again, the field is saturated, and you have to make sure you're doing the most interesting things. OK, so by HCI, I'm mainly going to focus on device-oriented HCI. And I will not be talking about traditional cameras and HCI done with that. So one of the most classic examples of camera-based HCI is interaction with touch screens.
And the classic project from [INAUDIBLE], now almost 10, 15 years old, is-- you have a projector that's projecting on the screen like here. You have an IR camera that's looking at the screen. And then you have some lights that are illuminating the scene. If your hand is very far away from the screen, it does not reflect light back. But if your hand is sufficiently close to the rear projection screen, then it will reflect light back to the camera. And because the projector is working in the visible spectrum and the camera's working in the near IR spectrum, they don't interfere with each other. And then you can recognize a picture of the output-- recognize the blobs, and from that, you can figure out where the touches are. Very beautiful, very classic, but at this stage, not worth pursuing. Too much has been done in this space. If you want to build a product, maybe. But if you want to do research, don't follow this path too much. Some other interesting thing-- this was done, I believe, by '99-- yeah, '99. As you know, this mouse is-- the camera in a mouse is one of the largest markets for image sensors. If you remember the chart at the beginning of the class, about 100-- 300-- 300 million-- I think-- I believe-- sensors are sold in mice, which is amazing. And it's actually a camera that's very low resolution-- about 20 pixels by 20 pixels-- and it's running at very high frame rate-- close to 1,000 frames per second. And all it's doing is looking at optical flow. If you take an image at one instant versus the next instant, is anything changing? And that's why you can work it-- you don't have to have it on a rough surface, like you used to have with those spinning balls. You can use it on a smooth surface as long as there is texture. If you put it on a completely white piece of paper that has no texture, this will not work because it's looking at image difference-- frame to frame difference. And it has everything embedded in it. It's taking the two pictures-- I mean, it's taking a video picture. It has a DSP that's doing this optical flow comparison to basically get this 2D vector. Are you moving? What's the X vector and what's the Y vector? That's it. And of course, you can buy an optical mouse for what? Just tens of dollars now. What's the cheapest one? But really, really cheap. Right? AUDIENCE: Yeah, for sure. RAMESH RASKAR: Sorry? AUDIENCE: $10. RAMESH RASKAR: $10? OK, yeah. And then this group at Microsoft-- Ken Hinckley and others-- said OK. What if you-- this is only two degrees of freedom. Can you add the other four degrees of freedom? So rotation-- rotation about its own axis is not covered because the image will not change if you stay in one place-- but also being able to lift it and tilt it, just from the mouse. So they just placed it on a surface on which it can pivot. And then the grid itself was this array of dots, which you can see in a grid form. And the distortion of that tells you-- if you're head-on, then all the disks will look the same shape. If it's tilted one way or the other, you'll get some [INAUDIBLE] distortion. And in addition, they placed these markers within each disk to get absolute positions. And then between these [INAUDIBLE] cells, they can get the relative position. So it's a pretty clever idea-- slightly difficult to implement because this all has to run in real time. And if you have a printed pattern like that-- you must have a mouse pad of that shape. And people just like to have it completely free. The mouse should not have any constraints on where it is sitting.
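[A minimal sketch of the frame-to-frame comparison such a mouse sensor performs, done here as brute-force block matching between two 20 by 20 grayscale frames. The real chip's DSP is far more optimized; the search window and names below are my own assumptions.]

import numpy as np

def mouse_flow(prev, curr, max_shift=4):
    """Return the (dx, dy) shift that best aligns curr with prev."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
            # Compare only the interior so the wrap-around at the edges doesn't matter.
            err = np.mean((shifted[4:-4, 4:-4] - prev[4:-4, 4:-4]) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

[On a textureless surface every candidate shift scores the same, which is exactly why the mouse fails on plain white paper, as noted above.]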
All right, then FTIR-- which has become really popular now-- Frustrated Total Internal Reflection. And let me show the demo first and then see how this goes. This is Jeff Han's demo from 2005. And FTIR is used in many other fields. So FTIR itself is not new, but the way it's used for [INAUDIBLE] interaction is really beautiful here. OK, so how does this work? You have an LED that's emitting light. And if the refractive index of the glass is sufficiently different from air-- so the air is 1, the glass is about 1.5-- and the light is hitting this interface between glass and air at a sufficiently steep angle, it's going to continuously reflect within the bounds of this glass. And that's exactly how all these other technologies work as well. So fiber optics, light pipes, and all these things also work on this principle. So fiber optics, you have-- in the simplest case, you just have a glass pipe constant [INAUDIBLE] air and glass, and you have sometimes coupling of laser. Light goes in. If it goes at the right angle, it will just keep reflecting back and forth and will be transmitted over hundreds of kilometers this way [INAUDIBLE]. But if you hit it at the wrong angle, there is what? [INAUDIBLE] refraction. [INAUDIBLE] And of course the [INAUDIBLE] fibers are a part, single fiber, multi fiber. You have air and n1 and n2. So light goes in, hits the glass. The glass [INAUDIBLE] and so on. So by getting different indexes of refraction, you get [INAUDIBLE] something [INAUDIBLE] as opposed to pure [INAUDIBLE]. And [INAUDIBLE] losses that we assume [INAUDIBLE]. If you just have glass and air, some amount of light will actually get through. And a field like [INAUDIBLE] ratios-- [INAUDIBLE] numbers and ratios [INAUDIBLE] will tell you how much of the light you're actually [INAUDIBLE]. But if you use this gradient index, then it turns out everything was stable in [INAUDIBLE]. Light pipes work the same way. Instead of a projector, you might have a bulb, but it actually might just-- [INAUDIBLE] pretty far away. So they use light pipes to guide light in a very similar fashion, and so on. So for light transmission, the idea is very well known. So that's total internal reflection, or frustrated total internal reflection. So this is just a sequence of total internal reflections. And now what we want to do is frustrate the internal reflection and let the light leak out. And in this case, at the interface between the glass and air, if you allow the light to escape somehow-- you know, put a finger here, or just put some dust particles here. OK? On the fiber-- if you just put dust particles, your light will try to reflect, but because of the diffusion here, it will also go out. The same principle that you may have seen in some of the menus that they have at restaurants or in movies-- they have these glass slabs. They'll have a slab like this and the lights at the top. This is just a pipe. And when it's just clear glass, you just see it. But you can just take chalk or any simple marker, and you can write on it. And it starts to glow. Again, because light is trapped within the [INAUDIBLE], it just goes along. But when you put some pigment on it, light [INAUDIBLE] internal reflection, at the end it gets diffused and it comes out. Everybody heard of this? Many restaurants will have their menu written [INAUDIBLE] which is the same exact principle of frustrating, at some stage, the total internal reflection.
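[The angle condition being described is just Snell's law at the glass-air interface; a short worked version, using the indices quoted above (glass about 1.5, air 1), where theta_c is the critical angle:]

n_1 \sin\theta_1 = n_2 \sin\theta_2, \qquad \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right) = \arcsin\!\left(\frac{1}{1.5}\right) \approx 41.8^\circ

[Any ray hitting the interface at more than about 41.8 degrees from the normal is totally internally reflected and stays trapped; a finger or dust at the surface changes the boundary condition and "frustrates" the reflection, letting light leak out at that spot.]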
There's even a company called [INAUDIBLE] Display that built a TV that's based on this principle. And the way that works is-- you can probably look up [INAUDIBLE] Display. Sorry. Right. AUDIENCE: Thank you. [INTERPOSING VOICES] I got it now. RAMESH RASKAR: OK. Do you know what we just did? AUDIENCE: Sorry? RAMESH RASKAR: Do you know what we-- no, never? OK, it's fun. It's a really cool, cool idea. So let's say you have a home projector from which you want to create a big flat screen TV. If you think of a projector, there are a couple of solutions. Either I can do it this way, or I can put some mirrors to create a rear-projection TV. But this company in the UK came up with a very clever idea, where you have a [INAUDIBLE] glass and then you have a projector. And it projects the light on this side. [INAUDIBLE] still here will, you know, [INAUDIBLE] a steep angle. And they put a film here. It's actually [INAUDIBLE]. And the angle of this wedge is chosen in such a way that if you shoot a ray in at a particular position, it will reflect, reflect, reflect. But eventually, the angle will not be shallow enough, so the light will actually come out. So the projector reflection doesn't-- if it was a straight pipe, if it comes in, it will maintain that angle forever. But with a wedge, eventually, it will drop off. And if you shoot from another angle, it'll come out [INAUDIBLE]. So you might have a projector that feeds the image in here, to reflect and end up creating this nice, 15-inch-wide image. A very beautiful product that just-- I believe it got acquired by Microsoft-- the Microsoft Surface. [INAUDIBLE] And those people are working with-- yes? AUDIENCE: Is there [INAUDIBLE]? RAMESH RASKAR: You know, it just-- I just saw, actually. I believe they have a diffuser. But it's not [INAUDIBLE]. And now I just realized that because of the diffuser, it would frustrate at every point-- [INTERPOSING VOICES] AUDIENCE: So is there a gap between the [INAUDIBLE] diffuser? Then-- RAMESH RASKAR: Oh, that's a good one. All right, perfect, OK. That's right. That's right. If it-- so if there's a gap between the [INAUDIBLE] and the diffuser, extra debate is on me. It's a good one. [INAUDIBLE] [LAUGHTER] So if there's a gap here, then you still get internal reflection. But when it eventually comes out, it will diffuse. Then it can be seen. AUDIENCE: [INAUDIBLE] presumably you could also get the same conditions [INAUDIBLE]. RAMESH RASKAR: What are you [INAUDIBLE]? AUDIENCE: So like [INAUDIBLE] and you have-- you have one projector projecting both images-- RAMESH RASKAR: Right. AUDIENCE: Because they come out of the [INAUDIBLE] RAMESH RASKAR: Excellent. Did everybody get his question? He's saying, OK, if I don't have a diffuser-- so let me zoom in on this part. Light comes in and then it goes out. And it just [INAUDIBLE]. It's not internal reflection anymore. It just goes straight out. And I think what you're asking is, can I make it in such a way that it goes out this way but also goes out this way? And then if I just control this, maybe I can [INAUDIBLE]. Is that it? AUDIENCE: Yes. RAMESH RASKAR: It's an interesting challenge. It's a very interesting challenge. There are some properties you could exploit. So typically because of the [INAUDIBLE] and so on, [INAUDIBLE] sensor with a dual photography. If light goes in one part and it does a strange thing and comes out, you could shoot a ray back in the same direction and it will come out the same way. And this duality can be very quickly used to-- it's like the perpetual motion machine.
If someone starts proposing to you a design that sounds like perpetual motion, you can ignore that. And the duality of light can also be used as a quick check to see if your design is going to work. So here it seems like it could be breaking duality, right? Because when light comes in, it goes in two different directions, right? [INAUDIBLE] so that if I shine light this way, both of them will come out in the same way. And usually when the splitting and combining happens, the duality principle says [INAUDIBLE] aren't possible. So that's the only exception that's [INAUDIBLE]. A beam splitter, for example-- then light can go here but light can also go here. So both of them are traveling along the same direction. Light will shine this way. Light will also go this way and this way. So there are some places where you cannot say, OK, one ray in will correspond to one ray out. Or how can the same ray be in two places? So sometimes it can be explained. This is a situation where maybe you can't explain it. [INAUDIBLE] AUDIENCE: [INAUDIBLE] project. I think that's why we suggest first, I mean, actually-- RAMESH RASKAR: So two different projectors. AUDIENCE: Yes, or two different images somehow at different angles or something. RAMESH RASKAR: [INAUDIBLE] also very easy. But it may not be that easy because I believe there's a sweet spot. If you project from the right angle, you'll get this image. If you project from a different angle, this phenomenon [INAUDIBLE]. There may be a very tight sweet spot that works. But it's interesting. It's interesting. So this notion of frustrated total internal reflection is used to make this. Some of the best head-mounted displays also use these principles. You have your eye, and you have a single thin sheet of glass. And then you form an image here. And then this pixel travels back and forth here. And then you have [INAUDIBLE] observer [INAUDIBLE] refractive index, so this comes out. [INAUDIBLE] Very, very quickly, the image is projected here and seen [INAUDIBLE]. I think they're one of the best head-mounted displays. [INAUDIBLE] So how do you use it for HCI? Again, the same principle-- we're going to frustrate the internal reflection. In this case, instead of the diffuser, we're going to use a finger. And the finger is going to prevent it from doing pure internal reflection, and will also [INAUDIBLE] light on the other side. And then in this case, you can just put a scatterer so that this is reflected in [INAUDIBLE] direction. So which way is the projector here? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: So this is only for capture. There's a camera down here that's looking at the scatterer. And there's a projector as well that's projecting light on this diffuser. So in this case, the finger is on the right and the camera and projector are on the left of the screen. It was only 2005 when the paper came out. By 2007, 2008, [INAUDIBLE] company, it was a big hit during the elections. In 2008, CNN was using this product. It all started with a paper at the [INAUDIBLE] conference. Anyways. So once you have scattered light, capture it on the camera, and then only where you have physical contact, you see very bright points. If you remove it, if you hover it or-- a glass-- you can see-- again, the same principle. But if you provide a [INAUDIBLE]. [INAUDIBLE] in sheets of water, there's liquid crystal where you have droplets of water and then you shine it underneath into that. And the light travels through it so you see some-- on this side, it is completely transparent.
But if you break a beam by putting your hand or [INAUDIBLE] like that, the light comes out. OK, so one disadvantage of this particular scheme is that you need to have sufficient separation between the screen and the projector and camera. Because it's looking at this-- it needs sufficient area to create this-- you could do, like, a [INAUDIBLE] TV and [INAUDIBLE] and so on. But nevertheless, it's not as convenient if you want to create this on a laptop or on a mobile screen. So that's why this project started [INAUDIBLE]. You might be able to create that and put it in the same [INAUDIBLE]. [INAUDIBLE] programs-- this group at Microsoft [INAUDIBLE] and others, they are building FTIR mice, where you just have an acrylic curved piece of glass, and there's a camera here that's looking at the gestures on top of it [INAUDIBLE]. AUDIENCE: Think it got the Best Paper Award. RAMESH RASKAR: Sorry? AUDIENCE: Think it got the Best Paper Award. RAMESH RASKAR: It was just presented a couple of weeks ago. AUDIENCE: Couple weeks ago? RAMESH RASKAR: [INAUDIBLE] beautiful. So yeah, this is still happening. AUDIENCE: Where was that? RAMESH RASKAR: UIST, I believe. Is that right? Yeah. User interface-- the same place where Jeff Han presented his paper four years ago. And the same paper also talks about a Side Mouse where-- you can see the two fingers here. Just barely? And two fingers here? What they're doing is they're creating a sheet of light that's traveling almost parallel to the table. So if I put my finger-- only the bottom part of my fingers will be enough. And it's looking at how the light is being obstructed by the fingers. And there's a camera that's just looking straight at it. But most of the pixels are wasted, and only the other [INAUDIBLE] pixels are being used. AUDIENCE: But this is actually only one of these things. RAMESH RASKAR: It's only one of these things. That's right. That's right. AUDIENCE: Sort of, but I think they can tell how far away your fingers are based on the size of the radius of the-- AUDIENCE: OK. RAMESH RASKAR: I think they just moved the camera and there's some triangulation. But they want to keep it cheap. And this principle of having a sheet of light and seeing where you cut it-- where you obstruct it-- is used in many other projects. So if you remember, there was a Canesta projector where they wanted to create a keyboard anywhere in space. So all they had was a Canesta projector which projects a keyboard pattern out here. And then at the bottom, again they had a sheet of light that goes out the same way. So the sheet of light kind of goes here like that. And there's the camera. So when you put your fingers on it, it gets blocked. But because the projector is also projecting the keyboard here, you know exactly where you are touching. So it's kind of a keyboard out of a pen. So again, here's the projector, camera, and the laser sheet. And because it's a slide projector, it's very cheap. [INAUDIBLE] And then those big Smart Boards-- these pens you may have seen-- have also got a similar structure. They have a big board and there's a little bit of a baffle. And at the corners of the screen, there are cameras that are looking out. And this camera actually mostly [INAUDIBLE] looking at [INAUDIBLE] and illuminating the whole board with a [INAUDIBLE] sheet again. So when you put your finger, one or more of these cameras will see where you're touching with your finger or where you're pointing with your pen. They match the colors. And with triangulation they can figure out [INAUDIBLE] screen.
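[A minimal sketch of the blob-detection step common to the FTIR screen and these sheet-of-light systems: threshold the IR camera frame and report the centroid of each bright connected region as a touch. The threshold value, the minimum area, and the use of scipy here are my own assumptions, not any particular product's pipeline.]

import numpy as np
from scipy import ndimage

def find_touches(ir_frame, thresh=200, min_area=10):
    """ir_frame: 2D uint8 image from the IR camera watching the surface."""
    mask = ir_frame > thresh          # fingertips scatter light brightly
    labels, n = ndimage.label(mask)   # connected components
    touches = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(xs) >= min_area:       # reject speckle and noise
            touches.append((xs.mean(), ys.mean()))
    return touches

[For the laser-sheet and Smart Board variants, the same centroids would then feed the triangulation step mentioned above.]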
So it's huge-- I don't know, $100 million, [INAUDIBLE] million dollars [INAUDIBLE] Smart Board. [INAUDIBLE] The board that's in [INAUDIBLE] room, is that a Smart Board? The big one? [INAUDIBLE] Anyway, this is very powerful. [INAUDIBLE] The aim is to put this in every classroom of K-12 [INAUDIBLE]. And of course, all this space will have projector and whiteboard. So you have a projector that projects on the whiteboard, and then you [INAUDIBLE]. All right, and then of course you are familiar with the Wiimote, which was quite revolutionary because it placed a camera in the remote. So the Wiimote actually has an XGA camera-- runs at 100 Hertz. And it can track four blobs of light. So when you play with the Wii, there's a sensor bar on top which is actually not a sensor bar. It's an LED bar. See the bar at the top-- which has, I believe, four LEDs. And then when you're playing with your Wiimote here, there's a camera that's looking at this [INAUDIBLE], and it's just like a four-point sensor. And that's, like, the most [INAUDIBLE] with respect to this kind of [INAUDIBLE]. And because IR-- you may have seen examples where people remove the sensor bar and just put candles there. [INAUDIBLE] anything that's near IR [INAUDIBLE]. But to me it was really amazing because this is completely wireless. So they had to do all the processing of these four points onboard, and they transmit it back over Bluetooth, or whatever other wireless they have, from the Wiimote in your hand at 100 Hertz. And you can buy the whole system for, again, just tens of dollars. And the fact that they did really great, sufficiently sophisticated processing was just amazing. And so Johnny Lee from CMU-- he was working with me on some topics I'll show you later on-- really exploited this emerging platform. And he said, OK, let's use this camera to do [INAUDIBLE]. So [INAUDIBLE]. So it's not transmitting a whole [INAUDIBLE] image back to the base station. It's just transmitting the coordinates of these points, and that's what makes it very low bandwidth [INAUDIBLE]. And a similar trick is also used in the Vicon motion capture that we have [INAUDIBLE], where you have a high speed camera that's emitting IR light, and then the [INAUDIBLE] reflector that's [INAUDIBLE] back to that point. Except in this case, you may have an image that's 2 megapixels, but I think you have [INAUDIBLE]. And you might [INAUDIBLE] and send only the coordinates from the [INAUDIBLE] that uses the pattern. So you know, Johnny Lee, I'm sure, is involved with demos where he created a Smart Board out of a Wiimote, and he changed [INAUDIBLE]. Really interesting things he's done-- even harder, 3D tracking. Because now this Wiimote, which is going over here-- this is a camera. And we know where you are in the field. [INAUDIBLE] OK, this is looking at this multiflash camera. The one trick I didn't mention was, you can use colored lights. So previously we saw that using a multiflash camera, you can get very high quality [INAUDIBLE], which are ideal for visual interactions. But you can actually use three lights at the same time. So shadows are cyan, magenta, and yellow. [INAUDIBLE] is on top. [INAUDIBLE] Just by displacing the lights with respect to the main camera, we have a very robust image. All right. Then we have these techniques where we're not looking at the coordinates, the transform between the camera and the point. But we're looking at a transform between a camera and a fixed spot.
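[Before getting to that fixed-pattern idea, a quick back-of-the-envelope on why shipping blob coordinates instead of frames makes the Wiimote so cheap on bandwidth, using the numbers quoted above (four points, 100 Hertz, XGA coordinate range); the bit widths are my assumptions.]

coords_per_point = 2              # x and y
bits_per_coord = 10               # enough for a 0..1023 (XGA) coordinate range
points, rate = 4, 100             # four blobs, 100 Hz
blob_bw = points * coords_per_point * bits_per_coord * rate
print(blob_bw / 1e3, "kbit/s")    # 8.0 kbit/s, easy over Bluetooth

frame_bw = 1024 * 768 * 8 * rate  # shipping raw 8-bit XGA frames instead
print(frame_bw / 1e6, "Mbit/s")   # ~629 Mbit/s, roughly 80,000x more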
So this [INAUDIBLE] has a camera, which is very good for the design because it's really close to the pen tip. And it's looking at the patterns that are encoded in [INAUDIBLE] ink. So it can be printed on-- you can print on top of a document. But because carbon shows up in near IR, the [INAUDIBLE] dots-- the dot pattern from the printer-- can still be seen through what's printed on top. And the way they do that is they simply displace these dots from a uniform grid. That gives them basically a [INAUDIBLE] encoding. So a six by six encoding for them apparently is sufficient to create [INAUDIBLE] 36 spots-- 36 dots or 36 [INAUDIBLE]. And then-- so if you print something that occupies about half the size of Europe, they can provide unique coordinates-- every point in this 1,600 [INAUDIBLE] 1,600 [INAUDIBLE]. So you can print a lot of paper, and every six by six block on this paper is completely unique. So the camera will take a look at a picture of the pattern, decode where in the pattern space it really is, and record that. And then if you just move your pen on top of this printed pattern, it will know what trajectory you're following. It's a pretty amazing idea. I believe they have a lot of [INAUDIBLE] in this space. And of course, the pen not just knows where it is, but it can also write. Sometimes we forget [INAUDIBLE]. A pen that can write. And of course, they have other [INAUDIBLE] Bluetooth. So that they can just write with a pen and [INAUDIBLE] all the trajectories have been saved, all your notes are uploaded [INAUDIBLE] to your [INAUDIBLE]. I think it's a very powerful idea, that you can encode a pattern in a fixed [INAUDIBLE] object, and be able to uniquely identify it with a [INAUDIBLE]. So building a camera like this is almost like building a microscope. And the problem with that is, your depth of field is going to be very narrow, so you go out of focus easily. Like a microscope, if you shift the dial a little bit, everything goes out of focus. So it's engineered well enough so that, most of the time, the pen is looking straight down. But then it's [INAUDIBLE] the distance between the camera and the [INAUDIBLE] is changing a little bit. [INAUDIBLE] You engineer it well enough so that you can [INAUDIBLE]. All these problems of looking at binary patterns and being able to decode them-- we will see that when we start talking about coding and [INAUDIBLE] and so on. It's really very interesting because [INAUDIBLE], I think, goes on a popular-- iPhone [INAUDIBLE]. AUDIENCE: I don't know about popular, but-- RAMESH RASKAR: Popular amongst a few people. AUDIENCE: Yeah. RAMESH RASKAR: You know, recording software. And everybody always complains that, oh, it has to be in focus. Out of focus, it's difficult to decode and so on. But mathematically speaking, if you know that the image that you captured is binary, then you can let it blur significantly, and you can still recover the original pattern. If you give me an arbitrary photo, it's difficult to invert the blurring. But if it's a binary image, it's very easy to go back and recover [INAUDIBLE]. And so a lot of the techniques are exploring that. I still haven't seen an iPhone app that actually exploits smart image [INAUDIBLE] to recover in the future. So the stakes are [INAUDIBLE]. Somebody needs to [INAUDIBLE]. AUDIENCE: It's like the third most popular data capture app, and all it is is a one [INAUDIBLE]. But it works really well. RAMESH RASKAR: But I'll give you a good example.
It does a little bit of-- what name is it-- [INAUDIBLE] But it still requires sufficient data to do something here. I don't know. Obviously [INAUDIBLE]. So this is a challenge in a lot of these ad hoc efforts, that they aren't thinking about the whole problem the whole time. If they just studied the type of blur an iPhone camera produces, and then wrote new software that's suitable for that camera, they'd get much better outcomes. So I really feel like starting my own-- writing my own app and [INAUDIBLE] just for myself. Because it's not going to be well engineered for [INAUDIBLE]. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: So when we come to the coding, you'll see that's how-- all these steps are extremely powerful. So optical mouse, of course-- 20 by 20 pixels, 9,600 frames per second. The camera you have on your mouse is a 10,000-Hertz camera. And I took it from the [INAUDIBLE] electronics manual. And the resolution is 800 dots per inch. Even if you shift a little bit, it can detect the variations. It's pretty amazing. It can deal with motion blur up to 40 inches-- 40 counts per-- oh-- 40 inches per second. It's just amazing, all the specifications of how beautiful this whole thing has been [INAUDIBLE]. Another popular problem in HCI is being able to track the gaze-- gaze tracking. But [INAUDIBLE] move your [INAUDIBLE] it's quite challenging. So there are several solutions. This is one of the problems that are still not solved as well. Most of the solutions they have are active. So they illuminate [INAUDIBLE]. They illuminate with some [INAUDIBLE] light and look at basically the reflection of that bright light using the camera. And if you move your head, the reflection will appear at a different place. You can pick up your eye of the [INAUDIBLE]. As it moves around, the reflection of that bright spot changes. [INAUDIBLE] So for big issues-- basically they're looking at [INAUDIBLE] all the time, I think, to be able to do that. And another problem with gaze tracking is that, because of [INAUDIBLE] and because of things, it's very difficult to predict. [INAUDIBLE] has the camera [INAUDIBLE] first, and also the motions are discontinuous. So you cannot just put something simple-- small [INAUDIBLE] figure out what [INAUDIBLE]. These are very interesting problems. And if you do have really lightweight gaze tracking solutions, then this will completely transform how we think about interacting with machines. You sit there and [INAUDIBLE] with my eyes, because wherever I'm looking, I want to go. Of course, if there is some interesting distraction, [INAUDIBLE]. [LAUGHTER] But you can imagine in non-critical scenarios, this would be extremely useful, because we get tired speaking. We get tired typing. There are fatigue issues. But with convenience, you can do a lot of things. All right, so let me show one last one and then we'll stop. So thermal IR motion detectors-- I really love them because the principle behind them is so simple and it works so beautifully. A thermal detector is basically a one-pixel camera. It's able to detect the motion of an object-- just one. Actually, there are two pixels. So something that opens the door for you, or turns the light on and off, and so on. It's just two [INAUDIBLE] pixels. How does it work? This is a nice cartoon. Let's say each of those two pixels that are represented here has an aperture that's looking at a very narrow region.
As some warm body-- because it's looking at body heat, thermal radiation-- moves through the space, first it will trigger this detector, and then it'll trigger the second one. Now, if you didn't have this differential measurement-- and by the way, the whole point of that is just the difference between the two. So in the beginning, let's say, the first one gets a spike while the second one has no signal. So the difference is positive. When the warm body is in the middle, the difference is back to zero. When the warm body is here, this one is positive, this one is negative. The difference is as I discussed. This one has a signal, this one's no signal, the difference is negative. OK? So if something moves in front of this, you get this signal that has a very high variation [INAUDIBLE]. On the other hand, if just the temperature in the room increases uniformly, this is looking at background radiation. So all the [INAUDIBLE] in the room will also start radiating energy at [INAUDIBLE]. But then both of them will increase by the same factor, so the difference between the two will remain constant. So this is a very simple way of eliminating slow changes and picking out only the fast ones. So this is a simple cartoon. But in general, you have much more complicated configurations where you have a Fresnel lens, usually in front of the detector. And instead of just two zones, each of them actually looks at an overlapping [INAUDIBLE]. So they go from zone A to zone B to zone A to zone B. You're going to get very high variations in this output. Although again, it's filtered and simplified, but you can easily distinguish it from an increase in the DC level of the [INAUDIBLE]. So that's how a thermal detector works. It's basically a one-pixel camera because the output is just one stream, although it's been sensed with two pixels. So think about how you can build-- this is basically a coded aperture-- it's a smart aperture that's allowing you to use just a single detector to do measuring of motion-- not [INAUDIBLE] but motion. And many animal eyes also use the same [INAUDIBLE] to detect motion. They have just a few [INAUDIBLE], but by using a clever aperture, [INAUDIBLE]. Any questions here? All right. I'll send out the third assignment later tonight or this weekend. And remember your second assignment is not due today, but is due on Tuesday. And feel free to ask me questions, or Professor [INAUDIBLE] or Professor Oliveira. And also [INAUDIBLE].
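[As a closing aside, a minimal simulation of the two-pixel differential reading described above: a warm body sweeping past produces a plus-then-minus swing, while a uniform room-temperature rise cancels out. The zone geometry and all numbers here are made up for illustration.]

def pir_difference(body_pos, ambient):
    """body_pos: body position per time step (None if absent);
    ambient: background radiation level per time step."""
    a_zone, b_zone = (2.0, 4.0), (6.0, 8.0)   # the two narrow viewing zones
    out = []
    for x, amb in zip(body_pos, ambient):
        a = amb + (1.0 if x is not None and a_zone[0] <= x <= a_zone[1] else 0.0)
        b = amb + (1.0 if x is not None and b_zone[0] <= x <= b_zone[1] else 0.0)
        out.append(a - b)                      # the only thing the detector reports
    return out

# Warm body walking left to right: difference swings +1, 0, -1 -> motion.
print(pir_difference([3, 5, 7], [0.2, 0.2, 0.2]))            # [1.0, 0.0, -1.0]
# Room slowly warming with nobody there: difference stays 0 -> no trigger.
print(pir_difference([None, None, None], [0.2, 0.5, 0.9]))   # [0.0, 0.0, 0.0]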
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_4_Computational_Illumination_dual_photography_relighting_Part_1.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: So we'll finish up the computational illumination that I couldn't finish last time. And then, we'll talk about light fields. We'll talk about assignment number two, which is on optics. We have two options. Some other announcements. Make sure you're checking the wiki and the reading material. There's a lot of material we will not be covering in detail, for example, how a camera works, and depth of field, and apertures, and so on. There's plenty of information online. This particular tutor on YouTube seems to be pretty reasonable. The guy looks scary and has very shady lighting. But other than that, he seems to do a good job of explaining for free. And this particular one is about different SLR concepts. I don't know why he has a pink curtain behind him. Very strange. [LAUGHTER] But he does a very good job, basically, explaining all those things. This URL is up there. Good. So there are lots of ideas for final projects. I would like you to start thinking about them. Just come and join us on this side as close to this corner as possible. [LAUGHTER] Yeah, lots of ideas. Just come and talk to me. And we have several mentors listed for the class. Professor Mukaila is there. He's one of the world's leaders in computer vision. So you can talk to him about your ideas. Or Ankit Mohan, who presented last week, is also a mentor. So there are a lot of people who you can talk to. We also have Mathias here. And you can brainstorm some ideas with him about sensors and so on. And I just wanted to give you an update. So these are some of the class projects from last year. So as I said, actually, [INAUDIBLE] photography won the best project award. The bidirectional screen got the student research prize at SIGGRAPH. And we just heard last week that the looking around the corner project won the Marr Prize, which is the top prize in computer vision. So it's extremely prestigious. And Kirmani, who was working on this project, received the number two prize. They gave out two prizes, the best paper award and the second best paper award. And he won that. So hopefully, your project will also be with that level of fame and fortune. All right. [LAUGHTER] So homework, there are three things you need to do. And I don't think those were very clear to everybody in the first assignment. You need to create your own website where all the information is kept. You need to submit the link onto Stellar, and then submit the input and output photos to the Flickr group for the class. Right now, I think I have only about six people on the Flickr group. That means not everybody was able to post their pictures on the Flickr group. And the reason why we have it on the Flickr group is that we can comment on each other's results and so on. Of course, the rest of the world can as well. But we can. And hopefully, as we go along in your assignments, you'll be able to create pictures that are interesting not just visually, because the guy has spent a lot of time with a lot of patience and very expensive gear, but because they're really beautiful computational techniques. And you'll be creating magical photos. Last time, we were talking about people are-- there was a fascination with high-dynamic range.
Right now, there's fascination with the tilt-shift. But hopefully, the next fascination that you'll see on Flickr will originate from some technique that you invent in this class. Maybe a light-field picture could be the next one, which is the assignment for us this week. So there are two assignments, two choices, sorry, for the second assignment. The first one is extending Andrew Adams'-- he's a PhD student at Stanford. You'll hear his name all the time. He's just done wonderful, wonderful work. And we'll be extending his so-called virtual optical bench. So the way it works is you go to his virtual optical bench, which is built in Flash. But you can use anything you want. You can use Java, C++, MATLAB, anything you like. And it allows you to do operations on light as if you're on a real optical bench. So you can put things here. So you can put lenses. And then, you can put mirrors. And you can put blockers. You can put, I believe, diffusers and so on. So you can put any of these elements and, of course, rotate them and so on, to create a really, really useful set of tools to understand how light propagates. So I don't know, if you're building an HCI system, rather than having to draw everything in Illustrator or on a piece of paper, you can just fire up this application. And you can design your whole thing and also understand how light behaves, insert the focal length, and so on. And you can very nicely build your optical setup. So what Andrew has done, and he's graciously shared his source code with us, is he's provided some basic functionality of inserting lenses, and mirrors, and blockers, and diffusers. But it's not complete. So for example, if I change the focal length-- so let's see our optics 101 here. So as you can see, here's our plastic optics. If I put the lens here, then all the rays come into sharp focus. As I move the lens to the left, the focus points also start moving. At some point, the rays start diverging. And at what point will they become parallel if I keep moving the lens to the left? AUDIENCE: [INAUDIBLE] focal length? RAMESH RASKAR: At the focal length. AUDIENCE: Yeah. RAMESH RASKAR: So if I move the lens all the way so that the point is at the focal length, then all the rays coming from that will become parallel. And even if I add a second one here, then all those rays will become parallel. And as I move even closer, they'll start diverging. And you can use this to demonstrate how to build a telescope or a microscope by adding a second lens and so on. So as you can see, you can play with multiple lenses, and things, and so on. So it will be a lot of fun. So what you have to do for the assignment is add more elements such as prisms, and gratings, and so on. And right now, it's just creating-- just shooting rays. But it's not forming an image. On a real optical bench, you'd be able to actually form an image. So you have to write a very simple routine to integrate the light from multiple rays and actually form an image, which is relatively straightforward because if I put a sensor here, all I have to do is go to this pixel and sum up all the rays that reach that point. And that's the intensity at that pixel. And as I move this around, the same pixel has no rays arriving there. So the intensity is zero and so on. And if I add some other point here, then this should look-- this should add up to become a blue pixel, this should add up to become a vertical pixel, and so on. So it's a really simple operation.
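[A minimal sketch of both pieces just described-- pushing 2D rays through a thin lens and then summing whatever rays land on each sensor pixel-- under the paraxial thin-lens model. This is my own simplification for illustration, not Andrew Adams' actual code.]

import numpy as np

def through_lens(y, u, f):
    """Paraxial thin lens: ray height y unchanged, angle u kicked by -y/f."""
    return y, u - y / f

def propagate(y, u, dist):
    """Free-space travel: height advances by angle times distance."""
    return y + u * dist, u

def render(rays, lens_z, f, sensor_z, pixels=64, height=10.0):
    """rays: list of (y, u, intensity) starting at z=0. Returns a 1D image."""
    img = np.zeros(pixels)
    for y, u, inten in rays:
        y, u = propagate(y, u, lens_z)               # travel to the lens
        y, u = through_lens(y, u, f)                 # refract
        y, u = propagate(y, u, sensor_z - lens_z)    # travel to the sensor
        p = int((y + height / 2) / height * pixels)  # bin by height on the sensor
        if 0 <= p < pixels:
            img[p] += inten                          # sum all rays reaching that pixel
    return img

[With a point source placed one focal length in front of the lens, the refracted rays come out parallel and the sensor records a uniform smear instead of a sharp spot, matching the demo above.]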
And maybe you have a window here that shows the formed image. And also, it doesn't have an ability to actually enter specific numbers. So if I want to say, the focal length should be 30 millimeters, it doesn't have a way to do that. It's all based on the GUI. So a few minor additions you have to make. And it's an open-ended assignment. The more you add, the better. I'll give you some suggestions on how you can improve it. So that's option one. And this requires programming, [? section ?] programming. Option two is synthetic aperture photography. So we saw this in the very first class, where you have an array of cameras. And you can see behind an occluder. So with an [INAUDIBLE] camera, if you're seeing through these trees, by doing some software operations on the set of images, you can actually see around-- see through the foliage, and the leaves, and so on. So this is what you'll be doing for your assignment. Now, of course, you don't have this million dollar camera array. So you'll have to come up with a shortcut. And the way we will do that is we will just translate a camera and take multiple pictures. So instead of having the camera array, you can just take your SLR or webcam. Whatever camera you have, even your cell phone camera, just translate it and take maybe about 30 pictures or 40 pictures. And then, you're going to do the same operations. And we'll learn about how exactly it's done, how you can see through some of those. So you'll be creating these see-through effects by eliminating the foreground pixels. So it's a lot of fun to do this assignment. And there's a third option which you may want to take, which is the same-- so for this one, you actually have to make sure that you're moving the camera in a reasonable way. If you have experience with building LEGOs, you can just build a LEGO robot that moves the camera and takes pictures. Or just manually, you can just mark it on a ruler and just put the camera here. Take a picture. Put the camera [INAUDIBLE] here and so on. But all of this involves a little bit of physical work. If you want to just stay in your office, and look at the screen all night long, and do the programming, and never have to go and play with real things, you can do the whole thing just in software. But again, using this virtual optical bench. So you can synthesize the images maybe in OpenGL or using the virtual optical bench with different viewpoints, and add them together, and so on. Not as much fun, but maybe you can create even more interesting effects. So it's just an option. I would suggest going with the real thing as opposed to doing it all in software. All right. So let's go back and finish the few topics we left. We couldn't cover-- well, it's coming up. The reason why I wasn't here for the last class was I was at an event called Gadgetoff in New York. It's a really great event if you get an opportunity. It's the gadgets equivalent of [INAUDIBLE]. And people are blowing up things. Crazy displays. Crazy robots. Crazy cameras. It was a lot of fun. If you go to YouTube and just type Gadgetoff, you will see many of the [INAUDIBLE]. While that's coming up, let me show you something else. So if you remember, we were talking about Google Earth live. And imagine if you can fire up Google Earth and you can go to any part of the world and see it live because, very soon, we'll have cameras on every street light. We'll have cameras on every bus, every taxi. And even people carrying their own cell phone cameras will subscribe to some service.
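[Coming back to the synthetic aperture option for a moment, a minimal sketch of the shift-and-add at its core: each photo from the translating camera is shifted in proportion to the camera's position and the stack is averaged, so the depth plane whose parallax matches the shifts stays sharp while the occluding foreground smears away. The purely horizontal motion, the float-array inputs, and all names here are my assumptions.]

import numpy as np

def synthetic_aperture(images, cam_positions, disparity_per_unit):
    """images: list of (H, W) float arrays; cam_positions: scalar camera
    positions along the translation; disparity_per_unit: pixel shift per
    unit of camera travel, which selects the depth plane to focus on."""
    acc = np.zeros_like(images[0])
    for img, x in zip(images, cam_positions):
        shift = int(round(x * disparity_per_unit))
        acc += np.roll(img, -shift, axis=1)   # align the chosen depth plane
    return acc / len(images)                  # the foreground averages away

[Sweeping disparity_per_unit is, in effect, refocusing: one value locks onto the trees, another onto the person hiding behind them.]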
And they'll be broadcasting. They'll be-- we don't know for what. But they will. And when that happens, when you can really fire up, and go to any part of the world, and see it live, it will be a very different notion than what we think about as mounted cameras. Nowadays, we think of them as mostly for surveillance. But over time, they will become-- maybe we'll be able to use them for beneficial commercial solutions. So the question is when it's going to happen. And some of the things are already happening, like this particular project. I'm sure it scares a lot of people, that you'll be able to go to any part of the world and see it live. But you can also imagine it will be used for some good reasons, definitely for commercial reasons. You can have an index of, what's people's health in this area? If you just put two points and see how long it takes for people to walk across those two markers, that tells you how healthy people are in this town. Or at least, how much in a rush they are all the time. If you are a real estate agent, you can figure out, what's the property value of this area based on what foot traffic you get at a particular restaurant? If you want to go to a restaurant, you can find out, is there a long wait there? And so on. So this was a project at Georgia Tech, [INAUDIBLE] group. And it's an interesting concept. So remember, we had a poll about, when do we think a digital camera as a standalone device will disappear? And most people were between five and 10 years. There were a couple of people who said even two years. I think the mean was somewhere between five and 10. So here's another question. When do you think we'll have Google Earth live, when you can fire up, and go to any part of the world, and see it live? You won't be able to do it in the middle of Iowa. AUDIENCE: What percentage of the world are you saying? RAMESH RASKAR: Let's say at least one city, let's say. AUDIENCE: Wait. At least one city? RAMESH RASKAR: Where you can fire up and go to any one particular city, let's say Manhattan, and see it live. AUDIENCE: So one particular city or any-- AUDIENCE: One city, or any city? AUDIENCE: Not one [INAUDIBLE]. RAMESH RASKAR: Any one particular city. Any one city. AUDIENCE: One city. RAMESH RASKAR: Yeah. AUDIENCE: It's going to have to be-- RAMESH RASKAR: It could be Tokyo. It could be New York. It could be Sioux City. [INTERPOSING VOICES] AUDIENCE: You probably need one [INAUDIBLE]. AUDIENCE: Oh, yeah. RAMESH RASKAR: Yeah. [LAUGHTER] RAMESH RASKAR: But remember, it's not serverless. [INTERPOSING VOICES] RAMESH RASKAR: It's not serverless. AUDIENCE: It's any [INAUDIBLE]. RAMESH RASKAR: It's a commercial service, the same way people are thinking about creating free Wi-Fi networks. AUDIENCE: Can't you do that-- they have a [INAUDIBLE] TV show. Didn't they have something with web cams, publicly available web cams, all along the border that [? essentially ?] watched [INAUDIBLE]. He watched the-- [INTERPOSING VOICES] RAMESH RASKAR: Minuteman. Minuteman [INAUDIBLE] or something. [LAUGHTER] AUDIENCE: People running across-- RAMESH RASKAR: Yeah, I think that's what it was called. AUDIENCE: So I guess that's a film. RAMESH RASKAR: Right. But again, think of it from surveillance. All the efforts we have seen so far are surveillance-based. And the video that you just saw is not about surveillance. It's about watching sports games, getting a sense of what the traffic is on the street, so making-- right now, all these street level maps and aerial maps are completely lifeless.
They're just static snapshots. And you want to add some dynamic element to it. It may not be-- it's not realistic. Maybe you have-- there are technologies that show a flow of people. But you cannot recognize who's who. There are all these other overlays on top of that. So just like when Google Street Maps came around, you could see the people. And now, the faces of the people are blurred, or the license plate numbers. But in the beginning, everything will be free for all. [LAUGHTER] Pretty scary. But-- AUDIENCE: Yeah, it brings up a lot of queer issues of voyeurism. [LAUGHTER] RAMESH RASKAR: Certainly. Certainly. The same issue as Google Street Maps. Yeah? AUDIENCE: I think [? Google Labs ?] were developed [INAUDIBLE] on satellite-- AUDIENCE: Which is when? [LAUGHTER] RAMESH RASKAR: But satellite, that's in the geosynchronous satellite or somewhere that's like a blimp that's floating around? [INTERPOSING VOICES] AUDIENCE: [? The media have ?] their own satellite. And they can take their own picture of the [INAUDIBLE]. RAMESH RASKAR: That's too far away. AUDIENCE: Really? RAMESH RASKAR: And the satellite is too far away. AUDIENCE: You have a small satellite [INAUDIBLE]. RAMESH RASKAR: But you also want to exploit infrastructure set up by somebody else. So you might send your own satellite to look at your ex-girlfriend. [LAUGHTER] But you want to also exploit existing networks. So maybe you won't have to because there'll be enough cameras. Today, Google might come and say, I'll pay you $5 a month. Just aim the camera outside your office, or outside your dorm room, or outside your shop. And we'll pay you $5. Just give us the stream. And for me, it doesn't matter to me. I can just aim it outside. But by using that network, they can provide this service. AUDIENCE: There's a bus-tracking proposal [INAUDIBLE] cameras outside your house. RAMESH RASKAR: See? Yeah. AUDIENCE: They were on route. RAMESH RASKAR: Yeah. AUDIENCE: You can use that to figure out where buses were in the city, because there wasn't enough money in the city to pay for tracking buses. RAMESH RASKAR: See? AUDIENCE: But also, I imagine that-- imagine getting a camera on every cab or something and [INAUDIBLE]. RAMESH RASKAR: Yeah, exactly. AUDIENCE: And then, it's like you're having a fleet of Google [INAUDIBLE] drive your car. RAMESH RASKAR: And they'd like that because they can figure out the condition of the roads. They can tell about traffic. They can tell about potholes. They can tell about rain or no rain. AUDIENCE: Yeah. RAMESH RASKAR: In fact, there was a very nice project in Tokyo where they placed just on-off detectors-- not on-off detectors, but detectors on the wipers of taxis. And from that, they figured out-- they got a map of the whole city and how much it is raining in which part of the city. And that was, of course, much more accurate than the weather reports that they were getting because they just measure it in some bucket, how much it's raining. And here, they're not measuring rain. They're just measuring how fast the wipers are moving all over the city. So that's an indirect way of capturing visual data. So one city, how many years would it take? AUDIENCE: You're going to do a poll? RAMESH RASKAR: Yeah, we'll do a quick poll. And you're going to put your vote on the line. AUDIENCE: I'd say two. RAMESH RASKAR: Two years? AUDIENCE: Yeah. [LAUGHTER] AUDIENCE: Yes. AUDIENCE: Either be a European city-- [INTERPOSING VOICES] [LAUGHTER] RAMESH RASKAR: Were you two years last time as well? AUDIENCE: Yeah. RAMESH RASKAR: Yeah.
[LAUGHTER] All right. So you're a two-year guy. All right. 10 years? AUDIENCE: Yeah, one city? RAMESH RASKAR: Just one city, yeah. AUDIENCE: One city in 10 years? RAMESH RASKAR: Yeah. AUDIENCE: So which city, Las Vegas? [LAUGHTER] [INTERPOSING VOICES] AUDIENCE: It would be probably some city that wants to-- No, [INAUDIBLE] doesn't want to be in that [INAUDIBLE]. RAMESH RASKAR: It has to be a city where something interesting is happening. [INTERPOSING VOICES] RAMESH RASKAR: So I'm guessing it's not in the US, first of all. We've got too many privacy issues. AUDIENCE: Yeah. RAMESH RASKAR: It's going to be-- [INTERPOSING VOICES] AUDIENCE: It would be like a Hollywood, but maybe in another country, the equivalent of-- [INTERPOSING VOICES] RAMESH RASKAR: Yeah, I can imagine a city like Hong Kong or something. AUDIENCE: Yeah. AUDIENCE: Singapore would do it. AUDIENCE: Yeah, Singapore. [INTERPOSING VOICES] [LAUGHTER] RAMESH RASKAR: Yeah, all right. Yeah, any city in the world. [INTERPOSING VOICES] RAMESH RASKAR: So 10 years? AUDIENCE: Sure. RAMESH RASKAR: Yeah, and we should go the other way. Five years? Sorry. [LAUGHTER] Wow. 10 years? And never? [LAUGHTER] Wow, Jamie. Only satellites? AUDIENCE: Yeah. RAMESH RASKAR: [INAUDIBLE] [LAUGHTER] Don't cross paths with Jamie because he'll still-- he'll put his satellite on your trail. AUDIENCE: $8,000? AUDIENCE: What? AUDIENCE: It cost $8,000 to put in. AUDIENCE: It's much improved if you have a small satellite. And you guys, you can implement [INAUDIBLE]. RAMESH RASKAR: But the imaging is not that easy. AUDIENCE: The optics aren't easy. AUDIENCE: It's not easy. RAMESH RASKAR: Horrible image from 36,000 kilometers. AUDIENCE: Yeah, that's the problem. [LAUGHTER] RAMESH RASKAR: Yeah. AUDIENCE: [INAUDIBLE]. AUDIENCE: [INAUDIBLE] software reconstruction. [INTERPOSING VOICES] RAMESH RASKAR: Yeah, just zoom, zoom, zoom, yeah, in software. [INTERPOSING VOICES] AUDIENCE: In software. [LAUGHTER] [INTERPOSING VOICES] RAMESH RASKAR: Is that it? AUDIENCE: It is also a lot of data storage. You can store the whole world's history-- RAMESH RASKAR: Right. AUDIENCE: --based on this Google Earth. If you have the records and archives of this, then you can actually track back all the crime scenes to [INAUDIBLE]. And you need super huge data storage for that because storing the complete world's live history is difficult. So it depends what's important and what you really want to store. AUDIENCE: Store it on [INAUDIBLE]. RAMESH RASKAR: Certainly. [LAUGHTER] Storage on the [INAUDIBLE] would be happy if you come up with ideas where you will store more and more. I remember back in the days when Mosaic was around in '92. I remember if I opened the Mosaic homepage, they listed every website in the world. Do you remember this, Mathias? [LAUGHTER] Right? And over time, of course, there's no way you can make a list with everything so dynamic. You still have places like archive.org, I believe, where you can time shift and look at how a page looked over the last 10 years or so. I don't know how far back they go. But you can look at-- you can go to my homepage, for example. And you can see how it looked every few days over the last 10 years. AUDIENCE: There's a [INAUDIBLE], though. You have to be a certain popularity, don't you, to make it there? RAMESH RASKAR: I'm sure, yeah. But still, they are keeping a snapshot of the whole world wide web over time-- and over time we will make a record of the whole world visually, not just of the websites. So it's going to happen.
AUDIENCE: [INAUDIBLE] with the book? Because based on it-- because all the URLs [INAUDIBLE]. RAMESH RASKAR: Yeah, everything printed or anything. AUDIENCE: Yeah, that's something that would be like, [INAUDIBLE] telephone. You couldn't pick up. RAMESH RASKAR: Right. AUDIENCE: [INAUDIBLE] [LAUGHTER] AUDIENCE: Yeah, [INAUDIBLE] like the telephone book. I imagine if there's an equivalent for the internet, almost every year, you've got this bigger and bigger Yellow Book. It's awful. RAMESH RASKAR: So the complexity of data, and bandwidth, and computing, and memory just grows exponentially. I remember when I was in grad school, the hot topic was how to assign IP addresses to mobile devices like mobile laptops. That was a huge research problem. And now, we are flipping through it all the time completely seamlessly. So we don't even think about that as a problem. So it's a similar situation here in the visual domain. Remember, this is going to be the decade for visual computing. But we're not just talking about social implications or business opportunities in this class. We really want to think about, what kind of cameras make sense? If you're going to put a camera on every street light or on every taxicab, what kind of cameras should be developed so that it's compatible? If you put a camera on a taxicab and you keep taking photos, it's all going to be blurred. It's going to be totally useless. If you put a camera on a streetlight, it'll have a wider field of view. But you will not be able to see anything specific. There are all these issues that come up. Maybe it should exploit different wavelengths. It should use different optics, different processing. Maybe they should talk to each other, and do some coordination, and so on. So these are some really interesting challenges. So it could be one of your final projects. All right. We can go finally up here. So last time, we were talking about computational illumination. And we saw several examples like creating cartoons, [INAUDIBLE] for CI, or [INAUDIBLE] flash matting by taking pictures of the foreground and background. And we'll cover just a few more projects before we switch over to optics and light fields. So here is a [? sensitive ?] paper where if you wanted to create an image that's more comprehensible, like of a leaf, where if you take any one of the photos, it may not show the structure very well. You can just put an object, keep the camera fixed, put it in the video mode, and just move the flashlight around just randomly. And you collect a set of photos, maybe three, four, five, 10. And then, you do some computation on that to create an image, where you can enhance the shape or you can-- see here, you're just enhancing the shape. Here, enhancing the detail and so on. So there are lots of techniques that they showed, for example, here are five and four images. Yes? AUDIENCE: What do you mean by the shape and [INAUDIBLE]? I don't see the difference [INAUDIBLE]. RAMESH RASKAR: So in the case of the leaf, you want to show the leaf, how the folds-- AUDIENCE: But literal shape versus, for the texture, shape. RAMESH RASKAR: The terrain. AUDIENCE: All right. RAMESH RASKAR: That's what I mean by shape, not just the outline. But for example, here, it's the shadows. It's illuminated as if the light is at a very crazy angle. So the relief of the terrain is more clearly visible here. So you can see shadows here and so on. So we know that this particular structure is much higher than this particular area. And so the height field of that is enhanced.
On the other hand, this one shows all the texture in great detail. But it looks like a very flat leaf. So you can have knobs in your software which say, show me more of the detail or show me more of the shape. And you can get it off the internet. And they came up with multiple methods. This particular method, again, put this 3D object, move it around, and create this particular-- so this is my older method, the one I showed for doing the day and night images. So they compared with that. And their claim is that if they use my method, the shadows are preserved, which is true. And in their methods, they can create more beautiful combinations by using a multiscale decomposition with a bilateral filter. And we'll talk about that at some other point. So this is very unique. As a photographer, you would never think of setting up the lights in such a way that in post-capture, you can decide whether you want to highlight the shape or the detail. So that was, so far, light position we were changing. But there are lots of other parameters we can change for light. So let's look at this project called dual photography, one of my favorite projects. And we also saw it as a teaser, where the experiment they want to do is read this card from this camera. Although it's facing away from the camera. So what they're going to do is place a projector in the line of sight and some reflective surface. If you shine light from this projector on just one spot, it's going to bounce around on the book and eventually arrive at the camera. And by shining one light at a time, this is what the camera will see directly. So again, this card is facing away from the camera. You're going to shine different pixels on this card. And you do this a million times because you have a million pixels in the projector. So if you're taking a million photos, this is what you'll be able to see from the camera. So this is a dual photo. AUDIENCE: What does the [? plan it look ?] like? RAMESH RASKAR: [INAUDIBLE] looks like this. AUDIENCE: Oh, OK. So that's it. RAMESH RASKAR: Yeah, any one of them, all the photos look almost like this. Except when you shine the red part of the card here, this book will look a little bit reddish because of the color cast. And when you shine the yellowish part-- the projector shines the yellowish part on the card-- there will be a yellowish glow. That's it. And from that, you can figure out how this works. AUDIENCE: It still requires line of sight. RAMESH RASKAR: Sorry? AUDIENCE: It still requires line of sight of the object, right? RAMESH RASKAR: The projector has to be line of sight. [INTERPOSING VOICES] AUDIENCE: So you had to put something in the line of sight. RAMESH RASKAR: Yeah, but it's cute. I agree. It may or may not be practical. But it's very interesting. There was a very-- a project that received a lot of press about five years ago, where some guy came up with a technique to figure out what you're reading on your monitor by just looking at the window. So imagine it's a nighttime scene. And you are in the room working on your monitor. And if you stand outside-- somebody stands outside the window, they can see the glow from your monitor in the room. Now the question is, can they figure out what's on the monitor by just looking at the glow in the room? AUDIENCE: Yes. RAMESH RASKAR: How? AUDIENCE: Because they can synchronize with the scan of the monitor. RAMESH RASKAR: Exactly. So it doesn't work for an LCD. But if it's a CRT monitor, only one pixel is being illuminated at a time.
So if you have a photodetector that's aimed at the window, and it's running at the speed of the CRT, and you synchronize it, you'll be able to actually read what's on the CRT monitor. So for TVs, at least old TVs, it's running only at about 30 Hertz or 60 Hertz. So it's really easy to figure out what's-- if the image is sufficiently high-contrast, you can figure out what was on the TV or the monitor. AUDIENCE: This is how they try [INAUDIBLE]. RAMESH RASKAR: No. AUDIENCE: So if that light from the electromagnetic radiation [INAUDIBLE] component on your screen is tracked by an optical [INAUDIBLE], then you can deduce what [INAUDIBLE] because the device emits radiation. So that [INAUDIBLE]. So it [INAUDIBLE]. RAMESH RASKAR: Yeah, but they're looking at RF. They're not looking at the visual spectrum. AUDIENCE: Oh. RAMESH RASKAR: Right? AUDIENCE: [? One is ?] spectrum RF. RAMESH RASKAR: Yeah, RF spectrum. AUDIENCE: Yeah, and actually, a while ago, there were Tempest Proof Fonts that were supposedly able to mask your online activities from Tempest spying. AUDIENCE: Yes. AUDIENCE: Yeah, I don't know how well those work, though. There are new methods also to directly read memory off of people's computers. So you don't even need to think about what they might be running. [INAUDIBLE] RAMESH RASKAR: Even disk memory or RAM? AUDIENCE: No, RAM. Wireless. RAMESH RASKAR: Based on the-- AUDIENCE: EM. RAMESH RASKAR: Based on electromagnetic. AUDIENCE: Yeah. RAMESH RASKAR: So there's a lot more coming. AUDIENCE: Yeah. RAMESH RASKAR: So with Google Earth live, you were worried about only what's in the line of sight. There's much more. [LAUGHTER] All right. So how does this work? So again, you have a camera. You have a [? projector. ?] You're going to take-- this is how the camera looks at it, and this is how the projector looks at it. But eventually, you'll be able to compute the image that looks like the one on the right. And all the shadows, and refraction, and reflections, they're all captured as well. All right. So I'm going to go through very quickly because we covered it. But here's the point, that you're going to turn on one point and some light will reach the eye. And that's just a regular photo in the primal domain. Now you can swap the light and the eye. And this is Helmholtz reciprocity in the optical domain. If you swap the two and shine light at the same spot that the eye was looking at, you will receive the same exact intensity. And it takes into consideration the bidirectional reflectance distribution function, the R-squared falloff, and all those things. So the duality is extremely powerful. And it can be exploited in many ways. So this is how a barcode scanner works. When you go to the checkout aisle, you have a scanner that's just scanning a laser stripe across the barcode. And there's just a single photodetector. You hit a white spot on the barcode, light gets dispersed. And you can detect it. If it hits a black spot, then you don't see it. Now instead of using a photo sensor-- sorry, let me go back. So this way, by scanning light and using a photodetector, what we have created is basically a situation where you have a camera and an omnidirectional light source like a bulb. So that's the barcode scanner. It's as if the barcode was lit by just a flashlight and you took a picture with the camera, except the barcode scanner cheats. It does not have any camera inside. It just has a laser scanner and a photodetector.
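To make that flying-spot duality concrete, here is a minimal Python sketch. This is my own illustration, not the scanner's actual firmware or anyone's published code; the scene array and all the names are made up. The point it demonstrates is just the one above: light one spot at a time, integrate everything at a single photodetector, and an image falls out with no camera anywhere.

import numpy as np

# Toy stand-in for the scene: how much light each scene point sends toward
# the single photodetector when that point is lit. (Made-up data.)
rng = np.random.default_rng(0)
coupling = rng.random((64, 64))

image = np.zeros_like(coupling)
for i in range(64):
    for j in range(64):
        spot = np.zeros_like(coupling)
        spot[i, j] = 1.0                        # projector lights one spot
        image[i, j] = (coupling * spot).sum()   # detector integrates everything

# 'image' now equals 'coupling': a scanning spot plus a single photodetector
# behaves like a camera plus an omnidirectional bulb -- the barcode-scanner duality.
assert np.allclose(image, coupling)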
So that's the simplest version of dual photography, that you can record light by the so-called flying-spot principle. You illuminate the spot. And you see how much light was reflected from it in aggregate. So that's the basic reciprocity. Now, so let's look at the math for this very briefly. And it's very straightforward. So just follow me. One step at a time. So let's say I turn on the first pixel of the projector and record the intensity of light. There is-- can't get my [INAUDIBLE]. I record the intensity for the first pixel. Then I turn on the second pixel. I record the intensity for the second pixel and so on. Now, if you replace the photo sensor with a bulb and the projector with a camera, the claim is that you will see the same exact intensities. So with the light, I'm going to just floodlight the scene directly. And the claim is that the first pixel will receive the same light that the photodetector would have received over here and so on. Is that clear? AUDIENCE: But you said, for example, the first pixel is getting reflections from all over the rabbit? Yeah. RAMESH RASKAR: That's the point, that it will reflect all over the place. But if you look at that point directly, because of the duality of light, you can swap your eye and the light source. And then, you see the same exact intensity in that particular direction. In case of the analogy I gave you of being able to see what's on your monitor by looking out the window, you're just-- the light from your monitor lights up the whole room. And some of the light leaks through the window. But it doesn't really matter. There's some proportion of light that leaks through the window. And when the next pixel turns on, the same proportion of that light leaks through the window. AUDIENCE: Yeah, I get that, because every pixel is being turned on and off in order. RAMESH RASKAR: Right. AUDIENCE: But here, all the pixels are being [INAUDIBLE] at the same time. RAMESH RASKAR: No, in the experiment, only one pixel is being turned on at a time. AUDIENCE: But the light bulb is shining-- it's also flickering the same way the projector is-- [INTERPOSING VOICES] RAMESH RASKAR: No, it's not. It's not. That's a good point. So here, you have to understand that only one of them is turned on at a time. And we are recording it on the photodetector. Now when you have the light source, the point you're making is that light will reflect-- light will be coming from here. It will bounce from here. And it will reach here. But light could also bounce from somewhere else and go in the same direction. [INTERPOSING VOICES] RAMESH RASKAR: It will not because when you shoot a ray from this projector, it will hit only one point. And that particular point is lit by this light in only one direction. There is still just a one-to-one mapping between the two. AUDIENCE: Pictures of light, it bounces at some spot. If it's a mirror, it bounces on that spot again. RAMESH RASKAR: That's a very good question. So when you have inter-reflections, we'll see that in the second half, how we deal with it. So this is a very simple demonstration, where we have only one-to-one correspondence between the direction of the light and the direction of the pixel. AUDIENCE: So you could do a camera just with the projector, and ADR, and the sensor? RAMESH RASKAR: Yes, basically. AUDIENCE: Because you have prior knowledge of the scene layout, imaging may dramatically reduce the number of-- RAMESH RASKAR: Measurements. AUDIENCE: --measurements they need.
For that card example, you didn't actually need to do a million to figure out what color it is and how they do it. RAMESH RASKAR: How would you do it? AUDIENCE: Figure out where the symbol is first. AUDIENCE: Adjust the corner. AUDIENCE: Find the corner by-- [INTERPOSING VOICES] RAMESH RASKAR: So you're saying that-- AUDIENCE: --figure out [INTERPOSING VOICES] RAMESH RASKAR: You're saying the white part of the card, I don't need to illuminate that at all because I know it's only white. Is that what-- AUDIENCE: Sorry? RAMESH RASKAR: The white part of the card, playing card, you're saying, I don't need to shine those parts because there is no information there. Is that what you're saying? AUDIENCE: You don't have to shoot the whole card. Hell, you can just shoot the little tiny corner where the K or the Q or the seven is. And then, that's all you need. You can-- RAMESH RASKAR: But then, you capture only the K. You'll not capture the rest of the card. AUDIENCE: Well, that's all you need to-- [INTERPOSING VOICES] AUDIENCE: Yeah, but to get-- [INTERPOSING VOICES] RAMESH RASKAR: All right. OK. [INTERPOSING VOICES] RAMESH RASKAR: Now you're thinking about actually playing bridge or something. AUDIENCE: Yeah, right. [LAUGHTER] Irrelevant information is-- RAMESH RASKAR: Yeah, it could be. But you don't know where it is. AUDIENCE: Well, if you use the projector to first scan one horizontal and one vertical, you get enough information to figure out where the-- RAMESH RASKAR: Where there's some-- AUDIENCE: --information is. RAMESH RASKAR: Certainly. You can come up with some optimizations. If you really want to-- [INTERPOSING VOICES] RAMESH RASKAR: If you really want to read those cards, if that's all you want to do, I'm sure you can come up with interesting things. AUDIENCE: But for other applications, where you know what you're looking at, but you're looking for some very specific thing. RAMESH RASKAR: Exactly. [INTERPOSING VOICES] RAMESH RASKAR: So you could do some binary search. AUDIENCE: Right. RAMESH RASKAR: So for example, when JB asked, can I just use a projector and this photosensor to look at something, if all I want to do is locate some retroreflective dot, then, yes, you might be able to just shine a whole vertical line, sweep across, and then another line and sweep across. And that will quickly give you the x and y-coordinates of that spot. So yeah, you could come up with quick-- some variations in the [? scan. ?] All right? But this is a standard technique that we have seen many times. We're going to go beyond that now. But is this part clear now, that we're going to measure the intensities and they'll be the same in both directions? Now, we're going to go a little bit further. All right? So this is what they get. These are the images they get by using this mechanism. All right? Not great quality, but reasonable. There's a bunny there. All right. And the flying-spot camera, that's exactly how it works. You have a spot that's moving around. And that's how scanning electron microscopes work. It's not a high-quality camera, but a high-quality light source that's very focused. So you just have a sample. But you just illuminate the sample at only one location. And then, you have a detector that integrates over the whole sample. And so you can do a very good job of that. Now instead of-- for the sensor, we're going to make it a little bit more complicated and replace that with a camera.
So what you're going to see throughout this class today is how we can start thinking in higher dimensions and realizing that the appearance of the world is not just flat 2D. But actually, it's much higher dimensional. In this particular case, how many dimensions do we have? When we had a photodetector, the photodetector is zero-dimensional. And the projector is two-dimensional. So the measurement was two-dimensional. And we got an image out. How many dimensions do we have here? AUDIENCE: Four. AUDIENCE: Four. RAMESH RASKAR: Four. We have two for the projector-- x, y, the frame of the projector-- and two for the camera, the u, v. [INTERPOSING VOICES] RAMESH RASKAR: Sorry? AUDIENCE: What about the time dimension? RAMESH RASKAR: There's a time dimension as well. But right now, everything is static. And things are not changing over time. You could say there's wavelength and so on. But we are ignoring that as well. We're just assuming it's just [INAUDIBLE]. So now, we're going to think in four dimensions. All right? So now, what we're going to do is exactly the same thing. We're going to set up some coordinate system, PQ for the projector, MN for the camera. And we will realize that just the same way we swapped the eye and the light source, we'll be able to swap the camera and the projector. Coming back to [INAUDIBLE], same trick. We're going to turn on one particular pixel of the projector and record it at one particular pixel of the camera. Except because it's the camera, I can read all the MN pixels simultaneously in one snapshot. I don't have to read them one at a time. But the projector, I can only turn on one PQ pixel at a time. All right? So that's a 4D mapping. I have four parameters, P, Q, M, N. And for every P, Q, M, N, I have one value. It's a function of four variables. All right? And then to capture that, it's relatively straightforward. I'm going to turn on one pixel at a time of the projector. And I'm going to record one image, m times n. And this is a very standard method in linear algebra, where instead of representing it as a 2D array, you represent it as a single vector. So let's say-- let's take some numbers here so it becomes easier to remember. Let's imagine the projector is 1,000 pixels by 1,000 pixels. And the camera is, again, so let's say it's 2,000 pixels by 2,000 pixels. So this particular vector is now going to be 1,000 times 1,000, by one. So you'll have one million entries corresponding to each pixel of the projector. The image that you get is not 2,000, but 2,000 times 2,000. It's four million. So four million entries in this particular vector. Good so far? Now we're going to come up with a relationship that maps these values to these values because, remember, when I turned on one pixel here, it made contributions to all these 4 million pixels out here. And we're going to try and represent that mathematically. And it's really, really straightforward. We'll just go slow, one at a time. And that transformation we're going to specify as this matrix T, which is now, what's the dimensionality here? M times N is-- AUDIENCE: 4 million. RAMESH RASKAR: 4 million. And P times Q is? AUDIENCE: Million. RAMESH RASKAR: 1 million. This is a huge matrix. 4 million by 1 million. So this is already in terabytes. Sorry, gigabytes [INAUDIBLE]. So 4 terabytes. All right. Now this is how we're going to build this matrix. We're going to fill up the values in this matrix. Remember, every time I turn on a projector pixel, I get 4 million values.
Next time I turn on a projector pixel, I get 4 million values in the camera. So 1 million times 4 million is what we would need. So very simple. I'm going to turn on the first pixel of the projector and see what image I get. And that value will go here. All right? So the 4 million values will simply go here. And that will be measured. That will be captured. So again, turn on the first pixel of the projector. See what image I get, which will be these 4 million values. And those values are simply the first column of this transform matrix. Then I turn on the next pixel of the projector, record the image in the camera. And that becomes the second column of this transformation matrix and so on. Now, it's very easy to see what's going on here. If I turn on both pixels, the first pixel as well as the second pixel, what image should I get? AUDIENCE: I have two. AUDIENCE: Sum of both the columns. RAMESH RASKAR: The sum of the first two images I got. So again, if I turn on the first pixel, I took a photo. I turn on the second pixel, I got the second photo. If I turn on both pixels at the same time, I will simply get the sum of those two photos. And that's what is being represented here mathematically. So if I put a one here and a one here, then the matrix multiplication will say, these two values multiplied by the one and one here, the sum of that goes in the first value. And again, the two values in the second row multiplied by, again, the one and one here give the second value here and so on. So we're going to set up this huge linear system by just probing the scene one pixel at a time. And so filling up this T-matrix is relatively straightforward. I turn on the third pixel, take a photo. And that photo becomes my third column. Very simple. And you keep doing that. And you can build your T-matrix. Again, a lot of data. But that's research. All right. [LAUGHTER] Now let's say you have this wall or a simple screen. And you put a projector and a camera. What would happen? If I turn on one pixel of the projector, how many pixels of the camera will get a non-zero value? AUDIENCE: One. RAMESH RASKAR: Only one. If I turn on the next pixel of the projector, some other pixel of the camera will get the value. But only one pixel in the whole image will be non-zero. So if you have a scene that's really, really simple, maybe just flat or a convex object, in T, every column will have only one value that's non-zero, or maybe a couple of values if the camera has a higher resolution. So the matrix T will actually look very sparse. Most of the values are zero. Only some values, almost along the diagonal, will have non-zero values. On the other hand, if you have a scene with lots of inter-reflections, if you're looking at a corner of a room or there's a glass bottle and all this complexity, when I turn on one pixel of the projector, many pixels in the camera actually get the intensity. So first because of the bottle. The next because of the screen. The third because of inter-reflections in the screen. And so multiple pixels in the camera will be lit. And so the matrix T will be very dense because, remember, in every column there, more than one entry will be non-zero. So that gives us some-- if I just look at the matrix T, if somebody just shows you the matrix T, the gigabytes or terabytes of data in just a visual representation, I can tell you what kind of scene it came from. So just to make sure it's clear.
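As a minimal sketch of that capture procedure, at toy resolutions and with a made-up scene (this is an illustration of the idea, not the authors' code): build T one column per projector pixel, then check the superposition property that was just described.

import numpy as np

P, C = 16, 64          # projector pixels, camera pixels (toy sizes)
rng = np.random.default_rng(1)
# Hypothetical ground-truth transport of the scene (unknown in practice);
# the mask keeps it sparse-ish, like a simple scene.
T_true = rng.random((C, P)) * (rng.random((C, P)) < 0.1)

def capture(pattern):
    """Simulate the camera photo for a given projector pattern (length P)."""
    return T_true @ pattern   # light transport is linear

# Build T one column at a time: turn on one projector pixel, take a photo.
T = np.zeros((C, P))
for j in range(P):
    pattern = np.zeros(P)
    pattern[j] = 1.0
    T[:, j] = capture(pattern)

# Superposition: two pixels on at once = sum of the two single-pixel photos.
both = np.zeros(P)
both[0] = both[1] = 1.0
assert np.allclose(capture(both), T[:, 0] + T[:, 1])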
So between the projector and the camera, there is a transform matrix. When I turn on the first pixel, the photo that appears because of the first pixel, I'm just going to put here. If the scene was very simple, we had a projector, and a camera, and I turned on one pixel, and it reached only one camera pixel, then only one of the values will be non-zero. The rest of the values will be all zeroes. And in fact, for the next projector pixel, only one of the camera pixels will be lit and so on. So maybe you'll have a value here, a value here, a value here, a value here, maybe a value here, and so on. Every column will have only one value non-zero. Everything else is just zero. And that's a sparse matrix. On the other hand, you've got a scene that has a lot of complexities. There's a glass bottle here. There's an inter-reflection here and so on. But you turn on one pixel of the projector, it reflects on here. It reflects on the back of the bottle. And some other pixel, it hits this wall. And some other pixel, light reflects off [INAUDIBLE] pixel. So maybe 10 or 15 different pixels of the camera [INAUDIBLE]. So in this case, when I write the matrix, for the very first entry in the first column, multiple values will be non-zero. And when I turn on the second pixel, similarly, a lot of values will be non-zero and so on. So this matrix will actually look pretty dense. A lot of values are non-zero. So by just visually looking at the structure of this matrix, I know how complicated this is. This is very useful. And this will keep coming up as we go along. Is this clear? So this is what we call our primal space, where we turn on one pixel of the projector. Take a photo. Now we want to create the dual space. That was our dual photography problem. How will the picture look if I put a camera where the projector is? And we're going to solve that by doing very simple operations on this matrix. So again, we spent all this time taking a million photos by turning on one projector pixel at a time. Now is the time to see the money. This is the problem I want to solve. If I put a camera where the projector is, how does it look? So I need to come up with a modified transformation that will map my camera pixels to projector pixels. Right now, we're mapping projector pixels to camera pixels. That's the question we have to answer. And let's call it T double prime. And its dimensionality is now-- the upper one was one million by-- sorry, four million by one million. The bottom one is one million by four million. That already gives you a clue of what you could do. So if I turn on pixel j of the projector, pixel i of the camera gets a certain value. And it corresponds to this column and this row. Now, let's call it, well, pixel [INAUDIBLE]. And now, when I put the camera at the projector, I want to say, which pixel of the camera-- when I look at a pixel i of the camera, which projector pixels contribute to that? So I took these million pictures. Only when some of the projector pixels were turned on, I got a non-zero value at this particular pixel. And that's also a very similar structure. So the hint for that is the matrix T double prime is simply a transpose of the matrix T. You just flip it along its-- you just swap the horizontal and vertical coordinates of the matrix. And that gives you the T double prime. As simple as that. And from that, we can compute an image that appears as if the camera was placed at the projector. So here are some examples.
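Before the examples, the dual step itself is one line of linear algebra. A hedged sketch at the same toy scale as before (my own names and sizes, not the paper's implementation); the last two lines also show the "virtual slide projector" trick that comes up shortly, applied entirely in software after capture:

import numpy as np

P, C = 16, 64
rng = np.random.default_rng(1)
T = rng.random((C, P))        # transport matrix, captured column by column as above

# Primal: photo seen by the camera with the projector fully on (flat field).
floodlit_primal = T @ np.ones(P)

# Dual: the transpose maps camera pixels back to projector pixels, giving the
# image as if the camera sat where the projector was.
dual_photo = T.T @ np.ones(C)

# Virtual slide projector: relight the dual image with a 1010... pattern,
# chosen long after the data was captured.
slide = np.tile([1.0, 0.0], C // 2)
relit_dual = T.T @ slide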
So here's a picture taken when the projector was floodlighting the scene. This is how the camera looks at it. We're going to probe it multiple times. And we compute an image as if the camera was placed at the projector. In this case, it's still all diffuse, only [INAUDIBLE] mapping. There's no inter-reflection. There's no glass and so on. So it's really easy to compute. Here are more challenging examples. So where is the projector in this case? It's a little bit to the left. You can see all the shadows going back very deep. And now, we're going to create a dual image from this photo as if the camera was placed at the projector. Will we see shadows when we create that picture? What will we see? AUDIENCE: Well, you won't [INAUDIBLE] shadow. RAMESH RASKAR: You won't see the shadows because now your camera is placed where the projector was. But you'll see something else. [INTERPOSING VOICES] AUDIENCE: --shadow on the horse. RAMESH RASKAR: Sorry? AUDIENCE: You can see that shadow on the horse. RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: So why do we have a shadow on the horse? [INTERPOSING VOICES] RAMESH RASKAR: So now, although the horse was lit from where the projector was, we swapped the projector with the camera. And from the camera's viewpoint, the horse was actually occluded. So when I swap the two, now I don't see the light on those parts-- I see shadow. AUDIENCE: So are those real? Or are they simulated? RAMESH RASKAR: This is real. This is-- yeah, they spent days and days capture-- [INTERPOSING VOICES] RAMESH RASKAR: Days and days capturing this data. AUDIENCE: So this is in all the shadows. RAMESH RASKAR: Yeah, you can think of it as an occlusion. What is occlusion? AUDIENCE: The [INAUDIBLE]. RAMESH RASKAR: How do you know-- [INTERPOSING VOICES] RAMESH RASKAR: How do you define occlusion? AUDIENCE: You [INAUDIBLE]. RAMESH RASKAR: How would you define it more precisely? AUDIENCE: Light can't reach it. No light. RAMESH RASKAR: Light cannot reach. So if I shoot a ray, we're going to talk about rays a lot in the second half, when you shoot a ray, a ray is occluded. And it stops the path of the ray. And that's occlusion. So when we think about light, it's very obvious because the ray starts from the light source, and it's occluded, and there's a shadow on the other side. But even for cameras, there's occlusion. If I'm looking at this monitor, I cannot see what's on the other side. So this is the inverse ray because I'm shooting the rays from my eye. In fact, the Greeks used to think that the way we look at the world is we shoot the rays from the eye. And they were right. [LAUGHTER] Because of the duality of light, you can just shoot the rays from your eye. And it's the same math after that. AUDIENCE: So is it like in computer graphics rendering, [? raycasting? ?] AUDIENCE: Raycasting, [? feature ?] install. And the question here is, was it totally dark before they illuminated it all? [INAUDIBLE]? RAMESH RASKAR: No, [INTERPOSING VOICES] RAMESH RASKAR: There are certain. AUDIENCE: --shown everything. RAMESH RASKAR: They're going to [? live with ?] it. AUDIENCE: So are we going to use this in a room like this with standard lighting unless we [INAUDIBLE]? RAMESH RASKAR: No, no, no. [INTERPOSING VOICES] RAMESH RASKAR: Whatever ambient term you have, it will remain. Or you can take two photos, one with the light source, one without the light source, and subtract it away. So you can do all those things.
But you're right, that I cannot do this in broad daylight because the light-- it's like taking a flash photo in broad daylight. The flash doesn't make any difference because it can't override the sunlight. Or it doesn't make a significant contribution above the sunlight. It's the same thing here. AUDIENCE: So obviously, talking-- how easy is it to get that [INAUDIBLE]? RAMESH RASKAR: If you have enough memory. [LAUGHTER] [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE]. AUDIENCE: You can do it in MATLAB. But it's [INAUDIBLE]. RAMESH RASKAR: So it's not that easy to take the-- if you have a small matrix, taking a transpose, it's very simple. Just M squared. Just take the value and just change its x and y-coordinates. Make x y and y x. It's very simple. Even a million by-- 1,000 by 1,000 image, 1,000 by 1,000 matrix, no big deal. But eventually, you'll run out of memory. If you load the whole matrix and try to transpose it, it's going to be a challenge. Just think about taking a sheet of paper. And all I want to do is do its transpose. If the room is of limited size, I can take bigger and bigger sheets and do the transpose. At some point, if you give me a really large sheet, I won't be able to do the transpose. Although, the task is very easy. AUDIENCE: You don't even need to do that. You just omitted that issue? RAMESH RASKAR: Sorry? AUDIENCE: All the images you have are each of the columns. So can't you just-- RAMESH RASKAR: So you can do it mathematically. [INTERPOSING VOICES] RAMESH RASKAR: But you may not be able to even load the whole thing in memory. AUDIENCE: [INAUDIBLE]. AUDIENCE: But you don't need to. You just access-- RAMESH RASKAR: Exactly. [INTERPOSING VOICES] RAMESH RASKAR: Exactly. So you just do this-- AUDIENCE: --forget about it. RAMESH RASKAR: Exactly. So you won't be able to do this in a real-time manner. But if you do some disk access, then you'll be able to do it. And in the paper, they describe all these challenges. And if there's time, I'll just show you very briefly what approximations they came up with. I'll ask them to load this quickly. AUDIENCE: Very briefly, does it exist, a projector and camera in the same package? Then you can switch between shining light through and [INAUDIBLE]? Because it's always different in [INAUDIBLE] projector and the camera. RAMESH RASKAR: Right. AUDIENCE: But now that they are friends-- [LAUGHTER] --they're-- and you could easily see through, probably seeing things, and-- RAMESH RASKAR: Right. AUDIENCE: But does it exist in one package or [INAUDIBLE] one? RAMESH RASKAR: Just to clarify, in this particular case, it doesn't matter where the projector is and where the camera is. AUDIENCE: Sure. RAMESH RASKAR: You can just put them arbitrarily. AUDIENCE: Yes. RAMESH RASKAR: But they can't be too far away. But as long as one can see the impact of that. Now are you saying that they should be optically coaxial? AUDIENCE: [INAUDIBLE] it could be interesting for [INAUDIBLE]. RAMESH RASKAR: Certainly. My thesis, by the way, my PhD thesis was projector as a dual of camera. AUDIENCE: Can you build one? RAMESH RASKAR: I built many versions. But actually, in Hiroshi's group, there's a concept called I/O bulb. And then-- AUDIENCE: And this concept [INAUDIBLE] the work. And it was more asking of a physical device [INAUDIBLE]. RAMESH RASKAR: But I remember John and others actually built projectors and cameras that are coaxial for interaction and so on. Other people have built it as well.
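On the disk-access point that came up a moment ago: the transpose is mathematically trivial, and the standard workaround when T cannot fit in RAM is to stream small tiles through memory-mapped files. A sketch under that assumption (toy sizes and hypothetical file names; this is my own illustration, not the paper's implementation):

import numpy as np

# Toy sizes; the real T is millions by millions and lives only on disk.
rows, cols, block = 4096, 1024, 256

# Assumes T.dat was already written to disk, one column per captured photo.
T  = np.memmap("T.dat",  dtype=np.float32, mode="r",  shape=(rows, cols))
Tt = np.memmap("Tt.dat", dtype=np.float32, mode="w+", shape=(cols, rows))

# Blockwise transpose: only one small tile is ever held in RAM at a time.
for r0 in range(0, rows, block):
    for c0 in range(0, cols, block):
        Tt[c0:c0 + block, r0:r0 + block] = T[r0:r0 + block, c0:c0 + block].T
Tt.flush()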
But they show a lot of interaction on top of that. So the concept of putting them together for some specific task is well-known. But what this group showed was that mathematically, you can create these magical pictures. [INTERPOSING VOICES] AUDIENCE: Do you lose any resolution? RAMESH RASKAR: Of course. So that's a great point. So you might start with, I don't know, a 10-megapixel camera. But if your projector is only a megapixel, then your dual image is only going to be a megapixel. And there are going to be a lot of aliasing issues and so on. So that's a very good point. And the projector's color properties are going to impact how you capture the image as well. It's not purely dependent on the camera sensor quality now. It's also dependent on the projector's illumination quality. AUDIENCE: Is it a real challenge [INAUDIBLE]? Did that [INAUDIBLE] of that? RAMESH RASKAR: I didn't hear the last part. AUDIENCE: Real technical challenge. RAMESH RASKAR: Right. AUDIENCE: Did the author of this paper ever say [INAUDIBLE] light could be useful for this-- [INTERPOSING VOICES] RAMESH RASKAR: Yes, I'll show you their motivation. But as you can imagine, this can be used in many other ways. It's not always practical because you'll take a million images. AUDIENCE: Yes. [LAUGHTER] RAMESH RASKAR: But at the same time, I'll show you the reason where it does give a lot of benefit. Any other questions? All right. AUDIENCE: You don't always have to take a million. Don't they share how to do the-- RAMESH RASKAR: Subdivision. AUDIENCE: Yeah, so subdivision? RAMESH RASKAR: Yeah, I'll show that very briefly. It's a second-order effect. And I don't want to go too much into detail. And now, you can do really complicated scenes with global illumination. So the [? caustics ?] and so on are all natural. And then, we can start doing some special effects. So remember, the picture on the left was taken by the camera. The picture on the right was computed as if the camera was at the projector. On top of that, now that we have swapped the projector and the camera, I can convert the projector into a slide projector mathematically and see how the image looks. So if you go back to this matrix, I can just put all ones here, that will be a flat field for the projector, and see how the image looks. And similarly, I can put all ones here to see what happens when I switch the camera and the projector. But I don't have to put all ones. I can put 10101010. That means it's as if I put a slide projector where every alternate pixel was on and off. So you can create slide projectors to create those effects. So you can create interesting effects like this. AUDIENCE: But you can't do that backwards on the image? RAMESH RASKAR: You can do both. Yes, of course. In the primal domain, it's very easy because I can turn on one pixel of the projector, take a photo. The next pixel, I don't turn on. The third pixel, I turn on. AUDIENCE: So you [INAUDIBLE]. RAMESH RASKAR: And I can just take the sum of that half a million photos. And I'll automatically get this effect. So in the primal domain, it's very easy to do. AUDIENCE: I think in the primal domain, it's even easier. You can just literally project that slide. RAMESH RASKAR: Exactly. [LAUGHTER] Exactly. But the point is that if you had collected this terabyte of data, you don't have to think in advance which slide you want to project. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: You can change that slide in software. AUDIENCE: That's good.
RAMESH RASKAR: You understand? AUDIENCE: Yeah. I-- [INTERPOSING VOICES] RAMESH RASKAR: It's the same effect as we saw earlier, where the woman was in a dome. AUDIENCE: Yeah. RAMESH RASKAR: In the Milan versus LA one. And post-capture, I can decide how she should be lit. All right. So let me skip over the rest of the slides, they will be available for you, and go to the next part of-- the motivation for that was this. And again, thinking in even higher dimensions. So we just realized that the lighting and appearance-- lighting is two-dimensional. The photo is two-dimensional. So it's already four-dimensional. But we're going to go now in much higher dimensions. So instead of putting just one projector, imagine if we started putting multiple projectors. So you start turning on one pixel of one given projector. Now I have four million pixels. A million pixels in each of the projectors. So this adds an additional two degrees of freedom because if I have a projector, I can place it anywhere in 2D. So you can just think about the hemisphere. Any azimuth and any elevation, I can put this projector. And every projector has a buffer with x, y. So the illumination itself now becomes four-dimensional, two degrees of freedom for the position and two degrees of freedom for the pixel coordinate. And the camera is still two-dimensional. So how many dimensions do we have now? AUDIENCE: 10. RAMESH RASKAR: Sorry? No? AUDIENCE: [INAUDIBLE]. AUDIENCE: Yes. RAMESH RASKAR: No? We have two for the position of the projector, two for the coordinate of the pixel of the projector, and two for the camera. [INTERPOSING VOICES] RAMESH RASKAR: It's six. So we're increasing our dimensionality here. And the problem is getting more and more crazy. So imagine if you want to do this. How will you do it? It will take four million steps. There are four million photos. But there is a shortcut based on what you just saw. How can you reduce the total number of pictures? So this is thinking in the primal domain. AUDIENCE: Duality. RAMESH RASKAR: Duality. How will you exploit that? AUDIENCE: If you switch the-- instead of the camera and the projector, switch [INAUDIBLE]. RAMESH RASKAR: Exactly. And the benefit-- [INTERPOSING VOICES] AUDIENCE: What was that? What was that? RAMESH RASKAR: So now, if you switch the cam-- I think someone [INAUDIBLE]. So that's 4D. That's 2D. So the total is 6D. And other people have done things like this, where they have a camera, and they have a projector, and they move the object and capture the 6D. So people have built these rigs all the way back in 2003 in [INAUDIBLE]. Now, again, it will take 4 million pictures if you were to do that. [INAUDIBLE] and so on. All right? But now, if you switch them, what's the benefit? AUDIENCE: [INAUDIBLE] [INTERPOSING VOICES] RAMESH RASKAR: Because all the cameras can work in parallel. All the projectors could not work in parallel. Only one projector could be on at a time. But all the cameras can take the picture at the same time. So now, all we have to do is turn on one pixel of the projector. And all the cameras take the snapshot at the same time. So instead of taking four million pictures, I can just take one million. And I can put as many cameras as I want. And by switching the role of the projector and the camera, I reduce the total number of time slots I have to capture in [INAUDIBLE]. The total number of samples remains the same. Kevin is not happy. AUDIENCE: Well, I have a question.
But I think it's going to be answered on the next slide if I remember correctly. [LAUGHTER] RAMESH RASKAR: All right. So we'll do that. So it's just 6D. So this is how they did it. Instead of creating 16 cameras, they just took one camera, 16 megapixel, and they just aimed it at these mirrors. So that becomes a set of virtual cameras. And then, one projector. And from this, they're going to create an illusion that they have 16 projectors and one camera. So results, hopefully? So the card experiment, which-- so do they need a camera for the card experiment? AUDIENCE: No. RAMESH RASKAR: What do they need? AUDIENCE: Light or a color-sensitive photodetector. [INTERPOSING VOICES] RAMESH RASKAR: They just needed one photodetector. [INTERPOSING VOICES] RAMESH RASKAR: They could have done it. But for some, that is the [INAUDIBLE]. [BANGING] The vending machines. All right. So there are some other experiments that show how they use that multiplanar mirror [INAUDIBLE]. Any questions on that one? Yeah? AUDIENCE: Maybe I just forgot this. But so if you have a 4D projector setup, they can actually project where the camera might not be able to see? And maybe that would reflect onto the ground. And then, the camera could see it. But if you flip them, then the projector can't illuminate occluded stuff. RAMESH RASKAR: That's the same example as the playing card. AUDIENCE: Yeah, it is. RAMESH RASKAR: So it still works. So do you understand this question? AUDIENCE: Yeah. RAMESH RASKAR: When we were writing this whole thing down, we had this implicit assumption that when the projector-- and the slide-- the camera can see it. And if it doesn't see it, we're going to ignore it. But that's not the case. Even if the camera cannot see it, the example we have there is-- we have a card and-- let's see. How does this [INAUDIBLE]? All right. Let's go back to this. We have a card here. And you have a projector. It shines light. And then, you have a camera. And the camera cannot see these pixels directly because they're on the other side of the playing card. But it's going to see the reflection that comes from those areas here. So you're still going to see that yellow or red glow on the book in the scan. So remember, you don't have to see-- the camera doesn't have to see that pixel directly, that [? flash ?] directly, to be able to compute how it looks when you switch the camera and the projector. AUDIENCE: But just as in the example where the horse was occluded by the emblem, there can be cases where none of the rays actually reach the [INAUDIBLE]. RAMESH RASKAR: It's possible, yeah. It's possible, yeah. AUDIENCE: And is it possible to send the ray from the projector in an incident, something, something that would reflect the ray [INAUDIBLE]? RAMESH RASKAR: That's fine. If you put a mirror here-- AUDIENCE: The mirror is OK. But not the diffusing surface because-- RAMESH RASKAR: No, [INAUDIBLE] question. AUDIENCE: Because there is-- I understand it in one way. But there's a way to be complex on it. So if you shine light not directly on the card, but on the book, would it still work? RAMESH RASKAR: Right. Here, that-- let's take that example. So I have this. I have the card. AUDIENCE: Yeah. RAMESH RASKAR: I have the projector. I just shine the book. AUDIENCE: Yeah. RAMESH RASKAR: And let's say that thing is still not visible from the camera. AUDIENCE: Exactly. RAMESH RASKAR: What will I-- I will read the book. I will not read the card. AUDIENCE: Yeah, oh, OK.
[INTERPOSING VOICES] RAMESH RASKAR: Yeah, they'll say the book has a bluish card shape, all filled with blue. And I'll say, oh, yeah, it must have been [INAUDIBLE]. AUDIENCE: In theory, you should-- in theory, you might be able to still-- RAMESH RASKAR: Read the card? AUDIENCE: Yes. [INTERPOSING VOICES] AUDIENCE: And a blue light [INAUDIBLE]. It's an-- [INTERPOSING VOICES] AUDIENCE: --then you have a very diffuse light source lighting up the card from different spots on the book, effectively, and reflecting back-- [INTERPOSING VOICES] AUDIENCE: Yeah. AUDIENCE: All right, somewhere in the matrix? AUDIENCE: [INAUDIBLE] AUDIENCE: Oh, I was going to ask if-- is there any clever way to compute the answer? Instead of having to take all, a million, like take some pattern? RAMESH RASKAR: Exactly. So you should know-- [INTERPOSING VOICES] RAMESH RASKAR: --that certain parts of the scene-- AUDIENCE: Or even without any knowledge of the scene. RAMESH RASKAR: Yeah, you can do some probing. I can shine a light at one part and see, do I get light from here? I don't. So maybe in the future, I will shine both my lights there. So you can definitely do those probings and quickly figure out how to. AUDIENCE: Yeah, in the paper, they even have the algorithm for it. It's only this long. RAMESH RASKAR: Yeah, by the way, so we-- our next assignment, assignment number three, dual photography is one of the options. So you can either take a million photos or you can use their shortcut algorithm to reduce the number of samples. But it still takes a long time. [LAUGHTER] You've got to run it from-- you'll have to leave it running overnight and come back in the morning. All right. So to me, a lot of projects in computational photography are just sheer magic, being able to see a playing card that you can't really see from a camera. The next one, which I was involved in, is also a lot of fun, called visual chatter. So the concept where the two patches are talking to each other, we call it visual chatter. If they can't see each other, there's no chatter. If they can see each other, there's chatter. And this concept is actually very unique. By the way, when I meet photographers or professional photographers who are technically inclined and they say, what do you mean by computational photography? And I give them all the buzzwords and big definitions. And they don't get it. I try to explain to them dual photography. And they immediately get it. They say, wow, OK, with all the techniques I have in my bag, all the polarization filters, and all the flashlights, and all the umbrellas, and all the fancy lenses, I can't think of a way to create those kinds of photos. And the next one you'll see is similar. So giving these concrete examples, at least for me, has been helpful to communicate to photographers, what's different here with computational cameras and photography? So now, we're going to look a little bit more at how light bounces around in the world. And we're going to think about direct illumination and global illumination, which is unfortunately wrong terminology because what we're really talking about is direct reflection and global reflection. But because of some reasons I won't get into, people call it direct and global illumination. So direct reflection is straightforward. Global reflection is when light bounces around and reaches the camera. So the path a is just direct. Path b is reflect, reflect, reflect. So if I shine the spotlight here, that's direct.
If I look at this shadow region, that's indirect. The third one that's interesting is subsurface. Light actually enters the surface, it bounces around, and then it comes out at some point. And marble or skin is a good example. This is subsurface scattering. There are some other ones such as participating media. If you have fog or water, then light is going to scatter around before it comes to you. And then, finally, you have transmissive or translucent. Light is going to transmit through something and then come to you. In this particular case, light actually is coming through this curtain through [INAUDIBLE] without diffusion and [INAUDIBLE] scattering. Now what we're going to do is distinguish between direct and everything else. And I showed this example in the beginning of the class, where you have somebody behind a shower curtain. You can find out what the direct bounce is. And unfortunately, it's too dark. But you can only see the face here. And that's the indirect because of transmission. This is one of the options in the third assignment. How does it work? Very simple idea. Very, very simple idea. You turn on all the pixels of the projector, take a photo. That's easy. How do we capture just the direct path? Now let's introduce a little bit of terminology. Keep it really simple. Before I go there, let me give you some intuition. And then, we'll go here. So imagine if I-- AUDIENCE: [INAUDIBLE] RAMESH RASKAR: You'll just come on this side because you're more inclined to go this way. [LAUGHTER] So imagine if I have a [INAUDIBLE] and I take a laser pointer and shine a spot here. What's going to happen because of that is light will, again, bounce here. And that light can bounce here. And when I look at the corner of this room, I will see a very bright red spot, but also see a red cast everywhere else. And so the red spot is because of direct. And everything else is because of indirect or global transport. Now let's say I go from this patch i to the next patch j. So I sh-- I change the direction of the laser just a little bit. So now, I'm going to shine here. Here in the bright spot, I'll-- [INAUDIBLE] notice that the bright spot has moved from i to j. What will happen over here? Will I see a change? AUDIENCE: [INAUDIBLE] AUDIENCE: Very [INAUDIBLE]. RAMESH RASKAR: Very little change. Same here. There'll be hardly any change. What does it tell you? AUDIENCE: It tells you separate direct and global. [INTERPOSING VOICES] RAMESH RASKAR: We are going to separate direct and global. AUDIENCE: It tells you-- RAMESH RASKAR: This very simple experiment tells you that when the light is bounced directly, it's a very localized effect. But when the light is bouncing globally, it's a very low-frequency effect. And it doesn't really matter whether I shine here or shine there. So now, let's say I wanted to figure out from these two photos, the photo when the light is shining at i and when the light is shining at j, which one is direct and which one is global? How will you do it? AUDIENCE: You would go for what doesn't change. RAMESH RASKAR: You go for what doesn't change. That's it. The part that did not change is the global part. And the part that changed is the direct part. That's it. From those two photos, I can tell you which is the direct component and which is the global component. So let's look at it a little bit more precisely. So photo one and photo two. So photo one is direct plus some global. And the second one is some direct plus some global.
Now we know that when I change the light a little bit, the global does not change a lot. So I can just call this i1 and i2, where i1 is d1 plus g and i2 is d2 plus g. If I subtract, i1 minus i2, what will I get? The two g's cancel. I'll just get d1 minus d2. And simply from that, I can figure out the distinction between d1 and d2. So I have two equations. And right now, I have three unknowns, d1, d2, and the global g. Now I can take multiple photos. I can take a third photo. I can shine the laser at the next position. I'll have d3 plus g and so on. So I'll have n equations and n plus 1 unknowns. For every equation, I introduce a new direct. And the global is the same. Of course, once I start shining somewhere else, if I go sufficiently far away from this, the global will change eventually. If not, you'll see the same. But it's a very simple observation. So going beyond that, let's do some real simple experiment. Instead of-- what we're interested in is not shining one laser at a time. But I'm going to put a bulb here. I'm going to turn on the flashlight. And I want to know when I take a photo, which part is direct? And which part is global? So I don't have this choice of doing one pixel or one direction at a time. How would you do that? That's the [? secret ?] [INAUDIBLE]. AUDIENCE: What do you mean you can't answer it? RAMESH RASKAR: You can't answer it because you're at Columbia. AUDIENCE: Yeah. [LAUGHTER] RAMESH RASKAR: And I'll give you a hint. Instead of taking thousands of pictures, we're going to take exactly two pictures. We're going to replace this bulb with a projector. I'm going to project one pattern, take a picture. I'm going to project a second pattern, take a picture. i1 and i2. The global still remains the same. And the directs will also be related. And from those two pictures, I'll be able to figure out what's direct and what's global. AUDIENCE: Can you make the two directs inverse grids? RAMESH RASKAR: Exactly. So in fact, somebody asked the same question. Remember I said two patterns? There has to be some symmetry-- [INTERPOSING VOICES] [LAUGHTER] So I'm glad you're catching on to that. So the answer is very simple. Did everybody get it? You didn't? There's more than one solution. But they all have the same principle, that they are inverses of each other. So the simplest one would be to show a checkerboard pattern. And then I'll show the inverse of the checkerboard pattern. So let's take that a little bit further. And if I shine now, instead of one laser, I'm going to shine multiple lasers because I'm going to turn on every alternate pixel of the projector. So I'm going to shine this, and shine this, and shine this, and so on. So let's say I turned on all the even pixels of the projector. And take a photo. Certain patches will be bright. The next patch will be dark, then bright, dark, bright, dark. Now if I project the exact inverse of that, the pixel that was on is off. And the one that was off is on. What will happen to the global? AUDIENCE: The same. RAMESH RASKAR: It will remain the same [INAUDIBLE]. So in the first picture, I have some global. In the second picture, I have some global. And in case of the direct, I get the direct here. And here, I have basically one minus the pattern, so the other patches get the direct. And you can make it all even simpler. And therefore, the equations [INAUDIBLE] so we'll go through it very quickly. You project the pattern. Look at the pixel. In this case, it's lit.
It's receiving the direct component and 1/2 of the global component because, remember, if I turn on all the pixels of the projector, I'll get a certain intensity in the dark patch. If I turn on only half of them, then the global intensity also would be halved. So the alpha here is just 1/2. So what I got in the first one is direct plus 1/2 the global. If I switch to the inverse checkerboard, it's just the inverse of that. One minus the pattern, and again, 1/2 of the global. That's it. So we have two equations and two unknowns, a direct component and a global component. If I take the subtraction, I will get-- AUDIENCE: [INAUDIBLE] [LAUGHTER] RAMESH RASKAR: What will I get? [INTERPOSING VOICES] AUDIENCE: Are they exactly the same [INAUDIBLE]? RAMESH RASKAR: Yes, so [INAUDIBLE] all of them are related. So all I'm going to do is take those two pictures and find the max and the min of the two. The max minus the min goes in my direct. And twice the min goes in my global. So if I look at this particular patch when it was not lit-- so think about this particular patch. So i of x. Let's call it patch x. This is the same, direct plus 1/2 global. But unfortunately, this particular one did not get a direct path in the first picture. So in the first picture, it's just 1/2 g. In the second picture, I did get the direct, so it's d plus 1/2 g. So my first equation is 1/2 g. The second equation is d plus 1/2 g. If I subtract the two, I get the direct. And then, if I just take the minimum of these two and double it, I will get the global. That's it. So from these two pictures, I can tell you what bounced straight from the wall and what bounced around. This is almost like magic because physicists build really expensive equipment, laser equipment, to solve this problem of what's a direct bounce and what's a global bounce. And now, we have come up with a very simple technique, where we just change the illumination, which we call computational illumination, to figure out what's direct and what's global. Yes? AUDIENCE: So by figuring this out, you eliminate the global illumination. And basically, you have a [INAUDIBLE] in a very high-collimation device, like a laser. So you remove all the [INAUDIBLE], all the indirect and global. RAMESH RASKAR: Right. AUDIENCE: So maybe we solved it. And so we can use a light [INAUDIBLE] like a [INAUDIBLE] high collimation. RAMESH RASKAR: I don't know what you mean by collimation. You-- AUDIENCE: That you [? get inside the ?] laser. RAMESH RASKAR: Just the [INAUDIBLE] you mean? AUDIENCE: Yeah, [INAUDIBLE] of the spot is-- RAMESH RASKAR: Is very high, yes. But as you can see, even if you have a very narrow spot, it still has global scattering. AUDIENCE: Sure. [INTERPOSING VOICES] RAMESH RASKAR: So collimation alone does not give you-- AUDIENCE: Sure. Sure. RAMESH RASKAR: So the right way to think about this is imagine if somebody was sitting in this room and there's a wall. Now think about two different pictures. Imagine this one is white. And I turn on the light switch. What will happen is-- and this is the difference between professional photographers and consumers. If you go to a professional photographer, they'll put up a nice white backdrop. I think, Marcel, you have that box in your office? And the reason the box is white is because you want to see the object nicely lit from multiple directions because of scattering.
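The whole separation recipe fits in a few lines. A minimal sketch, assuming two registered photos under a high-frequency checkerboard and its inverse, and assuming the global term really is the same in both (the image arrays are hypothetical; this follows the max/min rule just derived, as my own illustration rather than any official implementation):

import numpy as np

def separate_direct_global(img_checker, img_inverse):
    """Separate direct and global from two photos taken under a
    high-frequency checkerboard and its inverse.

    Per pixel: lit photo = d + g/2, unlit photo = g/2, so
    direct = max - min and global = 2 * min.
    """
    lo = np.minimum(img_checker, img_inverse)   # g/2 at every pixel
    hi = np.maximum(img_checker, img_inverse)   # d + g/2 at every pixel
    return hi - lo, 2.0 * lo                    # (direct, global)

# Usage with two captured frames (hypothetical arrays i1, i2):
# direct, glob = separate_direct_global(i1, i2)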
You will get beautiful pictures. Now imagine if this box around the object is completely black. So the light hits this thing. It goes to the reflector. But nothing comes back because it's completely black. The picture that you would get when the rest of the wall is completely black would be the direct illumination. There's no scattering going on. As opposed to putting it in a white box, where you would get this one. AUDIENCE: So it's a bit like being in an echoing chamber for sound. RAMESH RASKAR: You can think of it that way. So if you go to, yeah, an [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Yeah, you feel that it's strange. In this hall, I can hear my echoes. And that helps me change my tone because I know whether I'm talking too loud and so on. AUDIENCE: Yeah. RAMESH RASKAR: But sometimes, when you're at a performance, you go on the stage and you can't hear yourself. That's a similar feeling. So if you eliminate all the other bounces, that's the direct component. And so as you can imagine, in photography, the global component is very critical. It's very, very critical. When you're shooting a movie, even if it's a nighttime scene, the director will always put some small light sources here and there so something is still visible on screen and you can figure out what the characters are saying. A real nighttime scene doesn't look like that. But by creating this global illumination, they just create enough brightness so they can do so. So it's extremely powerful [INAUDIBLE] across the globe. And let's see how we can use it. So before we see that, let's look at some real-world objects and how, for them, direct and global components really matter. So we have marble. We have this very diffuse candle wax, water with some-- a glass with milky water, and inter-reflections between the walls, and so on. And this one, if you do this trick, you'll see something really interesting. What will you see, let's say, at the corner here? Do you know why the corner of any room looks really, really bright? [INTERPOSING VOICES] RAMESH RASKAR: It's all inter-reflections. There's more light reflecting back and forth there than anywhere else. That's [INAUDIBLE]. What about this wax here? Most of the light is actually scattering inside. So it's almost all global. AUDIENCE: Yeah. RAMESH RASKAR: Same with marble and so on. So we'll see how it works [INAUDIBLE]. But this is how the world looks. This is all the light that's coming straight back. If you look at wax or milky water, none of the light actually comes straight back. It all bounces around before it comes back. On the other hand, over here, you can see that light is actually reflecting. And also in the corner, you can see that everything is because of inter-reflection. There's a bug here. He's in the corner. And you see it's very bright. It shouldn't be, because the corner here should not be any brighter than the rest of the wall if it's just direct reflection. But there's a limitation to this algorithm in terms of its resolution. That's why it doesn't-- [INTERPOSING VOICES] AUDIENCE: Yeah, so you get miscategorized because your checkerboard size isn't small enough. RAMESH RASKAR: Exactly. Your pixels are not small enough to capture that part. AUDIENCE: Yeah, another question is, so in the previous example with the shower curtain, the direct component lets you see through the scattering curtain? RAMESH RASKAR: No, the global component. [INTERPOSING VOICES] AUDIENCE: Oh, it's global. [INAUDIBLE] here. OK, never mind. RAMESH RASKAR: Yes?
AUDIENCE: Well, it's a good technique, also, to take a picture of an aquarium. RAMESH RASKAR: Aquarium? AUDIENCE: Yes. RAMESH RASKAR: Yeah, certainly. [LAUGHTER] AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Yes? [LAUGHTER] AUDIENCE: Is there a [INAUDIBLE] to adjust how much global [INAUDIBLE] happens or direct happens [INAUDIBLE]? RAMESH RASKAR: Beautiful. So you can play those tricks in your Photoshop, where you aren't just changing the brightness, but you're changing the global versus direct. So the wax will start glowing. But the regular [INAUDIBLE] surfaces will not. You can play all those tricks. Also, the skin. It turns out, we'll see later, that if you lived in a world where direct and global could be separated, there'd be no racism. [LAUGHTER] AUDIENCE: Sorry. Can you go back? How about the highlight on the glass on the direct side? RAMESH RASKAR: This one here? AUDIENCE: No. [INTERPOSING VOICES] RAMESH RASKAR: Oh, this one? AUDIENCE: Yeah. RAMESH RASKAR: Yeah, so you will see later that this technique doesn't work for highlights. And I want you to think about why, why that is. All right? So let's look at some fun examples. So this is a weed that grew in a wall, which we already saw. It should look flat. And only in the corner, you'll see inter-reflections. Here's a failure case. [LAUGHTER] If you have a shiny mirror, then if I shine a laser at the mirror, I'll get-- so let me try it properly. I have a mirror. And I have a bulb. If I shine a laser here, it reflects here. On the other hand, if I had a bulb, it reflects everywhere. So the laser only reflected one spot. If I change the laser direction slightly to here, it goes in a different spot. And the global effect of here versus here is quite different. AUDIENCE: Yeah. RAMESH RASKAR: If I'm shining this part versus this part, this will have the same ratio, same ratio. But that's not true for a mirror. [INAUDIBLE] AUDIENCE: But would you even call that global illumination? RAMESH RASKAR: So global illumination is anything that has more than one bounce. AUDIENCE: Oh, OK. RAMESH RASKAR: So by the definition of global reflection, anything that has more than one bounce of a photon is global. So yeah, it fits. Depending on your application, you can say [INAUDIBLE] is not part of my world. And I can [INAUDIBLE]. It has its issues. So now, we have some interesting examples. I really like this part. So we make a reflection, x. What should you see? AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Because between those x's, there's a lot of light bouncing around here. So [INAUDIBLE]. It's bottom right. You can also do a trick where you can do it in sunlight. You can just do the inverse effect. I can just take a stick and move it. And remember, if I take lots of images, the minimum of all the images is the global. And the maximum of all the images is direct plus global. That's it. Those equations are all you need. It's very easy. So if I move a stick and shoot a video, at any given pixel, I look at all the frames. And the minimum of that is the global component. So you can do this outdoors [INAUDIBLE]. Sorry. Then the shower curtain, our favorite example, [INAUDIBLE] reflect. And here, light that bounces straight from the curtain is the direct component. Any light that actually bounced around and came back to the camera is the global component. So anything that's behind the curtain will actually play a bigger role in the global. Now the problem is that even the texture of the curtain actually has some subsurface character.
It has some thickness. And light is bouncing around and then coming up. So that's kept here as well. But you can capture those. Yes? AUDIENCE: What was the mesh diffuser that you were using? RAMESH RASKAR: So yeah, that's a good point. So there are multiple ways you can do it. I can either use the projector. Or I can just take a mesh. You can just buy it at Home Depot. Buy a high-frequency mesh for the light source. And just move the mesh in front of it and shoot a video. So to create this slide projector that projects a checkerboard and its inverse, I don't have to use the projector. I can use the light source and a mesh or a stick, whatever it is. AUDIENCE: For you-- so for Photoshop, do you need a video? For the stick, you need video? Or do you need to [INAUDIBLE] just two pictures before [INAUDIBLE]. RAMESH RASKAR: You're going to take more pictures. AUDIENCE: OK. OK. Good. RAMESH RASKAR: But the minimum number of pictures you need is two because you have two unknowns, direct and global. AUDIENCE: And global. RAMESH RASKAR: And then, this is your fish tank, where you will see very clearly in murky water. That's direct. Actually, it turns out most of the light is bouncing around and coming [INAUDIBLE]. AUDIENCE: Yeah. RAMESH RASKAR: And the pixel on the left-- maybe I should turn on the lights here. AUDIENCE: There's some pixel [INAUDIBLE]. RAMESH RASKAR: Yes. AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: Is that better? AUDIENCE: Yeah. AUDIENCE: Yeah. AUDIENCE: No. AUDIENCE: No. [LAUGHTER] AUDIENCE: [INAUDIBLE]. RAMESH RASKAR: I don't want to disturb those [INAUDIBLE]. AUDIENCE: Yeah. I moved it. AUDIENCE: Yeah, exactly. RAMESH RASKAR: And this looks like some computer graphics rendering. Why is that? AUDIENCE: Because in the old days, they didn't have global. RAMESH RASKAR: Exactly. Because cheap graphics rendering is basically just a direct bounce of light. It doesn't, like RenderMan or Maya, do all this bouncing of light around and so on. So when Pixar decided to make animation movies, which movie did they make first? AUDIENCE: "Toy Story" [INAUDIBLE]. RAMESH RASKAR: "Toy Story." Why? AUDIENCE: Because [INAUDIBLE]. AUDIENCE: It's all plastic. [LAUGHTER] It's very easy to render. [INTERPOSING VOICES] RAMESH RASKAR: They cannot do ants. And they cannot do fur. And they cannot do rain. And they cannot do fog. It's just toys. Toys are the easiest thing to render because they have very simple surfaces. And they all look like this on the left. So you can-- AUDIENCE: So what explains the color shift then that's in there? RAMESH RASKAR: Because if you know how light reflects inside some scattering medium, then it has a preference for certain wavelengths. If you're underwater, then red is suppressed and blue, green is not. So similar thing here. So it always has a blue, green tinge because what you're scattering through suppresses one wavelength but not the other. AUDIENCE: Well, so the scattering goes, well, maybe to the border. So the blue should be scattered more [INTERPOSING VOICES] RAMESH RASKAR: So yeah, in the global component, you see more bluish. And the tinge, the bluish tinge on the red cup there, is gone on the left. AUDIENCE: Oh, OK. I see. RAMESH RASKAR: Because this is just a direct bounce. We're ignoring the scattering. So imagine the projector pixel comes. It scatters through the water. And it hits the red cup. And I'm just trying to measure that. I don't want to worry about how everything else is contributing back to that red cup.
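The stick-sweep separation described above is simple enough to state in a few lines of code. Below is a minimal sketch, assuming a stack of registered frames in which every pixel is shadowed in at least one frame and fully lit in at least one other; all names are illustrative, not from any particular library.

```python
import numpy as np

def separate_direct_global(frames):
    """Separate direct and global components from a stack of frames
    captured while a thin occluder (or a shifted high-frequency
    pattern) sweeps the scene, using the min/max trick above.

    frames: float array of shape (N, H, W) -- N registered frames.
    Returns (direct, global_) images of shape (H, W).
    """
    stack = np.asarray(frames, dtype=np.float64)
    # A pixel in shadow receives only indirectly bounced light,
    # so the per-pixel minimum over time is the global component.
    global_ = stack.min(axis=0)
    # A fully lit pixel receives direct + global, so the per-pixel
    # maximum is their sum; subtracting gives the direct component.
    direct = stack.max(axis=0) - global_
    return direct, global_

# Tiny synthetic demo: 5 frames of a 2x2 scene with a moving shadow.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_direct = rng.uniform(0.2, 0.8, size=(2, 2))
    true_global = rng.uniform(0.05, 0.2, size=(2, 2))
    frames = []
    for k in range(5):
        lit = np.ones((2, 2))
        lit.flat[k % 4] = 0.0          # the shadow visits each pixel
        frames.append(true_direct * lit + true_global)
    d, g = separate_direct_global(np.stack(frames))
    print(np.allclose(d, true_direct), np.allclose(g, true_global))
```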
AUDIENCE: In that example, you showed you either use a light bulb or a projector. Could you use the light of the sun? RAMESH RASKAR: So the example of the stick was with the sun. AUDIENCE: But can you modulate it? Could you via-- RAMESH RASKAR: Yeah, I can take a grid or a mesh, and just move it, and shoot the video. It's the same thing. AUDIENCE: But then, just out of curiosity, with this whole scattering of colors that I've been able to get, what's the actual real color of the green cup, blue or green? RAMESH RASKAR: It's that. If I take it out and put it in free space-- [LAUGHTER] No, it's that color. AUDIENCE: Green. RAMESH RASKAR: Green. AUDIENCE: So there's the one that [INAUDIBLE]. AUDIENCE: Is it green? I don't know. AUDIENCE: New age group. [INAUDIBLE] RAMESH RASKAR: And then, you can play with adding more tinge and so on. As you were saying, can I have more global component or less global component and so on? Now here are some really fun examples. You can even figure out what is real and what is fake because one fruit is real and one fruit is fake. Let's see. Let's start with the banana. Which one is real? Which one is fake? Top is real? AUDIENCE: Yeah. AUDIENCE: Yeah. RAMESH RASKAR: How many think top is real? The bottom is real? Wow. Everybody thinks bottom is real. What about the apple? Left is real. Again, left is-- I said, right is real. So far, bottom and right. And the lemons. Left is real? Right is real? AUDIENCE: Yeah. RAMESH RASKAR: All right. Remember your choices. [LAUGHTER] Pretty good. Pretty good. But how would a camera do it? If I just take this picture and you would do a camera, like a computer vision algorithm, it's almost impossible to figure out which one is real, which one is fake. And the reason for that is when you make a plastic or a fake fruit, you only try to match its outer appearance. You're not trying to match its inner appearance. So in case of the apple, you realize that light bounces around inside in the real apple. And it becomes reddish. That's what gives the reddish color. But in a fake apple, no light bounces around. So you know which one is apple and which one is Microsoft. [LAUGHTER] Over here, it's a little bit more complicated because although it's fake, I know, it has-- first of all, the direct bounce from a real banana is actually a very different color. And this is all because of the ripening that's happening inside. And when it comes to a lemon, this one is very complicated for me because in the real lemon, if you look at the direct component-- and this lemon, maybe it's a good-quality fake, because you even get internal reflections inside that fake lemon. And I think I've seen those fake lemons. They are a little bit soft and fuzzy so light can bounce around inside as opposed to apples, which are just hard. The way to figure that out is if you look at the direct component of a real lemon, first of all, you realize that it's green. It's not blue-- it's not yellow. And the real lemon actually has a texture that you never see directly. So in the direct bounce, you'll see that texture. So a lot of interesting things. And I wish I had it in my cell phone camera so I can figure out-- so this is, by the way, a great trick to figure out if something is ripe or not. If it's more ripe, light will bounce around more inside because its permeability is different. So I can build a cell phone camera that can tell you if your food is ripe or not. [LAUGHTER] AUDIENCE: And so in photospectrometry, you can see the [INAUDIBLE]. RAMESH RASKAR: Exactly.
AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Yeah, you can do it with the direct as well. But this is another tool you have-- AUDIENCE: Yeah, [INTERPOSING VOICES] RAMESH RASKAR: --which can be implemented easily. AUDIENCE: Even with a 3D image, the face of somebody, you should actually put everything inside because if this has worked, then it's [INAUDIBLE]. RAMESH RASKAR: I'm sorry. Repeat the question. AUDIENCE: You showed that [INAUDIBLE] maybe something like that, something that is not made from the real [INTERPOSING VOICES] RAMESH RASKAR: Right. So whether there's makeup or not. AUDIENCE: [INAUDIBLE] or not. RAMESH RASKAR: Yes, so unfortunately, yeah, this trick will also tell you if somebody is wearing makeup or not. Let me just talk about ethnicity. So as you know, the skin of different ethnicities looks very different. But if you look at the direct and global, the direct is almost always gray. And all the pigment is in the global. So if you take people of different ethnicities, I hope I have the example, the direct component is almost the same. AUDIENCE: Are these normalized at all? RAMESH RASKAR: They may be normalized, yeah. AUDIENCE: Because I-- RAMESH RASKAR: But the global component is what actually gives the real color. AUDIENCE: OK, so when you add [INAUDIBLE]-- RAMESH RASKAR: So there is no color. The direct reflection has no color. So if an alien comes in and can see only indirect illumination, that's [INAUDIBLE]. AUDIENCE: You should also look on Flickr for infrared photographs. RAMESH RASKAR: Yeah. AUDIENCE: They show people are really wrinkly. RAMESH RASKAR: Exactly, yeah. And we'll see some thermal [INAUDIBLE] in there. That's also [INAUDIBLE]. Anyway, I'll go through it quickly. AUDIENCE: Yeah, what happens if you do this with IR? RAMESH RASKAR: It's the same effect. [INTERPOSING VOICES] RAMESH RASKAR: Sometimes, light-- at some wavelengths, there's more inter-reflection than at others. AUDIENCE: Yeah. RAMESH RASKAR: So if you can imagine, some surfaces are actually darker in a particular wavelength. So the situation I explained, where you have a person in a white room versus a black room, that appearance looks very different because there's direct plus global in the white room but not in the dark room. So at a particular wavelength, the room may be black. And you'll see just direct. All right. So let's take a break. And then, we'll come back and talk about optics and light rays.
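For reference, the two-picture minimum mentioned earlier (a half-on checkerboard plus its inverse) reduces to a per-pixel max/min under the idealized model that the pattern is high-frequency relative to the scene's global transport, so each shot carries roughly half of the global light. A hedged sketch with illustrative names, not a complete pipeline:

```python
import numpy as np

def separate_two_shot(img_pattern, img_inverse):
    """Direct/global separation from just two shots: one under a
    high-frequency half-on checkerboard, one under its inverse.

    At every pixel, one shot has the direct path lit and the other
    has it blocked, while each shot carries about half the global
    light. So per pixel:
        Lmax = direct + global/2,   Lmin = global/2
        direct = Lmax - Lmin,       global = 2 * Lmin
    """
    a = np.asarray(img_pattern, dtype=np.float64)
    b = np.asarray(img_inverse, dtype=np.float64)
    lmax = np.maximum(a, b)
    lmin = np.minimum(a, b)
    return lmax - lmin, 2.0 * lmin   # (direct, global)
```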
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_5_Lightfields_part_1_Part_1.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: So we should celebrate. The Nobel Prize is for imaging this time. Ah, of course, fiber optics, but who cares about fiber optics? But it's also about light and imaging. And this is the point I was making just a couple of weeks ago about how in imaging, you keep on getting Nobel Prizes. But somehow in other fields, just no Nobel Prizes. Of course, [INAUDIBLE] sent out an email. I don't know if I told you this. And he thinks prizes like the Nobel Prize and all that have outlived their importance because they were always trying to feed this notion that science moves by a select group of people working in select places and coming up with amazing discoveries-- like, select individuals. And we know that's not how the world works now. Even the development of fiber optics, it's not just one person or one team, but hundreds of people have made tremendous contributions, and the same with the CCD chip, for which there was a Nobel Prize. And you know, somebody may have thought about it. Somebody may have implemented it. But at the same time, there are a lot of people involved. So it's somewhat misleading to identify one or two people as the ones who made all the difference. By the way, the inventor of light fields, which we're going to talk about, Lippmann, also got a Nobel Prize. But he didn't get the Nobel Prize for light fields. He got a Nobel Prize for inventing color photography, which is very closely related to light fields. It's surprising. We'll discuss that today. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: So the assignment actually has two options, of course, and the light field part has several parts, several sub-parts. But don't get intimidated by all the things you ought to do. It's mostly a matter of how much you want to do. And based on your background-- so if I know you have a lot of background in light fields and so on, like Rohit over there, for example, I would expect him to go quite a bit deeper into the assignment. But those of you who don't have as much optics background, or those of you who don't have a programming background and are using some GUI-based tools to do the actual assignment, of course, I'll expect you to just finish the first sub-part but not all the parts. So we will kind of normalize your performance based on your background and abilities. So you may want to send an email to me separately in case you run into problems and say, hey, I don't have as much background in this area. But the assignment is basically taking pictures and taking an average of those pictures. It's just that you have to take a picture, shift it, and add it to some other picture. So the concepts are really simple, and if you do the first sub-part, you should be able to do the other sub-parts very quickly if you think about it some more. But again, given that you have limited time, you don't have to go all the way. All right, so what I'm going to do is actually start talking about the light fields, and then halfway through, we'll come back and talk about the assignment, OK?
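The "take a picture, shift it, and add it" recipe for the assignment can be sketched as follows. This is a minimal sketch assuming integer pixel shifts and views already arranged in a stack; the function name and offset convention are illustrative only, not part of any course-provided code.

```python
import numpy as np

def shift_add_refocus(views, offsets, disparity):
    """Average images taken from laterally shifted viewpoints,
    shifting each one before adding. `views` is (N, H, W);
    `offsets[i]` is the i-th camera's (dy, dx) position in some
    unit; `disparity` scales how far each view is shifted, which
    selects the plane brought into focus (0 keeps focus at
    infinity; larger values focus closer).
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dy, dx) in zip(views, offsets):
        sy = int(round(disparity * dy))
        sx = int(round(disparity * dx))
        # np.roll is a crude integer shift; real code would crop or
        # interpolate instead of wrapping around the border.
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(views)
```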
So if you're in photography or you're new to cameras, people always throw these terms at you-- the F stop and fast lens and slow lens and depth of field. And Ankit covered it a little bit, but it's always very confusing. And photographers, just like any other cult group, always try to make it more exotic and more difficult to understand what they're saying. So my general advice is as a researcher, as a student, as a scientist, just get rid of all the jargon and focus on really simple parameters. So F stop, for example, is really complicated, and it's not just that it's complicated, but it's actually wrong. The numbers they use are actually wrong. They're off by sometimes up to 10%. And increasing the F number or decreasing the F number-- it's not very clear what it means because it's an inverse. So for F 2.8 versus F 5.6, the 5.6 aperture actually is smaller than 2.8. We're working with that. In microscopy and scientific imaging, they use numerical aperture, which is more meaningful because when an aperture becomes larger, the numerical aperture becomes larger, and it's ordered in a scientific way. So let's see. So the whole concept of light field might seem very foreign, and you'll wonder why we need to study it. But once we have understood this relationship between 3D points to 2D pixels and rays and the relationship between different rays and so on, all these other concepts such as depth of field and F number and all of that will become extremely clear, and you won't think of them as some abstract quantities but quantities with some real meaning to them. All right, so this is how we started thinking about the ray space. And then we realized that rays are five dimensional because there's a point and direction. So three degrees of freedom for the point and only two degrees of freedom for the direction-- why is it not six degrees of freedom as we have elsewhere? Usually you have three for translation, three for rotation. Here we have three for translation only, two for rotation. AUDIENCE: Because there's not [INAUDIBLE]. RAMESH RASKAR: Sorry? AUDIENCE: There's no [INAUDIBLE]. RAMESH RASKAR: There is no roll. AUDIENCE: Yeah. RAMESH RASKAR: Ignore that. But you'll realize that sometimes you don't have to use five dimensions. So you can get away with four dimensions. In which case does that happen? AUDIENCE: Two plane approximation? RAMESH RASKAR: Two plane approximation? That's good, but why does it allow you to go from-- why can we go from five degrees to four degrees? [INAUDIBLE] Raise your hand? OK, let's take the next one. AUDIENCE: You're only caring about a single ray or a single point? RAMESH RASKAR: Exactly. So if you don't have an occluder in the middle, the intensity on this side is going to be the intensity on the other side. So you can just use it as a four dimensional quantity. And now the four dimensional quantities, we can specify in multiple ways. We can either specify it as a 2D position and 2D angle, or we can do it as a two plane parameterization where you have a 2D point on one plane and a 2D point on the other. And this is kind of in the 3D world. So a UV coordinate on this plane and an ST coordinate on that plane, and that defines the ray direction. Now what's the disadvantage? I'll just go back to this. So you'll realize that when we're talking about all these optics and rays and geometric quantities, we'll always think about them in flatland. So the real world is 3D and rays are in 4D space.
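In flatland, the two parameterizations just described are easy to convert between. A small sketch, assuming the ray's position is measured on the first plane and the two planes are a distance d apart (function names are illustrative); note how rays parallel to the planes break the (u, s) form, exactly the limitation discussed next.

```python
import numpy as np

def xtheta_to_twoplane(x, theta, d=1.0):
    """Convert a flatland ray given as (position, angle) into
    two-plane coordinates (u, s): u is where it crosses the first
    plane, s where it crosses a second plane a distance d away.
    Rays parallel to the planes (theta = 90 degrees) have no
    (u, s) representation.
    """
    u = np.asarray(x, dtype=float)
    s = u + d * np.tan(theta)    # blows up as theta -> pi/2
    return u, s

def twoplane_to_xtheta(u, s, d=1.0):
    """Inverse mapping: recover (position, angle) from (u, s)."""
    u = np.asarray(u, dtype=float)
    return u, np.arctan2(np.asarray(s, dtype=float) - u, d)

# example: a ray at u = 0.5 tilted 10 degrees off the normal
print(xtheta_to_twoplane(0.5, np.deg2rad(10.0)))
```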
But for the sake of discussion, our world is just 2D, the flatland, and the rays are how many dimensions? AUDIENCE: 2D. RAMESH RASKAR: Two dimensions, OK? So something strange is going on already, right? Because we went-- in a 3D world, the rays are 4D. But in a 2D world, the rays are-- AUDIENCE: 2D. RAMESH RASKAR: --2D, right? There's already a mismatch going on, and as we go forward, you'll realize where that comes from. OK, so most of our discussion will stay in flatland. So our world is 2D. Our rays are 2D. There is one degree of freedom for the position in this. So in this case, one degree of freedom for the position and one degree of freedom for the angle, OK? So that's the two dimensions for the ray. We can either express it as position and angle or as position and position. Now that seems pretty good. There are some rays here which cannot be represented very well with this representation-- it's OK to represent them here, but they cannot be represented here. Which ray is it here? Yes? AUDIENCE: Ones that are parallel to the plane? RAMESH RASKAR: Exactly. So there's something-- if a ray is vertical here, it will never intersect both planes. So it's difficult to express that in this particular formulation. And if you are thinking about implementation, then any ray that's-- even if it's not vertical but close to vertical will have sampling issues. You won't be able to represent that very well. And here, you don't have that problem because you're going to use a different 3D position, and theta phi is always defined. So that makes it very simple. All right, so now if you can build this, a matrix, the machine that can see everything everywhere in every wavelength, how many dimensions is that? AUDIENCE: 7D. RAMESH RASKAR: Seven dimensions, right? Five-- so we go back to this one. Five dimensions here plus wavelength and time. AUDIENCE: OK. RAMESH RASKAR: OK. Now we're going to get closer and closer to that matrix that sees everywhere. And we're going to start with a camera array. And we're going to say that an array of cameras can capture a light field. How does that happen? Here we have-- let's see. Imagine on this plane, I'm going to put a set of cameras. And for every camera, its frame buffer I'm going to indicate as S and T. So every camera position here is given by a UV coordinate, and the frame buffer of each camera is specified by ST. Like so is ST, OK? So this is how we are. We have the UV plane. I'm going to put a camera here. And then I have the ST plane. And what I'm going to say is for this camera, I'm going to show all these [INAUDIBLE]. So as you can see, the image starts becoming very messy in the real world. So we go back to flatland. And they go right here. So this is my U plane, and this is my S plane. I'm going to put a camera here. Camera here, camera here, and camera here. And when I say place a camera there, I really mean the center of projection, the pinhole of the camera, is placed here. So I put a pinhole camera here and the sensor back here. And what that is doing is for every pixel through this pinhole, they're shooting the ray over here. Again, we're going to think like the Greeks. The camera should be placed out in the world [INAUDIBLE]. And the coordinate of these pixels is given as S. But the position of each camera [INAUDIBLE]. So maybe this is u equals 1, u equals 2, u equals 3, u equals 4, and so on. And similarly on the S coordinate, it goes [INAUDIBLE] coordinate.
And we do say, like, this camera can capture all the rays that pass through. And then this camera [INAUDIBLE] plus all the rays that [INAUDIBLE]. So very similar deal with [INAUDIBLE]. I'm going to move the camera, shift, shift, shift, shift. And it's going to capture all of those rays. Certain rays are missing here. They're not here to capture, usually [INAUDIBLE]. So the capturing-- if I take a photo from here, I will still capture 1,000 rays corresponding to each S coordinate. If I go here, I capture another 1,000 rays. So in this picture, I have captured over 4,000 rays. And I'm using the word rays without specifying what it means. But I think we're somewhat used to doing that with the whole notion of rays. So what's missing? AUDIENCE: The point distribution [INAUDIBLE]. RAMESH RASKAR: Exactly. So I'd say that when u equals 0.5, those rays are not being captured. So that's what life is. The world is continuous. And somehow, we will discretize it and [INAUDIBLE]. OK, but that's fine. It's close enough for us to start representing the world in a 4D resolution. So now if somebody asks you what's the dimensionality of the world, people of course say, yes, it's 3D. But what is this thing? The world is actually four dimensional. And it's almost a difficult concept to get across. But some other ways of thinking about the same situation is imagine if you build a hologram. You have a hologram, and you're standing in front of it. And the hologram has a coordinate system ST. And as you move your head, you see a new image coming out of this hologram, right? So if I move in the U direction, I see a different set of images. But if I move my head up and down, then I [INAUDIBLE] see new images in the V direction. So the hologram also is recording four dimensional rays on a particular display, perpendicular [INAUDIBLE]. It's also recording a 4D version. So to record a view through a window, for example, I need to record a four dimensional dataset in a two dimensional world. And that's [INAUDIBLE] problem. And this one is actually the inverse of that. We're going to capture the world. And when you capture the world through the window, again, we record a four dimensional [INAUDIBLE]. So try having that argument with your friends, whether the world is 3D or 4D. Maybe the [INAUDIBLE] have gone one more dimension. All right, so now we have an array of cameras, and we're going to think about how that can be achieved not with an array of cameras but just with a single camera. So the camera array is very much similar to this, just a set of cameras with a bunch of sensors behind it. And as far as photography is concerned, we want to think about how we can achieve that with just a single camera. So there are a few concepts we are going to look at more carefully. One is that an image that's created for a 3D point is actually a sum of multiple images created by different parts of the lens. So if we have a whole lens and you cover all these parts and just leave this one open, it will create one image. Then if I block that one and look at one where we see this one open, you'll get another image and so on. And in this case, I will get, I guess, five different images. Now what's the difference between those five images? You can always kind of see the hint here. It's as if I placed a camera here, then I placed the camera here, then I placed a camera here, here, and here. So opening a part of the aperture, or sub-aperture, is just like moving a camera, but within the aperture of your lens.
If your aperture is only about 25 millimeters, the whole shift is only about 25 millimeters. But there's a difference between this situation and this situation, OK? Because all of these are pinholes and there's no optics in it, and this one has prisms-- but there's some special shape. If you just look at any one section, what does it look like? Don't look at the whole lens. Just look at one section. Sorry? AUDIENCE: The trapezoid? RAMESH RASKAR: Trapezoid, which is a truncation of a prism, right? So you can think of a lens as being made up of sections of a prism. And of course, this prism is truncated here. And this prism is truncated here. And this one is just a slab of glass, just a [INAUDIBLE]. So it's just a flat piece of glass. It doesn't really do anything. And so in high school, when they say-- when the light goes through the center of the lens, it doesn't bend the light. It's a slab of glass. Of course light goes through it. And here, the whole prism. So light goes in, and it deflects. So we just have a set of prisms. Here we just have a set of pinholes. So let's assign some of those coordinates one more time. We're going to say the camera position, right? The camera position here is given by U and the pixel position is given by S. You can ignore the second variable here. And we're going to try and understand this four dimensional space that the camera has captured. I'll give you a punchline here, which is if a camera can capture this four dimensional representation of light, it has captured everything that's coming through that lens, and it includes phase and all that information. So all the geometric information about the world is completely captured by this four dimensional representation. So once you have that, of course, you can go back and do anything you want. You can do digital refocusing. You can estimate 3D shape. You can insert new objects in it, whatever you want. That's all you can get. So just by going two dimensions higher, from 2D to 4D, it's a complete representation. And that's why it's so important to capture the light field, because if you want to think about any sophisticated photography applications, that's the maximum information you can get. We still have wavelength and time and polarization and so on, but those are not geometric dimensions. The geometric dimensions are just space and angle, or two spaces and so on. All right, so, as I said, we're going to think of the lens as being made up of a sequence of prisms. And when you capture a light field, you end up getting an image that looks like this. So the picture from here versus here versus here versus here will have a small parallax as you move around. It will look almost the same, but there'll be a very tiny parallax. So if you look at this set of images, they look almost the same, and they're captured by a light field camera. But if you see very carefully, then the distance between the eye and the ear of the body is changing from top to bottom. Here it's almost touching the eye. And over here, there is a significant gap between the-- so there are these small gaps, and I'm sure when you do this experiment where you look at something with the left eye versus the right eye, it seems to shift. Your finger seems to shift. And that's the parallax that you're introducing. And that's the same effect that's happening here because the parallax between your eyes is only about six centimeters. But that's sufficient to create shifts relative to certain positions.
RAMESH RASKAR: So we'll come and discuss that. It's a very important question. All right. And Marc Levoy and his group at Stanford are world leaders in thinking about many types of light field cameras and also their applications in different areas. So I've taken a lot of slides from his presentations, and he has also applied it for microscopy and so on. All right, so very briefly-- I'm not sure I'm correct. OK, very briefly, there are other ways you can represent this. So right now, we're thinking about a UV plane and an ST plane and just the intersection of that. But depending on your application, you may have some other ones. Maybe you have a sphere. And you cut the sphere into two hemispheres. And any point on this hemisphere and any point on that hemisphere together will create a ray as well. Or in some cases, you want to create some convenient 2D manifold, and the position on the manifold and the theta phi direction with respect to it can also be used. So just for those of you who think about it mathematically, there's a continuum from flexible light field representations to the two plane parameterization [INAUDIBLE]. All right, so let's come back to talking about the light field inside the camera. So why doesn't a traditional camera capture this 4D light field? So you have a point in sharp focus. The rays of that [INAUDIBLE] bend at the lens, and they converge to a photodetector. And all the radiance coming from different directions is integrated together-- and that's recorded as the intensity of the [INAUDIBLE]. So what you get out there is just two-- on the sensor, you get a 2D image. The question is, what do we get extra when we capture a 4D light field in terms of [INAUDIBLE]? AUDIENCE: So right now, it's compressing all the angles into [INAUDIBLE] extract the angles. RAMESH RASKAR: Exactly. So this is the most important question. Remember, I'm repeating it multiple times. But now we want to figure out not just the total of all the rays but also the value of each of these rays, OK, and where the rays come from individually. They're coming from different parts of the lens. So there's a very interesting relationship between the lens and the sensor. And if you exploit that geometric relationship, you may be able to recover this 4D light field. So [INAUDIBLE] and his student came up with an idea in 1992 to try and build a compact device. The original idea was again presented by Lippmann in 1908, more than 100 years ago now. So the concept of the light field has been around about 100 years. And then Ren Ng at Stanford in 2005 created this really beautiful device that can very compactly capture a light field as well. The basic idea is very straightforward. You move the sensor a little bit further back, just a few micrometers, and then replace this plane with a microlens array. And if you just look at these two here, it's very similar to the lenticular display or microlens-based display that you see on cereal boxes and rulers and so on. But now we're going to use that for imaging rather than for display. And this is how they built the device. They placed these microlens arrays. The pitch of each microlens is 125 microns, and the pixel itself is about 9 microns. So under each microlens here, you have about 9 by 9 pixels-- sorry, 14 by 14 pixels. I'm sorry. And then, so what they're going to do is they're going to record the incoming light in 14 different directions.
That basically means that they're going to slice the lens into 14 segments, and they're going to capture the light that would have appeared if I just opened the first part of it and blocked the remaining 13, and then unblocked only the second one and blocked everything else. So the 14 pictures that I would have taken by exposing only one sub-aperture at a time, you can capture that in one shot. Yes. AUDIENCE: So I have a question about the [INAUDIBLE] array. Do any of the rays ever go outside of that 14 by 14 area and hit a different sort of thing? So-- RAMESH RASKAR: So it's not coming from here. It's something from here, but it comes and-- AUDIENCE: Yeah. RAMESH RASKAR: --and spoils the image? AUDIENCE: Yeah. RAMESH RASKAR: That's a great question. That's a great question, and we'll come to that. And it basically relates to the F number or the relative opening angles of the main lens and the microlens. AUDIENCE: But the goal is for that not to happen, right? RAMESH RASKAR: Exactly, yeah. Not just should it not happen in terms of overlap, but you also don't want any black regions here. So you want to pack the sub-aperture images as tight as possible. And these are some examples of that. Yes. AUDIENCE: So why can't I just think of this as like [INAUDIBLE]? I mean, it's quite a nice array you're working on. Maybe I can think of the system as my linear equations [INAUDIBLE], and if I know the parameters and the unknowns, then I can just solve. I don't really need an extra-- RAMESH RASKAR: Excellent. That's a very good question. So those of you who think of this as some kind of a projection of a 4D space onto a 2D sensor will say, OK, there must be some way I can recover the 4D representation from this 2D image, right? And the fact of the matter is, depending on the scene, you may be able to do it. But in general, this inversion is very unstable because-- I'll explain. Because if you have-- so what does it mean to actually capture this image? If I have something that's really sharp in focus, then all the rays have converged on that. So there's almost no information recorded about the relation between this and this and this. And let's see if I go out of focus. Then I will get a blur here. And we'll look at that more. And all the blurs will go on top of each other. So basically, we are reducing the dimensions of the data set and hoping to kind of go back and recover a higher dimensional data set. AUDIENCE: But even if something in terms of-- OK, so what you are saying is that you're probably not solving this inversion because you get linear [INAUDIBLE] equations. RAMESH RASKAR: Yes. AUDIENCE: [INAUDIBLE]. I'm trying to understand why [INAUDIBLE] just because in focusing on the summations, the coefficients of all the variables are going to be very small on every single sensor? RAMESH RASKAR: It's a bigger problem than that because remember, we're going from four dimensions to two dimensions. What you're saying is valid when we're going from two dimensions to two dimensions. But if I'm going from four dimensions to two dimensions, there is a significant-- it's a lossy representation of the signal. If I have a very unique scene which itself doesn't have a 4D light field going out, then I can recover it [INAUDIBLE]. AUDIENCE: [INAUDIBLE] long [INAUDIBLE] sensing [INAUDIBLE] did not only have full information-- RAMESH RASKAR: Exactly. AUDIENCE: But does that mean that I can't actually solve? RAMESH RASKAR: In most of the cases, you can't.
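Assuming the idealized lenslet layout described above (every microlens covering an n-by-n block of pixels, perfectly aligned, no overlap and no black regions), rearranging the raw sensor image into the n-by-n sub-aperture views is just an array reshape. A sketch under those assumptions, not calibrated plenoptic-processing code:

```python
import numpy as np

def subaperture_views(raw, n=14):
    """Rearrange an idealized lenslet-camera raw image into
    sub-aperture views. The raw image is a grid of lenslet tiles,
    each n x n pixels, and pixel (i, j) under every lenslet saw the
    scene through the same n x n slice of the main lens.

    raw: (H*n, W*n) array  ->  views: (n, n, H, W) array, where
    views[i, j] is the picture through one sub-aperture.
    """
    H = raw.shape[0] // n
    W = raw.shape[1] // n
    tiles = raw.reshape(H, n, W, n)      # split into lenslet tiles
    return tiles.transpose(1, 3, 0, 2)   # (i, j, lenslet row, col)

# e.g. a (292*14) x (292*14) raw image gives 14 x 14 views,
# each 292 x 292 -- matching the trade-off discussed below.
```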
AUDIENCE: [INAUDIBLE] four pixels as-- or four [INAUDIBLE] original scene as one pixel. RAMESH RASKAR: Yes. AUDIENCE: Can I solve for that? RAMESH RASKAR: So to rephrase your question, you're saying, how can I give up some resolution and recover a four dimensional representation, right? And that's exactly what's happening here with the help of some physics. So instead of thinking of this as just a bare bones lens and sensor, if you take some help from physics and optics, then it turns out that that inversion is possible. AUDIENCE: But why can't I [INAUDIBLE] like, just theoretical stability? Because [INAUDIBLE] small [INAUDIBLE] physical process [INAUDIBLE] equation can have independent? RAMESH RASKAR: It's just numerical stability. Yeah, I mean, first of all, you're going from 4D to 2D. Sometimes even going 2D to 2D, it may not be invertible. And here, because you're going from 4D to 2D, it's even more challenging. AUDIENCE: [INAUDIBLE] did not have complete, full [INAUDIBLE]. RAMESH RASKAR: Exactly. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Exactly. AUDIENCE: It should be possible to solve that. RAMESH RASKAR: So one good example of that is in astronomy, and we'll see it later. We're looking at a star. And clearly, if you take a picture of the star, within the aperture of the lens, the star doesn't change that much. So clearly, it's redundant data. And then light fields are used there to actually figure out the aberration in air. So that's an example we'll look at. But that's a very important question because we'll always get into this. We always want to do more with less, right? So one motivation is how a 2D sensor somehow can capture more information. And we're going to either use some optics and some physical techniques, or we're going to use some computational techniques. And the best case scenario, or the best strategy, is actually to combine the two. Combine a physical and computational approach to recover the signal. And the things we saw in the last class, for example, being able to tell whether an apple is real or fake or being able to read a playing card and all that, is a similar problem, right, because the world is high dimensional, and we only have a 2D sensor. AUDIENCE: OK, so one question about this. Wouldn't it be similar if I take the photo sensor and just move it forward so that all the rays don't converge? I mean, I'm wanting to separate them. Would I get them before they converge? RAMESH RASKAR: You're thinking in the right direction. There are multiple ways you can recover all the 4D rays. The simplest one would be to simply block part of it and take multiple pictures. So that's time multiplexing. This one is space multiplexing because I'm giving up some resolution here to be able to recover the ray direction. So I'm trading off basically spatial resolution for angular resolution, right? And then the technique you're describing says, why not just move the sensor back and forth and take multiple pictures? And that's actually what's used in astronomy to recover the aberration in the sky and so on. If only I had come up with that a few tens of years ago. Actually, 1967 is when phase diversity was, I believe, invented by a professor at Tufts. All right, so just a piece of the thing we saw earlier: we can take the 16 megapixel image now, 4,000 by 4,000. But then each microlens will be treated as just one pixel. And under each microlens, you have 14 by 14 pixels.
So in terms of output pixels, you only have a 292 by 293 image. But because each microlens has 14 by 14 pixels underneath, we can do a digital refocus from a single shot. So this is spatial multiplexing because we are trading space for angle. So here's an example where you said, OK, I want to give up four pixels to get angles for only one pixel. So now we have given up 14 by 14 for one output pixel. But still, this involves using additional optics, and your dream is, can I do that without changing anything, right? Is that correct? AUDIENCE: Can you put the left part of the image in focus up close and the right part of the image [INAUDIBLE] way in the back? RAMESH RASKAR: Yeah, you can do that, and that's part of the assignment. Having a slanted focus is now possible. As opposed to just a planar change in focus, you can have focus that's not at a constant depth but at some slant. And the tilt shift photography, which we saw in the very first class, where you change the angle between the lens-- so typically, when you have a lens at the center, what's in focus is over here. And using [INAUDIBLE] law, if I shift the sensor from here to here, then the plane of focus shifts from there to there. So different things come into focus. But actually, if you tilt these guys, then the actual plane of focus will be tilted [INAUDIBLE]. And the Scheimpflug principle says that this and this, in this case, will all meet at one point. But with the light field camera, you can do this directly. And you can decide this effect afterwards, after the picture is taken. The plane of focus doesn't really shift [INAUDIBLE] and so on. So I recommend you do that part of the assignment. You can skip the first one and just go for that part. Because the way the refocusing is achieved, as we discussed last time, you're just going to take all these 14 images in case of flatland and simply shift and add them. And how you shift and add them decides the plane of focus. If I don't shift and add, I focus at infinity. If I shift a little bit and add, I start focusing closer. If I shift more and add, I focus really close. But then I don't have to shift all of them the same way. I can shift some of them more and others less. And then I can focus on [INAUDIBLE]. Go ahead. AUDIENCE: [INAUDIBLE] to assume that the pixel blur will be constant [INAUDIBLE] for a given depth. RAMESH RASKAR: What do you mean by that? Can you repeat? AUDIENCE: So as you go to a given [INAUDIBLE] the focus. RAMESH RASKAR: Right. AUDIENCE: The pixel [INAUDIBLE] to see what all [INAUDIBLE]. RAMESH RASKAR: Mm-hm. AUDIENCE: Is it a reasonable assumption? RAMESH RASKAR: It's a reasonable assumption. It's a reasonable assumption that every point at a given depth will have the same amount of blur. When you start thinking about the lens aberrations and so on, you will actually realize that this is true only in the middle part of the image. But as you go to the periphery of the image, you start getting some lens aberration effects. But to a first order approximation, you can assume that the whole scene at a given depth has the same blur. And that's what distinguishes a cheap camera from an expensive camera-- because for a cheap camera, the center is of good quality, but the periphery is not. And an expensive camera will try to use multiple lenses and all kinds of tricks to make sure that particular statement is true. All right, so now we come to the part that kind of suggests the connection between scientific imaging and light fields.
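Before moving on, here is a hedged sketch of the slanted-focus variant just described: instead of shifting every view by the same amount, the shift varies across the image, tilting the plane of focus. It assumes the views are stacked in a NumPy array and uses bilinear resampling; the names and sign convention are illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def tilted_refocus(views, offsets, disparity_map):
    """Refocus with a spatially varying disparity, so the plane of
    focus is tilted instead of fronto-parallel. Each pixel of each
    view is sampled with a shift proportional to that camera's
    offset times the local disparity.

    views: (N, H, W) array; offsets[i] = (dy, dx);
    disparity_map: (H, W), e.g. np.tile(np.linspace(0, 2, W), (H, 1))
    for a left-to-right tilt.
    """
    N, H, W = views.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(np.float64)
    acc = np.zeros((H, W))
    for img, (dy, dx) in zip(views, offsets):
        coords = [yy + disparity_map * dy, xx + disparity_map * dx]
        # bilinear sampling of this view at the shifted positions
        acc += map_coordinates(img, coords, order=1, mode='nearest')
    return acc / N
```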
So when you want to-- there's this whole concept of adaptive optics, which is used in astronomy and telescopes and so on. And what it simply says is that if I'm looking at a star which is very far away, then the wavefronts that are coming from the star should be mostly planar. So imagine that for a stone in water: it starts creating these ripples, and if you go sufficiently far away, they become mostly planar [INAUDIBLE] vibrations. So if everything is clear between the star and where you are, all the wavefronts should look planar. But if there is some disturbance because of hot air and so on, this will be distorted. So let's say this one is-- there's hot air here. There's nothing over here, which is conceptually the same as putting a piece of glass here and nothing over it. So here, the light continues as it is. But over here, the light slows down and takes slightly longer to come up. And so the wavefront will look something like this. And as you go further, they all start to connect. Now you can imagine there's another pocket of hot air somewhere here. And then this guy will slow down. So it'll start looking something like that, like so. So the wavefront is bent. Now the way it's thought about in science and imaging is that if I want to take a picture of this star which is far away and the wavefronts are bent like this, the image that I will get will not be sharp-- it will actually be blurry. And this happens to you, right, if you are just looking through hot air-- or, not a foggy scene, but just, if you look across the river to Boston, even on a clear day, because of the temperature variations on the water versus land, the lights will not be sharp. They will be blurred. And actually, over time, they seem to shift a little bit [INAUDIBLE]. And that's happening because at one instant, it's bent this way. Some other instant, it's bent the other way because the pocket is changing. The pocket of air is changing up and down. So this is a big problem for telescopes because they would like to see this very clearly. And this is an example where we know that we should be looking at something that's very far away and really doesn't [INAUDIBLE] because the [INAUDIBLE] is all black here, and there is some galaxy or some [INAUDIBLE] over here very, very far. So what I would like to do is take all these guys. And if I somehow can, by a different method, figure out what this disturbance is, I want to create a mechanism so that when it comes out, it's again back to nice and clean. So before that, I should say that in a normal situation, in a friendly situation, from a scene point, light goes out. It is parallel. And then remember, the center piece of glass is the thickest, right? So it slows down quite a bit. And here, the glass is very thin. It goes pretty fast. All right, so what comes in as a planar wave actually starts becoming concave because remember, this one slowed down the most and this one slowed down the least. And this will converge down to a point. So this is kind of the real wave propagation way of thinking about how the image is formed. And the ray representation will be: our lens, our point. They go parallel. It's a prism. And each prism-- the middle piece of this glass, it goes straight through. This one is a prism with a slight tilt. So the ray is bent a little bit. This one is basically a prism [INAUDIBLE]. And eventually, it forms [INAUDIBLE]. But then [INAUDIBLE].
And by showing these two examples, what we realize is that we don't usually think about how different parts of the lens impact the incoming image. So going back to this example where you want to deal with looking at something that's very far away, then I can-- so we mentioned this distorted wavefront here. I can first reflect it off of some mirror. And what I'm going to do is, the shape of this mirror is going to be exactly opposite of this particular distortion. So I will do something like that. And when that is true, even though it was bent this way here, when it goes out, it will [INAUDIBLE]. And then I can put a lens and capture the image. So that's how it's done in astronomy. And this deformable mirror is deforming at thousands of Hertz. And how do they calculate what the perturbations are? Anybody know? They actually shoot a laser which ionizes at a particular height. So they basically create a virtual star. Sometimes they call it a pilot star. And they take a picture of that. And they know that star is supposed to look like a point. So any change in the appearance of that star is an indication of how the air between the telescope and the stratosphere is impacting the incoming light. And so you can use that mechanism to correct for-- so that pilot star basically acts as a calibration, and they will feed that signal to the deformable mirror. And it will correct that incoming wavefront. So if we have the same thing, you can go out in hot air where you have those mirages on a street, on a highway, or over water, and you can correct for it in real time. But this is really, really expensive. AUDIENCE: To make a window, [INAUDIBLE]. RAMESH RASKAR: So you want to correct for it, or you want to create an illusion that it's hot air outside? AUDIENCE: Illusion, yeah. RAMESH RASKAR: Yeah, I think you can. Yeah, that's pretty easy. It'll be a very interesting effect. In fact, what you can do is you can take a piece of glass. And I mean, the one key property of a mirage is that it's not just that you have an inversion. Everybody's familiar with mirages here? Where you have-- it's Walter [INAUDIBLE]. AUDIENCE: I think it's John. RAMESH RASKAR: John, sorry. You're OK with this? AUDIENCE: Yeah. RAMESH RASKAR: OK. Just tell me, and I'll switch over to the other side. In case of a mirage, you have the highway. You're driving over here. I'm sitting here. And so the ray is going straight. The [INAUDIBLE]. OK, and sometimes, you basically create a lens [INAUDIBLE]. And what you see in the picture is that you have the road that's going towards you. Then you see the blue sky, right? Let's look back here. And then you see the car that's coming from the other side. So you have this inversion. And that's very similar with this inversion because the air near the road is very hot. It's [INAUDIBLE]. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Sorry? AUDIENCE: Is that because of the Bragg scattering [INAUDIBLE]? RAMESH RASKAR: The Bragg scattering? Yeah, that reflects as well. The [INAUDIBLE] reflection and so on because again, changing the [INAUDIBLE]. Very minor changes spread over kilometers. So anyway, the reason why I'm bringing this up is the way that's computed-- you shoot a pilot star, and you want to figure out how the air is perturbed. How do they do that? AUDIENCE: Ramesh. RAMESH RASKAR: Yes. AUDIENCE: How are they going to image those pilot stars? So say you shoot them with a laser, right? RAMESH RASKAR: Yes. AUDIENCE: It should [INAUDIBLE] so that they could see it.
RAMESH RASKAR: No, it ionizes the air, and you just see this bright spot. [INAUDIBLE] AUDIENCE: And then compensating only for the actual atmospheric variations. RAMESH RASKAR: Exactly. AUDIENCE: Because there are other deviations due to mass and [INAUDIBLE]. RAMESH RASKAR: It's changing very rapidly. Every second, the air is changing. So if you only compensate for it once, that's not good enough because it's changing every second. OK, so now we have the pilot star. How do you figure out how to deform those mirrors? They capture a light field. But this is not how they explain it. They explain it using the notion of a [INAUDIBLE] wavefront sensor. That's the model they use. They are really expensive because they're very high quality. All they're doing is: this wavefront is coming in, and they have a lenslet array very similar to the one we saw from [INAUDIBLE] at Stanford. And the image that's formed here actually tells you how the light is bending. So let's go back to our picture here. We know that if you have a lens and the wavefront is traveling parallel, where the image forms is-- AUDIENCE: Focal. RAMESH RASKAR: Focal. Exactly, at the focal point. If the wavefront is tilted, where will it form? [INAUDIBLE] formation, right? So a particular-- This is A. This is B. The image of A will be formed here. B will be formed here. So just again in wave optics, if the light is coming from here, the image will form here. If the light is coming from here, the image will form here. [INAUDIBLE] Now what's happening over here is we don't have this one lens, but we have a set of lenses. So we will have lens, lens, lens, lens, and so on. If you were looking at a very clear sky up at the pilot star, the wavefronts would come in planar. And all the images will form exactly at the same locations. But as you mentioned, because of perturbation, you have, I think, a very simple example. You have something that's going straight and something that's tilted. That is one example. Now what's going to happen is for this part, when it hits these lenses, the image is [INAUDIBLE] made at the same location. But for this particular one, which is tilted, the image will be formed slightly off center. So I can look at this image, right? Originally, the point was in the middle, middle, middle, middle. And now after the aberration, the actual point will be shifted. So these two guys are fine, but these two guys, they've shifted from the middle. And that tells you how the waves are bending. You just take this differential signal, you feed the mirror, and the mirror will [INAUDIBLE]. It's as simple as that. But this is also [INAUDIBLE] because you can think of this as a perturbation of this. So in this case, the waves are traveling in a straight line, along the optical axis. And in this case, the waves were traveling [INAUDIBLE] like this. And so that's why these guys get here. They go slightly off. And this guy goes [INAUDIBLE]. So the notion of light field is also very compatible with the notion of wavefronts. There are a few more details, but again, keep it simple. It's also used in ophthalmology to look at any aberrations in the lens of the eye. Same exact idea. You have-- let's see. You have the retina here. You could somehow shine the beam and create a bright, small spot. If everything is well, the light will go through it, and it will create this set of dots in the middle. If there is an aberration in the lens, this wavefront will bend, and the points would not be at the center, but they will be offset [INAUDIBLE].
So in a single snapshot, I can tell you what the aberration is. There's one critical element here that I'm not talking about, which is when you get this curve out, you don't really get the whole thickness or the depth of this curvature-- some piece of information is missing. What is that? Perfect. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: The phase or-- you get the phase, but you only get-- AUDIENCE: It's 2 pi, right? RAMESH RASKAR: Exactly. It's modulo 2 pi. It's modulo 2 pi, which means that if you-- let's say I put a piece of glass here, and the wavefront goes here like that. So let's say there is a piece of glass slowing light down by exactly one wavelength. I can put a piece of glass with twice the thickness, and light will be delayed by two wavelengths. That's why this is a concern. The phase delay is computed only modulo [INAUDIBLE]. So there's a phase wrapping that's going on. So if you have a shape like that, if you have a shape, let's talk about it. So this picture here is actually showing that it's phase wrapped. So it might look like a lot of discontinuities. But basically, it starts from the edge, and there's zero difference. And there's-- this curvature here is bent. And then it seems to be jumping from, say, here to here. But actually, it's a continuous surface. It's just phase wrapped. And this concept is used in many other fields, if you're familiar with synthetic aperture radar or ranging. You get some values that are [INAUDIBLE] specific things. So here we have the Fresnel lens that you see on top of transparencies. If you have a lens like that, instead of a very thick lens, you can create a very thin lens which basically keeps the slopes. And you take this slope, put it right here. You take this slope, put it right here. You take that slope, put that here. The middle one just stays as it is, and so on. And this one and this one-- again, in certain cases, [INAUDIBLE]. So the flat magnifying glass is [INAUDIBLE], or the large [INAUDIBLE] on top of the [INAUDIBLE] projector. They all look [INAUDIBLE]. It's always at the edges like [INAUDIBLE]. So here, there's a small surface. They're not actually-- this is just a conceptual model. But if display [INAUDIBLE], it doesn't really matter. It's still that density. But here, the ratings of the boundaries [INAUDIBLE]. There are some issues there. But a lot of those issues based on this [INAUDIBLE]. And here, what you're [INAUDIBLE] are saying is I'm just going to chop this glass into multiple slabs, and I'm going to keep only the last slab. So I'm going to keep this one, this one, and this one, this one, this one, and then I'll go to the last slab, this slab. That works like the traditional lens. [INAUDIBLE] And a similar concept is also applicable for phase [INAUDIBLE]. So then your doctor can see that in a perfect eye, all the points are in the center. But if there are aberrations, the points will shift off the center, and all you have to do mathematically is-- the shifts give you the gradient, the 2D gradient. So you have to do a 2D integration to recover the surface. You have to solve the Poisson equation. And for those of you who are familiar with how to go from 2D gradients to 2D height fields, it's a straightforward process. AUDIENCE: When they test your eye, which test is this? Is it the one where they scan the light on the back of your eye? RAMESH RASKAR: I don't have glasses. And I don't have contacts. Anyone else? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Right. AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Exactly, yeah.
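The last step described above -- spot shifts give local slopes, and a 2D integration (a Poisson solve) gives the surface -- can be sketched with a standard FFT least-squares integrator. This assumes periodic boundaries and recovers the surface only up to an unknown constant offset; it is a sketch, not production wavefront code, and all names here are illustrative.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Recover a height field (up to an unknown constant) from its
    x/y slope maps, as in the wavefront-sensor step above: spot
    shifts give gradients, then a least-squares Poisson solve
    gives shape. Assumes unit sample spacing and periodic borders.
    """
    H, W = gx.shape
    fx = np.fft.fftfreq(W)[None, :]    # cycles per sample along x
    fy = np.fft.fftfreq(H)[:, None]    # cycles per sample along y
    GX = np.fft.fft2(gx)
    GY = np.fft.fft2(gy)
    # d/dx in Fourier is multiplication by 2*pi*i*fx, so the
    # least-squares solution divides by the squared frequencies.
    denom = (2j * np.pi) * (fx**2 + fy**2)
    with np.errstate(divide='ignore', invalid='ignore'):
        Z = (fx * GX + fy * GY) / denom
    Z[0, 0] = 0.0                      # the unknown overall offset
    return np.real(np.fft.ifft2(Z))
```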
That's a very good point. That's a really good point. So all these discussions we are having, everything we have done so far, is for one specific wavelength. So let's say it's a red color. It's linked to this particular type of distortion. But if you had a green light, blue light, [INAUDIBLE], then the distortions are a function of the refractive index. The refractive index is a function of the wavelength. So the distortions are also a function of the wavelength-- and so the aberrations that you will see, or the focusing mechanism you'll see, will be different at different wavelengths. And we'll go into that a little bit later when we talk about [INAUDIBLE]. So that was a bit of a small detour on how waves and rays are connected and how the concept of light field is used in many areas. OK, so let's start having some fun with this sequence of images. So we saw last time that you can see through occluders. And this is what you are going to do for your assignment. And think a little bit about how these different rays are allowing us to achieve these effects, OK? Now some of the slides will have some math and some geometry. So those of you who are interested, stay awake. Those of you who are not interested, you can ignore these slides mostly. But for those of you who really want to think about light rays, this is an extremely powerful concept. And when you visualize a light field, you're talking about position and angle. Some people call it phase space. In Fourier optics, it's space and spatial frequency, or a spectrogram. And all it's saying is that again, if you go back to the ray representation in flatland, there's position, and there's angle. So I can take this green ray and plot it in a position and angle space, like so: there's x here and theta there. I'm going to take this green ray, find its x-position, which is x1. And it's at a particular angle, theta 1. And I'm going to place that. There may be another ray that starts from the same point but at a different angle-- so the same position for a different angle. What about this one here? This one has a different position for the same angle as the green ray. So x2. It's a different position and an angle the same as the red ray. So it's the same as the red ray. Is this diagram very clear? Because we keep coming back to this, this particular diagram. OK, now we can start thinking about how light will propagate. And if you're doing the first assignment, the virtual optical bench, this is what you'll be doing. And once you see this couple of slides, it will be crystal clear how this is done. Now let's see this ray as propagating from this plane to this plane. If it travels by a distance of z, I can write down the new coordinate as simply the original coordinate plus the angle times the distance. Typically, you would have, I guess, the tangent of the angle. But with a small angle approximation, it's just theta. And then the new x-coordinate is just the old x-coordinate plus z times theta. So how can you represent that over here? So if you look at the original points, OK, I'm going to take the green ray and change its x-position. But its theta position is not changing because it's still going in the same direction. So its position is x1 prime. But the angle is still theta. So I will take this green guy and just shift it to the right to represent this propagation. I just want to shift it from here to here. And these other guys will shift as well. They'll shift to the right if they're above this plane, and they'll shift to the left if they're below this plane.
So you create a shear of the x-theta representation. Now, you might think this is unnecessary-- why are we thinking about this dual space when we can just think about the primal space of the real world? But again, it becomes extremely easy to analyze things in this dual space of x and theta. Is this clear? AUDIENCE: I didn't get why the others were affected if you were only moving the green ray [INAUDIBLE]. RAMESH RASKAR: So if I take now this particular ray, think about what x2 prime for that will be. We started with x2 and theta g, and now we want to find the new x2 prime. It's going to be x2 plus a similar term-- z times theta g. So it also shears. And depending on how far up you are here, the shear is larger. So this means that if the ray was actually parallel to the optical axis-- perpendicular to this plane-- x will not change and theta will not change. So if you are lucky enough to be on this equator here, on the optical axis, nothing will change, really. Does that make sense to everyone? All right, so let's do a couple of quick exercises to make sure we are all on the same page. If there's a point light source-- and here's an example where the light is very simple: I have a bulb, an LED, and it's emitting light in all directions-- how can I represent that in x-theta space? There's a particular x-- I'm going to put it somewhere here-- and it's going to span all the thetas. AUDIENCE: A vertical line? RAMESH RASKAR: So it'll be a vertical line, very good. Now, if I let it propagate-- or let's say I have a new light source here, where the light is coming from very far away and all the rays are arriving at 10-degree angles with respect to the optical axis-- what will this be? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: A horizontal line, because the rays span all the x-coordinates but only with a theta of 10 degrees. All right, so those are two very simple examples: a point that's very close-- a vertical line-- and a point that's very far away-- think about a star-- a horizontal line. From these two, we'll be able to build a lot of machinery to understand how we can create some truly amazing effects using light fields. OK.
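A minimal sketch of this x-theta bookkeeping, useful as a starting point for the virtual-optical-bench assignment; the array layout and the numbers here are illustrative assumptions, not the assignment's required interface.

```python
import numpy as np

# Rays as (x, theta) pairs: position and angle, in the paraxial regime.
# A point source at x = 0.2 spans all angles: a vertical line in x-theta space.
thetas = np.linspace(-0.1, 0.1, 9)            # radians, small-angle regime
rays = np.stack([np.full_like(thetas, 0.2), thetas], axis=1)

def propagate(rays, z):
    """Free-space transport by distance z: x' = x + z*theta, theta' = theta.
    In x-theta space this is a horizontal shear, largest far from the axis."""
    out = rays.copy()
    out[:, 0] += z * out[:, 1]
    return out

sheared = propagate(rays, z=5.0)
# Rays with theta = 0 (on the optical axis) keep their x; rays above the
# axis shift right, rays below shift left -- exactly the shear described.
```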
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_8_Survey_of_Hyperspectral_Imaging_Techniques.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. INSTRUCTOR: He's the world's expert in understanding wavelengths and so on. [LAUGHTER] No, he's been a collaborator with our group, and he has done a lot of work in this space. He's built all kinds of interesting cameras, also done compressed sensing, and now he's working for an organization called MITRE, M-I-T-R-E. MICHAEL STENNER: Correct. INSTRUCTOR: And he's going to tell us-- he did this really beautiful study of different ways of capturing multispectral images. We're delighted to have you here. He's going to tell us about it in more detail. MICHAEL STENNER: OK, so by way of introduction-- and I'll talk about this in a moment, but it's an interesting thing-- I think this computational photography, or computational imaging, community is a new enough discipline that very, very few people actually studied it as such. Most of the people who have come to this community have come either from a computer vision, or graphics, or other computer background, and others have come from an optics-y, physics-y background-- and I've come from that other side of it, which may shine through in some of the things that I say to you today. So I'm going to talk, in general, about spectral imaging. And this is a talk that I put together not exactly for a group like this, so it's a little bit short on introductory slides; it kind of just dives right in. So I'm going to chat for a moment first. These guys provided me some background slides, since you haven't talked about tomography yet, which is going to come up at some point in my talk. So this is going to be a little disjointed, but consider this background before I get into the body of the talk. So: some of the techniques that I'm going to talk about rely on tomography, which most likely everybody has heard of. That's the T in a CAT scan or CT scan. It's also a major part of how MRI scans work. And the basic idea behind tomography is that you take a measurement where you're integrating something along a path through some object, and then you measure, for example, the intensity of these beams. So you end up with, basically, line integrals of some property through this thing-- in this case, effectively the density of the object-- and you measure that at the bottom, on the other side. You do that for a bunch of different angles as you rotate your source and your detector around the object. And from that, as we'll talk about in a minute, you can reconstruct the object that you're interested in. Now, this slide is just demonstrating that there are a few different geometries for doing this. In one case, you have parallel-beam tomography; in other cases, you have fan-beam tomography. That is, to a large extent, a choice made purely on the convenience of your physical system. So for example, if your beams are X-rays, then generating a whole bunch of parallel X-rays is kind of a pain, whereas creating a point source of X-rays that just go off radially, and then detecting over here, is quite a bit easier. So you just have some rearranging of rays to deal with on the detector, but that's the basic idea. So then go two slides forward.
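The line-integral idea can be prototyped in a few lines: rotate a toy density image and sum along columns to get one parallel-beam projection per angle. This is a rough sketch (the rotation's interpolation slightly blurs the projections), and the phantom is invented for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Parallel-beam projections: rotate the object and sum along columns,
    so each row of the result is one 1D 'X-ray' at one angle."""
    sinogram = []
    for a in angles_deg:
        rot = rotate(img, a, reshape=False, order=1)
        sinogram.append(rot.sum(axis=0))   # line integrals along the beams
    return np.array(sinogram)

# A toy 'head': a disk with a dense blob inside.
y, x = np.mgrid[-64:64, -64:64]
phantom = ((x**2 + y**2) < 60**2).astype(float)
phantom += 2.0 * (((x - 20)**2 + (y + 10)**2) < 10**2)

sino = radon(phantom, np.arange(0, 180, 5))  # 36 projection angles
```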
Well, no, that second one is fun. So let's go do that. So here's an example of computed tomography for a head, apparently. This is the density function. And if you look at the parallel-beam projections-- mathematically, this is called the Radon transform-- this axis is the angle that the parallel beams are going through the head, and this axis is which of those many parallel beams you're looking at. So you see this-- it's really hard to describe without some familiarity with this problem, but this is the sort of pattern that you get. And basically, there is enough information here to reconstruct this. INSTRUCTOR: So you can think of each vertical slice as one image taken from each direction-- MICHAEL STENNER: As one-- yes. So for example, if you were to just take a standard X-ray, like the one your doctor puts up on the wall, a vertical slice here is a 1D version of that X-ray, right? So you take that same kind of X-ray, but looking through in a whole bunch of different directions, and this is what you get. The same thing here, except that they're just rearranged a little bit. Instead of a vertical slice, now a curved surface is the normal thing. You still get all of the same parallel rays; they just don't all happen at the same time anymore. Next one. So now, the question is, how do you turn this back into the image that you're looking for? And as it turns out-- it's been a while since I thought about this, so I might get the details wrong; feel free to help me out if I screw this up-- if you are projecting through this way, then that vertical slice is what you get for a single angle, and that vertical slice basically is a line through the 2D Fourier transform of your image. OK? So you take all of those vertical slices, and instead of plotting them in this nice rectangular arrangement, you take each one and lay it out at the angle it was taken. And that's going to give you an estimate of the Fourier transform of your object-- except you're going to have samples along these radial lines, based on how you scanned. INSTRUCTOR: Maybe you can draw it quickly? MICHAEL STENNER: That's a very good idea. Yeah, I can do that, thank you. So here's my Fourier space; here's my physical space. Start out with my parallel rays. I measure the result with my detector over here. And that tells me the Fourier transform-- this is a 2D Fourier transform-- and I now know the Fourier transform sampled along that line. OK? So now I do it again in this direction, and now I know the 2D Fourier transform here. All right, that's great. So one might, only slightly naively, think: hey, if I know the Fourier transform of this thing, then I know the thing itself. All I've got to do is Fourier transform back. That's almost completely true, except you realize that the pattern you have the Fourier transform on is not a rectangular grid. So you can't just pop it into your standard 2D Fourier transform algorithm and get the original function back. There are lots of crazy interpolation problems, and it gets a little bit ugly. So there is-- I'm not going to talk about how it works-- but there is this other algorithm that's very popular for this sort of thing, called filtered back projection. Really, it changes where you do the interpolation.
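The slice relationship he describes can be checked numerically at angle zero, where no interpolation is needed: the 1D FFT of the projection equals the corresponding line of the object's 2D FFT. A minimal check, with an invented smooth test object.

```python
import numpy as np

# Fourier slice theorem at angle 0: projecting along y and taking a 1D FFT
# gives exactly the k_y = 0 line of the object's 2D FFT.
y, x = np.mgrid[-64:64, -64:64]
obj = np.exp(-(x**2 + y**2) / 400.0)   # smooth toy object

proj = obj.sum(axis=0)                 # integrate along the beam direction (y)
slice_1d = np.fft.fft(proj)            # 1D FFT of the projection

F2 = np.fft.fft2(obj)
central_line = F2[0, :]                # the k_y = 0 line of the 2D FFT

assert np.allclose(slice_1d, central_line)
```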
And in general, filtered back projection gets past a lot of the artifacts that you would otherwise see. The bottom line, the thing to take away from this little pre-talk talk, is the basic concept of tomography: you perform integrals through some object from a bunch of different angles, and that gives you all you need to know in order to figure out the full structure of the object itself. INSTRUCTOR: Is that clear? MICHAEL STENNER: Any questions? INSTRUCTOR: So in X-ray tomography, you have to put an X-ray source with the sensor, take multiple projections, and from there reconstruct what's inside. And we're going to use that same exact principle now, but think about wavelength. MICHAEL STENNER: Yeah, we're going to stop looking at heads and look at data cubes. Go ahead. AUDIENCE: So if you have some kind of object that's spherically symmetrical, but if you cut it, it actually has different layers, how do you get the layers out? MICHAEL STENNER: How do you mean, it's spherically symmetrical? AUDIENCE: So let's say the density, or whatever you're measuring, is exactly the same from every direction, but internally, it's actually heterogeneous instead of homogeneous. INSTRUCTOR: Like the hair? AUDIENCE: Well, that's not perfectly spherical. MICHAEL STENNER: So are you saying it's got layers? AUDIENCE: Yeah, but perfectly symmetrical layers. MICHAEL STENNER: OK-- [INTERPOSING VOICES] MICHAEL STENNER: So a spherical shell of glass, a spherical shell of metal, built up like that? AUDIENCE: Yeah. MICHAEL STENNER: That's not a problem, because what you're suggesting might be a problem is that every measurement you take, every image you take, is going to look exactly the same. AUDIENCE: Right, I get it now. MICHAEL STENNER: But the object really does look exactly the same from every direction. So that's OK. AUDIENCE: OK. INSTRUCTOR: But that's a good way of thinking about this problem, because it's a [INAUDIBLE]-- if it looks the same from the other side, then in a traditional camera, it must be a sphere. But in X-ray tomography, or tomography in general, if it looks the same from every projection, then it must be spherically concentric. MICHAEL STENNER: I'm a big fan of symmetry. OK, so this talk is basically a comparison-- it was put together for a customer who wanted a survey of hyperspectral imaging techniques and the trade-offs between them. So there's not a whole lot of intro here. What we're trying to do-- I'm going to talk about a data cube, which is x, y, and lambda. And we've got some resolution in all three of those; in general, I'm just going to call that nx, ny, and L-- the number of resolution elements in lambda. OK, so that's a reasonable introduction. So these are all-- some of these have become commercial products, but these are all basically research-level devices, nominally doing the same thing, except for the last couple. One of which is Ramesh's, and [INAUDIBLE]. And the other one-- well, maybe it doesn't matter where it's coming from, but it's another one that's not exactly a spectral imager. One more thing, by way of terminology: spectral imager means you get some spectral channels-- a few, five, 10. Multispectral generally means you get a bunch, dozens. Hyperspectral generally means you get, like, 1,000. So I tend to be very sloppy with those terms, but that's really all it means.
INSTRUCTOR: There are some people who use multispectral to mean visible and hyperspectral to mean [INAUDIBLE]. MICHAEL STENNER: That's true. INSTRUCTOR: People use all kinds of different terminology. MICHAEL STENNER: Yeah, so basically, I'm just copping to the fact that I'm as sloppy with these terms as everybody else is. OK, so go ahead, we'll get started. So the points of comparison I'm going to talk about here are: the data volume of the scene that we're looking at-- for almost all of these, it's going to be nx times ny times the number of spectral channels. The physical volume-- just how big the physical device is. The architectural impact on acquisition time-- some of these are devices that you point at something, push a button, and everything's acquired in an instant; others have to be scanned. Another one is the computational reconstruction and its scaling-- you guys are pretty comfortable at this point with the idea that sometimes data has to be processed after you acquire it. The photon efficiency-- this is a big one that Ramesh was alluding to earlier: in a lot of these devices, you end up throwing away a lot of light that actually comes through your aperture but never gets used. And then compression is an interesting one-- have you guys talked about compressive sensing at all? INSTRUCTOR: No, not yet. MICHAEL STENNER: OK, so compressive sensing is basically the idea that if I'm trying to measure a data volume, nx times ny times L, I might take some number of measurements that is smaller than that. And based on some assumptions about the scene, I might try to reconstruct that full data volume. There might be artifacts involved, but that's the general idea. So a couple of caveats: some of these quantities are rough, and I'm not talking about data quality here, because it's very, very dependent on the specific device and the way you operate it. All right, first and easiest, by way of introduction, is this kind of baseline camera, where you just have a scanning filter. And this is one that Anka talked about earlier, as well. You just have a lens, and you're imaging a scene onto a sensor, but before you do that, you have some tunable wavelength filter in place. And so, just to get you familiar with the language I'm going to be using: the data cube in this case really is the standard thing. The volume is basically just however big your lens is and whatever its focal length is. Acquisition time-- the impact here is that you have to scan this filter. So you have to point the camera at a scene, scan the filter, and wait while that happens. Photon efficiency here is probably one of the more relevant, or more interesting, points. And this is what Ramesh was talking about earlier-- it's 1 over L. When you're talking about visible light, 1 over 3 doesn't seem like such a big deal. 1 over 1,000-- that becomes a problem. INSTRUCTOR: Do you want to explain how the tunable filter works? Or what it does? MICHAEL STENNER: Yeah, so as Anka was saying, this might be a color wheel-- a wheel that has different parts with different spectral filters. And each of the spectral filters in a device like this is going to be a notch filter, effectively. It's going to block everything but some narrow range, maybe 5, 10, 20 nanometers. Or there are-- I don't even know how to describe them-- cavity-based tunable filters that are electrically tunable. So they've got no real moving parts, exactly.
But you can dial in the filter-- INSTRUCTOR: It can change its color response, usually by changing the distance between the plates. If you've seen soap bubbles floating, you'll have noticed that depending on the thickness of the bubble, it has a different color. Or if you spill oil on top of water, then depending on the thickness of the oil layer-- there you go. MICHAEL STENNER: Yeah, so by way of analogy: any time you've got these two layers, you're creating a cavity, which acts like a filter. And you can think of that as-- this is a loose analogy, don't take it too literally-- like a single-pole filter in the electronic domain. You can add additional layers, and you get to something like a Bragg grating, if that sounds familiar to you guys, or a multilayer dielectric stack, if that term works better-- where you can get even sharper cut-offs, narrower resolution, and larger stop bands on the sides. So you can end up getting extremely selective filters in situations like this. INSTRUCTOR: Unfortunately, the ones that are programmable are-- MICHAEL STENNER: Typically not that way. Yeah, they're typically just a single cavity [INAUDIBLE]. AUDIENCE: Yeah, so what's an acousto-optic filter? How does it work? MICHAEL STENNER: Pardon me? AUDIENCE: Acousto-optic? MICHAEL STENNER: Acousto-optic-- oh, boy. OK. It's one of my favorite devices. Yeah, I'm going to try to keep myself from going too crazy here. AUDIENCE: You can always come back next week. MICHAEL STENNER: Yeah, in fact, I will be back next week. But I had a plan for talking-- I don't want to steal Robbie's time. AUDIENCE: That's OK. MICHAEL STENNER: I think he has a better idea. AUDIENCE: No, it's missing [INAUDIBLE] MICHAEL STENNER: Oh, just working [INAUDIBLE]. All right, so acousto-optics. You've got some piece of glass-- it's usually not exactly glass, but a glass-like substance-- and you generally have a piezoelectric transducer here. It turns electrical signals into mechanical motion, or vice versa. In this case, we're going to do the former: we're going to drive it with some typically RF signal, say 100 megahertz. If you work with one of these in the lab and try to listen to an FM radio, it gets really annoying. And what this thing does, if you drive it with some frequency, is generate a traveling pressure wave-- a sound wave-- inside your device, which is usually built to dump the wave at the far end. And if you pass light through this, it will scatter off of that wave, just like any other diffraction grating. I honestly do not know how you do this-- I'm familiar with these in the laser setting. How you get-- INSTRUCTOR: It's a standing wave, then you-- you get a standing wave pattern. AUDIENCE: It's not a standing wave-- it's traveling. MICHAEL STENNER: It's generally traveling. AUDIENCE: It has a Doppler shift. MICHAEL STENNER: I don't know-- INSTRUCTOR: [INAUDIBLE] MICHAEL STENNER: So the way you can select wavelength with these things is that it's basically a grating, right? If you imagine one wavelength coming in will go this way, a different wavelength will go that way, and so on-- it's just like any other grating. And then you can simply put an aperture out here. And by changing the frequency that you're modulating with, you can change the grating period, and align different wavelengths with your aperture.
So it's basically your standard grating, except you can tune the period. INSTRUCTOR: In the Object-Based Media group, Jack Smiley is actually building these to make holograms. So if you guys want to see one, it's a little thing like that, full of waves at a range of frequencies. AUDIENCE: For steering the lasers. MICHAEL STENNER: Yeah, one of my favorite devices-- they're really awesome. I'm going to say one more thing, just because I have to, because it's so cool. As a physicist, you can work out exactly how these things behave, in terms of how much angle you get-- and it actually shifts the wavelength of the light as it does this. So what comes out is not the same as what goes in. And you can figure all of that out if you simply conserve energy and momentum, and treat the incoming light as photons with known energy and momentum, and treat the sound wave as phonons with known energy and momentum. You do that, and it's all just standard physics 101 billiard-ball-type stuff. It's very cool. OK, back on the path here. So that's pretty much our baseline scanning-filter thing. Any other questions on this guy? INSTRUCTOR: This was as simple as possible. MICHAEL STENNER: Yeah, if you're lost now, you're in trouble. So speak up. OK, all right. Next one. So our second baseline-- Anka also talked about this one-- is the standard push broom. This is basically very similar to just a standard spectrometer. In fact, the thing that you are passing around here in the little box is pretty much exactly this; I mean, there are slight differences in how it's made. INSTRUCTOR: It's in the box [INAUDIBLE]. MICHAEL STENNER: Yeah, so in that box, as you were looking through it-- AUDIENCE: It's in here. MICHAEL STENNER: Yeah, great. As you're looking through this hole, you see that there is this slit here-- a slit there. You can see it right there. And in this architecture, the light is imaged onto the slit. That light is then made parallel again. As it hits the grating, it's spread in different directions, so the direction of the light is related uniquely to its wavelength. And then that light is refocused onto the sensor. So now, if we have some flat, uniform scene, what we're going to see on the sensor is vertical stripes-- exactly like you saw through this thing-- where each stripe corresponds to a wavelength. So if you measure how much intensity you get in each of those stripes, you know how much power you have at each wavelength. Now, if the scene has some structure vertically, then the structure along a vertical line is related to the object's physical structure, vertically. If the object is uniformly dark at the top and bright at the bottom, then that's what you're going to see on the sensor. And the horizontal structure of what you get on the sensor is directly related to the wavelength. That's completely it. So what this gives you, if you have this 3D data cube: a single frame of the sensor will give you one plane through that data cube, except instead of x and y, like a normal camera, it gives you x and lambda. OK? INSTRUCTOR: So y is lost. MICHAEL STENNER: y is lost. INSTRUCTOR: But averaged. AUDIENCE: What did you write-- MICHAEL STENNER: No, it's not averaged, it's-- INSTRUCTOR: Lost, only one point. MICHAEL STENNER: It's one point. INSTRUCTOR: That's it. MICHAEL STENNER: Exactly. And so normally, what you do in this case is you scan it.
So you either move the camera and take multiple frames at different locations, or-- Anka was completely correct-- a lot of the standard applications are airborne, either for military purposes or for agricultural surveys, or whatever. So you're flying on an airplane, and you basically get the scanning for free; you just use the airplane to do the scanning. OK, so the volume is a little bit bigger-- you've got more optics. There's no reconstruction here; you're building the data cube up directly. And the other interesting note is that this is also not particularly photon-efficient, because you are getting light from other locations, other x values, and it's simply being dumped by this slit-- if photons don't hit the slit, but hit either side of it, we're throwing that light away. So that light is all getting wasted, and this is also not particularly photon-efficient. All right, so now we get into the first wacky version of this. It is also perhaps the most complicated-- sorry, the graceful introduction is over. This, architecturally, is completely identical to the thing you just saw, except we have now replaced the slit with some sort of code. So now, all of the light that gets imaged onto this thing will be modulated by the code, but everything else is the same. You can think of the slit, by the way, if this helps you, as a particularly boring code. It's a perfectly legitimate code-- it's just a really boring one. This is a more interesting code. In general, these codes are going to be half open and half blocked. But then all of the usual stuff happens: we recollimate, go through the grating, and the wavelengths get separated and fall on our sensor. INSTRUCTOR: OK, so we have the notion of multiplexing, where we talked about, if you want to measure nine bags, you can measure them one at a time, or you can take random linear combinations and then invert. So it's the same concept, again, done for light. MICHAEL STENNER: Right, so this device was originally built to be a spectrometer, just like this. So we will conceptually step back for a moment and think about it in that context. Don't worry about imaging; don't worry about x and y. In fact, you can assume that all of the light is hitting a diffuser in front of this thing, for example, so there's no spatial structure. So imagine for a moment that this were just some interesting spectrum-- maybe one of these fluorescent lights, or whatever else-- but spatially uniform. The benefit is that now, instead of just the single slit through the middle of this thing, we're collecting a lot more light: we've gone from a factor of 1 over ny to 1 over 2, effectively. All right, so we modulate this guy, and then we have this problem. This is the problem in general with the slit-- I'm going to back up one more time. Why don't we just make the slit bigger if we want to collect more light? Any takers on that? Go for it. AUDIENCE: Because then you're probably not sampling just the one light source that you're interested in; you're probably sampling a larger portion of the scene, which won't give you the specific [INAUDIBLE] you're looking for. MICHAEL STENNER: Yes. So if we look at our sensor plane, and we look at one column of pixels along here: if we have an infinitely narrow slit, we know that this physical location on the sensor is associated with one wavelength. OK?
If we now have a wider slit, then this location is associated with one wavelength from the left half of the slit, but a different wavelength from the right half of the slit. So now we have this mixing of spatial and spectral information, which is problematic. Now, the assumption that the whole scene is spatially uniform helps, but that's not generally a realistic assumption. So what this does is help us get around that. Because we've coded this, we have a way of disambiguating this otherwise ambiguous spatial and spectral mixing. So now, let's go back. We have this single column on our detector. We really do have different spatial locations on our code, on our aperture, mixing-- they're all combining on this column of pixels-- except that this one over here contributes one wavelength, because that light bends, say, less, and this one over here contributes a different wavelength, because that light bends a lot. Right, so what do we do? We take our measured values here, and we take the dot product with the appropriate code. And these codes-- Hadamard codes, for example-- are designed so that the dot product of any two mismatched codes is just zero, and the dot product with the single correct code is some large value. So we take the dot product with each of these, and that tells us how much light came through this part of the aperture and landed in this place. You do that for all of them, you figure out what each of those contributions is, and you can reassemble them. OK, so that's how this thing works as a spectrometer. Now, turning it into a spectral imager is just a little bit different. We're only trying to make this a push-broom spectrometer, so we've only got one spatial degree of freedom that we're trying to recover. And in this case, we're going to make that the vertical direction. Is that right? Did I get that right? No-- make that the horizontal direction. Because we've now figured out exactly how much light of each wavelength came through each of these vertical columns, so we know how much light came from that location and what the wavelength of each contribution was. So we're almost there. There's only one remaining problem, and that is: if the scene has structure, that's a problem. The Hadamard code assumes that the underlying thing we're trying to reconstruct is smooth, flat, unstructured. So the cleverness here comes from the following. We can slide the code vertically and wrap it around at the bottom, and take each of those versions: slide it one step, take a frame; slide it again, take a frame. And then reassemble them at the end, so that you can, say, pick this spot in the code and look at what happened when it was here, and then look at what happened when it was here, and so on. Or equivalently, you can look at this spot on the image and ask: what happened when this part of the code was there, then when this part of the code was there, and so on. And you can synthesize a full-frame version of this that is actually using only this row of data over and over and over again. That's horribly unclear, and I apologize, but that's just about the best I can do.
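The dot-product decoding is compact in code. Here is a minimal sketch using a +/-1 Hadamard matrix for the math; a physical mask would use a 0/1 "S-matrix" variant of these codes, which this example does not model.

```python
import numpy as np
from scipy.linalg import hadamard

# Hadamard multiplexing sketch: instead of measuring one channel at a time,
# each measurement sums all channels with +/-1 weights. The rows of H are
# mutually orthogonal, so mismatched codes dot to zero and the matched
# code dots to n, exactly the decoding step described in the talk.
n = 8
H = hadamard(n)                  # n x n matrix of +/-1 codes
x = np.random.rand(n)            # unknown spectrum, n channels

m = H @ x                        # multiplexed measurements
x_hat = (H.T @ m) / n            # decode: H.T @ H = n * I

assert np.allclose(x_hat, x)
```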
The point is that you can get past the limitations on spatial structure that this spectrometer generally has by scanning the code in the y direction, in this case. And that's OK, because we're planning on doing that anyway in these push-broom architectures. So we do that, and we get the vertical direction; we already had the horizontal direction. So you get the two spatial degrees of freedom, and you have your spectrum. The upshot of this guy is that it only throws away half the photons, which for these things is actually pretty good. And that's pretty much it. There is a little bit of reconstruction-- every point on this thing is created by taking a dot product with the code-- but that's actually not that bad. So this is actually a pretty cool system. This is, in some sense, kind of my favorite. INSTRUCTOR: And I keep going back to the comment that Mike made: the previous one, with the slit, can be thought of as just a boring code. MICHAEL STENNER: That's right. INSTRUCTOR: And the operation would be identical. You would proceed as if you didn't know that the code was so simple, and you would still do all the same operations-- except now, instead of measuring multiple quantities at the same time, we're actually measuring only one row at a time. MICHAEL STENNER: Yeah, let me try one more time, this way. If every column had a uniform value here, we're all pretty confident that this would work, right? The Hadamard code does a good job of allowing us to distinguish how much each column of the code contributed to each column on the sensor. So that tells us, for a given column, what came from what place in the code, and what had what wavelength. So that's pretty straightforward. The only real trick is what happens when the scene is not vertically uniform, and that we can fake. In fact, let's do it this way-- this is a much easier way to say it. We've got some structured scene here, and we're going to fake a single frame that has uniform structure vertically. How we're going to do that: I'm going to say that we are only interested in this line. As we're scanning the object past this thing, when this line is here, I'm going to collect this row and ignore everything else. A moment later, when that line is up here, I'm going to collect this row and ignore everything else. And so on, all the way up. And one step at a time, you construct exactly the frame that you would have if the whole scene were uniform with this line. So that's how we fake it-- except we're not doing one line at a time, we're doing the whole thing. INSTRUCTOR: You're doing Hadamard multiplexing [INAUDIBLE]. MICHAEL STENNER: We're doing Hadamard, exactly. So this guy does a pretty good job. INSTRUCTOR: As you'll see over and over again, this trick of taking linear combinations of multiple quantities is constantly being used, because we want to make the photon efficiency go as close to half as possible. MICHAEL STENNER: That's a big factor here, the photon efficiency, because the main problem with those two baselines is that they both have lousy photon efficiency. All right, I think I've already talked about all of this stuff. So this is the reconstruction.
The scanning options, by the way: we can either move the scene over the code, like with the airplane moving, or we can cyclically scan the code in an otherwise snapshot kind of way-- that is, just point the camera; nothing moves but the code. Yeah, go ahead. OK, so this next one is out of the same group. By the way, in my notation here, the person in parentheses is the professor in the group, and the name in front is the person who did all the work. INSTRUCTOR: [INAUDIBLE] really get the credit. MICHAEL STENNER: That's right-- so it's the real person, and then, parenthetically, don't forget about this guy. He paid for it. So this is a similar architecture in some ways. In this case-- wait, I thought I was on the wrong one. So this is exactly the same piece of hardware. All right, so this is going to take much less time, because everything I told you before is completely true here, except for one thing. We're not going to move it one line at a time and take every measurement. We're going to take fewer measurements. How many fewer? Eh, it's up to you. So in this case, we know that we don't have enough information to fully reconstruct the full data cube. So we're going to use some clever algorithmic tricks-- and that's where this horrible scaling comes in-- to try to reconstruct that full data cube. All right, so you guys haven't talked about compressive sensing. I'm going to give you my one-minute version of compressive sensing. You have some linear algebra problem here: you've got some big 1D vector, and you're going to operate on it with some non-square matrix. INSTRUCTOR: Is that a little bit colored [INAUDIBLE]? MICHAEL STENNER: That would be handy. And I can't erase very well either, so apologies for that. Let me try this. This is about as complicated as it gets. Oh no, dear God. AUDIENCE: I hope that meant yes. MICHAEL STENNER: All right, so this is as complicated as my little diagram gets. We have some number of parameters that describe a scene, and we have some matrix that is performing an operation on that. And as a result, we have some number of measurements. If this matrix is not square-- in particular, if it is wider than it is tall-- then the number of measurements we have is smaller than the number of parameters that fully describe the scene. So what does this tell us? INSTRUCTOR: In a traditional Hadamard multiplexing setup, it's an exactly square matrix-- the number of unknowns and the number of knowns is equal. MICHAEL STENNER: Right. INSTRUCTOR: And now, we have fewer measurements than that. MICHAEL STENNER: So even if you don't know anything about the structure of this matrix, what do we know from this? And don't think too hard. This is intro linear algebra. AUDIENCE: It's under-constrained. MICHAEL STENNER: Yeah, it's under-constrained, or under-determined. AUDIENCE: Do you know the thing you're capturing, on the right? MICHAEL STENNER: What's that? AUDIENCE: Do you know the thing you're capturing? MICHAEL STENNER: We do not know it. That's why we're doing this. INSTRUCTOR: Right, it's the unknown. MICHAEL STENNER: Yeah, so this is the scene, this is the world. And this is the data that our camera, in this case, spits out. So our goal is, from these measurements, to figure out what this thing was. We do know this matrix-- it doesn't matter what its structure is at the moment, but in general, in these problems, you do know what this matrix is.
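That wide-matrix picture is easy to write down. The sketch below sets up the under-determined system y = Ax and contrasts the least-norm pseudoinverse estimate with a simple iterative soft-thresholding loop; this is only the textbook sparsity idea, not the reconstruction code of any of the systems in the survey, and all sizes here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_params = 40, 100                 # fewer measurements than unknowns
A = rng.standard_normal((n_meas, n_params)) / np.sqrt(n_meas)

x_true = np.zeros(n_params)                # sparse scene: a few bright "stars"
x_true[rng.choice(n_params, 5, replace=False)] = rng.uniform(1, 2, 5)
y = A @ x_true                             # the camera's measurements

x_pinv = np.linalg.pinv(A) @ y             # least-norm estimate: smeared out

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding: least squares plus an L1 sparsity penalty."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + (A.T @ (y - A @ x)) / L    # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrink
    return x

x_sparse = ista(A, y)   # approximately recovers x_true on its support,
                        # far better than the smeared pseudoinverse estimate
```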
And you have your measurement, but you're trying to get back at this guy. All right, so this is an under-determined problem. And there are many algorithmic approaches to estimating this. Anybody who's ever used a Moore-Penrose pseudoinverse: you can try to invert this thing, and that will give you an estimate of this. In general, it won't be that good; it depends on the details. But there's this relatively new discipline called compressive sensing, which is devoted to finding better ways to do this-- and that's as much detail as I'm really going to go into. In general, they assume you have control over the nature of this matrix. But the idea is that you're going to collect far fewer measurements than the number of things you're trying to reconstruct. And the basic way they do that is by using algorithms that assume something about the object. And the thing they generally assume is sparsity in some domain. So for example, if you're looking at stars at night, you can assume that, compared to the number of pixels, we don't have that many stars. Most of the pixels are black; only a few of them are going to be white. So if you get lots of blurry stuff, you can generally assume, in a very simple case, that in the center of each blur you've got a single star. In more general imaging applications, you can assume that the object is sparse, say, in the wavelet domain. We don't have white noise; we've got nice, round faces, and eyes, and hair, and that sort of thing. And if wavelets didn't work so well, then we wouldn't use them for JPEG 2000 and other things like that. So if you assume the thing is sparse in some domain, then you can generally do quite a lot better in terms of reconstructing this guy. So that's the basic idea. You design this matrix well, and then you reconstruct with these clever algorithms, assuming sparsity, to get the original object. Any questions on that really, really fast intro to compressive sensing? So that's where this lousy scaling comes from. Unfortunately, the algorithms typically used for compressive sensing are not especially fast; they scale badly with the number of points. So what this guy does is, using the same hardware we were just talking about, take a compressive measurement, where we simply don't scan as much as we might like to. The signals get mixed together in a way that we cannot uniquely unscramble, but using these approaches, assuming sparsity and whatnot, we can do a pretty decent job. Again, this is a good example where I'm not talking about image quality, because there are problems with it, and there's no easy way to characterize and compare it. OK, let's move on. So this is a similar one, out of the same group. I'm not going to dwell on this too much. The architecture will feel familiar. In this case-- let me make sure I get it straight-- we go through a standard spectrometer, except now we have our code here, and then we go through a completely identical grating and lens configuration to basically undo the wavelength smearing that we had before. INSTRUCTOR: Optically. MICHAEL STENNER: Optically. For example, if we just removed this coded aperture and left nothing in that space, then what we would get at the end would just be a normal picture. That is, this grating separates the wavelengths, and this grating puts them back together again. This would do absolutely nothing but take a nice, normal grayscale picture.
What changes things up is that we have this coded aperture in there, and it's in an interesting place: it's where the spatial and spectral information has been mixed-- that is, some wavelengths have been shifted more than others. So things get all scrambled up, and then put back together. So what happens is-- and again, maybe we can kill those fluorescents, thank you-- this is a little bit hard to see, but you have what looks like a relatively normal picture here, except that at different pixels, different codes have been applied to the spectrum. So you can again decode on the back end to reconstruct the image. This is also a compressive sensing problem. It's half photon-efficient, again, which is going to sound familiar if you hear more about this coded aperture stuff, because roughly half of these openings are closed. And it's compressive, in that you take a single spatial shot, and that's got all of the information you're interested in. So we can compare these two a little bit in terms of how they work. If we look at the same scene for both of these-- and this is intended to be three colors, some red, some green, and some blue, with a little bit of overlap in between-- in the single disperser, the scene gets modulated by the mask before any shifting; in the dual disperser, you shift first, then modulate, and then put things back together. So this is just giving you a feel for how things are mixed together. Going to the next one-- some of the implications of that: if you have three white point sources, in the dual disperser, because you split them up first and then modulate, you might lose some of the color bands associated with a single point source. So the red and green for this point source are just gone. But the odds are that when you put things back together, all three of those point sources will be represented. For the single disperser, on the other hand, because the modulation happens first, in the image domain, if a point source happens to fall on a closed point in the mask, it's just gone. Tough luck. The good news is, you retain better spectral information about the locations you do see. So it's really just a trade-off in terms of what information is higher priority to you, and what works better for your application. And the last example, same basic thing: here you lose spatial structure, but you get good spectral information; here, less spectral information, but you still have a better idea of what the spatial structure looks like. All right, what's next? OK, great. So here's where the tomography comes in. This is another architecture that works in a completely different way. The way this guy works is: we have some sort of lens-- light comes through our imaging system, we collimate the light, and then go through a diffraction grating. Now, this is a specially created diffraction grating that scatters in many different two-dimensional directions. AUDIENCE: 16 [? bit? ?] MICHAEL STENNER: Yeah, so yes and no. It's hard to say, exactly. INSTRUCTOR: 3, 4, 5-- MICHAEL STENNER: I think it's six directions, but then you get combinations. So this one here is a combination of this and that. And this is two orders of this one. So these six around here are the first orders of diffraction. INSTRUCTOR: Show them the same effect-- MICHAEL STENNER: OK, yeah.
INSTRUCTOR: So think of one of these as this guy here. If you shine light on this, you'll get a spread. And as you rotate this guy, these other orders will show. So [INAUDIBLE] orientation this one. But imagine finer gratings that will smear even more. MICHAEL STENNER: Actually, what I think this is, physically-- yeah, I'll pass this around in just a moment-- if you took three of these and rotated them each 120 degrees from each other, this is basically what you would get. So this is the scattering off of one of them. Go for it. Oh, hey, good-- I can do better than that. Look here. INSTRUCTOR: Now we just have monochromatic light. MICHAEL STENNER: Yeah, that's true. But you can get the idea of multiple orders here. So one of these, that one, is my zero-order beam through this diffraction grating. INSTRUCTOR: The little one here? MICHAEL STENNER: And each of them, going further-- INSTRUCTOR: 1, 2, 3, minus 1, minus 2, minus 3. MICHAEL STENNER: Right. So what we're looking at here: if we just had one of these gratings, we'd have 0, 1, 2-- and the 1 and 2 are just merging into each other; it looks like one long one, but it's actually two. So 0, 1, 2, negative 1, negative 2. So this is one and two from a grating at this kind of angle. This is almost certainly one from this, and one from a grating that's at this angle. So you get these multiple gratings all on the same material, and you get this scattering in different angles. We can pass that around if people care to look at it again. INSTRUCTOR: And again, if you just look at the world through this, and you have zero [? tech, ?] you'll see all these copies of [INAUDIBLE]. MICHAEL STENNER: That's right. All right, now here's where the magic happens. What good does this do us? Think about what it means to smear these wavelengths, and then let that light fall on a camera, on a sensor. We don't have just a single unidirectional beam here; we have an image. We have a scene that's being propagated through this thing. And if we imagine the scene has multiple colors, multiple wavelengths-- as it must, if we're interested in it-- then what's going to happen is, say, a blue-wavelength part of the scene and a red-wavelength part of the scene-- which started out on top of each other-- are going to get shifted and added together on the sensor. So if we imagine-- let me do this-- OK, we can maybe turn the lights back on for a moment. Imagine this is the red layer, and now I'm going to take the blue layer. These two things are going to be shifted with respect to each other and added together. What's another way to think about that? I'm going to draw my data cube here, which I've been talking about-- and I love to draw-- here we go, OK. If this is x, y, and lambda: what does a normal monochrome camera do? It gives us a value. So, representing the scene here, a normal monochrome camera gives us the integral vertically through lambda of that scene. It takes all of the different values at every different lambda, adds them all together, and gives us the total amount of power over all wavelengths in the scene. What our shift now does is give us the integral along some line, at some non-vertical angle. INSTRUCTOR: Which is the same concept as the light field for assignment two. You've seen that [INAUDIBLE].
MICHAEL STENNER: So if we measure this out here, for example, what we're getting is a full image made of many line integrals at some angle through this data cube. Right? And if we measure this one way out here on the edge, where things have been shifted a lot in the other direction, then we get some very steep line integrals in the other direction. OK, now, if this is starting to feel like tomography, it should, because that's effectively exactly what we're doing here. We are taking many line integrals through this data cube, at many different angles, and we can use standard tomographic techniques to reconstruct it. INSTRUCTOR: Let's just make sure everybody's with us on that one. You see the analogy between X-ray tomography and what we have done here? MICHAEL STENNER: The wacky thing is that this is no longer a head, or some physical, three-dimensional object. It's no longer x, y, and z that are the three dimensions in this case. INSTRUCTOR: It's x, y, and lambda. MICHAEL STENNER: It's x, y, and lambda. And our path integrals are no longer of density; they're the amount of energy represented by each x, y, lambda voxel. Mathematically, it's completely identical. We're just taking path integrals through it at a bunch of different angles. If we get enough of them, we can reconstruct that entire data cube. INSTRUCTOR: And if you have, say-- I forget how many it was-- 25 here, roughly? Let's say you have 25 such arrows here. That corresponds to what in the X-ray setup? AUDIENCE: Angles? INSTRUCTOR: It corresponds to 25 angles-- as if you put the X-ray source at position 1, 2, 3, 4, 5, and so on, and you basically took those 25 projections. That's exactly what it is. Which also means, in terms of resolution, what's going on here? In terms of your sensor resolution? You're losing resolution by at least a factor of 25-- usually much more, because you cannot pack these guys very tightly. MICHAEL STENNER: Right, yeah. So that's one of the problems with this architecture. This square box is the sensor. And you can get some 12, 14 megapixel sensor-- throw a lot of pixels at this thing-- but you're wasting a lot of them. No light's ever falling on them. And that's because you've got to make sure that these projections don't overlap with each other. That's a real problem. So as a result, you end up with dead space in between them. INSTRUCTOR: But it's a very clever scheme. MICHAEL STENNER: It's a very clever scheme. INSTRUCTOR: When you think about how tomography [INAUDIBLE] is being used for hyperspectral imaging. MICHAEL STENNER: All right, go ahead. INSTRUCTOR: Let's stop here, because we are running over time. And let's see if there are any questions on that. And then, you're here next week, right? MICHAEL STENNER: I am, actually. Just do me a favor: just flip through to remind me what's left, and I'll see if I have any comments. I'm not going to talk too much about yours. It's just the summary table. [INTERPOSING VOICES] INSTRUCTOR: Are there any questions on the [INAUDIBLE]? The last one? [INTERPOSING VOICES] AUDIENCE: So the only one we didn't get to was Isis? MICHAEL STENNER: Yeah, and that one is very similar to the one we just saw, so that's not so interesting. Isis, and then the agile spectrum one, which Anka talked about a little bit-- well, a lot, actually, more than I will. I'll just say, these two aren't traditional hyperspectral or spectral imagers.
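The shift-and-sum picture maps directly to code: each diffraction order shifts wavelength layer k by an amount proportional to k and sums the layers, so each output pixel is a tilted line integral through the cube. A toy sketch with invented sizes; note that np.roll wraps at the borders, where a real system would pad instead.

```python
import numpy as np

def ctis_projection(cube, direction, step=1):
    """One diffraction order of a CTIS-style imager: wavelength layer k is
    shifted by k*step pixels along `direction`, then all layers are summed.
    Each output pixel is a line integral through the (y, x, lambda) cube."""
    ny, nx, nl = cube.shape
    out = np.zeros((ny, nx))
    dy, dx = direction
    for k in range(nl):
        out += np.roll(np.roll(cube[:, :, k], k * step * dy, axis=0),
                       k * step * dx, axis=1)
    return out

cube = np.random.rand(32, 32, 8)          # toy data cube, 8 wavelength bands
orders = [(0, 0), (0, 1), (1, 0), (1, 1), (0, -1), (-1, 0)]
projections = [ctis_projection(cube, d) for d in orders]
# (0, 0) is the undiffracted image: a straight sum over lambda. The other
# orders integrate along tilted lines -- exactly the tomography setup, so
# with enough of them the cube can be reconstructed tomographically.
```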
INSTRUCTOR: So the goal there is not measuring the traditional cube. MICHAEL STENNER: That's right. AUDIENCE: So are all of these imagers that your company developed? MICHAEL STENNER: Oh, no, this is a survey of the academic literature. We did-- Roark did work on something that could be a light field architecture that could be used as a spectral imager. That didn't even get in here. AUDIENCE: I presented it a couple of weeks ago. MICHAEL STENNER: So you've seen it already. AUDIENCE: You probably don't remember. INSTRUCTOR: So your assignment number four is multispectral imaging. And we won't be building hardware, because that takes too long. But we'll give you a data set that has 32 channels, 10 by 12 each, for flowers, and people, and beer, and so on. And then we'll do something like what Anka was describing: rather than using the standard RGB color response, you will be allowed to mix and match those spectra and create interesting images. So this should be a relatively simple assignment, because you are simply adding up the stack of images and creating a three-channel image at the end. But hopefully, it'll get you intrigued about how different things look in different parts of the spectrum. Assignment number four is actually open-ended; this is just a suggestion. If you want to do something else, something that takes a little more than six hours to do, propose that to me as well-- we can support that as your fourth assignment. Or if you don't want to take on too much, then you can just do this one and focus your creative energy on the final project. So next week, we'll be talking more about scientific imaging-- microscopy, tomography. We did it in a slightly opposite sequence-- we wanted to do the tomography first-- but hopefully now you already have some idea: deconvolution, and so on. That's what we'll talk about next week. And we also have a guest speaker. And then we'll do a very brief overview for the exam on the 13th, which is open book, open laptop, open everything. So you don't have to study for it. You should really be focusing on your final project.
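For reference, collapsing such a multispectral cube to a three-channel image is just a weighted sum over bands. A sketch with invented Gaussian response curves and invented dimensions standing in for the class data set; any curves, including the playful non-RGB mixes the assignment suggests, can be swapped in.

```python
import numpy as np

def render_rgb(cube, wavelengths):
    """Weight each spectral band by a response curve and sum over bands.
    The Gaussian R, G, B responses here are made up for illustration."""
    def gauss(center, width=30.0):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    responses = np.stack([gauss(610), gauss(540), gauss(470)])  # R, G, B
    rgb = np.tensordot(cube, responses, axes=([2], [1]))        # sum over bands
    return rgb / rgb.max()                                      # crude normalize

cube = np.random.rand(120, 160, 32)        # stand-in for the class data set
wavelengths = np.linspace(400, 700, 32)    # assumed band centers, in nm
img = render_rgb(cube, wavelengths)        # shape (120, 160, 3), displayable
```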
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_11_Coded_imaging.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: This is kind of a summary of a lot of the things we talked about. If you remember, in the beginning we said you can just start with a pit, and then it kind of develops into the lens. But from even here, you can go down two different paths: either compound eyes, where each sensor, or set of sensors, has its own optics, like a sort of straw; or superposition eyes, where the same pixel might get light from multiple lenses, like here, right? So this is apposition, and this is superposition. And that concept of apposition or superposition applies to all three types: shadow-, refraction-, or reflection-based techniques. So we saw this last time, and we'll see how-- we already have some projects that are inspired by biological vision. You know, Matt is trying the chicken. And I think it's going to be-- [LAUGHTER] It is going to be very popular. And I believe Santiago-- where's Santiago? Oh, yeah, his triangle-- the piston, kind of-- so some really great ideas. So I'm glad a lot of these concepts are coming together in the final projects. So today, we'll talk about coded imaging. And the concept here is very simple, OK? I'll start with this one: you have a taxi zipping by very fast, and you want to take a photo in such a way that you can recover the sharp detail afterwards, in software. So it's a form of co-design between how you capture the image and how you process the image. In a typical film camera, or even a digital camera, you take the picture, and that's basically the end of the story. Here, you're trying to do something clever about how the picture is taken. Of course, there are other options for capturing this. You can take a really short-exposure photo, but that's going to be very dark. If you use a high ISO, you can recover some information, but it's still quite dark. Or you can just take a long-exposure photo by keeping the shutter open, but then you will get a blurry photo-- well exposed, but with a lot of the high-frequency details lost. And if you then try to apply some deblurring, you'll get a result that looks like this, which is kind of reasonable-- you can see the number one on this, I guess, Thomas train-- but you get a lot of banding artifacts and a lot of repetition and noise here. AUDIENCE: So what are those lines? RAMESH RASKAR: These lines? When you try to recover this information, you start getting these banding artifacts. And we'll see in the next slide why that happens. So what's going on here is that if you have a blurred photo, you can represent it as a sharp photo convolved with some kind of convolution filter, OK? If you look at-- where's my laser pointer? If you look at the tip of the letter one here, it's been blurred by a certain number of pixels in the horizontal direction. And if you keep the shutter open even longer, it'll blur correspondingly longer. So you basically have a 1D convolution that's converting this image into this image. And of course, the goal is: this is the photo that you capture, and you would like to invert it and get back this photo.
So one would say, OK, this convolved with this gives me that-- so just deconvolve using the same filter, and maybe you'll get that back. That doesn't work, because of something called division by zero. And the way to think about that is in the Fourier domain, because convolution in the image domain-- our primal domain-- is multiplication in the Fourier domain; that's just the standard Fourier transform. So if you take the Fourier transform of this and multiply it by the Fourier transform of this, you will get the Fourier transform of this, OK? So let's say we take this photo, find its Fourier transform, and multiply that by the Fourier transform of a box function, which is a sinc. What that means is: I'm going to take the lowest frequency and multiply it by this value, the next frequency by this value, the next frequency by this value, and so on. We're just going to multiply each of the frequencies in the image by the amplitude of the Fourier transform of this. And you can already see that lower frequencies will be preserved, but higher frequencies will be highly attenuated. But there's also something strange happening. Even some of the lower frequencies are actually being set to zero, which means that in this photo, those frequencies are missing altogether. They have been suppressed. So it's not a traditional low-pass filter. It's a low-pass filter where some of the even lower frequencies are also being nullified-- which means that if I try to recover this photo from this photo, there is no chance, because I have already attenuated and lost those frequencies. The moment you take the photo, the damage is done, and there's nothing you can do to recover those frequencies. Because in the Fourier domain, all you have to do is take the Fourier transform of this and divide by the Fourier transform of this, which is this, and it will give you the [INAUDIBLE], OK? But the Fourier transform has some zeros, so you cannot divide those frequencies by zero and recover an image. So the culprit here is really this box function, which is equivalent to pressing the shutter button, opening the shutter, keeping it open for the exposure duration, and closing it. That's the most natural thing to do, but apparently it's not the most effective. So what if you change that? Instead of keeping the shutter open for the entire duration, you open and close it in a carefully chosen binary sequence. So for some time the shutter is open, then it's closed; it's open for some time, and again it's closed; here, it's closed for quite some time, open for a short time; and so on. At the end, you still get just one photo. But now something magical has happened, because first of all, if you look at this number one, you'll see that it's not the same as before-- it seems to have these replicas. And the reason why this is better is that if you take the Fourier transform of this, it's actually flat, which means it's preserving all the frequencies in the image. So we can be sure that, in this photo, all the spatial frequencies-- low frequencies, high frequencies-- are preserved. Of course, they're attenuated. It's not 1.0; it's reduced, maybe to 0.1 or so. So they're all attenuated, but there is still some hope of recovering the sharp photo from this, because in the denominator, we will not have zeros.
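The difference between the two shutters is visible directly in their Fourier magnitudes. A sketch comparing a length-52 box with a random binary stand-in: the actual flutter-shutter sequence was chosen by search to keep the minimum of its Fourier magnitude high, which a random draw only approximates.

```python
import numpy as np

n = 52                                       # chop-sequence length from the talk
box = np.ones(n)                             # conventional shutter: open the whole time

rng = np.random.default_rng(1)
code = rng.integers(0, 2, n).astype(float)   # stand-in binary code (see note above)

pad = 512                                    # zero-pad to sample the spectra finely
F_box = np.abs(np.fft.rfft(box, pad))
F_code = np.abs(np.fft.rfft(code, pad))

# The box spectrum is a sinc: it dips to (near) zero at regular frequencies,
# which are exactly the frequencies destroyed in the blurred photo. A good
# broadband code keeps every frequency comfortably above zero.
print("box min:", F_box.min(), " code min:", F_code.min())
```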
So of course, if you try to implement this mechanically, where you open the shutter and then mechanically try to close the shutter, that will be problematic. So what we did was we used an LCD-- actually, a ferroelectric LCD-- that becomes opaque and transparent. In the old virtual-reality screens, or even some of the games, you have these eyeglasses that flicker at 60 hertz, time-sequentially, so that each eye sees its own view-- the left eye versus the right-- through the same glasses. And a traditional LCD, unfortunately, doesn't have a very high contrast. And Simon is discovering that one more time. But the ferroelectric LCDs have a contrast of 1,000 to 1. So the amount of light that passes through when it's opaque, compared to the amount that passes through when it's transparent-- the ratio is 1 to 1,000. So when you turn this ferroelectric LCD off, it's really, really opaque. Yeah. AUDIENCE: Couldn't you just do a high-speed video, or just [INAUDIBLE] taking video and add up your frames? RAMESH RASKAR: So the question was, why not just capture high-speed video, take all these frames, and then put them together? The problem is each of the frames will be extremely dark. So you are basically adding up a lot of noise. Every frame is dominated by noise. AUDIENCE: Yeah, yeah. RAMESH RASKAR: So when the shutter is transparent, light goes through. When the shutter is opaque, light doesn't go through. And that's your 1-0-1-0 binary sequence. So, again, the idea is very simple. Instead of keeping the shutter open for the entire duration and getting a well-exposed photo, the shutter is open for only half of the time. AUDIENCE: There is an issue there. The support of the Fourier-domain representation of that function you describe there is infinite, right? So you actually truncate it in order to-- RAMESH RASKAR: It's not infinite, because you still have some width. AUDIENCE: Right, but you have infinitely high frequencies there from the sharp transitions, right? RAMESH RASKAR: Yeah, you can think-- I mean, you can think of this one going to infinity. But there's hardly any energy left. So although it goes to infinity, there is not much energy left. AUDIENCE: But when you try to invert the process then, that's why you're still not getting the perfect images-- RAMESH RASKAR: In this case. AUDIENCE: --in this case as well. You still lost some high frequencies, right? RAMESH RASKAR: So you haven't seen the results yet for this. AUDIENCE: You showed the text. RAMESH RASKAR: Yes. So this is what it looks like in this case. But it's a very controlled experiment in a laboratory. So you take the toy, and you move it in a very controlled way. And this is what you get in a traditional camera. And this is what you get with the flutter shutter. So these are real photos. And, yeah, you're right. I mean, you still get some noise. And actually, if you compare this with ground truth, you'll see that it's OK, but it's not perfect. AUDIENCE: Yeah, so let's say that you took the 0s from the [INAUDIBLE], right? And you just replaced them by something that is pretty close to 0, but not 0. And if you invert the process-- RAMESH RASKAR: From here, this is what you get. AUDIENCE: That's-- OK-- RAMESH RASKAR: There's a deblurring of this. AUDIENCE: OK. RAMESH RASKAR: Yeah, and the loss of those frequencies also shows up as these artifacts at regular intervals. So, again, this one-- this doesn't go to infinity all the way. [INAUDIBLE] It cuts off, corresponding to the width.
The width of each chop was very short, so the cutoff is very far away, OK? Yeah? AUDIENCE: The filter is dependent on distance? RAMESH RASKAR: The filter depends on multiple factors. So if your toy or your taxi is moving really slowly, then there is no need to-- in this case, the sequence was about 51-- actually, 52 bits long. So let's say your exposure time is about 104 milliseconds. It's open for two milliseconds. Here, it's open for four milliseconds, off for two milliseconds, four milliseconds, two. Maybe it's off for eight milliseconds, two, and so on. AUDIENCE: Yes, but-- RAMESH RASKAR: But with a vector length of 52. AUDIENCE: This filter is in time? RAMESH RASKAR: In time. AUDIENCE: And you're thinking about the filter in space? RAMESH RASKAR: It corresponds automatically to a filter in space. AUDIENCE: Yeah, so [INAUDIBLE] it's dependent on distance, if you-- RAMESH RASKAR: Yeah, the speed, you mean. AUDIENCE: --of faraway objects. RAMESH RASKAR: Yes. So your actual blur in the image may not be exactly 52 pixels. It might be 10 pixels. It could be 100 pixels. So your 52 vector is going to stretch or shrink based on how fast the object is moving. And you're saying that it also depends on how far the object is in space, because closer objects move faster in the image. But you mostly have to think about image-space motion, because you divide the speed in the real world by the distance to normalize. So you only have to worry about the image-space motion. Yeah. Go ahead. AUDIENCE: Could you get a similar effect if, instead of a coded shutter, you had a coded flash? RAMESH RASKAR: Yeah, exactly. So if you're in a dark room, then you can just strobe the light, rather than opening and closing the shutter. AUDIENCE: I think we might have a mobile demo of that scene. RAMESH RASKAR: [LAUGHS] Well, I don't know how fast you can-- AUDIENCE: Well, the problem is you can't [INAUDIBLE]. RAMESH RASKAR: Yeah. So what are some-- let's look at some pictures, actually. So here is a demo. I think I've shown it to you before. This is on Broadway. Let's see if you can figure out the car make and the license plate number. What's the license plate number? AUDIENCE: 458. [INTERPOSING VOICES] AUDIENCE: 468. AUDIENCE: Something. RAMESH RASKAR: And the company? AUDIENCE: [? One more time. ?] RAMESH RASKAR: Yeah. So you get a reasonable result. But going back, what are the limitations of this method? Yes. AUDIENCE: You need to know the motion or the direction of the motion. RAMESH RASKAR: No, you need to know the point spread function-- how the blur is created. If the car is moving from left to right versus right to left, you need to know that, because the way your point spread function is imposed on the scene will be different. AUDIENCE: You still lose half the lighting. RAMESH RASKAR: You lose half the light-- very important, right? So this image is about half as bright as this one. What else? AUDIENCE: I guess there should be little to no acceleration-- all of it should be moving the same-- RAMESH RASKAR: Exactly. So whatever is moving has to move at a constant speed. If, during the 100 milliseconds, it picks up speed, then your assumption that the 52-length vector will map to some stretched or shrunk version of itself is not valid. Some parts will go faster and slower. What else? AUDIENCE: [INAUDIBLE] RAMESH RASKAR: Sorry? AUDIENCE: If the object is moving in space, [INAUDIBLE] distance and then [INAUDIBLE].
RAMESH RASKAR: Yeah, so if it's moving in perspective, for example, it's not so bad, because you can rotate the image. And again, it'll become-- so that's not acceleration. It's acceleration in the measurement, but in the real world, it's still constant speed. So you can play with those tricks. You can either go to object space, or you can come back to image space, to make sure there is no acceleration. It's all linear. AUDIENCE: So does this technique still work if you're moving in multiple directions at once over the duration? RAMESH RASKAR: So if you have multiple cars, for example, and they're all independent, then it's fine, because I can say this car is going this way, and that car is going this way. As long as each is moving in a straight line at a constant speed, you're OK. But if the two cars overlap, what happens? Our model fails again. If two cars are partially overlapping during the exposure, it's possible, but it's more challenging, because you don't know exactly how fast the two cars are moving. Yeah. AUDIENCE: Sorry, don't we need to know how fast the car is moving when you're setting up your shutter? RAMESH RASKAR: No-- OK, so when you're setting up your shutter, if the car is moving really slowly, and you don't expect it to blur by 52 pixels but only by 10 pixels, then using a 52 sequence is overkill. Maybe you should use a new sequence that's only about 10 or 11 long, right? So it's just like-- AUDIENCE: OK, but that's just so you can get more light. RAMESH RASKAR: No, it's so that it's optimal for that setting, right? It's like setting an exposure time. When I take a picture, the camera automatically decides what the exposure time should be. Similarly, you should look at how fast things are moving, maybe with an ultrasound Doppler or whatever. If it says things are not moving at all, then I should not use the flutter shutter at all. If they're moving very slowly, maybe I should use a 10-long sequence. If things are moving a lot, maybe I should use a 52 sequence. And to answer your other question, what you do need to know, when we solve the system, is how long the blur is, which is true in other cases as well. You need to know how much the blur is. Another major disadvantage: let's say I take this bottle, and I just rotate it and motion blur that. It will not work. For any point in the front that you're looking at, it'll work. But for a point that was in the back-- out of the 52-chop sequence, maybe for the first 10 chops it was occluded, and for the remaining 42 it was seen. You have to know exactly when that point became visible during that 52-chop window. So in general, the technique works well when things are moving naturally. But if somebody wants to do this kind of experiment, or move things behind an occluder and out again, those are very challenging scenarios. AUDIENCE: Can you combine both horizontal and vertical [INAUDIBLE] masks? RAMESH RASKAR: Vertical, horizontal is fine. It doesn't matter. It could be moving vertically. Basically, your point spread function-- the blur function-- will be vertical rather than horizontal. AUDIENCE: Yeah, no, but if you have a combined motion, vertical and horizontal, do you have to encode this with a mask? RAMESH RASKAR: No, no, no. So let's say the two cars-- one is moving-- AUDIENCE: [INAUDIBLE] diagonally, right? RAMESH RASKAR: That's fine. As long as it's in any one direction, it's OK.
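Stepping back to the point above about the code stretching or shrinking with object speed, here is a minimal sketch, assuming NumPy, of how a temporal code maps to an image-space blur kernel of whatever length the motion produces. The 13-bit code is a made-up stand-in:

    import numpy as np

    code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1])  # illustrative code

    def psf_from_code(code, blur_pixels):
        """Stretch/shrink the temporal code to the observed image-space blur length."""
        t = np.linspace(0, 1, blur_pixels, endpoint=False)
        idx = (t * len(code)).astype(int)     # resample the code at pixel positions
        k = code[idx].astype(float)
        return k / k.sum()                    # normalize so total brightness is kept

    print(psf_from_code(code, 10))   # slow object: short blur kernel
    print(psf_from_code(code, 40))   # fast object: long blur kernel

This is why the deblurring step needs to know the blur length (and direction) but not the absolute object speed: the same code, resampled, gives the kernel.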
So let me draw it. AUDIENCE: But if you take a sharp turn, you pass through, or-- RAMESH RASKAR: Yeah, exactly. So you have to assume that-- the basic assumption is that if you take any point in the scene, it's moving in a straight line, let's say. And if you have an object, and every point of that object moves in a straight line, OK. It doesn't matter which direction and what speed. AUDIENCE: So this doesn't help at all with any image stabilization if somebody's holding the camera. RAMESH RASKAR: It helps there as well. So let's say you have camera shake. I take a picture of an LED, and it creates some curve like that because of the shake. If I know that curve-- maybe I can add a gyro-- then I can, again, figure that out. So the problem here really is that the point spread function, or the blur function, is very critical. And this is what we want to study for about half of the class. And the concept is very, very interesting, because light is linear. So, eventually, the whole system is linear. What happens to one point happens to the rest of the object. So if I have a car that's moving, and I tell you exactly how one point of the car behaves in the image, I can tell you automatically how the rest of the car behaves in the image, because all of it is going to have the same point spread function. So for experiments, you can just put an LED on the car and see how that LED moves. And that tells you everything. And I'm sure you use this trick in other scenarios, where you look at a very small impulse and see what the response is-- it's an impulse response. For those of you in audio, you might clap and see how the room [INAUDIBLE]. And when you're trying to find the speed of a car, [INAUDIBLE], you send a very small impulse, and it comes back. That's the point spread function for your time of flight. So it's the same concept here. You just put an impulse in the world, take a picture, and see how it spreads. And this whole field of coded imaging is basically engineering of the point spread function. So if you take an ordinary camera, a film camera, and take a picture, you have no control over how light is spreading-- if something is moving, or out of focus, or has a different color spectrum. And so coded imaging basically means you want to control how something is spreading on the image. So we're going to engineer the behavior of the camera. So in this particular case, a point that was moving created a blur like this. And by engineering the temporal point spread function, it stops looking like that. It's going to look like this instead, all right? And it just turns out that this one is easier to deal with than this one. So that's the basic concept: engineering, or actively changing, the point spread function. So this is very counterintuitive, because you would say, let me just build the best lens and the best exposure time, so that it kind of mimics the human eye. And once I have that, I have the best possible picture. But when it comes to actually extracting information from that scene, it turns out you need to strategically modify how the camera works so that all the information is somehow preserved. Now the problem is, even after you are very careful and you have captured that image, it's still going to be somewhat garbled. It's going to be mixed in. But that's where the co-design comes in.
So once you have this image, there is some hope-- there is some computational technique-- that will allow you to go from here to here. And this is what kind of separates an animal eye from a computational eye, because in most scenarios, an animal eye is just going to take the picture and try to make the best sense out of it. But a computational eye is going to apply a lot of processing to this and be able to recover that. As far as I know, animals don't have deconvolution circuitry or deep-learning circuitry. I can look at a blurry image and kind of figure it out. I mean, this was a challenge for you, right? Right. So we have pretty sophisticated eyes, but we're still not able to deblur what this is. If you have some prior knowledge of what the Volkswagen logo looks like, maybe you can say, OK, maybe that was this. But on the other hand, if I give you this, you're immediately willing to believe that this photo is a blurred version of this photo. And the way to think about that is: when you go from here to here, information is lost. When you go from here to here, we're trying to recover some information. So going from a sharp photo to a blurred photo is easy for us, because we just have to lose some information, or imagine what it would look like if some of the information were removed from this image. So the goal of coded imaging is to come up with clever mechanisms so that we can capture light not just by converting photons into electrons, but by actually modulating those photons-- blocking them, or attenuating them, or bending them, and so on. So that's why a computational camera is doing the computation not just in silicon but also in optics. OK, so that was what we can do to preserve information in the case of motion blur, right? And the circuit is very, very simple. You just take the hot shoe of the flash, and when you release the shutter, it triggers the circuit. And then you just cycle through the code that you care about. That was for motion blur. What can we do for defocus blur? We, again, want to engineer the point spread function. AUDIENCE: Spatial coding. RAMESH RASKAR: Spatial coding. How would you apply spatial coding? AUDIENCE: Coded aperture? RAMESH RASKAR: Coded aperture. So this is coded exposure; this is coded aperture-- very easy. And all you're going to do is put some kind of a code in the aperture of the lens. And this is how it actually started-- in scientific imaging, especially in astronomy, coded apertures are very well known. And for those of you who attended Professor Han's lecture on Wednesday, that's what he talked about: coded apertures. So I've been following this for a long, long time. And I thought, it must be useful for something in photography. And so I said, OK, let's try to put a coded aperture in the camera and see if we can deal with focus and so on. And that was back in 2004. And we tried it for six months, and it just didn't work. It was really frustrating-- really, really frustrating. And then one fine day, I said, OK, if you can do this in space, I'm sure we can do this in time as well. And so we did this, and it worked right away, within a couple of weeks. So we went ahead and built this whole system. And that was with just a graph paper. And then we said, OK, let's come back and think about this. What's going on? Why don't we get good results? So it took almost two years to realize that, to put this coded aperture in a camera, there are only a few places where you can put it to get good results.
So out of that came this particular experiment. So I have a colleague, Jim Kobler, at MGH. And one day, he showed me-- this is his lens, by the way. He was telling me the story that he was fishing with his camera, and some creature came out of the water, some kind of an alligator. And he lost his balance, and the boat flipped upside down. Somehow, he managed to get back in. And the alligator went away. But it completely damaged the camera that was with him, and it just wouldn't work. So he just took out his lens, which is a standard Canon lens. And he said, let's open it all the way. So he ripped open all the damage. It had all the mud in it and so on. And then he just showed me this thing as is. And it was very fascinating, because this is a standard film lens, which, of course, can also be used with a digital camera. And this is a fixed focal length lens-- a 100-millimeter focal length lens. And when you focus with this, it works in very interesting ways. First of all, it doesn't have a single lens element. It has multiple lens elements. So when you change the focus, it has to do some really interesting things. It has to deal with chromatic aberration and geometric aberrations, such as radial distortion, and so on. So it has to move all these lenses with corresponding ratios, OK? So I'll pass this around, and you'll see that there are these notches on this lens that are arranged in a parabolic fashion. So when I rotate this, the outermost lens element and the innermost lens element stay in the same place. But all the inner lenses move with some particular ratio. It's amazing the way it's structured, right? So the multiple lenses are moving every time I turn this. And they're moving because they're guided through these grooves. But there's one particular location that does not change in this lens, and that's the aperture. So we said, let's look at this aperture. And back then, it was still a reasonable-looking lens. So we went to our lab, and we cut it open all the way. And you can start putting new apertures in this plane. So you can cut open that particular part and start putting in this aperture. Now it turns out the center of projection of this lens is very carefully designed by camera makers to be in the same plane where you put your aperture. So when you change your f-stop, decrease it and increase it, it's all happening at the center of projection. Everybody knows center of projection? So when you think about a typical camera, you make this very simplistic assumption: there is a pinhole, and there's a sensor. And when you put in a lens, we assume that the center of the lens is the center of projection-- that all rays can be assumed to go through that point. When you have a bunch of lenses, like over here, where is the center of projection? Is it here, or here, or here, or here? And of course, you can take a collection of these lenses and create one single center of projection-- for normal cameras. For professional lenses, that's not strictly true. But for normal cameras, you have the center of projection. But again, conceptually, assume that all the rays are going through that point, because you can replace this whole thing by one single lens in a [INAUDIBLE]. So finding that plane is actually a tricky problem. And in retrospect, it's very easy: if the lens makers are putting everything there, we should put a coded aperture also in the same plane. So initially we said, oh, let's put it in the front. Let's put it in the back. We tried all those things.
But that creates a blur that's not constant all over the image. And it has a lot of issues. But placing it there, it turns out, gives you the same blur everywhere. So what exactly happens if you take a picture of a point light, and everything is in sharp focus? Nothing changes, OK? If you have just an open aperture and take an out-of-focus picture of a point light, it looks like a disc. Now what's going to happen when you put in this code, like the 7 by 7 mask, and take an out-of-focus picture? What will happen to the LED? AUDIENCE: It's going to look like the code. RAMESH RASKAR: It's going to look like the code, right? And why is that happening? So let's think about [INAUDIBLE] focus. So we have our lens, right? And we have a point light. And we will put some code here. When it's in sharp focus, it doesn't really matter what the code is. Basically, you're blocking about half the light, so the photo will be half as bright. But other than that, it looks like an ordinary photo. And that's why, if you have some dust on your lens and so on, usually it doesn't matter-- unless you have the dust all the way on your front lens, because the center of projection's over here. So if the dust is over here, nothing will happen. The image will be slightly darker. But if the dust is all the way in the front, then you start seeing the dust. Anyway, so when it's in sharp focus, you just see the point. But let's say that you're out of focus here. What will you see? You will see the same exact [INAUDIBLE]. So this ray comes in. It's blocked. This ray goes in. It goes through. This ray comes in. It's blocked. This ray goes through, and so on. So basically, you'll see the same code. If you put the sensor all the way here, you'll see the whole code. If you start moving away, the code will shrink. And eventually, when you put it here, you get another code. That's exactly what's happening here. When it's out of focus, we just see the code, all right? By the way, this is the same idea behind another project, which is [INAUDIBLE]. The idea came around at the same time of how to make this happen. OK. AUDIENCE: Wouldn't the image with this code now still be blurred? Like, is that basically multiple apertures that you're seeing? RAMESH RASKAR: Yeah, so the photo here that you see is still blurred. It's just that it's blurred in a slightly different way-- strange. Here, it's blurred with that shape. Every point is blurred with that point spread function. And you cannot see anything on the resolution chart. But here, if I just bring this one up-- no, that won't work, because I'm in a different mode. If I look at this picture, you will see that-- so this is a sharp photo. This is blurred with a disc. And this is blurred with that function. You can already see that it seems to preserve slightly more information. But still, with your naked eye, you'll not be able to figure out what the underlying patterns are. But it turns out, after deblurring, you can. All right, so then you can do these simple tricks, where the person you're interested in is out of focus, but then you can refocus digitally. So this is the input photo, and these are the sharp photos, all right? So it's the same exact trick: in the case of motion, we created a point spread function that was engineered in one dimension. And here, we are engineering a point spread function that's two-dimensional. So here, we know that the Fourier transform of this 1D 52-length vector is broadband. It has energy at all the frequencies. What can we say about this?
Its Fourier transform-- what can we say? It's still 7 by 7. So its Fourier transform is also 7 by 7. When it's 52 long, its Fourier transform is 52 long. AUDIENCE: It's more distributed, instead of just all being near the center. RAMESH RASKAR: So in 1D, this is what we saw, right? Its Fourier transform is flat. So there are 52 entries here, and almost all of them are the same. Now we're saying, think about the problem in 2D. And what's the Fourier transform of this? So first, for this one, the Fourier transform-- as we see-- looks like this. And then if you take that in 2D-- so what about the code? I'll give you a hint. If I just take a square aperture, a traditional one, and take a Fourier transform, it will look-- the Fourier transform of this one looks something like this. [INAUDIBLE] So for the Fourier transform of this one-- if I take the cross-section here, it's going to look the same. Same thing here for a square aperture. And now you're saying, for this crossword-puzzle-shaped item, it should be easy. It's going to look just like this one. So the Fourier transform of the 7 by 7 code will have a peak in the middle. The true Fourier transform will have a peak in the middle, but the rest of the values will be constant. And that's the magic of a broadband code. So if we're placing a broadband code, certainly we have an opportunity to recover all the information. So it seems very, very long-winded, right? If all I wanted to do was create a photo from which I can deblur to get a sharp photo, why do I need to think about all this theory, right? And the reason is, when we think about the point spread function, it's just traditional signal processing. It's a convolution and so on, and it's much easier to think about convolution and deconvolution in the frequency domain than in the primal domain. And in communication theory, everything is [INAUDIBLE]. We think about carrier frequencies of radio stations in frequencies. We say, my FM channel is at 99 megahertz, 100 megahertz, and so on. And we think about guard bands and audio bands and everything of interest in the frequency domain. And that's because it's signal processing. It's the same thing that's going on here. Convolution, deconvolution-- much easier to think about in the frequency domain. Although all the analysis is in the frequency domain, at the end, the solution is very easy-- just flutter the shutter, or just put in a coded aperture. An extremely simple solution to achieve that. So those are all good things about the coded aperture. What are some bad things about the coded aperture? What are some disadvantages here? It's very similar to the [INAUDIBLE]. AUDIENCE: Half the light. RAMESH RASKAR: So half the light. Very good. And when you talk to people who build cameras, and you tell them that, they say, no, no, no. That's not allowed. You can't cut the light. Yes. AUDIENCE: Are the bokehs kind of ugly? RAMESH RASKAR: The bokehs are-- it depends on your-- I mean, for your average consumer, I don't know whether this matters. But you're right. If you're looking at something where we have bright lights in the scene at a distance and take an out-of-focus photo, they will all look like this. AUDIENCE: Or you could put hearts in it, or, like-- AUDIENCE: Right, yeah, I was thinking maybe-- AUDIENCE: I mean, that's totally possible. [LAUGHTER] RAMESH RASKAR: So an interesting art problem is how to create a mask that visually looks aesthetic but is mathematically also invertible. AUDIENCE: Yeah. RAMESH RASKAR: Are there disadvantages? Or challenges? Not really a disadvantage.
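A minimal 2D analogue, assuming NumPy, of the broadband claim above: the Fourier magnitude of an open square aperture has deep nulls, while a well-chosen 7 by 7 binary code stays bounded away from zero. The random pattern below is a stand-in, not the published coded-aperture mask:

    import numpy as np

    N = 7
    open_ap = np.ones((N, N))                          # open square aperture
    rng = np.random.default_rng(1)
    coded_ap = rng.integers(0, 2, (N, N)).astype(float)  # random 7x7 stand-in

    # Zero-pad to sample the 2D spectrum finely.
    F_open = np.abs(np.fft.fft2(open_ap, (128, 128)))
    F_coded = np.abs(np.fft.fft2(coded_ap, (128, 128)))

    # The open aperture's spectrum (a 2D sinc) dips to near zero; a good code
    # keeps the minimum larger. A random draw is only usually better.
    print("open aperture  min/max |FFT|:", F_open.min(), F_open.max())
    print("coded aperture min/max |FFT|:", F_coded.min(), F_coded.max())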
Remember, in the motion case, we had to know how much the motion was. What do we need to know here? AUDIENCE: We need to know how much the blur is. RAMESH RASKAR: How much the blur is. And what is that a function of? Anything in the plane of focus is sharp. When it's out of the plane of focus, it's blurred. But the size of the blur is dependent on what? AUDIENCE: Depth. RAMESH RASKAR: The depth. But not just depth-- depth from the plane of focus, right? So that's an extra parameter you would estimate somehow. Maybe you can use a rangefinder or something like that, or just software. There are methods you can employ. AUDIENCE: Can't you just try to assume something, like maximizing the contrast? RAMESH RASKAR: Yeah. AUDIENCE: Yeah. RAMESH RASKAR: You could do that. It doesn't work that well, but you're right. That would be another way to try this. AUDIENCE: You can just maximize the hard edges in the image. RAMESH RASKAR: Exactly. That's what you would do, like in a light field, when we did the refocusing. That's the trick we used. We said, OK, let me try to refocus. I don't care about the depth. When it comes into sharp focus-- my edges-- that must be the right depth. Unfortunately, it doesn't work out in this case. And we won't go into the detail, but the main reason is that, because it's a coded aperture, no matter where you refocus, it still looks like it has very high frequencies. So that makes it challenging. Yes. AUDIENCE: How did you come up with the pattern? RAMESH RASKAR: Oh, exactly. So you need to find this 7-by-7 pattern, or even, in the previous case, the 52 pattern. And you take a random sequence. You take a Fourier transform to see if it's flat. If it's not flat, you go to the next one. AUDIENCE: Oh, so this is brute force? There's not, like, a pretty mathematical formula for this? RAMESH RASKAR: So initially, that's what I did. I said, wow, it can't be that bad. 2 to the 52-- I mean, it's 52 elements long. And I know some of them. I only want to take the ones in which about half of them are-- AUDIENCE: 1s. RAMESH RASKAR: --1s and half of them are 0s. So it can't be that bad. So I wrote a MATLAB script. And I said, by the time I come in tomorrow morning, I'll find a really good code. And I came back the next morning. Nothing had happened. I waited all day. It was still running. And it never came out of that. So 2 to the 52 is pretty challenging. AUDIENCE: Yes. Where's your [? canu ?] cluster? We need it. RAMESH RASKAR: Yeah, so-- sorry? AUDIENCE: Where's your [? canu ?] cluster? We need it. RAMESH RASKAR: Exactly. But even if you use a cluster, it's still a pretty big number. So you can do some approximation. You can start with some code and do a gradient descent and so on. Yeah. AUDIENCE: Does the Hadamard code or anything-- is that applicable here? RAMESH RASKAR: Mhm. So actually, after we did these two projects, I attended Professor Han's lecture on computational imaging, which I highly recommend, by the way. It's terrific. And there are all these theories about how to create different codes for different applications. So Hadamard codes, which we learned about a few weeks ago, or so-called broadband codes-- they all have polynomial-time solutions and this and that. There are no good solutions for 2D. But for 1D, there are some really good solutions for coming up with that. And even for 2D, for certain dimensions-- they call it 1 mod 4 or 3 mod 4, because prime numbers can be 1 mod 4 or 3 mod 4. Basically, when you divide by 4, the remainder can be 1 or 3.
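A minimal sketch, assuming NumPy, of the sampled search just described: draw random half-open binary sequences and keep the one whose spectrum is flattest (largest minimum magnitude). Exhaustive search over 2^52 is hopeless, as the anecdote above makes clear, so this only samples; the published code came from a smarter search:

    import numpy as np

    def spectrum_min(code):
        """Flatness proxy: the smallest Fourier magnitude of the sequence."""
        return np.abs(np.fft.fft(code, 512)).min()

    rng = np.random.default_rng(2)
    best, best_score = None, -1.0
    for _ in range(20000):                       # sampled search, not exhaustive
        c = rng.integers(0, 2, 52).astype(float)
        if c.sum() != 26:                        # keep codes that are half open
            continue
        s = spectrum_min(c)
        if s > best_score:
            best, best_score = c, s

    print("best min-|FFT| found:", best_score)
    print("code:", best.astype(int))

A gradient-descent-style local search, as mentioned above, starts from one such code and flips bits while the flatness score improves.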
And there are certain sequences with beautiful mathematical properties that tell you which sequences have broadband properties and which do not. So it turns out you cannot-- there's a little bit of cheating going on here. You cannot really use the standard broadband code here either to get the best result. You can call them broadband because their behavior is broadband. The traditional codes are called MURA codes, M-U-R-A, Modified Uniformly Redundant Array. They were invented not very long ago, maybe 20, 30 years ago. And they're used in CDMA and many astronomical-imaging applications. And they have similar properties: if you take the Fourier transform, it's broadband. The problem is, in many of those applications, your convolution is actually circular. So you apply the filter, and when you go off the edge, you apply the filter to the beginning of the signal. This particular filter is actually not circular; it's linear. So when you start applying the filter at the end of the image, you don't go back to the front of the image, because, clearly, if I put an LED here, you get the out-of-focus code. If I put an LED here, you'll only get half of that code. The rest of the half is just blocked. It's not going to magically appear over here. So that's the difference between linear convolution and circular convolution. It turns out, for circular convolution, the math is very clean and beautiful and works out smoothly. For linear convolution, there is no good mechanism. So we came up with our own code, called the RAT code, R-A-T, which is named after the three co-authors. [INAUDIBLE] AUDIENCE: So how did you find that code? RAMESH RASKAR: By doing research. AUDIENCE: Just doing research? RAMESH RASKAR: Yeah. AUDIENCE: OK. RAMESH RASKAR: But it's not a brute-force search. AUDIENCE: Yeah. It was an intelligent search. AUDIENCE: And if you included enough padding there, wouldn't you be able to use circular convolution? RAMESH RASKAR: Yeah, I mean-- linear convolution is basically circular convolution with a lot of padding of 0s. AUDIENCE: Yeah, because you said then the math would be easier, right? RAMESH RASKAR: But then it's too large. I mean, finding a code that's 7 long or maybe 30 long is OK. Finding a code that's 1,000 long is nearly impossible. AUDIENCE: So the difference between MURA and that is only at the edges? Or is it all over the picture? RAMESH RASKAR: It's only the fact that one is linear convolution and one is circular convolution. AUDIENCE: OK. AUDIENCE: Yeah, and another thing is it's pretty amazing that [INAUDIBLE], because if you start just having very simple patterns on a square-- like, say, if you just draw this square and subtract this square, you take the Fourier transform, and you have all-- RAMESH RASKAR: Yeah, all over the place. So, yeah, it seems like you can just choose a random sequence and get a similar property. But actually, it doesn't work. The chances of a random sequence doing the right thing for you are very, very low. AUDIENCE: Instead of [INAUDIBLE]. [LAUGHTER] AUDIENCE: Are astronomy people already using-- RAMESH RASKAR: Mhm, yeah. AUDIENCE: Or were they using this for [INAUDIBLE]? RAMESH RASKAR: So in astronomy, you have circular convolution, because they use either two mask tiles and one sensor, or one mask tile and two sensors. So they have circular convolution. So all right. AUDIENCE: If you're tiling [INAUDIBLE] astronomical coded imaging. RAMESH RASKAR: Repeat that.
AUDIENCE: If you're tiling the mask at the aperture, but you are using a single-tiled aperture-- RAMESH RASKAR: Right. AUDIENCE: So if you tile that up-- RAMESH RASKAR: If you tile the aperture, you'll get a really horrible frequency response, unfortunately, because if you put two tiles, certain frequencies are lost. AUDIENCE: [INAUDIBLE] impressive. It's saying that, if I understand this right, basically, by taking the DC coefficient, you're reconstructing almost everything. Is that-- RAMESH RASKAR: No, no, no, not the DC coefficient, because if you look here, all the high spatial-- I mean, the whole image is not one value. AUDIENCE: Yeah, but look at that. That's the spectrum of your-- RAMESH RASKAR: Right. No, but there are non-zero values at the other frequencies. AUDIENCE: Yeah, yeah, a few. But-- RAMESH RASKAR: No, no, that's very important. AUDIENCE: Yeah, but by taking that, you could get a very good approximation. RAMESH RASKAR: Yeah. But to a naive consumer-- so look at this part, OK? This photo and this photo look almost the same, right? And remember, in this photo, many of those frequencies are lost. And in this photo, those frequencies are not lost, because all the frequencies are preserved. But that's because our eyes are not very good at guessing what the original image could be, given either this one or the previous one. So given this, I can challenge you: you're not able to predict that it has all this structure, right? From here, you cannot predict that you have this structure. AUDIENCE: So how would you describe the mask? Basically, you spread the energy over many frequencies but with very small coefficients. Is that-- RAMESH RASKAR: Exactly. Depending on the code, it's about 1/10 or 1/20 of the original power at that frequency. So you get significant attenuation. So the results are not perfect. If you look here, they're not perfect results, whether it's here or here. Look at this. I wouldn't call it photography quality yet. AUDIENCE: Yeah, no. RAMESH RASKAR: But if you apply very simple-- but this is a raw result. There is no median filtering or smoothing or anything. It's just pure Ax equals b-- x equals A backslash b. AUDIENCE: Just the fact that the mask, I guess, gives you broadband. RAMESH RASKAR: Yeah, it's fun. What's amazing about coded imaging is that the math is elegant and beautiful and sometimes complicated, but the implementation is very easy. At the end, all I had to do was put in this code, or flutter the shutter-- and it's very easy to explain. My previous boss used to say, the best ideas are the ones that are easy to explain but difficult to conceive. All right, so let's move on. OK, let me finish this one. So that's just one way of-- we only saw two ways of engineering the point spread function: one in motion and one in focus, right? But there are many others. We saw some of them over the course of the semester, where you can put, for example, a special filter in the lens so that you get blur that's independent of depth. AUDIENCE: Ramesh, let me ask you one more question. RAMESH RASKAR: Yes, go ahead. AUDIENCE: You had this binary mask, right? RAMESH RASKAR: Mhm. AUDIENCE: What if the mask was not binary? If you have some intermediate values, so that you could set up an approximate [INAUDIBLE]. RAMESH RASKAR: Right. AUDIENCE: So what would you have? RAMESH RASKAR: So that's a very good question. So-- let's see. Let me get this out first.
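Picking up the "x equals A backslash b" remark above: a minimal sketch, assuming NumPy, of deblurring posed as a linear system, where A applies the linear (non-circular) blur of a coded exposure. Sizes and the code itself are made up for illustration; real images would use fast Fourier-domain solvers rather than a dense matrix:

    import numpy as np

    n, k = 100, 13
    rng = np.random.default_rng(3)
    code = rng.integers(0, 2, k).astype(float)
    code /= code.sum()                               # normalized blur kernel
    x_true = np.zeros(n)
    x_true[30:35] = 1.0                              # toy "scene": a bar...
    x_true[60] = 2.0                                 # ...and a bright point

    # Linear (non-circular) convolution matrix A, shape (n+k-1, n).
    A = np.zeros((n + k - 1, n))
    for i in range(n):
        A[i:i + k, i] = code
    b = A @ x_true                                   # the blurred measurement

    # MATLAB's "x = A \ b" is least squares; with a broadband code this inverts well.
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("max reconstruction error:", np.abs(x_hat - x_true).max())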
So if the function was continuous-- in the case of the flutter shutter, we didn't have much of a choice. It's either opaque or transparent. It's one or the other. AUDIENCE: Yeah, yeah. RAMESH RASKAR: But in the case of the aperture, yes. It doesn't have to be opaque or transparent. It could be a continuous value. And initially, actually, I and my co-author, Amit Agrawal-- very smart guy-- we always had these arguments about whether continuous is better or binary is better. And he continues to believe that continuous is better. And we still don't agree on this, by the way. And nobody has written this down. But it turns out that, for any continuous code, there is a corresponding binary code that will do an equally good job, so far. And that's because with a binary code, you get to play with the phase function. I won't go into the detail. But here, we are only showing you the amplitude of the Fourier transform, not the phase. So you get that extra degree of freedom to play with. And if you play with the right phase, then it turns out you can always find a binary function. Mike? AUDIENCE: Has anyone tried to combine the coded aperture and the coded [INAUDIBLE]? RAMESH RASKAR: That's a great idea. People talk about it, but nobody has done it. It's just one of those things. It's just one of those things. It's like we are sick of it, so we don't want to do it. But I think it's worth trying. Because those are orthogonal. So here's a great thought experiment. Mike's question was: there could be something that's moving, so it's motion blurred, but it's also out of focus, OK? Can you use both at the same time and recover? AUDIENCE: Yes. AUDIENCE: What about the light loss? RAMESH RASKAR: Yeah, it's one fourth of the light, but let's not worry about that. OK. [LAUGHTER] Explain. AUDIENCE: They are orthogonal technologies, basically. RAMESH RASKAR: Exactly. So it's amazing, because motion is time, and focus is space. They're completely orthogonal. So you can play with it. It's very interesting. AUDIENCE: But still, motion is being represented by space on the-- RAMESH RASKAR: Yeah, eventually, it's going to have a 2D projection. AUDIENCE: Yeah. RAMESH RASKAR: So that's very interesting. All right? So the point spread function-- although my team and I were the first to do this in the graphics and vision domain, people have been trying to do it since the mid-'90s in imaging. And there was a very classic paper by Cathey and Dowski and others on so-called wavefront coding. And a lot of it is actually being used in cell phone cameras. And what they do is they put this phase mask between the object-- near the lens-- so that-- and we saw this in the beginning of the class-- so that the image never comes into sharp focus. Instead of that, it's like a set of straws. Imagine these are all straws that are coming in. And you just twist them. So the top one kind of goes at the top-- I'm sorry, at the bottom. The bottom one goes at the top. And when you think about the cross-section of all the straws, it's kind of cylindrical when they all come together. OK, I'm going to take all these straws-- or maybe strings, if you want to think about it that way-- and I'm going to twist them so that they remain cylindrical. So if the image is out of focus by this width when I put my sensor here, then if I put the sensor here, it's still out of focus, but by the same width. So no matter where you are, the image is out of focus, but by the same amount. And you say, well, what's good about that?
It's always out of focus. But it turns out-- wavefront coding, as they call it-- you can think of this now that we know what a light field is. It's just a unique remapping of the light field of the scene. And it turns out that from that, you can recover images. Like, so this is the open aperture. I'm sorry, I don't have a picture. But we discussed it in class, so I hope you remember it. I'm missing that picture. We saw this in the very first class, by the way. And the benefit, it turns out, is that it preserves the spatial frequencies, and, no matter which depth you are at, you have the same defocus blur. So the disadvantage of the coded aperture was that you needed to know the depth to be able to deblur. But now, because it's independent of depth, you can just apply the same deconvolution and get back a sharper image. So whether I'm here, or here, or at infinity, I get the same amount of blur-- the same point spread function. And from that, I can deconvolve and get an extended depth of field that goes from very close to the lens to infinity. So OmniVision bought this company, CDM Optics, which is named after Cathey, Dowski, and somebody-- those are the two professors at Colorado, and the last one, I forget. It was just bought by OmniVision, which is a big cell phone-- I mean, big imaging company. Most of their business is cell phones. And they acquired the company and immediately laid off all the smart people who invented this. It's very sad, because that part is done. They just wanted the technology. And it's in a lot of cameras. There's another company called Tessera, which has a very similar solution. But what they do is-- this one, basically-- and we discussed this, I think, in the beginning, the wavefront coding-- they are simply placing an additional element here, so that this part of the lens will focus an image here. This part of the lens will focus here. This one focuses here. The top of the lens has a short-focal-length addition. It focuses here. The second one focuses here. The third one focuses here. The fourth one focuses here. The fifth one focuses here. Here. And here. OK, so imagine the main lens has a certain focal length, and we're just going to add a little bit of additional focal length-- that's why you have focal lengths F1, F2, up to F10. And then [INAUDIBLE]. And this is the twist that I was talking about. This was [INAUDIBLE]. But within this region, the thickness varies. So you can either think of it as adding small matchsticks on top of the main lens-- or the way they do it is they actually put one single sheet that looks like that, an additional layer on top, a phase mask. And a phase mask basically means you are changing the phase of the incoming light. And, as you know, if you have a piece of glass, and light is going through, it's going to slow down here and then again [INAUDIBLE]. That means you basically slowed down the light. And that's where the glass [INAUDIBLE]. If the light hits the top of the lens, like those two, it doesn't slow down that much. If it goes through the middle of it, it slows down a lot more. That's why, as we learned at the beginning, if you have something very far away, this slows down a little bit. So those go over here. This goes over here. And everything just works out with the optics. But with this extra piece of glass, you're saying, I'm going to speed up and slow down the light in a slightly different way [INAUDIBLE].
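A minimal Fourier-optics sketch, assuming NumPy, of the wavefront-coding idea above: add a cubic phase term to the pupil, and the point spread function changes very little with defocus. All constants here are illustrative; this is a toy scalar-diffraction model, not CDM Optics' actual design:

    import numpy as np

    N = 128
    x = np.linspace(-1, 1, N)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= 1.0).astype(float)        # circular aperture

    def psf(defocus, alpha):
        # Pupil phase: quadratic defocus term plus the cubic "twist" mask.
        phase = defocus * (X**2 + Y**2) + alpha * (X**3 + Y**3)
        field = pupil * np.exp(1j * 2 * np.pi * phase)
        return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

    for w in (0.0, 2.0, 4.0):                         # increasing defocus (waves)
        plain = psf(w, alpha=0.0)
        coded = psf(w, alpha=10.0)
        print(f"defocus {w}: plain PSF peak {plain.max():.3g}, "
              f"cubic-mask PSF peak {coded.max():.3g}")

The printed peaks are a crude proxy: the plain lens's PSF collapses as defocus grows, while the cubic-mask PSF stays nearly the same at every depth, which is why one fixed deconvolution recovers an extended depth of field.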
This is the CDM Optics solution, or the [INAUDIBLE], which was actually bought by another company-- [? Australia-- ?] I'm forgetting the name. The solution is very similar. I'm sure they're fighting it out in court right now. Same solution. Instead of putting in this particular element that's just going to add some extra glass, they do it mostly in a minor form. It's just [INAUDIBLE] on that one. So it's basically the same solution, but creating different focal lengths for different [? portions. ?] AUDIENCE: Yeah. Although you said-- I mean, there's this portion there, where if you have another blur [INAUDIBLE], right? RAMESH RASKAR: Right. AUDIENCE: But what is being blurred? At each depth, or where? RAMESH RASKAR: Independent of the depth, you get the same blur. AUDIENCE: Yeah, but see, some rays are focusing, say-- RAMESH RASKAR: At an angle? It doesn't really matter, because, just like in a traditional camera, even if the point is not on axis but off axis, you still get the same-- you'll still get a disc, right, which we saw in the-- AUDIENCE: Yeah, you get the disc, but I think that, in a given picture, as you just move it back and forth, you're going to get a different color-- I mean, a different amount of the mixture of-- RAMESH RASKAR: A different shape, you mean, or a different color? AUDIENCE: A different color. RAMESH RASKAR: Not really. AUDIENCE: Because look at that. If the ray was coming from the top, it's going to reach at some point-- RAMESH RASKAR: But they all have the same-- I mean, you're saying, because of chromatic aberration? AUDIENCE: No, just because of the geometry, at least it seems to me. RAMESH RASKAR: The color, or the shape? Just to be clear, we are not adding any color here. We're just adding one piece of glass. AUDIENCE: OK. OK, OK. RAMESH RASKAR: We're just adding one piece of glass. So we're bending the rays, but the colors are, for all practical purposes, the same. AUDIENCE: Yeah, what I find is that maybe some points in the scene would be-- [INAUDIBLE] RAMESH RASKAR: Yeah, but the effect is very low, remember. The effect is extremely low. So maybe you have a pixel that gets blurred by 10 pixels or [INAUDIBLE]. It's not a global effect. So this picture-- this particular diagram is misleading, because it seems like this point is going to go all the way. But this is very narrow. And the blur is only about 10 pixels, no matter where you [INAUDIBLE]. So maybe that was the issue. So if you have a point off axis, it's still going to create an image that's blurred by 10 pixels. So this is, again, very counterintuitive: you go and make the image intentionally blurred. It's just that it's blurred everywhere. And then we also saw this one very early on, where the point spread function-- typically, when something is in focus, it looks like a point. And then when it goes out of focus, it looks like a disc. If it goes out of focus the other way, it still looks like a disc. But this group, again at Colorado-- when it's in sharp focus, you see two dots close together. And if you go in and out of focus, then the two dots [INAUDIBLE]. So they call it the rotating point spread function. AUDIENCE: Is it the same group that developed the [? framework? ?] RAMESH RASKAR: It's not the same group, but the same university and the same neighborhood. AUDIENCE: What was the reasoning for developing the rotating point spread function? RAMESH RASKAR: Doug's question is, what's the benefit of this? AUDIENCE: Does [INAUDIBLE]?
[LAUGHTER] RAMESH RASKAR: She would have used it by now. AUDIENCE: Yeah. RAMESH RASKAR: What's the benefit of this strange point spread function? AUDIENCE: You know if you're out of focus, and in which direction. AUDIENCE: Yeah. RAMESH RASKAR: Yeah? AUDIENCE: From the focal point. RAMESH RASKAR: That's one. AUDIENCE: And you know your-- RAMESH RASKAR: But do you know by how much? AUDIENCE: Yeah, you know by how much, because of the-- RAMESH RASKAR: Because it's an angle. AUDIENCE: --rotation. RAMESH RASKAR: So the goal here was: no matter where you are, your point spread function is the same. The goal here is exactly the opposite. If you go slightly out of focus, you get a very different point spread function. So this one they use in microscopy with fluorescent dyes. When you're looking with a microscope, depending on the depth of your tagged particle, the point spread function will look very different. So you can estimate the depth by looking at the orientation of those two dots. So that's very interesting. AUDIENCE: But can't that keep going all the way around? At some point, you can't-- RAMESH RASKAR: No, it doesn't. After some point, it stays the same. AUDIENCE: OK. RAMESH RASKAR: This works only in the [? sweet ?] region. AUDIENCE: So have they been able to reconstruct three-dimensional neuronal structures, or-- RAMESH RASKAR: Yeah, that's why they're getting a lot of press. And they're doing some amazing work, [INAUDIBLE]. So they have a lot of collaborations, and now they're able to measure the z-dimension down to about 10 nanometers. AUDIENCE: Wow. RAMESH RASKAR: The xy resolution still remains that of a traditional microscope-- 1 micron, half a micron. But the z-dimension is 10 nanometers. It's very new. They are still working on a lot of these concepts. OK? So let's very briefly look at compressed sensing, because it's something you should be familiar with. OK, so here's an idea that received a lot of publicity. It was even in "The 10 Emerging Technologies" by a very reputable magazine. I hope you don't believe any of those things. It's a very cool idea, by the way. And as a scientist, I really like it. But when somebody like Technology Review or Wired magazine says Top 50, Top 10-- of course, I wish I were listed among them. But at the same time-- you know, it has good side effects. Well, anyway, this single-pixel camera was listed as one of the big things in 2005 by Technology Review, which is a magazine I really like, by the way. And the idea is, instead of taking one single photo, what you're going to do is-- let's say that's your scene. You're going to take a single photodetector and aim it at a set of micromirrors. And in the simplest case, you turn off all the micromirrors, so that light goes this way, and there's only one micromirror on, like this one. So a single photodetector-- this is like the dual photography we saw right at the beginning, where you could see that card. If I just turn on this one micromirror-- by the way, this is what's in your DLP projectors, the Texas Instruments Digital Light Processing micromirror devices. So it's very easily available. I just receive light from the scene for that one pixel. So the scene is being imaged on this little array. And you just want to turn on this one pixel. And then for the next picture, you're going to turn on the next pixel, and so on. And one at a time, if you go through a million pixels, you will get a one-megapixel image, right? But of course, the light will be very little if you just turn on one pixel.
So now, we'll do some [INAUDIBLE] multiplexing, which we saw a few classes ago, where you're going to turn on half of them, take one reading, turn on some other random combination of half of them, take another reading, and so on. And now about half of them are contributing to the photodiode. So the photodiode is very well exposed, and you can take a very short exposure reading. And, again, if you take a million such readings, you can recover this picture. That's the concept. AUDIENCE: Does the number of readings you have to take increase exponentially? RAMESH RASKAR: No, just linearly. If you want 2 megapixels, then you need to take 2 million [? pics. ?] All right? So the claim this group at Rice University made was that if I want a million-pixel image, I don't really have to take a million readings. I can do with much fewer than a million readings. And the claim is: imagine you had this photo as a JPEG. Compressed, it might take up only tens of thousands of bytes. So let's say it takes up 10,000 bytes. So if I can represent the image with 10,000 bytes, and I'm going to take a photo and compress it down to 10,000 bytes, can't I just directly measure only 10,000 values in the scene, so that I save on everything, OK? So I can take this picture effectively with just 10,000 readings but recreate a million-pixel image. And that's where the concept of compressive sensing, or compressed imaging, comes up. You want to sense something that is much higher resolution but recover it in a compressed way-- taking the picture with the hardware and compressing in the software. You're going to compress it while sensing. So how does it look mathematically? So let's see. Let's see if there's an easy way to explain this in a short time. So that's the trick we're going to do. We're going to take about half the pixels and measure the intensity, and so on. So those are our measurements. So our unknown image is x. And we're going to take a lot of these projections. This is the [INAUDIBLE] matrix, for those of you familiar with it. And these are our measurements. So we're going to say: given these measurements, I'm going to recover my original image. Now, when you think about a natural image, the claim is that if you just use the DCT, some Fourier-like coefficients, then you can compress the image and represent it with very few bytes-- only 10,000 bytes for a megapixel. So let's say your Fourier coefficients are here. And this is your image. That means that if I just put a Fourier transform here, then I can convert the coefficients into the image. And the number of values required to represent the image is much smaller than the million values required here. So we have a million values here, but only about 10,000 values here. And the claim is that this understanding-- that my image can be represented in some transform basis, in this case a Fourier basis, using very few coefficients-- can be exploited while I'm sensing. OK, this is your optics. This is your math. See if I have it on slide 5. So that's the theory of compressive sensing: using some basis, I can transform the image and take fewer measurements. And there are certain cases where it is really true. You have signals that can be compressed very easily. A very classic example is in communication, where you are doing software radio, where you have a huge band of frequencies-- software radio means, instead of tuning with electronics, you just capture the whole signal. And then in software, you can listen to any station.
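A minimal sketch, assuming NumPy, of the multiplexed single-pixel measurement described above: each reading sums the scene through a random half-on mirror pattern, and with as many readings as pixels you can invert the system directly. The sizes are tiny and the patterns are random stand-ins; a structured (Hadamard-like) pattern set would guarantee invertibility, whereas a random one is only very likely to be invertible:

    import numpy as np

    p = 64                                     # tiny 8x8 "scene", flattened
    rng = np.random.default_rng(4)
    scene = rng.random(p)

    Phi = (rng.random((p, p)) < 0.5).astype(float)  # each row: one mirror pattern
    y = Phi @ scene                                 # one photodetector reading per row

    recovered = np.linalg.solve(Phi, y)             # invertible with high probability
    print("max error:", np.abs(recovered - scene).max())

The compressive-sensing claim in the next paragraph is that, for sparse scenes, far fewer rows than p readings already suffice.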
And Nyquist theory says, if your band is, I don't know, 100 megahertz wide, then you must capture it with a sampling bandwidth of 100 megahertz. But we know that in communication, not all bands are actually occupied. Many of the bands are empty. Only certain frequencies have a signal. So people have come up with very clever mechanisms, where they realized that you don't have to capture a 100-megahertz signal. Only some of the bands are actually on. And I'm measuring in the Fourier domain, because in communication, that's natural. And by doing that, they're able to achieve the effect of a software radio with a detector that doesn't have to sample a 100-megahertz-wide signal. It turns out, for images, this doesn't work. And that's because there is no transform-- no linear transform-- that allows you to represent an image with very few coefficients. When you do JPEG, it does a frequency transform. But after that, it does a lot of other things. It says, perceptually, the higher frequencies are not as important, so I'm going to represent them with coarser quantization levels. Or certain values are too small, so I'm just going to truncate them. So all these operations-- changing quantization bands, truncating, or thresholding-- are nonlinear operations. They are not linear operations. So it turns out there is no linear transform that allows you to represent an image with fewer coefficients. So in general, this scheme doesn't work. But you will continue to see people who come to you and say, you know, I have this magical thing I just heard of, compressive imaging or something, and it will just solve the problem. There are certain images, like cartoons, that can be represented with very few samples, because they have flat regions, sharp boundaries, and few fluctuations. But a natural image, unfortunately, cannot be transformed that easily. And you can talk to Rohit, and he'll tell you all the details of the dangers of [? compressive sensing. ?] AUDIENCE: So the single-pixel camera is just a hypothesis but not-- RAMESH RASKAR: Yeah, but at the same time, it was the first one that kind of allowed people to visualize, or conceptualize in their minds, what compressive sensing might do. AUDIENCE: This idea is cool, but how feasible or how important is it to have a single sensor rather than a wide array of sensors? What this is achieving is basically allowing you to build a camera with a single sensor. But do we really want that just to do compressed sensing? RAMESH RASKAR: From a scientific point of view, if somebody can build this and show that you can take fewer measurements and recover the image, that's a breakthrough. How do you use it? I agree with you that, in terms of practical implementation, maybe this is the best application, maybe it's not, and so on. But that's kind of a business reason. AUDIENCE: Would this be faster than an array of sensors? RAMESH RASKAR: Again, in practice, both of you are right. There are very few benefits. But if you follow compressive sensing, you realize it's a very, very active field. AUDIENCE: So again, maybe a different type of sensing, but what are people doing in computational photography for feature extraction-- in the same way that the brain processes certain features-- to do better compressed sensing for context and imaging? RAMESH RASKAR: You mean compressing an image, or sensing with fewer samples? AUDIENCE: Sensing with fewer samples.
RAMESH RASKAR: Yeah, so that all kind of gets clubbed into this concept of compressive sensing. If you think about V1 and V2 and visual processing, there's a lot of work that has been done over the last 30 years. There's good work at CSAIL as well. But that's purely software. AUDIENCE: Right. RAMESH RASKAR: And maybe you're asking, can we use sensing mechanisms that are similar to our brain so that we don't-- AUDIENCE: You don't have to do any software. RAMESH RASKAR: Exactly. The secret of success of film photography is that if somebody had given you this problem before the invention of film-- there is a scene, and I want to give you the sensation of the same scene, time shifted or space shifted-- there are so many ways you could solve that problem. You could start with a reproduction of a photo, or you could tap into the retina. You could tap into V1, V2. You could interface at any point in the pipeline for human vision. But the simplest solution is to just create that photo on a passive surface and let the brain do that processing all over again. And so it's like a simple impedance match. If I can see the scene and understand it, I can just present that as is and let it go through. And this is how we have been treating photography all this time. It's a record of visual experience, which is great for humans, but it's not so great for computers, because computers don't understand any of that. And what you're saying is, what computers care about are all these high-level features. And that's why we're going back to the drawing board and saying, let's build cameras that are not mimicking the human eye but actually extracting more information, like [? apertures, ?] or what we recover with the multi-flash camera, or additional information with light-field cameras or multi-spectral cameras and so on. AUDIENCE: So what kind of-- so Brett was asking, why would you want to do [? precisely? ?] When do you have to reduce the number of measurements [INAUDIBLE]? And I think one of the problems [INAUDIBLE]. I don't know. The debate about whether it's really better or not-- is tomography [INAUDIBLE]? RAMESH RASKAR: Tomography, yeah. AUDIENCE: Yeah, when you have to recover the scan, you want to take as few measurements as possible. So if you can reduce that [INAUDIBLE], that's one of the [? implications ?] of that. RAMESH RASKAR: Right. AUDIENCE: This single-pixel camera-- really, it does that. RAMESH RASKAR: But the benefit of tomography, which we studied in the last couple of lectures, is it's a very high-dimensional signal. And usually, in a high-dimensional signal, there's more sparsity. There are only a few places. If you think about taking a CAT scan of your body, there are only like four or five types of materials. There is muscle. There is blood. Whatever. There are only five or six things. It's like a cartoon. AUDIENCE: Exactly. I find this is interesting because the test-- RAMESH RASKAR: It's a 3D cartoon. AUDIENCE: But if you look at it, it looks just like a cartoon does-- some white patches, some black patches, some-- RAMESH RASKAR: Exactly. And that's why compressive sensing works very well there. AUDIENCE: So is compressive sensing used in anything commercially currently? RAMESH RASKAR: A lot of people are getting grants. AUDIENCE: Oh. [LAUGHTER] RAMESH RASKAR: Is that a commercial-enough reason? AUDIENCE: No. But also-- RAMESH RASKAR: If you put those two words in, your chances improve by 50%. AUDIENCE: I was thinking of this interesting problem that, I guess, extends to what you guys are talking about.
So compressive sensing allows you to take fewer measurements. But the problem is you need to actually have more information about the scene before you take the measurement, which is another measurement. RAMESH RASKAR: So actually, to clarify, the measurements are done in a non-adaptive manner. So you don't have to know anything about the scene to do these measurements. That's actually one power of-- AUDIENCE: I mean, if you want it to actually succeed-- RAMESH RASKAR: But when you reconstruct, you have to know something about the scene. You have to know in which transform basis it's actually sparse. So is it sparse when you take a Fourier transform? Is it sparse when you take a [INAUDIBLE] transform? Is it sparse when you take gradients? Like in terms of cartoons, it's just gradients. So you have to know that when you do reconstruction. But the advantage is, at the time of capture, I just use this random basis or Fourier basis-- I mean, kind of a modified [INAUDIBLE] basis. I can just go ahead and sample it. And then in software, in reconstruction, I worry about the prior information about the scene, which is great. AUDIENCE: Well, I think in the case where you're just taking a set style of captures, you're limiting yourself in what kinds of scenes will be compatible with that capture. So for example, if I just had a scene that's all white, then just one capture would be enough. RAMESH RASKAR: Yeah, but that's because you know something about the scene. AUDIENCE: Exactly. Exactly. So-- RAMESH RASKAR: But if you have this situation where you don't know anything about the scene, you'll just use the same exact procedure, for example. AUDIENCE: Right, but if you don't, then you lose the benefit of taking fewer pictures. RAMESH RASKAR: No, the claim is that even if you don't know anything about the scene, you take very few measurements. All you know about the scene is that once you take its transform, some transform, it's very sparse. It can be represented in a compact space. AUDIENCE: So I remember, actually, a mathematical mapping for this, where we're reducing dynamically the number of captures you have to take while you're capturing it. RAMESH RASKAR: But that's adaptive measurement-- adaptive measurement. AUDIENCE: Yes. RAMESH RASKAR: Because once you take a picture, you say, let me see what I did not capture. So let me take the next one next. That's a very different problem. AUDIENCE: Yeah. Well, anyways, the code for it's inside the dual photography thing. RAMESH RASKAR: Yeah, somebody did dual photography with compressive sensing. AUDIENCE: And it's adaptive. RAMESH RASKAR: And it works very well because, again, it's a high-dimensional signal-- 2D camera, 2D projector. It's four dimensional, but what you're trying to recover is two dimensional. So it works again. So tomography is the same. It's 4D capture for 3D representation. OK, so I'm sorry we're not taking a break. Should we take a 30-second break before we move on to two very small topics, which are how to write a paper and a wishlist for photography.
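To make the y = Phi x discussion above concrete, here is a minimal numerical sketch in Python. It is only a toy under stated assumptions: a 1-D signal that is exactly sparse in the DCT basis, random 0/1 masks standing in for the "turn on half the mirrors" readings, and greedy orthogonal matching pursuit standing in for the L1-style solvers the Rice group actually used. Every constant in it is made up for illustration.

import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, k, m = 256, 8, 80   # signal length, sparsity, number of mask readings (m << n)

# A signal that is exactly k-sparse in the DCT basis (a stand-in for
# "10,000 coefficients for a megapixel image").
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
Psi = idct(np.eye(n), axis=0, norm="ortho")   # columns = inverse-DCT basis vectors
x = Psi @ s_true                              # the unknown "image"

# Each reading: turn on a random half of the mirrors, sum onto one photodiode.
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
y = Phi @ x                                   # m photodiode readings, m << n pixels

# Greedy sparse recovery (orthogonal matching pursuit) over A = Phi Psi.
A = Phi @ Psi
norms = np.linalg.norm(A, axis=0)
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual) / norms)))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
s_hat = np.zeros(n)
s_hat[support] = coef
print("relative error:", np.linalg.norm(Psi @ s_hat - x) / np.linalg.norm(x))

The lecture's caveat shows up if you swap the DCT-sparse toy signal for a real photograph: no linear transform makes it exactly sparse, so recovery from this few readings degrades, whereas a piecewise-constant, cartoon-like signal stays easy.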
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_8_Project_ideas_discussion.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. RAMESH RASKAR: So some of you have already started contacting me about project topics. And that's great. In fact, we decided a couple of projects or we started a direction over email. So I encourage all of you to start doing that. User interaction is a lot of fun, so you're welcome to do a project in that space. But avoid the list of boring [INAUDIBLE] topics that I presented in the class. Using some intelligent lighting, this one is a lot of fun. I think if you can convert the photodetector of a flatbed scanner and do something really clever with it-- because it's like a 2,000-hertz camera with 4,000 pixels. So Matt is doing something with it. [? Ankit ?] has a lot of experience ripping open flatbed scanners, and [? Masai ?] is also playing with it. Think of some cool projects in that space. Tomography-- STUDENT: I actually have a question for that. It seems like almost everything you can do with a flatbed scanner has been done. I had some ideas, and people have done so much stuff. Is it possible? I mean, if there is a project out there that I would like to do, but it's already been done by somebody else, is it still possible to pursue it? Or is it useless, then? RAMESH RASKAR: I hope by the end of the class you'll always say, people have done a lot of things, but they're very boring. STUDENT: I mean, there's always a little twist on it, but it's not like fundamentally-- RAMESH RASKAR: But what we'll do will be radically new, because we are combining hardware with computation. Even on cellphones today, you can buy a simple app that will do panoramic stitching. It's mind-blowing, because it requires some computation. All the apps I have downloaded so far don't allow me to do panoramic stitching. It's just amazing. They charge a couple of dollars for it. And people just don't do computation. People are really good at hardware hacking. It's like I told you-- there was the era of fixing cars, and there was an era of building electronics, and now is the era of computation. So I think you'll be able to find lots of cool things you can do by taking hardware and applying new methods to it. But send me your ideas and we'll refine them. Talk to [? Ankit. ?] Talk to other people who are here to help us and [INAUDIBLE]. Yeah, don't go with the flow. Don't follow the hype. Tomography for internals-- Doug was just talking to me. He's interested in thinking about that, always a fun project. And Doug, who's going to talk next week, has this really cool project on an array of lights-- putting an object in the middle, capturing photos of that, and from that, creating 3D models. But you can talk to Professor [? Mukagawa, ?] who's back there. And he's very interested in this kind of project. STUDENT: Are the pictures taken in front or from the back? RAMESH RASKAR: The picture's taken from the back. STUDENT: But there are cameras-- RAMESH RASKAR: It's just one camera looking at-- let's see if there's a picture. No. There's a camera that's looking at the screen, basically. STUDENT: But it seems that there are several cameras above and below the light source, right? RAMESH RASKAR: No, these are all lights. There are no cameras. That's also lights. STUDENT: Oh, also lights, OK, I see.
RAMESH RASKAR: So we have this set up downstairs if anyone wants to play. And there's a single photo. From that you can recover 3D models. But you can do this for tomographic reconstruction. [INAUDIBLE] for 3D scanning is always a lot of fun. And in fact, Doug, again, has all the notes on different ways you can scan objects in 3D. So you can just look that up. Using fluorescence or transparent material, adding some objects-- so think about scanning 3D objects to start with. Cameras using other spectrums-- maybe a camera using Wi-Fi. So maybe you saw this project called "Wi-Fi camera," where basically, in that case, they just took a tube, a metallic tube, and put a Wi-Fi detector in it. And they just scanned the whole world, just scanning it-- kind of a boring project, but fun. But maybe you can just create that as a light-free camera. That could be a lot of fun. Visible and thermal-- a lot of interesting things you can do with it. Mike might be able to give us some thermal cameras to play with, depending on how things go. You can use thermal IR and visible for thermal segmentation. For example, you know that glass is completely black in thermal, but in visible it's not. And things look different in thermal versus visible, so you can combine the two to do something interesting. In thermal IR, the human face looks very different. So you can do some interesting things there. Multispectral cameras-- we will talk a lot about color, and wavelength, and spectrum. And we might be able to create cameras, for example, that can distinguish camel from sand. And I'm sure Michael has other more motivating applications of distinguishing color. Some other simple projects-- let's say you want to create a six-color camera. You can just take an ordinary RGB camera and put two filters in front of it. Filter one, take a photo; filter two, take a photo. And from those two photographs, you can create, possibly, a camera with a different spectral response. This is a fun project to do, also. Polarization is useful under water, or in fog, or for the freshness of skin, or vegetables, and so on. You can convert an ordinary camera into a multispectral camera. So we're going to see this particular toy you can buy for $2. Imagine if you can put something like this in front of your camera phone, and convert your camera into a multispectral camera. And Roark is interested in doing that. So we can work with him. Multiflash camera-- you're doing it for an assignment, but instead of one camera and four flashes, maybe you have multiple cameras and multiple flashes, or multiple cameras and one flash-- again, some interesting things you can do there. [? Schlieren ?] photography-- you can look at thermal effects. And we saw it earlier. Again, a lot of fun there. Let's see if there's an image. So creating this is a-- I believe it's a soldering gun here. And you can see the changes in temperature. So Mike, for example, is going to explore [? schlieren ?] photography with thermal IR cameras. I thought that was just a great combination. Strobing and color strobing-- you can do a lot of crazy things. You may have seen waterfalls. If you strobe the light just at the right rate, the waterfall becomes steady. And if you start strobing at a slightly slower rate, the water actually starts going up-- all kinds of interesting effects. And if you do color strobing, then the waterfall looks like it's a rainbow. And if you do faster color strobing, the waterfall looks like it's moving up in a colored rainbow-- all kinds of [INAUDIBLE].
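A quick numerical sketch of that waterfall-strobing effect, with made-up numbers: the drop spacing and fall speed below are hypothetical, and the apparent motion is just the per-flash displacement wrapped to the nearest drop period. Whether the water seems to creep up or down depends on whether the strobe is slightly faster or slightly slower than the drop rate.

d = 0.05    # assumed spacing between falling drops, in meters (hypothetical)
v = 1.00    # assumed fall speed, in meters per second (hypothetical)
f0 = v / d  # 20 Hz: at this strobe rate the waterfall looks frozen

for f_s in (f0, f0 - 1.0, f0 + 1.0):   # matched, slightly slower, slightly faster
    step = v / f_s                      # distance a drop falls between flashes
    seen = (step + d / 2) % d - d / 2   # displacement as seen, wrapped to [-d/2, d/2)
    print(f"strobe {f_s:4.1f} Hz -> apparent velocity {seen * f_s:+.3f} m/s")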
We just had a SIGGRAPH paper looking at waterfalls, using [? an endoscope ?] and strobing. And if you're interested in playing in this space-- so we used compressive sensing with strobing [INAUDIBLE]. Relighting photos-- it's a Holy Grail of computational photography. How can I take a photo and then change the lighting post-capture? We saw a bit of that in your first assignment. But maybe there's something more we can do in that space. Non-imaging sensors, and combining those-- so combining the gyro, or the GPS, or online photo collections-- lots of cool ideas there. Also some cool ideas in, what if two cameras actually talk to each other? So Kevin has been exploring a little bit of that. If two cameras at the time of capture communicate with each other-- optically or by some other mechanism-- we can have more information there. Any questions on all these topics so far? Optics is just wide open-- light field, [? score and ?] [? exposure, ?] [INAUDIBLE], aperture. Again, I had this project on placing interesting apertures that had different colors and different shapes. So here is a traditional blur, but here, the aperture is shaped like an eight, so all the highlights have a little eight in them. Bioinspired vision-- trying to mimic some of the biological imaging, whether it's optics or sensing. We did a project where we used-- this is Professor [? Hera-- ?] using compound eyes. These are fly-eye lenses with multiple lenslets here. But you can think of trying to imitate the biological vision of any creature. So mantis shrimp, for example-- it's been in the news-- it's always in the news, because it uses polarization and mirrors. Maybe we can create a camera that mimics a shrimp, or a lobster, or a scallop. The three of them have completely different mechanisms for sensing an image. So that'll be cool. Some dreams, like how can you change zoom without moving parts-- you can talk to [? Ankit ?] about that. What can you do if you are allowed to move the sensor while the photo is being taken? So [? Ankit ?] had this project where you can take a picture with, like, a cellphone camera-- very tiny aperture. But by shaking the sensor and the lens while taking the picture, you can create a shallow depth of field. So it starts behaving more like an SLR. So how can you create SLR-like image quality from a tiny aperture? This is achieved by shifting the sensor. And then at CSAIL, they used sensor motion for supporting motion deblurring. Time-lapse photos-- always fun. We saw this project where you can take a nighttime photo and convert it back. But there are many, many other groups that are doing really fascinating work with time lapse. Displays-- so displays are definitely part of computational photography, because after all these great photos we create, are we going to experience them just on a flat, 2D screen? So maybe a display that's aware of temperature, aware of-- can blow out the candle, lower density, shake aware, going beyond what Matt did for his [INAUDIBLE] screen. How can you convert a big-screen LCD into a camera that does even more? Scientific imaging, microscopy-- so a new trend is that instead of trying to build a microscope, you can just take the sensor of a camera. And because the pixels are shrinking down to 1 micrometer now, they're approaching the resolution of a traditional microscope. So you can image it directly on the sensor. You can just put an object directly on the sensor and shine light at it. And you can take a picture on the flat sensor. First of all, it's very large area.
It's very cheap, very fast, very simple optics. So there are very interesting things you can do in that space. So think about all the light field, and all the lighting tricks we did. And today, we're learning about color. How can you combine that with microscopy? Because it's a completely new regime, doing microscopy without a lens, microscope using smarter lighting. Confocal illumination-- we'll study that next week. For looking at samples layer by layer, and scattering-free or scattering-aware imaging. If you have a sample that scatters light, how do you image without the scattering effects? And then listeners, you should pitch your ideas. So we have a lot of ideas floating in the air. So you still have two or three days to propose your three ideas. They could be any one of these or something that you're thinking about on your own. And talk to me or Professor [? Mukagawa, ?] Professor [? Oliveira ?] [? Ankit, ?] and [? Ashok. ?] He's not here, but-- and other people I mentioned here, like Roark, and Doug, and others. Sounds good.
|
MIT_MAS531_Computational_Camera_and_Photography_Fall_2009
|
Lecture_8_Wavelengths_and_colors.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So let me hand it over to Ankit now. And we're going to talk about color. ANKIT MOHAN: So I'm going to start with something that I discussed a few weeks back when we were talking about epsilon photography, if you remember the slide I had back then. And this is one of the oldest ways of creating color images. You just capture the images over time-- one with a green filter in front of the camera lens, one with a red one, one with a blue filter. And from these three images, if you project them back on a white surface, again, through three different projectors-- one with a green, one with a red, and one with a blue filter in front of it-- you will get what looks like a color image to the human eye. And this is stuff that was done more than a century ago. And you can take these images now, these individual grayscale images, and color tint them and then, again, add them back together, similar to what you did for your assignment when you were adding the lighting. And you can create these digitally. And now the Library of Congress has this huge database of images where, by registering them, they're recreating these color images from black-and-white images. So this is one way of creating color in photography. Another related, similar thing to this is the color wheel. I just wanted to-- this is something that, again, I talked about in the previous class. And it's something that's used more for projectors. You have this color wheel rotating in front of it-- this one probably has one if it's a DLP projector. And at any given time, you're just projecting one component of the light. You're either green, red, or blue. And the human eye actually integrates it over time, because it happens in rapid succession, one after the other. So it's a little different from the previous case, in which you were projecting all three at the same time and light was adding in space. Here, light gets integrated over time. You get the effect of color. So a third thing that I discussed previously, in the previous class, is this concept of three-CCD sensors, where you use these dichroic prisms and mirrors and you split light into the three wavelength regions. And then you have three separate sensors that capture the three wavelengths. And so, using three monochrome CCDs, you can capture a color image. This next one is probably what's the most popular in most still cameras, and also many of the digital cameras. And it's basically what's called the Bayer mosaic. That's because the person who invented it, from Kodak, his name was Bayer. And he had this patented in, I think, the mid or late '70s, which describes this technique. The basic idea is that instead of having monochrome pixels, you put these little filters, tiny filters, on top of the pixels. And for the filters, you have two green filters, and one red, and one blue filter in a 2 by 2 region. And then this 2 by 2 thing is tiled all over the sensor. So any one pixel is only sensing one wavelength-- either red, green, or blue. When I say wavelength, it's actually a whole band of wavelengths. But let's just stick with color for now. And so it's sensing just one of these three colors.
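As a minimal sketch of that three-exposure recombination (including the registration step the Library of Congress reconstructions need), assuming three hypothetical grayscale plate files shot through red, green, and blue filters-- the brute-force shift search below is the simplest possible alignment and assumes the plates only translate:

import numpy as np
from PIL import Image

def align(channel, reference, max_shift=15):
    # Brute-force search for the (dy, dx) roll that best correlates the plates.
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = np.sum(np.roll(channel, (dy, dx), axis=(0, 1)) * reference)
            if score > best:
                best, best_shift = score, (dy, dx)
    return np.roll(channel, best_shift, axis=(0, 1))

# Hypothetical file names: three grayscale plates shot through R, G, B filters.
r = np.asarray(Image.open("plate_red.png").convert("L"), dtype=float)
g = np.asarray(Image.open("plate_green.png").convert("L"), dtype=float)
b = np.asarray(Image.open("plate_blue.png").convert("L"), dtype=float)

# Align red and blue to the green plate, then stack into one RGB image.
rgb = np.dstack([align(r, g), g, align(b, g)]).clip(0, 255).astype(np.uint8)
Image.fromarray(rgb).save("recombined.png")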
And then they use clever demosaicing algorithms, which essentially interpolate between the samples that the sensor captures. So for example, at a pixel that senses only red, you would interpolate between the neighboring green pixels in order to estimate what the green color is over there, while still making use of the red sensed at the center. So what you see in most images is almost-- kind of-- you're hallucinating the higher resolution, because you are going up by a factor of 3 in the resolution in the image when you do this kind of interpolation. PROFESSOR: And so a 4 megapixel camera-- and here it looks like for every four pixels we have two green, one blue, and one red. So if we take a 4 megapixel picture, how many pixels are actually taking each of those colors? Is this four megapixel camera actually giving you 12 megapixels? Because when the image comes out, it's four megapixels in red, four megapixels in blue, and four megapixels in green. So what is going on? STUDENT: Well, I mean, you get the resolution for each of those colors, since they're separated in the space by the other colors being in space, as well. PROFESSOR: So in this 4 megapixels, there are only 4 million pixels total. STUDENT: Yeah, total. [INAUDIBLE] PROFESSOR: And then two megapixels are green-- one megapixel is red, one megapixel is blue. And so, if you compare this with the previous design, where you had three separate CCDs, one for each color-- what's the benefit, and what's the disadvantage? STUDENT: Money. Cheaper. It's cheaper. Yeah. PROFESSOR: Why is it cheaper? STUDENT: One CCD. STUDENT: And it's less optics. PROFESSOR: Less optics. Alignment is easy. What's the disadvantage? STUDENT: Plus you're blocking light. PROFESSOR: You're getting-- effectively getting one-third of the light, because every time you sense one color, the light for the other two colors is being thrown out. And when Mike talks a little bit later about some other multispectral cameras, this will become the biggest issue-- the notion of [INAUDIBLE] light. STUDENT: And also, can you perfectly reconstruct the image? PROFESSOR: So this one was an issue. So, most of the reconstruction, the interpolation that we just talked about, is based on some assumption about the natural scene. But if your scene is not natural-- maybe it's black and white text, or you have very fine stripes on your shirt-- in that case, the interpolation would not give you the right results. And you see it often. You can see this strange [INAUDIBLE]. ANKIT MOHAN: Yeah. So, one of the things they do in order to avoid this kind of thing is that they place a low-pass filter immediately on top of the sensor, so that you get rid of any such high-frequency details in a single color. But still, you get-- I mean, in many of these images, you do see color [INAUDIBLE], so you have some weird artifacts in one color channel and not in the others, or a rainbow effect. And that shows up especially near the edges, and so on. Someone-- did someone have a question? So, one of the things, as Ramesh mentioned, is that you are throwing away 1/3 or 2/3 of the light in doing the previous one. So, there is this recent patent that Nikon introduced where what they're doing is combining these two notions that we just discussed. They have these red, green, and blue sensors at the pixel level. But instead of using three separate filters, they use dichroic mirrors in order to separate the light that's falling on the sensor into the R, G, and B components.
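Going back to the demosaicing step described above, here is a minimal bilinear sketch, assuming an RGGB tile layout. Real camera pipelines use much smarter, edge-aware interpolation, so treat this as the simplest possible version:

import numpy as np

def convolve2d(img, k):
    # Tiny zero-padded 3x3 convolution so the sketch has no dependencies.
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    # Bilinear demosaic of an RGGB Bayer mosaic (2D array, even dimensions):
    # R at (0,0), G at (0,1) and (1,0), B at (1,1) of every 2x2 tile.
    h, w = raw.shape
    mask = np.zeros((h, w, 3))
    mask[0::2, 0::2, 0] = 1          # red sites
    mask[0::2, 1::2, 1] = 1          # green sites, even rows
    mask[1::2, 0::2, 1] = 1          # green sites, odd rows
    mask[1::2, 1::2, 2] = 1          # blue sites
    sparse = raw[..., None] * mask   # each channel, with holes where not sensed
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        # Normalized convolution: average whichever neighbors were actually sensed.
        num = convolve2d(sparse[..., c], kernel)
        den = convolve2d(mask[..., c], kernel)
        rgb[..., c] = num / np.maximum(den, 1e-9)
    return rgb

# Toy usage on a random mosaic; a real input would be the raw sensor readout.
print(demosaic_bilinear(np.random.rand(8, 8)).shape)   # (8, 8, 3)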
So, this design is similar to the first, three-CCD case in that they're using dichroic mirrors. They're not losing any light, but it's all happening at the sensor level. So, conceivably they can build this thing in the semiconductor itself. And so it's much cheaper than using prisms outside of it. PROFESSOR: Anybody familiar with dichroic materials? It's basically a type of [INAUDIBLE] in the simplest words. It's a type of glass where, if you shine light at the right angle, a particular wavelength will pass through and all the other wavelengths will be reflected. ANKIT MOHAN: It's essentially-- I think it's total internal reflection that it makes use of. And the threshold is different for different wavelengths-- when the light is going to get totally internally reflected or just keep going straight into the next material. And using a series of such mirrors, they can separate the three wavelengths. PROFESSOR: And in the break, we'll have the soap bubbles and then we'll demonstrate the same concept. ANKIT MOHAN: So, one other popular-- semi-popular-- sensor design is what's called the Foveon X3 sensor. And instead of using three separate pixels, each pixel having a different filter on top of it, the design of this sensor is very similar to that which is used in film. So there-- in film, they actually have three separate emulsions, or three or four separate layers of emulsions, each sensitive to a different wavelength of light. And similarly over here, as you keep going down deeper into the pixel, different depths of the pixel actually are sensitive to different wavelengths of light. So, the top region is more sensitive to blue, and the next part, the green, and the bottommost, to red. And so a single pixel can actually sense all three colors, all three wavelengths, as they're falling down on it. And the advantage is that you don't need to put this low-pass filter on top of it, because instead of spatial multiplexing, you're doing this multiplexing in depth. So, that's a big advantage. In terms of the resolution, yes-- you don't lose the resolution like you do in the case of a Bayer filter. But so far, when they quote the number of pixels, they simply multiply by 3. So, Sigma is the manufacturer that manufactures these Foveon sensors-- now they [? took ?] over Foveon. So, when they say they have a 12 megapixel sensor, it's really a four megapixel sensor. They just call it a 12 megapixel sensor because there are three elements for each pixel. So, it's not clear whether that's really-- again, in terms of the resolution. But a definite gain is that you don't need a low-pass filter over it, so you don't get these-- you get-- you're able to capture much higher-frequency information. PROFESSOR: So, you can't win either way. When traditional camera makers say it's four megapixels, you only get a two megapixel green image. ANKIT MOHAN: Right. PROFESSOR: And when they say it's 12 megapixels, you still get a four megapixel green image. ANKIT MOHAN: Right. But one other disadvantage of this is that, unlike the previous case where you can almost arbitrarily choose what filter you want over each pixel, in this case the separation between the red, green, and blue channels is not as great. They apparently don't have as much control. So, they need to do a lot of software processing on the image that's captured in order to separate the red, green, and blue ones.
Possibly, they're doing some sort of patch work based on some image prior, so you would have some image artifacts over there-- which might not be these moire artifacts, but you'll have some color artifacts. PROFESSOR: And the actual profile is very misleading, because when you see this diagram on the left, the blue is basically getting everything. So, it's something like-- this is your blue, green, and red. The blue pixel is getting almost everything, and the green pixel is getting a little bit less, and the red pixel is getting whatever remains. So, it's not-- it's not like blue, green, and red. So, the picture on the right is completely misleading. It's getting three values that are highly overlapped. And from that, they're going to do some inversion and figure it out, actually. So in the very simplest words we can think of, the first one here is everything. The second one here is just this, and the last one gets this. And from that, they can figure out what red and blue are. ANKIT MOHAN: So, I should also say that there's nothing really holy about this RGB design. It's just that this was the one that was proposed first. And this is what's being used. It's the most common. But there are a whole number of other so-called Bayer patterns that have been proposed which don't use this 2 by 2 tile. They have even bigger tiles. And the simplest one is red, green, and blue and clear. So, you get one pixel which gets all the wavelengths, and then red, green, and blue interpolations. And I'm sure there are studies that compare the various ones, because if you don't get RGB, you need to do some sort of inversion. And you would get noise in that inversion. But if you use [? cyan, ?] magenta, yellow, you would be able to get more light onto the sensor. So, I'm not very sure of-- I mean, I couldn't find any real studies that compare the various sensors or various-- PROFESSOR: Kodak has done a few studies. They have it on their website. ANKIT MOHAN: They do? I mean, it was always very-- I'm not sure. So, yeah. So, there are various trade-offs between the different kinds of sensors, but this is the only one that you find in practice right now. PROFESSOR: This will be a great class project, by the way-- figuring out which spectrum to choose depending on the scene. ANKIT MOHAN: OK. So, taking a step back from-- this was more a rehashing of what we already discussed before in terms of color sensing on cameras. It's important to look at what we are sensing when we sense light. And it's really a part of the electromagnetic spectrum, which is actually much, much wider and has many other types of rays than what we are usually looking at in photography, which is the visible spectrum. And the visible spectrum of light goes from 400 to 700 nanometers. Usually, when you talk about light, you talk in terms of wavelength. And in different-- in radio and microwave, you would probably talk in terms of frequency, or in terahertz radiation, and so on. But for visible light, you always say the wavelength is 400 to 700 nanometers, going from blue-- from blue to red. And it's interesting that this is really the only wavelength region that is almost completely-- the atmosphere is almost completely transparent to it. So the sunlight that comes through is-- other than maybe radio waves, almost all other wavelengths are actually occluded by the atmosphere.
And really, most of the natural illumination that you have is in this wavelength region, which is probably why humans and most animals actually developed their-- are tuned for this region and not any other. And another sort of interesting thing is, as you keep going away from this toward shorter wavelengths, the more dangerous the radiation becomes. Already starting from UV, you start getting cancer-causing rays and so on. And then X-rays and gamma rays are even more so. But if you go to larger wavelengths, they're usually harmless. PROFESSOR: And the way to remember that is, wavelength is inversely proportional to the frequency. And the product of the two-- what's the number when you multiply the wavelength of the spectrum by the frequency of the spectrum? That's the speed of light. What's the speed of light? [INTERPOSING VOICES] PROFESSOR: But in this class? [INTERPOSING VOICES] And so the frequency is increasing as you go to the right, which means it has more energy, [? h-nu, ?] which means it can penetrate deeper and damage more things. So, that's one easy way to remember what's going on on the right. Unfortunately, the chart is flipped, because we should think of wavelength increasing from left to right, but this is really showing the frequency increasing from left to right. ANKIT MOHAN: So yeah, I guess the point I want to make here is that it's only this region that we are looking at, which is really, really small compared to the whole EM spectrum. And there is lots of interesting stuff going on, especially in thermal IR, and even beyond that, in using it for imaging. And people are using Wi-Fi for imaging also, and things like that, which is [? a few ?] gigahertz or something like that. So, you don't have to be constrained-- even for creating a photorealistic or a visual image, for understanding what's around us, we shouldn't be limited to just the visible spectrum. It's OK to think outside it. And I'll show a few examples of how thinking outside the visual spectrum actually enables you to do a lot of things which you otherwise would not be able to do. So before that, I just wanted to talk about what a spectroscope is. And we have one of those here. So, I guess you could pass it around-- so, what a spectroscope really is, is nothing but a prism, essentially. A prism is basically this optical element that bends any incoming ray of light. And the interesting thing is, because this is refraction, the refractive index of glass is actually slightly different-- it's a function of the wavelength of light. So, if you build a prism of the right material, the right kind of glass, you can have a huge disparity in the refractive index between 400 and 700 nanometers. And so the light bends coming out of a prism. So, it says grating here, but it can be a prism or a grating, either of those. When the light comes out from here, the red and blue-- red, blue, and green, the different wavelengths-- actually bend in different directions. And if you have a detector placed in front of it, you can sense the intensity of the blue ray, of the green ray, and of the red ray separately. And you can sort of decompose an incoming source of light into its constituent wavelengths. PROFESSOR: You have a grating? ANKIT MOHAN: Yeah. PROFESSOR: So, I'll pass this around, and all you have to do is look through this hole here. And all it has is a slit in front. So the best thing for now is to look at one of the bright lights up here.
And very conveniently, there is a scale on this side that goes from 400 nanometers to 700 nanometers. So, the idea is that you can look at any point in the world and you can see its spectrum right away. So, unfortunately, we don't have fluorescent lights. ANKIT MOHAN: Actually, we do. STUDENT: [INAUDIBLE] PROFESSOR: Right. And you'll realize the [INAUDIBLE] is actually very spiky. It has, like, sharp blue and then a little bit of green-- a very, very annoying spectrum. It's not very nice. It's not like the sunlight. And just pass it around. I was going to pass around the flashlight, but I'm not sure how many people have good hand-eye coordination. ANKIT MOHAN: There's also this, which is a diffraction grating, and-- for the purpose of this dispersion, it's very similar to a prism. So, if you look through this, you will be able to see a whole rainbow of colors around any bright light. PROFESSOR: So, when you're looking at this, look at it through a particular angle, a particular orientation, and then rotate it. And as you rotate, you'll realize that it will fall down. STUDENT: Speaking of hand-eye coordination. PROFESSOR: Exactly. I have an excuse. I was sick this week. You'll see that the image actually shifts around. So, if I'm looking at Daniel, I see him. Then I see a red copy of him and a blue copy of him, shifted. I knew there was more than one of you. And if I rotate, it rotates. And just remember this principle when we come back and talk-- when Michael starts talking about different mechanisms for exploiting wavelength. Again, a great class project idea. ANKIT MOHAN: Yeah. So, this here just shows a typical spectrogram. You have the wavelength over here and the intensity over here on the y-axis. So, you can see those spikes over here at 550 nanometers and then again at 600 something. So this is probably a fluorescent light, or something similar, I guess. PROFESSOR: Maybe we can pass around-- [INTERPOSING VOICES] LED light. [INTERPOSING VOICES] ANKIT MOHAN: It will be very spiky. PROFESSOR: Yeah. That'll be really good to see. Take a look at this, [INAUDIBLE]. ANKIT MOHAN: OK. So, while one of you is looking-- so, this is stuff that Michael is going to get into in much more detail later. But I just wanted to mention-- are you going to talk about [INAUDIBLE]? MICHAEL: It won't hurt to do it twice, so go for it. ANKIT MOHAN: OK. So, this is just a very high-level version of what he's actually going to talk about. The spectroscope is really imaging the spectrum of a single point in space, like a light source over here. If you have a full scene, you are not going to get much out of a spectroscope. And what you need to do then is what's called multispectral imaging or hyperspectral imaging, depending on how many spectral bands you end up getting. And so there are these standard ways of doing that. And I just wanted to briefly mention what those are. So, usually this kind of multispectral imaging is very popular in remote sensing kinds of applications, where you have a plane or a satellite that's flying over a region and you want to get the multispectral or hyperspectral image, or data set, of what's on the ground underneath. So, what we have here is a lens kind of thing, which is on the plane, an image sensor behind it, and the object space, which is on the ground. And so the most straightforward way of doing this is what's called-- so, even this simple imaging, where you're not imaging the complete spectrum, you can do in various ways.
The first way is where you have a lens which completely images the object space onto an image, or 2D, sensor. So, this is what just a traditional camera looks like. You capture the whole scene in one go. Another way of doing this is what's called a push broom. And a push broom, the way I think of it, is you're pushing forward as the platform or the plane is moving in that direction. So first, when the plane is at a particular location, you are imaging this line in the scene. And then in the next instance, you will image this line, and then you will image this line, and so on. So as you're moving forward, you are imaging one line that's [INAUDIBLE] you. The other one is called the whisk broom, in which you are-- while you're moving forward, you're going from left to right and from right to left. And you're doing this whisk-broom kind of thing, as opposed to a push broom. And what you do with this whisk broom is that when you are imaging this one element, instead of just imaging onto one sensor, you pass the light through a prism, and you get a whole spectrum, similar to a spectroscope. And this gives you the complete spectral characteristics of this one element, or one point in the scene. And then in the next instance, you are going to sense the next element right next to it, as your whisk broom is going from left to right and so on. So, you're going to sense the whole scene. And in this whisk-broom case, you need just a 1D sensor in order to capture the complete data set of the hyperspectral image of the scene. The way the push-broom hyperspectral sensor works is that-- so, you put a prism in front of each of these elements, the image of each of these elements, and then you have a 2D sensor. And on the 2D sensor, along the x-axis is the point in space, or the point in the scene, and along the y-axis is the various spectral bands that you have for your hyperspectral imager. And then again, in the next instance, you will sense the next [INAUDIBLE] row in the scene. A third way of doing it, basically something that's similar to the traditional camera, is to put a filter in front of your lens and change the filter that's in front of your lens. Either have that color wheel, or have a tunable filter whose response is changing. But at any given instant, you're either sensing green, red, or blue. And then in the next instance, you're sensing the next one. It's similar to the first image we saw when I started the talk, of capturing multiple images with different wavelength filters in front of the camera. It's similar to that. So, you capture the whole scene in one instant, but only for one wavelength. So this summarizes what I was talking about: you have this what's called the data cube, or the object cube, where x and y are the scene coordinates, and lambda is the wavelength, where it goes from 400 to 700 or slightly beyond those extremes. You might also have near IR and so on. But this is what you want to get. And unfortunately-- so this is a 3D object. And unfortunately, our sensor can be at most a two-dimensional surface. So, you want to somehow get the three-dimensional data set onto a two-dimensional sensor. And there are various ways in which you can do that. And the most traditional, most obvious way of doing it is to have the third dimension be time.
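As a minimal sketch of building that (x, y, lambda) object cube over time in the push-broom style-- the scene function and all the sizes below are hypothetical stand-ins:

import numpy as np

H, W, L = 64, 64, 31   # scene rows, scene columns, number of spectral bands

def scene_radiance(y, x, band):
    # Hypothetical stand-in for the ground radiance at scene point (y, x)
    # in spectral band `band` (think 400 + 10*band nanometers).
    return np.sin(0.1 * x * (band + 1)) ** 2 + 0.01 * y

cube = np.zeros((H, W, L))     # the (x, y, lambda) data cube to be filled
for t in range(H):             # the platform flies forward one row per time step
    # Push broom: the prism spreads row t over all L bands onto a 2D sensor,
    # so one exposure yields a full (x, lambda) slice of the cube.
    xs = np.arange(W)[:, None]
    bands = np.arange(L)[None, :]
    cube[t] = scene_radiance(t, xs, bands)

print(cube.shape)   # (64, 64, 31): built up over time, one slice per instant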
So, at any one given instant, you are either getting a slice like this, or a slice like this, or a slice like this-- either along x, or along y, or along the wavelength. And then in the next instant, you get the next slice, and so on. And then you combine all this information together. So over a period of time, you build a complete object cube, or data set. So, this one is where you're using a filter. And this is where you're using a whisk broom or a push broom, depending on how you name the axes, x and y. So, this is sort of the more traditional way of doing multispectral scanning. And Michael is going to talk about even more interesting, fancier ways of doing it by just taking projections. But I'll let him get into that. So, I think Ramesh briefly mentioned thermal imaging. And can we get the lights off? So, this is what-- for those of you who've never seen a thermal camera, we have one upstairs. I think you're [INAUDIBLE] welcome to play with it. But this is what images through a typical thermal imager or thermal camera look like. This is sensing usually in the wavelengths in the range of 1 micron to about 10 or 15 microns. That's the range in which humans show up as warm bodies, usually-- so, a 6 to 8 micron kind of range. But you can also have hot bodies-- warm bodies show up, like for heat-seeking missiles and so on. This is something that's explored a lot in the defense industry, this use of high-resolution, very fast thermal cameras. But it's something that's very rapidly coming into other applications also. There was recently an article in Time. I think they were looking at the various-- the thermal profile of a house from outside, to find out where it's leaking and where the heat is escaping. And then they're using these kinds of thermal cameras to actually test whether pipes are leaking and your HVAC is working properly or not, and things like that. So, it's finding lots of applications in areas other than just traditional defense. And that's one of the reasons why, slowly, the price of these cameras, which used to be about $20,000 each, is hopefully coming down. This is another set of thermal images which I thought was interesting. The first one-- what it's showing is that when you think of thermal light, it's actually quite different from when you're looking at visible light. And one example, I think Ramesh already mentioned, is that of glass-- glass appears opaque in thermal IR. So, you can't look through glass. And there are other objects which may appear completely-- excuse me, transparent in thermal IR but are opaque in visible light. This is one example, which is sort of interesting. You have this fridge, which is brushed metal. And so in reflection, you see this very diffuse reflection. You can't really see what's on the other side. But if you just use thermal IR, you can very clearly see there is this rice cooker on the other side that's really, really hot. And the reflection is nice and sharp, because the wavelength that you're using is now not 700 nanometers but something much larger. And the surface is actually very smooth when you look at that wavelength. But when you're looking at it in the visible spectrum, it appears very diffuse and you can hardly see what the reflection is. PROFESSOR: Behaves like a mirror. ANKIT MOHAN: It behaves like a mirror in the thermal IR range because of the difference in wavelength.
So, this is sort of an interesting thing to think about-- applications where you can-- imagine you had one of these thermal imaging cameras on each cell phone, right next to your normal camera. What could you do with it? Or let's say you had it right next to your webcam on your laptop-- can you do something interesting with it? Another one is this paper from-- Colorado-- no, University-- PROFESSOR: Houston. ANKIT MOHAN: Houston, right. Where they use thermal cameras for lie detection. And they have this paper in Nature where they analyze the region around the eyes, and how that changes in the thermal IR range when someone is lying. And they claim that they're getting as good performance as a traditional lie detector. PROFESSOR: It's not easy to reproduce, unfortunately, the results. But it looks pretty interesting. And the whole concept is that the blood vessels pump more blood as your emotions change. So, as long as you can detect subcutaneous changes in blood flow, you can detect the correlated emotions. ANKIT MOHAN: I think the difference is really very, very subtle, and it's greatly magnified in this image that they show. So, when we tried to do this, we couldn't spot any difference [INAUDIBLE]. PROFESSOR: Of course, nobody lies in our group either. ANKIT MOHAN: So, one other thing-- when we were doing this last year, someone talked about near-infrared photography. And this is more from a photography point of view. It turns out most CCD sensors are actually sensitive to near IR-- infrared that's just after 700 nanometers, from 700 to about 1 micron. They are sensitive to that. But in fact, most manufacturers put an IR-block filter on top of the sensor that blocks anything that's greater than 700 nanometers. So, what you can do is remove that IR block, and you can then capture nice IR images like this. These, of course, are all fake colors, because you don't really get any sense of color once you go beyond 700 nanometers. So people usually just fill in fake colors based on an original visual image, or something like that. But another place where you can get these colors is if you use IR film. Kodak has this color IR film which reacts differently to different wavelengths beyond 700 nanometers. And that gives you these interesting colors. PROFESSOR: So these are not nighttime photos. These are just daytime photos with some pseudo color superimposed. ANKIT MOHAN: So, one interesting thing is that, I think, the sky becomes really, really black and opaque. That's because, if you remember, the sky does not allow IR to come through as much. But also, any vegetation becomes very bright and white. And that's why you have the snow-like effect on trees and things like that. But the bark of the trees and the [? soil ?] actually does not-- it actually absorbs more light. It does not reflect so much back. So it's just interesting things. And I mentioned this briefly earlier-- one of the biggest applications of this has traditionally been in remote sensing also. And by capturing multispectral or hyperspectral images of a scene, especially of vegetation, you can actually classify what crop is what, and what kind of-- where you have plantations and what kind of plantations. Over here, they're able to distinguish between all these different kinds of crops.
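One standard trick behind this kind of vegetation mapping is the normalized difference vegetation index, NDVI, which exploits exactly the near-IR brightness of plants mentioned above. A minimal sketch, assuming you already have co-registered red and near-IR bands as arrays (the sample numbers are made up):

import numpy as np

def ndvi(red, nir, eps=1e-9):
    # Normalized difference vegetation index: (NIR - R) / (NIR + R).
    # Healthy vegetation reflects strongly in near IR and absorbs red
    # (chlorophyll), so it scores near +1; soil and water score near 0 or below.
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 example: leafy pixels (left) vs. bare-soil pixels (right).
red = np.array([[0.05, 0.30], [0.05, 0.30]])
nir = np.array([[0.60, 0.35], [0.60, 0.35]])
print(ndvi(red, nir))   # vegetation pixels ~0.85, soil pixels ~0.08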
And this is something you can do if you have enough resolution in the multiple spectral bands that you're getting, just because each kind of vegetation or tree actually has a very different reflectance profile when you look at it. In other words, two plants may both appear green to the human eye, but if you look at the actual spectral response, it's quite different. Then you can distinguish between different materials based on that. STUDENT: This is not based on IR though? This-- ANKIT MOHAN: This is-- STUDENT: Visible spectrum? ANKIT MOHAN: This is-- I think it usually goes into near-IR at least, most of the remote sensing stuff. Because I think most of the interesting stuff, this kind of distinguishing thing, actually happens in IR. But they do include visible light. PROFESSOR: They might have a band of 5 nanometers or 10 nanometers and capture 20, 30 channels and [INAUDIBLE] high dimensional signal [INAUDIBLE]. ANKIT MOHAN: I'm not exactly sure. Do you know, Michael, what bands they usually use for this kind of [INAUDIBLE]? MICHAEL: Usually-- I mean, I don't know usually, but they don't-- it is definitely common to go into the IR and sometimes [INAUDIBLE]. ANKIT MOHAN: Right. So, talking about UV, you can also do interesting photography in the UV range. And this is an amazing website. If you get a chance, you should definitely visit it. This guy has pictures of all kinds of flowers, both in the visible spectrum and then in UV. And it turns out that the flowers look just amazingly different when you look at them in UV. And you have these-- almost these landing strips that invite the bees to come and sit-- they give directions. Don't sit here, sit over here, almost. Whereas if you look at it in the visible light, it's all yellow or it's all red and there's hardly any difference between-- you don't see that-- [INTERPOSING VOICES] PROFESSOR: One is especially striking. ANKIT MOHAN: Yeah. Yeah, there's [INAUDIBLE] portions which are-- I think, well again, this is all fake color. There really is no color in this UV. But one thing I want to point out here is that you cannot do this type of photography with most traditional cameras, because glass actually absorbs UV. So, they use these special rare-earth quartz lenses, which are super, super expensive, in order to do this kind of photography. OK, so-- PROFESSOR: What you could use, for example-- what Professor [INAUDIBLE] was talking about-- instead of using a lens, you could use a [INAUDIBLE] zone plate, which is like a pinhole camera, except glorified. It has a more interesting pattern. And then on the sensor, you could put a layer of fluorescent material, so that the UV will stimulate the fluorescence, and then make an image from that. So, you can kind of get around some of the limitations in a traditional camera to do UV photography. That's what they do, in a way, for a lot of medical imaging [INAUDIBLE]. ANKIT MOHAN: Also, a lot of these images that you find-- they all use film photography. They rarely use digital cameras for these kinds of things. I mean, even for near-IR, I think-- PROFESSOR: This [INAUDIBLE] is just one of the major crazes that has been around for several years. So remember, in the UV spectrum, these images are going to look just monochrome. It's black and white. But artists like to start assigning some colors to different intensity levels to make them seem more interesting.
And for NASA and for astronomy, [INAUDIBLE] sometime over 20 years ago, [INAUDIBLE] those pictures of the nebulas and all that-- instead of just putting them in really boring monochrome colors, let's start coloring them. Beautiful reddish hues and greenish lavas and [INAUDIBLE]. The real thing doesn't look like that. But it allows ordinary people to start appreciating astronomy. ANKIT MOHAN: Yeah, it's like the wallpaper that comes with OS X has that. OK. So, now switching gears a little bit. This is more getting into the human perception of color. And I'm not going to get into too many details, just a very high-level overview. So, many of you have looked at this figure before, this chromaticity diagram. This is the thing that people use in order to see how humans are actually sensing color. And what you have along the boundary over here is each of the spectrally pure colors, going from 400 to 700 nanometers, and everything inside. So, you have this blue over here, going to green, and then red over here. And anything that's inside is basically a combination of multiple of these colors that are on the outside. So-- this actually represents the space of all colors that the human eye can detect. It turns out that most devices actually occupy a much smaller space within this whole chromaticity diagram, this horseshoe-shaped thing. And so sRGB, which is the default color space of most CRT monitors and the default color space that's used on the internet, is actually a much smaller region. You can only represent colors that are inside this triangle in sRGB. Now, what's interesting with this color space is that once you have these three points, any point that's inside this triangle can actually be represented by a sum of just these three color primaries. So, that's what I mean by fixed color primaries-- most cameras that have a single green, single red, and single blue filter on the sensor actually have the green somewhere over here, blue somewhere here, red somewhere here. And that allows you to represent any color that lies within this triangle. Now, that's what the color response of traditional film looks like, and that's what it looks like for most cameras. So, it's very similar. There is very little difference between the color response of the two. And in both cases, the response is made to closely [? match that ?] of the human eye itself, except for in film, like this film, [INAUDIBLE], which I think is something that's very good for nature photography. So, they stress more on the red than what the digital camera does, just because-- it's a little unnatural, the colors. PROFESSOR: [INAUDIBLE] for ocean photography. ANKIT MOHAN: It's any landscape kind of thing-- sunrises and sunsets, nature. It doesn't work very well for skin tone, for example. So, this works fine as long as your color is within this triangle. But once you want to represent a color that's outside, it becomes-- it's not possible to do that, because these RGB values can only be between 0 and 1. So, there are a number of algorithms that you can use in order to estimate what the color should be. But each of them is a [INAUDIBLE] lose information, because you may project it to the nearest point, or you may project it to the perceptually nearest point, and all those kinds of things. But eventually, you end up losing this information.
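To make the "inside the triangle" test concrete, here is a minimal sketch using the published sRGB primary chromaticities; the barycentric point-in-triangle check is standard geometry, and the sample points are arbitrary:

import numpy as np

# Published (x, y) chromaticities of the sRGB primaries.
R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)

def in_srgb_gamut(x, y):
    # True if chromaticity (x, y) lies inside the sRGB primaries' triangle.
    p = np.array([x, y])
    a, b, c = map(np.array, (R, G, B))
    # Barycentric coordinates of p with respect to triangle (a, b, c).
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return u >= 0 and v >= 0 and (u + v) <= 1

print(in_srgb_gamut(0.31, 0.33))   # near the white point: True
print(in_srgb_gamut(0.08, 0.83))   # a saturated spectral green: False, out of gamut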
So, it turns out that all this region over here-- it's very hard to represent colors over here, just because your triangle is actually-- it's outside this triangle. So, one alternative is that instead of putting the color primary over here, you can put a color primary out there. So, now you have this really big color gamut, and you can represent all colors that lie inside this color space. Unfortunately, what that means is that your green is now very close to, let's say, 520 nanometers here. So, it's a very spectrally pure-- it's just one very narrow range of wavelengths, like that over there. And that means you have to use a very sharp wavelength profile-- an LED, or a laser, or something of that sort illuminating it, or a filter that's really dark. So, it turns out that these optimal color gamuts that you have there are a very good compromise between having a wide gamut and having filters that allow a large amount of power to come into the system. So, if you have these primaries very close to the edges, you end up throwing away too much of the light. You get very little light coming in. So, what you'd ideally want to do is have these adaptive color primaries. So, for a scene like this-- you don't have too many reds in this, for example-- you want the color primaries to be like this, just some arbitrary shape. And for a scene like this, you want them to be more like that. Rather than have the same set of color primaries for every image that you capture, you want to be a little more adaptive about it. STUDENT: [INAUDIBLE] ANKIT MOHAN: So, it's kind of like the equalizer, you mean? So, you want to be able to tweak the various wavelengths-- what wavelengths should be sensed more and what should be sensed less. PROFESSOR: And if you're playing pop music versus classical music, you may want to change your equalizer-- more bass, less treble, and so on. Because those settings on your equalizer are also boosting certain frequencies and attenuating other frequencies. In the case of cameras, the RGB has already been fixed [INAUDIBLE] three points. And those three points have been fixed. But maybe you ought to change the frequencies. ANKIT MOHAN: So, that's what I'm going to briefly talk about-- one way of doing that. And-- is everyone aware? Have you talked about [INAUDIBLE]? STUDENT: I have a question. So, here in this graph, the chromaticity diagram, it seems like you can represent it in a 2D plane, which would mean that two primaries are enough to define any color, because it's just a 2D plane. [INTERPOSING VOICES] ANKIT MOHAN: There's an intensity, which is-- this is a projection of the whole-- STUDENT: Oh, so it's actually a 3D thing. [INTERPOSING VOICES] PROFESSOR: But in terms of the color, you're right. You could represent it with two numbers, and the third one is the intensity. And that's why you have LUV or LAB, where L is luminance and there's A and B, which are chromaticity A and chromaticity B, or [INAUDIBLE] is the old one. ANKIT MOHAN: One of the things-- when I said this is the RGB color space, this is actually incomplete. Just defining RGB is not enough. You need to define the white point inside-- where the white is. And that's also something that's defined over there. And if you look at the third dimension, that's what represents the various levels of gray. So, in order to represent the intensity, you need that third axis. MICHAEL: And let me emphasize something that is easy to lose, I think, in this.
We tend to use the words color and wavelength interchangeably in some situations. And wavelength is a truly physical phenomenon. It's a property of the light itself. Color is specific to human vision, maybe other animals too. But it's different for them than it is for us. So, when we talk about turning wavelength into color, what we're talking about is the process of human perception. And so this plot, for example, is specific to human vision. If it were for some other creature or some physical device that we built, it would be a completely different plot. PROFESSOR: Exactly. So here's-- before we get onto the color sensing, here's an interesting puzzle. If you take a red laser pointer, which is what they're using right now-- and as Mike just reminded us, don't think of color, think of the wavelength. So, let's say it has a certain wavelength-- I don't know-- 680 nanometers, something like that? MICHAEL: 630 probably. PROFESSOR: 630. And if I just take the laser pointer and shine it into a piece of glass or a tank of water, what's going to happen to it? [INAUDIBLE] [INAUDIBLE] shine a laser in it. What happens to it? [INTERPOSING VOICES] It bends. And why does it bend? Does anything change about its physical properties? Of course, it does. Either the wavelength or speed or something has to change. The speed is actually decreasing. So c in air versus c in water is related by a factor of what? STUDENT: [INAUDIBLE] PROFESSOR: The refractive index. So, the speed is actually reduced by a factor of about 3/4. Now, we know that here, we have c equal to the wavelength of light times the frequency of light. I'm just talking about air. This is for the laser. And similarly here, we have wavelength in water and frequency in water. So, this has gone down by about 3/4. Something here also has to go down by a factor of about 3/4. Yes. STUDENT: [INAUDIBLE] frequency's fixed. ANKIT MOHAN: Yeah, the frequency [INAUDIBLE]. PROFESSOR: So, the wavelength has changed. So, that means from 630, you say? From 630, we have gone down to about 3/4 of that. So, that would be about 470. So now, this red laser actually has a wavelength of about 470 nanometers. And so we started from somewhere here, and the wavelength is actually now much further down. So does it mean that when you shine a laser in water, it's actually blue, because its wavelength now is about 470 nanometers? Is it blue? STUDENT: Well, we can't ever see that because we are [INAUDIBLE]. PROFESSOR: So, when it comes out, it's back to 630. [INTERPOSING VOICES] But when it's inside, it's about 470. MICHAEL: You could immerse the CCD in water, for example. ANKIT MOHAN: I guess we sense frequency more than wavelength. PROFESSOR: Very good. [INAUDIBLE] So, remember that message. Don't think about color, think about the physical properties. And the way to think about that is you may have different wavelengths, but they mean different things in different media. In air, if you really want to think about colors, in air, 630 is red. In water, about 470 is red. So, it gets too confusing. So, you can use colors when they're convenient. But when we start talking about physical interaction, it's better to talk about wavelengths or really something that remains constant, [INAUDIBLE]. ANKIT MOHAN: I think the important thing is that the energy of the ray remains the same. It's E is equal to h nu-- nu is the frequency. So, that's what the [INAUDIBLE] it's the energy of the electrons that get displaced.
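The arithmetic in that exchange, written out. The refractive index of water is about 1.33, so the in-medium factor is 1/n, roughly 3/4; the 630 nanometer figure is the one quoted above, and the rest is standard physics.

# The laser-in-water puzzle as arithmetic: frequency (and photon
# energy E = h * nu) stays fixed when light enters a medium; speed
# and wavelength both shrink by the refractive index n.
c = 3.0e8              # speed of light in air/vacuum, m/s
n_water = 1.33         # refractive index of water
lam_air = 630e-9       # red laser pointer wavelength in air, m

nu = c / lam_air                  # ~4.8e14 Hz, unchanged in the water
lam_water = lam_air / n_water     # ~474e-9 m, i.e. about 470 nm
print(nu, lam_water)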
But that's why I said, for whatever reason, when you're talking about visible light, people always like to talk about it in terms of wavelength. And it's not entirely correct, because as Ramesh pointed out, this figure is only true for air. It would be completely different if you were underwater. But it's just a convention that people like to follow, and that's what we're sticking to. PROFESSOR: [INAUDIBLE] to ask this question of your photographer friends, because they like to talk about colors. ANKIT MOHAN: So again, going back over here. As I was saying, that's why they have both the frequency and the wavelength over here. In different fields and for different purposes, you would use either frequency or wavelength. It's just that for photography, and especially for this kind of imaging in [INAUDIBLE] the visible region, people usually use wavelength and not the frequency or the energy. So, we wanted to try to come up with a way of having adaptive color primaries. And so I'm going to go through an analysis of an optical system that we developed. Everything that I'm going to discuss is going to be in what's called the flatland case. So, it's just in two dimensions, but it scales up to a real 3D or 4D case also. So, we start with a simple 1D signal, and this 1D signal has arbitrary intensity. Along here is the x position of the signal, and this is the intensity. And it's a white signal, which means it has all the visible frequencies between 400 and 700 nanometers. And the intensity of each one of those wavelengths is actually the same. So, that's what the wavelength profile, or the color profile, the spectral profile, looks like at any point between a and c. So, we take this signal and we put it in front of a pinhole. So, here's your signal and here's a pinhole. So, the pinhole essentially creates an inverted image of the signal. So, you have this a, b, c here, you get a prime, b prime, c prime over here. It's just a pinhole camera. So now, instead of positioning a film or sensor over here, we put a lens in this plane. And this lens essentially collimates all these rays, so you have a prime, b prime, c prime, just like an orthographic setup-- except it's inverted, which is not so important, but you have a ray coming in for each point in the scene. So, next we place a prism in front of this. And as I mentioned earlier, a prism actually bends the incoming ray, where the bending angle depends on the wavelength of light. So, what I'm going to show for simplicity over here is that when you have white light coming in, the ray corresponding to the green wavelength-- at 550 nanometers or something-- actually goes through straight without bending. And red gets bent upwards and blue gets bent downwards. Now, once again, I'm going against what Michael was saying. I'm calling it red, green, and blue, but it really is the wavelength, 400, 550, and 700 nanometers. I'm just going to call it red, green, blue because it's easier to talk about it that way. And the other thing to notice here is that I'm only drawing these three rays, but really, it's a whole fan of rays, because it's a continuum and it's a white object-- PROFESSOR: A whole rainbow basically. ANKIT MOHAN: You have the whole rainbow here. And I'm just drawing three of those rays. So, you would actually have a ray going down here and up there and anywhere between this blue and the red ray that we have.
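Why the prism fans the rays out at all: the refractive index of glass varies with wavelength. A rough sketch using the Cauchy approximation, with typical textbook coefficients for a borosilicate-like glass; the coefficients and the 15 degree apex angle are illustrative, not from the lecture's setup.

import numpy as np

# Cauchy dispersion model: n(lambda) = A + B / lambda^2.
A, B = 1.505, 4.2e3   # B in nm^2 (typical textbook-scale values)

def n(lambda_nm):
    return A + B / lambda_nm**2

# Thin-prism approximation: deviation ~ (n - 1) * apex_angle.
apex = np.deg2rad(15.0)
for lam in (450, 550, 650):
    print(lam, "nm ->", np.rad2deg((n(lam) - 1) * apex), "deg")
# Shorter wavelengths see a larger n and so bend more: the rainbow fan.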
So, now looking at this prism more closely, along this axis, we have the spatial points of the scene. So, you have a prime, b prime, and c prime. And coming out of each of those points, you have this wavelength axis, or lambda. Now, this figure should remind you of something. Anyone want to guess? Does this figure look familiar to-- [INTERPOSING VOICES] It's exactly like a light field, except the only difference is that instead of having theta over here, the angle, we replace it with lambda. So, this is something we call the spectral light field. Instead of the angular light field or the spatial light field-- the spectral light field. So, any point over here in x and lambda will represent the intensity of a ray in the space coming out of some point between a and c and going in a particular direction. And since we started with a white light source, it's going to have the same intensity along each wavelength. So, since we've reduced this to nothing more than just a light field, it turns out we can use the various properties of a light field in analyzing the system. And the one-- again, this is something that you should be familiar with if you know what a light field is-- is that if you place a screen somewhere in front of a light field, you get a projection of the light field on that screen, in a direction perpendicular to the screen in terms of the light field. So, what that means in the flatland case is that if I place a screen over here, this yellow thing, what that does to the light field is basically take a vertical projection, which is a direction perpendicular to the direction of the screen, which is along the x-axis. So, what we're getting for every point over here is an integration over all the wavelengths for that particular x position. And you essentially get the shape of the signal in this projection. So, if you were to place a screen over here, not surprisingly, you would get an image of the signal itself on the screen, because it's so close to the prism that these rays haven't dispersed much yet. It's almost the same thing as if there was no prism over there. But now as you move the screen away from the prism, the angle of projection changes and you get this shear. And because of the shear, the signal corresponding to the different wavelengths is now dispersed, and it's overlapping and shifted. And once again, I've shown just the blue, green, and red wavelengths here. But clearly, it's a continuum and you're getting this rainbow-like smear over here. And when you were looking through the diffraction grating, you could almost see this. You get a rainbow coming out of every point when you look through it. And that's similar to what you're getting over here. Now, if you move this plane away to infinity, when it's at infinity, you're going to get this horizontal projection through the spectral light field. And now, for each point on the screen, we are integrating over all the spatial positions, all the points on the signal, for each wavelength. So, what you get on the screen is nothing but a rainbow, because we started with a white light source. And you have something going from blue to red. Any questions? So, what we've done is that we have this one plane where, if we put a screen on that plane, we get the spectral characteristics of the signal that we started with. The problem with that, of course, is that this plane is infinitely far away.
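The two projections just described are easy to play with numerically. A minimal sketch with made-up dimensions (64 spatial samples, 31 wavelength bands): the sum along the wavelength axis is what a screen right at the prism sees, and the sum along the spatial axis is the rainbow you would get at infinity.

import numpy as np

# A toy spectral light field: axis 0 is scene position x, axis 1 is
# wavelength lambda. A white signal occupies part of the scene.
n_x, n_lam = 64, 31
L = np.zeros((n_x, n_lam))
L[16:48, :] = 1.0

image_near_prism = L.sum(axis=1)     # integrate over lambda: the signal's shape
rainbow_at_infinity = L.sum(axis=0)  # integrate over x: the scene's spectrum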
But we want to move it closer, so we place another lens in front of the prism. And this lens does two things. The first is that it creates a copy of the prism plane itself on this plane, at some distance from the lens, where basically this is imaged: c prime is imaged at c double prime, b prime at b double prime, and so on. So, similar to the x that you had there, you get an x over here. But you also get this plane in the middle, where each ray for each scene point of a particular wavelength actually converges and meets at a point on this plane. So, all the red rays are meeting here, green rays are meeting here, blue rays are meeting here, and so on. So, once again, looking at this, when you place a screen at this plane at the end, which is an image of the prism itself, you're getting this vertical projection. And this is an image of the scene itself. This is where we place our sensor. And we call this the sensor plane of the optical system. And if you look at this plane, which had this nice cute property of all the reds coming together, greens coming together, and so on, we actually get the horizontal projection. And this is the plane which was at infinity before. It's moved much closer than that. And you get this nice looking rainbow at this plane. And we call this the rainbow plane. So, the nice cute property of the rainbow plane is that all the rays of a given wavelength coming from all the points in the scene actually converge to a single unique point in this plane. So, just to give you a couple of examples, if instead of the signal being white it had been completely red, let's say between 650 and 700 nanometers or something like that, when you took this horizontal projection at TR, the rainbow plane, you would only get this part of the rainbow. You wouldn't get the remaining part of the rainbow. Similarly, if your signal was half blue and half red, you would get something like this at the rainbow plane. So, now if you place a mask in the rainbow plane-- let's say you block out all these red rays that are going through, you actually put an occluder in that plane-- what that essentially does is that it multiplies the incoming light field at this plane, which is this, with a light field which is all zeros corresponding to the wavelengths that you block and all ones corresponding to all the other wavelengths. And so what you get is basically what looks kind of like this. And now when you put the sensor at the sensor plane and you take the vertical projections, you don't get any of the red components. You only get the green and the blue components. So, essentially, by occluding the red channel over there-- or the red color or the red wavelengths-- you remove them from the light field that gets projected on the sensor. And you only get what looks like cyan. It's just green plus blue. If you were to do that to the green channel instead-- you occlude it, you have a big 0 over here-- you only get this plus this, and so it looks magenta, and so on. So, essentially, what you can do is place any arbitrary mask you want in this rainbow plane, and that will influence what colors get sensed by the sensor and what colors are not sensed by the sensor. So, by placing a mask in the rainbow plane, you can effectively control the spectral sensitivity of the whole imaging system-- of the camera-- by modulating what rays go through and what rays get occluded over there. So, that's what the whole optical system looks like.
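Continuing the toy sketch from above, the mask at the rainbow plane is just a per-wavelength multiplier applied before the final projection onto the sensor. The split of the bands into thirds is made up for illustration.

import numpy as np

# Same toy spectral light field as before.
n_x, n_lam = 64, 31
L = np.zeros((n_x, n_lam))
L[16:48, :] = 1.0

# A mask at the rainbow plane: ones pass a wavelength band, zeros block it.
mask = np.ones(n_lam)
mask[20:] = 0.0        # occlude the long-wavelength ("red") bands

# Vertical projection at the sensor plane: red never reaches the sensor,
# so the white patch comes out looking cyan (green plus blue only).
sensor_image = (L * mask).sum(axis=1)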
We have the lens, the pinhole, the prism, another lens that images the rainbow plane-- and we place a mask over there-- and then the image sensor itself. PROFESSOR: So what's the benefit of this, and what's the disadvantage compared to the other schemes we have seen before? Let's look at the disadvantage [INAUDIBLE]. STUDENT: Well, starting with the pinhole. PROFESSOR: That's a bad idea. [INTERPOSING VOICES] It's always a bad idea. A pinhole, when you're blocking light, is always a bad idea. So, does everybody see that? What would happen if the pinhole was made larger? STUDENT: Then you have to deal with focus problems [INAUDIBLE]. PROFESSOR: So, you'll get blur. But what kind of blur [INAUDIBLE]? STUDENT: You're getting a blur between your spectrum pieces exactly. PROFESSOR: The blur will be in the wavelength as opposed to in space. Because even if you make this pinhole larger-- conceptually, I guess you want to follow this through. STUDENT: Well, that's-- PROFESSOR: Well, just to get you thinking: if you increase that pinhole, you will still get an image at the end that looks sharp, just like [INAUDIBLE] image. Because we created two images. The prism plane is basically some kind of a virtual image. We formed an image on that. And the image of that is being formed on the sensor. So, there's no problem in terms of the focus blur. But the ability to control specific wavelengths has now decreased. So, there is a blur in terms of wavelengths; spectral copies are still missing. STUDENT: So what if we put in some kind of pattern to let in more light, but then we can [INAUDIBLE] out later. PROFESSOR: Some kind of [INAUDIBLE] aperture or something like that? STUDENT: Or mask or anything. PROFESSOR: Very good. Very good. Thank you. We haven't covered aperture yet, so-- ANKIT MOHAN: I briefly talk about it at the end. But yeah, you're right. You could do something like that to undo the effects of the blur arising from this. But it turns out that this is a different kind of trade-off from what we saw earlier, where we were doing the scanning in time. When you're scanning in time, it's almost like having a pinhole in time. And then if you have scene motion, you would have blur due to that. In this case, you have a pinhole in space. So, you're getting all of this in one shot, but you're throwing away light over there, because you can't have a very large aperture. And with some sort of reasonably-not-too-tiny aperture, you can get something like six or 10 different wavelengths over here, which is not great, but it's something. It gives you a control: you can trade the aperture size against the fidelity of the wavelengths that you're going to get here. So, that's sort of the setup that we built. You have the image sensor over here, which is nothing but a standard camera. This is the [INAUDIBLE]-- the mask that controls what wavelengths go through and what don't-- and then a bunch of lenses over here. And that's the diffraction grating that bends everything. So, one thing you would notice is that there is a bend in the optical axis, something that I didn't show in the optical diagram so far. But it's, again, something you have to take care of if you're actually building the system. So, you want me to go through all the examples, or I mean? So, this is a simple test setup that we built. And the idea was we had a spectral rainbow generator.
So, we have this rainbow which is going from red to blue. It's kind of like that. And then we are imaging this with our agile spectrum camera. Now, notice that the color that you see here is because of the color of the Bayer sensor on the camera. And really, if you were to use a monochrome sensor, it would be all gray, just everything with equal intensity. So, first, when we block off a certain wavelength-- let's say about 606 nanometers over here-- you see a corresponding gap in the image that's captured by the camera. And it gives you an indication that it is blocking off in the right range, and the Bayer filter actually helps you see that here. And if you put some other arbitrary mask that's actually blocking off in two regions, you get a similar, corresponding image on the sensor. It looks very different over here. It's more blue over here and so on. So, one of the applications of this thing that we worked on was trying to reduce the glare coming out of this LED. So, it's almost impossible to see here, but you actually have text written in the background, which is some EG over here in the background. And then you have this bright LED in the foreground. And you want to be able to capture both the background and the foreground at the same time. And the background is much, much darker than the foreground. So, if you were to do traditional high dynamic range imaging, if you increase the exposure much more, this halo that's coming out of this LED starts to occupy even more parts of the scene. And you can't really see the background. If you decrease the exposure, then this halo or this artifact reduces. But then the background becomes even darker. So, what you really want to do is just occlude this LED-- the effect of the LED-- and be able to see the background. And you can do that-- you still don't see it. So, you can do that by blocking out these wavelengths corresponding to the LED. And now the LED is much dimmer. And actually, if you look at it over here, you can clearly read the background. So-- PROFESSOR: This is strange. Now, we can see [INAUDIBLE]. [INTERPOSING VOICES] ANKIT MOHAN: So, you can see the background a little bit over there, but this way. So that's-- it's doing high dynamic range, but by modulating the spectral profile of the scene rather than by cutting off all wavelengths equally-- cutting off just certain wavelengths that you know [INAUDIBLE] the scene that are causing this disturbance in the scene. So, it turns out that you can build a similar projector also. That was all for a camera, but you can also use this as a projection system. So, now we have a traditional projector, a diffraction grating, this lens, and then you have this rainbow plane, you have the mask, and then the screen is up there. And the advantage of doing it with this projector system is that, it turns out, you already have a pinhole inside the projector. And most projectors actually have a very long optical path between the light source and the projection lens in order to increase the depth of field that they get when they project the image. And it turns out we didn't even have to stop down the lens or anything of that sort. And it just worked for a standard projector, without modifying it much. So, this is a cute example, where the thing I wanted to mention over here is this concept of metamers. I don't know if that's been discussed.
But metamers are any two colors which appear the same to the human eye, or a camera, when viewed under a certain type of illumination. So, this is the white illumination of the scene, and you probably can't see this-- like this orange cloth or blue cloth and so on. And many of the colors over here, you can probably see there. It's very hard to distinguish between the start and the [INAUDIBLE]-- [INTERPOSING VOICES] PROFESSOR: Those are 3M stickies, right? ANKIT MOHAN: Yeah. These are all stickies. So, these are actually fluorescent-- these envelopes, I think. And if you look over here, you'll see, as we project different wavelengths, how these colors appear almost black and very, very similar to one another under different illumination. And now, one of them is going to become brighter than the other. And you can actually see the difference. So, it's very interesting to see the progression. It gives you an intuition that there's much more to the wavelength profile than what the human eye simply sees in white light. STUDENT: So, can you just explain again what happened during this animation? ANKIT MOHAN: Well, I'm just projecting monochromatic, very narrow wavelengths of light. PROFESSOR: [INAUDIBLE] projecting red light. ANKIT MOHAN: Yes, yes. So red, green, and blue. So, about 12 or 15 different wavelengths. PROFESSOR: So, it starts with very reddish, slightly yellowish, then greenish, then magenta, and then finally, bluish-- not magenta, cyan and then bluish. ANKIT MOHAN: It just shows you some of the-- I mean, if you look at the background, doesn't this look so different from this envelope, whereas over here, it's actually a very similar color in white light. PROFESSOR: And these papers are very fluorescent intentionally, so they look very beautiful when they're on your tabletop. But they're also responding to very narrow wavelengths of light. So, that's why some of these envelopes become completely dark when you illuminate with one particular wavelength. STUDENT: I have a question. Could it have been possible for you to just take an image and then use different channels to generate those 15 images that you generated? Did you have to definitely take them at different illuminations, or-- ANKIT MOHAN: Or if you have a camera that captures all 15-- [INTERPOSING VOICES] MICHAEL: Broadband illumination. Those are the two things you need, broadband illumination and a camera that captures-- [INTERPOSING VOICES] PROFESSOR: 15 spectra. If you take only three spectra, then you only have three numbers per pixel. So, there is no way you can recover the 15. And if you have a camera like this, then [INAUDIBLE] other applications, like distinguishing between a fake vegetable and a real vegetable, and freshness of skin, and looking through fog-- all those-- not fog maybe, but some of the problems become manageable. And film and even digital photography is just trying to mimic the human eye: three colors, in fact, three fixed colors, as Ankit was saying. But what you really want is something like [INAUDIBLE] equalizer. You can tune any frequency, any wavelength, so you can see the world in interesting ways. ANKIT MOHAN: So, this is another example of where adaptive color primaries would help. And as I mentioned earlier, cyan is one of the hardest colors to produce for most projectors, because your color gamut is usually over there, and there's very little representation from there. And it turns out there are two definitions of cyan.
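A minimal numerical sketch of metamerism under a toy camera. The Gaussian sensitivities and spectra below are invented for illustration, and the least-squares step just manufactures a second spectrum with a near-identical broadband response; the narrowband probe then plays the role of the projected single wavelengths in the demo.

import numpy as np

lam = np.arange(400, 701, 10, dtype=float)

def gauss(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

R, G, B = gauss(600, 40), gauss(550, 40), gauss(450, 40)  # toy camera filters

def response(spec):
    return np.array([spec @ R, spec @ G, spec @ B])

spec1 = gauss(580, 15)                       # a single yellow bump

# Solve for a green+red mixture whose RGB response matches spec1's.
M = np.stack([response(gauss(550, 15)), response(gauss(610, 15))]).T
w, *_ = np.linalg.lstsq(M, response(spec1), rcond=None)
spec2 = w[0] * gauss(550, 15) + w[1] * gauss(610, 15)

print(response(spec1), response(spec2))      # near-identical: metamers here
probe = gauss(610, 5)                        # one narrow illumination band
print(spec1 @ probe, spec2 @ probe)          # now clearly different

Under broadband light the two spectra are indistinguishable to this camera; the single narrowband probe pulls them apart, which is exactly what the scanning-wavelength demo shows.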
One definition of cyan is the traditional printing cyan, and the other is what's called the electronic or electric cyan. So what I'm projecting here is just a ramp between blue and green. So, this is blue here and green, and colors in between. And this is what the computer thinks of as cyan at the top and bottom. And you can see there's clearly a leak over here. And that's because along this line is what I'm projecting over here-- this line. And so this cyan is nothing but a point that lies somewhere on this line. So, you cannot project a color which is out here using a standard projector or a display. But if you use this [INAUDIBLE] spectrum, you can tweak it so that your color over here is actually something that's outside the color gamut that the projector could otherwise have displayed. That's because you're not using the filters that the projector was using, but actually displaying something outside it. PROFESSOR: So, some companies are also trying to sell you four or six color projectors. So, they'll have this point here, maybe another point here, another point here, here, here, and here, so they can cover more colors than a standard three color projector. Because with a three color projector, you must pick some three points. And you cannot represent this shape with three points. If you start going too close over here-- if you take this point too close here-- then, as Ankit said earlier, it has to be very pure green. And that's difficult to generate unless you have a laser projector. STUDENT: So, do the laser projectors have a wider color gamut? PROFESSOR: Exactly. So, the laser projectors can stay almost on the pure colors. So they can go all the way here and here and here. So, they can cover a larger gamut. But there are other issues with that. STUDENT: So, they usually end up with higher contrast, but they have less brightness, or-- how do they usually do? PROFESSOR: They have to have good color rendition. STUDENT: Yeah. PROFESSOR: [INAUDIBLE] a problem. [INTERPOSING VOICES] Yeah, so in general, a laser projector is going to create a very nice rendition. It's just that it's not compatible directly with human vision. The sensitivity of the human eye for pure colors is not so great, unfortunately. ANKIT MOHAN: And there are some constraints on the wavelengths that you can use. You can't just choose any wavelength. Lasers come at very specific wavelengths, and then they use frequency doublers or something to get the other colors. It's just a little more cumbersome to do that. And then there's the issue of power that you need if you want the laser projector to be bright enough. You need to have very bright lasers. MICHAEL: Yeah. The amount of optical power that a projector throws out compared to a typical laser is quite large. So these things actually put out a lot of power compared to your laser pointer, for example. So, getting a laser bright enough to generate this much light-- yeah, that is challenging. ANKIT MOHAN: Yeah. I mean, this looks bright because it's illuminating just one point, the laser pointer. But if you were to actually distribute this over the whole region, you would barely see it. STUDENT: Most of them are scanning a single point, aren't they? Aren't most laser projectors-- PROFESSOR: Yes, exactly. ANKIT MOHAN: Yeah. So, it's as though you're actually distributing this over the whole space. PROFESSOR: So if you have a million pixels in one frame, every pixel is illuminated only one millionth of the frame.
MICHAEL: That's the trade-off, right? You still need a lot of power, because you're not spending much time in any one place. PROFESSOR: [INAUDIBLE] problem? I'm very, very hopeful that solid-state lasers will-- MICHAEL: Actually, that's [INAUDIBLE]. If we assume for the moment that the laser pointer is as bright as the red chunk of that slide you were showing us, like on the previous one-- if it's roughly as bright, same ballpark-- but that's roughly a megapixel image-- PROFESSOR: A million times more-- MICHAEL: About a million times more power. ANKIT MOHAN: Yeah. PROFESSOR: Which is true. This is about one milliwatt. And the projector is 250 watts. It's 250,000 times more the [INAUDIBLE] compared to the [INAUDIBLE]. ANKIT MOHAN: I think laser projectors work great if you are in this room where it's not too bright and you had the projector here [INAUDIBLE]. It works reasonably well, but if you have the lights on, you probably won't see anything. So, another thing you can do with this-- since you're not stuck with those three color primaries anymore. So, this is just a scene again. It doesn't show up there as well. You have blue over here and you have yellow over here. And so traditionally, you would have an RGB filter where, for yellow, you're turning it on when the red and the green color filters are in front of your projection. And for the blue, you are turning it on only when the blue part of the color wheel is in front. But if you know that your scene has just yellow and blue, you can actually just project yellow. You can use a yellow and blue projector. And you don't have to use the same traditional fixed color primaries. And this will give you a brighter scene, and one that has more saturation also. So, one other cute example is that of colorblindness simulation. [INAUDIBLE] Professor Emanuel has done a lot of work also in this area, I think. And the idea over here is-- so one of the most common types of colorblindness is where you can't tell the difference between red and green. And I think it's called [INAUDIBLE]. And so when you have white light illumination-- I think, on the right-- so, the lower ones are due to [INAUDIBLE], so you cannot tell the difference between the rose and the leaves, and they both appear the same color. Again, it's much easier to see over here. But if you actually project a certain wavelength of light on the object, it becomes much easier to tell the difference between these two. And you can again clearly see on the-- [INTERPOSING VOICES] PROFESSOR: [INAUDIBLE] ANKIT MOHAN: Yeah. You can actually see the difference between those two when you have the certain projection. So conceivably, using these kinds of projections or using these kinds of filters, you can help solve the problem. You can at least get some help in telling the difference between colors for people with colorblindness. So, [INAUDIBLE] go over some of the limitations. There are diffraction artifacts. Because you have a pinhole, you need a reasonably small f-number in order to get a large number-- or even some-- different color bands. And those are basically the limitations. And so for future work, one piece of which you just mentioned, is actually using a mask over here. And I'll go into that just a little bit. And one interesting, cute application-- there's actually a company which does this-- they use this RGB wavelength multiplexing in order to do the left and right eye separation. So, they project using-- PROFESSOR: [INAUDIBLE] ANKIT MOHAN: Left and right.
Left and right. And so they use these projectors where, when you view them with the naked eye, both this and this appear blue, because they're both in the blue wavelengths. But there are actually different parts of the wavelength range being occupied by those two projections. And if you put a color filter in front of your eyes, a different one for each eye, you can have the left eye only see this part and the right eye see that part. So, when you view it visually, it would appear just like a normal projection. But if you put on the eyeglasses, you can actually distinguish between the two, and you have multiplexing in wavelength rather than in time or in polarization. So far, we've talked about what's called the spectral light field. But it turns out you can actually combine the concept of a light field with the spectral light field. So, what we have here is a traditional [INAUDIBLE], now it's x and theta. And you've placed a prism in front of this. So, what this prism has done is, in a direction that depends on the distance between the x plane and the prism plane, you're actually going out and smearing it and getting all the wavelengths in that direction. So, it gives you what looks kind of like that, or something like that. So, you have a whole bunch of overlapping spectral light fields on a spatial light field-- or you can think of it the other way. You have a whole bunch of overlapping spatial light fields of a spectral light field. So, I mean, you can either put x theta here and lambda here, or you can replace it and put lambda here and theta here. And it's really not very different. But what becomes interesting is the case where this plane is diffuse, in which case each of these rays has exactly the same intensity, because it's a [INAUDIBLE] or diffuse scene. So in this case, the light field that you get over here is the special light field that we call a blurred light field. That's because you're taking the standard spectral light field and then blurring it in this direction. So, what you could do-- this is what Kevin was pointing out-- is, if this blurring is not just a box function but a specially coded function, you could then invert that, and you could get the full spectral light field out of it without having to use a pinhole. So, this is what that design looks like. So, before I get into that, one of the things I didn't talk about is that by putting a pinhole and a light field camera over here, you can, simply by using a standard light field camera, capture a multi-spectral image. So you can build a multi-spectral image by taking a light field camera and putting a prism, a lens, and a pinhole in front of it. So, what you're essentially doing here is getting rid of the theta component of the light field and replacing it with the lambda component. So, once again, you had this light field, and now you put a light field camera in front of it, and you can sense that data. And these are initial results that we had with that, where-- it's basically a microlens [INAUDIBLE] that's placed next to the sensor, and so you can see you still have the Bayer mosaic. So, you have this red to blue band with green in between, which is what it's sensing. But another thing you can do is put in what's called a linearly variable interference filter. So, this filter allows only red wavelengths to come through at one end, green in the middle, and blue at the other end. And this is a very spectrally pure filter.
It's basically built out of multiple layers of coating, using interference. And what this does is that it replaces the theta component of the light field with the lambda component of the spectral light field. And now if you put a light field camera in front of it, you can again capture the complete multi-spectral image. The trade-off over here is that any one point on this filter allows only one very narrow wavelength to go through. So, it's replacing a pinhole in space with a pinhole in wavelength. And so again, you're going to get very little light. But what you can also do is, instead of that filter, place one of these carefully designed masks, which are similar to the masks we use in the [INAUDIBLE] approach-- but I guess we haven't talked about that. PROFESSOR: No, we haven't. ANKIT MOHAN: So, you can do this thing and then use deconvolution in order to recover the multispectral image. PROFESSOR: Let's take a short break and then we can-- ANKIT MOHAN: I just have this thermography stuff that you said-- PROFESSOR: Yeah, we'll do it after the break. ANKIT MOHAN: OK, cool. PROFESSOR: Cool, excellent.
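The closing idea of the lecture-- blur the spectrum with a known coded mask instead of a pinhole, then deconvolve-- can be sketched in one dimension. Everything below (the random binary code, the regularization constant) is illustrative, not the mask from the talk; it is just a generic Wiener-deconvolution demonstration of why a broad code can replace a light-starved pinhole.

import numpy as np

rng = np.random.default_rng(0)
spectrum = rng.random(64)                        # unknown spectral signal
code = rng.integers(0, 2, size=64).astype(float) # broad binary mask pattern

# Forward model: circular convolution of the spectrum with the code.
C = np.fft.fft(code)
blurred = np.real(np.fft.ifft(np.fft.fft(spectrum) * C))

# Wiener filter: conj(C) / (|C|^2 + eps) regularizes near-zero frequencies.
W = np.conj(C) / (np.abs(C) ** 2 + 1e-3)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * W))
print(np.max(np.abs(recovered - spectrum)))      # small (nonzero) error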
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
10_Question_and_Answer_Session_3.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: So I think I've talked about this before a little bit. But as you know, longevity in the developed countries was increasing about one year every four since the introduction of antibiotics. So I suppose a moderate fraction of that is due to the advances in killing germs and developing new vaccines and so forth. The lifespan in underdeveloped countries is much more affected than in the developed ones by things like tuberculosis, and malaria, and AIDS, and a few things like that. But what I expect is that there'll be a bunch of breakthroughs in the next few years-- by few, I mean 20 or 30. There has been an almost alarming lack of development of new antibiotics in the last 20 years, so the conquest of infectious diseases actually slowed down in recent years, and now we're getting into some strange problems, with some of the drugs suddenly hugely increasing in price because of no one knows exactly what. But I'm beginning to wonder whether the patent system should be removed for developments in public health, just because the pharmaceutical companies claim that evaluating a new drug is very expensive, on the order of $1 billion. And so that's 1,000 million, and so they have to price the new drugs at very high rates. But I suspect that if you took the evaluation out of the hands of commerce and set up some other kind of system, then we could collect public health records on everybody in the world-- or all the people on Facebook, which is almost the same thing-- and find some way to collect medical records efficiently at much lower cost. And then when there's a new drug, you evaluate it as best you can on guinea pigs and Drosophila and rats and people like that, and then let people volunteer, maybe pay them, and collect a lot more data much more quickly. So, anyway, if it doesn't happen in the United States, maybe some other countries like China will face the problem of rising health costs and do something about it. Who had something to discuss? AUDIENCE: So I have a quick question. [INAUDIBLE] If we in the future decide to let robots take care of the elderly, are we also going to trust them to take care of our children and, if so, to educate them? And with that, in the future, are artificial intelligences going to become teachers? MARVIN MINSKY: Well, the arithmetic is sort of funny, because there'll be a huge number of elderly, because we have no idea how productive the people between age 100 and 200 will be. And presumably, if we're talking about the next 100 years, there'll be the consequences of global warming and things like that. So I don't know how many children there'll be. But once the machines are smart enough, then the idea of smart educational programs will-- I mean, just imagine if a teacher knew most of what you know. Then, of course, somebody would have to design or make some plans for how to adapt to each child and ask what the child should learn next. But it's not clear that what teachers do with classes of 20 to 40 people-- it's not clear that the decisions they make are particularly good either. So, yeah? AUDIENCE: At that point, what's left for humans to do? MARVIN MINSKY: Well, you mean as soon as the machines are smarter than the people? AUDIENCE: Sure.
MARVIN MINSKY: Well, you could ask what's left for Tyrannosaurus rex to do. What do you think people should do? AUDIENCE: I guess the usual answer is, you know, let robots take care of the intellectual things or, I guess, the more workplace things. We'll do the leisurely, having-fun, and artistic activities. But robots might take part in those too. MARVIN MINSKY: Well, yeah, I think you're asking a very hard question, namely, what should we be thinking about our future. Because along with the next 100 years of making brains healthier-- no one knows how to fix Alzheimer's, for example, and that's becoming maybe the most expensive single disease now that all of the usual diseases have disappeared. So one solution-- which has been discussed for years by people like Hans Moravec and a few others-- is, can you scan the human brain and copy it into a ROM and then become immune to biological diseases? Because once you've got the data, then you can write all sorts of software to prevent software viruses. And so I think in the next 100 years, we might learn enough about how things are stored in the brain to start copying the brain, and that's another way out-- or in, or whatever you want to call it. Who wants to be copied into a Macintosh? And then how many copies should you have, and what do you want to do with the backups? Didn't I mention Arthur Clarke's The City and the Stars, the novel he wrote twice? One was Against the Fall of Night, and the other was The City and the Stars, and I can't remember which was later. But he rewrote the first novel because he realized computers were going to be important. But in any case, that was the story in which the planet Earth has maybe a billion people, but all but a few million are stored in ROMs here and there, and you come out every 1,000 years and live 100 years-- or I forget the arithmetic. In fact, when you ask a question like, what should we do with people-- my usual instinct is to just say, as far as I know, the only people who have thought a lot about that are in that little-known world called science fiction: people like Larry Niven and David Brin and Greg Egan and Greg Benford. And there's maybe two dozen people who, in my view, have better ideas about things that might happen in the future, and what we might do about it, than any of the reputable philosophers or sociologists or futurists, who all seem like they're very good at looking seven years ahead. And no respectable futurist looks 100 years ahead. There are too many possibilities. In fact, I once gave a talk on this telepresence stuff, and it was at a meeting, and the writer Larry Niven was there, and somebody ordered him to write an article on what I said about future robots. And when it came out, it was a pleasure to see how articulate I had been. There's nothing like having a really first-rate writer paraphrase what you said. Are there any great writers at MIT? You? AUDIENCE: Oh, no. [LAUGHTER] MARVIN MINSKY: Who's the best writer here? AUDIENCE: Well, really? [INAUDIBLE] MARVIN MINSKY: That's silly. Right. There's no best. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: What's the name of the science fiction writer who used to? AUDIENCE: Joe Haldeman. MARVIN MINSKY: Joe Haldeman, he's a good futurist. Is he still active? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: What? AUDIENCE: Yes, he teaches a class. MARVIN MINSKY: Yeah. Well, go to it. So, anyway, that Chinese situation is interesting. I don't know what they mean by a million robots.
I sort of assembled the slide about the current situation, and I can't find it. I haven't seen any really exciting advances in robotics for quite a while, and it'd be nice to know if this Foxconn company has anything really interesting. But how many human hours does it take to make an iPhone? Any of you have-- how much labor is involved in-- AUDIENCE: Including the design? Or just manufacturing? MARVIN MINSKY: No. The manufacturing. Anybody know what is the manufacturing cost of the thing? Is it $5 or $100? AUDIENCE: I think it's like $200 and change [INAUDIBLE] That's for the flash storage. MARVIN MINSKY: Well, flash is-- AUDIENCE: It seems to be around $180. AUDIENCE: That's what it says on the [INAUDIBLE]. MARVIN MINSKY: I guess, probably, the Apple ones are very up-to-date, so they haven't had the cost reduction of mass production. But I wonder how many hours of work it takes to make the thing. Can you ask Google that? AUDIENCE: So they estimate that it costs $8 to manufacture, with the majority of the rest of the cost being parts. So if you want to estimate based on what the going rate for Chinese labor is these days, then I figured out eight hours-- I mean, $8. It probably can't be significantly more than eight hours or so. MARVIN MINSKY: Well, just popping all the things into the box can't be terribly hard, but making the chips. At the computer store-- I was there the other day, and it said only two hard disks to a customer. And this is because there's a flood in Thailand, and something like half the hard disks in the world are made in one small area there. It says only two to a customer. But what's the ratio of flash memory cost to hard disk like? Hard disks are about $1 a gigabyte, or half that. They're almost comparable. Is that possible? AUDIENCE: A hard disk probably costs less than $1 a gigabyte. You can get like 1 terabyte at $80. MARVIN MINSKY: How much is a terabyte? AUDIENCE: Like $80. MARVIN MINSKY: OK, great, that sounds familiar. How much memory do you think you need? I made some estimates in an old article, but I don't have much confidence in them. Allen Newell did a survey among psychologists and literature about memory, and he concluded that if you take all the published psychological experiments that he could find, then over a long period, no human could memorize more than one bit every two seconds. And there's 30 million seconds in a year. So that's about 2 megabytes. But you're not memorizing all the time, or maybe you are. So if it's just a megabyte or two per year, then in 100 years, it's just a couple of hundred megabytes. So copying a human should be really easy if you only knew what the bytes were and what they represent, et cetera. AUDIENCE: And it seems [INAUDIBLE], but I agree with that pretty low estimate, because I think our compression algorithms are very, very good in terms of connecting things to other things, as opposed to memorizing new things. So we probably have [INAUDIBLE]. MARVIN MINSKY: Yeah. Now, probably those experiments don't say much about, for each of these items, how many others it gets connected to. So-- AUDIENCE: How many bits is each connection? Because those sort of [INAUDIBLE]. MARVIN MINSKY: Yeah, because if something can be connected to 30 million others, then that's 25 bits for each. It's not so much. So if you believe that stuff, that means that making a machine that knows as much as a person shouldn't be very hard in terms of manufacturing.
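Newell's back-of-the-envelope, written out; the figures are the ones quoted in the discussion.

import math

# One bit per two seconds, ~30 million seconds in a year.
seconds_per_year = 30_000_000
bits_per_year = seconds_per_year / 2              # 15 million bits
mb_per_year = bits_per_year / 8 / 1e6             # ~1.9 megabytes per year
print(mb_per_year, 100 * mb_per_year)             # ~2 MB/year, ~190 MB/century

# And the "25 bits per connection" remark: picking one item out of
# 30 million takes log2(30,000,000) bits.
print(math.log2(30_000_000))                      # ~24.8 bits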
It's just that we don't really have the slightest idea of how to represent that knowledge. And-- I mean, I've got a slide here somewhere about what do you do with a fragment of knowledge? Well, how do you retrieve it? You ought to know something about-- it has to be linked to something about what kind of problems it might be relevant to solving and what kind of purposes it might serve to activate that statement or fact. Generally, I sort of agree with the old theories of Roger Schank that it doesn't make much sense to store things like facts-- although maybe a few million of them, like books are made of paper, usually, makes sense. But Schank's idea was that it would be very hard to use random fragments of knowledge unless they were connected into stories. And what's a story? A story is not just a string of sentences. For a story to be called a story, it must engage the listener in a certain way, or else they'd say, why are you telling me these strings of assertions? So what's in a standard story? Most stories are about people, and there's usually a main character, maybe two, but there's a point of view usually, and that's because it's not just a character, but it's a character with a problem. So a story starts out with a situation, or develops one. And, typically, then something goes wrong. If nothing goes wrong, then you're back to the, why are you wasting my time with this? So if some problem comes up, and the hero tries to solve it, and he tries something, and that just makes it worse-- so that didn't work. And then he tries something else, or somebody comes to help or whatever you want, and, eventually, the problem gets broken into sub-problems, and solutions appear, and then everything's wrapped up-- even if it's like a Greek tragedy, in which everything is wrapped up by everyone being dead at the end. What was that Oedipus story? AUDIENCE: The first is Oedipus Rex. They are also known as the three Theban plays. MARVIN MINSKY: What's that? AUDIENCE: The first one is Oedipus Rex, but the three of them are also all known as the three Theban plays. MARVIN MINSKY: I think I remember the one where what's her name is both his daughter and his wife. AUDIENCE: Yes. Yeah, that's [? Russ. ?] And it's also including [INAUDIBLE] MARVIN MINSKY: So those are wonderful stories, and they haven't changed much in 2,000 years. And somebody with a problem, and there's a solution, and in the most popular ones, the solution is to kill the person who is causing the trouble. And, anyway, so Roger Schank has several books about theories of classifying stories, and how to get a machine-- he doesn't talk about AI very much, but it's in the background-- and he's saying that the way a machine would be smart is to have a huge number of stories, so that whatever problem comes up, maybe you'll have half a dozen stories that give you a hint about how to deal with that problem. And if you have enough of those, then you'll get through life and people will think you're smart and so forth. When Newell did that review of publications about memory, he didn't ask how many stories a person knows. And I don't think I've ever seen any discussion of trying to estimate this higher level. Have you ever run-- Dustin, because you're thinking about understanding stories. Can you understand a story unless you know a story like it? AUDIENCE: I think it's always going to connect to some sort of events. MARVIN MINSKY: So what's the shortest story? If you just have a problem and a solution, then that's a very short story.
So what do psychologists do? That's actually a good question, because until there was a field called cognitive psychology-- which, from our point of view, is how do you do AI without any technical machinery-- psychologists didn't have theories that you could use for much. The behaviorists had non-stories, so that everything was made of very small fragments of behavior, like, if you smell something bad, run away. So here's this very short list, and I'd like a bigger one. I wish Henry were here-- are any of you in Lieberman's project? They're starting to add all sorts of new kinds of links to the commonsense knowledge base, and I don't know if any of them consider that. What kinds of problems are there? How would you classify-- can you think of a theory? Yeah? AUDIENCE: Well, you could classify them based on the tools you needed to use. MARVIN MINSKY: Based on? AUDIENCE: The tools you might need to use to solve them. Like, if there were superclasses of tools, if there were, like, types of tools-- for example, there are certain problems that you can solve with an axe, and there are other ones that you can solve with a piece of string. MARVIN MINSKY: Yeah, if you do that, then you're in very good shape. I guess when I talked about the mind being made of critics and selectors-- and I might have gotten that idea from somewhere in Freud-- that's almost the same thing, because critics are things that recognize not just that there's a problem, but they recognize a type of problem. And a critic is sort of useless unless it points to some tool or way of thinking that you might use to solve it. Where would we look for-- what literature would contain a classification of what are the problems people face? AUDIENCE: Medical. MARVIN MINSKY: Sure. Health. Survival-- a Darwinian might say all problems are of the same kind, namely, making sure that you have enough children that your genes survive. But, of course, the funny part of evolution is that there isn't any representation anywhere in the whole thing of that description of what the whole thing is for. Yeah? AUDIENCE: Like, OK, maybe all sorts of problems you solve could be related to your general evolutionary needs in some way or other. Like, for example, you can think of a pathfinding problem as you needing to move around in the world and get places so that you can achieve your goals. You have problems of social life, where you try to understand people and figure them out and manipulate that information to get whatever it is that you want. MARVIN MINSKY: OK, well-- the evolutionary system doesn't know these, but that's a great idea. Maybe we just look up the names of all professions, because a profession is a-- if it has a name, then somebody has recognized a particular class of problems as a thing in itself. That would be a great start. So there are storytellers, plumbers. OK, so that's almost the same as-- what? Bold, bigger. So for each thing, for each fragment of knowledge, there are kinds of purposes it might serve, but that also applies for each fact and each object. What purpose does this serve? If it were 20 years ago, nobody could imagine any purpose. I was visiting Toshiba one day many years ago when the first Bluetooth worked. But I thought they were saying Brutus. And there was this great demonstration where the thing was 2 feet from the computer, and somebody did something, and the computer did something, and everybody clapped, and I thought, boy, is that useless.
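The critic-selector idea above lends itself to a trivial sketch: critics recognize a type of problem, and each one points at candidate ways of thinking. Every entry in the table below is invented purely for illustration.

# Illustrative only: critics map recognized problem types to tools
# (ways of thinking) that might address them.
CRITICS = {
    "path is blocked": ["find another route", "remove the obstacle"],
    "tool too small": ["find a bigger tool", "split the task"],
    "goal too vague": ["break into sub-goals", "find a similar case"],
}

def select(problem_type):
    """A critic fires on a problem type and suggests candidate tools."""
    return CRITICS.get(problem_type, ["reason by analogy"])  # fallback

print(select("tool too small"))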
So for each fragment of knowledge-- well, I don't know what this one is for-- other things it is similar to. Well, if a piece of knowledge didn't quite work, then you would want links to-- then that's a sub-problem. I tried to cut this rope, but my scissors were too small. So if you say, oh, maybe bigger scissors would do it-- just ordinary kinds of thinking like that allow you to use knowledge that doesn't quite fit, if you can characterize-- oh, well-- so maybe the most important thing is, what's the bug here? So the reason we're so smart is, you see a problem and something pops into your mind-- I'll try this-- and it doesn't quite work. And then you automatically turn on this whole bunch of other things, which is, other things that this is similar to-- and do they have the same bug? And can you find an almost identical situation, or a memory which is very similar, but it doesn't have that bug? And then you say, well, what's the difference? It's a sharper knife or a bigger hammer or whatever, and we're back to the idea that solving problems is removing the differences between what you have and what you want. And stories are usually about something like that. The hero has a problem. There's something that the hero wants and doesn't have, and you look at a knowledge base and say, what produces this kind of property or value or change or difference? So how many stories could you know, if it takes two seconds to--? And that was the highest rate. I think they couldn't find any instances where you could learn a bit every two seconds for several hours, but it was easy to find instances where you could do that for several minutes. How could you estimate how many stories you know? AUDIENCE: There was a minimal boundary on event perception. Like, if you flash two different colors into somebody's retina, there's a certain speed at which they won't be able to detect that there was a change. MARVIN MINSKY: Oh, is that like 30 milliseconds or three? AUDIENCE: It's on the order of milliseconds. But I'm not sure exactly how much. AUDIENCE: [INAUDIBLE] AUDIENCE: Yeah, you'd have to, I guess, do it for each modality, but-- AUDIENCE: And if they were out of phase, then it's possible maybe between multiple ones. You could do something faster. MARVIN MINSKY: Yeah? AUDIENCE: I feel like if you're going to, like, estimate how many stories you know, it shouldn't be-- like, when you said that you could memorize a bit every two seconds, I mean, that's, like, if you took in all your experience through verbal stories or something, or, like, information stories. But I think you learn a lot more just, like, through-- I think, like, learning more bits than that through, like, repeated actions that, like, your body knows-- like, you know, learning to ice skate or something. I feel like there's more bits encoded in that kind of knowledge. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Well, but it takes them many hours to learn to ice skate well. Look at sports people. They practice. AUDIENCE: I feel like those kinds of things make up just as much-- make up a lot of your knowledge too. It's not just-- MARVIN MINSKY: Right, but if you're using a lot of seconds, then it's making up that fraction. AUDIENCE: Well, I feel like that part of your-- I feel like you make-- when you learn all those things through your senses, when you're teaching someone else, then you make up a story from that, but it wasn't encoded in you as a story. MARVIN MINSKY: Or if it was a completely different story, then it wouldn't apply to them. AUDIENCE: [INAUDIBLE] format.
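The "removing the differences between what you have and what you want" idea a few exchanges back is the classic difference-reduction (means-ends analysis) loop, and it fits in a few lines. The states and operators below are toy stand-ins, and the sketch ignores operator preconditions, which a real planner would have to check.

# Minimal difference-reduction sketch: repeatedly pick an operator
# whose effects remove part of the difference between state and goal.
def solve(state, goal, operators):
    plan = []
    while state != goal:
        diffs = goal - state                 # what's still missing
        # assumes some operator always helps; a real planner would
        # also handle dead ends and check preconditions
        op = next(o for o in operators if operators[o] & diffs)
        state = state | operators[op]
        plan.append(op)
    return plan

ops = {"fetch bigger scissors": {"have scissors"},
       "cut the rope": {"rope cut"}}
goal = frozenset({"have scissors", "rope cut"})
print(solve(frozenset(), goal, ops))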
I mean, you could theoretically store a book as images of the pages or as ASCII-encoded characters or something like that. [INAUDIBLE] format. But I like what he was saying earlier, where it was, like, for different things we have different compression schemes, so we are really good at compressing and seeing relationships between multiple tasks, so we don't need to store, like, different copies of the same words, or letters, or whatever. We can just have references to them. MARVIN MINSKY: Well, you hear stories about people who can read a whole book and remember the whole thing. And the question is, are any of them real, or are they all amateur magicians who are fooling you? Yeah? AUDIENCE: About how many stories you might know, [INAUDIBLE] and [INAUDIBLE] that there aren't that many stories. Like, people write books and books, and somehow, no matter how complicated a story looks, it's the underlying hero's journey, that kind of scheme. So we might know very few stories, but [INAUDIBLE]. MARVIN MINSKY: Well, it's true that the most popular stories probably-- there are 50 or 100 plots that make up 90% of-- it's easy to make up numbers like that, but there certainly is a distribution for all sorts of psychological things where a large amount of your performance comes from a small number of very useful things, and then, at less frequent intervals, you're using more obscure fragments of knowledge. Yeah? AUDIENCE: I also think that not everything in our heads is represented in stories. I mean, I feel like sometimes, you do learn a story that's a complete story, and every time you go through it, it's the same story. But I think, oftentimes, when we learn, like, individual bits of information or chunks of information, we either create the connections when we learn them, or we develop new connections. And, oftentimes, when we're telling a story, what we're actually doing is sort of, like, traversing different paths through the bits, so that, like, one time that we tell the story, it might be slightly different from the other time, or it might take a different path entirely and, like, branch out. So these stories have various possibilities and variations. And it's only a story in the sense that it has a progression as we construct the path. MARVIN MINSKY: Yeah. I wonder if anybody has tried to take a collection and represent the whole knowledge net-- like Shakespeare's plays: there are lots of people and their mother, and the mother is likely to be a queen. I can't remember the difference between Hamlet and Macbeth, because there are so many features in common. But, of course, once you have a lot of knowledge, then you may have an experience, and that might seem very complicated, but you're actually making two or three links between two already large structures, and then each time you think about that, you elaborate it. AUDIENCE: But I think, oftentimes-- so we have this network in our heads, right? And when we communicate with other people, we generally communicate these so-called stories. So we can either, like, improvise the entire thing on the spot, much like what I'm doing right now, or we can kind of, like, practice these stories and then keep the paths stored somewhere else, so that the next time we just look at the instructions that we wrote down and say, OK, so now it's time to talk about this thing, this thing, that thing. MARVIN MINSKY: You mean plot units, like-- AUDIENCE: Like these sorts of, like, paths, stored paths.
It could also be, like, smaller units, so that you might improvise part of the story, but use the preplanned [INAUDIBLE] version or a different [INAUDIBLE]. MARVIN MINSKY: Yeah. Wendy Lehnert was a PhD student, I think at UMass Amherst in the '70s, and she wrote a brilliant thesis about the structures of stories. It's called plot units. So it's very much what you were suggesting: that a new story might be made by extracting some of the structure of an old story and putting the new characters and events in, but it still might have some analogous links to the old ones, so that then when you're reading Hamlet, you'd say, oh, that's like Macbeth in this respect. Does Winston have Hamlet? Does he have both? AUDIENCE: I believe so. MARVIN MINSKY: What? AUDIENCE: Hamlet and Macbeth. MARVIN MINSKY: Which plays do you have? AUDIENCE: Actually, we went mostly with Hamlet, Macbeth, Julius Caesar, Romeo and Juliet. They're all about the same. MARVIN MINSKY: Yeah. AUDIENCE: Especially Hamlet and Macbeth. That's why you can't keep them straight. AUDIENCE: Yeah, I was looking at the Oedipus one, where his daughter is also his sister, very complicated. You didn't get that one in there. [LAUGHTER] Incest makes the family relations look pretty funny. So what happened to Wendy Lehnert? Did she ever publish anything after that? AUDIENCE: She separated herself from natural language research because she didn't like the way the center of gravity was moving towards statistical methods, and went into Oriental medicine or something. MARVIN MINSKY: Oh, yeah, I keep [INAUDIBLE] things. AUDIENCE: Great loss. She was Schank's best student without a doubt. MARVIN MINSKY: I bet if we had a job, we could get her back, because I talked to her at that reunion the other day. Well, it's terrible to lose a good student. In the meantime, Roger Schank wrote about 15 books, but I don't know how different they all are. But everyone should read his first book, which has a lot of ideas about how things depend on one another in different ways. It's called Conceptual Dependency, and it led to a lot of interesting research for quite a few years. Well, what's bothering you? It's almost the same list. AUDIENCE: Shall I lower the screen again? MARVIN MINSKY: Oh. Sure, I hadn't thought of looking up. But the projector went off too. Are they correlated? Or were they wiped out by daylight saving time? AUDIENCE: I don't think so. It needs to be selected. That's a good sign. AUDIENCE: And it's showing here that it was [INAUDIBLE] up. Hm. MARVIN MINSKY: Each fragment of knowledge: types of problems it might help to solve, what kind of goals it could serve. AUDIENCE: Turn the light on. MARVIN MINSKY: Starting up. What bugs come from using this fragment of knowledge? What are typical cases? And maybe most important, if you have a fragment of knowledge, what kind of contextual cues are going to get it to be retrieved? So having things in memory is no use unless you have links to them, and how do you learn those? Is that slide actually the same as the one I had before? That's-- yep. Bugs that might come from using it. So that certainly is-- I'm trying to guess how many of you were around when something called new math started to be tried. Does that ring a bell? You remember it? AUDIENCE: [INAUDIBLE], but I've heard of it. MARVIN MINSKY: It was a period when some group of mathematicians said, this is terrible. Children aren't learning about the empty set.
And then they realized that there was a committee of really very good mathematicians who thought things should be more clear and precise, and that you should distinguish between ratios and fractions and quotients and percentages. And, oh, god, there were nine of them-- and explaining the differences between ratios and fractions-- ugh-- and distinguishing between nothing and the empty set. And so a short generation-- I think it was five or six years-- till finally the teachers admitted they couldn't understand what they were talking about. But it was very exciting, and I remember mathematicians feeling that this was great, because the public was going to learn more about what they were doing. Quotients. But it did go back to Russell and Whitehead, and they explained to children that you could start with the set that only contains one subset, namely the empty set, and that one of them is 0. I think the empty set is not 0, because that wouldn't be quite right. But the set that contains only the empty set could be 0. And then the set that contains that could be 1, and that was one proposal of Russell and Whitehead, but I think they found that wasn't quite good enough. And so they got the idea that 2 should be maybe-- should be 1 and the set that only-- I've forgotten. The numbers were becoming ordered pairs. I'm sorry I've forgotten what Russell and Whitehead did, but, of course, what Russell and Whitehead did was explain numbers, and then explain fractions, and then explain limits, and then Dedekind sets, which are sets of rationals that have an upper bound but don't contain the upper-- may or may not be closed. And they got all the way up to number theory and calculus and continuity, stuff like that, and making all of that out of the empty set was quite a tour de force. But the idea of teaching that to children actually swept the country for maybe five or six years. I wonder what they learned in computer science. Great! AUDIENCE: Don't you think there is a problem with just, like, trying to teach computers common sense with stories, with, like, technical stories? Like, don't you think there's a-- do you think there's a natural limit to how useful a system like that could be? MARVIN MINSKY: I am not sure what you mean by useful. AUDIENCE: I mean, aren't there pieces of knowledge or wisdom that you could never really get from just a collection of stories? AUDIENCE: Well, it depends on what you think a story is. If the story includes recipes, then it includes learning how to do calculus problems. MARVIN MINSKY: I think someone might argue that it would be very hard to teach a computer what smells mean if it didn't have a nose. But on the other hand, you could make a simulation of a sense organ, which just has 10,000 different bit strings, and you could decide that this one represented a rotten egg and this one represented such and such. And then if you had enough stories, the computer would act as though it had a humanoid collection of records of experiences. So you can make some arbitrary statement and say, I think a computer could not understand x or y, but if you think about it, you might say, well, I don't understand how people do it. So maybe if I got more scientists to study it, we could, in fact, understand how people represent smells in their brains, and the problem would go away. So, generally, if you have a feeling that there's something that a machine cannot do, it will be very hard to defend that indefinitely, because every year, we discover new representations, new languages, and new kinds of processes.
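[The construction being recalled above is standard set theory, stated here for reference. Zermelo's numerals take each number to be the singleton of its predecessor; von Neumann's, which became standard, take each number to be the set of all smaller numbers; and a Dedekind cut defines a real number as a set of rationals:]

0 = \varnothing, \quad 1 = \{\varnothing\}, \quad 2 = \{\{\varnothing\}\}, \ \dots \quad \text{(Zermelo)}

0 = \varnothing, \quad n + 1 = n \cup \{n\}, \quad \text{so } 2 = \{0, 1\} = \{\varnothing, \{\varnothing\}\} \quad \text{(von Neumann)}

\sqrt{2} = \{\, q \in \mathbb{Q} : q < 0 \ \text{or}\ q^2 < 2 \,\} \quad \text{(a Dedekind cut: downward closed, with no greatest element)}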
AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Yeah. Sure. AUDIENCE: OK, I just wanted to throw an idea out there, to respond to whether there is a limit. It seems like as long as your machine is equipped with the ability to understand stories and generate stories, there should be no limit, right? Because I remember-- OK, I think it's because in 6.034, one time Professor Winston asked us what happens if you run down the street with a bucket full of water, and we all knew what would happen. The water would just spill. And it was not because we had necessarily run with a bucket of water, but we were able to simulate that kind of story in our heads. And so as long as you have a machine that is able to generate stories, it doesn't even have to know all the stories ever, because it can make its own stories to answer questions. MARVIN MINSKY: Well, I guess we have a very stratified society, and there are lots of people who feel that because a computer is mechanical or something, or because it's understandable, which is worse, then there must be things it couldn't do because-- I'm trying to characterize this collection of ordinary people and philosophers, and my favorite example is the idea of a quale. People talked about qualia because there's a word in English called qualities. But if you're going to be respectable, you have to find the Latin word for something. And so there is a very popular branch of philosophy in which people argue that machines can do this and that, but there are some things that are innately, by their nature, irreducible, and, for example, the experience of redness is a sort of absolute, not reducible to combinations of other concepts or activities or processes or synonym, synonym, synonym. And what could lead someone to argue that something is, in that sense, irreducible, cannot be described in terms of other things? Well, one thing that could lead you to that idea is that, ordinarily, you like to think that a good description of something is an expression that consists entirely of simpler things than the thing you're describing-- that's reducing something to other things that are simpler. If you say something is irreducible-- I'm trying to-- are there some synonyms for that? And they say, well, you can't reduce red. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Yeah, it's sort of like [INAUDIBLE]. I remember being told that green was a mixture of yellow and blue, so that gets rid of green, doesn't it? But, anyway, that's a very strange idea, because maybe the right explanation-- I'm sure there's no maybe about it-- is that redness is a complicated combination of things that are more complicated than it, so the word reduce isn't the right one. For some mental things, one part of the brain gets a bunch of inputs, and it says red, and that's the output, but the inputs might be a whole collection of very complicated things. I think I sort of tried to list them-- like, when you were a little kid, and you got cut and you were bleeding, or you ate a red pepper and your mouth felt terrible, or there was a sunset. And there are lots and lots of different things that give rise to this sensation of redness. And to say that it's irreducible is to say, oh, the part of my brain that said that word didn't have a good theory of what combination of origins its inputs came from. So I'm afraid that if you look up criticisms of artificial intelligence on the web, you'll find what's his name, Steven-- who's the qualia guy? Grossberg? No. Louder? I forget.
But if you look for the combination of artificial intelligence and philosophy, you'll find droves of people who are sort of arguing that, well, machines can do this and that, but they'll never really be people, or they'll never really think, because there are these problems like the problem of irreducible qualia, and there's no way a machine could have anything like that. And then if you ask, well, how could a person have that, they say, well, I won't tell you. I think I understand it, or-- yeah? AUDIENCE: What's your response to the more reasonable version of that argument, which is that, sure, we may one day be able to build a machine that emulates human intelligence and consciousness, whatever those things mean, but it won't actually be intelligent or conscious, because it'll be simulating intelligence. Just by analogy, if you built a simulation of the digestive system, it would be simulating digestion. It wouldn't actually be digesting anything. So if you built a machine-- MARVIN MINSKY: Yeah, I've seen the one that Steven Pinker, I think, does, or Searle: you simulate a thunderstorm, but the things aren't really wet. Well, foo, wetness is what? If you look into wetness carefully, it goes away, and it's molecules of hydrogen oxide, which are sticking to oily things with the hydrogen side, and they're sticking to ionic things with the oxygen side. And the reason the water sticks to some things and wets them is because it's a polar molecule, and it has a charge here and no charge there, and the hydrogen likes to bond to the hydrogens. The oxygen likes to-- if the hydrogen is pulling the electrons here, the oxygen likes to stick to these things there. And the idea of wetness-- it turns out to be not an irreducible, unachievable thing, but a rather complicated thing that only an advanced quantum mechanics theory can even approximate. AUDIENCE: How does that respond to the argument? Because we're not trying to simulate the brain atom by atom. We're trying to simulate [INAUDIBLE]. MARVIN MINSKY: No, but I'm saying something like-- I'm saying that to say there's no wetness in the brain is to say, there's no process that corresponds to this oxidation-reduction electron-sharing thing, and the answer is, yes, there are. Every neuron that doesn't have a myelin sheath is probably wet here and there, and who cares? AUDIENCE: [INAUDIBLE] AUDIENCE: I actually disagree with your analogy on digestion. What you're saying-- AUDIENCE: So am I. [INAUDIBLE]. AUDIENCE: OK, whatever metaphor you have on digestion-- so what you're saying is, like, basically, we're trying to simulate the digestive system by building it on the computer, but I think what we're actually trying to do, like, equally for the brain, is, if you actually build, like, a digestive system using different physical materials, it still, like, chemically digests the things. MARVIN MINSKY: No, what I'm saying is-- AUDIENCE: That would be something like trying to build a brain by hooking up electric wires to the neurons. MARVIN MINSKY: No, what I'm saying is that there's no such thing as wetness. Wetness is a very, very complicated collection of processes, and you can have in the brain very complicated collections of processes, which are similar, but not exactly the same, and who cares, and why do you think it's important?
AUDIENCE: I'm saying if you build the brain, if you build whatever thing out of a different material, then it can't possibly do the same things, which is, like, a full argument to-- MARVIN MINSKY: No, I'm not saying that. AUDIENCE: Not you. I said [INAUDIBLE] MARVIN MINSKY: I'm saying it's not important what the lower levels of the brain do, because, at least in my case, I think that there's an insulation layer in the brain, which is something like the cortical columns, which protects the higher levels of thinking from the weird and unreliable processes that individual synapses and nerve cells have. So the great thing is that our brain is simulating a computer, and our higher-level thinking is of a more or less digital nature, no matter what the neuroscientists will tell you. That is, why do we have these columns? And my answer is, it's the same reason why computers have flip-flops instead of transistors. A computer is really analog, but it's designed so that the awful properties of analog things don't interfere with what it's doing. AUDIENCE: Another way to respond to that would be, I think-- OK, two systems are equivalent if, given the same input, they have the same output, right? Maybe in simulating the digestive tract, then, maybe you're not entirely successful-- well, OK, well, you're not giving it the exact same inputs, right? You're not going to get some sort of system that's really not there. But if you have a machine which is given the same inputs as humans, such as knowledge, sensory input, and stuff, and it is able to produce the same kinds of outputs that humans produce, such as, I don't know, making inferences or reacting to these inputs, then essentially you have in your hands a successful simulation, a system that cannot be distinguished from the original. AUDIENCE: Right, but couldn't you have the same-- you could have the same [INAUDIBLE] outputs or simulate some organs, so that's not-- AUDIENCE: Sure, then maybe-- maybe-- [INTERPOSING VOICES] AUDIENCE: This is not simulatable, because you're not-- OK, but-- sorry, I interrupted you [INAUDIBLE]. AUDIENCE: That's OK. AUDIENCE: But, OK, maybe the digestive system example doesn't really work, because what's happening is you're doing this in a digital medium, where really your inputs to the system that you're trying to simulate are not the inputs that you're using in your simulation. However, in your simulation of a brain on a machine, your inputs could very much be the same things. AUDIENCE: Yeah, but when you're simulating the digestive system, your inputs are virtual [INAUDIBLE]. AUDIENCE: Exactly. Yeah. Yeah, so maybe it has to do with your inputs having the correct representation and the exact same thing, the same agent that you're able to in the real world, [INAUDIBLE]. AUDIENCE: And who cares if your AI is philosophically equivalent to a brain, as long as it does [INAUDIBLE]. [INTERPOSING VOICES] AUDIENCE: It's an interesting question. AUDIENCE: Yeah, but from an engineering point of view, you don't care. From a philosophical point of view, you care. MARVIN MINSKY: Especially if you want to live forever; then you don't want to have the biological bugs. AUDIENCE: It seems to me like it's a little bit [INAUDIBLE]. MARVIN MINSKY: Without [INAUDIBLE]. It depends if-- yeah? AUDIENCE: It seems to me like it's a little bit like railing against differential equations.
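[The flip-flop remark above can be made concrete with a small simulation: an analog level accumulates drift at every stage, while a digital stage restores the level each time, so the noise never builds up. A hypothetical Python sketch; the noise figures and the number of stages are invented:]

import random

def noisy_stage(v, noise=0.2):
    # An "analog" stage: the level drifts a little at every step.
    return v + random.uniform(-noise, noise)

def restore(v):
    # What a flip-flop or logic gate effectively does:
    # snap the level back to a clean 0 or 1.
    return 1.0 if v > 0.5 else 0.0

v_analog = v_digital = 1.0
for _ in range(1000):
    v_analog = noisy_stage(v_analog)              # drift accumulates
    v_digital = restore(noisy_stage(v_digital))   # drift is wiped out each stage

print(round(v_analog, 2))  # a random walk, typically far from 1.0
print(v_digital)           # exactly 1.0: each stage restored the level in time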
Of course, the model is always a surrogate for the real thing, but it still tells us about the constraints that the real thing has to obey, and so that's why it's a model. AUDIENCE: Right, but then, like, in artificial intelligence, whatever intelligent system is a model for the brain. We should [INAUDIBLE]. Of course, [INAUDIBLE] like, the claim of this argument is that it's not that conscious. AUDIENCE: It's not. In a trivial sense, it certainly isn't. The question is, does it shed light on the thing, if that's the thing you're trying to understand? MARVIN MINSKY: I think the problem comes when-- wetness is one thing, but when you come to consciousness, the same people-- and my caricature of them, Pinker and others, is that there's a fluid called consciousness, and it's a kind of wetness, and it's irreducible. There's just this raw understanding, knowing things. It's like being wet as opposed to dry. And to me, that's nutty, because I didn't have any trouble listing 40 other English words that look like they're in the same suitcase, like having a limited but measurable amount of short-term memory about what you were recently thinking. Now, it's hard to think of a drop of water as having that, but, of course, it does to some degree. When it rolled down the side of your cheek, it left a little stream of evaporating water, and so what? Your model may or may not bother to say that wherever the water drop has been, it will leave some of itself, but it just depends what you're trying to compute. And in the case of consciousness, if that is a suitcase word for 40 rather different processes, maybe we could very well do without 10 of them, and there are probably 15 more that, if we had them, we would hold regular people in utter contempt, because-- like, I can hear two tunes at once, and maybe half of you can, and when I think of a third tune, this is because, as a baby, I spent a lot of time developing this skill. So I look at people listening to music, and I know that some of them, like Todd Machover, are hearing more than I am. And most of them are just hearing blub, blub, blub, blub, blub, blub and getting a great deal of stupid pleasure out of it. So thinking of psychological words as like things is usually a big mistake, because they're abbreviations for-- they're not combinations of elementary things. They are combinations of more complicated things. It doesn't stop anywhere. Yeah? AUDIENCE: Playing the devil's advocate again. How would you respond to the Chinese room argument? MARVIN MINSKY: The Chinese room? It understands Chinese perfectly. I don't see any problem. If it can answer questions in Chinese, why does Searle think that just because it is a machine, it doesn't really understand them? Because if it gives the right answers, yes, it really understands them, in the sense that the word understand means 40 different things, most of which Searle hasn't allowed himself to even imagine, and any Chinese room that has 11 or 12 of them would probably understand Chinese better than Searle, who only talks [INAUDIBLE]. AUDIENCE: [INAUDIBLE] the understanding-- MARVIN MINSKY: You know how Herbert Simon got the Nobel Prize? He learned Swedish. Sorry. AUDIENCE: Where in the system is the understanding? You're just saying that-- MARVIN MINSKY: The idea that-- AUDIENCE: [INAUDIBLE] overall there's understanding? MARVIN MINSKY: The idea that there's a thing called-- AUDIENCE: [INAUDIBLE] understanding and a set of rules. MARVIN MINSKY: There isn't anything-- given any word, there's no such thing.
That word is an abbreviation for 15 or 20 various different things. Understanding isn't anywhere. Understanding is knowing grammar, having short-term memory, being able to criticize a story by recognizing the bugs. If you give yourself five minutes, you'll write 40 different requirements for understanding, and then the idea that there's some thing-- that's too complicated-- bottle cap. There's some thing which is the raw understanding. It's sort of just like the bottle cap, except that nobody knows what it is. What silly people. Why did they think there is? Just because they have a word, why did they think there's something to go with that word? Well, because other people understand it. Well, when all the gerbils are-- what's the animal that allegedly runs over the cliff into the ocean? AUDIENCE: Lemmings. MARVIN MINSKY: Lemmings. Do they understand how important it is to run over the cliff? I don't know what I'm illustrating. AUDIENCE: Could I ask what you think, like, that Chinese room argument would have-- what kind of [INAUDIBLE] have on AI as a whole, that [INAUDIBLE]? No? [INAUDIBLE] Sorry, I'm asking what bearing you think this argument has on AI in general, because I would assume that there are plenty of serious people who work on AI who would not say that just because you can respond to certain questions, you are intelligent, right? Like, humans do a lot more things. AUDIENCE: Right, so it's intended not-- the argument is intended to be an argument against strong AI, because the argument is intending to say, look, the system appears to understand Chinese, but there doesn't seem to actually be anything that's understanding Chinese. Therefore, a, like, properly programmed computer doesn't literally understand what it's doing. AUDIENCE: Well, the problem is the argument applies to you too. So if you have a memory, and processes that operate on that memory, it's just like Searle's room. So Searle's room isn't intelligent [INAUDIBLE]. MARVIN MINSKY: Let me try this. Please explain Searle's Chinese room. AUDIENCE: [INAUDIBLE] [LAUGHTER] MARVIN MINSKY: It said, "The experiment is the centerpiece of Searle's Chinese room argument, which holds that a program can be used to explain the mind." I'm skipping the dots. "The study of the brain is irrelevant to the study of the mind." It goes on-- if you want to know anything, download Dragon Search. AUDIENCE: But is that phone really intelligent? [LAUGHTER] MARVIN MINSKY: I could ask it. It got Antigone the other night when I asked it who married their mother or father, or whatever it was. AUDIENCE: OK, I guess-- MARVIN MINSKY: Yeah? AUDIENCE: Just for me, [INAUDIBLE] something else of the idea of-- to understand things [INAUDIBLE] stories. I feel like it's been, like, bugging writers, playwrights. And people would be, like, [INAUDIBLE] a lot. And it feels like a common understanding among this group of people, who are not psychologists, not computer scientists, but they share a somewhat common theme. And what they have trouble with is, sometimes when things happen, say with a documentary or a TV show, and there are facts or happenings that do not fit into a story model, those are not understood, and they're having trouble with that. They've been exploiting that fact. It's just interesting. MARVIN MINSKY: Well, sometimes somebody will talk, and if you don't see a story, you get confused, or you say, what's the point? And does that mean you're not understanding, or you're criticizing, I guess?
AUDIENCE: So it's almost like, in order to relay information, you have to put it into a form that's understandable, that those people think of as stories. MARVIN MINSKY: Sometimes a child has a favorite story, and she wants you to tell it over and over. What's going on then? Do you have a favorite story? AUDIENCE: When I was, like, about four years old, I basically watched The Lion King, like, just all day. MARVIN MINSKY: The whole thing? AUDIENCE: I did the same thing with The Jungle Book. AUDIENCE: It's a horrible movie. [INTERPOSING VOICES] [LAUGHTER] AUDIENCE: It's sexist, racist, and classist. AUDIENCE: The Jungle Book? AUDIENCE: The Lion King. AUDIENCE: [INAUDIBLE] [LAUGHTER] AUDIENCE: [INAUDIBLE] AUDIENCE: [INAUDIBLE] Shakespeare. It's, like, 16th century. What were you expecting? MARVIN MINSKY: What's the most popular story in the world for four-year-olds? AUDIENCE: Fairy tales, probably. AUDIENCE: [INAUDIBLE] something Chinese. AUDIENCE: It's probably, like, [INAUDIBLE], or something. AUDIENCE: [INAUDIBLE] AUDIENCE: Oh, for four-year-olds. I thought you said for four euros. [LAUGHTER] Yes, I just got back from [INAUDIBLE]. [LAUGHTER] MARVIN MINSKY: How did you do that? You played it back and forth over several times? AUDIENCE: Like, realizing what you actually-- MARVIN MINSKY: You replayed the tape? AUDIENCE: What? [INAUDIBLE] as four-year-olds. AUDIENCE: Oh, yeah. I heard four euros, and then that's how I, like-- [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] was like, wait, for four-year-olds? And I was, like, oh, wait, I guess I misheard. [LAUGHTER] MARVIN MINSKY: Yeah? AUDIENCE: But when I was little, my parents' story was the Three Little Pigs, and my mom would tell it to me, like, when I was going to bed, but it was the time when she was also very tired, so she would kind of change the story each time. And I think that's why I liked to hear it-- like, I knew the actual story. And my mom would be really tired after a day of work, and she'd be sleeping, like, falling asleep in the middle of the story and saying, like, wrong things about the story, and I would just like to point it out to her and tell her, no, that's not how it goes, tell it right. So I don't know. It's, like, the reason you want to hear the same story over and over again is, like, somehow to-- if you have a very concrete template of it, to-- I don't know, to figure out what can go differently and then consolidate that somehow. I don't know. I don't know why [INAUDIBLE]. AUDIENCE: I feel like it's a lot like finding a favorite song that I keep listening to over and over again. It has to be, like, between, like, the trivial level where you pretty much have decoded, like, the whole song in your head, and, like, you know, some, like, Rachmaninoff concerto that you really don't understand, you know? Like, it has to be, like, lyrical enough so that there's, like, parts that you understand, but, you know, there's enough information that you haven't, like, distilled it all into, like, your memory. You don't know what-- you don't always know what's coming next as you're listening to it again. MARVIN MINSKY: Mhm. AUDIENCE: I mean, it could just be for entertainment? You just really like the story? MARVIN MINSKY: You'd like to be able to predict what's going to happen next, maybe; you're just rewarding your memory for being right. So if you change the story a little bit, they get really angry. I have a story.
I had a little daughter, who's now in her 40s, but anyway, we would be driving, as it happens, down Beacon Street in Brookline, and every now and then she would shout, care! And this was a baby who didn't like to go to sleep, so she would fuss, and the only way to get her to be sleepy was to take her for a ride. Anyway, we'd be driving along, and she would yell, care! And she was only two, so I couldn't figure out what was going on. And then one day, we were home, and there was a television set on, and she yelled, care! And it was an advert-- you know the charitable organization CARE, C-A-R-E? They collect food for-- maybe it's disappeared. Anybody heard of CARE lately? It's gone? Anyway, the feature of the CARE ad is that it shows a big box full of packages, and on the side of the box, it's stenciled C-A-R-E in the stencil font. A stencil font is such that, you know, the middle of the O is attached by two little tabs so it won't fall out. So it turned out that what she was doing was font recognition. And the next time we were driving, I was able to correlate it with the numbers stenciled on the telephone poles. So you could watch this child for a long time and never guess what on Earth it's doing, and it's recognizing a font. How weird. I don't know what the point of this story is, but it's really hard to tell what people are thinking before they can talk. Actually, she designs fonts now sometimes. Maybe I could get two of you to argue. Yes? AUDIENCE: I have a question. We kind of talked about how we would store stories and why we would do that, but do we have any idea of how we make a decision whether to remember a story or not? MARVIN MINSKY: Of a number? AUDIENCE: No, just, like-- OK, it seems like it would be unmanageable if you tried to commit to memory everything you've ever heard. MARVIN MINSKY: Oh, right. AUDIENCE: So somehow you need to determine whether something is worth learning or not. And in the case of stories, how do you decide whether you should learn it or not? MARVIN MINSKY: That's a great question. What's the answer? The psychologist. AUDIENCE: Well, if it's kind of surprising, then you can remember it. MARVIN MINSKY: There has to be something unusual about it? AUDIENCE: Surprise, slogan, [INAUDIBLE]. [LAUGHTER] AUDIENCE: Things like that. AUDIENCE: Something that you can learn from it? MARVIN MINSKY: There must be theories of what things people remember. AUDIENCE: I think when there's a very-- when the story or the event is attached to a very intense emotion, you'll tend to remember it, but I mean-- MARVIN MINSKY: Well, as Pat says, there has to be some new information, and it-- AUDIENCE: You won't remember anything either if it's-- I mean, when you go to class and you don't understand anything, you don't remember it. If it's stuff you already know, you don't remember it. So it's got to be on that wavefront of understanding for you to remember anything. MARVIN MINSKY: But then it has to be connected to something. So it has to have some-- you have to have some reason to connect it to something. You may not know that, or is it just-- you just don't want to be surprised, maybe. So you have to store all surprises away so they won't annoy you later. AUDIENCE: Well, if we're sufficiently shocked or surprised, whether we want to or not, we won't forget it. MARVIN MINSKY: Right. AUDIENCE: You know, the 9/11 disaster, the Kennedy assassination, things like that can't be forgotten. MARVIN MINSKY: Apollo 11.
AUDIENCE: It's that damn problem on the freshman physics quiz. I will never forget it. [LAUGHTER] AUDIENCE: But doesn't that explain maybe why you go through the day, and then when people ask you, what have you done? The things you remember. You know, I don't remember [INAUDIBLE] falling out of the sky. Or you don't think that's relevant? AUDIENCE: Yeah, I don't know. The more I think about it, it seems like it should also have something at least to do with what stories you already know, because-- OK, this comes from kind of a shallow place, but, like, when you go shopping for clothes, you tend to pick up-- like, you know what your wardrobe looks like, and you kind of pick out things that will make some sort of sense within your wardrobe. And, I mean, you already know the stories that you know. So maybe in the background you have some sort of process that's going on, trying to figure out whether this new story is something that you can connect to-- connect meaningfully to your already existing-- MARVIN MINSKY: So that's not quite a surprise, but it's filling a goal, maybe. Brings me back to wondering what psychologists do. Yeah? AUDIENCE: I have a different question. There seems to me sort of a-- I don't know if it's a problem, but something I'm confused about. So artificial intelligence is trying to create an intelligent machine, but it doesn't really define what intelligent means. So we say, OK, intelligence is equivalent to human intelligence. So we say, OK, how are we going to figure out if we've got something that's a [INAUDIBLE]? We don't know, because [INAUDIBLE] a computer is much better than the human in some things. And currently, a human is much better than a computer in other things. So, I mean, there's a way to say this: OK, our equivalence measure is the Turing test. But that doesn't seem to be a good metric, because, I mean, there are already some random, you know, like, scripts and things that pass the Turing test just by doing a few clever things, because fooling people isn't that hard. I mean, people are pretty stupid. So how do we define intelligence and figure out if we built-- and, like, what it means to build an artificial intelligence? MARVIN MINSKY: Well, some things are so obvious that you don't define them. I mean, how do you define good? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: I mean, I don't know anybody working on artificial intelligence who feels any need to define it, because, you know, each project is trying to do something that they couldn't do last year, in most cases. And how do you get the right-- how do you answer questions about a story? How do you get a machine to read a book, and-- AUDIENCE: Right. Because there are apparently a lot of things that we know a human can do, but we haven't yet built machines that can do them. So we can solve those specific subproblems by building machines-- MARVIN MINSKY: Well, maybe you can't. AUDIENCE: --that do those things. MARVIN MINSKY: But some of them we've been trying to solve for 40 or 50 years and haven't. And I don't see any-- AUDIENCE: At what point in solving these subproblems, whether it takes 40 years or [INAUDIBLE]-- at what point will we say, OK, we've solved enough subproblems that we think we've built an intelligent machine? MARVIN MINSKY: I don't think any of us think in those terms. It's saying, we're trying to make the machine smarter. We're not trying to make it intelligent or smart. We're trying to make it more intelligent or smarter.
And if you can't tell, then you didn't do it. I've had that argument, along with Seymour Papert, with people who were trying to evaluate whether children who learned LOGO in second grade had some sort of advantage over children who didn't know anything about computational concepts and so forth. And I hate to mention it, but at a certain point in the National Science Foundation budget for educational projects, 70% of the money went to evaluating. Can you imagine? That's a very large number. And the reason was that they had a review of what they had done for the last 10 years, and nobody could point to anything where they could really show that the money hadn't been wasted. Well, of course, you probably have to wait 10 years and see which of those children get into college, or a better college, or blah, blah, blah. It's very hard to evaluate things. But I don't think anybody that I know thinks it's worth a minute to define intelligence, since it's 40 different things, and everybody knows what they are, and if you try to get it into one thing, then you lose big, because you end up just saying it's smarter. AUDIENCE: Right. But I mean there's [INAUDIBLE] things where you don't know-- where you know some subset, but you don't know what the other ones are. MARVIN MINSKY: I'm not sure what you're saying. It's not clear to me that that word is very useful. It means roughly that one thing is more intelligent, or better at a certain class of problems, than another, if it solves more of them, or some of them more quickly, or some of them at less cost than you're used to. And if you sit there for 10 minutes, you'll find 20 equally good alternatives. And so then you say, this is a suitcase concept. It is not worthy of a definition. AUDIENCE: OK. MARVIN MINSKY: Mental age. AUDIENCE: If you were just building the machines that-- MARVIN MINSKY: The word IQ-- the word IQ had a very good definition for many years, because it was mental age. If a 10-year-old was answering roughly the same set of questions as the average 13-year-old-- did you all know that's what IQ meant? And so the word intelligence, intelligence quotient-- none of the psychologists in that business considered it an important thing to define, because they knew what they were doing. They were measuring mental age. And it's pretty easy-- AUDIENCE: [INAUDIBLE] building a system that's better at solving certain problems if-- my understanding is we're not trying to just make it more efficient or better at whatever problems it's already good at. We're specifically trying to get it to be better at solving problems informed by human intelligence, by the types of problems humans can solve. Because otherwise, you'll just be sitting there trying to define better and better graph search algorithms, instead of doing something useful. MARVIN MINSKY: We're not trying to define. We're trying to make these machines do more things? AUDIENCE: Yeah, informed by human intelligence, right? MARVIN MINSKY: I don't know what that means. AUDIENCE: So wait-- MARVIN MINSKY: What does informed mean? AUDIENCE: What? MARVIN MINSKY: You mean smart people are working on it? AUDIENCE: No, I mean, if your only criterion was, I made a better machine, and my machine didn't solve any sort of problem any faster or more efficiently, then you would sort of get stuck working on problems that computers are already very good at, and just making ever-more refined algorithms.
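[The ratio definition of IQ referred to above is easy to state. Classically,

\mathrm{IQ} = 100 \times \frac{\text{mental age}}{\text{chronological age}},

so a 10-year-old answering like an average 13-year-old scores 100 x 13/10 = 130. Modern tests replaced this ratio with deviation scores, but the mental-age ratio is the definition being described here.]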
But what's much more useful, and what people are actually doing, I think, is trying to make machines solve problems that humans are good at solving, but computers are bad at solving. So in that sense-- MARVIN MINSKY: Well, you're looking at-- AUDIENCE: --problem solving is informing our [INAUDIBLE]. MARVIN MINSKY: Maybe you're using problem in a funny way. What most of us are doing is trying to make machines that fiddle with their own program and get better at things. So they're not solving problems. They are getting better at solving problems. That's the problem they're solving. I mean, if you asked Picasso to solve a differential equation, he would come out with an IQ of 20, I suppose, and yet he would do something incredibly-- now, if Picasso did that, you would all be swooning. But I don't know how to do it, and then there's Michelangelo just drawing a perfect circle. Yeah? AUDIENCE: What do you think about Ray Kurzweil? Did you hear about the singularity? MARVIN MINSKY: Say it again? AUDIENCE: Ray Kurzweil. MARVIN MINSKY: Kurzweil? AUDIENCE: What? MARVIN MINSKY: Ray Kurzweil? AUDIENCE: Yeah, and about, like, disasters that may happen in the future. Are you worried that, for example, we can't defend against biological weapons or something like that? First of all, the [INAUDIBLE]. MARVIN MINSKY: I don't know. I'm not sure what-- ask it again. Sorry. AUDIENCE: What do you think about the hysteria about, like, technology, like, producing an explosion where, kind of, in, I don't know, 40 years, you're going to have things that are 10,000 times smarter than us? MARVIN MINSKY: Well, first, I have a serious conflict of interest, because Ray was a student here, and we did some things together, and he was awesome at various things. And he sends me a synthesizer every couple of years, so the house is littered with them, and they're the best thing. Anyway, one day, after he was a student, he was being interviewed, because he had the Kurzweil reading machine. It was reading printed characters and pronouncing them pretty well. And on the interview, the interviewer said, well, Mr. Kurzweil, could you demonstrate your reading machine? And Ray turns the thing on and puts this press release on it, and it says, this is a demonstration of the Kurzweil reading machine, and blah, blah, blah. And it went on-- did that several times. And then for the rest of the half hour, the interviewer said, Mr. Kurzweil. And Ray tried to correct him a couple of times, but it was useless, because the machine had said it. That's a funny story. So he predicts that things will increase rapidly until there'll be inventions every minute, and the way he proves that is by showing all sorts of graphs of things that have increased exponentially. I plotted how far a tire will roll, because Google is so good. And I had found in the attic of my house, which was built in 1895-- in the attic there was a motor vehicle bulletin from 1915, I think, maybe '17, and it said, one thing you must know, besides not alarming the horses, is how to change a tire or a tube, because if you take a long trip, like all the way to Lexington and back, it said, you'll probably have to patch a tube. I'm sorry. I can't remember what got me started on that. Oh, so I plotted the length tires would roll before they go bad, from the 15 miles in 1915 up to about 80,000 miles now. How long do your tires last? Is that the right number? AUDIENCE: I think it's more like 40,000 or something.
MARVIN MINSKY: 40,000 or 50,000? AUDIENCE: But I calculated once that you wear off less than an atom each time around. MARVIN MINSKY: That's remarkable. Anyway, so he has this big, wonderful book showing all these exponential curves, and if you try, you can find lots of curves which go up and then flatten out, but there aren't so many of those in his book. But, basically, of course, he's right that there are things like Moore's law, where computers get faster and cheaper and all that sort of thing. But a lot of those laws come to an end when you get down to an atom-sized bit, and if those happen too soon, then the singularity won't come. But I think the proper way to look at it is to say, yes, things are happening faster and faster, and many of them are very important. And if you live lower than 20 feet above sea level, you should plan to move soon. Well, what should we think about? Oh, I know, next Thursday is Thanksgiving. How many of you will still be here next Wednesday? That's 40%. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: What? I'd be glad to come in and talk to people about projects and things. But a lot of you probably have to go somewhere for Thanksgiving. So if you don't come, I won't be surprised.
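[The "less than an atom each time around" remark in this exchange is easy to check on the back of an envelope. All figures below are assumed typical values, not numbers from the lecture:]

# Tread worn away per revolution, compared with the size of an atom.
tire_life_m   = 50_000 * 1609.0  # 50,000 miles of rolling, in meters
circumference = 2.0              # roughly 2 m per revolution for a car tire
tread_worn    = 0.008            # roughly 8 mm of tread worn over the tire's life
atom_diameter = 3e-10            # roughly 0.3 nm

revolutions  = tire_life_m / circumference  # about 4e7 revolutions
wear_per_rev = tread_worn / revolutions     # about 2e-10 m
print(wear_per_rev / atom_diameter)         # about 0.66: under one atom per turn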
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
3_Cognitive_Architectures.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit ocw.mit.edu. PROFESSOR: So really, what my main concern has been for quite a few years is to make some theory of what makes people able to solve so many kinds of problems. I guess, if you ran through the spectrum of all the animals, you'd find lots of problems that some animals can solve and people can't; like, how many of you could build a beaver dam and/or a termite nest? So there are all sorts of things that evolution manages to produce. But maybe the most impressive one is what the human infant can do just by hanging around for 10, or 20, or 30 years and watching what other humans can do. So we can solve all sorts of problems, and my quarrel with most of the artificial intelligence community has been that the great success of science in the last 500 years really has been in physics. And it's been rewarded by finding little sets of rules, like Newton's three laws, and Maxwell's four laws, and Einstein's one law or two, that explained a huge range of everyday phenomena. Of course, in the 1920s and '30s, that apple cart got upset. Actually, Einstein himself, who had discovered the first quantum phenomenon, namely the quantization of photons, had produced various scientific laboratory observations that were inexplicable in terms of either Maxwell's, or Newton's, or Einstein's earlier formulations. So my picture of the history is that in the 19th century and a little bit earlier, going back to Locke, and Spinoza, and Hume, and a few of those philosophers, even Immanuel Kant, they had some pretty good psychological ideas. And as I mentioned the other day, I suspect that Aristotle was more like a modern cognitive psychologist and had even better ideas. But we've probably lost a lot of them, because there were no tape recorders. Who knows what Aristotle and Plato said that their students didn't write down, because it sounded silly? The idea that we developed around here-- mostly Seymour Papert, and a lot of students; Pat Winston was one of the great stars of that period-- was the idea that to get anything like human intellectual abilities, you're going to have to have all sorts of high-level representations. So one has to say, the old conditioned reflex of stimulus producing a response isn't good enough. The stimulus has to be represented by some kind of semantic structure somewhere in the brain or mind. So far as I know, it's only in the theories of not even modern artificial intelligence, but the AI of the '60s, and '70s, and '80s, that people thought about what could be the internal representation of the kinds of things that we think about. And even more important: if one of those representations-- you see something, or you remember some incident, and your brain represents it in some way-- if that way doesn't work, you take a breath, and you sort of stumble around and find another way to represent it. Maybe when the original event first happened, you represented it in three or four ways. So we're beginning to see-- did anybody hear Ferrucci's talk? The Watson guy was up here a couple of days ago. I missed it, but they haven't made a technical publication, as far as I know, of how this Watson program works.
But it sounds like it's something of an interesting Society of Mind-like structure, and it'd be nice if they would-- has anybody read any long paper on it? There have been a lot of press reports. Have you seen anything, Pat? Anyway, they seem to have done some sorts of commonsense reasoning. As I said the other day, I doubt that Watson could understand why you can pull something with a string, but you can't push. Actually, I don't know if any existing program can understand that yet. I saw some amazing demonstrations Monday by Steve Wolfram of his Wolfram Alpha, which doesn't do much commonsense reasoning. But what it does do is, if you put in a sentence, it finds five or 10 different representations, anything you can find that's sort of mathematical. So when you ask a question, it gives you 10 answers, and it's much better than previous systems. Because it doesn't-- well, Google gives you a quarter million answers. But that's too many. Anyway, I'm just going to talk a little bit more, and everybody should be trying to think of a question that the rest of the class might answer. So there are lots of different kinds of problems that people can solve, going back to the first one, like which moving object out there is my mother and which might be a potential threat. So there are a lot of kinds of problems that we solve, and I've never seen any discussion in psychology books of what the principal activities of common sense thinking are. Somehow, they don't have-- or people don't-- before computers, there really wasn't any way to think about high-level thinking, because there weren't any technically usable ways to describe complicated processes. The idea of a conditional expression was barely on the threshold of psychology. So what kinds of problems do we have? And if you take some particular problem-- like, I find these days I can't get the tops off bottles. So how do I solve that? And there are lots of answers. One is you look for somebody who looks really strong. Or you reach into your pocket, and you probably have one of these, and so on. There must be some way to put it on the floor, and step on it, and kick it with the other foot. So there are lots of problems that we're facing every day. And if you look in traditional cognitive psychology-- well, what's the worst theory? The worst and the best theory got popular in the 1980s, and it was called rule-based systems. You just have a big library, which says, if you have a soda bottle and you can't get the cap off, then do this, or that, or the other. So some people decided, well, that's really all you need. Rod Brooks in the 1980s sort of said, we don't need those fancy theories that people like Minsky, and Papert, and Winston are working on. Why not just say, for each situation in the outer world, have a rule that says how to deal with that situation? Let's make a hierarchy of them. And he described a system that sort of looked like the priority interrupt system in a computer, and he won all sorts of prizes for this really bad idea that spread around the world. But it solved a lot of problems. There are things about priority interrupts that aren't obvious. Like, suppose you have-- in the first computers, there was some problem, because what should you do if there are several signals coming into the computer, and you want to respond to them? And some of the signals are very fast and very short. Then you might think, well, I should give the highest priority to the signal that's going to be there the shortest time, or something like that.
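[The rule-based systems just described can be sketched in a few lines. A hypothetical Python illustration with invented rules; real 1980s systems layered elaborate conflict-resolution strategies on top of this skeleton:]

# A big library of condition-action rules; the first matching rule fires.
RULES = [
    (lambda s: "soda bottle" in s and "cap stuck" in s,
     "grip the cap with a rubber sheet and twist"),
    (lambda s: "cap stuck" in s,
     "look for somebody who looks really strong"),
    (lambda s: "smell something bad" in s,
     "run away"),
]

def react(situation):
    for condition, action in RULES:
        if condition(situation):
            return action   # first match wins: one crude conflict-resolution rule
    return "no rule applies"

print(react({"soda bottle", "cap stuck"}))
# -> 'grip the cap with a rubber sheet and twist'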
The funny part is that when you made such a system, the result was that, if you had a computer that was responding to some signal that's coming in at a-- I'm talking about the days when computers were only working at a few kilohertz, a few thousand operations a second. God, that's slow, a million times slower than what you have in your pocket. And if you give priority to the signals that have to be reacted to very fast, then what happens if you type to those computers? It would never see them, because it's always-- I saw this happening once. And finally, somebody realized that you should give the highest priority to the inputs that come in least frequently, because there's always-- otherwise, if there's something coming in very frequently, you'll just always be responding to it. Any of you run into this? It took me a while to figure out why. Anyway, there are lots of kinds of problems. And the other day, I was complaining that we didn't have enough words for this. We have hundreds of words for emotions, and here's a couple of dozen. They're in chapters seven and eight, actually, most of these. So here's a bunch of words for describing ways to think, but they're not very technical. So you can talk about remorse, and sorrow, and blah, blah, blah-- hundreds and hundreds of words for feelings-- and it's a lot of effort to find a dozen words for intellectual, for-- what should I call them? --problem-solving processes. So it's curious to me that the great field called cognitive psychology has not focused in that direction. Anyway, here's about 20 or 30 of them. And you'll find them scattered through chapters seven and eight. Here's my favorite one, and I don't know of any proper name for it. But if you're trying to solve a problem, and you're stuck-- and the example that comes to my mind is, if I'm trying to remember someone's name, I can tell when it's hopeless. And the reason is that, somehow or other, I know that there's a huge tree of choices. That's one way to represent what's going on, and I might know that-- I'm sure that name has a Z in it. So you search around and try everything you can. But of course, it doesn't have a Z, so the way to solve that problem is to give up. And then a couple of minutes later, the name occurs to you. And you have no idea how it happened, and so forth. Anyway, the long story is that Papert, and I, and lots of really great students in the '60s and '70s spent a lot of time making little models of problem solvers that didn't work. And we discovered that you needed something else, and we had to put that in. Other people would come and say, that's hopeless. You're putting in more things than you need. And my conclusion is that, wow, it's the opposite of physics. In physics, you're always trying to find-- what is it called? --Occam's razor: never have more structure than you need, because-- what? Well, it'll waste your time. But my feeling was, never have less than you'll need. But you don't know how many you'll need. So what I did, I had four of these, and then I forced myself to put in two more. And people ask, what's the difference between self models and self-conscious processes? And I don't care. Well, what's the difference between self-conscious and reflective? I don't care. And the reason is that, wow, it's nice to have a box that isn't full yet. So if you find something that your previous theory-- going back to Brooks, he was so successful getting simple robots to work that he concluded that the things didn't need any internal representations at all.
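[The interrupt-priority story above can be simulated in a few lines: if the frequent source outranks the rare one, the rare one is starved; rank the rare source higher and both get served. A hypothetical Python sketch; the event rates and the one-event-per-tick CPU are invented:]

import heapq

def run(priority):  # smaller number = served first
    pending, served = [], {"sensor": 0, "keyboard": 0}
    for t in range(1000):
        heapq.heappush(pending, (priority["sensor"], t, "sensor"))  # every tick
        if t % 100 == 0:                                            # rarely
            heapq.heappush(pending, (priority["keyboard"], t, "keyboard"))
        _, _, src = heapq.heappop(pending)  # the CPU serves one event per tick
        served[src] += 1
    return served

print(run({"sensor": 0, "keyboard": 1}))  # {'sensor': 1000, 'keyboard': 0}: starved
print(run({"sensor": 1, "keyboard": 0}))  # {'sensor': 990, 'keyboard': 10}: both served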
And for some mysterious reason, the Artificial Intelligence Society gave him their annual big prize for this very wrong idea, and it caused AI research to sort of half collapse in places like Japan. He said, oh, rule-based systems are all we need. Anybody want to defend him? The odd thing is, if you talk to Brooks, he's one of the best philosophers you'll ever meet. And he says, oh yes, of course, that's wrong, but it helps people do research and get things done. And as, I think, I mentioned the other day, when the Three Mile Island thing happened, there was no way to get into the reactor. That was 1980. And 30 years later when the-- how do you pronounce it? --Fukushima accident happened, there was no robot that could go in and open a door. I don't know who to blame for that. Maybe us. But my picture of the history is that the places that did research on robotics-- there were quite a few places. And for example, Carnegie Mellon was very impressive in getting the Sony dogs to play soccer, and they're still at it. And I think I mentioned that Sony still has a stock of-- what's it called? AUDIENCE: AIBOs. PROFESSOR: Say it again. AUDIENCE: AIBOs. PROFESSOR: FIBO? AUDIENCE: AIBO, A-I-B-O. PROFESSOR: All right, AIBOs, but the trouble is they're always broken. There was a robot here called Cog that Brooks made, and it sometimes worked. But usually, it wasn't working, so only one student at that time could experiment with the robot. What was that wonderful project of trying to make a walking machine for four years in-- there was a project to make a robot walk. And there was only one of it, so first, only one student at a time can do research on it. And most of the time, something's broken, and you're fixing it. So you end up that you sort of get five or 10 hours a week on your laboratory physical robot. At the same time, Ed Fredkin had a student who tried to make a walking robot, and it was a stick figure on the screen. I forgot the student's name. But anyway, he simulated gravity and a few other things. And in a couple of weeks, he had a pretty good robot that could walk, and go around turns, and bank. And if you simulated an oily floor, it could slip and fall, which we considered the high point of the demo, actually. So there we find-- anyway, I've sort of asked you to read my two books for this course. But those are not the only good texts about artificial intelligence. And if you want to dig deeper, it might be a good idea to go to the web and type in Aaron Sloman, S-L-O-M-A-N. And you'll get to his website, which is something like that. And Sloman is a sort of philosopher who can program. There are a handful of them in the world, and he has lots of interesting ideas that nobody's gotten to carry out. So I recommend him. Who else is-- Pat, do you ever recommend anyone else? PAT: No. PROFESSOR: What? I'm trying to think. I mean, if you're looking for philosophers, Dan Dennett has a lot of ideas. But Sloman is the only person, I'd say, who is a sort of real professional philosopher who tries to program, at least, some of his ideas. And he has successful students, who have made larger systems work. So if you get tired of me, and you ought to, then go look at this guy, and see who he recommends. OK, who has a good question to ask? AUDIENCE: So, Marvin, you were talking about how we have a lot of words for emotions. Why do we only have one word for cause? PROFESSOR: It's a mystery, but I spent most of the past couple of days making this list bigger.
But these aren't-- you know, these are things that you do when you're thinking. You make analogies. If you have multiple goals, you try to pick the most important one. Or in some cases, if you have several goals, maybe you should try to achieve the easiest one, and there's a chance that it will lead you into what to do about the harder ones. But a lot of people think, mostly in England, that logic is a good way to do reasoning, and that's completely wrong. Because in logic, first of all, you can't do analogies at all, except at a very high level. It takes four or five nested quantifiers to say, A is to B as C is to which of the following five. So I've never seen anyone do analogical thinking using formal logic, first-order or higher-order predicate calculus. What's logic good for? It's great after you've solved a problem. Because then you can formalize what you did and see if some of the things you did weren't unnecessary. In other words, after you've got the solution to a problem, which you've got by going through a big search, you finally found a path from A to Z. And now, you can see if the assumptions that you had to make to bridge all these various little gaps were all essential or not. Yes? AUDIENCE: What kind of examples would you say show that logic can't do analogies? Like, well, water is [INAUDIBLE] containment, like why [INAUDIBLE]? PROFESSOR: Well, because you have to make a list of hypotheses, and then let me see if I can find Evans. The trouble is-- darn, Evans' name is in a picture. And Word can't look inside its pictures. Can PowerPoint find words in its illustrations? Why don't I use PowerPoint? Because I've discovered that PowerPoint can't read pictures made by other programs in the Microsoft Word suite. The drawing program in Word is pretty good, and then there's an operation in Word, which will make a PowerPoint out of what you drew. And it's been 25 years, and Microsoft hasn't fixed the fatal errors that it makes when you do that. In other words, I don't think that the PowerPoint and Word people communicate. And they both make a lot of money, so that might be the reason. Where was I? AUDIENCE: Why logic can't do [INAUDIBLE]. PROFESSOR: Well, you can do anything in logic, if you try hard enough, but A is to B as C is to X is a four-part relation. And you'd need a whole pile of quantifiers, and how would you know what to do next? Yes? AUDIENCE: Can you talk a bit about the situation in which we are able to perform some sort of action really fluently and really well, but we cannot describe what we're doing? The example I'd give is, say, I'm an expert drummer from Africa, and I can make these really complicated rhythms. But if you asked me, what did you just do? I'd have no idea how to describe it. And in that case, do you think the person is capable of-- I guess, do you think the person-- we can say that the person understands this, even though they cannot explain it. PROFESSOR: Well, if you take an extreme form of that, you can't explain why you used any particular word for anything. There's no reason. It's remarkable how well people can do in everyday life at telling people how they got an idea. But when you look at it, it doesn't say how you would program a machine to do it. So there's something very peculiar about the idea that-- it goes back to this idea that people have free will and so forth. Suppose I look at this and say, this has a constriction at this point. Why did I say constriction? How do you get any-- how do you decide what word to use for something?
You have no idea, so it's a very general question. It's not clear that the different parts of the frontal lobes, which might have something to do with making plans and analyzing certain kinds of situations, have any access to what happens in the Broca or-- what's the speech production area? Broca, and I'm trying to find the name of the other one. It's connected by a cable that's about a quarter inch thick. AUDIENCE: Is that the Wernicke? PROFESSOR: Wernicke, yeah. We have no idea how those work. As far as I know, I've never seen any publication in neuroscience that says, here's a theory of what happens in Wernicke's area. Have any of you ever seen one? What do those people think about it? What will they tell you about? I was reading something, which said, it's going to be very hard to understand these areas. Because each neuron is connected to 100,000 little fibers. Well, some of them are. And I bet they don't do much, except sort of set the bias for some large collection of other neurons. But if you ask somebody, how did you think of such a word? They will tell you some story or anecdote. But they won't be able to describe some sort of procedure, which is, say, in terms of a language like Lisp. And say, I did this and that, and I took the cdr of this and the car of that. And I put them in this register, and then I swapped that with-- You don't see theories of how the mind works in psychology today. The only parts are-- they know a little bit about some aspects of vision, because you can track the paths of images from the retina to what's called the primary visual cortex. And people have been able to figure out what some of those cortical columns do. And if you go back to an animal, like the frog, then researchers, like [? Bitsey ?] and others, have figured out how the equivalent of the cerebellum in the frog works. They've got almost the whole circuit of how, when the frog sees a fly, it manages to turn its head that way, and stick its tongue out, and catch it. But in the case of a human, I've never seen any theory of how any person thinks of anything. There's artificial intelligence, which has high level theories of semantic representations. And there's neuroscience, which has good theories of some parts of locomotion and some parts of sensory systems. And to this day, there's nothing much in between. David, here, has decided to go from one to the other, and a former student of mine, Bob Hearn, has done a little bit on both. And I bet there are 20 or 30 people around the country who are trying to bridge the gap between symbolic artificial intelligence and mappings of the nervous system. But it's very rare, and I don't know who you could ask to get support to work on a problem like that for five years. Yeah? AUDIENCE: So presumably, to build a human-like artificial intelligence, we need to perfectly model our own intelligence, which means that we are the system. We ourselves are the system that we're trying to understand. PROFESSOR: Well, it doesn't have to be exact. I mean, people are different, and the typical person looks like they have 400 different brain centers doing slightly different things or very different things. And we have these examples. In many cases, if you lose a lot of your brain, you're very badly damaged. And in other cases, you recover and become just about as smart as you were. There's probably a few cases where you got rid of something that was holding you back, but it's hard to prove that.
We don't need a theory of how people work yet, and the nice thing about AI is that we could eventually get models which are pretty good at solving what people call everyday common sense problems. And probably in many respects, they're not the way the human mind works, but it doesn't matter. But once you've got-- if I had a program which was pretty good at understanding why you can pull with a string but not push, then there's a fair chance you could say, well, that seems to resemble what people do. I'll do a few psychological experiments and see what's wrong with that theory and how to change it. So at some point, there'll be people making AI systems, comparing them to particular people, and trying to make them fit. The trouble is, nowadays it takes a few months, if you get a really good new idea, to program it. I think there's something wrong with programming languages, and what we need is a-- we need a programming language where the instructions describe goals and then subgoals. And then finally, you might say, well, let's represent this concept by a number or a semantic network of some sort. Yes? AUDIENCE: That idea of having a programming language where you define goals. PROFESSOR: Is there a goal-oriented language? AUDIENCE: So there is kind of one. If you think about it, if you squint hard enough at something like SQL, where you tell it, here, I want to find the top 10 people in my database with this high value. And then you don't worry about how the system goes about doing that. In a sense, that's redefining your goal [INAUDIBLE]. But you've got to squint a little bit. PROFESSOR: What's it called? AUDIENCE: SQL. PROFESSOR: SQL. AUDIENCE: [INAUDIBLE] database and curates it [INAUDIBLE]. PROFESSOR: Oh, right. Yes, I guess database query languages are on the track, but Wolfram Alpha seems to be better than I thought. Well, he was running it, and Steve Wolfram was giving this demo at a meeting we were at on Monday. And he'd say, well, maybe I'll just say this, and it always worked. So maybe either the language is better than I thought, or Wolfram is better than I thought or something. Remarkable guy. Yes? AUDIENCE: So I liked this example of, you only remember a name after you've given up consciously trying to think about it. Do you think this is a matter of us being able to set up background processes, and then there's either some delay-- like we give up-- there's some delay in the process, where we don't have the ability to correctly terminate processes. Do you think this only works for memory, or could it work for other things? Like could I start an arithmetic operation, and then give up, and then it'll come to me later? PROFESSOR: Well, there's a lot of nice questions about things like that. How many processes can you run at once in your brain? And I was having a sort of argument the other day about music, and I was wondering if-- I see a big difference between Bach and the other composers who do counterpoint, and popular music. In counterpoint, you usually have several versions of a very similar idea. Maybe there's one theme, and you have it playing. And then another voice comes in. And it has that theme upside down, or a variation of it, or in some cases, exactly the same. And then it's called a canon. So the tour de force in classical music is when you have two, or three, or four versions of the same thought going on at once at different times. And my feeling was that in popular music, or if you take a typical band, then there might be four people. And they're doing different things at the same time.
Usually, not the same musical tunes. But there's a rhythm, and there's a timpani. And there are various instruments doing different things, but you don't have several doing the same thing. I might be wrong, and somebody said, well, some popular music has a lot of counterpoint. I'm just not familiar with it. But I think that's-- if you're trying to solve a hard problem, it's fairly easy to look at the problem in several different ways. But what's hard is to look at it in several almost the same ways that are slightly different. Because probably, if you believe that the brain is made of agents, or resources, or whatever, you probably don't have duplicate copies of ones that do important things. Because that would take up too much real estate. Anyway, I might be completely wrong about jazz. Maybe they have just as complicated overlapping things as Bach and the contrapuntal composers did. Yeah? AUDIENCE: What is the ultimate goal of artificial intelligence? Is it some sort of application, or is it more philosophical? PROFESSOR: Oh, everyone has different goals. AUDIENCE: In your opinion. PROFESSOR: I think we're going to need it, because the disaster that we're working our way toward is that people are going to live longer. And they'll become slightly less able, so we'll have billions of 200-year-old people who can barely get around. And there won't be enough people to import from underdeveloped countries to take care of them, or they won't be able to afford them. So we're going to have to have machines that take care of us. Of course, that's just a transient. Because at some point, then you'll download your brain into a machine and fix everything that's wrong. So we'll need robots for a few years or a few decades. And then we'll be them, and we won't need them anymore. But it's an important problem. What's going to happen in the next 100 years? You're going to have 20 billion 200-year-olds and nobody to take care of them, unless we get AI. Nobody seems particularly sad about that. How long-- oh, another anecdote. I was once giving a lecture and talking about people living a long time. And nobody in the audience seemed interested, and I'd say, well, suppose you could live 400 years. And most of the people-- then I asked, what was the trouble? They said, wouldn't it be boring? So then I tried it, again, in a couple of other lectures. And if you ask a bunch of scientists, how would you like to live 400 years? Everyone says, yay, and you ask them why. And they say, well, I'm working on a problem that I might not have time to solve. But if I had 400 years, I bet I could get somewhere on it. And the other people don't have any goal. That's my cold-blooded view of the typical non-scientist. There's nothing for them to do in the long run. Who can think of what should people do? What's your goal? How many of you want to live 400 years? Wow, there must be scientists here. Try it on some crowd and let me know what happens. Are people really afraid? Yeah? AUDIENCE: I think the differentiating factor is whether or not your 400 years is just going to be the repetition of 100 years' experience, or if it'll start to, like, take off, and you'll start to learn better. You'll progress. PROFESSOR: Right. I've seen 30 episodes of The Big Bang Theory, and I don't look forward to the next one anymore. Because they're getting to be all the same. Although, it's the only thing on TV that has scientists. Seriously, I hardly read anything, except journals and science fiction. Yeah?
AUDIENCE: What's the motivation to have robots take care of us as we age, as opposed to enhancing our own cognitive abilities, or a prosthetic body, or something more sociable? What's the joy of living, if you can't do anything, and somebody takes care of you? PROFESSOR: I can't think of any advantage, except that medicine isn't getting-- you know, the age of unhandicapped people went up at one year every four years since the late 1940s. So the lifespan is-- so that's 60 years. So people are living 15 years longer on the average than they did when I was born, or even more than that. But it's leveled off lately. Now, I suspect you only have to fix a dozen genes, or who knows? Nobody really has a good estimate, but you can probably double the lifespan, if you could fix those. Nobody knows, but maybe there's just a dozen processes that would fix a lot of things. And then you could live longer without deteriorating, and lots of people might get bored. But they'll self-select. I don't know. What's your answer? AUDIENCE: I feel that AI is more-- the goal is not to help take care of people, but to complement what we already have, to entertain us. PROFESSOR: You could also look at them as our descendants. And we will have them replace us, just as a lot of people consider their children to be the next generation of them. And I know a lot of people who don't, so it's not universal. What's the point of anything? I don't want to get in-- we might be the only intelligent life in the universe. And in that case, it's very important that we solve all our problems and make sure that something intelligent persists. I think Carl Sagan had some argument of that sort. If you were sure that there were lots of others, then it wouldn't seem so important. Who is the new Carl Sagan? Is there any? Is there a public scientist? AUDIENCE: [INAUDIBLE]. PROFESSOR: Who? AUDIENCE: He's the guy who is on Nova all the time. PROFESSOR: Oh, Tyson? AUDIENCE: Brian Greene. PROFESSOR: Brian Greene, he's very good. Tyson is the astrophysicist. Brian Greene is a great actor. He's quite impressive. Yeah? AUDIENCE: When would you say a routine has a sense of self? Like, we think there's something like a self inside us, partly because there are some processes [INAUDIBLE]. But when would you say [INAUDIBLE]? PROFESSOR: Well, I think that's a funny question. Because if we're programming it, we can make sure that the machine has a very good abstract, but correct, model of how it works, which people don't. So people have a sense of self, but it's only a sense of self. And it's just plain wrong in almost every respect. So it's a really funny question. Because when you make a machine that really has a good useful representation of what it is and how it works, it might be quite different, have different attitudes than a person does. Like it might not consider itself very valuable and say, oh, I could make something that's even better than me and jump into that. So it wouldn't have the-- it might not have any self-protective reaction. Because if you could improve yourself, then you don't want not to. Whereas we're in a state where there's nothing much we can do, except try to keep living, and we don't have any alternative. It's a stupid thing to say. I can't imagine getting tired of living, but lots of people do. Yeah? AUDIENCE: What do you think about creative thinking as a way of thinking? And where does this kind of thinking come from, or anything that comes after?
PROFESSOR: I had a little section about that somewhere that I wrote, which was the difference between artists and scientists or engineers. And engineers have a very nice situation, because they know what they want. Because somebody's ordered them to make a-- in the last month, three times, I've walked away from my computer. How many of you have a Mac with the magnetic thing? And three times, I pulled it by tripping on this, and it fell to the floor and didn't break. And I've had Macs for 20-odd years, or since 1980-- when did they start? 30 years, and they had the regular jack power supply in the old days. And I don't remember. And usually, when you pull the cord, it comes out. Here is this cord that Steve Jobs and everybody designed very carefully, so that when you pull it, nothing bad would happen. But it does. How do you account for that? AUDIENCE: It used to be better when the old plugs were perpendicular, and now it's kind of-- PROFESSOR: Well, it's quite a wide angle. AUDIENCE: Right, so it works at a certain angle. The cable now, instead of naturally lying in that area, actually naturally lies in the area where it doesn't work. PROFESSOR: Well, what it needs is a little ramp, so that it would slide out. I mean, it would only take a minute to file it down, so that it would slide out. AUDIENCE: Right. PROFESSOR: But they didn't. I forget why I mentioned that, but-- AUDIENCE: [INAUDIBLE]. PROFESSOR: Right, so what's the difference between an artist and an engineer? Well, when you do a painting, it seems to me, if you're already good at painting, then 9/10ths of the problem is, what should I paint? So you can think of an artist as 10% skill and 90% trying to figure out what the problem is to solve. Whereas for the engineer, somebody's told him what to do, make a better cable connector. So he's going to spend 90% of his time actually solving the problem and only 10% of the time trying to decide what problem to solve. So I don't see any difference between artists and engineers, except that the artist has more problems to solve than he could possibly solve and usually ends up by picking a really dumb one, like let's have a saint and three angels. Where will I put the third angel? That's the engineering part. It's just improvising, so to me, the Media Lab makes sense. The artists or semi-artists and the scientists are doing almost the same thing. And if you look at the more arty people, they're a little more concerned with human social relations and this and that. And others are more concerned with very technical, specific aspects of signal processing or semantic representations and so on. So I don't see much difference between the arts and the sciences. And then, of course, the great moments are when you run into people, like Leonardo and Michelangelo, who get some idea that requires a great new technical innovation that nobody has ever done. And it's hard to separate them. I think there's some place where Leonardo realizes that the lens in the eye would mean that the image is upside down on the retina, and he couldn't stand that. So there's a diagram he has, where the cornea is curved enough to invert the image, and then the lens inverts it back again, which is contrary to fact. But he has a sketch showing that he was worried about, if the image were upside down on the retina, wouldn't things look upside down? AUDIENCE: [INAUDIBLE] question. Did you ever hear about [INAUDIBLE] temporal memory, like-- PROFESSOR: Temporal?
AUDIENCE: Temporal memory, like there is a system that [INAUDIBLE] at the end of this each year on it. And there's some research. They have a paper on it. PROFESSOR: Well, I'm not sure what-- AUDIENCE: This is Jeff Hawkins' project? I don't know. Yeah, it's Jeff Hawkins. PROFESSOR: I haven't heard. About 10 years ago, he said-- Hawkins? AUDIENCE: Yeah, Hawkins. PROFESSOR: Yeah, well, he was talking about 10 years ago, how great it was, and I haven't heard a word of any progress. Is there some? Has anybody heard-- there's a couple of books about it. But I've never seen any claim that it works. They wrote a ferocious review of The Society of Mind, which came out in 1986. And the Hawkins group existed then and had this talk about a hierarchical memory system. AUDIENCE: [INAUDIBLE]. PROFESSOR: As far as I can tell, it's all a bluff. Nothing happened. I've never seen a report that they have a machine which solved a problem. Let me know if you find one, because-- oh well. Hawkins got really mad at me for pointing this out, but I was really mad at him for having four of his assistants write a bad book review of my book. So I hope we were even. If anybody can find out whether-- I forget what it's called. Do you remember its name? AUDIENCE: [INAUDIBLE]. PROFESSOR: Well, let's find out if it can do anything yet. Hawkins is wealthy enough to support it for a long time, so it should be good by now. Yes? AUDIENCE: Do you think that, to solve a problem, people first start out with some sort of classification in their head of the kind of problem it is, or is that not necessary? PROFESSOR: Yes, well, there's this huge book called Human Problem Solving, and I don't know how many of you know the names of Newell and Simon. Originally, it was Newell, Shaw, and Simon. Believe it or not, in the late 1950s, they did some of the first really productive AI research. And then, I think, in 1970-- so that's sort of after 12 years of discovering interesting things. Their main discovery was the gadget that they called GPS, which is not global positioning satellite, but general problem solver. And you can look it up in the index of my book, and there's a sort of one or two page description. But if you ever get some spare time, search the web for the early paper by Newell and Simon on how GPS worked. Because it's really fascinating. What it did is it looked at a problem, and found some features of it, and then looked up in a table saying that, if there's this difference between what you have and what you want, use such and such a method. So it was sort of-- what did I call it? I renamed it a difference engine as a sort of joke, because the first computer in history was the one called the difference engine. But it was for predicting tides and things. Anyway, they did some beautiful work. And there's this big book, which I think is about 1970, called Human Problem Solving. And what they did is got some people to solve problems, and they trained the people to talk while they're solving the problem. So some of them were little cryptograms, like if each letter stands for a digit-- I've forgotten it. Pat, do you remember the name of one of those problems? John plus Joe-- John plus Jane equals Robert or something. I'm sure that has no solution, but those are called cryptarithmetic. So they had dozens or hundreds of people who would be trained to talk aloud while they're solving little puzzles like that. And then what they did was look at exactly what the people said and how long they took.
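Since the table-of-differences idea is the heart of GPS, a small sketch may make it concrete. This is a toy reconstruction in Python, not Newell, Shaw, and Simon's actual program; the facts and the METHODS table are invented for illustration, and the real GPS had much richer operators and search.

def differences(state, goal):
    # GPS's "difference": here, simply the goal facts the state lacks.
    return goal - state

# The table: each kind of difference is indexed to a method, given as
# (preconditions, effects). All the names here are made up.
METHODS = {
    "outside": ({"door_open"}, {"outside"}),
    "door_open": ({"standing_at_door"}, {"door_open"}),
    "standing_at_door": (set(), {"standing_at_door"}),
}

def gps(state, goal, depth=20):
    # Means-ends analysis: pick a difference, look up its method,
    # achieve the method's preconditions as a subgoal, apply it, repeat.
    if depth == 0:
        return None  # give up, which, as noted above, is itself a strategy
    missing = differences(state, goal)
    if not missing:
        return state
    pre, effects = METHODS[sorted(missing)[0]]
    state = gps(state, state | pre, depth - 1)  # subgoal: the preconditions
    if state is None:
        return None
    return gps(state | effects, goal, depth - 1)

print(gps(set(), {"outside"}))  # reaches a state containing all three facts

The point of the sketch is the lookup: the program never plans from first principles; it just keeps reducing whichever difference the table knows how to reduce.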
And in some cases, where they moved their eyes, they had an eye-tracking machine. And then they wrote programs that showed how this guy solved a couple of these cryptarithmetic problems. Then they ran the program on a new one. And in some rare cases, it actually solved the other problem. So this is a book which looks at human behavior and makes a theory of what it's doing. And the output is a rule-based system, so it's not a very exciting theory. But there's never been anything like it in-- you know, it was like Pavlov discovering conditioned reflexes for rats or dogs. And Newell and Simon are discovering some rather higher level, almost Rodney Brooks-like system for how humans solve some problems that most people find pretty hard. Anyway, what there hasn't been is much-- I don't know of any follow-up. They spent years perfecting those experiments, and writing about-- [AUDIO OUT] --results. And anybody know anything like that? Which psychologists are trying to make real models of real people solving [INAUDIBLE] problems? [INAUDIBLE] AUDIENCE: Your mic [? is off. ?] PROFESSOR: It has a green light. AUDIENCE: It has a green light, but the switch was up. PROFESSOR: Boo. Oh, [INAUDIBLE]. AUDIENCE: We're all set now. PROFESSOR: [CHUCKLES] Yes. AUDIENCE: Did that [INAUDIBLE] study try to see when a person gave up on a particular problem-solving method [INAUDIBLE] how they switched-- in other words, when they switched to [INAUDIBLE]? PROFESSOR: It has inexplicable points at which the person suddenly gives up on that representation. And he says, oh, well, I guess R must be 3. Did I erase? Well. Yes, it's got episodes, and they can't account for the-- you have these little jerks in the script where the model changes. And-- [COUGHS] sorry. And they announced those to be mysteries, and say, here's a place where the person has decided the strategy isn't working and starts over, or is changing something. The amazing part is that their model sometimes fits what the person says. For 50 or even 100 steps, the guy's saying, oh, I think z must be 2 and p must be 7. And that means p plus z is 9, and I wonder what's 9. And so their model fits for very long strings, maybe two minutes of the person mumbling to themselves. And then it breaks, and then there's another sequence. So Newell actually spent more than a year, after doing it verbally, tracking the person's eye motions, and trying to correlate the person's eye motions with what the person was talking about. And guess what? None. AUDIENCE: [CHUCKLING] PROFESSOR: It was almost as though you look at something, and then to think about it, you look away. Newell was quite distressed, because he spent about a year crawling over this data trying to figure out what kinds of mental events caused the eyes to change what they were looking at. But when the problem got hard, you would look at a blank part of the thing more often than the place where the problem turned up. So conclusion: that didn't work. When I was a very young student in college, I had a friend named Marcus Singer, who was trying to figure out how the nerve in the forelimb of a frog worked. And so he was operating on tadpoles. And he spent about six weeks moving this sciatic nerve from the leg up to the arm of this tadpole. And then they all got some fungus and died. So I said, what are you going to do? And he said, well, I guess I'll have to do it again. And I switched from biology to mathematics.
AUDIENCE: [CHUCKLING] PROFESSOR: But in fact, he discovered the growth hormone that he thought came from the nerve and made the-- if you cut off the limb bud of a tadpole, it'll grow another one and grow a whole-- it was a newt, I'm sorry. It's a salamander. It'll grow a new hand. If you wait till it's got a substantial hand, it won't grow a new one. But he discovered the hormone that makes it do that. Yeah. AUDIENCE: One of the questions from the homework that relates to problem-solving. A common theme is having multiple ways to react to the same problem. But how do we choose which options to add as possible reactions to the same problem? PROFESSOR: Oh. So we have a whole lot of if-thens, and we have to choose which if. I don't think I have a good theory of that. Yes, if you have a huge rule-based system and they're-- what does Randy Davis do? What if you have a rule-based system, and a whole lot of ifs fit the condition? Do you just take the one that's most often worked? Or if nothing seems to be working, do you-- you certainly don't want to keep trying the same one. I think I mentioned Doug Lenat's rule. Some people will assign probabilities to things, to behaviors, and then pick the way to react in proportion to the probability that that thing has worked in the past. And Doug Lenat thought of doing that, but instead, he just put the things in a list. And whenever a hypothesis worked better than another one, he would raise it, push it toward the front of the list. And then whenever there was a choice, he would pick-- of all the rules that fit, he would pick the one at the top of the list. And if that didn't work, it would get demoted. So that's when I became an anti-probability person. That is, if just sorting the things on a list worked pretty well, are probabilities going to do much better? No, because if you do probability matching, you're worse off than-- than what? AUDIENCE: [INAUDIBLE] PROFESSOR: Ray Solomonoff discovered that if you have a set of probabilities that something will work, and you have no memory, so that each time you come and try the-- I think I mentioned that the other day, but it's worth emphasizing, because nobody in the world seems to know it. Suppose you have a list of things, p equals this, or that, or that. In other words, suppose there's 100 boxes here, and one of them has a gold brick in it, and the others don't. And so for each box, suppose the probability is 0.9 that this one has the gold brick, and this one has 0.01. And this has 0.01. Let's see, how many of them-- so there's 10 of these. That makes-- Now, what should you do? Suppose you're allowed to keep choosing a box, and you want to get your gold brick as soon as possible. What's the smart thing to do? Should you-- but you have no memory. Maybe the gold brick is decreasing in value, I don't care. But so should you keep trying 0.9 if you have no memory? Of course not. Because if you don't get it the first time, you'll never get it. Whereas if you tried them at random each time, then you'd have 0.9 chance of getting it, so in two trials, you'd have-- what am I saying? In 100 trials, you're pretty sure to get it, but in [? e-hundred ?] trials, almost certain. So if you don't have any memory, then probability matching is not a good idea. Certainly, picking the highest probability is not a good idea, because if you don't get it the first trial, you'll never get it. If you keep using the probabilities at-- what am I saying? Anyway, what do you think is the best thing to do?
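Minsky's answer to the gold-brick puzzle comes right below; first, here is a sketch of the Lenat-style list scheme he just described, where promotion and demotion on a plain ordered list stand in for numeric probabilities. The class, method names, and example rules are invented, a guess at the flavor of the idea rather than Lenat's actual code.

class HypothesisList:
    # Keep candidate rules in a plain ordered list instead of scoring
    # them with probabilities.
    def __init__(self, rules):
        self.rules = list(rules)

    def choose(self, fits):
        # Of all the rules that fit the situation, pick the one
        # nearest the front of the list.
        for rule in self.rules:
            if fits(rule):
                return rule
        return None

    def promote(self, rule):
        # A hypothesis that worked moves one step toward the front.
        i = self.rules.index(rule)
        if i > 0:
            self.rules[i - 1], self.rules[i] = self.rules[i], self.rules[i - 1]

    def demote(self, rule):
        # One that failed moves one step toward the back.
        i = self.rules.index(rule)
        if i < len(self.rules) - 1:
            self.rules[i], self.rules[i + 1] = self.rules[i + 1], self.rules[i]

# For instance, for the bottle-cap problem from earlier:
# hyps = HypothesisList(["find_strong_person", "use_opener", "step_on_it"])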
It's to take the square roots of those probabilities, and then divide them by the sum of the square roots so they add up to 1. So a lot of psychologists design experiments until they get the [? rat ?] to match the probability. And then they publish it. Sort of like the-- but if the animal is optimal and doesn't have much memory, then it shouldn't match the probability of the unknown. It should-- end of story. Every now and then-- I search every few years to see if anybody has noticed this thing, and I've never found it on the web. Yeah. AUDIENCE: So earlier in the class, you mentioned that the rule-based methods didn't work, and that several other methods were tried between the [INAUDIBLE] [? immunities. ?] Could you go into a bit about what these other methods were that have been tried? PROFESSOR: I don't mean to say they don't work. Rule-based methods are great for some kinds of problems. So most such systems make money, if you're trying to make hotel reservations and things. This business of rule-based systems has a nice history. A couple of AI researchers, really, notably Ed Feigenbaum, who was a student of Newell and Simon, started a company for making rule-based systems. And the company did pretty well for a while, and they maintained that only an expert in artificial intelligence could be really good at making rule-based systems. And so they had a lot of customers, and quite a bit of success for a year or two. And then some people at Arthur D. Little said, oh, we can do that. And they made some systems that worked fine. And the market disappeared, because it turned out that you didn't have to be good at anything in particular to make rule-based systems work. But for doing harder problems, like translating from one language to another, you really needed to have more structure, and you couldn't just take the probabilities of words being in a sentence, but you had to look for digrams and trigrams, and have some grammar theory, and so forth. But generally, if you have an ordinary data-processing problem, try a rule-based system first, because if you understand what's going on, there's a good chance you'll get things to work. I'm sure that's what the Hawkins thing started out as. I don't have any questions. AUDIENCE: Could I ask another one from the homework? PROFESSOR: Sure. AUDIENCE: OK. Computers and machines can use relatively few electronic components to run a batch of different types of thought operations. All that changes is the data over which the operation runs. In the critic-selector model, are resources different bundles of data or different physical parts of the brain? PROFESSOR: Which model? AUDIENCE: The critic-selector model. PROFESSOR: Oh. Actually, I've never seen a large-scale theory of how the brain connects its-- there doesn't seem to be a global model anywhere. Anybody read any neuroscience books lately? AUDIENCE: [CHUCKLING] PROFESSOR: I mean, I just don't know of any big diagrams. Here's this wonderful behavioral diagram. So how many of you have run across the word "ethology?" Just a few. There's a branch of the psychology of animals, which is-- AUDIENCE: [CHUCKLING] PROFESSOR: Thanks. Which is called ethology. And it's the study of instinctive behavior. And the most famous people in that field-- who? Well, Niko Tinbergen and Konrad Lorenz are the most famous. I've just lost the name of the guy around 1900 who wrote a lot about the behavior of ants. Anybody ring a bell? So he was the first ethologist.
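To return for a moment to the gold-brick puzzle: the square-root rule just stated is easy to check numerically. With one box at probability 0.9 and ten at 0.01, a memoryless searcher who picks box i with probability q_i on every trial needs an expected sum over i of p_i / q_i trials, and minimizing that subject to the q_i summing to 1 gives q_i proportional to the square root of p_i. The little script below is an editor's check under those assumptions, not something from the lecture.

import math

p = [0.9] + [0.01] * 10  # the priors in the example above; they sum to 1

def expected_trials(q):
    # Gold sits in box i with probability p[i]; a memoryless searcher
    # picking box i with probability q[i] needs 1/q[i] trials on average.
    return sum(pi / qi for pi, qi in zip(p, q))

n = len(p)
uniform = [1.0 / n] * n
matching = p[:]  # "probability matching"
z = sum(math.sqrt(x) for x in p)
root = [math.sqrt(x) / z for x in p]  # the square-root rule

print(expected_trials(uniform))   # 11.0
print(expected_trials(matching))  # 11.0 -- matching is no better than uniform
print(expected_trials(root))      # about 3.8 trials

Always opening the 0.9 box is even worse: if the gold happens to be elsewhere, you never find it, so the expected number of trials is infinite, which is exactly the "you'll never get it" point above.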
And these people don't study learning, because it's hard to-- I don't know why. So they're studying instinctive behavior, which is, what are the things that all fish do of a certain species? And you get these big diagrams. This is from a little book, which you really should read, called The Study of Instinct. And it's a beautiful book. And if that's not enough, then there's a two-volume similar book by Konrad Lorenz, who was an Austrian researcher. They did a lot of stuff together, these two people. And it's full of diagrams showing the main behaviors that they were able to observe of various lower animals. I think I mentioned that I had some fish, and I watched the fish tanks, what they were doing, for a very long time, and came to no conclusions at all. And when I finally read Tinbergen and Lorenz, I realized it had just never occurred to me to guess what to look for. My favorite one was that whenever a fire engine went by, Lorenz's sticklebacks, the male sticklebacks, would go crazy and look for a female. Because when the female's in heat, or whatever it's called-- estrus-- the lower abdomen turns red. I think fire engines have turned yellow recently, so I don't know what the sticklebacks do about that. So if you're interested in AI, you really should look at at least one of these people, because that's the first appearance of rule-based systems in great detail in psychology. There weren't any computers yet. There must be 20 questions left. Yeah. AUDIENCE: While we're on the topic of ethology-- so I know that early on, people were kind of-- they were careful not to apply ethology to humans until about the '60s, E.O. Wilson with sociobiology. So I was wondering about your opinion on that, and maybe you have anecdotes on [INAUDIBLE] pretty controversial around this area especially. PROFESSOR: Oh, I don't know. I sort of grew up with Ed Wilson, because we had the same fellowship at Harvard for three years. But he was almost never there, because he was out in the jungle in some little telephone booth watching the birds, or bees, or-- he also had a 26-year-old ant. Ant, not aunt. A-N-T. I'm not sure what the controversy would have been, but of course, there would be humanists who would say people aren't animals. But then what the devil are they? Why aren't they better than the-- [CHUCKLES] You've got to read this. It's a fairly short book. And you'll never see an animal the same again, because I swear, you start to notice all these little things. You're probably wrong, but you start picking up little pieces of behavior, and trying to figure out what part of the instinct system it is. Lorenz was particularly-- I think in chapter 2 of The Emotion Machine, I have some quotes from these guys. And Lorenz was particularly interested in how animals got attached to their parents-- that is, for those animals that do get attached to their parents. Like alligator babies live in the alligator's mouth for quite a while. It's a good, safe place. And Lorenz would catch birds just when they're hatching. And within the first day or so, some baby birds get attached to whatever large moving object is nearby. And that was often Konrad Lorenz, rather than the bird's mother, who is supposed to be sitting on the egg when it hatches, so that the bird gets attached to the mother. Most birds do, because they have to stay around and get fed.
So it is said that wherever Lorenz went in Vienna, there were some ducks or whatever-- birds that had gotten imprinted on him would come out of the sky and land on his shoulder, and no one else's. And he has various theories of how they recognized him. But you could do that too. Anyway, that was quite a field, this thing called ethology. And between 1920 and 1950-- 1930, I guess, to 1950-- there were lots of people studying the behavior of animals. And Ed Wilson is probably the most well-known successor to Lorenz and Tinbergen. And I think he just wrote a book. Has anybody seen it? He has a huge book called Sociobiology, which is too heavy to read. I've run out of things. Yes. AUDIENCE: Still thinking about the question [INAUDIBLE]. [INAUDIBLE], The Society of Mind, ideas in that book, [INAUDIBLE] the machinery from it. What would the initial state of the machinery be [INAUDIBLE] start something? Is that dictated by the goals given to it? And by state, I mean the different agents, the resources they have access to. What would that initial state look like? PROFESSOR: He's asking, if you made a program modeled on the Society of Mind architecture, what would you put in it to start with? I never thought about that. Great question. I guess it depends whether you wanted it to be a person, or a marmoset, or a chicken, or something. Are there some animals that don't learn anything? Must be. What about the ones that Sydney Brenner studied? AUDIENCE: C. elegans? They [? learned ?] very simple associations. PROFESSOR: The little worms? AUDIENCE: Mm-hmm. PROFESSOR: There was a rumor that if you fed them RNA-- was it them, or was it some slightly higher animal? AUDIENCE: It was worms. PROFESSOR: What? AUDIENCE: RNA interference. Is that what you're talking about? Yeah. PROFESSOR: There was one that if you taught a worm to turn left when there was a bright light, or right, and put some of its RNA into another worm, that worm would copy that reaction even though it hadn't been trained. And this was-- AUDIENCE: That wasn't worms. That was slugs. PROFESSOR: Slugs. AUDIENCE: I think it was [INAUDIBLE] replace the [INAUDIBLE] or something. AUDIENCE: Some little snail-like thing. PROFESSOR: And nobody was ever able to replicate it. So that rumor spread around the world quite happily, and there was a great science fiction story-- I'm trying to remember-- in which somebody got to eat some alien's RNA and got magical powers. AUDIENCE: [CHUCKLING] PROFESSOR: I think it's Larry Niven, who is wonderful at taking little scientific ideas and making a novel out of them. And his wife Marilyn was an undergraduate here. So she introduced me to Larry Niven, and-- I once gave a lecture, and he wrote it up. It was one of the big thrills, because Niven is one of my heroes. Imagine writing a book with a good idea in every paragraph. AUDIENCE: [CHUCKLING] PROFESSOR: Vernor Vinge, and Larry Niven, and Frederik Pohl seem to be able to do that. Or at least on every page. I don't know about every paragraph. Yeah. AUDIENCE: To follow up on that question, it seems to me that you almost were saying that if this machinery exists, the difference between these sorts of animals would be in [INAUDIBLE]. And I think on [INAUDIBLE], we can create like a chicken or a human [INAUDIBLE]. PROFESSOR: Well, no. I don't think that most animals have scripts.
Some might, but I'd say that-- I don't know where most animals are, but I sort of make these six levels, and I'd say that none of the animals have this top self-reflective layer except, for all we know, dolphins, and chimpanzees, and whatever. It would be nice to know more about octopuses, because they do so many wonderful things with their eight legs. How do they manage? Have you seen pictures of an octopus picking up a shell, and walking to some quiet place, and it's got-- there are some movies of this on the web. And then it drops the shell and climbs under it and disappears. It's hard to imagine programming a robot to do that. Yeah. AUDIENCE: So I've noticed, both in your books and in lecture, a lot of your models and diagrams seem to have very hierarchical structure to them. But as you [INAUDIBLE] in your book and other places, passing between [INAUDIBLE] feedback and self-reference are all very important [INAUDIBLE]. So I'm curious if you can discuss some of the uses of these very hierarchical models, why you represented so many things that way instead of [INAUDIBLE] theorem. PROFESSOR: Well, it's probably very hard to debug things that aren't. So we need a meta theory. One thing is that, for example, it looks like all neurons are almost the same. Now, there's lots of difference in geometric features of them, but they all use the same one or two transmitters, and every now and then, you run across people saying, oh, neurons are incredibly complicated. They have 100,000 connections. You can find it if you just look up "neuron" on the web and get these essays explaining that nobody will ever understand them, because typically, a neuron is connected to 100,000 others, and blah, blah, blah. So it must be something inside the neuron that figures out all this stuff. As far as I can see, it looks like almost the opposite. Namely, probably the neuron hasn't changed for half a billion years very much, except in superficial ways in which it grows. Because if you changed any of the genes controlling its metabolism or the way it propagates impulses, then the animal would die before it was born. And so you can't make-- that's why the embryology of all mammals is almost identical. You can't make a change at that level after the first-- before the-- you can't make changes before the first generations of cell divisions, or everything would be clobbered. The architecture would be all screwed up. So I suspect that the people who say, well, maybe the important memories of a neuron are inside it, because there's so many fibers and things-- I bet it's sort of like saying the important memory in a computer is in the arsenic and phosphorus atoms of the semiconductor. So I think things have to be hierarchical in evolution, because if you're building later stuff on earlier stuff, then it's very hard to make any changes in the earlier stuff. So as far as I know, the neurons in sea anemones are almost identical to the neurons in mammals, except for the later stages of growth, and the way the fibers ramify, and-- who knows, but there are many people who want to find the secret of the brain in what's inside the neurons rather than outside. It'd be nice to get a textbook on neurology from 50 years in the future and see how much of that stuff mattered. Where's our time machine? Did you have-- AUDIENCE: Yeah. Most systems have a state that they prefer to be in, like a state that they're most comfortable in. Do you think the mind has such a state, or would it tend to certain places or something?
PROFESSOR: That's an interesting question. I don't-- how does that apply to living things? I mean, this bottle would rather be here than here, but I'm not sure what you mean. AUDIENCE: Well, so apparently, in Professor Tenenbaum's class, he shows this example of a number game. They'll give you a sequence of numbers, and he'll ask you to find a pattern in it. So for example, if you had a pattern like 10, 40, 50, and 55, he asked the class to come up with different things that could describe the sequence. And between the choice of, oh, this sequence is a sequence of the multiples of 5, versus a sequence of the multiples of 10 or multiples of 11, he says something like-- he phrases it like, the multiples of 5 would have a higher [INAUDIBLE] probability. So that got me thinking, why would that be? Would our minds have a preference for having as few categories as possible in trying to view the world around us? Trying to categorize things in as few ways as possible is what got me thinking about it. PROFESSOR: Sounds very strange to me, but certainly, if you're going to generate hypotheses, you have to have-- the way you do it depends on what this-- what does this problem remind you of? So I don't see how you could make a general-- if you look at the history of psychology, there are so many efforts to find three laws of motion like Newton's. Is he trying to do that? I mean, here you're talking about people with language, and high-level semantics, and-- let's ask him what he meant. AUDIENCE: Professor [INAUDIBLE]. PROFESSOR: Yeah. AUDIENCE: This is more of a social question, but there's always this debate about, if AI gets to the point where it can take care of humans, will it ever destroy humanity? And do you think that's something that we should fear? And if so, is there some way we can prevent it? PROFESSOR: If you judge by the recent-- by what's happened in AI since 1980, it's hard to imagine anything to fear. But-- AUDIENCE: [CHUCKLING] PROFESSOR: But-- funny you should mention that. I'm just trying to organize a conference sometime next year about disasters. And there's a nice book about disasters by-- what's his name? The Astronomer Royal. What? AUDIENCE: Martin Rees? PROFESSOR: Martin Rees. So he has a nice book, which I just ordered from Amazon, and it came the next day. And it has about 10 disasters, like a big meteor coming and hitting the Earth. I forget the others, but I have it in here somewhere. So I generated another list of 10 to go with it. And so there are lots of bad things that could happen. But I think right now, that's not at the top of the list of disasters. Eventually, some hacker ought to be able to stop the net from working, because it's not very secure. And while you're at it, you could probably knock out all of the navigation satellites and maybe set off a few nuclear reactors. But I don't think AI is the principal thing to worry about, but it could very suddenly get to be a problem. And there are lots of good science fiction stories. My favorite is the Colossus series by D.F. Jones. Anybody know-- there was a movie called The Forbin Project, and it's about somebody who builds an AI, and it's trained to do some learning. And it's also the early days of the web, and it starts talking to another computer in Russia. And suddenly, it gets faster and faster, and takes over all the computers in the world, and gets control of all the missiles, because they're linked to the network.
And it says, I will destroy all the cities in the world unless you clear off some island and start building the following machine. I think it's Sardinia or someplace. So they get bulldozers. And it starts building another machine, which it calls Colossus 2. And they ask, what's it going to do? And Colossus says, well, you see, I have detected that there's a really bad AI out in space, and it's coming this way, and I have to make myself smarter than it really quick. Anyway, see if you can order the sequel to Colossus. That's the second volume, where the invader actually arrives, and I forget what happens. And then there's a third one, which was an anticlimax, because I guess D.F. Jones couldn't think of anything worse that could happen. AUDIENCE: [CHUCKLING] PROFESSOR: But Martin Rees can. Yeah. AUDIENCE: Going back to her question about the example, and if a mind has a state that it prefers to be in-- would that example be more of a pattern recognition example? So instead of 10, 40, 50, 55, what if it was [? logistical, ?] like, good, fine, great, and you have to come up with a word that could potentially fit in that pattern. And then that pattern could be ways to answer "how are you?" PROFESSOR: Let's do an experiment. How many of you have a resting state? AUDIENCE: [INAUDIBLE] PROFESSOR: Sometimes when I have nothing else to do, I try to think of "Twinkle, Twinkle, Little Star" happening with the second one starting in the second measure, and then the third one starts up in the third measure. And when that happens, I start losing the first one. And ever since I was a baby, when I have nothing else to do-- which is almost never-- I try to think of three versions of the same tune at once and usually fail. What do you do when you have nothing else to do? Any volunteers? What's yours? AUDIENCE: I try not to think anything at all. See how long [INAUDIBLE]. PROFESSOR: You try not to, or to? AUDIENCE: Not to. PROFESSOR: Isn't that a sort of a Buddhist thing? AUDIENCE: Guess so. PROFESSOR: Do you ever succeed? How do you get out of it? You have to think, well, enough of this nothingness. If you succeeded, wouldn't you be dead? AUDIENCE: [CHUCKLING] PROFESSOR: Or stuck? AUDIENCE: Eventually, some stimulus will appear that is too interesting to ignore. AUDIENCE: [CHUCKLING] PROFESSOR: Right, and the threshold goes down till even the most boring thing is fascinating. AUDIENCE: Yeah. AUDIENCE: [CHUCKLING] PROFESSOR: It'd make a good short story. Yeah. AUDIENCE: There was actually a movie that really got to me when I was little. These aliens were trying to infiltrate people's brains, and like their thoughts. And to keep the aliens from infiltrating your thoughts, you had to think of a wall, which didn't make any sense at all, but-- AUDIENCE: [CHUCKLING] AUDIENCE: But now, whenever I try to think of nothing, I just end up thinking of a wall. AUDIENCE: [LAUGHING] PROFESSOR: There are these awful psychoses, and about every five years, I get an email from someone who says that, please help me, there's some people who are putting these terrible ideas in my head. Have you ever gotten one, Pat? And they're sort of scary, because you realize that maybe the person will suddenly figure out that it's you who's doing it, if they-- AUDIENCE: [CHUCKLING] AUDIENCE: [INAUDIBLE] husband [INAUDIBLE] all them together once, and I think they married.
AUDIENCE: [LAUGHING] PROFESSOR: I remember there was once-- one of them came to visit-- actually showed up, and he came to visit Norbert Wiener, who is famous for-- I mean, he's the cybernetics person of the world. And this person came in, and he got between Wiener and the door, and started explaining that somebody was putting dirty words in his head and making the grass on their lawn die. And he was sure it was someone in the government. And this was getting pretty scary. And I was near the door, so I went and got [INAUDIBLE]-- it's a true story-- who was nearby, and I got [INAUDIBLE] to come in. And [INAUDIBLE] actually talked this guy down, and took him by the arm, and went somewhere, and I don't know what happened, but Wiener was really scared, because the guy kept keeping him from going out. [INAUDIBLE] was big. Wiener's not very big. AUDIENCE: [CHUCKLING] PROFESSOR: Anyway, that keeps happening. Every few years, I get one. And I don't answer them. He's probably sending it to several people. And I'm sure one of them is much better at it than we are. How many of you have ever had to deal with an obsessed person? How did they find you? AUDIENCE: I don't know. They found a number of people in the Media Lab, actually. PROFESSOR: Don't answer anything. But if they actually come, then it's not clear what to do. Last question? Thanks for coming.
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
13_Closing_Thoughts.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: So everyone's working on papers, I suppose. When are they due in? AUDIENCE: They're due on Sunday. MARVIN MINSKY: Horrors. Well, when I went to graduate school, it was at Princeton in the math department. And there were all sorts of courses given by very good professors because-- I think I've mentioned-- this is 1950, and the universities in the United States were-- at least the science departments were filled with great mathematicians and physicists and people like that who had been extracted from Europe because of World War II. And the math department at Princeton was very, very, very exciting because, if you had a question, you had a very good chance that the world's leading authority on that subject would be in the next room or two. I was interested in the theory of knots, and there was a professor, Ralph Fox, who was one of the world's experts on knot theory. And in fact, he was so good at it that I gave up-- AUDIENCE: [LAUGHTER] MARVIN MINSKY: --which is a nice, important thing to keep in mind. Always be ready to give up if there's someone doing something better than you, and find the right niche. But anyway, they had courses, and you would register for courses, and everyone got an A in all courses. I'm sorry. There were no grades when I got there because Professor Lefschetz didn't like the idea of grading people, and he was head of the department. And his feeling was that, if somebody wasn't doing good work, then it wasn't that person's fault. It was the fault of the admissions committee. And if anybody got through the hurdles to get there, then they were assumed to be mathematicians, and that was that. But after a couple of years, the provost of the university got annoyed with the math department, and he sent down a message that there would have to be grades in every course, and Lefschetz said, OK. He didn't believe in fighting when there was a good alternative. The alternative was everybody got As in every course, whether they took the course or not. AUDIENCE: [LAUGHTER] MARVIN MINSKY: And I think the end of the story is that, when he retired, the new chairman wasn't as good at evading the authorities. So I have a little discomfort at grades myself. And when I gave this course in the '60s and in the '70s, everybody got an A, and it took several years for the departments to notice. Anyway, we have to give grades at MIT. But all that's great fun. What didn't we cover in this course that you wish we had? I'd like to know what some of you are doing, also. What's your plan to make machines smarter in the future? And how hard is it to find a job where you can do research? I haven't tried getting a new job for a long time. I just tried to hire a former student, but he's starting a huge startup. And if you can't find a job, I guess you can make one if you can fool enough investors into supporting a new operation. AUDIENCE: So most of the work we're going to do is designed around a very specific problem. MARVIN MINSKY: Design what? AUDIENCE: Around a very specific problem. So you're addressing getting a machine to solve some particular problem. There's always a fast path to that, of finding the right set of data and finding the right model that will work reasonably well.
There's not really a business case that can be made for making the general intelligence when there's motivation for some specific application. MARVIN MINSKY: That's right. It goes back to World War II again. The reason we could do so much in those early years about general theories was that the defense department had created this new wing called ARPA, Advanced Research Projects Agency. And I actually don't know what caused it to begin, but a lot of it was involved with trying to make new kinds of instruments to detect intercontinental missiles and heaven knows what else. But they decided to have a basic research branch, because, if you're in the military, one of your concerns is defense. And for some reason that I don't know how to explain-- but a group of people in the Office of Naval Research concluded that the best defense is to be way ahead, in technology, of everyone else. And since there were no particular constraints on them, they didn't have to say, you have to write a progress report every three months, and stuff like that. So anyway, another unique thing about the situation when I started was that the students knew more than the professors about computer science. Now, there were people around MIT who knew about computers, but we had the bad luck of having the best analog computers in the world-- Vannevar Bush and people like that. So when things change suddenly enough, your best people might turn out to be the worst for a few years. And the leaders in innovation came from the Tech Model Railroad Club and a few institutions like that. I certainly knew something about computers and at least the theory of them, but I didn't know anything about modern programming, and neither did anyone else, except Newell and Simon at Carnegie Mellon and John McCarthy. They invented list processing languages. And John McCarthy refined them and basically combined a very clumsy logo-like list processing language with algebraic language. So LISP came after there had been primitive list processors. And all of Newell and Simon's work, including a chess program, had been done with just a programming language that had something like car, cdr, and cons. So it was about 10 years after the beginning. The first computers at MIT came mainly from Lincoln Lab, which had the job of building supercomputers to analyze radar data and stuff like that, which were indeed connected with national defense, and tried to combine radar reports from all sorts of places to get early warnings of activities in space and so forth. When the first Sputnik was launched by the Russians, it was just a little thing going, "beep, beep, beep" in low Earth orbit. And within a couple of hours after the Russians announced that Sputnik was there and they announced its orbit, a couple of hackers at Lincoln Laboratory managed to track the thing. It was going, "beep, beep, beep," and a guy named Brad Howland, who was a teenager, I think, then, hooked a couple of radio receivers up with some flip-flops and things and was able to get its exact trajectory from the Doppler shifts. Anyway, I think I've talked about that history at some point. About early 1970 or so, 10 or 15 years after ARPA started-- well, the big ARPA project was called Project MAC in 1963. And MIT got $3 million a year for designing and building a time-sharing system.
The idea for a time-sharing system had bubbled up in a few places, but it was mainly a couple of other youngsters, John McCarthy and Edward Fredkin, who recognized that, if a computer could compute thousands of operations per second, then it should be able to compute hundreds of operations per second for tens of people. And when it gets to a billion operations per second, then it should be able to compute a million operations per second for 1,000 people, and so forth. And MIT was the first place to develop a time-sharing system. And as I said, they sent this $3 million a year to get that project to work. And because the guy in charge of that was a former professor of psychology, named Joe Licklider, who had been a friend of mine when I was an undergraduate-- he added another million dollars a year and gave it to us to start the AI lab. And if that ever happens to you-- I hope it will. Anyway, as I've complained a lot during the course, we were able to do anything we thought was possible, and we didn't worry about whether it could be done in a year or two or 20. We just had enough money and enough adventurous people to just start doing those things. So in the very first year, we had Jim Slagle making the program that could do integrals at the level that students in [INAUDIBLE] were learning to do. I think I mentioned I made the first program that computed derivatives. And it was just five or 10 lines of code because it worked lexically. It said, if you see xy, replace it by the symbols xdy plus ydx, and so forth. That's how you do derivatives, where d means derivative. And since it's operating on the names of functions, then you just recurse until all the function names have disappeared. And so that was just a page of code. And then Slagle, who was blind, wrote 20 pages of code. And Joel Moses-- what's he now? Provost? AUDIENCE: He was. AUDIENCE: He used to be. AUDIENCE: He retired. AUDIENCE: He was provost. MARVIN MINSKY: Did he retire or go back to work? Probably both. AUDIENCE: He's just an Institute professor here. MARVIN MINSKY: Yep. And he mentioned that it took him a very long time to decode Slagle's 20 pages, because, since Slagle is writing in Braille, poor Slagle has to understand his code, too, and you can't just glance at a page. So he wrote this incredibly intricate, compressed stuff, and it had the weird property that it solved integrals. It did integrals at just about the same speed as good math students. So if you gave it an expression with a couple of sines and cosines and things, it might take 10 or 15 minutes. And everybody was very impressed that this computer could keep up with a human. But of course, that fact had no meaning whatever. That happened to be how slow computers were at that time. The IBM 704-- I guess we had a 701 and then a 704, and I think that machine was doing about 20,000 operations per second, which is probably more than a person can do, but no one knows. Well, so who has an idea on what should be done next? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Yeah, what would you like to do? AUDIENCE: A robot that was [INAUDIBLE] AI system that would interact more with your work. [INAUDIBLE] MARVIN MINSKY: Well, we're going to need robots pretty soon. If you look at, just for example, this aging problem, we're getting more and more people who are crippled in one way or another, handicapped, a huge population of people with mild brain disorders-- of course, serious ones, also. And there are not enough nurses to take care of the disabled aged people.
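Before the discussion turns to robotics, here is a minimal Python sketch of the lexical derivative trick described above. The tuple encoding, the rule set, and every name in it are illustrative assumptions, not a reconstruction of the original LISP program; the point is only that the sum and product rules are pure recursive rewrites on the names of operators.

def d(e, x):
    """Derivative of expression e with respect to the symbol x.
    An expression is an atom (number or string) or a tuple like
    ('+', u, v) or ('*', u, v)."""
    if e == x:
        return 1                     # dx/dx = 1
    if not isinstance(e, tuple):
        return 0                     # any other atom is a constant
    op, u, v = e
    if op == '+':                    # d(u + v) = du + dv
        return ('+', d(u, x), d(v, x))
    if op == '*':                    # d(u * v) = u dv + v du
        return ('+', ('*', u, d(v, x)), ('*', v, d(u, x)))
    raise ValueError('no rule for ' + op)

# d(x*y + x, x) -> ('+', ('+', ('*', 'x', 0), ('*', 'y', 1)), 1)
print(d(('+', ('*', 'x', 'y'), 'x'), 'x'))

The recursion bottoms out when no operator names remain under d, which is exactly the "recurse until all the function names have disappeared" step; simplifying the leftover 0s and 1s would be a separate, equally small rewrite pass.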
And I think that's probably going to be a major need for robotics. As I see it, the problem with robotics is that people build robots. And you look at any laboratory that has built a physical robot-- and Media Lab is a perfect example of it. Or Rod Brooks had a wonderful robot called Cog, with a head and two hands. It took a lot of engineering. And there was only one of it, and it only worked for a few hours a week because it's very complicated, and it breaks. And so the joke is that, if I were starting a robotics lab, I would not permit robots. Once you simulate the robots and get them to do what you want, then you take it out of the university and hire some good mechanical engineers to make the robot you want, but robotics research should not involve robots. And I have mixed feelings about the-- what's the great basketball robot event called? AUDIENCE: Basketball? You mean soccer? MARVIN MINSKY: Well, whatever you call it. They kick a ball, and-- [LAUGHTER] --they try to get it through to a goal, so it is soccer. AUDIENCE: RoboCup. MARVIN MINSKY: Yeah, whatever. That's probably a plus for the high school students who are involved in it. Yes. AUDIENCE: Well, [INAUDIBLE] it had robots that all universities use as a base. Why do we keep building hardware? Is it necessary to keep building hardware back again? MARVIN MINSKY: I'm not sure what your question is. AUDIENCE: When we use computers, we don't build the computers back again. We just create software. MARVIN MINSKY: Yes. AUDIENCE: Aren't we doing the same for robots? And if not, why aren't we just programming the robot to do stuff that we want? MARVIN MINSKY: What I'm saying is that, if people program the robot, then it takes them two weeks to do the simplest thing. If you're programming a simulated robot, then it takes two minutes. AUDIENCE: Why does it take more to program [INAUDIBLE]? MARVIN MINSKY: Because the robot's broken, and there's only one in the lab, and there are eight people who want to use it. And 3/4 of the time, it's down because it's not in mass production. If you have a machine that somebody made, it's probably the first machine that person ever made, so it's no good. Look at the Honda robot, called ASIMO. You've all seen it. But you all saw it 10 years ago. It has not changed, but they say it doesn't fall over so often anymore. AUDIENCE: So there's a French company that does a robot called Nao Robot that walks pretty good. Why don't we buy some robots from some company? MARVIN MINSKY: Because they're no good. They've only got one elbow. They can't reach around. You can buy a hand for $30,000 or $40,000 that has three or four fingers. AUDIENCE: There is probably a trade-off, though, between making a more complex robot that has more interactions you can make. That gets more [INAUDIBLE] than building the actual robot. But simulating the real environment and gravity in those sorts of systems can also be difficult to create, so if you're programming a fake robot, you need to make sure that the environment that it's in is somewhat real, right? MARVIN MINSKY: Well, it depends what you mean by real. It seems to me, if the robot is smart and can learn and it's adaptable and, when a piece of its software doesn't work, then it modifies it-- in other words, if you had a higher level learning machine in the processor of the robot, then it could work in a real environment by learning that, if you do this on a slippery floor, maybe you should lean the other way. Pattie Maes did a little bit of that 20 years ago. What?
AUDIENCE: [INAUDIBLE] rule system that defines that moving [INAUDIBLE] threshold that-- MARVIN MINSKY: Yeah, but you see, if you have a real robot, you've only got an hour a week to debug this thing, and the program is working in real-time, if you look at real robots. Whereas, if you're simulating a stick figure robot, then you can simulate friction and slipping, but you can move it at 100,000 operations per second, maybe. Another thing is there's something weird about the robot people. They tell me there's so much feedback needed that they need a separate computer for each joint. I hear this everywhere. They say, it's distributed. It's really good. Of course, it's still falling over. AUDIENCE: I agree that there are a lot of things about robots that can be simulated on a computer. And I agree that a lot of the things that people are currently working on with real robots would better be first done simulated. But I think that, when you have a robot, especially when you have situations where it interacts with a human being, the reaction of the human being is really different, depending on whether it's simulation or a physical game, because I guess human beings inherently have different reactions to objects with physical embodiments. So I feel, at a certain point, especially when it interferes with humans, such as caretaker robots, you do have to switch to real robots. MARVIN MINSKY: Well, the caretaker robot has to be a smart program that's running a physical robot eventually. It's just that we don't have the smart programs, and we could have had them 25 years ago if the people wanting to make a helper robot didn't have a physical robot in their lab. Look at Cog. Cog is proudly displayed in the MIT museum. I don't know of a single fact or technique or idea that came out of that project in the decade when graduate students wrote bad theses about it. AUDIENCE: So I guess then, ultimately, it's a division of labor problem, because it seems like, when people were testing their code on actual robots, it's like they have too many problems at the same time, whereas you're saying they should focus on the software. And after the software is all good, then they move on to integrating the software with the hardware. MARVIN MINSKY: Yeah, well, the robots in Second Life were pretty good. I don't know of any people who complained that they wanted real ones. AUDIENCE: I'm unfamiliar with the robots at Second Life. What did they do? MARVIN MINSKY: Well, you could make one. I don't think they did anything much. AUDIENCE: Well, they don't fall over. MARVIN MINSKY: But I guess a good example might be-- well, there's Dean Kamen's robot soccer project, and Carnegie Mellon had a robot soccer project for many years. And I've never heard of any interesting discovery that came from that. The Media Lab also-- there are these nice-looking robots that smile. [LAUGHTER] And people are impressed by the way that humans are attracted to interact with these robots. But I think once we know that, it's something to avoid rather than to exploit until the robot says, pleased to meet you-- wouldn't it be nice if it were actually pleased to meet you? The intelligent welcome mat should be our target, even if it doesn't move. If it just sat there and felt good about seeing you, that would be a great achievement. I don't know how you would test for it. AUDIENCE: Have you ever played with a PLEO dinosaur? MARVIN MINSKY: Yes. And there was a cat that Hitachi made. In fact, we ordered a PLEO, but we never got it.
Are they still in production? AUDIENCE: I believe so. I don't know. I believe so. AUDIENCE: They appear to be, yes. MARVIN MINSKY: What? AUDIENCE: They appear to be. MARVIN MINSKY: Yeah, it's a beautiful job. Somebody made a cat-- I'm trying to remember-- another Japanese company. It was an awfully good cat. It wouldn't do much. But every now and then, it would mew and look at you and beg for something, but you couldn't resist it. Yeah. AUDIENCE: So from your experience, do you know how long would it take to [INAUDIBLE] create a simple simulation of the real [INAUDIBLE]? Because you're saying that we should simulate robots in the computer and create software for that instead of creating real robots. And so we need to create the software that simulates the real world. That seems a little complicated for me. Do you have some ideas? How long could it take? How many people? And why aren't we doing that? Isn't there somebody [INAUDIBLE]? MARVIN MINSKY: There are epidemics of ideologies so that, around 1980, the dominant idea of making machines better was to build rule-based systems. And that's probably still the most popular way to program a computer for a complicated commercial application, like making airplane reservations and keeping stock of the parts in the factory and on and on. Now, when the first rule-based systems appeared, they were called production-based languages. Production-- I think the word comes from Emil Post in the 1920s. And the idea is you have a set of symbols-- x, a, y, b, z, where x, y, and z are variables, and a and b are constants-- in other words, an algebraic expression, basically. And if the state of the world or the state of some process is described by this expression, then you change it into something else, like ax. So one operation might be, if there's an a anywhere, let's move it to the left. That's just a simple production. Anyway, if you read my old book on computation, there's a couple of chapters about production systems, and those were the first universal logical systems that could be applied to computers. So although there were no computers in the 1920s when Emil Post started working on that, there had been some discussion back in 1900 or so, when mathematicians like Hilbert, who were interested both in applied mathematics and basic logic theories, were trying to say, what kinds of processes can be represented by what kinds of algebraic or production-like systems? So there was quite a bit of research before computers even appeared. And in the 1950s, making rule-based systems just out of transforming sequences of symbols became popular, and that was fine because those systems are universal. However, to make a production system to play a good game of chess, there might be some point at which just making rules saying, if there's a pawn here and a queen here and a bishop here, do this, maybe you can't express things that way, and you might have to have a higher level expression, such as, if two pieces are under attack at distant parts of the board, then you should decide to make a move that improves your situation in one of these situations and blocks the opponent from improving his situation on the other one. So you can't express that in terms of the positions of the pieces. You have to have a higher level abstraction.
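A minimal Python sketch of a Post-style production system of the kind just described -- repeatedly find a pattern in the current symbol string and rewrite it, halting when no production applies. The particular rules and the first-match control scheme are illustrative assumptions, not anyone's actual system:

def step(s, rules):
    # Apply the first production whose pattern occurs in s.
    for pattern, replacement in rules:
        i = s.find(pattern)
        if i >= 0:
            return s[:i] + replacement + s[i + len(pattern):]
    return None                  # no production applies: halt

def run(s, rules, limit=1000):
    for _ in range(limit):
        t = step(s, rules)
        if t is None:
            return s
        s = t
    return s

# Minsky's example rule, "if there's an a anywhere, move it to the
# left," written as one rewrite per symbol the a must hop over:
rules = [('ba', 'ab'), ('ca', 'ac')]
print(run('cbaba', rules))       # -> 'aacbb': every a bubbles to the front

The rule-based languages of the 1980s kept this skeleton and added pattern variables, working memory, and conflict-resolution policies; the "rules about rules" problem raised next is about letting productions rewrite the rule set itself.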
And what happened in the development of rule-based systems is that you could make rules about rules, but nobody ever got very good at it, and you still don't find six-level systems like the kind proposed in The Society of Mind book, which says, whatever you do, you ought to have multiple levels. And at the third or fourth level, the system should have some reflective ability so that it's keeping track of how successful it's been at something. That's a simple kind of learning. Or it's noticing that abstractions of a certain sort have been very productive in a certain kind of situation described by some other abstraction. Nobody's ever built any programs that go up four or five levels of that sort. And I suppose the reason is that virtually no one has a job where they could definitely plan to spend four or five years. I think David here-- did you say three or four years to map out this animal? AUDIENCE: That's my hope. MARVIN MINSKY: Right. And then you should have the meta-hope that you'll have such a good student or two that you'll only have to spend a year of it, and some young Richard Greenblatt or Bill Gosper or Peter Samson will come up, and you'll notice that he's better than you at this, and that's the best thing. And then you can quit and go to a higher level. AUDIENCE: True. MARVIN MINSKY: But there aren't any jobs like that. We had a sub-lab that split off from the Media Lab. I won't mention its name, but its director got into the habit of wanting a progress report each month. Well, when we were funded by ARPA, there was something about a proposal each year. And guess what our proposal was? Anybody want to guess? AUDIENCE: Intelligent machines? MARVIN MINSKY: We proposed to do what we had done last year. Of course, it's nice when the head of the agency is an old buddy of yours from undergraduate years, and that was Licklider. But that worked out fine. And so our proposals were, in fact, progress reports, and, that way, you could do anything you want. Now, if you had to do that every month, then those proposals would look so flimsy and sick that any good executive would say, you should squash that project. It's not getting anything done. Well, I don't know how to fix that. If we try to deal with the economic collapse of the civilization that we're in, it may be politically hard to have five or 10 year research projects. So I have no idea what's going to happen in the next decade. Yes. AUDIENCE: So two questions. What do you think is the most effective fallback for people to make progress in the [INAUDIBLE]-- in universities, within corporations like what IBM is doing, or what? And also, who do you think should be working on these problems? MARVIN MINSKY: Well, it looks like this-- what do you call the IBM thing? AUDIENCE: Watson. MARVIN MINSKY: Watson. That might be a good thing. It's the first substantial AI project that is trying to combine more than one method, but I actually haven't read even what's available about it. But you've all read about this Dartmouth AI conference in 1956, and that had McCarthy and me from around here and Newell and Simon who were at Carnegie Mellon. I think it was called Carnegie Institute of Technology then, but whatever. And it had a couple of people from IBM, and it had about six or seven assorted individuals from here and there. So IBM had a guy named Nat Rochester who had been involved in designing computers, and he had a lot of good ideas. And I had written on paper a geometry theorem program, and he started a project to actually put that on a computer.
And so for a few years, IBM had a few people working on different small AI projects. Then that disappeared after about 10 or 15 years, and there was essentially very little programming research at IBM, and they were mostly into hardware and commercial software, and so forth. This Watson thing is apparently three or four years old, and it looks like a substantial project. And I'm a little worried that, since they're not very academic, they might not have the idea that, if you've discovered something, the first thing you should do is publish all its details so that you get credit for it. Well, it's hard. Not very many companies encourage the individuals getting credit, and it looks like the guys in that project are able to publish a substantial amount. But I think the first good sign will be if they just dump their code out, too, and let people go over it and copy it and see if there's anything there they can use. It's very rare for any AI researchers to use anyone else's code, as far as I know. It's almost never happened. I don't know. The best example, almost, was Winograd's SHRDLU program, which was exported to Carnegie Mellon and to Stanford. And both of them ran that program quite a bit. I don't know why it died, but apparently Winograd himself hadn't commented it enough to remember how it worked. And the beginners never got very far in improving either its robotic ability, which was non-trivial, or its linguistic ability, which was unique. It had better parsers and better semantic operations, if only for a limited blocks world. Put the big red block on top of the little green block. And it would do things like that. And you could say, put the block wedges on top of the big red block, and do such and such with it, so it had fairly complicated ways to refer to and describe things. But it became a showpiece where you could demonstrate it. Has anyone tried it? I haven't. AUDIENCE: I've looked into it. But it's in Mac LISP, and there isn't even a path to run it. MARVIN MINSKY: So there's no interpreter to run it. AUDIENCE: Yeah. MARVIN MINSKY: It should've had the whole package, of course. But of course, that program was big, and Mac LISP would have only been 2% of the whole thing. Winograd had quite a number of graduate students who finally left and did something else because the project was-- they just found it too hard to understand the code. But who has an idea of what to do next? I bet you all have some fantasy of how to make machines smarter. Yes. AUDIENCE: I was thinking you could get more funding for science fiction writers, and then hopefully one of them will come up with a brilliant idea. [LAUGHTER] MARVIN MINSKY: Yes. AUDIENCE: Greg, you can have a computer. MARVIN MINSKY: How many of you know the name of Greg Egan? His web page has the most exciting mathematical demonstrations; as far as I know, there's nothing else like it. Well, yes. Type Ken Perlin into your Google and get his web page, and I think he must invent two or three incredibly brilliant small programs every week because there are quite a few hundred of these gorgeous demonstrations of one principle after another. Oh, dear. I didn't look ahead. AUDIENCE: [LAUGHING]. MARVIN MINSKY: Another feature of Greg Egan is that he read The Society of Mind and worked a lot of ideas that I hadn't thought of into some of his stories. But another feature of him is that no one has ever met him. That is, if you knock on his door, which is in Perth, he won't answer. And I was discussing this with Greg Benford.
Apparently, there are a couple of people who have seen him, but nobody knows anyone who knows him. You could do that. Maybe that's the way to get a lot done-- AUDIENCE: [LAUGHING]. MARVIN MINSKY: Make a list of all your friends and abandon them. Just think of all that free time you'll have. Are there any other Gregs? Oh, well, Greg Benford. But this isn't fair. I've lost the big chalk. Among the science fiction writers, there are only a few who actually know a lot of science. Benford is quite a good physicist and knows a lot and publishes serious physics papers. Larry Niven knew quite a bit of science but was never a practicing scientist. And so in his science fiction, there are occasional technical errors, like in Ringworld. Ringworld was a big, ring-shaped planet around the sun. So here's the sun, and here's this ring. And the ring is about 8,000 miles wide and goes all the way around its sun in something like Earth's orbit, which means that it has an area of the order of 100,000 Earths. Well, Larry didn't realize that there was a bug. And if you make a ring that's rotating around a sun, well, first of all, there's no particular reason why it should rotate. But if it didn't, it would squash. But it's not gravitationally stable. So as soon as it gets slightly off-center, then, if this is closer, it gets pulled in, and so finally one part of it falls into the sun, and the rest follows it. So in the sequel to Ringworld, some physicists pointed this out to Larry Niven, and so now he had little wires. I forget what they're connected to. They're made of something called "scrith," which is 10 to the 50th times stronger than anything real because it's pretty hard to hold a thing that massive. And oh, then there are little jets, which have been built by the original inhabitants. I forgot all the geometry. So now there's an active system for keeping the ringworld from falling into the sun. And the sequel-- this was built by some disappeared aliens millions of years ago, and the hardware is beginning to break down, so they need to find an alien race with enough engineering to fix it, and blah, blah. Anyway, Larry Niven is awfully good at making up things with a pretty good scientific background. And then when he makes a mistake, he writes a sequel to correct it. So maybe that's actually better than getting it right the first time. The transporters in Star Trek-- Niven's transporters are nice because they take out all the chemicals in your cells that shouldn't be there. So once they have transporters, automatically the people now become nearly immortal, because, if you're being scanned and reconstructed by a transporter, it might as well fix all your bugs while it's at it. Anyway, there's a good idea on every page of this guy, even if it's wrong. So when I read a regular novel, which is very rare, there's usually a good idea every chapter, maybe. But I didn't write Larry Niven's name. Probably he had more ideas than any other writer, but they're not all correct. So how many of you are going to pursue AI? [LAUGHS] Are you going to work on trying to make machines smarter? Here's one. We sure could use them. There are a couple of transhumanist communities, whatever that means. And they're worried that the AI will, as many science fiction writers have imagined, be hostile and take over and want everything for itself. And so there are serious discussions on how to make friendly AIs.
And of course, the master of that was Isaac Asimov, who said that a robot should always put its interests beneath those of its owner or whatever. How many of you know Asimov's laws? And then there was the fourth law that he had to add, which is that it's all right to harm a human if it's to save a large number of others from being harmed. But it's like saying we shouldn't eat meat. It's very hard to make up ethical principles that are foolproof. I'm not sure what the safe AI people have as their Asimov laws. But if you walk around the AI world today, it's hard to take seriously the idea that they might form some sort of threat. Of course, it could happen overnight. Well, we discussed the big threat the other day-- somebody making a virus that kills everything. That's the road toward nanotechnology. I can't get them to propose anything. What happens in your class, Pat? I haven't been there for a long time. AUDIENCE: Oh, I don't know. [LAUGHTER] MARVIN MINSKY: Yes. AUDIENCE: Do you think efforts to augment human intelligence can [INAUDIBLE], rather than [INAUDIBLE] ways that we can make people more intelligent? MARVIN MINSKY: That's a great question because we're probably fairly close to the technology to do something exciting in that direction, except that we're afraid to, because it seems to me that-- suppose you just wanted to do something modest like improve some person's memory. I'll bet that, with a few million dollars and I don't know how many years, you could do something, without understanding how the brain works, to attach accessories to it. I think I mentioned Brindley's first experiment on trying to restore vision to a blind person. I think I mentioned that last time. L-E-Y, I think. So what he did was-- here's his secretary, who's blind, and here's her occipital lobe of the brain. And here's this area. I think it's called area 17 for no good reason. That's the place where the signals get into the brain from the eye. Stupid, isn't it? Here's the eye, and here's the optic nerve, and then here's some other stuff, and it's all the way back here. I presume this is because we used to be an octopus or something, and the eye was here. But anyway, what's more, the wires cross and all sorts of things. But anyway, so here's this piece of brain. And what Brindley does is he makes a little gadget with 64 electrodes. And I've forgotten the details, but I think that these actually have little coils. I believe that the thing was-- there's no wire coming out of her head, but these little coils and electrodes-- this is in the 1950s, and silicone had just appeared, and it looked pretty harmless. So there's this little pad of silicone. He has to take the bone off the back of her skull, which is risky, and so there's the gadget. And as I said, when you stimulate these, I believe 48 of them worked, and about eight of them hurt, and about eight of them didn't do anything. But if you fired up this one, then she described it as being half a matchstick at arm's length. And there were little bars, which is interesting. They were not little round dots. But there were enough of them that she could read letters one at a time. And so that's the 1950s. And as far as I know, it's never been done again. Is there some way to look that up? Yes. AUDIENCE: So they can improve the memory of rats very simply. MARVIN MINSKY: Of rats? AUDIENCE: Yeah. And they can turn off and turn on. MARVIN MINSKY: OK. Yeah.
But anyway, I suspect that, if you got permission to try, I'll bet you could put 1,000 electrodes maybe in a couple of places on the cortex and just poke around, and I'll bet that you could do it without much danger and eventually get enough inputs and outputs to hook it up to something stupid like the Oxford Dictionary, which would be about that big, or make it a memory retrieval system. I know what you'd think, but people having this gadget in their head for a long period of time with a screen producing outputs would learn how to control some of those little spots, and then another part of their brain would learn to control groups of them and so forth. And maybe after a few days or weeks or months, you'd be able to operate this like the touchscreen on your iPhone, and it would provide you with a huge, inexhaustible amount of memory. And for one in a dozen people, it might produce outputs that get right into your mind in some graphic or auditory or unspeakably alien sense, and you'd know something that you didn't know before. So that would be a very thrilling area. And there are lots of epileptics who are having pieces of their brain removed because it's saving their life. And it would be almost risk-free to slide this little gadget in while you're at it because now it's 60 years later, and we know which kinds of silicone implants are tolerated. And I'm sure there are other things. What substances can you put in the body that don't irritate it? Must be quite a few. AUDIENCE: In a sense, we're already doing this brain augmentation thing with our computers or with our smartphones, except for the interface. MARVIN MINSKY: Right. AUDIENCE: [INAUDIBLE] and the device is a lot clunkier. This is just more direct, whereas right now, we have to use our bodies to actuate the machine and then use our senses to get the information back into our mind. MARVIN MINSKY: Right, and we're being very sluggish about it. I don't see why you couldn't put little pads in the toes of your shoes so you wouldn't need a mouse. AUDIENCE: Or even tongue interfaces, where your tongue displays-- where it sends electric signals on your tongue in a 2D grid so you can, I guess, feel some shape. And I saw this talk last week from Microsoft Research working on this tongue-controlling interface. AUDIENCE: [LAUGHING]. AUDIENCE: No, seriously. They gave this to disabled people and put them in this wheelchair, and they could use their tongue to control the wheelchair, and it's almost like they're mind-controlling it. MARVIN MINSKY: Yes, right. AUDIENCE: I think there's actually something that does it with [INAUDIBLE] activity in the brain. So I saw a TED talk on it a while back. AUDIENCE: Oh, the Emotiv device? I actually tried it. It's not that great. AUDIENCE: Oh. [LAUGHTER] AUDIENCE: [INAUDIBLE], and they controlled which way [INAUDIBLE]. [INTERPOSING VOICES] AUDIENCE: I did an internship at a video game company, and they [INAUDIBLE] that device to make the video game. And you have to train it, except it's sort of finicky, because, basically, the way it works is you have this training interface, where they tell you to push or pull, and it associates certain thought patterns with the push or with the pull. But I don't know. Maybe I just didn't practice it enough. It's not that reliable. So for video game or for toy applications, it's OK. But if you actually want to use that to control an Evangelion-style giant robot-- AUDIENCE: Right. [INAUDIBLE] MARVIN MINSKY: This is an old thing.
I don't know if there are any better ones. Yes. The people with the tongue gadget claim very good results, but I don't know what it's like to go around with that thing in your mouth. AUDIENCE: They made a wireless one. It's in the shape of a retainer. MARVIN MINSKY: Oh, OK. Is it anchored to the teeth or something? AUDIENCE: It's basically like a retainer, with wires to [INAUDIBLE] on your teeth, and it sticks on the roof of your mouth. MARVIN MINSKY: Oh, so that doesn't sound so bad. Is it wireless? AUDIENCE: Yeah, yeah. There's a talk by Desney Tan from Microsoft Research, and he showed stuff about that. MARVIN MINSKY: This guy-- I think it's spelled right, but I'm not sure-- is a professor at Stanford, and he made an array of vibrators that go on your back, and I put it on once. It's 20-by-20, I think. And they all work, so it's better than the eight-by-eight thing. And I was able to recognize a telephone. It's connected to a video camera, and it's just a pat on your back. And if you have a TV camera with the right kind of contrast, you can find the door and stuff like that. It's easy. And I recognized a couple of objects. I was only in it for about five minutes, but the interesting thing is that it didn't take any time to feel this input as a shape. I was pretty surprised. That was many years ago. I don't know what's happened since. There are reading devices for the blind that go on your finger. So instead of Braille, you go like this, and it can make a sound, or it can make a vibration in your fingertip. See, the trouble with Braille is you have to have Braille. But this device-- I can't remember who makes it. You just put this little thimble on. And it has tiny vibrators so that it's like Braille, and you can feel the letters go by as you move your finger. And I've forgotten the professor's name for this. Does that ring a bell? Anyway, only about 10% of blind people find it easy to learn to read this way, so it's not widely in production. But that's all beside the point. I think, at some point, somebody in some country will want to try this direct brain connection thing. It might be really good for an older person who's become blind and a senior person who's lost vision-- might not be able to soup up their hearing and other senses well enough or learn fast enough to get around. So if you become blind/deaf like Helen Keller when you're a few months old or-- was she three years old? Can anybody remember? But so she just adapted and became quite a good writer. AUDIENCE: When thinking about augmenting human intelligence, there are really two big realms to think about. One realm is developing the external device that does the thing [INAUDIBLE] that retrieves information. And then the other realm is the interface between the external thing and the human being. MARVIN MINSKY: Yes. What are the representations? And it'd be wonderful to have a gadget with a bunch of brain electrodes so you could discover what kinds of symbols people can learn to distinguish, and we'd probably learn more about the brain in a couple of years than in the previous century because we could get some idea of representations. Do people represent things as LISP structures or as productions? Computer science has come up with lots of ways to represent knowledge. I've never met a neuroscientist who even knows what the-- your smirk. Maybe there are some I haven't met, I'm sure. AUDIENCE: Well, there's me. That's why I smirked.
[LAUGHTER] MARVIN MINSKY: But I've complained about this about four times already, but the best example is to look at the review of The Emotion Machine on the Amazon book page for it. Of course, there's this neurologist who says, he writes all about these K-lines and panalogies and isonomes, and he has no evidence they exist. And it's just so funny because I'm writing this book to help. Here, they're drowning in ignorance, and I've just tossed them this life raft. And they say, well, he hasn't tested this life raft to see if-- [LAUGHTER] They don't like the idea of a suggestion if you haven't proved it. That is not the way to do science. You have to make hypotheses. And if you're a neuroscientist, you can't do that because it's not in your nature, so you have to get them from someone else. It's very simple. How come there are no programming languages that have decent representations of anything? People tell me they use C++, whatever that is. I think Winston wrote a book on how to program in that. And I asked him, do you actually know how to program in that language? He said, of course not. But I know how to write books on how to program. [LAUGHTER] Do you remember that? AUDIENCE: It was a great learning experience. MARVIN MINSKY: You actually said something like that because you write the book, and then you can get other people to put in the examples. AUDIENCE: I couldn't force myself to learn C without writing a book about it. It was too boring. [LAUGHTER] MARVIN MINSKY: Right. If you read my book on computation, which is still actually current because it's about things that they don't teach much, the best thing about the book is that it's full of problems that you can actually solve. But I wasn't good at making up problems, so most of the problems there were made up by a young graduate student named Manuel Blum. And I don't think I gave him enough credit in the book. I just took it for granted that people are good at making up problems, and I didn't realize how rare this was. Manuel and his wife Lenore and his son, whose name I forgot, are all professors at CMU. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Yes, are there any other such families? And as far as I can tell, Winston's book has been substantially replaced by this Norvig and what's his name? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Right. But there are almost no ideas about AI in that book, so we have a textbook that is not helping the field progress very much because it's giving a lot of details about probabilistic learning, which is very nice, but I don't think it has much future. So if you meet anybody who's interested in AI, get them to read Pat's book. Yes. AUDIENCE: So what happened? Why did MIT change so much? So why did it-- so many machine learning people? MARVIN MINSKY: Well, there was a very strange phenomenon called Rodney Brooks, who somehow got a worldwide reputation for inventing-- what was it called? AUDIENCE: Subsumption. MARVIN MINSKY: Subsumption. Subsumption was a formulation related to what the first computer people called "priority interrupt." So instead of having high level programming languages, Brooks pointed out that, if you just have a bunch of statements and give them priorities and then you take all the statements that apply to a situation and take the one with the highest priority and do that, then you can write a PhD thesis and make a movie of an experiment in which a robot actually found a Coke bottle.
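A minimal Python sketch of the priority scheme being described here -- a deliberately bare caricature, not Brooks's actual subsumption architecture. The sensor keys, priorities, and actions are all illustrative assumptions; note that the controller keeps no state, which connects directly to the no-memory complaint that follows:

# Each behavior is (priority, condition, action); whichever applicable
# behavior has the highest priority drives the robot on each tick.
RULES = [
    (3, lambda s: s['obstacle'],  'turn-away'),
    (2, lambda s: s['sees-coke'], 'approach-coke'),
    (1, lambda s: True,           'wander'),
]

def arbitrate(situation):
    applicable = [(p, act) for p, cond, act in RULES if cond(situation)]
    return max(applicable)[1]   # highest-priority applicable rule wins

print(arbitrate({'obstacle': False, 'sees-coke': True}))   # approach-coke
print(arbitrate({'obstacle': True,  'sees-coke': True}))   # turn-away
print(arbitrate({'obstacle': False, 'sees-coke': False}))  # wander

# Purely reactive: the moment the bottle leaves the field of view,
# 'sees-coke' goes false and the controller forgets it ever existed.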
Now, the fact that the first 40 times, it didn't find the Coke bottle because, when it got near the table, it couldn't see it anymore and it had no memory was ignored by the worldwide community. And so some of Rodney Brooks's ideas swept the entire world, stopped research in Japan, because they all turned toward saying, if you just observe the environment properly, you can solve a lot of problems without making plans. And he won the top prize that the AI Society gave out. It's a complete mystery to me. Here are four or five really bad ideas that swept the world and replaced almost all other AI research for a decade or two. Wonderful. That's why there are no robots that can go into the reactors in Japan and fix things because all the robots we have only react to-- never mind. Yeah. AUDIENCE: So how do you see we're changing that [INAUDIBLE]? Now we have a problem that maybe most of the stuff in computer science or [INAUDIBLE] in AI [INAUDIBLE] probabilistic inference or some kind of supervised learning. And that makes a lot of money. MARVIN MINSKY: Well, that's a problem. The best way to make money is to steal it. So why doesn't everybody just turn and go down the corner and mug someone? AUDIENCE: [INAUDIBLE] thinking long-term. MARVIN MINSKY: Yeah, there's a real problem here. AUDIENCE: Because you can mug a lot of people entirely using [INAUDIBLE]. MARVIN MINSKY: Right. [LAUGHTER] You've got it. AUDIENCE: And aren't there any other smart professors that-- why can't they see that? Why can't they see that? MARVIN MINSKY: It beats me. AUDIENCE: Winston can see it. MARVIN MINSKY: I'd like to hear some theories of why the textbook by Russell and Norvig doesn't mention society of mind theories, for example. I mean, they've been out there for 30 years. So I don't know what the problem is. AUDIENCE: But 250,000 people have taken that course. MARVIN MINSKY: Well, they're either going to get worse or better. AUDIENCE: Whoops. Has anybody watched some of this course? Has it started yet? AUDIENCE: Oh, it's almost done. MARVIN MINSKY: OK. Have any of you looked? No one here? AUDIENCE: I looked [INAUDIBLE]. AUDIENCE: I think [INAUDIBLE] already taking classes. MARVIN MINSKY: I couldn't hear. AUDIENCE: I think it would have more appeal if we weren't already taking classes here. Most people I know who are taking that are working in industry or they're in high school. AUDIENCE: It seems to me that-- when people take machine learning, they think they are very smart because they can do computations with a lot of symbols. And they think, oh, I'm such a genius. I'm not saying that I'm smart. But it seems to me that dealing with symbols makes people think they are very smart, and that makes the situation even worse because you have, maybe, people that could solve this problem. And they are, oh, I can do a lot of computation, and there are these really hard formulas that only I know. It seems that the theory of symbols is more complex and harder, and the people tend to stick to theories that have symbols and formulas. MARVIN MINSKY: I guess so. But to me, mathematics is really hard, and symbols are easy. So it's funny. AUDIENCE: Yeah, but there are still symbols. MARVIN MINSKY: Yeah, well, does the machine learning course ever mention Pat Winston's-- AUDIENCE: [INAUDIBLE] AUDIENCE: Machine learning? MARVIN MINSKY: Yes, because what Winston does is he looks at the differences between things and gets rid of them. And you can think of it as a symbolic servo.
But there are a lot of very simple, common sensical operations in the methods that he describes, but none of them are easy to express as equations. Maybe programming isn't technical enough, and you need to dress things up in an algebra. AUDIENCE: [INAUDIBLE] as an energy minimization principle or something. MARVIN MINSKY: That's cute. Yes. Getting closer to the goal-- that's all it is. [LAUGHTER] I think I mentioned Newell and Simon had this great idea of getting close to the goal. And they gave up when they couldn't solve the missionaries and cannibals problem because there's a step where you have to take three people back in the boat, and their program refused to do that because it wasn't getting them closer to the goal. And the funny part is that, if you used pairs of operations as the unit, then their method would work. Next question is, how do people make a very beautiful theory and then not take the next step and someone else does it ten years later? I guess it always happens. Yes. AUDIENCE: I would say I think machine learning is incredibly useful, and we shouldn't discredit it because it really isn't AI. And I think a lot of the problems between machine learning people and AI people is that they're trying to do different things and they just have shared history and see different important problems. MARVIN MINSKY: Well, they're not doing different things. One is doing a subset of the other. [INTERPOSING VOICES] AUDIENCE: --the main problems of intelligence and foundation, but they are making a lot of progress in solving certain types of really hard problems. And this can be really useful because a lot of these are problems that people aren't able to solve themselves and aren't good at. [INTERPOSING VOICES] And AI is the science. MARVIN MINSKY: One at a time. Yeah. AUDIENCE: Sorry. MARVIN MINSKY: You could say that about anything, though. Surely, what's really important is not solving all sorts of useful things but getting to a higher level where you can solve things that no one else can solve. AUDIENCE: [INAUDIBLE] be important for what, the importance of the advancement of science? AUDIENCE: Yes. AUDIENCE: Important for miscellaneous applications that are useful to people? MARVIN MINSKY: But in the long run, you're dead. AUDIENCE: To say that everything that doesn't just advance science is completely useless is crazy. MARVIN MINSKY: No, I'm saying science has disappeared before-- the dark ages-- it's interesting. But if you say it's just advancing science, yes, I take the position that we should spend a fraction of our world resources on long-term things. And it's no excuse-- and no one should say let's spend all our resources on immediate, useful, moneymaking, labor-saving processes. AUDIENCE: I'm not saying that. I think we should spend quite a bit of money on science. But I'm saying that doesn't mean that anything that's not science is-- MARVIN MINSKY: Yeah, but if you look at AI, I only know about 10 people in the entire country who are thinking about hard problems in AI. So it's not just a mild complaint. There's some kind of pathology. There's no support. AUDIENCE: I don't think anybody's denying that machine learning is useful. It's just the problem is that it seems like the obsession with machine learning is overtaking anyone trying to make progress in AI. So in the ideal world, there are the people who work on machine learning, but then there are also the people who are solving the big AI problems.
But it seems like, in the current world, so many people are doing machine learning, and there are so many resources for it, that there is not enough for people who are doing AI. MARVIN MINSKY: Yeah, my complaint is that there's almost no one doing AI. It's very peculiar. So what is it about the current culture of science and technology that causes this to happen? A great example would be-- what's the income of IBM? It's a big company. How come they're doing Watson? AUDIENCE: It's because it's cheaper than the Super Bowl ads. [LAUGHTER] They got a lot more publicity out of it than they would've gotten if they'd spent all that money on Super Bowl ads. MARVIN MINSKY: I guess so. But it's a serious company. It fired its real AI group, maybe properly. AUDIENCE: Perhaps a supplementary observation is that when [INAUDIBLE] a long time ago, it was regarded as a great success, and you could get a program to do something for the first time. And the field was taken over by people who want to measure how much better a good thing is than the next thing. And there are only a few kinds of problems that are susceptible to that kind of-- MARVIN MINSKY: Scoring. AUDIENCE: --detailed comparison. As a consequence, people working on those kinds of problems are not doing things that ought to be done, like doing something for the first time. So it's a lot easier to write an article about how your parser is 1% better than somebody else's parser than it is to get a program to have common sense, for instance. MARVIN MINSKY: Yeah. Well, now there are four or five common sense research groups, I think. How many? I don't know if any of them make a living yet. AUDIENCE: Well, they're collecting common sense. I'm not sure they have common sense. You can measure how many things you know, but it's unclear that they're measuring how effective they are with the stuff that they know. MARVIN MINSKY: Yeah. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: I wonder how hard it would be to make a program that knows a lot of mathematics and those kinds of things that mathematicians do. But somebody would have to pay them to do it. And where do you get support for common sense in mathematical reasoning? It's a beautiful subject. AUDIENCE: Well, there was [INAUDIBLE], and there was Maxima, and there was Slagle, and there was Moses. So there were lots of [INAUDIBLE]. MARVIN MINSKY: Right, and Wolfram. It might be, for all his faults, Wolfram's going to get people to work on real AI, because he's hired some people to try to make-- what's it called? WolframAlpha, that's cool. To make it have more and more common sense. Yeah. AUDIENCE: I also think that another challenge in trying to make systems that want to do math or the things that mathematicians do is deciding what problems are interesting and which ones to work on. There is online [INAUDIBLE] verification improving, although those are also practical problems when you get complicated things that don't support that as well. Deciding what should be proven is something that I've never seen happen, and it doesn't seem really all that feasible at the moment. AUDIENCE: Well, Doug [INAUDIBLE] first piece of work was on a thesis that did a kind of mathematical exploration on the basis of some theory of why an expression or a transformation might be interesting. So even that had some precedent. AUDIENCE: Oh, really?
And it was a system that had started from scratch and proved a few theorems and looked around for things that were interesting and invented the opposite of the least common divisor or something like that. MARVIN MINSKY: Yeah, that was AM. And it was pretty exciting, and that's something that anybody could take up again anytime, and it would be a great thesis topic. Lenat-- the phenomenon in AM was that it discovered division and prime numbers, and I think I talked about it. And as Pat says, it had a bunch of criteria for interestingness, and it would find sets that were unusually large or unusually small and try to make more things like them and that sort of thing. But it's never been followed up. AUDIENCE: [INAUDIBLE] my high school Olympiad math problem. MARVIN MINSKY: Yeah, high school algebra is a wonderful area, and there was [INAUDIBLE] thesis, which did a little bit of it. Now there is so much more known about natural language, you'd think somebody would do a super [INAUDIBLE] program that might actually be good enough to connect into Google so that it would solve problems that real people face every now and then. Yeah, the appearance of the new Siri and Dragon things, which actually answer useful questions, and WolframAlpha might change-- there might be an industry in common sense AI that's actually useful because a few million people might start using it as soon as it gets just a little bit better. And I find that Dragon and Siri-- I haven't used Siri, but Dragon now has a thing called DragonSearch. You talk to it and ask a question in ordinary English, and it goes to the web and makes a Google search sentence out of what you said. And a good fraction of the time, it saves you a half hour of search. AUDIENCE: None of those really have any idea of context, though. A very simple improvement-- and WolframAlpha already does this a little bit-- would be to say, assuming that you meant x, did you mean y or z? And you can click those instead. And that's as far as the conversation in WolframAlpha goes. MARVIN MINSKY: Right. But you're a person, and if it had a lot of information about you, first of all, it would say, what kind of person are you? And you'd give your biography. And maybe we'll get common sense databases that say, well, here are the kinds of things that somebody who likes pole vaulting is likely to know. AUDIENCE: We already have the, what kind of person are you, databases in social networks. That's what it is, right? AUDIENCE: Or Google's Personalized Search. MARVIN MINSKY: Well, I just joined Facebook by mistake. [LAUGHTER] And all sorts of people want to link, and all you have to do is look at their biography and see what music they like, and you can see you don't want to have that person as a friend. [LAUGHTER] But has anybody written a program to automate this? I could tell them I don't like music written in the last 20 years, and that would get rid of about half of them right off. Of course, they might be Freeman Dyson or somebody who happens to like one of these tunes, but I don't know. So you have to make a hierarchy. If the person happens to be a really good physicist, then we'll flush that particular filter. [LAUGHTER] AUDIENCE: Also, Google's Personalized Search profile thinks that I'm a man because apparently I've searched too many computer science things in the past. [LAUGHTER] MARVIN MINSKY: That's pretty good. Maybe it thinks most people are a man. AUDIENCE: With other women, they get it right.
But if you happen to be a woman who does computer tech stuff and if you happen to do a lot of search on programming, for instance, looking at APIs and things, then it tends to try to think that you're a man. AUDIENCE: Does it store the gender as a Boolean? [LAUGHTER] AUDIENCE: I saw this a while ago, and it was like, according to this, here's what they think that you are, and at the top, it was like, we think you're a man. AUDIENCE: I'm interested to know, though, how manly, exactly, you become [INAUDIBLE]. AUDIENCE: So I don't think they actually tell you their internal representation of who you are, but you just get to see a couple of categories, like male, 25 to 35, interested in computers, interested in et cetera. I'm sure they keep it as [INAUDIBLE]. AUDIENCE: You could cross-correlate that with geography and be like, this is the manliness level at a certain-- [LAUGHTER] MARVIN MINSKY: Is there anything in Facebook that would help us get a common sense database? AUDIENCE: [INAUDIBLE]. [LAUGHTER] MARVIN MINSKY: Or is there no useful knowledge? AUDIENCE: Well, apparently, a few years back for [INAUDIBLE] class, somebody made a program that could find out who was gay or not, even if they don't put, gay, or, not gay, on their profile, based on who their friends are. AUDIENCE: Yeah, my friend was gay, and they found out. [INTERPOSING VOICES] AUDIENCE: That's just statistical correlation. AUDIENCE: It's actually useful because you can specify that you're looking for someone who is 0.75 female. AUDIENCE: How many women's names begin with the letter X? MARVIN MINSKY: Begin with? AUDIENCE: The letter X. That may be your problem. [LAUGHTER] MARVIN MINSKY: Yes, it could be. I heard that China is allowing more names. AUDIENCE: It is allowing? MARVIN MINSKY: I read an article which said there are only about 100 names in most of China. And in fact, there was a law that both your names have to be from this small set. And so there are only 10,000 different pairs of names. Do I mean that? Yeah. AUDIENCE: Family names? MARVIN MINSKY: Are there a lot of family names? AUDIENCE: Well, I feel like there aren't really restrictions for names as long as they're Chinese characters. I heard about a lawsuit in China where somebody wanted to name their kid, something, something, dot com, and the government-- [LAUGHTER] But as long as you pick real Chinese characters, you could name your kid, "Big Poopy Head," and it's OK. In fact, naming your kid "Big Poopy Head" is actually rather common in the countryside because there's some superstition that, if your kid has a really terrible name, then he won't die during childhood, because even the Devil won't want him. [LAUGHTER] MARVIN MINSKY: And for all we know, it's true. AUDIENCE: There's a child psychologist [INAUDIBLE]. AUDIENCE: Here's Big Poopy Head-- [INTERPOSING VOICES] AUDIENCE: Are you only allowed one first and last name, though? Because it seems like you should only need two names and then just give everyone, at most, 64 names, and that should cover all of China, right? AUDIENCE: Well, I think, in Chinese, there's a set of last names that's pretty much standard, and I'm not sure how many, but maybe a few hundred. Then given names-- so family names are usually one character. Occasionally, they're two characters. And given names are either one or two characters. AUDIENCE: And you just get those two. AUDIENCE: And you can pick any character. MARVIN MINSKY: Oh, well, there are a lot of characters. Yeah. Like 7,000?
AUDIENCE: Yeah. AUDIENCE: So it's something like 1,000 by 1,000. Or-- AUDIENCE: 3,000. AUDIENCE: 3,000 by 3,000 by maybe another 3,000. The British solution is to just keep adding simple names. My brother's name is Dylan Neville Thomas Frederick. And you could just keep adding names. And then if you get stuck with the "John Smiths," you just do III, IV, V. MARVIN MINSKY: Well, the article I read must have been wrong, then, or I misunderstood it, because it said somewhere in areas of China there are only 100 names. So if you have more than 10,000 people, you get people with the same name. And anyway, that's their problem. AUDIENCE: Maybe the person in the article couldn't tell the difference between their friend's shoe and their friend's shoe. AUDIENCE: Well, one thing that they also did, historically, in China is that sometimes people have middle names, and the middle name is based on this thing that was written centuries ago, that for each generation, it has a set character that must be your middle name. So your name will be a family name, this middle name, and then your actual given name. But that's becoming more obsolete nowadays. MARVIN MINSKY: Well, in English, we have lots of names like Painter and so forth, but people don't live up to them much. AUDIENCE: Because it's difficult to tell what a given baby is going to do. MARVIN MINSKY: But you can order it around. AUDIENCE: But actually, the historic Chinese system is an interesting way of addressing people, indexing people, because you have the first character, which is the family, and then the second character is the generation, and then the third character is you. So it's like, if your family is X and you're from generation A, then your siblings will be X, A, B; X, A, C; whatever. And then the next generation will be X, B, whatever. OK, maybe I should have picked different [INAUDIBLE], but you know what I mean. MARVIN MINSKY: Like a generation number? AUDIENCE: Kind of, except for it's a character. AUDIENCE: There should be enough IPv6 addresses. [LAUGHTER] AUDIENCE: I feel like we're just going to end up with us all being numbers. That being said, I've always wanted to name my kids something that rhymed with a number, like, one, two, three, four, so that I can just call them by their number without them realizing that I'm calling them by their number. [LAUGHTER] MARVIN MINSKY: That's a great idea. Yeah. [INTERPOSING VOICES] AUDIENCE: It's a really good name. MARVIN MINSKY: You could read the number of letters in the name, like four. AUDIENCE: You name your fourth kid "Forest." AUDIENCE: "Four," I'm here. [LAUGHTER] AUDIENCE: You could name your third "Tray." [LAUGHTER] MARVIN MINSKY: Eventually the name could be longer, and it could be a whole little semantic net with a little LISP structure in it. There was a nice science fiction story about some culture where you had to know your genealogy back hundreds of generations. I forgot the author. But it was made into a movie. Well, who wants the last word? AUDIENCE: [INAUDIBLE]. AUDIENCE: Someone else can have it. MARVIN MINSKY: Yeah, do we have any announcements to make? AUDIENCE: No, just final projects are due on Sunday. AUDIENCE: Do you know what you're teaching next semester? MARVIN MINSKY: We're trying to make a plan for some kind of weekly seminar that's not for credit, although I might have to make it for credit for financial reasons or nobody will pay me. [LAUGHTER] But we could go back to giving everyone As again.
AUDIENCE: So you won't be teaching this subject until next fall, then? MARVIN MINSKY: I guess not. I haven't absolutely decided, but I'm pretty sure. Can they take your class? How many students do you have each year? AUDIENCE: Well, we've got room for about 40, and usually about 120 sign up. MARVIN MINSKY: And then you weed them out. AUDIENCE: Yes. [LAUGHTER] MARVIN MINSKY: I'd like to borrow your weeding program. OK, well, thank you all for coming. It's been fun. [APPLAUSE]
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
8_Question_and_Answer_Session_2.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: Well, I don't have a lecture. Go ahead. AUDIENCE: I had a random question. MARVIN MINSKY: Great. AUDIENCE: So you've been a teacher for a very long time. Have you noticed any patterns in the students over the years or decades? MARVIN MINSKY: Have I noticed any pattern in students? AUDIENCE: Yeah, like intellectual patterns or just people you're interested in, just anything. MARVIN MINSKY: Well, a few. The foreigners seem better educated than the Americans. There are more girls. When I came to MIT, it was about 20%. And I think now it's 53%. Does anyone know? AUDIENCE: It's like 48%. MARVIN MINSKY: What? AUDIENCE: 48%. MARVIN MINSKY: 48%? I read that it actually went past 50 for a few minutes. AUDIENCE: [LAUGHS] MARVIN MINSKY: No, I think I've complained about the future though, which is that a large proportion of my students, by students I mean the ones whose thesis-- I hate to say supervised, because in the case of Pat Winston, for example, I learned much more than I-- or Sussman. But most of the students became researchers or faculty members eventually. And now it varies. Now very few of them do. I'm not sure of all the reasons. In the 1960s, which is a long time ago, the universities were still growing, as an aftereffect of World War II, I suppose. I really don't know what caused these major trends. But there were also a lot of career research institutions that were large and growing. Even General Motors had places where there was some basic research. IBM was a big research laboratory that was supporting some very abstract and basic research of various sorts. I don't think there's very much of that now. Even CBS Laboratory. Westinghouse was doing interesting robotics. And of course Stanford Research Institute, which had no relation to Stanford. Still exists, and it's still pretty good. But in those early days, it was one of the three or four richest computer science and artificial intelligence research places. There was a place called the RAND Corporation, which I think still exists. Does anybody-- AUDIENCE: Yeah. MARVIN MINSKY: I don't know what it does. Any idea? AUDIENCE: They do government [INAUDIBLE] sort of things, just in terms of writing and [INAUDIBLE] AUDIENCE: They make some pretty important things, but not necessarily about war, economy, games, or politics [INAUDIBLE] MARVIN MINSKY: But in the 60s, it had a lot of basic research. It had Newell and Simon and me and a few other people. And we just went there, and you could walk on the beach in Santa Monica and go to your office and talk and do things. And no one ever bothered us. And we wrote lots of little papers. Anyway, grumble, grumble. Another feature was that places like the National Institutes of Health had five year fellowships. And now you have to renew-- there are very few appointments of that sort anywhere. And usually, no sooner do you get funded than you're starting to write proposals for the next year. And some people want reports every quarter. And Neil Gershenfeld, who was running a big lab here, wanted reports every month. And some of us finally gave up on that. That's a long answer. So if you want a career in being a professor, it's just harder to find now than it was then.
And so a lot of people recognize this pretty early and find some place to work in Wall Street and stuff like that. There are lots of jobs for smart people. But then you have to sneak your research in on the side. Anybody can think of a way to fix it? [LAUGHTER] In the last 20 years, Taiwan made 100 new math departments, I read somewhere. I don't know if any of you know anything about Taiwan. I just wonder if that-- AUDIENCE: Yeah. MARVIN MINSKY: Yes, were they successful? [LAUGHTER] Is there a lot of research there? AUDIENCE: No. MARVIN MINSKY: Very often, when a government decides on the right thing to do, it doesn't work. I had some friends in Italy who were trying to start an AI group. And they had accumulated a critical mass in-- what's the big city in the north-- Milan. And then some government committee said, oh, there is a bunch of computer scientists there, but there are no good computer scientists in Pisa and Verona. So the government can order a professor to leave one place and go somewhere else. So the next year, there were no groups. And occasionally, there are people like Isaac Newton who liked to work alone. [LAUGHTER] But I got the impression that the output of the Italian researchers diminished after that. Might be wrong. How about a more technical question? Thanks. AUDIENCE: It looks like you had a complicated diagram concerning stories. Do you recall it-- the one with all the layers? MARVIN MINSKY: Yeah. AUDIENCE: Was that meant to be a bi-directional diagram? Because it worked from the bottom up as well as the top down. MARVIN MINSKY: I'm confused about whether that-- let's see if I can find it. Why did this shut down? Do I dare press start? AUDIENCE: It's alive on the screen. MARVIN MINSKY: Oh my gosh. [LAUGHTER] I never saw that phenomenon before. AUDIENCE: Could you do [INAUDIBLE] displays? MARVIN MINSKY: Yeah, I can. [LAUGHTER] AUDIENCE: There should be a button that changes [INAUDIBLE] AUDIENCE: Oh, here we go. MARVIN MINSKY: What? Did it go on? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Oh, it's up. Oh well. It might be in this random lecture. How do I get rid of those? AUDIENCE: I think you might be able to go into View at the top to get rid of it. MARVIN MINSKY: There's a sort of bug in the toolbox thing on the Macintosh, which is, if you make one of these too long, there's no way to get rid of it except to restart the machine in some other mode. I can't catch it. Maybe this works. Oh well. That diagram-- there are two hierarchical diagrams. The theme of The Emotion Machine book is mostly the six layers: instinctive, built-in reactions; learned, conditioned reactions; and going up to reflective and self-reflective and so on. And the other diagram starts out with just a neural net, and then things like K-lines, which are ways to organize groups of activities, and then frames and trans-frames. A trans-frame is a way of representing knowledge in terms of how an action affects a situation-- a particular situation plus an action produces a new one. And then a story is usually a chain of trans-frames. And of course, a meaningful story is one-- which I didn't have a level for-- good stories versus useless stories. So somewhere at a very high level, we all have knowledge of, if you're facing some sort of problem, what kind of strategy might be good for solving that kind of problem? And in that case, each layer is made of things in the lower layer. Whereas in The Society of Mind hierarchy, each layer does different things, operating on the results of the other layers.
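A minimal sketch of the trans-frame idea just described, in Python: a situation, an action, and the resulting situation, with a story modeled as a chain of such frames. The slot names are invented for illustration; real trans-frames carry many more slots (actor, origin, destination, instrument, and so on).

from dataclasses import dataclass

# A toy trans-frame: an action takes one situation to another.
@dataclass
class TransFrame:
    before: frozenset   # facts that must hold before the action
    action: str
    after: frozenset    # facts that hold afterward

def run_story(state, story):
    """A story as a chain of trans-frames applied in order."""
    for tf in story:
        if tf.before <= state:                      # preconditions met?
            state = (state - tf.before) | tf.after  # apply the transition
        else:
            print("story breaks at", tf.action)
            break
    return state

story = [
    TransFrame(frozenset({"hungry", "at-home"}), "walk-to-kitchen",
               frozenset({"hungry", "in-kitchen"})),
    TransFrame(frozenset({"hungry", "in-kitchen"}), "eat",
               frozenset({"fed", "in-kitchen"})),
]
print(run_story(frozenset({"hungry", "at-home"}), story))
# -> frozenset({'fed', 'in-kitchen'})

The bi-directional question in the exchange above amounts to asking whether such chains can run the other way: given a desired "after" state, search backward for frames whose results match it.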
I guess if you look at any mechanism, you'll have a diagram of what the parts do and how they relate. And you'll have a diagram-- which isn't in the machinery-- of what the functions of the different sets of parts are and how those functions are related. So that might be a bug in both books, that I drew the diagrams to look pretty similar. It's a bad analogy. AUDIENCE: [INAUDIBLE] was it a stimulus-response model, where if you fed a story into it, beneath it were the interpretive mechanisms? But does it flow the other way? Is it generative from bottom to top as well? MARVIN MINSKY: Well, in some sense, this trans-frame says, here's a piece of knowledge, which says, if you're in such a situation, this is a way to get to another situation. In the traditional behavioristic-- behaviorist is the word for the generations of psychologists who tried to explain behavior just in terms of reacting to situations. And that wasn't connected to-- what am I trying to say? In the standard behaviorist models, which occupied most of psychology from the 19th century up to the 1950s when modern cognitive psychology really started, you just looked at the animal as a collection of reactions. And then in cognitive psychology, you start to look at the animal as having goals and problems. And then some machinery is used to go from your-- the way you describe your situation, to generating a plan for what you're going to do about that. And then the plan ends up being made of little actions, of course. But before 1950, there were only a few psychologists who-- and philosophers, I should say, going all the way back to people like David Hume and Spinoza and maybe Immanuel Kant. They made up-- if you read their stuff and ignore the philosophy, you see that there was a very slow progress over really three centuries of trying to get from logic, which sort of first appears around the time of Leibniz-- when is Leibniz? 1650 or so? AUDIENCE: [INAUDIBLE]. MARVIN MINSKY: Around, yes. They never met, I believe. So a lot of philosophy has-- which I don't know how to describe the rest of it. But a lot of it is making-- trying to make high level theories of how thinking works. And it's, of course, mixed with all sorts of problems about why the world exists and ethics and what are good things to do and bad and all sorts of mixed up things. And psychology doesn't appear-- I don't think there's a name for that field until the 1880s or so. Who's the first psychologist you can think of? AUDIENCE: William James. MARVIN MINSKY: William James is around 1890. There's a guy named [INAUDIBLE] in Austria, I think. Sigmund Freud starts publishing around 1890. Francis Galton in England is maybe the first recognizable psychologist. He has a big book called Inquiries into Human Faculty which makes good reading right now. Because it has-- each chapter is about a different aspect of what would be called modern cognitive psychology. How do people recognize things? What kinds of memory cues do you use to retrieve stuff? All sorts of sort of-- they're like term papers, the chapters. Some little theory. And you'd say, I can do better than that. And indeed, you could. But at that time, no one could. Yes? AUDIENCE: I feel like psychology is thinking about how people think, which I think [INAUDIBLE]. Aristotle does it. MARVIN MINSKY: Aristotle has more good ideas than, as far as I'm concerned, everyone else put together for the next 1,000 years. It's just very remarkable. And we don't know anything about that because there are no manuscripts.
Anybody-- there's that wonderful play by-- who's the Italian? What? AUDIENCE: Dante? MARVIN MINSKY: No, no, a recent one. AUDIENCE: [INAUDIBLE]. MARVIN MINSKY: No, he's sort of contemporary-- oh well. Anyway, he has a play about searching for the lost-- there's some record that Aristotle had a book of jokes, or rather a book-- he has books on ethics and things like that, and there's a book about humor which is lost. And most scholars think it's not important, because if you look at the 10 existing books of Aristotle-- I think there's about 10-- allegedly by-- and there are students' notes. And almost every subject appears in at least two of them anyway. So one conjecture is that there really isn't very much lost from ancient times. Anyway, if you ever read books, you might as well read one or two of Aristotle's. Because it's-- the translations I'm told are pretty good, and you can actually get ideas from it. Yes? AUDIENCE: I don't know if you ever heard about [INAUDIBLE]. MARVIN MINSKY: Umberto Eco is the writer. [LAUGHTER] Sorry. How does memory work? Something-- something about your expression. Sorry. AUDIENCE: [INAUDIBLE] he tries to explain consciousness. But you say that consciousness is [INAUDIBLE] work. But I don't quite agree with his definition. But basically his definition is that the more integrated the information is, the more conscious the being is. MARVIN MINSKY: The more information you have? AUDIENCE: The more integrated the information is. So for example, I don't know, he used the example of a MacBook that has a lot of information that's not integrated. Like, it is not correlated, and so it's not very conscious. MARVIN MINSKY: That sounds like an important idea and there ought to be a name for it. AUDIENCE: Yeah, he had something. But I think [INAUDIBLE] And this guy is, like, a neuroscientist and psychologist. And like you see some edge cases of people whose brains have been split in half. And it seems that both halves are kind of conscious. But I [INAUDIBLE] because those people, they still have information that's integrated. But it seems that they are not conscious. So there must be some action on that information, even if it's passive or active. But it seems very interesting. MARVIN MINSKY: Well, which of my 30 features that go into that suitcase do they have? It doesn't make any sense to say something is conscious or not, does it? You just said it yourself, that there's some degree of integration perhaps. But can you say what you mean by integration? You probably need to say 20 things and many of them might be independent. Here's an example of something. Many years ago-- people in the 1950s and '60s-- it was very popular to talk about the left and right brain. Have you heard people say-- what's the difference between the left brain and the right brain? AUDIENCE: Rational-- MARVIN MINSKY: Rational versus emotional? Now I haven't heard anybody discuss that for the last 15 or 20 years. AUDIENCE: Although it seems to have become really enmeshed in popular culture now. If you ask anybody what they know about the brain, what the person will say is, well, I'm more of a right-brained person or a left-brained person. That seems to be a sticking point. MARVIN MINSKY: They used to, but I haven't heard that for at least 15 years. I have not heard a single person, psychologist, mention it. Have you? AUDIENCE: I think fMRI has all but obsoleted that theory. AUDIENCE: There's one thing it's good for. It's disproving that.
MARVIN MINSKY: Anyway, I mention-- in The Society of Mind, I think, I had a grumble about it. Which is that, as far as I can tell, it appears to be true that language is located in most people in two very definite areas in the left brain but occasionally, in the right brain of some people. But other than that, as far as I can see, when you actually catalogue the differences that the psychologists reported in the 1960s and '70s, then the things in the left brain were largely adult kinds of thinking, and the things in the right brain were largely childish. Not-- it wasn't that they were rational or not, it was that they weren't very hierarchical and tower-like. And I think there was a nice romantic idea of contrasting emotions and intellect and all those dumbbell distinctions and projecting them onto the brain. But I don't know how I-- what started me on that track. But it's interesting that it was very, very popular and psychologists talked about it all the time when I was a student. And I haven't seen it mentioned by any cognitive psychologist for-- yeah? AUDIENCE: So he mentioned this theory, but we don't-- I believe we don't test our theory with edge cases. So like mental [INAUDIBLE] people or people that-- probably there are a lot of people that-- not a lot, but some percentage of people that are mentally ill or whose brains didn't form so well in some part. And maybe we can have some idea of what consciousness is, just by seeing people that don't have some part of the brain that might interfere with something. I don't know. Like this big brain may give a reason why-- what consciousness is. Because maybe some half a brain [INAUDIBLE] consciousness. MARVIN MINSKY: But I don't understand what you're-- you're trying to-- you're trying to construct a meaning for the word "consciousness." AUDIENCE: Well, Tononi is definitely onto something interesting. And I think the reason that he uses the word "consciousness" is that it's in the sense that people talk about losing or regaining it. And so he can actually experimentally test this theory-- people who are asleep, or in a coma, or dreaming, or locked in, or just in a vegetative state. [INAUDIBLE] this theory actually agrees with sort of a common-sense idea of whether this person is conscious in a temporary way. MARVIN MINSKY: But then is that different from-- if you used the word "thinking" instead, you could say when somebody is in a coma, they're not thinking. AUDIENCE: I don't think that it's good for him to use the word "consciousness." I think that the word "consciousness," to many people, refers to a lot of things that his theory does not treat at all. MARVIN MINSKY: See, it's really dangerous if you-- is it Pinker who likes-- I forget. AUDIENCE: Yeah. MARVIN MINSKY: It's dangerous to feel sure that there is something very important and a central mystery and-- what does he call it? The hard problem of psychology. And so here is really a very smart guy, Steven Pinker. And as far as I can see, he does nothing but harm to the people he talks to, because he gets them to do bad experiments and waste their time. So instead of trying to revive consciousness, it's worth considering that might be a very bad thing to do to yourself and other people. What problem are you trying to solve? Is there any way-- or the problem of qualia, for example. Because the standard view-- and this is something that still is a serious disease even today in philosophy. That is, the idea that the redness of red things is a very fundamental thing.
It's indivisible. It's not describable. It's like-- to those philosophers, that's just as important as when-- who was the Greek-- Democritus, was it? Who discovered atoms? The idea of atoms was an enormous breakthrough. Of course, it took 2,000 years before people realized that, yes, there are atoms-- and they're not indivisible. They're actually complicated systems made of quarks and 5 or 10 other things. So now we don't have atoms anymore. But I think Pinker has the idea that red is irreducible. And you can't describe it. It's like the atom of thought. And these qualia are the fundamental problem of psychology. To me, it's exactly the opposite. Why do we have a word for it? When I say red, do you experience the same thing as anyone else who says red? And it seems to me that somebody who got sick after eating a tomato has a different qualia for red and, you know, blood, violent things, bad. Maybe another child has all sorts of pleasant associations with things that are red. And the concept of red is-- it's not that it's inexpressible because it's indivisible. It's inexpressible because it's connected with thousands of other ideas and experiences. And therefore, there's no way to make a compact definition of it. But it's exactly the opposite. It's not the hard problem of psychology. It's not a problem-- it's something that will fall out automatically without any effort when you have a pretty good theory of psychology. AUDIENCE: But why do we have these qualia [INAUDIBLE] Why? MARVIN MINSKY: Why do we have descriptions of things? Because the animals that don't have compact descriptions of things get eaten very quickly, because they can't recognize things that might hurt them. It's very important to have machinery for recognizing real things. And real things have features. In fact, there is such a thing as redness-- namely, the frequencies of light of what? Around 400 nanometers? What's the frequency? What? AUDIENCE: 700 nanometers? MARVIN MINSKY: That far? That's infrared, isn't it? AUDIENCE: A little bit. 650, 680. MARVIN MINSKY: Anyway. One of the things somebody pointed out to me in later life is that there's only one yellow. There are a lot of shades of red, but it's interesting how tiny the yellow spectrum is. I don't know what it means. If you look around a room there-- I don't see a single one. AUDIENCE: It might be a lion. MARVIN MINSKY: What? AUDIENCE: It might be a lion. MARVIN MINSKY: A lion, yes. Does anybody see anything yellow in here? AUDIENCE: [INAUDIBLE] the consistency of yellow light and can you do it? [INAUDIBLE] MARVIN MINSKY: Yes, what element has a bright yellow line? AUDIENCE: Sodium. MARVIN MINSKY: Sodium. It's, yeah, orange-ish. AUDIENCE: It's orange. Yellow as the sun. MARVIN MINSKY: Yes. Maybe that's very important. It's in the bin. That's great. AUDIENCE: So this color is called warm white. [LAUGHTER] MARVIN MINSKY: In the story, yeah. AUDIENCE: It has a qualia [INAUDIBLE]. [LAUGHTER] MARVIN MINSKY: Warm white. AUDIENCE: Warm white. MARVIN MINSKY: What is it in Finland? AUDIENCE: I don't-- it's called-- the light like that [INAUDIBLE] comes from the tungsten-- the [INAUDIBLE] tungsten light bulbs [INAUDIBLE] MARVIN MINSKY: Yes, that's right. I've stocked up on 20 watt tungsten bulbs. Because my house is full of fluorescent bulbs that are remote controlled by things. And if there's no incandescent bulb in one of the sockets, then the remote controller breaks. These are the things you buy with, what are they called? Little units that-- AUDIENCE: X10? MARVIN MINSKY: X10, right.
The old X10 units-- the receivers burn out if there's no resistive load on them. So I have to have enough incandescent bulbs for the next 20 years or get rid of the X10s. I think they're illegal in Japan or have-- they're still there? AUDIENCE: Yeah. You can still find it in some shops, and people buy them so that [INAUDIBLE] MARVIN MINSKY: I bought a lot of LED light bulbs at the swap fest the other day. Back to AI. AUDIENCE: So in the reading, you seem to imply that evolution is not the best strategy for creating AI. Because, one, it'll take a lot of time. And two, because you'll get stuck at a [INAUDIBLE]. But if we had infinite time and enough mutation, do you think it'd be possible to create a good artificial intelligence using evolution? MARVIN MINSKY: Well, if there's somebody in charge. If you have evolution like on a big planet, then you get a lot of lifeforms. And so the problem is that you might have some really stupid life form that eats the smart ones. But I have a more serious objection to evolution. You see, there have been several projects in the last-- well, since computer science started-- of trying to make problem solvers smart by imitating evolution, which is variation and selection. So I know of about five or six such projects which were fairly well funded and serious. What's most interesting maybe was the one of Doug Lenat, which was just him by himself. So if you look up Douglas Lenat's thesis, which was called-- I forgot the name. AUDIENCE: AM? MARVIN MINSKY: AM, the Automated Mathematician. And a second publication called Eurisko, E-U-R-I-S-K-O. Those were projects in which he did variation and selection. And he imitated chromosomes by having strings of simple operations which were usually things like adding and subtracting and conditional jump and so forth. But there are several bugs with organic evolution. And the most serious one is that evolution doesn't remember what killed the losers. So there's no record in the genes of the mutations which were lethal. And in fact, it's almost the opposite. I'm told that in the human genome-- I believe, is it still 90% doesn't do anything? Some large fraction? AUDIENCE: Someone who [INAUDIBLE] do something, actually. MARVIN MINSKY: Well, they once did presumably. About 90% of the human genome and a lot of other animals is not transcribed into proteins. And a fair amount of it is old inactive viruses. So it has, you know, maybe 90% of some really deadly virus that got incorporated into the genome and gets copied. So the big bug in evolution, to me, is that if you're going to build a system that's going to try to develop a new kind of program by trial and error, the standard approach is to imitate Darwin. And you mutate these programs, you give them a test, and you then copy the programs that pass the test and repeat the cycle. So what happens is you collect-- because you're mutating them as you go along, you're collecting genes that help solve problems. But you're not collecting information about genes that make the animal worse or make it fail to solve problems. So this is true of all of evolution, as far as I can see, that there's no record kept of the worst things that can happen. And so every lethal mutation eventually kills someone. A lethal mutation is one-- you know, you have two copies of every gene, one from a mother and one from a father. And if you get two copies of the same gene-- and most genes have-- a lot of genes are recessive in the sense that, unless you get two of them, they're not expressed.
If you have a lethal recessive gene, that usually means that you can have one of that gene and you're not sick. But if you have two of them, it eventually kills you. And it might kill you before birth, so you don't even get an embryo. Or it might kill you when you're 40 years old, as in that horrible Huntington's disease, where you can carry one and not suffer. But if you get two, it kills you in middle age, which is very expensive for society. Anyway, there's no record. What you want to do is, for each problem solver that doesn't work, you want your evolution program to see why it doesn't work and not make that kind of gene again or whatever was responsible for it. So that's a big bug in Darwinian evolution. And the interesting fact is that every lethal recessive gene will eventually, on the average, kill someone. This is not well-known. You see the arithmetic? Because it has to wait till there are two of them, and then it kills that person. And if you calculate the probabilities-- there's a half chance of getting each of them in each generation-- the math shows that eventually there's one premature death for each recessive gene. It's kind of funny. So it would be nice if we had some way to clean them up once and for all. And then everybody would be a lot healthier. I bet, within the next 20 or 30 years, we'll see some project which is to get rid of-- just take somebody's genome, sweep out all the lethal recessives, and get rid of 100 diseases or more. And suddenly, everybody will live to be 150 years instead of 100. Something like that ought to happen. AUDIENCE: There's a theory as to why recessive genes stay in the population despite killing off people. And there are some genes for which it seems to be the case that, you know, when you get two recessive genes, you die. But having the heterozygous pair gives you some benefit-- protection against a different disease. And that's why it exists. So just getting rid of all the recessive lethal genes might cause problems. MARVIN MINSKY: Wow, I hadn't thought of that. Are there some examples? AUDIENCE: Oh, yeah. Malaria. AUDIENCE: Yeah, sickle cell anemia. Malaria-- so if you have sickle cell, you cannot get malaria. AUDIENCE: If you're heterozygous for the sickle cell disease, [INAUDIBLE] MARVIN MINSKY: But that's not very beneficial, because you usually die when you're around 40. AUDIENCE: No, no, no. If you're heterozygous for sickle cell. MARVIN MINSKY: Oh. AUDIENCE: Then you don't have sickle cell disease, but you have benefits against malaria. MARVIN MINSKY: Oh, I didn't know that. AUDIENCE: It's the best example commonly given in all biology classes. But I'm sure there must be other examples. MARVIN MINSKY: I never took a biology-- that's good. So we could probably find one that-- we just have to tailor it a little bit. Yeah, so the mosquitoes don't like it? Is that what it is? AUDIENCE: It's just bad enough blood that the mosquitoes will ignore you, but not bad enough that you die. MARVIN MINSKY: Does it keep the mosquito from biting you? Or does it make the mosquito sick or what? AUDIENCE: [INAUDIBLE]. MARVIN MINSKY: It's just in-- yeah. AUDIENCE: Yeah. Some stuff I've read about viruses-- you have people changing their theory about viruses. And one thing that could maybe-- in some sense, we're symbiotic with viruses, in some sense [INAUDIBLE]. But like you say, the jump comes at the genome. It may be a process that takes advantage of that.
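The textbook arithmetic behind the recessive-lethal discussion above, as a sketch. For a fully recessive lethal allele with mutation rate u per gamete, the standard equilibrium allele frequency is q = sqrt(u); the mutation rate below is an assumed round number, and note the bookkeeping subtlety that each homozygous death removes two copies of the allele at once.

import math

# Mutation-selection balance for a fully recessive lethal allele.
u = 1e-6                     # assumed mutation rate per gamete per generation
q = math.sqrt(u)             # textbook equilibrium allele frequency
death_rate = q * q           # fraction of births that are doomed homozygotes
copies_in = 2 * u            # new mutant copies per birth (two gene copies)
copies_out = 2 * death_rate  # copies removed per birth (two per death)

print(f"equilibrium allele frequency q = {q:.4g}")
print(f"premature deaths per birth     = {death_rate:.4g}")
print(f"mutant copies in per birth     = {copies_in:.4g}")
print(f"mutant copies out per birth    = {copies_out:.4g} (in == out at equilibrium)")

On this accounting, every mutant copy that mutation introduces is eventually removed by a selective death, which is the "each lethal recessive eventually kills someone" point; since one death removes two copies at once, whether the constant is one death per gene or one per two depends on whether you count mutation events or allele copies.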
So one thought is maybe the viruses are the things that [INAUDIBLE] the losers, remember why losers lost. MARVIN MINSKY: That's a good point. There are lots of things we don't know and wrongly believe. With this synthetic life, there are two groups starting to make-- maybe more. There are probably some secret groups trying to make them, too. AUDIENCE: Also in some sense, the bacteria that live in the human body weigh far more than the cells that are really yours and so forth and so on. You know, they're starting to think that the entire genome [INAUDIBLE] bacteria colonize you are also part of that equation in some way. So, you know, it could be that some of the genetic information in evolution is not kept in your own genome but is kept in all the organisms that are-- that live in the human [INAUDIBLE]. MARVIN MINSKY: Yeah, it's-- AUDIENCE: Is there [INAUDIBLE]? AUDIENCE: Yes, there is. Somebody is trying to sequence the-- MARVIN MINSKY: Bacteria [INAUDIBLE]? AUDIENCE: Yeah, [INAUDIBLE]. Do you know what that's called? MARVIN MINSKY: How many do you think-- AUDIENCE: He's trying to sequence every genome of everything that lives in your gut. MARVIN MINSKY: Yeah, how many-- I understand there are more bacterial cells than somatic cells by a factor of 100 or something. Because bacteria are so small. But how many different bacteria infest a person? Is it hundreds or tens or thousands? AUDIENCE: I guess that's what we're trying to find out. MARVIN MINSKY: Yeah. AUDIENCE: So when you say, like, in evolution, it would be nice if we had everything that went bad-- and then you said-- and then we could see what went wrong, right? But isn't it that, when we're doing evolution, [INAUDIBLE] we don't have a clear idea of how someone has to solve the problem? So even though we have the information of the solvers that don't work-- like, I feel like if we had a way to know what went wrong, then we would already have information enough to know what is right, you know? MARVIN MINSKY: Oh, yes. AUDIENCE: So how do you decide [INAUDIBLE]? MARVIN MINSKY: I was thinking of a fairly high level system. Because when Lenat or Larry Fogel-- his was another one of these learning-by-evolution systems. I'm not suggesting that we could make a simple evolution simulation that would think of reasons why it failed. So this would be a high level one, if you're writing a big AI program. For example, when you learn arithmetic, after a while, you learn not to divide by 0. So what do we call negative knowledge? What are the commonsense things? Is there a name for the things you should never do? AUDIENCE: Well, when people talk about searching a tree of possibilities, you prune the tree. MARVIN MINSKY: You prune the tree. But, you know, we have rule based systems. And they got very popular around 1980 and wiped out most of symbolic AI for a long time. But there aren't any rules that say, don't do x. Are there ever? Do they have some? AUDIENCE: Some expert systems [INAUDIBLE] MARVIN MINSKY: So the question is, when are they invoked? In a certain situation, turn off this bank of rules, maybe. So I'm not suggesting that you can make a very simple system do that. Because, in fact, figuring out why this mutation was bad might be a very hard problem. But as you build smarter and smarter ones, then you want to put-- well, what I called critics. Or I don't know. Freud had a name for them.
At some point, you want to have prohibited actions, and in Sigmund Freud's early model of psychology, there was a place for things that you would go away from or not do-- censors, he called them. And they never appeared in the main line of psychology. When they threw out Freud, who had a few bad ideas, they threw out all these good ideas, practically. AUDIENCE: You might be pleased to hear that some of the monkey neuroscientists are starting to find some [? critics. ?] It's pretty handwave-y stuff as of now. But at least they're thinking about it. There are certain tasks where the monkey is cued to pay attention to one thing or another. Usually it's color versus orientation. And what they found is that orientation has dominance. And so when a cue is telling the monkey that they have to ignore the orientation and pay attention to color, the part-- those neurons which are responsible for looking at the orientation are being actively inhibited by another group of neurons, which they're now calling a [? critic. ?] MARVIN MINSKY: Are these in the same-- or is it a little nearby nucleus that's-- AUDIENCE: A nearby nucleus. MARVIN MINSKY: That's nice. So that would be a good place for-- is there a word for negative knowledge? AUDIENCE: They call it negative knowledge, I guess. MARVIN MINSKY: It would have too many different senses. Advice not to take. There's some-- AUDIENCE: So this question would imply that there is a metric for intelligence. But is there a limit to intelligence? As in, is it possible to say one day we have artificial intelligence that is the most intelligent possible thing? MARVIN MINSKY: Seems unlikely, because presumably the survival value of a particular system depends on the world the thing is in. It might be that for all really-- for all worlds above a certain complexity, maybe there are some overall strategies that are universally better than others or something. But measuring intelligence doesn't make any sense. Because you'd-- I think you have to go the way Howard Gardner did and say, well, there's social intelligence and-- I don't know. Can anybody rattle off his list? What are his eight ways of thinking? Just look up Howard Gardner. So the amount of intelligence is-- clearly, it's a useful, intuitive idea that for any particular machine you could imagine another one that can do everything that one can do and more. But you're going to get a lattice, not an ordered thing. And the lattice won't-- at some point, it will start getting inconsistent. And this will be better than that one for this and not that. And-- AUDIENCE: Gardner had about nine different types of intelligences, according to his Wikipedia article: logical-mathematical, spatial, linguistic, bodily-kinesthetic, musical, interpersonal, intrapersonal, naturalistic, and existential. MARVIN MINSKY: There you go. And if you take any one of those-- when I was a mathematician, I was really good at topology but not at algebra. And at some point, that stopped me from being even better at topology. So if you take any one of those-- I think Howard wants to keep it simple, but I wonder if he has a sub-psychologist who has chopped up mathematics into the right-- what are the right eight? [INAUDIBLE]? How many of you are bad at some kind of mathematics and know why? AUDIENCE: I'm really bad at Fourier series, just because I don't like them. [LAUGHTER] MARVIN MINSKY: I wonder what Newton would have thought about them. In my PhD thesis, I had a-- it was mostly about neural networks.
And there were some people who thought that you could put information-- if you had a bunch of neurons in a circle, then you could put in a string of signals of different durations and store the bits in this circular thing. Because in World War II, there were no digital computer memories. But there were some computer-like things that stored signals in a tube of mercury with a speaker and a microphone, and it was possible to store a lot of information in sort of analog bits for a long time. But what you do is you have something that would regenerate them and synchronize them with the clock each time around. And I was trying to prove a theorem that, given what we know about the delay in neurons-- if you stimulate a neuron very strongly, it reacts more quickly than if you stimulate it just a little bit above threshold, in which case it takes a longer time to fire. So I was trying to prove that in neural networks, in something like a human brain, you couldn't store a lot of information in circular loops. And I kept having trouble proving that. And I ran into John Nash, who was another student a bit ahead of me. And he listened to me for a minute and he said, expand it in Fourier series. And after about two days, I figured out what he probably meant. And I proved this nice theorem, and it turned out the condition had been discovered a long time ago-- it was called a Lipschitz condition. And if you have a certain condition like this, then the information will go away. But if you don't, you can keep the information around for a very long time. So in this case, the proof showed that you couldn't store-- unless you had a renormalizer or a clock somewhere, you couldn't store circular information in a mammalian brain very well. It's a nice example of something where one person had a different way of looking at it. Nash was pretty famous for his results in game theory, but I suspect he might have been responsible for 5 or 10 other things that he-- Norbert Wiener had this habit of talking to a student. He says, what are you working on? And the student would explain it. And Wiener said, oh, well you just do this. And I was present at a meeting of the-- I was in the math department where they had a meeting about who would tell Wiener not to do that anymore. [LAUGHTER] Some student had-- oh well, it's a true story. I wonder what else I've forgotten. Yes? AUDIENCE: I'm curious. You say this could be updated with a clock. Is there any evidence to suggest that biologically one could or could not construct a clock? MARVIN MINSKY: There are lots of clocks. I suspect that if I had thought about it more I would have-- because I'm talking about the middle 1950s, and people knew a lot about brainwaves. And, you know, there are three or four fairly large synchronous activities in the brain. And I don't think anybody knows much about what they're for. Do you know? Have you heard any rumors? What is the delta wave for? AUDIENCE: Well, actually, the monkey experiment I was just talking about relies on the assumption that the beta wave is for suppression and the alpha wave is for activation. And I think people are still sort of debating about the delta and theta waves. MARVIN MINSKY: Mhm. The alpha wave-- the 10-per-second one-- I think that's the big one. And it goes away when you are thinking hard. That is, if you're not focusing much on anything, then it's a fairly nice regular 10 per second. And if anything gets your attention and you focus on it, then the alpha wave pretty much gets noisy and disappears. I think.
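Not Minsky's theorem, but a toy numerical illustration of the point about circulating loops: bits stored as pulse-interval durations drift apart when every regeneration adds a little timing error, unless a clock re-quantizes the intervals on each lap. All the parameters below are made up.

import random

random.seed(1)

# Eight bits stored as inter-pulse intervals circulating in a loop:
# a short interval encodes 0, a long interval encodes 1.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
SHORT, LONG = 1.0, 2.0
MID = (SHORT + LONG) / 2
intervals = [LONG if b else SHORT for b in bits]

def decode(ivals):
    return [1 if iv > MID else 0 for iv in ivals]

def circulate(ivals, laps, jitter, clocked):
    ivals = list(ivals)
    for _ in range(laps):
        # Each regeneration perturbs the timing a little (a stand-in
        # for the amplitude-dependent delay of real neurons).
        ivals = [iv + random.gauss(0, jitter) for iv in ivals]
        if clocked:
            # A clock re-quantizes every interval to the nearest legal value.
            ivals = [LONG if iv > MID else SHORT for iv in ivals]
    return ivals

for clocked in (False, True):
    readout = decode(circulate(intervals, laps=1000, jitter=0.05, clocked=clocked))
    print("with clock" if clocked else "no clock  ", readout, "stored:", bits)

Without the clock, the per-lap errors random-walk past the decision threshold and the readout scrambles; with re-quantization each lap, the stored pattern survives indefinitely, which is the "renormalizer or clock" requirement in the anecdote above.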
I don't know what the others do. Is that correlated with any event? AUDIENCE: Obviously, the usual room shutting down. MARVIN MINSKY: I brought all this, but I decided not to use it anyway. AUDIENCE: I think it's correlated with a certain period of time after the signal from the computer stops changing. MARVIN MINSKY: Oh. You mean it might wake up again? AUDIENCE: No. It shuts down at the same time every class. AUDIENCE: It's not always the same time. MARVIN MINSKY: It's usually at 8:30. AUDIENCE: And he stopped using the [INAUDIBLE]. Correlation implies causation. MARVIN MINSKY: I wonder if Steve Jobs had-- this little thing has two batteries. And at one end, there's a dot. And the other end, there's a slot which is for a screwdriver. But it's also the minus sign of the battery. It could have been plus, but-- but-- what's that? AUDIENCE: It's probably wired so you can put a coin in. MARVIN MINSKY: Any coin, actually. AUDIENCE: Yeah, so you don't actually need a screwdriver. MARVIN MINSKY: I don't have a coin. [LAUGHTER] AUDIENCE: But you do have a screwdriver, right? MARVIN MINSKY: Of course. It's usually one. It's somewhere. No tips. [LAUGHTER] Good question. Yeah? AUDIENCE: Do you think artificial intelligence will ever be elected as a leader of a government? MARVIN MINSKY: In most science fiction stories, it doesn't give us a choice. [LAUGHTER] The Moon Is a Harsh Mistress. That was Robert Heinlein, wasn't it? It had a really smart computer emerge from the internet on the moon. Yeah? AUDIENCE: Yes. I was curious whether you had ideas as to how we might attempt to determine the representations of information that either people or animals use to solve problems. Clearly this is a critical problem with intelligence. And lots of AI work got into various ways of representing information. But it would be really interesting to see whether anyone has ideas of how that could be tested. MARVIN MINSKY: That's wonderful. What are the cognitive psychologists doing about representations? Have you run across any? AUDIENCE: They study reaction times, [INAUDIBLE] way [INAUDIBLE]. They don't have very good ways of setting [INAUDIBLE] MARVIN MINSKY: Yeah. Rule-based systems are still the-- I haven't read a modern cognitive psychology-- has anybody read a modern cognitive psychology book? Do they have trans-frames or scripts? What's happening in that realm? Try to remember what-- I guess I've never seen any Winston-like diagrams in anything but AI. But there must be some somewhere. That was 1970. Who has taken a psychology course? Is that true? What's in it? AUDIENCE: They talk about babies a lot nowadays. [LAUGHTER] MARVIN MINSKY: Well, there's a little industry of trying to show that Piaget was wrong. Is that what they say about babies? When do babies get conservation of quantity or something? AUDIENCE: Yeah, they basically just go through the whole developmental stages and explain that. But I have not seen Winston and [INAUDIBLE] predicts. MARVIN MINSKY: Well, there is a problem with the low resolution of brain scanning. So if you can only tell when a square centimeter of brain is more active than another part, then it's hard to imagine how you could look for the representation of an arch as a block on top of two others.
But you should be able to make a hypothesis about representation and then design an experiment in which you show a picture of an arch and then quickly show a picture where there's a little space between them, so it's not being supported by-- and blink those on and off and see if different kinds of changes in the representation cause different kinds of brain activity. But I suspect that most experiments on watching brain activity are from giving a stimulus and not a pair of quickly changing ones, maybe. So you want to find what parts of the brain are activated when a certain kind of difference appears. And it shouldn't be hard to make such experiments, but my impression is that they don't do that so much as, you show a certain face for a couple of seconds, and then you show something else, and you look to see if the activity moves somewhere. But if your resolution is low, maybe you should be putting in stimuli that change, so that you're finding the response to the changes. It's just a-- AUDIENCE: One of the problems is that there is a delay with the kind of brainwaves you can get. Like you can get more real-time reactions, like fMRI. MARVIN MINSKY: Yeah, it usually takes several seconds to get anything. You have to do-- AUDIENCE: [INAUDIBLE] MARVIN MINSKY: You'd have to repeat it many times, and I think it still takes several seconds to get any information, doesn't it? What's the-- the first brainwave experiments were in the late-- in the 1940s. And that Englishman Grey Walter, who also made that first robot turtle and things like that. I was just reading some of the-- some papers he wrote in the middle 1950s. They're not very illuminating about AI, but they show you what some people were thinking in the days before computer science. Yeah? AUDIENCE: When you talk in your book about [INAUDIBLE] and big machines that accumulate huge libraries of statistical data-- you say that they cannot develop much cleverness because they don't have the-- sorry, what's-- because they don't have higher reflective levels. What are these higher reflective levels? MARVIN MINSKY: Well, that's thinking about what you were thinking a minute ago. You know, you think something and then you say, that was a bad idea, why did I get that? Or now I realize I didn't understand something. I've wasted five minutes because-- reflective thinking is just thinking about your recent thoughts. Maybe all thinking is-- any coherent train of thinking-- each thought is something about the previous thought, but it doesn't have the word "I" in it, you know? You say, why did I waste so much time? Why did I focus on this rather than that? What did that person say? Maybe I missed the point. Maybe most of your thinking is, what did I just think? Maybe I missed the point. Yeah? AUDIENCE: So here we often talk a lot about cognitive science and psychology. And I'm curious, how important do you think cognitive science and psychology are to the field of AI, and whether the right way of trying to build intelligent machines and understand intelligence is through understanding what we've already seen. Or is it playing around with computers and trying to make systems that solve the problems we want to solve? MARVIN MINSKY: I'm glad you asked that, because I don't think it's very important. Because I think we all-- we've got to the point where we know that people solve problems, and we all know how to think about how we solve some problems.
We don't know the details of how we did it, but I think-- you know, if you look at what's been done in AI, it's more than clear enough where the present systems stopped and where they fail. And we keep thinking of ways to fix them, and we get sidetracked. Because that's-- you get some idea and it's too hard to program, and somebody says, use C++, and somebody else says, why did you go back to Lisp, and-- And I guess my answer is, I don't think we need, desperately, to know more about psychology. Because we already have programs that are pretty good at things and we can see where they get stuck. But it would be nice if there were a community out there helping us. Because the AI groups are all alone, and they don't communicate very well with each other, and they're not very well supported. But I bet as long as we make machines smarter, the psychologists will pay more attention and they'll come back and tell us better things. And eventually, there'll be a real cognitive science. Sort of like physics. Physics got going very well with Newton and Galileo and quantum mechanics. But now they have a great community. And when some serious problem comes up, somebody will spend a billion dollars for a new accelerator or something. There's nothing like that in AI. If you say, why did the Newell-Simon general problem solver get stuck on the missionaries and cannibals? Somebody used to say, well, here's a billion dollars. I know it's not enough, but maybe you can make it a little smarter. Nobody's offering this. AUDIENCE: [INAUDIBLE]. Somewhat related question. So first, since AI is mostly an engineering discipline, it's a question of, how can we make machines solve these problems with intelligence? Do you think this is going to lead to a better understanding of intelligence? And how important do you think that is to this more, I guess, mostly scientific but also slightly philosophical question? MARVIN MINSKY: I think it's just an engineering question. There just isn't a way to get enough bright people to compete with each other to make better AI systems. It's-- anybody have a theory? You see, I'm speaking from the point of view, feeling that there hasn't been much progress in recent years. And maybe I'm wrong and there's a lot of great stuff just ready to be exploited. But I don't see it. AUDIENCE: I think we're kind of in a spinning [INAUDIBLE] of sorts, where people are doing a lot of the work in terms of, for instance, tuning the parameters and choosing machinery approximations in order to solve problems that there are incentives out there to solve. And in principle, if we had AI that was good, the AI would do that work instead of programmers having to tune parameters and figure out which algorithms are good for different problems. But as of now, the way the incentives are structured, it's going to take a big energy push to sort of get over the hump of actually creating the infrastructure that's necessary for that stuff to happen automatically. MARVIN MINSKY: Yeah, there are AI groups. There are a few people at Georgia Tech and Carnegie Mellon. Although, my impression is that they're mostly playing robot soccer or something. So a lot of the people who are empowered to do the right thing are-- or you look at Stanford. It's wonderful to make these self-driving cars. But I don't think a single thing has been learned from that. Maybe a little has been learned from the Watson thing, but-- AUDIENCE: They won't give out their source code. MARVIN MINSKY: Right.
And if they did, I think they could read The Society of Mind, which says, have a lot of different methods and find some way to integrate them. What's missing in The Society of Mind is better ideas on how to integrate them. And Watson might have some. But on the other hand, it might not. Maybe if it can end up with an answer that's one word, like a person or a sport, then it's done. And so it may be that we know what's at the lower levels. And we don't know what's at the higher levels, and maybe it's no good. On the other hand, maybe there are 10 very important ideas there, and you'd have to read that long paper and try to guess what they were. Do we have a spy in there? Are they telling us something? AUDIENCE: I get little bits and pieces back at the end. I think it is kind of-- you know, the good news about that is it has made some progress, and it is kind of a society of models. And they have some supervisory processes to try to figure out which-- actually, the most important thing is to try to figure out which methods are good for which kind of questions. MARVIN MINSKY: That would be good. So they might have some good critics and selector-like things. AUDIENCE: Yeah. So there's some of that, I think, in there. I don't think there are a lot of very brand new techniques, but I think there's probably some of that, yeah. MARVIN MINSKY: They fired their other AI group, but I don't think it was getting very far either. You know the one I mean-- the Eric Mueller and-- no, he moved. AUDIENCE: He worked on Watson. MARVIN MINSKY: No, I mean Doug Riecken, Riecken's group. It was doing more mathematical AI than, I think, heuristic AI. Any other company doing anything? What are the common sense groups in Korea and places like that? AUDIENCE: Well, I'll find out in December when I go there. MARVIN MINSKY: Henry's going to visit some of them? The mysterious East. Yes? AUDIENCE: So since a long time ago, from [INAUDIBLE], there have been machines that are trying to build a reflective [INAUDIBLE]. There are critics. And even though the idea died out in the '80s, there are still some machines, like maybe Watson, that have critics. But the reflective layer, I feel like it does a lot of different things. So what do you think is missing from that layer that no project has [INAUDIBLE]? MARVIN MINSKY: I'm not sure what you're asking. But there is Pat Winston's group working on stories, and my impression is that that's making definite progress. And if he can integrate with Henry Lieberman's kind of large, commonsense knowledge base, maybe something great will happen. But progress is a little bit slow. Gerry Sussman is still full of ideas, but he keeps teaching courses in physics. [LAUGHTER] And he's out there fixing telescopes, and he's absolutely a prodigy. And now he's working on this theory of propagators, which he claims is relevant to AI, and I don't understand it yet. But-- AUDIENCE: It's good. MARVIN MINSKY: What? AUDIENCE: [INAUDIBLE]. MARVIN MINSKY: I'd like to see it solve some interesting problem. But-- so we have a lot of resources here. But if you look at the world as a whole-- AUDIENCE: Yeah, for example, you talked about how we should combine [INAUDIBLE] group with the [INAUDIBLE] knowledge base. So I feel like, to do that, we need some newly invented machinery. MARVIN MINSKY: Yeah, to what extent is-- AUDIENCE: Well, if you would like to work on it, please come see me afterwards. MARVIN MINSKY: It's a very lively group. What's happening to Lenat's group?
Is he just hiding or is he-- AUDIENCE: No, I think Doug Lenat is on a side project. And it's been steadily growing. And I think one thing-- so what was-- he had a very interesting article recently about using it for common sense for medical queries. So the Watson guys said that, you know, they want to apply Watson to medicine. But I think Lenat had a really good article about applying it to medical queries. It was things like-- you know, so the doctors would ask things like, which operations, for some disease or something, have complications? And the system would have to know, what's a complication, right? And a complication is when things don't go right. So having a drug reaction could be a complication. Leaving a scalpel in a patient could be a complication. So you have to understand some of the ideas of-- you know, common sense ideas of what might be a complication and what might cause trouble and those kinds of things. And I thought that was a very nice system at the Cleveland Clinic. And the doctors loved it, and they wrote about it in [INAUDIBLE] magazine. I thought that was a real success. MARVIN MINSKY: Oh, I haven't seen that. Dr. Lenat. AUDIENCE: I mean, the problem is that the reason that you haven't heard a lot of applications for so long is because they were funded, you know, for decades by three-letter agencies in the government. And they did-- I think they did actually quite good work for them. Because otherwise, the program wouldn't have continued for 25 years. But the problem is, you know, when they do something good for the secret agencies, nobody else finds out about it [INAUDIBLE]. MARVIN MINSKY: I have a great story about that, which is almost unbelievable. Which is, I was at a meeting with John Glenn at-- this was a long time ago, when it was just starting. And this was in a building a block from the White House, and it had all these people from some agency about whether AI could help them with their problems. And somebody pulled out some slides and was about to give a lecture. But the shelf that had the projector on it had hinges, and all the screws were missing on one side and it fell down like this. And they fussed for a long time and couldn't get the projector to line up. And then I had this thing. And I took three screws out of-- it had three hinges, and I took three screws out of here and put them in here and here and here. And then the shelf stayed up and the show went on. You know, it's like the joke about the-- anyway. So they were astounded because I actually fixed this stupid thing. And I said, well, why didn't you? And they said, we asked maintenance three weeks ago and they never got around to it. And I said, this is the agency? And they said yes. And then they said, but why did you have that thing with you? [LAUGHTER] AUDIENCE: That's OK. You'd never get it past the metal detector. MARVIN MINSKY: When I was a kid, I heard some story-- oh, never mind. About when a car wheel rolls off, you take one screw from each of the other three. So I was doing exactly that, and these agency people had never thought of doing it themselves. So what does it mean when you have a government run by people who can't fix this hinge? I once met a freshman who didn't know which way to turn a screw. At MIT. How many of you have to try both-- [INTERPOSING VOICES] AUDIENCE: [INAUDIBLE] screw. MARVIN MINSKY: The left hand. Some rule, right. AUDIENCE: You're not doing the [INAUDIBLE] anymore. AUDIENCE: Well, if you're screwing into weird angles, like [INAUDIBLE] pieces and stuff.
MARVIN MINSKY: Yeah, sometimes. AUDIENCE: [INAUDIBLE]. MARVIN MINSKY: That's right. Enough stories. AUDIENCE: Are you sure? [LAUGHTER] MARVIN MINSKY: So has he-- oh, can you send us a pointer to that paper? Lenat's? AUDIENCE: Oh yeah, sure. MARVIN MINSKY: That would be nice. He's one of the great pioneers of AI. AUDIENCE: I guess I have a question about, like, extracting a piece of knowledge from experience. I feel like this is something that I think we reflectively [INAUDIBLE] all layers. But maybe-- it's probably also this-- probably the reflective layer. So how do you think it does that? MARVIN MINSKY: How do you retrieve your knowledge? AUDIENCE: How do you turn an experience into a piece of-- a rule? MARVIN MINSKY: How do you learn from an experience? AUDIENCE: Yeah. MARVIN MINSKY: You do something and then you get some knowledge, and where do you put it? AUDIENCE: How do [INAUDIBLE]? How general [? do you need ?] it? Or do you just try it? Do you have to group a lot of experiences and then results [INAUDIBLE] MARVIN MINSKY: If we could answer that, we could all quit and go home. You're asking the central problem of, how do you learn from experience and later retrieve what you learned? I just can't think of any way to answer that except write a whole book and then have everybody find out what's wrong with it. Yeah, science is about making the best mistakes. If you make a really good mistake, then somebody will fix it and you'll get progress. If you make a silly mistake, then nothing is gained. AUDIENCE: I think we should have the surgeon [INAUDIBLE] make better mistakes. MARVIN MINSKY: Well, how do you decide what to try? Yeah. That's my complaint about the probabilistic methods. Because if there are a lot of-- well, I talked about it the other day. If there are a lot of different aspects of the situation, like 100, then there are 2 to the 100th conditional probabilities to think about. And so probabilistic learning machines work wonderfully well on small problems where the search trees aren't too big. But they don't-- but the hard problem is what to do when there are a lot of different factors and you don't know which are important. And in lots of situations, just first-order correlations: there are 100 factors, and you just look at the probabilities of each of them. And then there's 10,000-- 5,000 pairs of things. And you look at the 5,000 joint conditional probabilities of two things, and maybe five of them pop up, and you've only got five things to look at. And that's where that kind of AI system works. And it's become immensely popular. And the trouble is, it'll never get smarter. Because if you have to look five steps ahead, then instead of 10 possibilities, you have 100,000. And anyway, my concern is that there are quite a lot of millions of dollars going into AI research. But most of it is going into dead ends. So it's not as though there were-- maybe there is enough money, but it's going to the easy problems instead of the hard ones. Who has an easy question? AUDIENCE: So lots of people like to play games and procrastinate. Do you think artificial intelligence will also play games and procrastinate? MARVIN MINSKY: Well, there's the opposite question. I got a message from somebody-- I don't remember who. I had complained that nobody's been able to get Push Singh's AI program to work. And somebody suggested-- what? Yeah. And somebody suggested that-- I forget the name. There's some group of people who like problems.
And I can't remember what-- it's just a bunch of people out on the web who like to solve programming problems. And this person suggested sending the code to that group of a couple of thousand people, and maybe they would self-organize to try to figure out how it works and fix it. So do you think that would-- could that work? AUDIENCE: Yeah. MARVIN MINSKY: We have a big bunch of code. It's partly commented. Could we get 1,000 really aimless hackers out there with lots of ability to-- so maybe I'll try it. Sort of-- AUDIENCE: [INAUDIBLE] is that they might have their own code or their own sections. MARVIN MINSKY: Well, if they're self-organizing enough. I mean, if an individual tries to fix it, that's fine. But maybe these people know how to work together. So they could chop it up and talk to each other and agree. It doesn't have to be the same as Push's. It just has-- I don't know if you've seen the movie. I'll bring it next time. You have a robot coming to try to screw the legs onto a table, but the robot has only one hand. So there's another robot over there. The first one says, help. And the other one figures out just enough to come over and pick up the other end of the table. So as far as I know, this thesis only worked out one example. AUDIENCE: Yeah. Actually it was-- the tricky part in that was that when the other robot said [INAUDIBLE] the other robot looks away. Well, you know this [INAUDIBLE] So the other one [INAUDIBLE] the other robot [INAUDIBLE] The first robot is trying to take the table apart. MARVIN MINSKY: Yes. AUDIENCE: So then you have the second robot doing that. And then you have to correct it, no. Then you go back and show it [INAUDIBLE] fix it. MARVIN MINSKY: Right. And the first robot just says no. The other robot has to be very stupid to be able to interpret that as exactly one thing not to do. If it were smarter, it probably wouldn't work. But anyway, I'll bring the movie in. Have you looked at the code? AUDIENCE: [INAUDIBLE] debug it myself. MARVIN MINSKY: Yeah. It looks pretty horrid. [INTERPOSING VOICES] MARVIN MINSKY: My favorite story, which I think is true, is that Slagle's program for doing integration for-- yeah, was about five pages of Lisp. And Joel Moses said that it took him several weeks to figure out-- because Slagle was blind and had to program in Braille. So Joel said that he made the most intricate, convoluted expressions so that he wouldn't have to type so much. And then Joel-- so that was the first. I had written a program that differentiated algebraic expressions. And that was a great breakthrough, although it was completely trivial. Namely, I just put in the letter D. And if it saw an expression x times y, it would say x times dy plus y times dx. And there are only four or five such rules. Then just sweep through until it had this big long expression, and that turned out to be the derivative. But then Joel wrote-- the trouble is it was too long. And then Moses wrote something to simplify it. And then Slagle wrote a simple integration program. And then Moses wrote a really complicated one. And eventually, a couple of other mathematicians studied that and extended it and worked out a theory of-- for the final integration program, that could integrate any expression that had an integral in closed algebraic form. Which means a function of exponentials, sines and cosines, and polynomials.
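(A minimal sketch of the kind of rewrite-rule differentiator Minsky describes here. His original was a few rules of Lisp; the Python form and the tuple representation of expressions below are assumptions for illustration, not his code.)

```python
# Toy rewrite-rule differentiator: expressions are numbers, variable names,
# or ('+'|'*', left, right) tuples, and a handful of local rules are applied
# recursively, just sweeping through the expression.

def d(e, x):
    """Differentiate expression e with respect to variable x."""
    if isinstance(e, (int, float)):   # d(constant) = 0
        return 0
    if isinstance(e, str):            # d(x)/dx = 1; any other variable is a constant
        return 1 if e == x else 0
    op, a, b = e
    if op == '+':                     # sum rule: d(a + b) = da + db
        return ('+', d(a, x), d(b, x))
    if op == '*':                     # product rule: d(a*b) = a*db + b*da
        return ('+', ('*', a, d(b, x)), ('*', b, d(a, x)))
    raise ValueError("no rule for " + op)

# d/dx of x*y comes out as ('+', ('*', 'x', 0), ('*', 'y', 1)) -- correct but
# unsimplified, which is exactly why Moses's simplifier was the next step.
print(d(('*', 'x', 'y'), 'x'))
```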
And the result of that was a sort of nice story, which is that the American Mathematical Society had a big suite of rooms in Providence, their headquarters, where they had collected all the integrals that were known for hundreds of years, ever since Newton did the first ones. And there were rooms full-- So every time somebody found a new integral, they would write it up and send it to the American Math Society. And it would get cataloged there. And they had raised funds for-- it was called the Bateman Manuscript Project. And there was a fund for organizing all this data. And the minute the program came out, the Bateman Manuscript Project was terminated and closed. [LAUGHTER] Because-- and I think Maxima had the solution in it. And Mathematica is the sort of big successor to that. But it was a nice piece of history spread over about five or six years. And I don't know that anybody works on that anymore. We had a couple of PhD theses that started out trying to solve differential equations, and they didn't get very far. That's probably an important-- it looks like Lemelson is over. [INTERPOSING VOICES] AUDIENCE: It's the 100K. AUDIENCE: Elevator Pitch. MARVIN MINSKY: You think they actually awarded one? AUDIENCE: Yeah! AUDIENCE: Probably. [INTERPOSING VOICES] AUDIENCE: Well, they have 100K. I think the Elevator Pitch contest is a separate thing. AUDIENCE: Yeah, they-- oh. There's three parts of it. That's the first one. MARVIN MINSKY: Well, any more-- oh, way back. AUDIENCE: Do you think there's ever going to be a way to crowdsource AI research at all? MARVIN MINSKY: That's what I meant. That's the expression I was looking for for fixing the Push thesis. But it would be nice. AUDIENCE: It wouldn't be a self-organizing thing. Like someone would have to-- I mean, is that-- I feel like that would not be-- it would be hard for people to self-organize to do that. But there was already [INAUDIBLE] structure. And that minimal piece that everyone could do for AI research-- that's already defined. Do you think AI research is structured in a way that could never be broken down? MARVIN MINSKY: Well, don't these crowd things usually start with some-- they must start with some sort of leader, but then they become self-organizing or-- AUDIENCE: I mean, they [? weren't. ?] Because every participant had a specific-- has a specific and distinct-- had basically the same small [INAUDIBLE] And they don't become more complex than that. I mean because of the community or whatever. But it doesn't-- the idea of lowering the floor of doing AI research so that more people can contribute. MARVIN MINSKY: It's a nice question. Well, let's think about it. If we take this PhD thesis code, it wouldn't be much good to send the whole thing to everyone. Well, of course it wouldn't cost anything to do that. But you need somebody to make a first pass at chopping it up into, say-- look, this function didn't work. Maybe there's some code missing. So you might need a person or a couple of people to sort of organize the project. But once you've got a community, they might be able to cooperate. AUDIENCE: [INAUDIBLE] already [INAUDIBLE] to be able to contribute. MARVIN MINSKY: They might do without a leader once the problems became clear enough. AUDIENCE: [INAUDIBLE]. I would say that, I mean, there are some crowdsourced AI projects. Certainly, if you go to Source [INAUDIBLE] or the Restaurant Game-- that's crowdsourced. [INAUDIBLE] crowdsourcing [INAUDIBLE].
But I don't think crowdsourcing is really great for problems that demand creativity. [INAUDIBLE] it's good for projects that are labor intensive. It's good for SETI@home, that kind of thing. But for projects that demand a lot of creativity, it kind of breaks my heart almost. Because if you look at, like, open-source Unix, you know they've done a great job at organizing people to work on Unix and three versions of Linux and three versions of Unix. But on the other hand, the software isn't very innovative. You know, they just implement 60 versions of the [INAUDIBLE]. And the Unix interface hasn't changed since the 1960s, pretty much. You know, everybody's still programming in terminal windows. So I think it's-- crowdsourcing is mainly good for projects that are labor intensive. In AI, you would see individual creativity more. It's like McCarthy said. We'd see 2.3 Einsteins and 1.7 Manhattan Projects. That, you know, [INAUDIBLE] it's probably good for the Manhattan Projects but not for the Einsteins. MARVIN MINSKY: On the other hand, given we have the movie, it might be that the problem of getting this code to make that movie isn't so creative. You could see-- you can start it up and see where it gets stuck, and-- it's worth a-- AUDIENCE: [INAUDIBLE] if you have something along the lines of Watson that incorporates lots and lots of small programs. And you can have people contribute small programs, whether they're good or bad. [INAUDIBLE] figure it out and then [INAUDIBLE]. AUDIENCE: Well, in some sense, Watson was crowdsourced. Because it wasn't only developed by that IBM group. They had Watson collaborators at ISI and CMU and other places. And they crowdsourced it by getting little research grants to integrate their part of the research into Watson. So I think you could argue that it actually was crowdsourced. MARVIN MINSKY: Another thing we haven't tried is called throwing money at it. If we-- suppose we got $500,000 or $1,000,000 and told some programmer, can you get this to work? AUDIENCE: Well, if you have good enough descriptions of various small programs that [INAUDIBLE] something, then I think that part of the problem is creativity [INAUDIBLE]. Because some parts-- or some [? soft ?] programs could be more [? stupid. ?] And as long as you have a description of what that program is doing, then you could have some really creative program that's-- that might use those [? stupid ?] programs. [INTERPOSING VOICES] AUDIENCE: But you still need that one very creative person to [INAUDIBLE] AUDIENCE: But you must be so-- [INAUDIBLE] AUDIENCE: Yeah. I'd [INAUDIBLE] World of Warcraft 10 and leak it on the internet. [LAUGHTER] MARVIN MINSKY: My grandson was suspended from Warcraft for three weeks because he hacked the thing to get a higher priority on something. [LAUGHTER] I think he was not-- do you know how old he was? AUDIENCE: [INAUDIBLE]? MARVIN MINSKY: Yeah. No, Miles. AUDIENCE: No. I don't think he's played one of those games in a long time. MARVIN MINSKY: No, I think he was about 10 or 12. And he had actually managed to get into the thing and get instant service. He was very proud of being banned. [LAUGHTER] I give up. Any last request or idea? Thanks for coming.
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
5_From_Panic_to_Suffering.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: If you have any opinions about consciousness. There is one problem among the artificial intelligence people, which is that there are a lot of pretty smart people, like Steve Pinker and others, who think that the problem of consciousness is maybe the most important problem no matter what we do in artificial intelligence. Anybody read Pinker? I can't figure out what his basic view is. But there's a feeling that if you can't solve this all-important mystery, then maybe whatever we build will be lacking in some important property. There was another family of AI skeptics, like Penrose, who's a physicist-- and a very good physicist indeed-- who wrote, I think, three different books arguing that AI is impossible because-- I'm trying to remember what he thought was missing from machines. AUDIENCE: Quantum mechanics. MARVIN MINSKY: Quantum mechanics was one, and Godel's theorem-- incompleteness-- was another. And for example, if you try to prove Godel's theorem in any particular logic, you'll find some sort of paradox appearing, where if you try to formalize the proof, you can't prove it in the logical system you're proving it about. I forget what that's called. So there are these strange logical and semi-philosophical problems that bother people. And Pinker's particular problem is, he doesn't see how you could make a machine be conscious. And in particular, he doesn't see how a machine could have a sense called qualia, which is having a different experience from seeing something red and from seeing something green. If you make a machine with two photocells and put a green filter on one and a red filter in front of the other and show them objects of different colors, then they'll respond differently, and you can get the machine to print out green and red and so forth. And he's worried that no matter what you do, the machine will only have some logical descriptions of these things, and it won't have a different experience of the two things. So I'm not going to get into that. I wonder if Word is going to do this all the time until I kill something. What if I put it off screen? That's a good way to deal with philosophical problems, just put them in back where you can't see them. Oh, the picture disappeared. That's really annoying. Everything disappeared. OK, think of a good question while I reboot this. Whoops. Well, how about pressing-- that did it. Whoops. That's the most mysterious problem. Does anybody have an explanation of why computers take the same amount of time to reboot, even though they're 1,000 times faster than they were? AUDIENCE: They have to load 1,000 times more stuff nowadays. MARVIN MINSKY: Yes, but why can't it load the things that were running last time? For some reason, they feel they have to load everything. AUDIENCE: Maybe there is a certain amount of time that they think humans are willing to wait, so therefore, they will load as much as they can during that time. Maybe that might be it. I think if they could, they would load even more, but they can't, because that's the limit of human patience. And so they always run up against that. MARVIN MINSKY: Does the XO take time to boot? AUDIENCE: Yes, it takes several seconds. MARVIN MINSKY: So it keeps it in memory.
AUDIENCE: It doesn't have it organized. MARVIN MINSKY: I'm serious. I guess it would. But it doesn't cost much to keep a dynamic memory refreshed for a month or two. If anybody can figure it out, I'd like to know, because it seems to me that it should be easy to make Unix remember what state it was in. AUDIENCE: Well, if it remembered exactly what state it was in, it wouldn't be very useful. We'd have to change the state every time. MARVIN MINSKY: Well, I mean it could know which applications you've been running or something. Anyway, it's a mystery to me. For example, in time-sharing systems, you have many users. And the time-shared system keeps their state working fine. Let's see if this has actually recovered from its bug. Maybe one of those forms of Word doesn't work. That's a bad-- AUDIENCE: When computers hibernate and stuff, they say if they have to read the disk, it generally takes, on a modern system, 30 to 45 seconds just to load the entire memory contents from disk. MARVIN MINSKY: That could be the trouble. Why can't it load the part that it needs? But never mind. I'm sure that there's something wrong with all this. But now I've got another bug. That one seems better. Nope. Sorry about all this. I might have to use the backup. Anyway, I'll talk about consciousness again. But I'm assuming that you've read all or most of chapter 4. And we could start out with this kind of question, which is-- I think of evolution as-- is this working? I think of us as part of the result of a 400 million year-- 400 megayear-- process. And because the first evidence for forms of life occurred about 400 million years ago, which is pretty long. The earth appears to be about 4 billion years old. So life didn't start up right away. And so there were 100 million years of the first one-celled animals. Maybe there were some million years of molecules that didn't leave any trace at all. So before there was a cell membrane, you could imagine that there was a lot of evolution. But nobody has posed a plausible theory of what it could have been. There are about five or six pretty good theories of how life might have started. There had to be some way of making a complex molecule that could make copies of itself. And one standard theory is that if you had just the right kind of muddy surface, you could get some structure that would form on that, peel away, and leave an imprint. But it sounds unlikely to me because those molecules would have been much smaller than the grains of mud. But who knows? Anyway, that's 100 million years of one-celled things. And then there's 100 million years of things leading to the various invertebrates, and 100 million years of fish, reptile-like things, and mammals. And we're at the very most recent part of that big fourth collection of things. I think there's a-- whoops. Is this not going to work? All right, that's my bug, not MIT's. So human development, splitting off from something between a chimpanzee and a gorilla, has a history of about 4 million years. The dolphins, which have very large brains somewhat like ours in that they have a big cortex, developed before that. And I forget, does anybody know? My recollection is that they stopped developing about 4 million years ago. So the dolphins' brains got to a certain size. The fossil ones of, I think, about 4 million years ago are comparable to the present ones. So nobody knows why they stopped. But there are a lot of reasons why it's dangerous to make a larger brain.
And especially if you're not a fish, because it would be slower and hard to get around, and you would have to eat more, and that's a bad combination. And other little bugs, like taking longer to mature. So if there are any dangers, the danger of being killed before you reproduce is a big handicap in evolution. In fact, think of the number of generations of humans, since presumably they've been living with a sort of 20-year lifespan for most of those 4 million years, like other primates. Compare that to bacteria. Some bacteria can reproduce every 20 minutes instead of 20 years or 10 years or whatever it is. So the evolution of smaller animals is vastly faster, in fact, by factors on the order of hundreds of thousands. And so generally these big, slow, long-lived animals have huge evolutionary disadvantages. Anyway, here are four major ones. So what made up for that? And that's why chapter 4-- I don't think I wrote anything about this in chapter 4. But that's why it's interesting to ask why there are so many ways to think and how we developed them. And a lot of that comes from this evolutionary problem: that as you got smarter and heavier, it got more and more difficult to survive. So your collection of resourcefulness had to keep pace. Well, in that four billion years this only happened once. Well, the octopuses are pretty smart. And the birds-- just consider how much a bird does with its sub-pea-sized brain. But it seems to me that it's hard to generalize from the evolution of humans to anything else because-- because what? We must have been unbelievably lucky. William Calvin has an interesting book. He's a neurologist who writes pretty interesting things about the development of intelligence. And he attributes a lot of human superiority to a series of dreadful accidents, namely five or six ice ages, in which the human population was knocked down, nobody knows to how small. But it could have been as small as tens of thousands. And we just squeaked by. And only the very, very, very smartest of them managed to get through a few hundred years of terrible weather and shortage of food and so forth. So that's-- anybody remember the title of-- have you read any William Calvin? Interesting neurologist out in California somewhere. There is a very small handful of people, including William Calvin, that I think have good ideas about intelligence in general and how it evolved and so forth. And Aaron Sloman is a philosopher at the University of Birmingham in England, who has theories that are maybe the closest to mine. And he's a very good technical philosopher. So if you're interested in anything about AI, if you just search for Aaron Sloman, he's the only one. So Google will find him instantly for you. And he's got dozens of really deep essays about various aspects of intelligence and problem solving. The only other philosopher I think is comparable is Daniel Dennett. But Dennett is more concerned with classical philosophical issues and a little less concerned with exactly how the human mind works. So to put it another way, Aaron Sloman writes programs and Dennett doesn't. AUDIENCE: He's basically a classical philosopher. MARVIN MINSKY: What's that? AUDIENCE: If you're in an argument with a classical philosopher about issues in classical philosophy, Dennett's arguments can back you. MARVIN MINSKY: Yeah. But I'm not sure we can learn very much. AUDIENCE: No. MARVIN MINSKY: I love classical philosophy. But the issues they discuss don't make much sense anymore. Philosophy is where science has come from.
But philosophy departments keep teaching what they always did. What chapter does this story first appear in? Joan is partway across the street. She's thinking about the future. She sees and hears a car coming and makes a quick decision about whether to back up or run across. And she runs across. And I have a little essay about the kinds of issues there, if you ask what was going on in Joan's mind. This is a short version of an even larger list that I just got tired of writing. And I don't know how different all of these 20 or 30 things are. But when you see discussions of consciousness in Pinker and everyone except Dennett and Sloman, they keep insisting that consciousness is a special phenomenon. And my view is that consciousness is-- there are certainly a lot of questions to ask. But there isn't one big one. I think Pinker is very artistic-- art-- I can't think of the right word. He says this is the big central problem: what is this amazing thing called consciousness? And he calls that the hard question of psychology. But if you look at this and say, how did she select the way to choose among options? Or how did she describe her body's condition? Or how did she describe her three most noticeable recent mental states, or whatever? Each of those is a different question. And if you look at it from the point of view of a programmer, you could say, how could a program that's keeping push-down lists and various registers and caches and blah, blah, blah-- how would a program do this one? How do you think about what you've recently done? Well, you must have made a representation of it. Maybe you had a push-down list and were able to back up and go to the other state. But then the state of you that's wondering how to describe that other state wouldn't be there anymore. So it looks like you need to have two copies of a process, or some way to timeshare the processor, or whatever. And so if you dwell on this kind of question for a while, then you say there's something wrong with Pinker. Yes, he's talking about a very hard problem. But he's blurred together maybe 20, 30, 100-- I don't know-- pretty hard problems. And each of these is fairly hard. But on the other hand, for each of them, you can probably think of a couple of ways to program something that does something a little bit like that. How do you go from a verbal description of two blocks supporting a third block to a visual image, if you have one? Well, you could think of a lot of ways those-- I didn't say what shape the blocks were and so forth. And you can think of your mind. One part of your mind can see the other part trying to figure out which way to arrange those blocks. Maybe all three blocks are just stacked vertically, like this, this, and this. That's two blocks supporting a third block. And so instead of saying consciousness is the hard problem, you could say consciousness is 30 pretty hard problems. And I bet I could make some progress on each of them if I spent two or three years, or if I had 30 students spending, or whatever. Actually, what you really want to do is fool some professors into thinking about your problem while you're a student. That's the only way to actually get anything done. Well, I'm being a little dismissive. And another thing that Pinker and the other people of his ilk, the philosophers who try to find a central problem, do is say, well, there's another hard problem, which is the problem called qualia, which is: what is the psychological difference between something that's red and something that's green?
And I usually feel uncomfortable about that, because I was in such a conversation when I discovered that Bob Fano, who was one of our professors, was color blind. And he didn't have that qualia, so it was sort of embarrassing. In the Exploratorium-- how many of you have been at the-- a few. Maybe the best science museum in the world, somewhere near San Francisco. But one trouble or one feature of it: it was designed by Frank Oppenheimer, who was Robert Oppenheimer's brother. He was quite a good physicist. And I used to hang around there when I spent a term at Stanford. And it had a lot of visual exhibits with optical illusions and colored lights doing different things and changes of perspective and a lot of binocular vision tricks. And there's a problem with that kind of exhibit-- we have them here in the science museum too-- which is that about 15% or 20% of people don't see stereo very well. And at least 10% don't see stereo images at all. And some of this is because one eye's vision is very bad. But actually, if one eye is 20/20 and the other eye is 20/100, you see stereo fine anyway. It's amazing how blurred one of the images can be. Then some people just can't fuse the images. They don't have separate eye control or whatever. And a certain percentage don't fuse stereo for no reason that anybody can measure, and so forth. But that means that if a big family is looking at this exhibit, probably one of them is only pretending that he or she can see the illusion. And I couldn't figure out any way to get out of that. But I thought if you make a museum, you should be sure to include some exhibits for the-- what's the name for a person who only-- is there a name for non-fusers? When you get a pilot's license, you have to pass a binocular vision test, which seems awfully pointless to me, because if you need stereo, which only works out to about 30 feet, then you're probably dead anyway-- maybe in the last half second of landing. So anyway, so much for the idea of consciousness itself. You might figure out something to say about the difference between blue and green and yellow and brown and so forth. But why is that really more important than the difference between vanilla and chocolate? Why do the philosophers pick on these particular perceptual distinctions as being fundamentally hard mysteries, whereas they don't seem to-- they're always picking on color. Beats me. So what does it mean to say-- going back to that little story of crossing the street-- to say that Joan is conscious of something? And here's a little diagram of a mind at work. And I picked out four kinds of processes that are self-models, mocking up whatever you're doing. There are probably a few parts of your brain that are telling little stories or making visual representations or whatever, showing what you've been doing mentally or physically or emotionally, or whatever distinctions you want to draw. Different parts of your brain are keeping different historical narrations and representations, maybe over different time scales. And so I'm imagining-- I'm just picking on four different things that are usually happening at any time in your mind. And these two diagrams are describing or representing two mental activities, one of which is actually doing something. You make some decision to get something done, and you have to write a program and start carrying it out. And the program involves descriptions of things that you might want to change, and looking at records of what usually happens when you do this so you can avoid accidents.
So one side of your mind, which is sort of performing actions, could be having four processes. And I'm using pretty much the same-- they're not quite. Wonder why I changed one and not the others. And then there's another part of your mind that's monitoring the results of these little actions as you're solving a problem. And those involve pretty much the same kinds of different processes: making models of how you've changed yourself, or deciding what to remember. As you look at the situation that you're manipulating, you notice some features and you change your descriptions of the parts, so that you were-- in other words, in the course of solving a problem, you're making all sorts of temporary records and learning little things, stuffing them away. So the processes that we lump into being conscious involve all sorts of different kinds of activities. Do you feel there's a great difference between the things you're doing that you're conscious of and the often equally complicated things that you're doing that you can say much less about? How do you recognize the two? Do you say, I've noticed this interval and that interval, and then in the next four measures we swap those intervals and we put this one before that instead of after? If you look at Twinkle, Twinkle, Little Star, there's a couple of inversions. And if you're a musician, you might, in fact, be thinking geometrically as these sounds are coming in and processing them. Some composers know a great deal about what they're doing. And some don't have the slightest idea, can't even write it down. And I don't know if they produce equally complicated music. What's this slide for? Anyway, when you look at the issues that philosophers discuss, like qualia and self-awareness, they usually pick what seem to be very simple examples, like red and green. But they don't-- but what am I trying to say? But someone like Pinker-- a philosopher talking about qualia-- tends to say there's something very different about red and green. What is the difference? I'm just saying, why did I have a slide that mentioned commonsense knowledge? Well, if you've ever cut yourself, it might hurt. And there's this red thing. And you might remember, unconsciously, for the rest of your life that something red signifies pain and uncertainty and anxiety and injury and so forth. And very likely you don't have any really scary associations with green things. So when people say the quality of red is so different from green-- well, maybe it's like the difference between being stabbed or not. And it's not very subtle. And philosophically, it's hard to think of anything puzzling about it. You might ask, why is it so hard to tell the difference between pleasure and pain, or to describe it? And the answer is, you could go on for hours describing it in sickening and disgusting detail without any philosophical difficulty at all. So what do you think of with redness? You think of tomatoes and blood. And what are the 10 most common things? I don't know. But I don't see that in the discussion of qualia. And the qualia philosophers try to say there's something very simple and indescribable and absolute about these primary sensations. But in fact, if you look at the visual system, there are different cells for those, which are sensitive to different spectra. But the color of a region in the visual field does not depend on the color of that region so much as the difference between it and other regions near it. So I don't have any slides to show that.
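(In lieu of the missing slide, here is a minimal numerical sketch of the contrast effect being described: a crude center-minus-surround filter, standing in for lateral inhibition, responds with opposite signs to two physically identical gray patches on dark versus light surrounds. The one-dimensional signal and the tiny kernel are illustrative assumptions, not a model of real visual circuitry.)

```python
import numpy as np

# Two identical gray patches (value 0.5): one on a dark field, one on a light field.
signal = np.array([0.1] * 10 + [0.5] * 5 + [0.1] * 10 +   # patch on dark surround
                  [0.9] * 10 + [0.5] * 5 + [0.9] * 10)    # same patch on light surround

kernel = np.array([-0.5, 1.0, -0.5])          # center minus neighbors
response = np.convolve(signal, kernel, mode="same")

# Uniform interiors give zero response; only the edges carry signal, with
# opposite signs for the two patches, even though the patches are identical.
print(response[9:16])    # first patch: +0.2 just inside its edges ("looks brighter")
print(response[34:41])   # second patch: -0.2 just inside its edges ("looks darker")
```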
But the first time you see some demonstrations of that, it's amazing, because you always thought that when you look at a patch of red, you're seeing red. But if the whole visual field is slightly red, you can hardly tell at all after a few seconds what the background color is. So I'm going to stop talking about those things. Who has an idea about consciousness and how we should think about it? Yeah. AUDIENCE: Maybe it's just the K-lines that are in our brain, so the K-lines are different for every person. MARVIN MINSKY: That's interesting. If you think of K-lines as gadgets in your brain which-- each K-line turns on a different activity in a lot of different brain centers, perhaps. And I'm not sure what-- AUDIENCE: So like at a moment you have a set of K-lines that are active. MARVIN MINSKY: Right, but as you mentioned, in different people they're probably different. AUDIENCE: Yeah, yeah. MARVIN MINSKY: So when you say red and I say red, how similar are they? That's a wonderful question. And I don't know what to say. How would we measure that? AUDIENCE: I know I can receive some-- so, for example, a frog can receive some, like, with his eyes, like pixels. And like these structures are the same. Like we can perceive some automatic things. And like this would be the same for us. But when we're growing, we probably create these K-lines for, like, red or green. MARVIN MINSKY: Right. The frog probably has them built in. AUDIENCE: Yeah. And probably it's very similar because we have centers in our brain. So, for example, for vision, we have a center. And probably, like, things that are close by will have a tendency to blend together. And so red would be similar for each one of us because it's a very low-level concept. But if you go higher, it probably-- for example, numbers would have a different representation than red. I think that starts off by learning: we represent numbers by saying, like, there is another person that represents them just by seeing the number. And then you got to see it. MARVIN MINSKY: He has an interesting idea: that maybe the first few layers of visual circuits we all share. They're pretty similar. And so for the primary-- for the first three or four levels of visual processing, the kinds of events that happen when red and green are together, or blue and yellow-- those are two different kinds of events. But the processes for most of us are almost identical. The trouble is when you get to the level of words, which might be 10 or 20 processes away from that. And when you say the word red, then that probably has closer connections to blood and tomatoes than to patches of-- anyway, it's a nice-- AUDIENCE: So like animals still have most of this because they don't have the K-lines. For example, monkeys or dogs-- but when you filter, these animals don't have the ability to break K-line out of consciousness. And so you will have some kind of-- with the animals you have, like, less social visualization or linear function representation. MARVIN MINSKY: Yes, well, I guess if you make discrimination tests, then people would be very similar in which color patterns they can tell apart. Did I mention that some fraction of women have two sets of red cones? You know, there are three colors. AUDIENCE: It's between the red and green. MARVIN MINSKY: I thought it was very close to the red, though. AUDIENCE: Very close to red. MARVIN MINSKY: So some women have four different primary colors. And do you know what fraction it is? I thought it was only about 10% of them.
AUDIENCE: Yeah, it's 5% of people, 10% of women. MARVIN MINSKY: I thought it was only women. AUDIENCE: It might be. MARVIN MINSKY: Oh, well. AUDIENCE: We could look it up. MARVIN MINSKY: One of my friends has a 12-color printer. He says it costs hundreds of dollars to replace the ink. And I can't see any difference. On my printer, which is a Tektronix Phaser, this is supposed to be red. But it doesn't look very red to me. Does that look red to any of you? AUDIENCE: Reddish. MARVIN MINSKY: Yeah. AUDIENCE: Purple brownish. MARVIN MINSKY: It's a great printer. It has-- you feed it four bars of wax as your solid ink, and it melts them and puts them on a rotating drum. And the feature is that it stays the same for years. But it's not very good. AUDIENCE: It might look red on different paper. MARVIN MINSKY: No, I tried it. AUDIENCE: I'm sure if you put it up to a light bulb, we could make it all sorts of colors. MARVIN MINSKY: I think what I'll do is-- I saw a Phaser on the third floor somewhere. Maybe I'll borrow their red one and see if it's different from mine. Well, let me conclude, because I-- I think this really raises lots of wonderful questions. And I wonder if we wouldn't-- does this make things too easy? I think what happens in the discussions of the philosophers like Pinker and most of the others is that they feel there's a really hard problem, which is: what is the sense of being? What does it mean to have an experience, to perceive something? And they want to argue that somehow-- they are saying they can't imagine how anything that has an explanation-- how any program or any process or any mechanical system-- could feel pain or sorrow or anxiety or any of these things that we call feelings. And I think this is a curious idea that is stuck in our culture, which is that if something is hard to express, it must be because it's so different from anything else that there's no way to describe it. So if I say, exactly how does it feel to feel pain? Well, if you look at literature, you'll see lots of synonyms, like stabbing or griping or aching, or you might find 50 or-- I mentioned this in the first lecture, I think, that there are lots of words about emotional or-- I don't know what to call them-- states. But that doesn't mean that they're simple. That means-- the reason you have so many words for describing these states, feelings, and so forth is not that they are simple-- a lot of different things that have nothing to do with one another-- but that each of those is a very complicated process. What does it mean when something's hurting? It means it's hard to get anything done. I remember when I first got this insight, because I was driving down from Dartmouth to Boston and I had a toothache. And it was really getting very bad. That's why I was driving down: because I didn't know what to do and I had a dentist here. And after a while, it sort of fills up my mind. And I'm saying, this is very dangerous, because maybe I shouldn't be driving. But if I don't drive, it will get worse. So I really should drive very fast. So what is pain? Pain is a reaction of some very smart parts of your mind to the malfunctioning of other very smart parts. And to describe it you would have to have a really big theory of psychology with more parts than in Freud or in my Society of Mind book, which has only about 300 pages, each of which describes some different aspect of thinking. So if something takes 300 pages to describe, this fools you into thinking, oh, it's indescribable. It must be elemental.
It couldn't be mechanical. It's too simple. If pain were like the four gears in a differential-- well, most humans don't-- if you show them a differential and say, what happens if you do this? The average intelligent human being is incapable of saying, oh, I see, this will go that way. A normal person can't understand those four little gears. So, of course, pain seems irreducible, because maybe it involves 30 or 40 parts, and another 30 or 40 of your little society-of-mind processes are looking at them. And none of them know much about how the others work. And so the way you get your PhD in philosophy is by saying, oh, I won't even try. I will give an explanation for why I can't do it, which is that it's too simple to say anything about. That's why the word qualia only appears once in The Emotion Machine book. And a lot of people complained about that. They said, why don't you-- why doesn't he-- they say, you should read, I forget what, instead. Anyway. I don't think I have anything else in this beautiful set of-- how did it end? If you look on my web page, which I don't think I can do. Oh, well, it will probably-- there. I just realized I could quit Word. Well, there's a paper called "Causal Diversity." And it's an interesting idea of how do you explain-- how do you answer questions? If there's some phenomenon going on-- and something like being in pain is a phenomenon-- what do you want to say about it? And here's a little diagram that occurred to me once, which is: what kinds of sciences or what kinds of disciplines or ways of thinking do you use for answering different kinds of questions? So I got this little matrix. And you ask-- suppose something happens, and think of it in terms of two dimensions. Namely, the world is in a certain state. Something happens and the world gets into a different state. And you want to know why things changed. Like if I stand this up-- oh, I can even balance it. I don't know. No I can't. Anyway, what happened there? It fell over. And you know the reason. If it were perfectly centered, it might stand there forever. Or even if it were perfectly balanced, there's a certain quantum uncertainty, because its position and momentum are conjugate. So even if I try to position it very precisely, it will have a certain momentum and eventually fall over. It might take a billion years or it might be a few seconds. So if we take any situation, we could ask how many things are affecting the state of this system, and how large are they? So how many causes-- a few causes or a lot? And what are the effects of each of those? So a good example is a gas in a cylinder with a piston. And if it's this size, then there would probably be a few quadrillion-- or trillion, anyway-- molecules of air, mostly oxygen and nitrogen and argon, in there. And every now and then, they would all happen to be going this way instead of this way. And the piston would move out. And it probably wouldn't move noticeably in a billion years. But eventually it would. But anyway, there is a phenomenon where there is a very large number of causes, each of which has a very small effect. And what kind of science or what kind of computer program or whatever would you need to predict what will happen in each of those situations? So if there are very few causes and their effects are small, then you just add them up. Nothing to it. If there is a very large number of causes and each has a large effect, then go home. There's nothing to say, because any of those causes might overcome all the others. So I found nine states.
And if there are a large number of small causes, then neural networks and fuzzy logic might be a way to handle a situation like that. And if there is a very small number of large causes, then some kind of logic will work. Sometimes there are two causes that are XOR-ed. So if they're both on, nothing happens. If they're both off, nothing happens. And if just one is on, you get a large effect. And you just say it's X XOR Y-- and analogies and example-based reasoning. So these are where AI is good, I think. And for lots of everyday problems, like the easy ones or large numbers of small effects, you can use statistics. And for small numbers of large effects, you can use common sense reasoning and so forth. So this is the realm of AI. And of course, it changes every year as you get better or worse at handling things like these. If you look at artificial intelligence today, it's mostly stuck up here. There are lots of places you can make money by not using symbolic reasoning. And there are lots of things which are pretty interesting problems here. And of course, what we want to do is get to this region where the machines start solving problems that people are no good at. So who has a question or a complaint? AUDIENCE: I have a question. MARVIN MINSKY: Great. AUDIENCE: That consciousness again. Would it have been easier-- MARVIN MINSKY: Is this working? No. AUDIENCE: It goes to the camera. MARVIN MINSKY: Oh. AUDIENCE: You can hand it to him. MARVIN MINSKY: OK, well, I'll try to repeat it. AUDIENCE: Would it have been easier if we never created the suitcase, as you put it in the papers-- the suitcase of consciousness-- and just kept those individual concepts? The second part of that question is, how do we know this is what they had in mind when they initially created the word consciousness? MARVIN MINSKY: That's a nice question. Where did the word consciousness come from? And would we be better off if nobody had that idea? I think I talked about that a little bit the other day-- that there's the sort of legal concept of responsibility. And if somebody decided that they would steal something, then they become a thief. And so it's a very useful idea in society for controlling people: to recognize which things people do are deliberate and involve some reflection, and which things are because they're learnable. It's a very nice question. Would it be better if we had never had the word? I think it might be better if we didn't have it in psychology. But it's hard to get rid of it for social reasons, just because you have to be able to write down a law in some form that people can reproduce. I'm trying to think of a scientific example where there was a wrong term that-- can anybody think of an example of a concept that held science back for a long time? Certainly the idea that astronomical bodies had to go in circles, because the idea of ellipses didn't occur much till Kepler. Are there ellipses-- Euclid knew about ellipses, didn't he? Anybody know? If you take a string and you put your pencil in there and go like that, that's a terrible ellipse. People knew about ellipses. Certainly Kepler knew it, but didn't invent it. So I think the idea of free will is a social idea. And well, we certainly still have it. Most educated people think there is such a thing. It's not quite as-- just as most people think there's such a thing as consciousness, instead of 40 fuzzy sets. How many of you believe in free will? AUDIENCE: My free will. MARVIN MINSKY: It's the uncaused cause.
Free will means you can do something for no reason at all. And therefore you're terribly proud of it. It's a very strange concept. But more important, you can blame people for it and punish them. If they couldn't help doing it, then there's no way you can get even. AUDIENCE: It has the implication that there is a choice. MARVIN MINSKY: Yeah. I suppose for each agent in the brain, there's a sort of little choice. But it has several inputs. But I don't think the word choice means anything. AUDIENCE: Well, you have the relationship between free will and randomness. Certainly there are some things that start as random processes and turn out to be causes. MARVIN MINSKY: Well, random things have lots of small causes. So random is over here-- many small causes. And so you can't figure out what will happen, because even if you know 99 of those causes, you don't know what the 100th one is. And if they all got XOR-ed by a very simple deterministic logic, then you're screwed. But again, it's a legal idea, this freedom of will. It just doesn't make sense to punish people for things they didn't decide to do, if it happened in a part of the nervous system that can't learn. If they can't learn, then you can put them in jail so that they won't be able to do it again. But you'd have to-- but the chances are it's not going to change the chance that they'll try to do it if it's in fact random. Did you have-- yeah. AUDIENCE: So machine learning has been around for a long time, and, like, processors are really fast right now-- like, computers are really fast. Do you believe there is some mistake, like, people that do research should learn? I mean the-- MARVIN MINSKY: Well, machine learning-- to me it's an empty expression. Do you mean, are they doing some Bayesian reasoning, or-- I mean, nobody does machine learning. Each person has some particular idea about how to make a machine improve its performance by experience. But it's a terrible expression.
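(Minsky's XOR point above can be made concrete. In the sketch below, an effect that is the exclusive-or of two causes shows essentially zero first-order correlation with either cause alone, so any learner that screens variables one at a time sees nothing; the setup is an illustration invented here, not any particular Bayesian toolkit.)

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 100_000)      # cause 1: random on/off
x2 = rng.integers(0, 2, 100_000)      # cause 2: random on/off
y = x1 ^ x2                           # effect: large only when exactly one is on

print(np.corrcoef(x1, y)[0, 1])       # ~0.00: x1 alone predicts nothing
print(np.corrcoef(x2, y)[0, 1])       # ~0.00: x2 alone predicts nothing
print(np.corrcoef(x1 ^ x2, y)[0, 1])  # 1.00: the pair determines y exactly

# Screening pairs doesn't scale, either: n variables give n*(n-1)/2 pairs
# (4,950 for n = 100 -- Minsky's "5,000 pairs"), and higher-order
# interactions blow up combinatorially from there.
```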
But if there's an intricate thing like a differential, which is this thing and that thing summing up in a certain way, how do you decide to find the conditional probability of that hypothesis? And so in other words, you can skim the cream off the problem by finding the things that happened with high probability, but you need to have a theory of what's happening in there to conjecture that something of low probability on the surface will happen. And I just-- So here's the thing. If you have a theory of statistical learning, then your job is to find an example that it works on. It's the opposite of what you want for intelligence, which is, how do you make progress on a problem that you don't know the answer to or what kind of answer? So how did they generate? I don't know. Are you up on-- how do the statistical Bayesian people decide which conditional probability to score? Suppose these 10 variables, then there's 2 to the 10th or 1,000 conditional probabilities to consider. If there's 100 variables-- and so you can do it. 2 to the 10th is nothing. And a fast computer can do many times 1,000 things per second. But suppose it is 100 variables 2 the 100 is 10 to the 30. No computer can do that. So I'm saying statistical learning is great. It's so smart. How do-- I'm repeating myself. Anybody have an argument about that? I bet several of you are taking courses in statistical learning. What did they say about that problem? AUDIENCE: Trial and error. MARVIN MINSKY: What? AUDIENCE: Largely trial and error. MARVIN MINSKY: Yeah, but what do you try when it's 10 to 30th? Yeah. So do they say, I quit, this theory is not going to solve hard problems. So once you admit that, and say I'm working on something that will solve lots of easy problems, more power to you. But please don't teach it to my students. AUDIENCE: What do you think about the relationship of statistical inference methods? MARVIN MINSKY: I can't hear you. So in other words, the statistical learning people are really in this place, and they're wasting our time. However, they can make billions of dollars solving easy problems. There's nothing wrong with it. It just has no future. AUDIENCE: What do you think about the relationship between statistical learning methods? MARVIN MINSKY: Of what? AUDIENCE: The relation between statistical learning method and maybe something-- MARVIN MINSKY: I couldn't get the fourth one. AUDIENCE: Relationship of statistical-- MARVIN MINSKY: Statistical, oh. AUDIENCE: --to more abstract ideas like boosting or something where the method they are using at one and they-- MARVIN MINSKY: There's a very simple answer for that. It's inductive probability. There is a theory. I wonder if anybody could summarize that nicely. Have you tried? AUDIENCE: Basically-- MARVIN MINSKY: I can try it next time. AUDIENCE: You should assume that everything is generated by a program. And your prior over the space possible program should be the description length of the program. MARVIN MINSKY: Suppose there is a set of data, then what's the shortest description you can make of it? And that will give you a chance of having a very good explanation. Now what Solomonoff did was say, suppose that something's happened, and you make all possible descriptions of what could have happened, and then you take the shortest one, and see if that works and see what it predicts will happen next. And then you take-- say, it's all binary, then there's two possible descriptions that are one bit longer. And maybe one of them fits the data. 
And the other doesn't. So you give that one half the weight. And so Solomonoff imagines an infinite sum where you take all possible computer programs and see which of them produce that data set. And if they produce that data set, then you run the program one more step and see what it does. In other words, suppose your problem is you see a bunch of data about the history of something, like what was the price of a certain stock for the last billion years, and you want to see will it go up or down tomorrow. Well, you make all possible descriptions of that data set and weight the shortest ones much more than the longer descriptions. So the trouble with that is that you can't actually compute such things, because it's sort of uncomputable. However, you can use heuristics to approximate it. And so there are about a dozen people in the world who are making theories of how to do Solomonoff induction. And that's where-- Now another piece of advice for students is, if you see a lot of people doing something, then if you want to be sure that you'll have a job someday, do what's popular, and you've got a good chance. If you want to win a Nobel Prize, or solve an important problem, then don't do what's popular, because the chances are you'll just be a frog in a big pond of frogs. So I think there's probably only half a dozen people in the world working on Solomonoff induction, even though it's been around since 1960. Because it needs a few more ideas on how to approximate it. But unless you just want to make a living, don't do Bayesian learning. Yeah. AUDIENCE: I don't know if this actually works. But if you take, like, Bayesian learning, and we add this device sometimes-- like, let's say we see something with very small probability, and we decide that part of it is never considered any good. Would that kind of be like what we're trying to do with getting representations and things? I mean-- MARVIN MINSKY: Yeah, I think-- AUDIENCE: Would this make it much more discrete and kind of make it much easier and more tractable? Or is it like-- my question would be, is it really representations for things, saying, this chair has this representation. Isn't that kind of doing the same kind of statistical model, but just throwing away a lot of the stuff that we might not want to look at, what we consider as things that shouldn't be looked at? MARVIN MINSKY: I think-- say there's the statistical thing, and there's the question of-- suppose there are a lot of variables: x1, x2, up to x sub 10 to the ninth-- 10 to the fifth. Let's say there's 100,000 variables. Then there's 2 to the 100,000 conditional probabilities Pij. But it isn't just i and j; the subscripts run up to 100,000. So what you need is a good idea for which things to look at. And that means you want to take commonsense knowledge and jump out of the Bayesian network. The problem with a Bayesian learning system is you're estimating the values of conditional probabilities. But you have to decide which conditional probabilities to estimate the values of. And the answer is-- oh, look at it another way. Look at history and you'll see 1,000 years go by. What was the population of the world between 500 AD-- between the time of Augustine and the time of Newton, or 1500, like O'Brien, those people-- 1,000 years? And I don't know-- were there 100 million people in the world? Anybody know? About how many people were there in 1500? Don't they teach any history? I think history starts-- I changed schools around third grade. So I never-- there was no European history.
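A toy Python sketch of the Solomonoff-style weighting just described. Real Solomonoff induction runs all possible programs on a universal machine and is uncomputable, so this stand-in uses an invented toy semantics -- a "program" is a bit pattern that outputs itself repeated forever -- and weights each consistent one by 2 to the minus its length:

```python
from itertools import product

def generates(program, data):
    """Toy semantics (an assumption, not Solomonoff's actual model):
    a 'program' is a bit string that emits itself repeated forever."""
    stream = (program * (len(data) // len(program) + 2))[:len(data) + 1]
    return stream[:len(data)] == data, stream[len(data)]

def predict_next(data, max_len=8):
    """Weight each consistent program by 2**-length and vote on the
    next bit, so shorter descriptions dominate, as in the lecture."""
    weights = {"0": 0.0, "1": 0.0}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            ok, next_bit = generates("".join(bits), data)
            if ok:
                weights[next_bit] += 2.0 ** -n
    return max(weights, key=weights.get), weights

bit, w = predict_next("010101010")
print(bit, w)   # the shortest consistent description, "01", wins -> '1'
```

The longer consistent descriptions ("0101", "010101", and so on) also vote for the same next bit here, but each contributes exponentially less weight, which is the "one bit longer, half the weight" idea.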
So to me American history is recent and European history is old. So 1776 is after 1815. That is, to me, history ends with Napoleon, because then I got into fourth grade. Don't you all have that? You've got gaps in your knowledge because the curricula aren't-- somebody should make a map of those. AUDIENCE: There were about half a billion people in 1500. MARVIN MINSKY: That's a lot. AUDIENCE: Yeah, I found it on the internet. MARVIN MINSKY: This is from Google? AUDIENCE: This is from Wikipedia. MARVIN MINSKY: Well. AUDIENCE: It's on the timeline of people. MARVIN MINSKY: OK. So there's half a billion people, not thinking of the planets going in ellipses. So why is that? How is a Bayesian person going to make the right hypothesis if it's not in the algebraic extension of the things they're considering? I mean, it could go and it could look it up in Wikipedia. But a Bayesian thing doesn't do that. Real AIs will. Yeah. AUDIENCE: But when we are kids, don't we learn the common sense knowledge? MARVIN MINSKY: Well-- I'm saying, what happened in those 1,000 years? You actually have to tell people what to consider. I'm telling the Bayesians to quit that and do something smart. Somebody has to tell them. They don't have a Newton. But they need one. What are they doing? What do they hope to accomplish? How are they going to solve a hard problem? Well, they don't have to. The way you predict the stock market today is Bayesian, with a reaction time of a millisecond. And you can get all the money from the poor people that were investing in your bank. It's OK, who cares? But maybe it shouldn't be allowed. I don't know. Yeah. AUDIENCE: Do you think the goal is to replace human intelligence-- that we can create a computer that will be able to reason by itself-- or is there also the ability to create a system-- MARVIN MINSKY: We have to stop getting sick and dying and becoming senile. Yes. Now there are several ways to fix this. One is to freeze you and just never thaw you out. But we don't want to be stuck with people like us for the rest of all time, because, you know, there isn't much time left. The sun is going to be a red giant in three billion years. So we have to get out of here. And the way to get out of here is to make ourselves into smart robots. Help. Let's get out of this. We have to get out of these bodies. Yeah. AUDIENCE: So you talked a lot about emotions. But emotions you described as, like, states of mind. And if you have, for example, n states of mind that represent-- I don't know-- log n bits of information, why should we spend so much time talking about so little information? MARVIN MINSKY: Talking about? AUDIENCE: Little information. Like, if we had n states or n emotions, they would represent log n bits of information. And that's very little information that they convey. So for example, if I'm happy or sad-- like, if I had just two states, happy or sad? MARVIN MINSKY: If we just had two states, you couldn't compute anything. I'm not sure what you're getting at. AUDIENCE: Like, emotions seem like too little information. They don't represent much information inside our brain. Why should they be so important in intelligence, since they-- MARVIN MINSKY: I don't think-- I think emotions generally are important for lizards. I don't think they're important for humans. AUDIENCE: Like if we-- MARVIN MINSKY: You have to stay alive to think. So you've got a lot of machinery that makes sure that you don't starve to death.
So there's gadgets that measure your blood sugar and things like that and make sure that you eat. So those are very nice. On the other hand, if you simplified it, you'd just need three volts to run the CPU. And then you don't need all that junk. AUDIENCE: So they're not very important for us. It's just-- MARVIN MINSKY: They are only important to keep you alive. AUDIENCE: Yeah. MARVIN MINSKY: But they don't help you write your thesis. I mean, the people who consider such questions are the science fiction writers. So there's lots of thinking about what kinds of creatures there could be besides humans. And if you look at detective stories or things, then you find that there are some good people and bad people and stuff like that. But to me, general literature is all the same. When you've read 100 books, you've read them all, except for science fiction. That's my standard joke, that I don't think much of literature except-- because the science fiction people say, what would happen if people had a different set of emotions or different ways to think? One of my favorite ones is Larry Niven and Jerry Pournelle, who wrote a couple of volumes about a creature that has one big hand and two little hands. Do you remember what it's called? The Gripping Hand. This is for holding the work, while this one holds the soldering iron and the solder. That's right. That's how the book sort of begins. And there is imagination. On the other hand, you can read Jane Eyre. And it's lovely. But do you end up better than you are or slightly worse? And if you read hundreds of them-- luckily she only wrote 10, right? I'm serious. You have to look at Larry Niven and Robert Heinlein and those people. And when you look at the reviews by the literary people, they say the characters aren't developed very well. Well, foo, the last thing you want in your head is a well-developed literary character. What would you do with her? Yes. I love your questions. Can you wake them up? AUDIENCE: When we are small babies, we kind of are creating this commonsense knowledge. And we have a lot of different inputs. So for example, I'm talking to you, there is this input of the sound, the vision, all these different inputs. Aren't we, when we are babies, learning relations between these inputs? For example, the K-lines-- is it like the machine learning guys argue, that with a lot of variables and maybe 10 to the third, a small set-- what would be the difference if you go deep down? Are they trying to find, like, a very simple path? MARVIN MINSKY: I think you're right, in the sense that I'll bet that if you take each of those highly advanced brain centers, well, it's got something generating hypotheses, maybe, or something. But underneath it, you probably have something very like a Bayesian reinforcement thing. So they're probably all over the place, and maybe 90% of your machinery is made of little ones. But it's the symbolic things and the K-lines that give them the right things to learn. But I think you raise another question, which I'm very sentimental about because of the history of how our projects got started, namely that nobody knew much about how children develop in 1900. For all of human history, as far as I know, babies were generally regarded as like ignorant adults. There weren't many theories of how children develop. And it isn't till 1930 that we see any real substantial child psychology. And the child psychology is mostly that one Swiss character, Jean Piaget.
It's pronounced John for some reason. And he had three children and observed them. I think his first publication was something about mushrooms. He had been in botany. Is that right? Can anybody remember? Cynthia, do you remember what Piaget's original field was? AUDIENCE: Biology. MARVIN MINSKY: Something. But then he studied these children and he wrote several books about how they learned. And as far as I know, this is about the first time in history that anybody tried to observe infants very closely and chart how they learned and so forth. And my partner, Seymour Papert, was Piaget's assistant for several years before he came to MIT. And we started the-- I started the artificial intelligence group with John McCarthy, who had been one of my classmates in graduate school at Princeton, in math, actually. Then McCarthy went to start another AI group at Stanford, and Seymour Papert appeared on my scene just about the same time. And it was a kind of miracle, because we had both-- we met in some meeting in London where we both presented the same machine learning paper on Bayesian probabilities in some linear learning system. We hit it off because we obviously thought the same way. But anyway, Papert had been one of the principal people conducting the experiments on young children in Piaget's group. And when Piaget got older and retired in about 1985-- Cynthia, do you remember when Piaget quit? It's about when we started. AUDIENCE: Didn't he die in 1980 or something? MARVIN MINSKY: Around then. There were several good researchers there. AUDIENCE: He was trying to get Seymour to take over. MARVIN MINSKY: He wanted Seymour to take over at some point. And there were several good people there, amazing people. But the Swiss government sort of stopped supporting it. And the greatest laboratory on child psychology in the world faded away. It's closed now. And nothing like it ever started again. So there's a strange thing: maybe the most important part of human psychology is what happens in the first 10 years, the first 5 years. And if you're interested in that, you could find a few places where somebody has a little grant to do it. But what a tragedy. Anyway, we tried to do some of it here. But Papert-- and Cynthia here-- got more interested in how to improve early education than in finding out how children worked. Is there any big laboratory at all doing that anywhere? Where is child psychology? There are a few places, but none of them are famous enough to notice. AUDIENCE: For a while there was stuff in Ontario, and Brazelton. MARVIN MINSKY: Brazelton. Yeah. Anyway. It's curious, because you'd think that would be one of the most important things, how do humans develop? It's very strange. Yeah. AUDIENCE: So like infants, when they are about a year old, I think there's a particular moment where they learn how to achieve goals, like rocking on their knees. And then after one year, they learn how to clap, how to achieve a means. So for example, I think they do the experiment of putting, like, a hand on their ear, like the left ear. And then chimpanzees do the same as one-year-old infants. And somehow I believe that, for example, reflexes between infants and chimpanzees are very similar. We tend to represent things better, because we have this-- MARVIN MINSKY: You're talking about chimps? AUDIENCE: Chimpanzees. MARVIN MINSKY: Yep. AUDIENCE: They are like apes in general. MARVIN MINSKY: Right. AUDIENCE: I believe there are some apes that can learn sign language. I am not sure if that's right.
But they can take the goals. And, for example, dogs can achieve a goal. But they can't imagine themselves at each moment. Maybe that's because of how they represent things; maybe they represent badly. They don't have a good hierarchy. MARVIN MINSKY: There are some very interesting questions about that. That's why we need more laboratories. But here's an example. We had a researcher at MIT named Richard Held. And he did lots of interesting experiments on young animals. So for example, he discovered that if you have a dog on a leash and you take it somewhere, there's a very good chance it will find its way back, because it remembers what it did. But he discovered that if you take a cat or a dog and you carry it somewhere, it won't learn, because it didn't do it itself. So in other words, if you take it on a route passively, even a dozen times or 100 times, it won't learn that path if it didn't actually have any motor reactions. So that was very convincing. And the world became convinced that for spatial learning, you have to participate. Many years later, we were working with a cerebral palsy guy who had never locomoted himself very much. I'm trying to remember his name-- well, the name doesn't matter. But the Logo project had started. And by putting on a hat with a stick on it, he was able to type keys, which is really very boring and tedious. And believe it or not, even though he could barely talk, he quickly learned to control the turtle, a floor turtle, which you could tell to turn left and right and go forward one unit, stuff like that. And the remarkable thing was that no sooner did he start controlling this turtle than the turtle went over here, and he turned it around, and he wanted it to go back to here. And everybody predicted that he would get left and right reversed, because he had never had any experience in the world. But right off, he knew which way to do it. So he had learned spatial navigation pretty much never having done much of it himself. And Richard Held was very embarrassed, but had to conclude that what you learned from cats and dogs might not apply to people. We ran into a little trouble, because there was another psychologist we tried to convince of this. And that psychologist said, well, maybe this was-- it took three years for him to develop a lot of skills. And the psychologist said, well, maybe that's a freak. I won't approve your PhD thesis until you do a dozen of them. I didn't mention the psychologist's name, because-- Anyway, so we had a sort of Piaget-like laboratory. But we never worked with infants. Did we? You'd think it would be a big industry. Nixon once came around and asked. There was a great senator in Massachusetts, I forget his name. He said, what can we do for education? The senator said, research on children, how they learn. And Nixon said, that's a great idea. Let's put a billion dollars into it. And he couldn't convince anybody in his party to support this idea. The only good thing I've heard about Nixon, except for opening China, I guess. He was determined to do something about early education. Oh, the teachers union couldn't stand it. He didn't get any support from the education business. I'll probably remember the senator's name later. Who's next? Yes. AUDIENCE: So it's kind of along the same lines.
So if we come at it thinking about how we represent things, even if we think about language itself-- so the early, early stages of learning the language obviously have a lot of this statistical learning involved, where we learn the morphology of the language, rather than learning that language is actually representing things. So for example, if we're going to learn, like, how certain letters come one after the other or how they go, we kind of listen, and we see that it's the way everyone else does it. And there are certain words that exist and certain words that don't exist, even if they could exist. I guess these are all like statistical learning. And then after this structure is there, we use this structure to make this representation. So isn't it-- like, wouldn't it kind of be right to say that these two are basically the same thing, just that the representation, the more complex one, is just another version of the statistical learning where we've just done it? MARVIN MINSKY: Well, there's context-free grammar. And there's the grammars that have push-down lists and stacks and things like that. So you actually need something like a programming language to generate and parse sentences. There's a little recursive quality. I don't know how you can-- it's hard to represent that in a Bayesian network unless you have a push-down stack. The question is, does the brain have push-down stacks, or are they only three deep or something? Because if you say, this is the dog that bit the cat that chased the rat, and so on, nobody has any trouble. And that's a recursion. But if you say, this is the dog that the cat that the rat bit ate, people can't parse that. AUDIENCE: It's empirical evidence that the brain's recursion gets its tail cut off. MARVIN MINSKY: That it's what? AUDIENCE: The brain's recursion gets cut off in the representation. MARVIN MINSKY: Yeah. Why is language restricted in that you can't embed clauses past the level of two or three, which Chomsky never admitted. AUDIENCE: Can't it be the case that we also learn that? Like, we also learn that certain patterns can only exist between words. We do parse it using a parse tree. We learn using a parse tree. Like, we learn that when you hear a sentence, go after trying to parse it using three words, two words, four words, and just try that, see if it works. If it doesn't, try another way. Can't it just be learning the number of words that usually happen in a clause? Is it this type of learning? MARVIN MINSKY: Well, I'm not sure why it's very different from learning that you have to open a bottle-- open the box before you take the thing out. We learn procedures. I'm not sure-- I don't believe in grammar, that is. AUDIENCE: If we were trying to teach a machine to be like a human being, would we just lay out the very basics and let it grow like a child, with learning, or would we put these representations in there, like put the representations-- MARVIN MINSKY: Well, a child doesn't learn language unless there are people to teach it. AUDIENCE: Right. MARVIN MINSKY: However-- AUDIENCE: So maybe we can expose the machine to that, white board to-- or we can expose it to the world somehow, to some kind of input. MARVIN MINSKY: I'm not sure what question you're asking. Is all children's learning of a particular type, or are they learning frames, or are they learning grammar rules, or do you want a uniform theory of learning? AUDIENCE: I think, which one is a better approach: that the machine has very basic things and it learns?
So there's a machine-- should we make machines as infants and let them learn things, by for example giving them a stream of text from the internet, from communication over the internet or communication among other human beings, just like a child learns from seeing his parents talk? MARVIN MINSKY: Several people have-- AUDIENCE: Or is it better to actually inject all that knowledge into the machine, and then expect it to act on it from the beginning? MARVIN MINSKY: Well, if you look at the history, you'll find that-- I'm not sure how to look it up. But quite a few people have tried to make learning systems that start with very little and keep developing. And the most impressive ones were the ones by Douglas Lenat. But eventually he gave up. And he had systems that learned a few things. But they petered out. And he changed his orientation to trying to build up commonsense libraries. But I'm trying to think of the name for self-organizing systems. There are probably a dozen. If you're interested, I'll try to find some of them. But for some reason people have given up on that, and it's certainly worth a try. As for language, I think the theory that language is based on grammar is just plain wrong. I suspect it's based on certain kinds of frame manipulation things. And the idea of abstract syntax is really not very productive, or it hasn't-- anyway. Because you want it to be able to fit into a system for inference as well. I'm just bluffing here. Did you have a question? AUDIENCE: I was just going to say, it seems that what you're saying might be considered to be a form of example-based reasoning. You just have lots and lots of examples, which is not unlike the work that DuBois does, with a child learning the word water from hearing lots of people use that word in different contexts and examples. MARVIN MINSKY: While you're here, Janet Baker was a pioneer in speech recognition. How come the latest systems suddenly got better? Is it just bigger databases? AUDIENCE: That's a lot of it. MARVIN MINSKY: Of course, with the early ones you had to train for an hour. AUDIENCE: But we now have so many more examples and exemplars that you can much better characterize the variability, which is tremendous, between people. And you typically have multiple models, a lot of different models of how-- so it knows the space of how people say different things, allowing you to characterize it really well, so it will do a much better job. You always do better if you have models of a given person speaking and modeling their voice. But you can now model a population much better when you have so much more data. MARVIN MINSKY: They're really getting useful. AUDIENCE: Oh, dear. MARVIN MINSKY: OK, unless somebody has a really urgent question. Thanks for coming.
MIT_6868J_The_Society_of_Mind_Fall_2011 | 11_Mind_vs_Brain_Confessions_of_a_Defector.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. DAVID DALRYMPLE: Thanks, Marvin. I decided to call my talk "Mind vs. Brain-- Confessions of a Defector." I used to be an AI-ist. My thesis was reviewed by Marvin. I worked in the Media Lab. I thought about models of computation that might be more suited to building AIs. Really, I didn't have that many good ideas, although people thought they were good ideas. And this kind of scared me. So I backed away a little bit. And then I found a really cool problem in the area of neuroscience. And I've now left AI at MIT. And I'm a PhD student in biophysics at Harvard. And I am working on worms. And I'm trying to figure out how they think, to the extent that they do. And so I wanted to sort of go over, not really the details of neuroscience or the details of my work. I mean, I'm going to do sort of what Marvin does. I'm going to talk for a little bit. And then we can talk about whatever you want. And if you want to get into the details, that's great. But first, I just wanted to kind of give an overview of, from my perspective, where I see neuroscience and AI sort of fitting in with each other and with the larger context of sort of the history of science and the taxonomy of science. So I sort of self-identify as a mathematician, to the extent that people have sort of discipline identities, like gender identities, or racial or cultural identities. My discipline identity is math. And so I see everything as sort of springing out from that. So on one side, you have sort of the scientific tower, where you have physics. And on top of physics, you put chemistry. And on top of chemistry, you put biology. And on top of biology, you have neuroscience. And there is also the sort of computer science tower, where you start from the theory of computation. And then you have sort of software engineering, and then AI. And in both cases, what you're really stretching toward is an understanding of what thought is. And we sort of got to some success in sort of figuring out what the universe is, at least down to a certain level of description. [LAUGHTER] I could turn on a blackboard light. MARVIN MINSKY: Is there one? DAVID DALRYMPLE: Yeah, there is a blackboard light. MARVIN MINSKY: I have to correct this down here. You don't have transparencies for this? DAVID DALRYMPLE: No. I don't have-- I don't know where to buy transparencies anymore. [LAUGHTER] MARVIN MINSKY: I have some transparencies. DAVID DALRYMPLE: That would have been good to know. [LAUGHTER] But-- MARVIN MINSKY: You can't use them. DAVID DALRYMPLE: What's that? MARVIN MINSKY: But you can't use them. [LAUGHTER] DAVID DALRYMPLE: But what we're really trying to get to is sort of this fundamental question of, what is human experience? And human experience is sort of dominated by consciousness, or cognition, or whatever you want to call it. And we really don't know what's going on there. We have something called cog sci. And it definitely connects to both neuro and AI, but it's pretty fuzzy right now. And a lot of people take the metaphor of transistors in talking about the brain. And they say, oh, as neuroscientists we spend a lot of time looking at the details of what happens in the nonlinear regime of this sort of neurotransistor.
But it really doesn't matter, because what matters is when you put the things together and so on, which is a good metaphor. But a metaphor that I also like, and I see used less often, is that we're sort of right now looking at thought and its relation to the brain-- I think we're sort of where Copernicus was when he was thinking about the planets. In a sense, we had sort of the right basic idea. We have the idea that thought happens in brains. And Copernicus had the idea that planets orbit the Sun, which at the time was a new idea for Copernicus. And in the relative scheme of things, it's kind of a new idea for us that thought happens in brains and happens by electrical impulse. But Copernicus didn't have gravity. He didn't have Newton. And so in describing the orbits, he had all of these little corrections, and epicycles, and deferents, in trying to make sense of the things that would later all follow from this very simple theory of calculus and of gravity, but that was yet to be discovered. And so I think that there's something that we're getting to in the realm of cognitive science, some sort of new mathematical insight that I think will be on a par with the discovery of calculus, in terms of how networks emerge to perform complex computation in living systems. And the way that I think about that is, when we really get down to the essence of calculus, it's about what happens in the limit of things sort of acting in similar ways as you cut them into smaller and smaller pieces and have more of those pieces. And what we're looking at, both in neuroscience, and in sociology to a lesser extent-- because fewer people are doing quantitative things there-- is scale-free networks, where as you partition the network, at different sizes of partitions you get the same sort of in-degrees and out-degrees. And we really don't know what we're talking about when we go into scale-free networks. But it seems that there's something there that relates to sort of the mysteries-- the things that we're bumping into that I feel are the same sorts of things that people were bumping into shortly before we figured out calculus. So anyway, that's sort of my big philosophical spiel. I also wanted to tie into this quote from Ernest Rutherford, who's kind of one of the greatest curmudgeons of science history, who once said, "All science is either physics or stamp collecting." Either you're writing down the equations and you know exactly how things work, or you're just sort of saying, oh, this looks nice, let me write down what it looks like and where I found it. And biology is definitely still largely in the stamp collecting realm. And it doesn't have to be. It's not the nature of studying living systems, which is the reason that we have biophysics. It's the reason that I am in a biophysics program. But there's sort of this cultural tradition in biology that goes back to sort of Darwin, where you just sort of look around the world. You write down what you see. And it's a time-honored tradition. And it certainly gets you pretty far. But it seems to break down when we look at the brain, because you start cataloging these things that are so minute. And you start cataloging them in isolation, instead of considering them as dynamical systems that interact with their surroundings. And it seems that you really have a hard time putting those pieces back together once you've sort of collected them in separate observations. So that's another point I wanted to make.
And then I was just going to talk a little bit about why I think things are changing, especially in the neuro domain. The sort of classic way that you do neuro experiments is, you have some sort of furry creature. And you present some sort of stimulus. And then you stick a wire somewhere into the brain. And you measure what happens depending on the stimulus. And you don't know what you're looking at. But you know that you're looking at something. And you can get some sort of idea as to what things are similar. Hubel and Wiesel did these great experiments where they stuck electrodes into cats' visual cortex and basically just kind of waved their arms around and saw that some of the neurons were orientation sensitive. And that's the foundation for much of our current knowledge and research about vision in mammals. But it doesn't tell you how the different things that you're looking at relate to each other. And it gives you only the merest glimpse. And even in the most dense sort of electrode applications, you're going to get a maximum of thousands or maybe 10,000 signals out of this at once. And most likely none of those neurons that you're looking at are going to be anywhere near each other on the scale of synapses. And so you're essentially just sampling a population. And, in fact, there's a lot of research in this field that goes under the rubric of population dynamics, where you're sort of just looking at things in the aggregate. I heard a nice little metaphor actually just today, in an unrelated class, about populations: if you're, for instance, examining a population of people walking down Fifth Avenue in New York City, you can measure things like the average rate that people are walking. But you're never going to capture something, if you're just looking at flow across everything, like: every 100 meters, people will turn into a shop and stop for a few minutes and then come back down and start again. So you're only getting sort of the very broad strokes. Or otherwise, you're taking the neuron out of its natural context. And you're just saying, OK, here I have a neuron in a dish, whatever, in some sort of growth medium. And I'm just going to probe it and see what happens when I stimulate this neuron in different places. And then you're likely exploring parts of the phase space that this neuron would never experience in the system. And you're getting no indication as to which parts of the phase space it actually is in, or how that would relate to other parts of the system. And when you're dealing with this complex information processing network, that's pretty critical. So that's sort of the classical state of neuroscience, which basically plateaued, I think, about 30 years ago, which is the reason why people like Marvin are pretty frustrated with it. But in more recent times, we've started to get some different ways of looking at neural systems that I think are really exciting. This is why I decided now is a good time to be a neuroscientist. So now if we zoom in to sort of a physicist's spherical neuron, it has a lipid bilayer. And neurons are powered by channels, which interrupt the bilayer and can pass certain types of ions. And naturally, ions carry charge. And some channels, depending on the concentrations and the voltages, will have positive ions flowing in; positive ions flowing out; negative in, out. And these dynamics basically form the basis for neural activity, and action potentials, and everything.
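A crude sketch of how those channel dynamics get abstracted in models, using the textbook leaky integrate-and-fire simplification; the constants are standard toy values, not measurements from any real neuron:

```python
# Leaky integrate-and-fire: the membrane voltage decays toward a resting
# value and spikes when input current charges it past a threshold.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0  # mV, typical textbook values
TAU_M = 20.0   # ms, membrane time constant (how fast the leak pulls back)
DT = 0.1       # ms, integration step

def simulate(input_drive, steps=2000):
    """Euler-integrate dV/dt = (V_rest - V)/tau + I and record spike times.
    input_drive lumps all the ion-channel currents into one number."""
    v, spikes = V_REST, []
    for step in range(steps):
        v += DT * ((V_REST - v) / TAU_M + input_drive)
        if v >= V_THRESH:            # net inward (positive-ion) flow won
            spikes.append(step * DT)
            v = V_RESET              # reset, like a hyperpolarizing current
    return spikes

print(simulate(1.0)[:5])  # first few spike times, in ms
```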
And we found some of these channels are things that functionally are like channels in prokaryotes-- that means basically single-celled organisms, in algae, in Archaea, in bacteria even-- that have really useful properties. For instance, there's one called channelrhodopsin, which turns on if and only if there's blue light impinging upon it. And this was discovered about 15 years ago. And about 10 years ago, it was sequenced. And now, with the ability to synthesize genes and introduce them into other organisms, we can take that gene and add it to a neuron, which is not at all where it belongs. Nature would never put this type of channel in a neuron, because there's no reason for neurons to be sensitive to light unless they're photoreceptors in the retina. But now that we have this blue-light-activated channel, we can point a laser at the cell. And when we turn it on, if it's the right wavelength, the cell gets activated. When we turn it off, the effect disappears. And so you can now do these very precise sorts of perturbations and see what happens. And because it's a genetically expressed channel, what a lot of people did initially is they would just express the channel in a certain class of cells that has a specific promoter that's been discovered. And then just use a wide-field blue lamp, light up the entire brain. And then only that class of cells in which you expressed a light-sensitive channel will turn on. So you can see, what does this class of cells actually do? There is a similar channel that's activated by yellow light that removes-- well, actually it introduces negative ions; it introduces chloride. And that causes the cell to hyperpolarize, or deactivate. So you can inhibit-- selectively inhibit populations based on genetics. And this allows you to do something like a knockout. A lot of people do knockout mice, where you remove some class of cells. You see the behavioral deficit. But you can do it without any-- basically, you can do it with a positive control, because if you aren't shining yellow light into the skull, then it's just like a regular mouse. And this is really helpful for determining those effects. But even so, this is still population dynamics. You're still just talking about some broad class of cells. All pyramidal neurons, all basal ganglia: here's what happens when you remove them. So there's another piece of it. There's another piece of this puzzle, which is multiphoton microscopy. And this is something that-- how many people know multiphoton or two-photon? OK. So I recently learned about this too. And it's really cool, because this is the sort of thing that, when you're first like eight years old and you hear about lasers, it's the sort of thing that you would imagine you would do with lasers. And then you learn a bit more about lasers and you realize that lasers don't work that way. And then you learn more about lasers, and you're like, whoa, actually you can do that with lasers. And what you do is you have one laser. It's a femtosecond laser, which is necessary because of the ridiculous synchronization that's required in doing this. Not only does it have to be a femtosecond laser, but you actually-- even though it's two-photon, you're using two beams, you can't have two lasers, because they won't be well enough synchronized. So you have to split a laser into two beams. And then you focus-- using fancy optics that I don't know enough optics to draw-- those two beams onto a single point inside your sample.
And the wavelength of this laser is twice the wavelength necessary to excite your channelrhodopsin or your halorhodopsin. And what happens is there's a small but nonvanishing probability that two photons from the two branches of this will arrive at exactly the same point in, not quite Planck time, but in sort of molecular excitation time. And if those two photons arrive close enough to each other, they'll have exactly the same effect on the state of the molecule as a photon with half the wavelength, so twice the energy, because those two photons each deliver that amount of energy, and so it gets doubled. So this can only happen at exactly the spot where the two beams converge. So you get this extremely selective, not only z slicing, but also an even more selective x and y slicing. So you can use this to target an individual neuron. And you can do it repeatedly. You can do it reliably. Any space in your working volume you can target, using basically acousto-optic deflectors, which again I can't draw. So they're going to be black boxes. But you can do it very fast. And it's expensive. But you can direct these at hundreds of hertz. So you can essentially write to anywhere in the brain. You can turn things off or on as you wish, as long as you know the locations. And the other final piece of this optogenetics puzzle is there's a fluorescent protein called GCaMP, which is sort of like a GFP, but with a calmodulin attached to it. And the calmodulin binds calcium ions. And the GFP is a green fluorescent protein. But when the calmodulin isn't bound to a calcium, it sort of hangs out here and disturbs the conformation of the GFP so that it can't fluoresce. But when this binds a calcium, then the GFP fluoresces green. And calcium is one of the major signals of neuron firing. Especially for neurotransmitter release, you need an influx of calcium. So this basically tells you whether the neuron is active. And as if that weren't enough-- because it is a second messenger-- just this year, there was a protein developed that's a membrane protein, which is also genetically encoded. All of these can be just engineered into a line of animals. And then you don't need to worry about it anymore, no injections or anything. So there's another one that-- originally, nature intended this as a proton pump. It's archaerhodopsin. It's a light-sensitive proton pump. But what they were able to do is to silence the proton pumping, basically to disable that aspect of the protein's function. But then they discovered that it actually then has a fluorescence which is proportional to the voltage across the membrane that it would be moving those protons across. So, in sum, you can activate neurons. You can inhibit neurons. You can measure the calcium concentration or activation. And you can measure the voltage. And you can do it all, anywhere you want, very fast. So this is basically a toolkit for doing experiments that you could really only dream of with electrophysiology. In particular, the one that I'm working on right now is the worm C. elegans, which is a very well-studied organism. And, in fact, it's the only organism for which we actually know the complete connectome. So a lot of people talk about connectomes. And it's sort of a dark secret of neuroscience that we already have the connectome for C. elegans and we can't do anything with it. Because it turns out that just knowing where-- you have one neuron here. And it synapses onto this neuron here. And this controls the body wall muscles.
That doesn't really tell you anything. Like, maybe this is an inhibitory synapse, maybe it's excitatory, maybe it's non-functional. Maybe it's stronger than the other synapses on that cell, maybe it's weaker. And so it basically gives you very little information to start from if you're trying to understand how this organism thinks or computes. But since we know where all of the neurons are, and there are only 302 of them, it's not crazy to think about using this microscope and these biophysical techniques to actually build a model, pair-wise if need be, of all 90,000 pairs-- of how every neuron affects the behavior of every other neuron. So that's what I'm working on. I think that actually doing these sorts of observations is something that's never really been done before. And again, to make a ridiculously grandiose comparison, it's kind of like how Newton was only able to do what he did because of Galileo developing the tools to observe phenomena that had never been seen before. And I think that advances-- well, this is actually a quote from Sydney Brenner, who was the first person to suggest that you can study this worm and maybe learn something from its nervous system: advances in science usually come from new techniques, new discoveries, and new ideas, in that order. And so now we have the techniques. We're working on the discoveries. And the hope is that it will lead to new ideas. So, questions? Yeah? AUDIENCE: So you've given us this overview of the state of the art right now and possibly how it's going to be in five years, but how do you think neuroscience is going to be in about 20 years or maybe 30 years? DAVID DALRYMPLE: Well, what we can do right now is this most basic organism that nature has to offer. And the natural thing to do once that gets solved-- we don't know how long it'll take, because we don't know how much detail is really important, but I think that we can solve the worm in three or four years-- the next step is the zebrafish, which is also optically transparent, which is handy for using these sorts of optical microscopes. But the zebrafish has 100,000 neurons. So it's a big jump in complexity. And it has a lot of the same sorts of brain regions that you see in mammals and even humans, although they often go by different names. But it's a similar structure. You know, it's a vertebrate. And it has eyes, which the worm doesn't. So that would be the next thing to look at. And I think that'll probably take another five years or so. And then maybe Drosophila-- bees are pretty complicated. That's the first place where you get something that resembles language. So that might be really interesting. And eventually, mice, cats, dogs, monkeys, and humans. And you know, definitely the path that this goes on, in an ideal world, is toward taking an individual human brain and turning it into a model. AUDIENCE: What would you consider to be solving it? Or how much do we need to know about this? What are the problems there? What are the things that we still have to understand? DAVID DALRYMPLE: The way that I have set up the criteria, there is a big list of publications, basically of behavioral results. And there are a lot of stereotyped behaviors. There are something like 30 or 40 different conditions where you can put the worm in these conditions and it exhibits particular omega turns, or reversals, or things like that.
So that's sort of the baseline: say, well, if you put the virtual worm in these conditions, in a virtual, simulated Petri dish, it exhibits all the same behaviors. That's sort of your first-order check. And then the next step is, well, what happens if you remove a neuron? You pick one of the 302 neurons; in a physical worm, you can ablate it with a laser. You can kill a cell specifically. Or you can just inhibit it with a halorhodopsin. Then you can see, does the virtual worm exhibit the same behavioral differences as the physical worm under that condition? And you can also do larger-scale sorts of things. You can say, well, what happens if I were to activate this neuron 10 times a second, every five seconds? And how would that change things? So you could do lots of different perturbations. And that, to me, is the best way to check that you have what I consider a biologically relevant model. But what you're looking for is not anything on sort of the-- you're looking for observables, basically. And I think that's what you have to do if you're doing science. You have to be looking at what's observable. And right now, what's going on inside the synapses is not observable. So I'm not going to be simulating that. And part of the hypothesis-- since I'm doing this as a PhD thesis, it has to answer a scientific question. And the question is, can you capture the qualitative aspects of behavior as viewed externally without modeling what's going on at the molecular-dynamics level? Yeah? AUDIENCE: My question is on the connectome. So in my understanding of the human brain, I thought that neurons could grow new connections with other neurons. So in that sense, it's like the map of the connections between all pairs of neurons is constantly changing. DAVID DALRYMPLE: Yes. AUDIENCE: So how would that work when we're trying to find the connectome of a more complicated organism whose neurons do make new connections? DAVID DALRYMPLE: So C. elegans, nicely enough, doesn't do that. But you're absolutely right. Mammals do. Mammals do form new connections. And there is also, even in C. elegans, there is a question of development. It's been shown, not conclusively, but fairly convincingly, that electrical activity is not only important for cognition. It's also important for development. And if you introduce genes that basically only break the action potential function of a neuron, those neurons, as they develop from birth onward, don't form the connections that they should. And so there is something going on. There is some computation going on there that's development-specific, probably, because in most areas of the brain, once you reach a certain level of maturity, those sorts of processes stop growing. So there is some sort of computation there. And I am explicitly leaving that out, because I want to graduate in a reasonable amount of time-- saying, you know, development, future work. And it is. It's future work. And at the same time as, hopefully, this is a success, someone will go look at the zebrafish, and someone will go look and try and figure out how C. elegans develops from a larval stage to an adult with all 302 neurons, and how they find each other to connect. And that's definitely important, because how the nervous system develops gives us some clue as to what the functional organization is, because the things that develop in concert and sort of stem from the same developmental program, in a sense, probably have the same functions when they're finished developing.
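Going back to the validation criteria a moment ago, a toy Python sketch of that two-stage check; every assay name, neuron number, and behavior below is an invented placeholder, not data from the talk:

```python
from dataclasses import dataclass

@dataclass
class Assay:
    name: str       # experimental condition, e.g. a touch stimulus
    expected: str   # the published stereotyped behavior

# Stand-ins for the ~30-40 published behavioral conditions.
ASSAYS = [Assay("touch_head", "reversal"), Assay("food_gradient", "omega_turn")]

class VirtualWorm:
    def __init__(self, ablated=frozenset()):
        self.ablated = ablated

    def run(self, assay):
        # Placeholder dynamics: a real model would integrate the whole
        # 302-neuron network; here we fake a single dependency.
        if assay.name == "touch_head" and 7 in self.ablated:
            return "no_response"     # pretend neuron 7 mediates reversal
        return assay.expected

    def ablate(self, neuron):
        # Mirrors laser ablation (or halorhodopsin silencing) in the
        # physical worm: same model, one neuron removed.
        return VirtualWorm(self.ablated | {neuron})

def first_order_check(worm):
    """The intact virtual worm must reproduce the published repertoire."""
    return all(worm.run(a) == a.expected for a in ASSAYS)

print(first_order_check(VirtualWorm()))        # True
print(VirtualWorm().ablate(7).run(ASSAYS[0]))  # "no_response"
```

The second-order check would then compare each single-neuron ablation against the corresponding physical-worm experiment, condition by condition.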
But separate from development, there is also this question of learning. And if you do just capture the connectome, there is a-- it seems to me that there is a possibility that what you could wind up doing is sort of capturing a connectome frozen in time. You could wind up with some anterograde amnesia effect, because if you're missing some aspect of plasticity, on a short-term scale you would get the same sorts of responses, but you wouldn't get the same sorts of changes over time. So that is a possibility. The way that you can get around that is, if you have tools-- again, we're talking 20, 30 years in the future. These are pretty recent. Who knows what we'll have then? If we can visualize something either at a lower level, in terms of what's going on with transcription factors, or if we can visualize how the axonal processes grow, then we can build models of that in the same way that now we can build models of sort of the steady-state dynamics, in the sense of short-timescale dynamics of electrical activity. AUDIENCE: Thanks. DAVID DALRYMPLE: Yeah? AUDIENCE: So how many neurons can you look at at once with that [INAUDIBLE]? DAVID DALRYMPLE: So it depends on how many lasers you have. The number of lasers over 2 equals the number of simultaneous observations. But you can direct the lasers at hundreds of hertz. And so if you want to look at 100 neurons at 30 hertz, you can do that. You just have to multiplex them. And because things, especially in C. elegans, are on a fairly slow timescale-- because C. elegans doesn't have action potentials, or at least they're not thought to be significant for computation-- it's doable. As technology gets better, again, you'll be able to scan faster. There are actually people working on just fancier optics tricks that let you scan faster. And also you can just use more lasers. There is nothing to say that you have to be pointing one laser. You can multiplex much faster if you're talking about pulsing the lasers on and off, because they're femtosecond lasers. So the more lasers you have, the better. But ultimately, when we're talking about systems like the mouse, where you have to penetrate through millimeters of tissue to get to certain regions, it's probably not going to be optical. At least it's not going to be visible light. One potential direction is called magnetoencephalography. When there is a neural current, it induces a magnetic field, just by Maxwell's equations. And there is-- right now, if you have superconducting magnetometers, SQUIDs, you can detect the currents of around 1,000 neurons activating at once. So you have to have, like, that level. But when you're talking about 100 billion neurons in the human brain, that's still pretty impressive. It's pretty impressively fine-grained. It's much better than an MRI. And as time goes on, again, hopefully those things will continue to evolve and improve to the point where you can measure what's going on at a very low level. And then on the control side, similarly, we have transcranial magnetic stimulation. And that also is rapidly increasing in resolution and accuracy. Sergei? AUDIENCE: Do you ever name the worms? DAVID DALRYMPLE: A friend of mine suggested Ellie for C. elegans, but I haven't come up with any others. Yeah? AUDIENCE: So this question is currently not super well formed, but [INAUDIBLE] DAVID DALRYMPLE: No problem.
AUDIENCE: OK, so currently, since you're devoting a few years of your life to this and you're doing a PhD on it, you do think it's important to try to understand the low-level specifics of it all to understand the mind and thoughts, right? Is that what's going on? DAVID DALRYMPLE: So I think what I'm trying to establish is a lower bound on what's important, because there are a lot of people out there, like Terry Sejnowski, who argue that what's going on inside the synapse is important. The mechanics of vesicle diffusion is important. And if you're not keeping track of the vesicles, you're really missing the point. And so what I'm trying to do is, at least for one organism, for 30 behaviors of this organism, say, you know what? Vesicle motion is not important for this. And then hopefully, once that's established, we can sort of move forward and say, OK, well, maybe the neuron-- the different compartments in the neuron-- aren't important either. Maybe there are some functional units that we can start to consider. But I'm trying to just sort of establish that lower bound. And in addition, I think in C. elegans, where you have a total of 302 neurons, that's on the same order of magnitude as the number of functional regions that we've identified with MRI in human brains. And I think in C. elegans, each of the neurons really is pretty specialized to do a certain job in the organism. So I'm not sure that you could go that much higher in this model system. But at least I'd like to say the neurons are the lowest level that you need to worry about. AUDIENCE: OK, so as a follow-up to that-- so that's really cool. That actually clarified some things for me. But do you think that work on higher-level stuff is still useful at this point? DAVID DALRYMPLE: Oh yeah, absolutely. But I think that work on higher-level stuff is largely the same sort of work that has been possible for a while. I mean, there is certainly an argument that we have way cheaper and more powerful computers than we did when people started doing AI. But I feel like most AI is not-- it's not really about scale, you know, unless you're talking about Google-style AI, which I feel like is not really the point. The reason that I moved into neuroscience is because it's clear to me that there is something that you can do now that you couldn't do before. There is something you can see that you couldn't see before. And so there has got to be something that you can learn from that. I don't think that this is the most probable path to intelligence, in the sense that I think there are many, many paths to intelligence. And for the collection of all of them that don't involve looking at brains at all, there is a greater probability of success than for the collection of all that involve looking at brains. But I think this specific path is the most probable single path. If you were to compare it to hierarchical temporal memory as a specific way that you can go, or if you were to compare it to Bayesian networks as a specific way to go, I think that this is more likely than any specific thing. So it's not that I think that it's the best, but it's certainly the clearest in how to proceed. And I like that. Marvin? AUDIENCE: Is there an estimate of how many genes control the nervous system in the worm? DAVID DALRYMPLE: In the worm, you know, I don't know that number. I think if you're talking about just, like, channels and transporters, it's probably something like 100, if that. There is a class of genes called unc, for uncoordinated.
And when you remove those genes, the worm doesn't really swim very well. And there are about 120 genes in that class. So I think that's sort of roughly the nervous system genes, if you will. AUDIENCE: I've seen estimates for the mammalian brain which are 20,000-- DAVID DALRYMPLE: Well, 20,000 is how many genes there are in a human. But maybe all of them are important for the brain. Who knows? Yeah? AUDIENCE: When you're talking about removing some of those neurons, are you talking about doing that in a mature worm that doesn't have any more development, obviously? DAVID DALRYMPLE: Yeah. AUDIENCE: What about doing that in a worm that hasn't developed yet? With that, could you see things develop new connections and new-- DAVID DALRYMPLE: Things get screwed up. If you do it in a mammal, things adapt. And you wind up kind of being OK. The worm developmental system is not that complicated. And if you start killing things in the larval stage, it kind of just isn't happy. It'll usually live, but it'll-- the neurons that were supposed to go there will just sort of get lost. AUDIENCE: Is there a tipping point for what animal is complex enough to adapt, versus what isn't, like a worm? DAVID DALRYMPLE: So the word for it is non-eutelic. Eutelic means that the network structure is fixed and it won't adapt. And I think you can get up to the level of about a snail. There are some eutelic snails. And then beyond that point, certainly all insects have adaptive neural networks. Yeah? AUDIENCE: Could you just explain to me [INAUDIBLE] the kinds of experiments using short term-- I mean, you're getting some channels to [INAUDIBLE]? DAVID DALRYMPLE: So the nearest-term thing is actually just to focus on the readout, which is the calcium imaging readout-- to express it in all the neurons, which no one has ever done, because there is this-- again, it's sort of a cultural bias. You know, not that I'm unbiased-- I have one bias. And biologists have the other bias, which is to isolate the smallest publishable unit and sort of say, OK, I'm going to work on this cell and figure out its function. Anything else is noise. And so you try to minimize the expression of your transgene. And what I'm trying to do is maximize that. I want it in all of the cells, because I want to be able to capture the entire system so that I can treat it as sort of a closed system with well-known inputs and outputs. So in this case, the first thing I'm trying to do is just to express the calcium sensor, just to do confocal imaging and to see if there are any patterns that pop out. And right now, I'm just sort of in the process of trying to get this gene to actually express in all of the neurons. Yeah? AUDIENCE: So if every worm has the same number of neurons and they're all connected in the same way, what accounts for the functional differences between worms? Say, like, what makes one more uncoordinated than another? DAVID DALRYMPLE: Oh, so when you do have those mutations, you do get different network structure. Well, not all of the time-- sometimes it's not different network structure. Sometimes it's just that a certain class of neurons is not excitable; it won't fire because it's missing voltage-gated channels, or something like that. Actually, none of them have voltage-gated channels, but if it's missing receptors, for instance, it'll just sit there. And it'll be a roadblock to signals that are supposed to go through there. So yeah, when you start mutating, that breaks the rule that they're all exactly the same. Yeah? AUDIENCE: Follow-up question.
How certain are you that this kind of genetic modification doesn't change the topology, and that genetically modified worms would behave in the same manner as an ordinary worm? DAVID DALRYMPLE: It's a very good question. And the way that I've sort of dodged that is to say, what I'm looking for is this sort of repertoire of 30 or 40 behaviors. And so suppose that introducing all of these foreign channels really does change what's going on at the level of neural dynamics. But suppose that when you look at what the worm does under various experimental conditions, it's still the same. Then yes, you're going to be simulating something that isn't natural. You're going to be simulating the modified state with whatever dynamical changes are introduced, but you're still going to be capturing the computations that lead to the same behavior. And so in some sense, if you can't tell the difference right away, and what you're trying to do is not to be able to tell the difference to your model, then it doesn't matter if those changes are introduced. But there is definitely a risk that some of the behaviors drop away. For instance, with the voltage sensor, as I said, it's originally a proton pump. But if you put a proton pump into a neuron, you're not going to get any spontaneous activity, because it'll just depolarize-- it'll hyperpolarize all the time, well, all the time that that channel's being activated. And since it's supposed to be a passive sensor, that's not good. So it's critical that this sort of performs as advertised by the people who engineered it, that it doesn't perturb the neuron. But you know, if it does, you're going to know. It's not going to do the things that it's supposed-- the worm won't do the things it's supposed to do. Yeah? AUDIENCE: [INAUDIBLE] sort of focus? Or will it be a bit more unpredictable than that? DAVID DALRYMPLE: So I think, my intuition from what I've observed so far, is that it's a collection of largely autonomous local control loops with some long-distance modulatory connections that are usually not active except in exceptional conditions. So you're going to have in each body segment just a tiny control loop. In fact, there is some evidence that the control loop consists of one cell that just happens to synapse onto muscle and also be a sensory neuron that basically says, if the body segment ahead of me is stretching this way, then about 50 milliseconds later I should stretch that way too. And so this sort of propagates the undulating wave. There isn't a pattern generator, for instance, like you would see in a higher organism that sort of says, OK, you, you, you, you. They sort of coordinate with each other. And then when something goes wrong, then there is other connections that seem to get brought into the loop and modulate those typically autonomous systems. But I don't know. I expect that many surprises await in the full model.
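[Editor's sketch: a minimal, hypothetical Python illustration of the local control-loop idea just described, where each segment copies its forward neighbor's bend about 50 milliseconds later. The segment count, time step, and head-oscillation frequency are made-up values for illustration, not numbers from the talk.]

    import math

    N_SEGMENTS = 12            # assumed number of body segments
    DT_MS = 5                  # simulation time step, milliseconds
    DELAY_STEPS = 50 // DT_MS  # the ~50 ms stretch-reflex delay

    def step(history, t):
        # history[t][i] is the bend of segment i at step t; the head (segment 0)
        # is driven from outside, and every other segment just follows its
        # forward neighbor roughly 50 ms later.
        bends = [0.0] * N_SEGMENTS
        bends[0] = math.sin(2 * math.pi * t * DT_MS / 1000.0)  # head oscillates at 1 Hz
        for i in range(1, N_SEGMENTS):
            past = t - DELAY_STEPS
            bends[i] = history[past][i - 1] if past >= 0 else 0.0
        return bends

    history = []
    for t in range(400):  # two seconds of simulated time
        history.append(step(history, t))
    # The result is a wave traveling down the body with no central pattern
    # generator, which is the point of the anecdote above.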
Yeah? AUDIENCE: What do you think about the current efforts to jump directly to the connectome of, say, a mouse? Do you think it's possible that these projects will succeed without insight? DAVID DALRYMPLE: Without? AUDIENCE: Insight. DAVID DALRYMPLE: I think that it's possible that they will succeed and not provide any insight, if that's what you mean. It's certainly physically possible if you have enough time and resources to do electron photomicrographs of an entire mouse brain. I mean, it's not that far off. If you had, say, $80 million, you could just do it. It just takes a lot of microscopes and a lot of time. So it's not that-- it's not actually a very high-risk project, in that sense. But it's also not really that much of a high-reward project, because no one knows what to do with that data. Like, the most advanced-- I actually did a lab rotation with Jeff Lichtman, who did the Connectome project. And what they're looking at right now is, when axons synapse onto dendrites, are they choosing which dendrite to synapse onto randomly or not? And that's the sort of analysis that you can do on that data. And it seems pretty obvious that they're not random. There is some sort of pattern to how things connect in the brain. I think that's intuitively clear. But there is still also this community of people out there who say, no, you know what? We've done neural networks with random connections. They seem to be pretty clever. They're just as clever as the neural networks we've built with carefully designed connections. So you know, there is no real reason that the brain needs to have specific patterns of connection. And nature is always parsimonious and efficient, so it's probably random. And so this is the sort of stuff that you see in connectomics. And it's not exactly what I'm interested in. AUDIENCE: What sorts of inputs and outputs does the worm have? You mentioned muscles and-- DAVID DALRYMPLE: Yes, so the outputs that we can observe are pretty much all muscles. There are body wall muscles. There are egg-laying muscles. There are anal muscles. There are head muscles. And then the inputs, there is a few light-sensitive neurons. So it can do sort of-- it's actually very much like a QAPD, which is the sensor that was put on top of early heat-seeking missiles. So it has just enough information that it can navigate toward light or away from light. It has a large variety of chemical sensors, or basically odor sensors. So that's how it finds food. It also has chemosensors that aren't classified as odor sensors, like carbon dioxide sensors. So if there is too much carbon dioxide, it goes away from there. It has touch sensors. So if you poke it, it goes away. I'm trying to think of some of the other sensors. It's mostly odor, definitely, by cell count. Odor, touch, and light, I think that's about it. AUDIENCE: So in the simulation, you intend to just stimulate the inputs and try to get some expected outputs? Or are you going to brute force all the possible neural states for the worm? DAVID DALRYMPLE: So it's 302 neurons. So if each neuron has, say, two states, that's 2 to the 302 combinations. That's a big number. I'm not going to do that. There is sort of, there is a hierarchy of different sorts of levels of data-collecting pain. And that's the top of it. Well actually, the top of it is that the state of each neuron is a real number and you have to sample a vector field to the precision of its curvature, a 302-dimensional vector field. And then it goes down from there to, at the very bottom, just sort of saying, OK, each of the neurons is a linear function of the other neurons. And we just have to put together a 302-by-302 matrix of synaptic weights. And in fact, it's known that there is only about 7,000 synapses, so you can zero most of that out right away. And there is only 7,000 numbers you need. That's the sort of hyper-optimistic point of view, which is kind of what I thought going into this. And it's probably more complicated than that.
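[Editor's sketch: a hypothetical Python rendering of that "hyper-optimistic" bottom level-- each neuron as a linear function of the others, with only ~7,000 nonzero entries in a 302-by-302 weight matrix. The random weights, leak factor, and noise are placeholders; real values would have to be fit to measurements.]

    import numpy as np

    N_NEURONS = 302
    N_SYNAPSES = 7000

    rng = np.random.default_rng(0)
    W = np.zeros((N_NEURONS, N_NEURONS))
    pre = rng.integers(0, N_NEURONS, N_SYNAPSES)
    post = rng.integers(0, N_NEURONS, N_SYNAPSES)
    W[post, pre] = rng.normal(scale=0.01, size=N_SYNAPSES)  # sparse synaptic weights

    def simulate(x0, sensory_input, n_steps):
        # Iterate x[t+1] = leak * x[t] + W x[t] + u[t]: a leaky linear sketch.
        x, trace = x0.copy(), [x0.copy()]
        for t in range(n_steps):
            x = 0.95 * x + W @ x + sensory_input(t)  # 0.95 is an assumed leak
            trace.append(x.copy())
        return np.array(trace)

    trace = simulate(np.zeros(N_NEURONS),
                     lambda t: rng.normal(scale=0.001, size=N_NEURONS), 100)
    # Only the ~7,000 fitted numbers in W matter; the other ~84,000 possible
    # entries stay zero.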
But I think there is at least some separability where you can say the synapses and global peptide diffusion are basically the only ways that you can have an impact of one neuron on another. And you can do some sorts of things from genetics just to say, well, you know, here are the channels that are in there. You aren't going to have anything that can't be explained in terms of a potassium current, and a sodium current, and a calcium current. And so there is definitely prior information that you can put into it from the connectome and from genetics. And you can even go so far-- and people have-- as to put in prior information from what you would like to get out of it. And you say, well, I want to evolve the synaptic weights toward something that exhibits this type of undulation. And lo and behold, it exhibits that type of undulation, but then you don't know whether it correlates to reality or not. But if you have an instrument where you can check, does this correlate to reality, then you can still use those sorts of approaches as shortcuts through the sort of state space of possible neural networks. Yes? AUDIENCE: What's the lifecycle of a worm like? How long does it take to create a worm that has been genetically-- DAVID DALRYMPLE: It's wonderful. The lifecycle is four days. Yeah, it's very convenient. And it's a self-fertilizing hermaphrodite too. So it clones by itself. It's pretty good. It's also the only organism that cryogenics has been proven to work on. You can put worms into liquid nitrogen. 20 years later, you can thaw them out. And they start crawling around. Yeah? AUDIENCE: So you said that we hope that the neurons have basically a binary state. What's to say that they don't have 10 binary states, or something? DAVID DALRYMPLE: Yeah, no, I don't think they do have a binary state. Again, in some ways, it's not the right way to think about the problem, that the neuron has a state. Especially in C. elegans, we're essentially talking about analog computation, because there aren't known to be spontaneous action potentials, so what you're really looking at is more like a control system. Like for instance, there was a paper just published this month that showed that certain networks in C. elegans are there to compute time derivatives. And I believe that that's just a functional block of a PID controller. Then we're going to find the integral part and then something that linearly sums them and then feeds back. So I think it's not really so much about finding the set of states and then saying, OK, for this state, you go to there, in the sense of sort of an if-do rule, because I think that it's a lot simpler than that, in that there are just simple equations that govern most of the processes and how they relate to each other and to the environment. That's my hope. AUDIENCE: How big is the worm? DAVID DALRYMPLE: The adults can grow to be about 700 or 800 microns long and about the diameter of a hair, about 50 to 100 microns in diameter, depending on stage. And they're transparent. So they're challenging to spot with the naked eye, but you can in the right lighting conditions.
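[Editor's sketch: a toy Python version of the PID control block mentioned a moment ago-- a derivative block like the one that paper reports, plus the conjectured integral and linear-summation blocks. All gains and the time step are invented for illustration.]

    class PID:
        def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt                  # integral block
            derivative = (error - self.prev_error) / self.dt  # derivative block
            self.prev_error = error
            # linear summation feeding back toward the muscles/effectors
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Usage: call controller.update(sensed - desired) once per time step and
    # apply the returned value as the motor command.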
Yeah? AUDIENCE: I remember in the first days Brenner actually bred small ones. And for example, I think the nervous system and the body are separate, pretty much, and with selective breeding, he found some where they were in the same plane and half the length, and so forth. And the purpose of all of that was so that they could get a whole worm into the target of his electron microscope. That way he had serial pictures of the whole thing. DAVID DALRYMPLE: I see. AUDIENCE: And it sounds like they don't have to do that anymore, but it was an interesting case where they controlled the evolution of this beast to-- DAVID DALRYMPLE: To be easier to study. AUDIENCE: And so it's not a bad idea. DAVID DALRYMPLE: No, it's not. AUDIENCE: I don't know if they reduced the number of neurons by accident. DAVID DALRYMPLE: Yeah? AUDIENCE: What percentage of this effort do you think is going to be devoted to combining all of these tools? And what percentage do you think is going to be running the aggregate to learn something? DAVID DALRYMPLE: I think once everything works, the process of taking a worm and sort of scanning it in will only take a couple of hours, so 99.99% building the tools, because really, that's about how long you have before these sorts of-- the dyes get photobleached and the laser power starts to heat things up uncomfortably for the worm. And then you're studying a different system. Yeah? AUDIENCE: Who funds this research? DAVID DALRYMPLE: Larry Page. AUDIENCE: Personally or-- DAVID DALRYMPLE: Yes. Yeah, the NIH isn't a big fan, although there was a faculty member who put in a valiant application. He managed to tie this research in plausible ways to a cure for Parkinson's, which I was really impressed by, but the NIH was not so impressed. AUDIENCE: Did you guys apply for a grant from Larry? Or did he-- did someone talk to him? DAVID DALRYMPLE: It was his idea, actually. AUDIENCE: Oh. DAVID DALRYMPLE: Well, I mean, he wasn't the first person to have the idea. And in fact, he wasn't even the first person to tell me about it. He was the third person to tell me about it. But when he told me about it, I listened. And then when I decided that I was going to do it, I said, well-- not to him, obviously. I went through my network of many people and eventually got a message through that I wanted to do this for real. And you know, could he spare 0.001% of his fortune to make it happen? And yes. AUDIENCE: So how far along are you? How many worms do you have? DAVID DALRYMPLE: It's not anyone's number-one priority to make these genes, again, because all of the people who have the skills to make the genes and breed the worms to express them are naturally biologists who have this bias towards, no, no, we want to do small systems. And so I have to figure out how to do it myself. And not having any background in biology, most of my time lately has been spent taking classes in biology. But I've also been working on some of the computational tools that we'll need. For instance, when you have a worm in this sort of wobbly conformation, and it's changing, obviously, as it behaves, you need to find the neurons and track them at rates of at least 100 hertz in order to keep your lasers pointed in the right place. And so I've been working on that. And I have some pretty good algorithms for that, as well as for isolating the signals. And basically, when you're imaging something in confocal mode, which is going to be the first thing that we do, because it's a lot cheaper than two-photon, you can have two separate neurons that are on top of each other. They have different signals. And the way that I'm doing it is to say each pixel is some roughly linear function of some set of the neurons that are near that pixel in [? 3D ?] space. And you can actually just use a singular-value decomposition followed by some simple heuristics derived from Microsoft Paint to say, here is where-- you just fit. Literally, you flood fill in where the neurons are. And it works incredibly well.
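[Editor's sketch: a hypothetical Python outline of that demixing idea-- treat every pixel's time series as a roughly linear mix of nearby neurons' signals, pull out the dominant components with a truncated SVD, and then group contiguous pixels per component (the flood-fill step). Array shapes and the threshold are placeholders.]

    import numpy as np

    def demix(movie, n_neurons, threshold=0.5):
        # movie: (n_pixels, n_frames) array of fluorescence values
        centered = movie - movie.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        spatial = U[:, :n_neurons]                       # per-pixel loading of each component
        temporal = s[:n_neurons, None] * Vt[:n_neurons]  # each component's time course
        masks = np.abs(spatial) > threshold * np.abs(spatial).max(axis=0)
        return masks, temporal

    # A real pipeline would then flood-fill each mask in 3D to keep only one
    # contiguous blob per neuron-- the "Microsoft Paint heuristic" in the talk.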
Yeah? AUDIENCE: So if you have a multiple-state process or something-- like if you wanted to look at all these neurons as it's producing a new clone or something like that-- would you be able to? DAVID DALRYMPLE: Sorry, what do you mean by multiple states? AUDIENCE: Yeah, like instead of a stimulus and response sort of thing, if you wanted to measure something like a process that took longer? DAVID DALRYMPLE: Oh, yeah. So the nice thing about having control over all of the neurons is that you can also control the sensory neurons, and thereby put the worm into the Matrix. And you can make it experience whatever you want, in principle. This has been done with zebrafish. So the way that you do this is you use a myotoxin to prevent any of the muscles from contracting. And so then you have a perfectly still, paralyzed animal. And then you can feed it whatever sensory stimuli you want. You can read out the motor stimuli-- or motor responses, rather-- and you can feed that back into your simulation. It's exactly like The Matrix. And then the animal is convinced that it's in this environment that doesn't exist. Yeah? AUDIENCE: So one thing that I don't perhaps understand with this [INAUDIBLE] but what I'm trying to understand is how much of the state would the worm have? So it's 300-odd neurons, how much of the state do you have knowledge of at any particular time? Is it all of those neurons? DAVID DALRYMPLE: So again, it is. You can measure-- the way that it works, if you have one pair of lasers, you can measure any neuron at any time. And the measurement process takes about a few milliseconds. And the time constant of these neurons is on the order of 40 or 50 milliseconds. So you have to be a little bit smart about which neurons you want to look at when. So in that sense, you can't look at all of them simultaneously unless you have more lasers. Again, more lasers solve everything. But yeah, so you can look at calcium. You can look at voltage. And you can't do those at the same time either. You have to say, OK, I'm going to read the voltage of this. I'm going to read the calcium of that. And there is actually a theory called optimal exploration of dynamic environments, which was also just published this year, about a way to algorithmically make that decision of what thing do I want to look at that is most likely to lead in the long term to gaining the most information about the system, given my expectations of what it might look like?
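[Editor's sketch: a hypothetical Python illustration of that multiplexing constraint-- one laser pair, a few milliseconds per read, neuron time constants around 40-50 ms. The scheduler below is the naive greedy baseline (always read the most stale neuron), not the "optimal exploration" method mentioned, and the timing constants are assumptions.]

    import heapq

    READ_MS = 3   # assumed time per measurement
    TAU_MS = 45   # assumed neuron time constant

    def schedule(n_neurons, total_ms):
        # Always read the neuron whose estimate has gone longest without a read.
        heap = [(0.0, i) for i in range(n_neurons)]  # (time of last read, neuron id)
        heapq.heapify(heap)
        reads, t = [], 0.0
        while t < total_ms:
            last, i = heapq.heappop(heap)
            reads.append((t, i, t - last > TAU_MS))  # True = estimate had gone stale
            heapq.heappush(heap, (t, i))
            t += READ_MS
        return reads

    # With 302 neurons at ~3 ms per read, one full sweep takes ~0.9 s, far longer
    # than the ~45 ms time constant-- which is exactly why a smarter, model-driven
    # choice of what to read next matters.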
AUDIENCE: And then conversely, how much of the state can you [INAUDIBLE]? DAVID DALRYMPLE: And it's the same. You can perturb, either stimulate or inhibit, any neuron at any time to any intensity. So you have a little bit more control there. But again, you do have to multiplex. And you have to take advantage of the fact that the neurons are not going to react as quickly as you can. AUDIENCE: And then for each one, how much of the state do you think you can support? DAVID DALRYMPLE: You're basically just looking at two numbers, but you're looking at-- you can look at those numbers anywhere in the cell. So you don't have to assume that the cell is isopotential, although it probably is for most of the neurons-- not for the ones that run the whole length of the worm, but there aren't as many of those. So you can collect, in some sense, as many numbers as you want, if you're interested in gridding a particular neuron into a lot of different locations and you don't care about looking at anything else. But you are only going to be looking at calcium or voltage, at least with the proposal that I've got here. There are other genetically encoded sensors. But as far as I know, none of them are likely to be nearly as relevant to neural activity. At the same time, again, you're not getting what's going on in the synapses. You're not getting phosphorylation. You're not getting methylation. And all of those things are certainly important for learning and plasticity. But since C. elegans doesn't have that much of it, it might be OK. AUDIENCE: So you haven't got a complete state transition diagram for the dynamics of it? DAVID DALRYMPLE: Right. Yeah, like I said, a complete transition diagram would take years to construct. And with a life cycle of four days, it's not going to work. Yeah? AUDIENCE: I just wondered about your thoughts about higher-level questions. For example, if you look at a neurology book, it will explain, on the basis of what happens when people get a concussion, that short-term memories are stored in the hippocampus or amygdala-- I forget. They can last there for 20 minutes or so. And then over the next day, a memory trace is copied into some other part of the brain, the parietal lobe, the frontal lobe, or something. And I've never seen even a paragraph about, well, how does it figure out where to put a memory? And how are memories represented? So a nice question would be-- if you take something like the idea of [? k lines, ?] the kind of methods you're describing might be nice for that, because the usefulness of the [? k line ?] might be that it's a bunch of neurons which go several centimeters. And so if you could look at neurons in 100 places a centimeter apart, which is very low resolution, then you might be able to find evidence for correlated activities related to some stimulus or whatever. So some of these techniques might work on a much larger brain, just because increasing size is liberating. It means that the interpretation can be clumsier if it's looking at whole bundles of fibers. DAVID DALRYMPLE: Right. Yeah, that's what Ed Boyden is starting to look at now. Basically, I think he's calling it optodes. I'm not sure if he coined that phrase, but-- the optical equivalent of sticking an electrode into a brain is-- AUDIENCE: Is he using humans or? DAVID DALRYMPLE: Well, he's doing-- personally, he's doing mice. And he's working with people who do monkeys. The problem with humans is-- AUDIENCE: [INAUDIBLE] DAVID DALRYMPLE: Yes. Yeah, humans have a thick skull. That's part of the issue. But the bigger issue is that the only reason that we get to stick electrodes in humans at all is because it's an approved treatment for epilepsy. Now, he's working on getting optodes approved as a treatment for, well, first for blindness, which is kind of an obvious thing. If you can use an adenovirus to express opsins in an eye that doesn't have them, that's-- and it probably doesn't affect that many people, because most blindness is caused by other things. But for those it does affect, it's a very obvious intervention.
But then also PTSD and other sorts of diseases-- so as soon as it gets approved to treat any sort of disease, then you can piggyback on that to do human research. But that hasn't happened yet. AUDIENCE: In the early '60s, there were some successful experiments. There is a guy named [? Brindley ?] who-- I'm not sure what his profession was. But he made some of the first electrodes. And he actually got permission from his secretary, who was blind, to put a little plate with 64 electrodes on her occipital [INAUDIBLE]. DAVID DALRYMPLE: Wow. AUDIENCE: --and he put little currents in. And she could recognize visual patterns. And each of these electrodes he described as producing a little bar which was about half a toothpick at arm's length. And of the 64 electrodes, about 30 of them actually produced these. And the others didn't work. And so he did that. And then he removed that, because nobody-- it was a pretty risky thing to do anyway. And at the time, we had a great neuroscientist here named Warren [INAUDIBLE]. And he got [? Brindley ?] to come over and talk to us. Incidentally, [? Brindley ?] later discovered the use of nitrous oxide for producing erections in human males. And he gave a demonstration of that in a famous lecture. [LAUGHTER] At the time, a colleague said, if you're interested in stimulating vision in the human brain, you better do it in the next five years or it will be illegal. That was in the early 1960s. And this is the same time that the worms that Brenner was-- and we actually thought about that and decided not to. But anyway, it would be nice if we could get back to that. And it might be that low-resolution things distributed very widely would also give a lot of new information. DAVID DALRYMPLE: Yeah, I think there is a lot of promise in MEG, especially because when you're just sort of putting things on the surface of the head, there aren't issues, because it's not surgery, even if you're performing the same effective perturbation to the neurons. And what people are doing with transcranial magnetic stimulation is not that great, but it's certainly promising. AUDIENCE: And there might be some way of getting things into cells that actually synthesize proteins, encoding data, and could come out in the bloodstream later. DAVID DALRYMPLE: Yeah, although there is a group working on that. They're calling it the molecular ticker tape, where you basically-- it's complicated. I don't think I can explain it properly. There are a lot of people who are looking for problems to match the solution of high-throughput sequencing, where you can take huge amounts of DNA and sequence it cheaply now. And that's one of those. But it would take pretty heroic effort to then correlate those with the actual experiment that you performed after you've extracted them. AUDIENCE: They have to say where they came from and how long they took. DAVID DALRYMPLE: Right. And I asked them, how do you identify the-- barcode the cell? And they were like, well, we don't know. We'll figure that out eventually. AUDIENCE: Danny Hillis and I once consulted for Schlumberger, who make instrumentation for oil wells. And when they have a deep oil well, there is a pipe that's a couple of miles long, believe it or not. And they get one bit out every 5 or 10 seconds by putting pressure [INAUDIBLE]. And we designed hideously elaborate things that would punch tape and then would come floating up a few days later.
It was fun going to the meetings where the geologists explained why each of them were-- DAVID DALRYMPLE: Any other questions? They don't have to be about neuroscience. AUDIENCE: Thank you very much. [INAUDIBLE] AUDIENCE: You focused on the experimental side, reading out from the worm. At some point, you want all of that data to drive a computational model. Can you build a computational model now and initialize it in various ways and look for any kind of behavior at all? Or do you absolutely need this biological input to begin to drive a computational system? DAVID DALRYMPLE: I think-- I am working on some leads for building computational models. The obvious things to do have already sort of been done in that department. Like I said, they're using genetic algorithms to find parameters that satisfy certain conditions. And it doesn't seem that enlightening, so I haven't pursued it too much. But I'm looking at-- and this is actually another really interesting connection, to me, in terms of AI and neuroscience-- when I'm looking at building the computational model that tries to interpret this data, the key idea that keeps coming back is critics and selectors, because you need to have some set of possibilities. And you need to have some heuristics for determining, based on what data is streaming in, what sort of model seems to apply to it. And then you need to have a meta-model where you need to build layers of reflection, where you're saying, well, we need to modify this in this way. And it's not easy to implement that from scratch. So I'm thinking about it. But I think that once we have real data, it'll be more clear what needs to be done to simulate the processes underlying it. And again, it's just sort of my hope that, when observing something that has not been observed before, some insight will come out of that process. AUDIENCE: Is there a behavioral diagram somewhere? For example, if you feed it a lot of food, presumably it will stop eating. DAVID DALRYMPLE: Right. Yeah, no, there-- I don't remember the name of the fellow you mentioned who spent years studying seagulls. AUDIENCE: Tinbergen. DAVID DALRYMPLE: Tinbergen, right-- there isn't a Tinbergen of worms, unfortunately. All of the studies are-- again, it's this very stamp-collecty approach saying, OK, at 24.6 degrees Celsius, with OP50 growth media, and with worms that are three days into their life, and this number of worms per this area of plate, with this brand of agar, here are the behaviors that we see in response to this stimulus with this number of milliseconds between repetitions, and so on. And no one goes so far as to make a plot with more than one variable, because goodness, what's your control? And it is a little bit frustrating that we're going to have to build those sorts of things ourselves. But hopefully that will be less hard than building the model.
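[Editor's sketch: a minimal, hypothetical Python shape for the critics-and-selectors idea described above-- critics score how well each candidate model explains the incoming data, and a selector picks the model to apply next. The model objects and scoring rule are placeholders, and the reflective layer is only gestured at in a comment.]

    def select_model(models, critics, data):
        # models: candidate model objects; critics: functions scoring (model, data)
        def score(model):
            # Summing critic scores is the simplest possible meta-judgment.
            return sum(critic(model, data) for critic in critics)
        return max(models, key=score)

    # A reflective (meta-model) layer would watch these scores over time and,
    # when they all stay low, propose modifying the model family itself.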
Yeah? AUDIENCE: Could you just make some thoughts about the way mathematics might relate to-- I mean, you said you were interested in that subject. So I just-- DAVID DALRYMPLE: Sure, might relate to? AUDIENCE: Well, some of the theories of simple animals. DAVID DALRYMPLE: As I said, I think there is something missing in mathematics. There is some sort of theory that follows from some sort of symmetry that shows up in nervous systems and not in many other places, and maybe shows up in societies as well, but I'm not certain of that. But as far as math that we know, certainly nonlinear dynamical systems is the obvious one, because a neuron is a nonlinear dynamical system. And I think that in simple animals, and especially in C. elegans, most of the computations that we see are also nonlinear dynamical systems. As I said, they're integrators or they're derivators. They're things of that nature. And I think a lot of them will turn out to be amenable to analysis as certain differential equation systems. But at the same time, I think figuring out certainly more complex organisms will require a new way of thinking about how to put together the computation. Von Neumann in 1956 started working on a monograph to accompany a series of lectures called "The Computer and the Brain," in which he would discuss the differences and similarities, and how he thought the ideas from computer science would relate to neuroscience. And this was 1956. And unfortunately, he got bone cancer. And he died in 1957, and left the document unfinished, and never gave the lectures. And the document concludes very dramatically with this sentence, "However, if the brain uses any sort of mathematics, the language of that mathematics must certainly be different from that which we explicitly and consciously refer to by that name today." And that's where it ends. Yeah? AUDIENCE: Do you have an idol? DAVID DALRYMPLE: What? AUDIENCE: Do you have an idol, like, fictional or non-fictional? DAVID DALRYMPLE: Von Neumann would be the closest, yeah. I mean, Iron Man, but that's just obvious. AUDIENCE: So you're over Edison? DAVID DALRYMPLE: What? AUDIENCE: You're over Edison? DAVID DALRYMPLE: Over Edison-- no, Edison is pretty cool too. AUDIENCE: Von Neumann was my hero, because when I finished my thesis in that department, it was on neural networks. And the Math Department didn't know what to make of it. So then came Von Neumann and-- did I tell you this story? DAVID DALRYMPLE: No. AUDIENCE: They said, is this mathematics? And he said, if it isn't now, it soon will be. And I got my PhD. AUDIENCE: Can you talk about any of the projects before yours that people have already [INAUDIBLE]? DAVID DALRYMPLE: Yeah, so let's see. In terms of optogenetics, I don't know. I think probably the most famous one, just because it has a really cool movie, is that someone found a promoter for a class of neurons in mice that is coupled to right turns. And so you can put the mouse in some environment. You know, it behaves. You turn on the light. And it starts turning to the right no matter what it's doing. It just turns around and starts going in circles. And you turn off the light and it goes back to what it was doing. It's mostly cool stuff like that. But it's also found use as a replacement for electrophysiology. You know, anything that you could do by using micropipettes or microelectrodes-- well, to some extent, depending on what kind of time resolution you need or what kind of manipulations you want to perform-- but a lot of the things that you previously would need very precise and expensive equipment and calibration for, you can now do much more simply by using genetics and a blue LED.
So there is a lot of things like, for instance, even in C. elegans, a lot of work has been done by Cori Bargmann in recent years using calcium imaging to just sort of explore more quickly and more thoroughly the dynamics of sensory neurons, which is important to me, because if I'm going to simulate the sensory neurons in some pattern that reflects a virtual reality, then I need to know how that pattern would relate to the real reality that they're observing. And so Cori Bargmann has done a lot of experiments where she just sort of flows in an odorant, for instance. And she's got a laser pointed at the neuron that's known to sense that odorant, and she's just characterizing the dynamics of how the neuron responds and habituates, which is something that you could do with a micropipette. But the worm is so small that you need-- it's just really hard to get something in there onto the specific neuron that you want and have it stick. And you know, it's been done, but it's only been done a few times by people who are very skilled and very lucky. And these types of optical tools make it a lot easier to do those sorts of experiments, if nothing else. AUDIENCE: This is a dumb question, I think. So you can't change anything about the worm once it's already in there? Like, once it's already under the lasers, you can't add more stuff as-- but why would you want to? DAVID DALRYMPLE: Well, I mean, there are reasons that you might want to. A lot of experiments are done-- it's actually really creative what a lot of cellular neuroscientists do, because they realize that once you have something stuck into the cell body of a neuron, you can introduce whatever kinds of molecules you want. And there are certain molecules that are selective blockers of certain channels. And so you can do things like, OK, what if you don't have any potassium channels? What now? And you can do those sorts of perturbations when you have a physical connection to the cytosol, which you can't do optically. So those, again, are things that I hope to show, but it is not known that you don't need to do them in order to model behavior at the scale of the organism. Yeah? AUDIENCE: [INAUDIBLE] like chemical triggers that are controlled by physical [INAUDIBLE]? Like in humans, there are different glands. DAVID DALRYMPLE: Yeah, so there aren't really glands in C. elegans. There isn't a circulatory system either, but there is sort of like a shared body of fluid through which waste is channeled, and it contacts a lot of cells. And there is some evidence that there are a few neurons that do diffuse transmitter into that, thus effecting a sort of global change in excitability, so yes. The way that that manifests in a model is basically just as an extra node saying, this represents the sort of global concentration of glutamate or whatever. AUDIENCE: Thank you. DAVID DALRYMPLE: Thank you. MARVIN MINSKY: What's the simplest animal that does a little bit of learning? DAVID DALRYMPLE: C. elegans itself does a little bit. I mean, it's really just associative learning. So it can learn aversions or attractions to temperature or to chemical stimuli [INAUDIBLE]. But it's probably just one synapse that represents an aversion to this thing, and that synapse changes in strength or something like that. As far as I know-- and I'm not a zoologist, so I really don't know. But as far as the set of classic neuroscientific model organisms, I think zebrafish are the simplest that show sort of abstract, anything resembling abstract learning.
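[Editor's sketch: a hypothetical toy model of the response-and-habituation dynamics described in the Bargmann experiments above-- a first-order response to odorant concentration whose gain decays under sustained stimulation and recovers at rest. Both time constants are invented for illustration.]

    def habituating_response(stimulus, dt=0.01, tau_r=0.2, tau_h=5.0):
        # stimulus: odorant concentration sampled every dt seconds
        r, gain, out = 0.0, 1.0, []
        for s in stimulus:
            r += dt / tau_r * (gain * s - r)  # fast response toward gain * stimulus
            target = 1.0 if s == 0 else 0.0   # gain decays during stimulation, recovers at rest
            gain += dt / tau_h * (target - gain)
            out.append(r)
        return out

    # Usage: habituating_response([1.0] * 1000) shows the response rising, then
    # sagging as the gain habituates, even though the odorant stays constant.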
MARVIN MINSKY: I just realized that I think probably we all know something about our ancestry, that is, the sort of 100 million years of being bacteria and things like that. And if you go backwards, there have been mammals for about 100 million years, I think. There is 100 million years of fish, and 100 million years of amphibians, and 100 million years of-- those are all vertebrates-- of mammals, of reptiles, and so forth. And I don't know what happens in the early period, except that we're descended from yeast somehow. And so it'd be nice to know, what are the first few steps up to the worm? And where did it branch? And are we in that lineage? Or did that lead off to the coelenterates and other things that we don't have any horizontal relation to? Anybody know? DAVID DALRYMPLE: I'm pretty sure that we're not descended from C. elegans. I think that it's a separate branch from vertebrates. MARVIN MINSKY: But there must have been something like a paramecium. DAVID DALRYMPLE: Right. So there is actually really interesting work in looking at yeast that flock. In typical, sort of wild-type yeast, when it reproduces, the daughter cell sort of diffuses away. You can get yeast to adhere to itself. So as it reproduces, it forms these globs. And it's actually been shown that, under certain environmental conditions that are not that implausible, those globs have more Darwinian fitness than individuals. And so there is a hypothesis that that's how yeast started to become multicellular. MARVIN MINSKY: So they must have some extra genes that are not activated normally. I wonder why it isn't taught in grade school? Is that because evolution is not allowed in-- but I came from New York. There weren't many anti-evolutionists yet. Maybe there were. Well, I'm impressed. I think that sounds like a very exciting adventure. DAVID DALRYMPLE: Thanks. MARVIN MINSKY: Do any of you have a plan to pursue AI or psychology? Who has a career plan? AUDIENCE: Were your plans [INAUDIBLE]? MARVIN MINSKY: I don't remember ever having one. There was just something exciting to do next week. Anyone have a criticism? Should David actually do this? AUDIENCE: Well, biology is slow. And computers are fast. And they get faster. I think that if someone were to put as much work as David is putting into the biological aspect of this to try to brute force model the worm faster than he could and try to match it up with the behaviors, they would match up those 30 behaviors faster than he would. DAVID DALRYMPLE: I would love the competition, but I would just like to point out that lasers are also fast. MARVIN MINSKY: Well, there is something wonderful about an animal that can reproduce in four days, because you could actually, as part of your four-year plan, you could actually plan to breed some that have some particular new neurological behavior on the side. AUDIENCE: Is there any kind of social behavior that's documented [INAUDIBLE]? Or is there a small animal with social behaviors that you could study in your system? Like interaction between-- DAVID DALRYMPLE: Yeah, I know what you mean. And C. elegans doesn't. Well, actually, it's kind of funny, because the hermaphrodite doesn't have any social behavior, but the male does, for obvious reasons. There are, in fact, 70 extra neurons in the male for the purposes of finding a hermaphrodite to impregnate. But I don't know-- again, not being a zoologist, I walked into this because it's well-studied. And it's the very simplest.
But questions in the form of what's the simplest under constraint X, I'm less well-equipped to answer. As far as I know, the best thing in that domain would be ants, but I'm sure there is something simpler than ants that exhibits social behavior. I just don't know what. MARVIN MINSKY: Well, isn't there some yeast that forms-- in some stage, it actually forms a sort of tower and stands up? I think it's yeast. DAVID DALRYMPLE: Well, yeast don't have neurons, so these sorts of techniques won't apply there. AUDIENCE: Slime mold, perhaps? MARVIN MINSKY: Yeah, maybe that's-- DAVID DALRYMPLE: [INAUDIBLE]. Yeah, slime molds are interesting, because they're sort of all neuron, in a way. They're just clumps of cells that happen to have electrical activity. There might be something interesting there. I don't know if anyone's tried to express optogenetic channels in slime molds. MARVIN MINSKY: They're very small. DAVID DALRYMPLE: They are very small, but you could do it with a virus maybe. MARVIN MINSKY: Well, what about at a higher level? How would you find something like [? k lines? ?] DAVID DALRYMPLE: I think that you need something, some way of seeing a lot more things in a lot bigger brain. The one thing that comes to mind as sort of a new technique that might turn up [? k lines ?] is diffusion tensor imaging, which is a way of using MRI to find quite detailed structure. And I don't know the physics of it, but it involves following water molecules as they diffuse, or tracking their flow. MARVIN MINSKY: Oh, so if something is more active, the diffusion is faster. DAVID DALRYMPLE: No, it's not functional. It's structural. It's that if there is a bundle of neurons, then the diffusion will be highly anisotropic. And you can measure the anisotropy. And then you can use some tensor math to turn that into what's called a tractogram. MARVIN MINSKY: But you're measuring the heat flow or something? DAVID DALRYMPLE: I mean, it's magnetic. It's MRI. It's magnetic resonance imaging. So it has some sort of tomographic component. And again, I don't know the physics of it. I just, in fact, heard of it a few weeks ago. It's a new technique. But it's been used to create some maps of the-- particularly, I think, in monkeys it's being used a lot to create maps of at least the long-range connections between a whole lot of different areas. But even those are-- it really can only resolve thick bundles of neurons. And [? k lines ?] probably aren't, but maybe they are [INAUDIBLE]. MARVIN MINSKY: I wonder how thin the skull of a parrot is. I just mention it because it might be the smartest animal per gram or something. Well, how smart are mice? DAVID DALRYMPLE: Mice are pretty smart. AUDIENCE: Octopodes are supposed to be smarter too. And I don't know how much they weigh. MARVIN MINSKY: Which thing? AUDIENCE: Octopodes. DAVID DALRYMPLE: I think they're pretty heavy. AUDIENCE: Octopodes are heavy? That makes sense. They're in water, so they're dense. DAVID DALRYMPLE: I think you'd probably find ants to be the smartest per gram, but again, that's just my guess based on what little I know of animals. AUDIENCE: Parrots are always trying to optimize for weight. And ants aren't. MARVIN MINSKY: Ants are pretty small. Well, they're very variable. I think Ed Wilson had a 26-year-old ant. DAVID DALRYMPLE: Wow. AUDIENCE: It would be glorious to see a large group of ants acting as a parrot.
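[Editor's sketch: a hypothetical illustration of the anisotropy measure behind the diffusion tensor imaging and tractogram exchange above-- fit a 3x3 diffusion tensor per voxel, then compute fractional anisotropy (FA) from its eigenvalues; FA near 0 means isotropic diffusion, FA near 1 means diffusion channeled along a fiber bundle.]

    import numpy as np

    def fractional_anisotropy(D):
        # D: symmetric 3x3 diffusion tensor estimated for one voxel
        lam = np.linalg.eigvalsh(D)
        mean = lam.mean()
        num = np.sqrt(((lam - mean) ** 2).sum())
        den = np.sqrt((lam ** 2).sum())
        return np.sqrt(1.5) * num / den if den > 0 else 0.0

    # Tractography then follows the tensor's principal eigenvector from voxel
    # to voxel wherever FA is high enough to trust the fiber direction.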
AUDIENCE: So I just have a quick question for David. You said, well, there is an experiment that sort of showed that you can control mice by shining lights on them. Do you think that there is any fear of the possibility that someone could create a virus that affects all humans and then controls humans to do more than just turn right, by shining lasers onto them from space? DAVID DALRYMPLE: So [INAUDIBLE] AUDIENCE: Fear or hope? DAVID DALRYMPLE: What? AUDIENCE: Fear or hope? DAVID DALRYMPLE: Yeah, technology is a double-edged sword. That was fucked up. I mean, human skulls are pretty thick. And even mice skulls are pretty thick. In order to make this happen, you have to have a hole in the skull. And you have to mount your LED in that hole. So there isn't so far any way to do this from a distance without having some sort of physical surgical operation. AUDIENCE: Space is also far away. DAVID DALRYMPLE: Right. If you're inside a building, it probably won't work. And most important people are inside buildings when they're making important decisions. AUDIENCE: [INAUDIBLE] turn right. AUDIENCE: [INAUDIBLE]? AUDIENCE: Yeah. DAVID DALRYMPLE: Yeah, I mean, so right now there isn't any fear of that. But it's certainly something to think about, you know, as technology gets better. You never know when we might cross that threshold. But I think for the next 10 years or so, we're probably pretty safe. MARVIN MINSKY: Well, there are microscopic parasitic worms. And so it would be hard to direct them to go anywhere particular. But you could certainly evolve some that go into the brain and go somewhere without destroying anything important and drop little packages here and there. DAVID DALRYMPLE: There are actually none that infect humans-- actually, I think there is one that infects humans that has a more subtle effect. But there is actually a whole class of parasites which do locate to the brain of larger animals and do in fact cause them to engage in behaviors that are suicidal for the host but beneficial for the parasite. MARVIN MINSKY: That's right. AUDIENCE: There is? MARVIN MINSKY: What's the one that causes some insect to climb up to the top of a tree and-- DAVID DALRYMPLE: I don't remember the name of it, but I know that it exists. MARVIN MINSKY: It gets in the brain, and makes it climb the tree, and then jump off or whatever. And that spreads this particular parasite. DAVID DALRYMPLE: Right. AUDIENCE: There is [INAUDIBLE]. MARVIN MINSKY: So don't go outdoors. AUDIENCE: There is also-- what is it-- Toxoplasma gondii, which was on Cracked.com, which causes mice to enjoy the smell of cat urine. DAVID DALRYMPLE: Right. AUDIENCE: Yeah, that's probably one of the ones you're thinking of. So it'd be interesting to develop something like that for humans. AUDIENCE: I think people can actually get it from cats or something, and it makes you-- AUDIENCE: Yeah, humans carry it. And it does-- DAVID DALRYMPLE: I think that's the one I was thinking of. It has a subtle effect on humans. AUDIENCE: It's pretty benign on humans. AUDIENCE: It's [INAUDIBLE]. AUDIENCE: Like it makes girls more flirtatious or something? AUDIENCE: Yeah, I think that's it. So [INAUDIBLE]. MARVIN MINSKY: Well, these are all instances of the future dangers of getting better scientific techniques for high school students to do experiments with. AUDIENCE: There is at least a way to control people. Just throw money at them. [INAUDIBLE] you want. MARVIN MINSKY: Just throw what? AUDIENCE: Money. DAVID DALRYMPLE: Throw money at them. AUDIENCE: And they will do whatever you want.
MARVIN MINSKY: Oh yeah. DAVID DALRYMPLE: Yeah, actually, that's one path that I forgot about that some people do actually pursue. You can just reason directly from what makes people the most money. And that gets you something that makes a lot of money. It's kind of like a person, I guess. AUDIENCE: I think there are mutations in people that are not affected by money. So it would be interesting to try to develop a virus to counteract money. DAVID DALRYMPLE: Yeah, there are certainly-- AUDIENCE: We can treat it. DAVID DALRYMPLE: Right, we can treat alcoholism with disulfiram. So maybe there is some way that we can treat avarice. AUDIENCE: Won't that lead to the downfall of our economy? DAVID DALRYMPLE: Yup. [LAUGHTER] MARVIN MINSKY: What's this new alcoholism treatment? DAVID DALRYMPLE: Oh, it's not that new, but it's a drug that makes alcohol incredibly repugnant. And so it's sometimes given to people with severe alcoholism so that they are repulsed by alcohol and don't drink it anymore, as long as they continue taking the medication. MARVIN MINSKY: There is something like that in my family, because if I drink alcohol more than a small amount, then these little things like ants start crawling on my face. And they're very unpleasant. Any of you have that? AUDIENCE: I do. A lot of Asians have it, actually. MARVIN MINSKY: Really? AUDIENCE: Or maybe it's not-- the one that I have is, like, you lack alcohol dehydrogenase, the thing that breaks down alcohol. And you have two copies of the gene. And I have, like, one good copy and one bad copy, depending on which you find is good or bad. And so I can drink like a glass. But any more than that, I kind of get really itchy and red. MARVIN MINSKY: Yeah, so I don't have any-- AUDIENCE: But if people have two copies where they lack the enzyme, then they have [INAUDIBLE]-- it's like bad things happen. MARVIN MINSKY: Oh. Well, evolution produces all sorts of strange things. AUDIENCE: Neutral drift racing is still illegal. DAVID DALRYMPLE: What? AUDIENCE: Neutral drift racing is still illegal. DAVID DALRYMPLE: What's that? AUDIENCE: It's when an organism undergoes neutral drift; it's two organisms competing to not change. MARVIN MINSKY: Yes, why hasn't the net been destroyed by a virus by now? Is there any-- AUDIENCE: Why hasn't what been destroyed? DAVID DALRYMPLE: The internet. MARVIN MINSKY: The internet. AUDIENCE: Oh, because it has white blood cells. The people, the system administrators, act as white blood cells against the attacks. DAVID DALRYMPLE: Kaspersky. AUDIENCE: And it is largely-- DAVID DALRYMPLE: [INAUDIBLE] is the reason the internet is still here. AUDIENCE: Well, it is largely immunized. But it's still infected by parasites. Like, there are botnets, the size of which we can only estimate, that are sort of parasites running on the internet, interesting [INAUDIBLE]. MARVIN MINSKY: Yeah, I just find it surprising that there hasn't been a really large disaster yet, because-- AUDIENCE: Well, it's partially because it's not Xanadu, right? It's not Ted Nelson's super-centralized internet. It's a distributed, redundant-- AUDIENCE: There have been a few attacks that took down large parts of the internet, but nothing quite [INAUDIBLE]. MARVIN MINSKY: Say it again? AUDIENCE: There were a few attacks that took down large chunks of the internet in the past. DAVID DALRYMPLE: Yes, I think Croatia's internet was taken down by a woman with a spade who was trying to mine some copper. Oh, this looks like a lot of copper, right?
MARVIN MINSKY: Well, I had a Microsoft virus for years, but it never did any harm. It just, if I-- I forget what it was called. If I got rid of it, it would come back again. AUDIENCE: People in Croatia still had internet access, though. They could connect via satellite. DAVID DALRYMPLE: This is true. AUDIENCE: So the internet now has sort of enough redundant methods of making connections. DAVID DALRYMPLE: Not quite. People who have satellite links rarely share them. AUDIENCE: Yeah. I guess the thing is that the initial statement is, why hasn't the internet gone down, where the internet is sort of defined as whatever is still connected to anything else, because the internet is defined by its connections. MARVIN MINSKY: Well, why hasn't there been a smallpox epidemic that killed everyone? Because that has happened for quite a few species. AUDIENCE: But all the species currently alive have not been killed by a smallpox epidemic. MARVIN MINSKY: Right, that is correct. DAVID DALRYMPLE: Ah, the anthropic argument, always correct and never quite satisfying. MARVIN MINSKY: I heard an hour-long program this morning about-- what's his name on WBUR-- about making high-speed trains in California. And it was all very interesting and incredibly expensive. And the current plan is to make one that'll take 30 years to construct, which seems rather odd, because you can't expect any particular government, including California's, to be stable. But I wonder what the point is-- are people really going to travel at great expense when they could have telepresence? DAVID DALRYMPLE: Yeah, moving mass around is kind of a ridiculous way to transfer the information that's inside your brain [INAUDIBLE]. MARVIN MINSKY: And at some point, we might just say, shouldn't we ban international travel just because of the danger of a plague? And I think the danger of a plague is going to suddenly increase because of high school students doing science fair projects. Because the nice thing about evolution is that, contrary to some beliefs, it doesn't have any intentional agents directing it. But once you can make gene strings in high school, then Darwinian evolution becomes a minority. DAVID DALRYMPLE: I assume you've heard the recent news of the Dutch biomedical engineers who produced a version of H5N1 with 60% mortality. MARVIN MINSKY: Oh. And where does he keep it? DAVID DALRYMPLE: In his basement. AUDIENCE: Wait, why did he do this? DAVID DALRYMPLE: For science! AUDIENCE: How do you know the human mortality? MARVIN MINSKY: Did you make that up? DAVID DALRYMPLE: Well, I assume he did it for science. I made up the tone of voice. MARVIN MINSKY: Oh. AUDIENCE: Was this a military-funded project? DAVID DALRYMPLE: No. This was at a hospital, at a hospital research institute. I mean, presumably it was funded by someone who is interested in making a cure for H5N1. And you know, they're trying to make it more obvious when you have one in your test population of chinchillas or whatever their model organism is. AUDIENCE: Ferrets. DAVID DALRYMPLE: Ferrets, that's right, close enough. AUDIENCE: [INAUDIBLE]. MARVIN MINSKY: OK, well, next time, bring some questions. Thank you. DAVID DALRYMPLE: Thank you.
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
1_Introduction_to_The_Society_of_Mind.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So what I'm going to do in this course is discuss mostly ideas that are already in the book called "The Emotion Machine"-- I'm sorry, I used that title-- and the older book, called "The Society of Mind." The books are not quite the same. They overlap a bit in material, but they're sort of complementary. I like the old one better, because the chapters are all one page long. And they're moderately independent. So if you don't like one, you can skip it. The new book is much denser, and it has a smaller number of long chapters. And I think it's-- over the years, I got lots of reactions from young people in high school, for example, almost all of whom liked "The Society of Mind," and found it easy to read, and seemed to understand it. There were lots of criticisms by older people, some of whom maybe found it harder to put so many fragments together. Who knows? But most of this class, most of the things I'd like to say, are in those books. So it's really like a big seminar, and my hope is that everyone who comes to this class will have a couple of questions that they'd like to discuss. And if I can't answer them, maybe some others of you can. So I like to think of this as a super seminar, and normally, I don't prepare lectures. And I just start off asking if there are any questions. And if there are not, I get really pissed off. But anyway, I'm going to start with a series of slides. So why do we need machines? Partly because there are a lot of problems. Unlike most species or kinds of animals, humans have only been around a few million years. And they're very clever compared to other animals, but it's not clear how long they will last. And when we go, we might take all the others with us. So there's a whole set of serious problems that are arising, because there are so many humans. And here's just a little list of things. There's a better list in a book by the Astronomer Royal Martin Rees of England. Anybody know the title? AUDIENCE: "Our Final Hour." PROFESSOR: Yes, "Our Final Hour." It's a slightly scary title. And when I was a teenager, World War II came to an end with the dropping of two atomic-- oh, this is getting terrible. --two atomic bombs on Japan. And I didn't believe the first one was real, because it was in Hiroshima. So I assumed that the US had somehow made a big underwater tanker with 20,000 tons of TNT, and some few grams of radium or something, and blown it up in the harbor. And first, it flew an airplane over, dropping some little thing. And this was to fool the Japanese into thinking that we had an atomic bomb. But when they did it again, over Nagasaki, that wasn't feasible. And when I was in grade school, sometimes, if I said something very bright, I would hear a teacher saying, maybe he's another J. Robert Oppenheimer. Because that was the name of a scientist who had been head of the Manhattan Project, and he was, I think, three or four years earlier in grade school than I was. And I thought it was very strange for a person to have a first name just being a letter rather than a name. Many years later, when I was at Princeton in graduate school, I met the Robert Oppenheimer, and that was a great pleasure.
And in fact, he took me to lunch with a couple of other people I admired, namely, Gödel and Einstein, which was very exciting. Except I couldn't understand Einstein, because I wasn't used to people with a strong German accent. But I understood Gödel just fine. And after that lunch was over, I went and spent about a year learning about Turing machines, and trying to prove theorems about them, and so forth. So anyway, in the course of these talks, we'll run across a few of these people. And here's a big list of the people that I'm mostly indebted to for the ideas in "The Society of Mind" and "The Emotion Machine." The ones in blue are people I've actually met. It would be nice to have met Aristotle, because no one really knows much about him. But you really should read, just skim through, some of that, and you'll find that this is a really smart guy. We don't know if he wrote this stuff, or if it was compiled by his students, like a lot of Feynman's writing and Von Neumann's writing is, edited from notes by their students. Anyway, the astonishing thing about Aristotle is that he seems to be slightly more imaginative than most cognitive scientists you'll run into in the present day. It would have been nice to know Spinoza, and Kant, and the others also. Freud wrote 30 or 40 books. Did he fall off this list? There he is. I just made this list the other day, and I was looking up these people to find their birthdays and stuff. Yes? AUDIENCE: Why are there no Eastern philosophers in this history? PROFESSOR: Because they're religious, as far as I can see. AUDIENCE: Religious? PROFESSOR: Well, who would you-- would you say, Buddha? AUDIENCE: No, I mean just Eastern thinkers generally. PROFESSOR: Name one. Maybe I've never heard of them. AUDIENCE: Confucius. PROFESSOR: Who? AUDIENCE: Confucius. PROFESSOR: Confucius? AUDIENCE: Or [INAUDIBLE] great thinkers from China. PROFESSOR: Well, I only know of them through aphorisms, single proverbs, but I don't know that Confucius had a theory of thinking. You think he did? AUDIENCE: There are a lot of different schools of thought, and I think they probably do have [INAUDIBLE]. PROFESSOR: Well, I've looked at Buddhist theories, and they're-- I don't think they would get a C plus. And one problem is that there are cultures-- there's something about Greek culture, because it had science. It had experiments. Somebody has a theory, and they say-- like Epimenides or Lucretius. Somewhere in "The Society of Mind," I think I quoted Lucretius about translucent objects. And he says, they have that particular appearance because the rays of light bounce many times before they get to the surface. So you can't tell where they started. And I don't find in Eastern philosophy theories that say, here's what I think, and here's a reason why. I've looked at Buddhist stuff, and it's strange lists of psychological principles, every one of which looks pretty wrong, and they make nice two-dimensional diagrams. But no evidence for any of them, so I don't know whether to take it seriously. AUDIENCE: I think knowledge is from observation. I think you're right that some of them probably didn't really test it, because a lot of the ideology cannot be tested. On the other hand, there are scientists-- PROFESSOR: But what can't be tested? AUDIENCE: I mean, some of the ideologies, probably. PROFESSOR: If they can't be tested, why should one look at them twice? AUDIENCE: Test it in terms of something logical. I don't know. Like culture-- can you test culture? PROFESSOR: OK, I think this is a serious argument.
It seems to me that science began a little bit in China, a little bit in India. In the Arabic world, they got up to the middle of high school algebra, but then-- AUDIENCE: That's the foundation. PROFESSOR: What? AUDIENCE: That's the foundation. PROFESSOR: Well, but it wasn't as good as Archimedes, who got to the beginning of calculus. So if you look at most cultures, they never got to the critical point of getting theories, doing experiments, discussing them, and then throwing them out. And so if you look at Buddhist philosophy, it's 2,500 years old. If you look at Greek physics, yes, Archimedes almost got calculus, and he got lots of nice principles. And Buddha mentions, at some point, if you want to weigh an elephant, put him in a boat. And then take the elephant out and put rocks in, until the boat sinks to the same level. So there, you see a good idea. But if you look at the history of the culture, if people still say, this thousand-year-old stuff is good, then you should say, no, it's not. AUDIENCE: By the way, same story about the elephants. There's a story in Chinese history that has the same idea. PROFESSOR: Sure. AUDIENCE: I mean, maybe there's no one person that [INAUDIBLE]. PROFESSOR: No, but the question is, why did it stop? Why did it stop? Ancient wisdom is generally not very good, and we shouldn't respect it for too long. And that's-- AUDIENCE: [INAUDIBLE] past where everybody's standing on the giants' shoulders, right? PROFESSOR: No, we got rid of alchemy. We got rid of-- what do you call it? --caloric. You jump off their shoulders. You don't stay on them, so it's good to know history. But if the history doesn't get anywhere, then you don't want to admire it too much. Because you have to ask, why did it stop? What went wrong? And usually, it went wrong because barbarians came in and-- well, you know what happened to Archimedes. Some Roman killed him. Anyways-- AUDIENCE: [INAUDIBLE]. I'm sorry. PROFESSOR: No, it's a good question. Why didn't science happen a million years ago? Because humans are five million years old, so what took it so long? AUDIENCE: [INAUDIBLE] PROFESSOR: No, it's more-- AUDIENCE: [INAUDIBLE] PROFESSOR: Sure, OK. Do you have a theory of why science didn't develop for so long? In most cultures, it might be religion, which is a sort of science that doesn't use evidence, and in fact, kills people who try to get it. So there are systematic reasons why most cultures failed, and maybe somebody has written it up. Is there a book on why science disappeared, except once? It's rather remarkable. Isn't it? After all, the idea-- if somebody says something, and somebody else says, OK, let's do an experiment to see if that's right-- you don't have to be very bright. So how come it didn't happen all the time everywhere? Here he is. AUDIENCE: I don't know the answer to that, but I know Paul Davies has sort of an anecdote about that. He's basically speculating that even in Europe, where it did happen, it was a fluke. And he gives the example of: suppose an asteroid or a comet crashed in Paris in-- I forget what year he gives-- 1150, or 1200, or something. Then what? Whether science [INAUDIBLE]-- as a thought problem. PROFESSOR: History is full of flukes. I'm trying to remember who wrote that nice book about the plague, some woman. And she mentions that this was spread by rats and fleas or something. And 30% or 40% of the population of many countries in Europe died, and the next generation had a lot of furniture.
The standard of living went way up. So anyway, here's a list of disasters. Oh, come on. And Martin Rees is the Astronomer Royal and has that book about the last hour or whatever. I'm making another, longer list. But he has lots of obvious disasters, like: the genetic sequence for the smallpox virus has been published, and now some high school student can write a list of nucleotides and send it somewhere. And they'll make it for about $0.50 or $1 per nucleotide. So for a couple of hundred dollars, or a few hundred, you can make a virus. So one possibility is that some high school student makes some smallpox, only gets it wrong, and it kills everyone. So there are lots of disasters like that, and no one knows what to do about that. Because the DNA synthesis machinery is becoming less and less expensive, and probably the average rich private high school could afford one. So there are lots of other things that could happen. But one particular one is this graph, which I just made up. An interesting fact is that since 1950, when the first antibiotics started to appear-- as I mentioned, I was a kid in the 1940s. And penicillin had just hit the stands, and there wasn't much of it. And there was a researcher who lived a few blocks from us whose dog had cancer. So its father-- I don't know what you call the owner of a dog --sneaked some penicillin out of the lab and gave it to the dog, who died anyway. But he said, well, nobody's tried penicillin on cancer yet. Maybe it will work. And a lot of people were mad at him, because he probably cost some human its life. But he said, he might have saved a billion humans their lives. So, ethics. Ethicists are people who give reasons not to do things, and I'm not saying they're wrong. But it's a funny job. Anyway, since that sort of thing happened and medicine began to advance, people have been living one year longer every 12. So it's 60 years since 1950, so that's five of those spans. So they're living six or seven years longer now than they were when I was born. And somebody mentioned that curve stopped the last few years for other reasons, but anyway, if you extrapolate, you'll find that the lifespan is going to keep increasing. How much, we don't know. Another problem is that you might discover enough about genetics to get rid of most of the serious diseases. Maybe just 20 or 30 genes are responsible for most deaths right now. And if you could fix those-- which we can't do yet; there's no way to change a gene in a person, because invading all the cells is a pretty massive intervention, but we'll get around that. And then it might be that people suddenly start living 200 or 300 years. Now at some point, the population has to slow down. So you can only reach equilibrium with one child per family, and probably less than that. So all the work has to be done by 200- or 300-year-olds, and let's hope they're good and healthy. So anyway, I think it's very important that we get smart robots, because we're going to have to stem the population. And I hope people will live longer and blah, blah, blah. So these robots have to be smart enough to replace most people. And how do you make something smart? Well, artificial intelligence is the field whose goal has been to make machines that do things that we regard as smart, or intelligent, or whatever you want to call it. And the idea of seriously making machines smart has roots that go back to a few pioneers, like Leibniz, who wrote about automata and that sort of thing.
But the idea of a general purpose computer didn't appear till the 1930s and '40s, in some sense. The first form of the general purpose computer appears really in the 1920s and '30s with the work of a mathematician, Emil Post at NYU, whom I happened never to meet. But we had some friends in common, and he had the idea of production rules-- basically, rule-based systems --and proved various theorems about them. Then Kurt Gödel showed that, if you had something like a computer, or a procedure that had the right kinds of rules, it could compute all sorts of things. But there were some things it couldn't compute-- unsolvable problems --and that became an exciting branch of mathematics. And the star thinker in that field was Alan Turing, who invented a very simple kind of universal general purpose computer. Instead of a random access memory, it just had a tape, which it could write on, and read, and change symbols. And it would go back and forth. And if it's in state x at, say, symbol y, it will print symbol z over the y and move to the left or right-- just a bunch of rules like that, and it was enough to make a universal computer. So from about 1936, it was sort of clear to a large mathematical community that these were great things. And a couple of general-purpose-like computers, very simple ones, were built in the 1930s, and more in the 1940s. And in the 1950s, big companies started to make big computers, which were rooms full of equipment. But as you know, most programs could only do some particular thing, and none of them were very smart. Whereas a human can handle lots of kinds of situations. And if you have one that you've never seen before, there's a good chance you'll think of a new way to deal with that and so forth. So how do you make a machine that doesn't get stuck almost all the time? And I like to use the word resourcefulness. Although I left an R out of that one. Is there a shorter word? So here's a good example-- my favorite example of a situation where a person is born, more or less, with a dozen different ways of dealing with something. And the problem that I imagine you're dealing with is this: I'm thirsty, so I see that glass of water. And I do that and get it. Actually, I am. On the other hand, if I were here, I would never in a whole lifetime do this. You never walk out a window by mistake. We're incredibly reliable. So how do I know how far it is? And that slide shows you 12 different ways that your vision system-- that's only your vision system --has to measure distances. So gradients: if things are sort of blurry, then they must be pretty far away. That's sort of on a foggy day outside. Here's a situation: if you assume those are both chairs of the same size, then you know that this chair is about twice as far away as that one-- and you know how far away they are, pretty much, by the absolute size. If you have two eyes that work well, then, if something is less than 30 feet away, you can make a pretty good estimate of its distance by focusing both eyes on some feature. And your brain can tell how far apart your eyes are looking. So there are 12 different things. It's more than you need. Lots of people are missing half of those. Lots of people have very poor vision in one eye. Some people cannot fuse stereo images, even though both eyes have 20/20 vision. And in some cases, nobody knows why they can't do that. I think I once took a test for being a pilot, and they wanted to be sure you could do stereo vision, which seemed very strange.
Because if you're in an airplane, and you're less than 30 or 40 feet away from something, it's too-- you could use stereo, but it's too late. Anyway, that's interesting. See if you can think of an example where a person has even more than 12 of these. But it's pretty amazing, isn't it? It's more redundancy. This is too hard to read, but somewhere in an essay of Aristotle's I found the idea that you should represent things in multiple ways. You might describe a house. One person might describe a house as a shelter against destruction by wind, rain, and heat. Another might describe it as a construction of stones, bricks, and timbers. But a third possible description would say, it is that form in that material with that purpose. So you see, there are two different descriptions. One is the functional description: it's a shelter. The second one is a structural description: how it's made. And Aristotle asks, which is the better description-- the material one or the functional one? Is it not rather the person who combines both in a single statement? And then I found a paragraph by Feynman, who says, every theoretical physicist who is any good knows six or seven different ways to represent exactly the same physics. And you know that they're all equivalent, but you keep them all in your head, hoping that they will give you different ideas for guessing-- I should put more dots there. Anyway, that whole argument is to say that the interesting thing about people is that they have so many ways to do things, and perceive things, and think of things. And in some cases, we even know that there are different parts of the brain that are involved in one aspect or another of constructing those different representations or descriptions. If you look at one of my favorite books-- it weighs about 20 pounds --it's the book on the nervous system by Kandel and Schwartz. And the index to that book is quite a lot of pages long, and it mentions 400 different structures in the brain. So the brain is not like the-- well, I shouldn't make fun of the liver. Because for all I know, the liver has 400 different processes for doing things. But the brain has distinguishable areas that seem to perform several hundred different functions. And with a microscope, at first, they all look pretty much the same. But if you look closely, you see slightly different patterns in how the layers are arranged-- most parts of the cortex of the brain have six layers, and each has a population of different kinds of cells. There are a lot of cross connections up and down and sideways to others. They're arranged in columns of between 400 and 1,000 cells, and you have a couple of million of those. And there are lots of differences between the columns in different areas, and we know some of the functions. In most cases, we don't know much about how any of them actually work, with the main exception of vision, where the functions of the cells in the visual cortex are fairly well understood at low levels. So we know how that part of the brain finds the edges and boundaries of different areas, and textures, and regions of the visual field. But we do not know even a little bit about how the brain recognizes something as a chair, and an overhead projector, and a CRT screen, and that sort of thing. The kind of question that I got interested in was, how can you have a system which has a very large number of different kinds of computers? Each of which by itself might be relatively simple or might not, I suppose.
And how could you put them together into a larger system, which could do things like learn language, and prove theorems, and convince people to do things that they would never have dreamed of doing five minutes earlier, and stuff like that? Now the first sort of thing I was interested in was, in fact, how to simulate simple kinds of nerve cells. Because by the 1950s, there was almost 100 years-- really, more like 50 years --of science discovering things about neurons and nerve cells, and the axons and dendrites that they use to communicate with other neurons. So if you go back to 1890, you find a few anatomists discovering some of the functions or connections of neurons in the brain. And you find a few experimental physiologists. There was no oscilloscope yet, but there were very high gain galvanometers, which could detect pulses going along a nerve fiber. And by 1900, it was pretty clear that part of the activity in a nerve cell was chemical and part was electrical. And by 1920 or '30, with the cathode ray tube appearing-- mostly because of television --it became possible to do a lot of neurophysiology by sticking needles in brains. The vacuum tube appears around 1900, and you can make amplifiers that can see millivolts and then microvolts. So in the beginning of the 20th century, there was lots of progress. By 1950, we knew a lot about the nervous system, but we still don't know much about how you learn something in the brain. It's quite clear that the things called synapses are involved. The connections between two neurons become better at conducting nerve impulses under some conditions, but no one knows how higher level knowledge is represented in the brain yet. And "The Society of Mind" book had a lot of theories about that. And in particular, there was a theory called K-lines-- knowledge lines --that came partly from me and partly from a couple of other researchers named David Waltz and Jordan Pollack. That's a sort of nice theory of how neural networks might remember higher level concepts. And for some reason, although that kind of work is from around 1980, which is 30 years ago, it has not hit the neuroscience community. So if you look at "The Emotion Machine" book or "The Society of Mind" on Amazon, you might run across a review by a neurologist named Richard Restak, who says that Minsky makes up a lot of concepts, like K-lines, and micronemes, and stuff like that, that nobody's ever heard of. And there's no evidence for them, and he ignores the possibility that it isn't the nerve cells in the brain that are important, but the supporting tissues called glia, which hold the neurons up and feed them. And he goes on for a couple of insane paragraphs. It's very interesting, because it doesn't occur to him that you can't look for something until you have the idea of it. So here is this 30-year-old idea of K-lines-- go and ask your favorite neurologist, neuroscientist, what it is. And he'll say, oh, I think that's some AI thing, but where's the evidence for it? What do you suppose is my reaction to that? Who is supposed to get the evidence? So it seems to me that there's something strange in the field of neuroscience, which is that it doesn't want new ideas unless you've proved them. So I try to have conversations with them, but get somewhat tired of it. Anyway, in this course, I'm taking the opposite approach, which is that we don't want a theory of thinking. We want a lot of them, because probably, psychology is not like physics. What's the most wonderful thing about physics?
The most wonderful thing is that they have unified theories. There wasn't much of a unified theory until Newton, and he got these three wonderful laws. One was the gravitational idea that bodies attract each other with a force that's the inverse square of the distance between them. Another is that kinetic energy is conserved. I forget what the third one is. Oh-- reaction is equal and opposite. If two things collide, they transfer equal amounts of momentum to each other. There was a little problem up to Newton's time. Galileo got some of those ideas, and my impression from reading him is that he has a dim idea that there are two things around. There's kinetic energy, which is mv-- oops, momentum is mv. And there's kinetic energy, which is mv squared, and he doesn't have the clear idea that there are two different things here. And you can't blame him. I would think-- you wouldn't think that two quantities would combine in two different ways to make two important different concepts. Well, that got clear to Newton somehow, and Galileo is a bit muddled. Although he gets almost all the consequences of those things right, he doesn't get the orbits and things to come out. Anyway, what's happened in artificial intelligence, like most fields, is that people said, well, let's try to understand thinking and psychology. And let's use physics as our model, so what we want is to get a very small number of universal laws. And a lot of psychologists struggled around to do that, and then they gradually separated. So that there were some psychologists, like Bill Estes, who worked out some very nice mathematical rules for reinforcement-based learning-- got a simple rule. If you designed an experiment right, it predicted pretty well how many trials it would take a rat, or a pigeon, or a dog, or whatever to learn a certain thing from trial and error. And Estes got a set of four or five rules, which looked like Newton's laws. And if you designed your experiment very carefully and shielded the animal from noise and everything else-- which is what a physicist would do for a physics experiment --the reinforcement theories got some pretty good models of how to make a machine learn. But they weren't good enough. So here's a whole list of things that happened in the early years of cognitive psychology, when people were trying to make theories of thinking, and they were imitating the physicists. This is physics envy, to borrow a term from Freud: the idea is, can you find a few simple rules that will apply to very broad classes of psychological phenomena? And this led to various kinds of projects. Lots of neural network, and reinforcement, and statistical based methods led to learning machines that were pretty good at learning in some kinds of situations. And they're becoming very popular, but I don't like them. Because, if you have a lot of variables, like 50 or 100, then to use a probabilistic analysis, you have to think of all combinations of those variables. Because if two of them are combined in something like an exclusive-or manner-- you know, I just put the light pen in a pocket. It's either in the left pocket or the right pocket. It can't be both. That's an exclusive or. That will cause a lot of trouble to a learning machine. And if there are a hundred variables, there's no way you could decide which of the 2 to the 100th Boolean combinations of those variables you should think about. So lots of statistical learning systems are good for lots of applications.
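A minimal sketch in Python of the exclusive-or point just made (an illustration of the idea, not anything from the course materials): for uniformly random bits, an XOR target is statistically independent of each input taken alone, so any method that hunts for high single-variable correlations has nothing to grab onto.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation of two equal-length numeric sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Random 3-bit inputs; the target is x0 XOR x1, and x2 is a distractor.
samples = [[random.randint(0, 1) for _ in range(3)] for _ in range(10_000)]
target = [s[0] ^ s[1] for s in samples]

for i in range(3):
    c = corr([s[i] for s in samples], target)
    print(f"corr(x{i}, target) = {c:+.3f}")   # all three hover near zero

# Yet the pair (x0, x1), considered jointly, determines the target exactly.
# With 100 variables there would be 2^100 such combinations to try.
print(f"corr(x0 xor x1, target) = {corr([s[0] ^ s[1] for s in samples], target):+.3f}")
```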
But they just won't cut it to solve hard problems, where the hypothesis is a little bit complicated and has seven or eight variables with complicated interactions. Most statistical learning people assume that, if you get a lot of partial correlations, then you can look for combinations of ones that have high correlations with the result. Then you can start combining them, and things get better and better. However, mathematically, if an effect you're looking for depends on the exclusive or of several variables, there's no way to approach that by successive approximations. If any one of the variables is missing, there won't be any correlation of the phenomenon with the others. Anyway, that's a long story, but I think it's worth complaining about. Because almost all young people who start working on artificial intelligence look around and say, what's popular? Statistical learning-- so I'll do that. That's exactly the way to kill yourself scientifically. You don't want to do the most popular thing. You want to say, what am I really good at that's different? And what are the chances that would provide another thing? End of long speech. Another problem in the last 30 years-- and as you'll see during my lectures, I think a lot of wonderful things happened between 1950, when the idea of AI first got articulated, and the 20 years after that, from 1960 to 1980; a lot of early experiments-- and I'll show you some of them --looked very promising. In fact, they may be-- here we go. 1961: Jim Slagle was a young graduate student here at MIT. He was blind. He had gotten some retinal degeneration thing in his first or second year of high school. He was told that he would lose all his vision, and there was no treatment or hope. So he learned Braille while he could still see. And when he got to MIT, he was completely blind, but there was a nice big parking lot in Technology Square. And he would ride a bicycle. And people like Sussman, and Winston, and whoever was around would yell at him, telling him where the next obstacle would be. Jim got better and better at that, and nothing would stop him. And he decided he would write a program that-- oh, first I wrote a program that would take any formula and find its derivative. It was really easy, because there were just about five rules. Like, if there's a product uv, then you compute u times the derivative of v plus v times the derivative of u-- u dv plus v du. So I wrote a 20-line Lisp program that did all the algebraic expressions, and what it would do is put d's in the right place. And then it would go back through the expression again. Wherever it saw a d, it would do the derivative of the thing after that, and nothing to it. So Slagle said, well, I'll do integrals. And we all said, well, that's very hard. Nobody knows how to do it. And in fact, in Providence, at the home of the American Mathematical Society, there is a big library called the Bateman Manuscript Project, which has been collecting all known integrals for 100 years. And when anybody finds a new integral that they can integrate in closed form, they send the formulas to the Bateman Manuscript Project, and some hackers there developed ways to index it. So if you had an integral, and you didn't know how to integrate it, you could look it up. And that was pretty big. I should say that Slagle succeeded in writing a program that managed to do all of the kinds of integrals that one usually found in the first-year calculus course at MIT and got an A in those. He couldn't do word problems.
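Here is a minimal sketch, in Python rather than 1961 Lisp, of the kind of five-rule differentiator just described (a reconstruction of the idea, not Slagle's or the 20-line original). Like the original, it just puts the d's in the right places and makes no attempt to simplify the result.

```python
def d(e, x):
    """Derivative of expression e with respect to variable x.
    Expressions are numbers, variable names, or nested tuples:
    ('+', u, v), ('*', u, v), ('^', u, n) with n a constant exponent."""
    if isinstance(e, (int, float)):   # constant rule: dc/dx = 0
        return 0
    if isinstance(e, str):            # variable rule: dx/dx = 1, dy/dx = 0
        return 1 if e == x else 0
    op, u, v = e
    if op == '+':                     # sum rule: d(u + v) = du + dv
        return ('+', d(u, x), d(v, x))
    if op == '*':                     # product rule: d(uv) = u dv + v du
        return ('+', ('*', u, d(v, x)), ('*', v, d(u, x)))
    if op == '^':                     # power rule: d(u^n) = n u^(n-1) du
        return ('*', ('*', v, ('^', u, v - 1)), d(u, x))
    raise ValueError(f"unknown operator {op!r}")

# d/dx of 3*x^2 + x -- the output is correct but unsimplified:
print(d(('+', ('*', 3, ('^', 'x', 2)), 'x'), 'x'))
```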
And the uncanny thing is that, if it was a problem that usually took an MIT student five or 10 minutes, Slagle's program would take five or 10 minutes. It's running on an IBM 701 with a 20 millisecond cycle time. It's incredibly slow-- you can type almost that fast --and 16K words of memory. So there's no significance whatever to this accident of time. It would now take a microsecond or so. It would be 1,000 million times faster than a student. Quite remarkable. I don't have a slide. Joel Moses-- then Slagle went and graduated. Joel Moses was another student, who is-- is he provost now or what? He got tired of it. A terrific student, and he set up a project called MACSYMA, for Project MAC Symbolic Algebra, and got several people all over the country working on integration. And at some point, a couple of them, Bobby Caviness and-- I forget the other one --found a procedure that could, in fact, integrate everything-- every algebraic expression that has a-- can be integrated in closed form. I forget the couple of constraints on it. And that became a widely used system. It ultimately got replaced by Stephen Wolfram's Mathematica. But MACSYMA was sort of the world-class symbolic mathematician for quite a few years. And Moses mentioned to me he had read Slagle's thesis program. And it took him a couple of weeks to understand the two pages of-- or three pages of --Lisp that Slagle had written. Because being blind, Slagle had tried to get the thing into as compact a form as possible. But that's symbolic. It's too easy. A more ambitious one was, three years later, by Dan Bobrow, who is now a vice president doing something at Xerox-- and it solved problems like this. The gas consumption of my car is 15 miles per gallon. The distance between Boston and New York is 250 miles. What is the number of gallons used on a trip between Boston and New York? And it chomps away and solves that. It has about 100 rules. It doesn't really know what any of those words mean. But it thinks that the word "is" is equals. The distance between-- it doesn't care what Boston and New York is. It has a format thing which says "the distance between" two things. And it never bothers to-- you see, because the phrase "Boston and New York" occurs twice in the example, it just replaces that by some symbol. It was fairly remarkable. And generally, if you had an algebra problem, and you told it to Bobrow, Bobrow could type something in, and it would solve it. If you typed it in, it probably wouldn't. But it was-- it had more than half a chance, or less-- about half a chance. So it was pretty good. And if you look at an out-of-print book I wrote called-- I compiled, called --Semantic Information Processing, most of Bobrow's program is in that. So that's 1964. I'll skip Winograd, which is, perhaps, the most interesting program. This was a program where you could talk to a robot that-- I don't have a good picture in this slide. But there are a bunch of blocks of different colors. They're all cubes in the-- or rectangular blocks. And you can say, which is the largest block on top of the big blue block? And it would answer you. And you could say, put the large red block on top of the small green block, and it would do that. And Winograd's program was, of course, a symbolic one. We actually built a robot. And I guess we built it second. Our friends at Stanford built a robot. And they imported Winograd's program. And they had the robot actually performing these operations that you told it to do by typing. And it was pretty exciting.
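A toy sketch in Python of the trick Bobrow's program used on the gas-consumption problem above (a reconstruction of the flavor, not his actual program, which had about 100 such rules): treat "is" as "=", match surface templates, keep the noun phrases as opaque symbols, and never ask what the words mean. The one world-knowledge relation here-- gallons equals distance divided by consumption --is hand-coded, much as the real program carried relations like distance equals speed times time.

```python
import re

problem = ("The gas consumption of my car is 15 miles per gallon. "
           "The distance between Boston and New York is 250 miles. "
           "What is the number of gallons used on a trip between "
           "Boston and New York?")

# Template: "The <opaque noun phrase> is <number> ..." -- "is" means "=".
facts = {m.group(1).lower(): float(m.group(2))
         for m in re.finditer(r"[Tt]he ([\w\s]+?) is (\d+(?:\.\d+)?)", problem)}

# Hand-coded relation playing the role of one of the program's rules.
gallons = (facts["distance between boston and new york"]
           / facts["gas consumption of my car"])
print(f"{gallons:.1f} gallons")   # -> 16.7 gallons
```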
My favorite program in that period was this one, because it's so psychological. This is called a geometrical analogy test. And it's on some IQ tests. A is to B as C is to which of the following five? And Evans wrote a set of rules which were pretty good at this-- did as well as 16-year-olds. And it picks this one. And if you ask it why, it says something like, I don't have a reason-- it moves the largest object down, or something like that; it makes up different reasons. So you see, in some sense, we're going backwards in age. Because we're going from calculus, to algebra, to simple analogies. Oh, there it is. That's one where the largest object moves down. I don't know why I have two of them. These are for another lecture. OK. So that was a period in which we picked problems that people considered hard, because they were mathematical. But when you think about it more, you see, well, those math things are just procedures. And once you know what Laplace, and Gauss, and those mathematicians-- Newton and people --did, you can write down systematic procedures for integrating, or for solving simultaneous algebraic constraint equations, or things like that. And so there's very little to it. So in some sense, if you look at what you're doing in math in high school, in education, you're going from hard to easy. It's just that people aren't-- most people aren't very good at obeying really simple rules, because it's so hideously boring or something. So we gradually started to ask, well, why can't we make machines understand everyday things, and the things that everyone regards as common sense, and people can do so you don't need machines to do them? And one of my favorite examples is, why can you pull something with a string but not push? And there's been a lot of publicity recently about that interesting program written by a group at IBM called Watson, which is good at finding facts about sportspeople, and celebrities, and politics, and so forth. But there's no way it could understand why you can pull something with a string but not push. And I don't know of any program that has that concept or way of dealing with it. So that's what I got interested in. And starting around the-- maybe the middle 1970s or late 1970s, several of us started to stop doing the easy stuff and try to make theories of how you would do the kinds of things that people are uniquely good at. I don't know if animals-- well, I don't know. I'm sure a monkey wouldn't try to push anything with a string. Maybe it does it very quickly, and you don't notice. And one aspect of commonsense thinking, going right back to that idea of vision having a dozen different systems, is that, whatever a person normally is doing, they are probably representing it in several different ways. And here's an actual scene of two kids named Julie and Henry who are playing with blocks. It's pretty hard to see those blocks. And you can think that Julie is thinking seven thoughts. I'd like to see a longer list. Maybe a good essay would be to take a few examples and say, what are the most common micro-worlds? See: physical, social, emotional, mental, instrumental-- whatever that is --visual, tactile, spatial. She's thinking all these things. What if I pulled out that bottom block? You can't see the tower very well. Should I help him or knock his tower down? How would he react? I forgot where I left the arch-shaped block. That was real. It's somewhere over here. But I don't think we could-- maybe it's that. I don't know.
I remember, when it happened, she mentioned that she reached around, and it wasn't where she thought it was. So commonsense thinking involves this-- in most cases, I think, several representations. I don't know if it's as many as seven, or maybe 20, or what. But that's the kind of thing we want to know how to do. OK, I think I'll stop, and we'll discuss things. But in the next lecture, I'll talk about a model of how I think thinking works. What's the difference between us and our ancestors? We know we have a larger brain. But if you think about it, if you took the brain that you already had and say-- trying to remember the name of the little monkey that looks like a squirrel, jumps around in trees. Anybody know what-- AUDIENCE: Spider monkey? PROFESSOR: What? AUDIENCE: Spider monkey? PROFESSOR: Maybe. It's a squirrel-like thing. You wouldn't know it was a monkey till you took a close look. AUDIENCE: Lemur, maybe? PROFESSOR: Maybe. Lemur? I don't-- I forget. I'll have to-- anyway, if you just made the brain bigger, then the poor animal would be slower, and heavier, and would need more food, and take longer to reproduce. The joke about difficulty in giving birth-- I don't know if any animal has the problem that humans have. A lot of people die and so on. So how did we evolve new ways to think and so forth? And my first book, "The Society of Mind," had this theory that maybe we evolved in a series of higher and higher levels, or management structures, built on the earlier ones. And this particular picture suggests that I got this idea from Sigmund Freud's early theories. There's been a lot of Freud bashing recently. You can look on the web. I forget the authors. But there are a couple of books saying that he made up all his data, and there's no evidence that he ever cured anyone, and that he lied about all the data mentioned in his 30 or 40 books, and so forth. AUDIENCE: Also [INAUDIBLE]. PROFESSOR: Yes, right. But the funny part is that if you look at his first major book-- 1899-- called "The Interpretation of Dreams," it sort of outlines his theory that most of thinking is unconscious, and it's processes you can't get access to. And it has a little bit about sex, but that's not a major feature. And it's just full of great ideas that the cognitive psychologists finally began to get in the 1960s again and never gave credit to Freud. So he may well have made up his data. But if you have a very good theory and nobody will listen to you, what can you do? His friend Wilhelm Fliess listened to him. And there was another paper on how the neurons might be involved in thinking, which was written around 1895 but never got published till 1950, by-- I forget who --called "Project for a Scientific Psychology." And it's full of ideas that, if they had been published, might have changed everything. Because-- anyway, what's on your mind? Who has-- what would you like to hear about? Or who has another theory? AUDIENCE: I've got a question. PROFESSOR: Great. AUDIENCE: So earlier, you talked a little bit about how we don't really see the neuroscience of all of these things like K-lines, et cetera. Do you think it's because they're just really hard to find, or no one's actually looking for them? PROFESSOR: Well, Restak's review says, he uses vague, ill-defined terms like K-line, and microneme, and a couple of others, and frame, and so forth. They're very well-defined. They're defined better-- I mean, when he talks about neurotransmitters, it's as though he thinks that chemical has some real significance.
Any chemical would have the same function as any other one, provided there's a receptor that causes something to happen in the cell membrane. So you don't want to regard acetylcholine or epinephrine as having a mental significance. It's just a-- it's just another pulse, but very low-resolution. And yes, a neurochemical might affect all the neurons a little bit, and raise the average amount of activity of some big population of cells, and reduce the average activity of some others. But that's nothing like thinking. That's like saying, in order to understand how a car works-- what's the most insulting thing I could say? Or, to understand how a computer works, you have to understand the arsenic, and phosphorus, and/or-- what's the other one? You have to understand these atoms that are-- what? AUDIENCE: Germanium. PROFESSOR: Yeah, well, that's the matrix. So there are these one-part-in-a-million impurities. And that's what's important about a computer, isn't it-- the fact that the transistor has gain and so forth. Well, no, the trouble with the computer is the transistors. That's why practically every transistor in a computer is mated to another one in opposite phase to form a flip-flop, whose properties are exactly the same except one time in a quadrillion. In other words, everything chemical about a computer is irrelevant. And I suspect that almost everything chemical about the brain is unimportant, except that it causes-- it helps to make the columns in the cortex, which are complicated arrangements of several hundred cells, work reliably. Whereas the neuroscientist is looking for the secret in the sodium. When a neuron fires, the important thing is that that lets the sodium in and the potassium out, or vice versa-- I forget which --at 500 millivolts-- really quite a colossal event. But it has no significance. It's only when it's attached to a flip-flop, or to something like a K-line, which has an encoder and decoder of a digital sort every few microns of its length, that you get something functional. So the trouble is, the poor neuroscientists started out with too much knowledge about the wrong thing. The chemistry of the neuron firing is very interesting, and complicated, and cute. And in the case of the electric eel, you know what happened there. The neuron synapse-- it got rid of the next neuron. And it just-- in the electric eel, you have a bunch of synapses, or motor end plates they're called, in series. So instead of half a volt, if you have 300 of those, you get 150 volts. I think the electric shock that an electric eel can give you is about 300 volts. And this can cause you to drown promptly if you are in the wrong place when it happens to bump into you. I don't know why I'm rambling this way. You're welcome to study neuroscience. But please try to help them instead of learn from them. [LAUGHTER] Yeah? They just don't know what a K-line is. And that's a paper that's been widely read. It was published in 1980, and Restak says ill-defined. And I guess he couldn't understand it. Yep? Yeah? AUDIENCE: Why is no one trying to get the neuroscientists to find this in the human mind? Why don't we just, as computer scientists, program the K-lines and try to [INAUDIBLE]? This is the human mind, and we can reproduce it. Why is that not widespread in the computer science field? PROFESSOR: Well, there are-- I'm surprised how little has been done. There's-- Mike Travers has a thesis, Tony Hearn. There are three master's theses on K-lines.
They sort of got them to work to solve some simple problems. But I'd go further. I've never met a neuroscientist who knows the pioneering work of Newell and Simon in the late 1950s. So there's something wrong with that community. They're just ignorant. They're proud of it. Oh, well. I spent some time learning neuroscience when I was-- I once had a great stroke of luck. When I was a-- I guess I was a junior at Harvard. And there was a great new biology building that had just been constructed. You probably know it-- it's a great, big thing with two rhinoceroses. What are those-- what are those two huge animals? So this building was just finished and half occupied, because it was made with a future in mind. So I wandered over there, and I met a professor named John Welsh. And I said, I'd like to learn neurology. And he said, great, well, I have an extra lab. Why don't you-- why don't you study the crayfish claw? I said, great. So he gave me this lab, which had four rooms, and a dark room, and a lot of equipment, and nobody there. And he had worked on crayfish. So there was somebody who went, every week, up to Walden Pond or somewhere, and caught crayfish, and brought them back. And I was a radio amateur hacker at the time. So I was good at electronics. So I got my crayfish. And Welsh showed me how to-- the great thing about this preparation is you can take the crayfish claw, and if you hold it just right, it goes, snap. It comes off. Grows another one-- takes a couple of years. And then there's this white thing hanging out, which is the nerve. And it turns out it's six nerves, one big one and a few little ones. And if you keep it in Ringer's solution, whatever that is, it can live for several days. So I got a lot of switches, and little inductors, and things, and made a gadget, and mounted this thing with six wires going to these nerves. And then I programmed it to reach down and pick up a pencil like that and wave it around. Well, that's obviously completely trivial. And all the neuroscientists came around, and gasped, and said, that's incredible. How did you do that? [LAUGHTER] They had never thought of putting the thing back together and making it work. Anyway, it was-- I'm always reminding myself that I'm the luckiest person in the world. Because every time I wanted to do something, I just happened to find the right person. And they'd give me a lab. I got an idea for a microscope. And it was this great professor, Purcell, who got the Nobel Prize after a while. And he said, that sounds like it would work. Why don't you take this lab? It was in the Jefferson. Anyway-- yeah? AUDIENCE: I think part of the reason that you don't see experimental neuroscience on things like K-lines is that neurons are long and thin. So if you want to do an experiment to actually measure a real neural network, you have to trace structures with, roughly, maybe tens of nanometers resolution. But you need to trace them over what might be a couple, or even tens, of millimeters to start to-- and you need to do this for thousands and thousands of neurons before you could get to the point of seeing something like a K-line and understanding it. So it's just a massive data acquisition and processing problem. PROFESSOR: Oh, but they're doing that. AUDIENCE: They're starting to try to. PROFESSOR: But they don't have-- they don't know what to look for. Maybe you don't have to do so much. Maybe you just have to do a few sections here and there and say, well, look, there were 400 of these here. Now there's only 200.
It looks like this is the same kind. Maybe you don't have to do the whole brain. AUDIENCE: No, but I mean even getting a single neuron is big. Because it might get down to-- you need to be looking at electron micrographs of brains that are sliced at about 30-millimeter-- sorry, excuse me, 30-nanometer --slices. So even just having a single person reconstruct a single neuron takes-- might take weeks. PROFESSOR: Well, I don't know. Maybe a bundle of K-lines is half a millimeter thick. AUDIENCE: Oh, so if you actually do have some larger-scale structure to start off looking at, yeah. PROFESSOR: Why not? I just think they have no idea what to look for. I could give you 20 of those in five minutes, but nobody's listening. AUDIENCE: So I guess you need to know what it looks like before you can look for it at that scale. PROFESSOR: What scale? AUDIENCE: I don't know. I mean, they know what neurons look like. So you know-- PROFESSOR: Yeah. AUDIENCE: You know what to look for if you're saying a neural net level. PROFESSOR: I'm saying you may only have to look at the white matter. AUDIENCE: Oh, all right. PROFESSOR: Ignore the neurons. Because the point of K-lines is, where do these go? And what goes into them and out? I don't know. It's just this idea, let's map the whole brain, 100 billion things. And then people like Restak say, oh, and there's 1,000 supporting cells for each neuron. He's just glorying in the obscurity of it rather than trying to contribute something. Anyway, if you run into him, give him my regards. [LAUGHTER] I really wonder how somebody can write something like that. Yes? AUDIENCE: Excuse my ignorance, but what is a K-line? PROFESSOR: The idea is that-- suppose one part of the brain is doing something, and it's in some particular state that's very important, like-- I don't know, that-- like I've just seen a glass of water. Then another part of the brain would like to know there's a glass of water in the environment. And I've been looking for one. So I should try to take over and do something about that. Now at the moment, there is no theory of what happens in different parts of the brain for a simple thing like that to happen-- no theory at all, except they use the word "association." Or they talk about, what are the purposeful neurons? Goal-- I forget. OK, so my theory is that there are a bunch of things which are massive collections of nerve fibers, maybe a few hundred or a few thousand. And when the visual system sees an apple, it turns on 50 of those wires. And when it sees a pear, it turns on a different 100 or 50 of those wires. But about 20 of them are the same, and so forth. In other words, it's like the edge of a punched card. Have you ever seen a card-based retrieval system? If you have a book that has-- suppose it's about physics, and biology, and Sumatra. And a typical 5 by 8 card has 80 holes in the top edge. So what you do is, if it's Sumatra, you punch eight of these holes-- a particular set, chosen at random, that's assigned to Sumatra. And then if it's-- I forget what my first two examples were. But you punch eight or 10 holes for each of the other two words. So now there are 24 punches. Only probably four or five of them are duplicates. So you're punching about 20 holes. And now, if something is looking for the cards that have-- were punched for those three things, even if there are 30 or 40 other holes punched in the card, you stick your 20 wires through the whole deck and lift it up. And only the cards fall out that had those three categories punched for.
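A rough sketch in Python of the superimposed coding just described. The parameters-- 80 positions, 8 per category --come from the example above; the hashing is a stand-in for the random hole assignments. Each category gets a fixed set of holes, a card is the union of its categories' holes, and retrieval keeps the cards whose holes cover the query's needles; occasional false drops are possible, exactly as with the physical cards.

```python
import hashlib

HOLES, PER_CATEGORY = 80, 8

def holes_for(category):
    # Deterministically pick 8 of the 80 hole positions for a category.
    digest = hashlib.sha256(category.encode()).digest()
    picks = set()
    for byte in digest:
        picks.add(byte % HOLES)
        if len(picks) == PER_CATEGORY:
            break
    return picks

def punch(categories):
    # A "card" is just the union of its categories' hole sets.
    card = set()
    for c in categories:
        card |= holes_for(c)
    return card

deck = {
    "book1": punch(["physics", "biology", "sumatra"]),
    "book2": punch(["physics", "music"]),
    "book3": punch(["sumatra", "history"]),
}

# Stick the needles through the deck: keep cards covering all query holes.
query = holes_for("physics") | holes_for("sumatra")
print([name for name, card in deck.items() if query <= card])   # ['book1']
```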
So you see, even though you had 80 holes, you could punch combinations of up to a million different categories into that. And if you have to put a bunch of wires through, you'll get all of the ones that were punched for those cate-- the categories you're looking for. And you might get three or four other cards that will come down also, because all of the eight holes were punched for some category by accident. Do you get the picture? I'll send you a reference. It was invented by a-- in the 19-- early 1940s by a Cambridge scientist here named Calvin Mooers and was widely used in libraries for information retrieval until computers came along. But anyway, that's the sort of thing that you could look for in a brain if you had the concept in your head of Zatocoding. But I've never met a neuroscientist who ever heard of such a thing. So you have this whole community which doesn't have a set of very clear ideas about different ways that knowledge or symbols could be represented in neural activity. So good luck to them when they get their big map. They'll still have to say, what do I do with 100 billion of these interconnections? Yeah? AUDIENCE: What are your thoughts about the current artificial intelligence research at MIT, such as Winston's Genesis project? PROFESSOR: I think Winston is just about one of the best ones in the whole world. I don't know any other projects that are trying to do things on that higher level of commonsense knowledge. He's just lost a lot of funding. So one problem is, how do you support a project like that? Have you followed it? I don't know if there's a recent summary of what they're doing. AUDIENCE: [INAUDIBLE] PROFESSOR: We used to write a big-- a new book every year called the progress report. The nice thing is that we never wrote-- we had a very good form of support from ARPA, or DARPA, which was, every year, we'd-- every year, we'd tell them what we had done. They didn't-- they didn't want to hear what we wanted to do. And things have turned to the opposite. So what would happen is, every year, we'd say, we did these great things, and we might do some more. Went on for about 20 years. And it was-- and then it fell apart. One thing-- it's a nice story-- there was a great liberal senator, Mike Mansfield. And unfortunately, he got the idea that the defense department was getting too big and influential. So he got Congress to pass a law that ARPA shouldn't be allowed to support anything that didn't have direct military application. And Congress went for this. And all of a sudden, a lot of research disappeared-- basic research. It didn't bother us much, because we made up applications and said, well, this will make a military robot that will go out and do something bad. I don't remember ever writing anything at all. Because-- but anyway, around 1980, the funding for that sort of thing just dried up because of this political accident. It was just an accident that ARPA, mainly through the Office of Naval Research, was funding basic research. And that was a bit of history. If you look back at the year 1900 or so, you see people like Einstein making these nice theories. But Einstein wasn't a very abstract mathematician. So he had a mathematician named Hermann Weyl polishing his tensors and things for him. And Hermann Weyl's son, Joe, was at the Office of Naval Research in my early time.
And that office had spent a lot of secret money getting scientists out of Europe while Hitler was marching around, and sending them to places like Princeton and other forms of heaven in the-- in Cambridge. And again, one of the reasons I was lucky is that I was here. And all these-- you know, if you had a mathematical question, you could find the best mathematician in the world down the block somewhere. And Joe Weyl was partly responsible for that. And the ONR was piping all that money to us for work on early AI. So it was a very sad thing that the-- maybe the most influential liberal in the US government actually ruined everything by accident. ARPA changed its name to DARPA. It was the Advanced Research Projects Agency. And it had to call itself the Defense Advanced Research Projects Agency. Yeah? AUDIENCE: My question is, do you think the achievement of artificial intelligence is inevitable? Or is there an obstacle that we're just never going to be able to overcome? PROFESSOR: Well, Christianity wiped out science. That might happen tomorrow. Only choose your religion. AUDIENCE: Even in favorable circumstances? PROFESSOR: It's a hard problem. The number of people working on advanced ideas in AI has gotten smaller and smaller as the-- right now, the-- around 1980, rule-based systems became popular. And there were lots of things to do. Right now, statistical-based inference systems are becoming popular. And as I said, these things are tremendously useful. But the problem is, if you have a statistical system, the important part is guessing what are the plausible hypotheses and then making up the-- then finding out how many instances of that are correlated with such and such. So it's a nice idea. But the hard problem is the abstract, symbolic problem of, what sets of variables are worth considering at all when there are a lot of them? So to me, the most exciting projects are the kind that Winston is developing for reasoning about real-life situations. And the one that Henry Lieberman-- would you stand up, Henry? Lieberman runs a world-class group that's working on commonsense knowledge and informal reasoning. And it seems to me that that's the critical thing that all the other systems will need. In the meantime, there are people working on logical inference, which has the same problem that statistical inference has-- namely, how do you guess which combinations of variables are worth thinking about? Then it seems to me that the statistics isn't so important. In fact, there's a great researcher named Douglas Lenat in Austin, Texas, who once made an interesting AI system that was good at making predictions and guessing explanations for things. And it was sort of like a probabilistic system. It had a lot of hypotheses. And every time one of them was useful in solving a problem, it moved it up one on the list. So Lenat's thing never used any numbers. It didn't say, this is successful 0.73 of the time, and now it's successful 0.7364825 of the time. What it would do is, if something was useful, it would move it up past another hypothesis. Every now and then, it would put a new one in. Well, if you're doing-- if you're trying to solve the problem, what do you need to know? You want to know, what's the most use-- what's the most likely to be useful one, and try that. You don't care how likely it is to be useful, as long as it's the most, right? I mean, if it's one in a million, maybe you should say, I'm getting out of here. I don't-- I shouldn't be working in this field at all, or get a better problem.
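A minimal sketch in Python of the numberless ranking idea just described (a guess at the mechanism, not Lenat's code): keep the hypotheses in an ordered list, try them top-down, and when one proves useful, just swap it up one place-- no 0.7364825s anywhere.

```python
class HypothesisRanker:
    def __init__(self, hypotheses):
        self.ranked = list(hypotheses)   # position in the list is the only "score"

    def best_first(self):
        # Try hypotheses in their current rank order.
        return iter(self.ranked)

    def reward(self, h):
        # A useful hypothesis pops up one slot, past its neighbor.
        i = self.ranked.index(h)
        if i > 0:
            self.ranked[i - 1], self.ranked[i] = self.ranked[i], self.ranked[i - 1]

ranker = HypothesisRanker(["try-algebra", "try-analogy", "try-lookup"])
ranker.reward("try-lookup")   # it just solved a problem
ranker.reward("try-lookup")   # solved another: now it's at the top
print(ranker.ranked)          # ['try-lookup', 'try-algebra', 'try-analogy']
```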
But Lenat's thing did rather wonderfully at making theories by just changing the ranking of the hypotheses that it considered-- no numbers. It did something very cute. He gave it examples of arithmetic. And it actually-- it was a rather long effort. And it actually learned to do some arithmetic. And it invented the idea of division and the idea of prime number, which was some number that wasn't divisible by anything. It decided that 9 was a prime-- didn't do much harm. And it crept along. And it got better and better. And it invented modular arithmetic by accident at some point. And it's a PhD thesis. A lot of people didn't believe this PhD thesis, because Lenat lost the program tape. [LAUGHTER] So he was under some cloud of suspicion, with people thinking he might have faked it. But who cares? Anyway, I think there's a lesson there, which is: let's start with something that works. And then, if it's really good, then hire a mathematician who might be able to optimize it a little. But the important thing was the order. And a good statistical one might waste a lot of time. Because here's this one that's 0.78. And here's this one that's 0.56. And it's the next one down. And you get a lot of experience. And it goes up to 0.57 and 0.58. And it never-- you know, it might be a long time before it gets past the other one, because you're doing arithmetic. Whereas in Lenat's, it would just pop up past the other one. Then it would get tried right away. And if it were no good, it would get knocked down again. Who cares? So it's a real question of-- I don't know. Mathematics is great. And I love it. And a lot of you do. But there should be a name for when it's actually slowing you down and wasting your time, because there's a better way that's not formal. Yeah? AUDIENCE: Isn't there a saying that there are people who know the price of everything and the value of nothing? PROFESSOR: That's very nice. Yeah? AUDIENCE: I know you're also a musician. So I have a music-related question. What do you think is the role of music? Like, why do all cultures have it? PROFESSOR: I have a paper about it. Oh, OK. I've been trying to revise it, actually. But it's a strange question, because there is music everywhere. On the other hand, I have several friends who are amusical. And so I have this theory that music is a way of teaching you to represent things in an orderly fashion and stuff like that. Well, I have three of my colleagues who aren't musical, but they dance. So they may-- it may be that-- I don't know the answer. It's interesting. The theory-- the first theory in my paper is that when you have a lot of complicated things happening, then the only way to learn is to represent things that happen, and then look at the differences between things that are similar, and then try to explain the differences, right? I mean, what else is there? Maybe there's something else. So in order to become intelligent and understand things, you have to be able to compare things. And to me, the most important feature of what's called music is that it's divided into measures-- bah, bah, bah, bah, bah, bah, bah, bah, bah, bup, bah, bah. And measures have the same number of beats, or whatever they are. And so now you can say da, da, da, da, da, da, da, da, da, da, da, da, da. What's the difference? You changed the eighth notes in the second one-- the last four eighth notes-- no, the two before last --to a quarter note. So you're taking things that were in different times, and you're superimposing those times.
And now you can see the difference. And the reason you can see the difference is that you have things called measures. And the measures have things called beats. And so things get locked into very good frames. Now there's some Indian music which has 14 measures for a phrase. And some of the measures go seven and five. And I can make no sense of that stuff whatever. And I've tried fairly hard, but not very. So I don't understand how Indians can think. Any of you can handle Indian music? AUDIENCE: I guess, just to add on to what you said about this, my favorite quote from your paper "Music, Mind, and Meaning" is the one about what good is music-- about how kids play with blocks to learn about space, and people play with music to learn about time. And I think, in that sense, both music and dance are different ways that people can arrange things in time. And in a sense, improvisatory music and improvisatory movement are both ways of arranging-- different blocks, if you will --in time as opposed to space. PROFESSOR: Mm-hmm. Yeah, my friends who seem amusical, they probably-- maybe there's something different about their cochlea. Or maybe they have absolute pitch in some sense, which is a bad thing to have. Because if you're listening to a piece composed by a composer who doesn't have absolute pitch, then you're reading all sorts of things into the music that shouldn't be there. And if you're-- and the opposite would be true. I read music criticism sometimes. And maybe the reviewer says, and after the second and third movement, he finally returns to the initial key of E flat major-- what a relief. Well, I once had absolute pitch for a couple of weeks, because I ran a tuning fork in my room for a month. And I didn't like it. [LAUGHTER] Because you can't listen to Bach anymore. Oh, well. It's a good question, why do people like music. And I don't know any other paper like mine. If you ever find one, I'd like to see it. Because if you go to a big library, there are thousands of books about music. And if you open one, it's mostly Berlioz complaining that somebody wouldn't give him enough money to hire a big enough chorus. But I've found very few books about music itself. Yes? AUDIENCE: It's not about music anymore. Is that all right? OK. Do you think that having a body is a necessary component of having a mind? Could you do just as well just as a-- you know, sort of a simulated creature? PROFESSOR: Oh, sure, you could-- AUDIENCE: And have all the things? PROFESSOR: Simulation-- I think a mind that's not in a world wouldn't have much to do. It would have to invent the world. And I don't see why it couldn't. But you might have to give it a start, like the idea of three or four dimensions. But can't you-- what happens if you sit back and just think for a while? You wouldn't know if your body had disappeared for a while-- would you? There also is a strange idea about existence and-- why do you think there's a world? One of the things that bugs me is people say, well, who created it? And that can't make any sense. Because this is just a possible world. Suppose there are a whole lot of possible worlds, and there's one real one. How could you ever-- how could you possibly know which one you're in? And then you could say, well, didn't someone have to make it? And what's the next thing you'd ask? Well, who made the maker? So the body-mind thing-- it seems to me that once you have a computer, it can be its own world. It just can sit.
The program can spend half the time simulating a world and half the time thinking about what it's like to be in it. Yeah? AUDIENCE: Do you have a current theory of existence? PROFESSOR: Yes, it's an empty concept. It's all right to say this bottle exists, because that's saying this bottle is in this universe. But what would it mean to say the universe exists? The universe is in the universe? So there's something wrong with thinking about-- so there are only possible worlds. There's no-- it doesn't make any sense to pick one of them out and say that's the real one. Yeah? AUDIENCE: So existence is relative? PROFESSOR: Yeah, you don't say, this is the world I'm in. But you shouldn't say-- that doesn't mean it exists. Like, 2 is in the set of even numbers. But what's the set of even numbers in? It doesn't stop anywhere. Yeah? Lots of worlds. AUDIENCE: So is mathematics [INAUDIBLE] worlds. But physics, it only explains the current world? Or I don't know how you compare these two subjects. PROFESSOR: Well, you can't tell. Because five minutes from now, everything might change. So nothing ever explains anything. You just have to take what you've got and make the best of it. Yeah? AUDIENCE: So this relationship means systems knowledge in artificial intelligence? PROFESSOR: Which knowledge? AUDIENCE: Systems, basically, in general. PROFESSOR: Well, there are people who talk about systems theory, but I'm not sure that it's well-defined. AUDIENCE: Right, exactly. PROFESSOR: Artificial intelligence means, to me, making a system that is resourceful and doesn't get stuck. And so if you have a system-- and also, it's an-- how do you put it? Some definitions are not stationary, like what's popular. Popular is what's popular now. There isn't any such thing as popular music in terms of the music. So I know there were-- there was once a little department called systems analysis at Tufts, which had a couple of rather good philosophers trying to make a general theory of everything. And they were writing nice little papers. And it got a-- it moved along. But then there was this Senator McCarthy you've probably heard of. And he announced that he had evidence that the-- one of the principal investigators had slept with his wife before they were married. Well, Tufts was very frightened by this and abolished that department. And Bill Schutz went to California and started Esalen, and had a good time for the next 50 years-- more stories. Yeah? AUDIENCE: Kind of as an extension of the body-mind question, it seems to me we-- as humans, we-- learn a lot from just interacting with the environment. Like language, we hear it being spoken. We speak it. We see things. We touch things. But as far as I know, a lot of the efforts in artificial intelligence so far have been confined to the computer, which does not go out into the real world, interact. Doesn't [INAUDIBLE] to see, learn new things. PROFESSOR: Well, here's the problem-- I look over at Carnegie Mellon, and there are some nice projects. And the most popular one is robot soccer. And here are these little robots kicking a ball around. They're Sony-- what are they called? AUDIENCE: AIBOs. PROFESSOR: Yes, the Sony AIBOs. Sony stopped making the AIBOs. But it respected Carnegie, and it made a little stash, secret stash, of AIBOs to send to Carnegie when the present ones break. But my impression of AI projects that have robots is that they do less, less, less than projects that don't. The reason is, if you have a robot like ASIMO, made by Sony? No. AUDIENCE: Honda. 
PROFESSOR: Honda. ASIMO can get in the back seat of a car with some effort, and usually falls over. However, if you simulate a stick figure in a computer getting into a stick figure of a car, then you can make it learn to do that and get better and better. And so all AI projects without robots are way ahead of all AI projects with robots. And the profound reason is that robots are usually expensive, and they're always being fixed. So if you have five students, and the robot is being fixed-- I don't know what they're doing-- but they have to wait. Whereas if you have a stick figure robot, then you can just run it on this, although it might be a little slower than your mainframe-- probably not. Yeah? AUDIENCE: So then back to the idea of the mind and the body, here's a theory that I just thought of-- the idea of the body as something abstract, basically a mechanism for input-output. It's a set of sensors from which our brains can get information about the world, and a set of actuators with which we can display our state. So in that light, it's almost as if our brains are really independent of our body itself. It can adapt to any sort of body if we happen to hook it up that way. And it just so happens that we've been hooked up to this body since birth, that we have such good mental models of how to use this body. And I guess an example of-- from experiments that support this theory might be how, when people have limbs amputated, it takes them a while to forget that they have the limb. Because their mental model still exists. The mental models don't go away overnight. And also, I guess, they train monkeys to control robot arms with their brains. PROFESSOR: Sure. Well, but it just seems to me that a large amount of our brain is involved with highly evolved locomotion mechanisms. And as I said, when you're sitting back with your eyes closed in a chair thinking about something, then it's not clear how much of that machinery is important. But it might be that-- I have a strange paper on-- I don't know if it's-- trying to remember its name. It's called-- do you think I can actually get a-- I can't remember the name of the title. Oh, I give up. The idea is that maybe-- in the older theories of psychology, everything is learned by experience in the real world-- so conditioning, and reinforcement, and so forth. In this theory I call internal grounding, I make a conjecture. Suppose the brain has a little piece of nerve tissue which consists of a few neurons arranged to make not a flip-flop, but a-- what would you call a three- or four-flop, a flip-flop with three or four states? Let's say three. So when you put a certain input, it goes from-- couldn't find the chalk. So here are three states. And here's a certain input that means if you're in that state, you go to this. And if you pop that input again, it does this. And if you, say, go counterclockwise, it goes. So three of them get you back where you were. But if I go this, this, and that, that would mean to go like this, this, and back. So this would be-- that means that's equivalent to just going one. Get the idea? In other words, imagine that there's a little world inside your brain, which is very small and only has three states. And you have actions that you can perform on it. And you have an inner eye which can see which of the three points of that triangle you're on. Then you could learn, by experience, that if you go left, left, left, you're back where you were. But if you go left, right, left, right, you're back where you are. 
And if you go left, left, right, that's like going one left. In other words, you could imagine a brain that starts out-- before it connects itself to the real world, it starts by having the top level of the brain connected to a little internal world which just has three or four states. And you get very good at manipulating that. You add more sensory systems to the outer world. And you get to get-- learn ways to get around in the real world. So I call that the internal grounding hypothesis. And my suggestion is, maybe somewhere in the human brain, there's a little structure that's somewhat like that, which is used by the frontal part of the cortex to make very abstract ideas. You understand? The more abstract an idea is, the simpler, and more stupid, and elementary it is. Abstract doesn't mean hard. Abstract means stupid. Real things like this are infinitely complicated. So we might have-- and I wouldn't dare suggest this to a neuroscientist. There might be some little brain center somewhere near the frontal cortex that allows the frontal cortex to do some predicting, and planning, and induction about a very simple-- a few simple, finite-state arrangements. Who knows? Would you look for it? Well, if you were a neuroscientist, you could say, oh, that's completely different from anything I ever heard. Let's look for it. And if you're wrong, you have wasted a year. And if you're right, then you become the new Ramon y Cajal or someone. Who's the best new-- who's the currently best neuroscientist? Maybe it's late. One more question. One last question. This is Cynthia Solomon, who's one of the great developers of the Logo language. Yay. Yes? AUDIENCE: So maybe it's a bad question for the end. I will ask anyway. What do you think about theories such as Rodney Brooks' theories that speak of no central [INAUDIBLE]? PROFESSOR: Completely weird. Obviously, those theories have nothing to do with human thinking. But they're very good for making stupid robots. And the vacuum cleaner is one of the great achievements of the century. However, his projects-- what was it called? Cog-- disappeared without a trace. That theory was so wrong that it got a national award. And it corrupted AI research in Japan for several years. I can't understand-- Brooks became popular because he said, maybe the important thing about thinking is that there's no internal representation. You're just reacting to situations. And you have a big library of how to react to each situation. Well, David Hume had that idea. And he was a popular philosopher for hundreds of years. But it went nowhere, and it's gone, and so is Rod. However, he is one of the great robot designers. And he may be instrumental in fixing the great Japanese nuclear meltdown. Because they're shipping some of his robots out there. The problem is, can it open the door? So far, no robot can open the door even though it's not locked. [APPLAUSE] Thank you.
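A note on the internal grounding conjecture discussed above: the three-state inner world is easy to make concrete. Here is a minimal sketch in Python-- the names InternalWorld, act, observe, and is_identity are invented for illustration and come from neither the lecture nor the book-- showing a three-state cycle with left and right rotations, where a learner equipped only with its "inner eye" could discover regularities like three lefts getting you back where you started.

```python
# A minimal sketch of the "internal grounding" conjecture: a tiny inner
# world with three states and two actions. All names here are hypothetical.

class InternalWorld:
    """A three-state cycle; 'left' and 'right' rotate around it."""

    def __init__(self):
        self.state = 0  # one of the three corners: 0, 1, or 2

    def act(self, action):
        # 'left' rotates one way around the triangle, 'right' the other.
        step = -1 if action == "left" else 1
        self.state = (self.state + step) % 3

    def observe(self):
        # The "inner eye": report which corner of the triangle we're on.
        return self.state


def is_identity(actions):
    """Does this sequence of actions return the world to where it started?"""
    world = InternalWorld()
    start = world.observe()
    for a in actions:
        world.act(a)
    return world.observe() == start


# The regularities described in the lecture, checked by direct experiment:
assert is_identity(["left", "left", "left"])            # three lefts: back where you were
assert is_identity(["left", "right", "left", "right"])  # the pairs cancel out
assert not is_identity(["left", "left", "right"])       # this equals a single left
```

Nothing in the sketch touches the outside world; the point of the conjecture is that a brain could get good at predicting and planning in a toy world like this before any external senses are connected.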
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
2_Falling_In_Love.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: I usually start by asking if there are any questions. But I thought I'd say a few things about chapter 1, and then see if there are any questions. I can't see the pointer. Oh, anybody remember how to get Word to [CHUCKLES] make its pointer not disappear? Maybe I mentioned this in the first lecture, but I was taken by this cute poem by Dorothy Parker because the first chapter was about love and stuff like that. So I tried to get the rights to reproduce it, and it turned out that she was angry at all her friends. She must have been a perpetually pissed-off person. And so she left all her literary rights to the NAACP. And I called them up for hours and they couldn't find the rights. So finally-- so it's in the version of The Emotion Machine on the web. But I had to resort to Shakespeare to replace her. Shakespeare's a slightly better poet, but he's not as funny as Dorothy Parker. [CHUCKLES] So the first chapter starts out-- or, it's mostly about all the things we don't understand about the mind, which is almost everything. And the first discussion is-- well, the whole chapter is making fun of the most popular ideas. And the most popular idea of the mind is that people think that they are not doing the thinking, but there's something inside them that's doing the thinking. And this idea-- that there is a self-- is embedded in just about everything we say and think. And really, it's hard to see how you would do without it. But if you ask, what is the self? Then since this idea is so popular, people begin to believe that there is such a thing and it takes all sorts of various forms. And the most dangerous form, maybe, is the one that religions exploit. Which is that inside a person, with all their complications, there's a little pure essence called the soul, or the self, or whatever you want to call it, and it's impossible to describe it or explain it in physical terms. And so that is one of the reasons why we think there are two worlds-- a physical world and lots of other kinds of worlds. And each of us has some imaginary model of what they are and what they're in. And philosophers talk about it and existentialists, and so on. So there are lots of problems about our ideas about ourselves. And in reading around for half my life, I was puzzled at the strange ideas that are around. And in Aristotle, I find the first intelligible theories of mind and emotions. So if you look at, in particular, Aristotle's-- there are a number of books and one of them is called Rhetoric. And it's full of theories about how people reason and influence each other. And I'll show you some quotes from that because when I look at the history that I've encountered about psychology, sort of Aristotle stands out as being the first and among the best. And as far as I can see, there were no psychologists nearly as good as him. Of course, we don't know whether there was a him exactly because he-- what we have of Aristotle is a lot of writing, but it's all cobbled together by students from all sorts of manuscripts by other people, and people who took notes. And Aristotle claims to have learned a lot from Plato. Well, we have very little writing from him. 
And so there you go-- three centuries before the Christian era, as it's called. And then a couple of thousand years later, you start to find people like Spinoza and Kant and John Locke and David Hume, who start to make psychology theories, very little of which is as good as the ones that Aristotle has in all his fragments. So one question that frequently bothers me, and it should bother everyone, is why did science disappear for 1,000 years? And the standard explanation is the rise of the great religions. And why did it come back? And you see the first signs of anything like modern science, at least in my view, with Galileo and Newton-- there were a couple of people before that. There's some people in the Muslim world who invented some high school algebra, and they make a big fuss about that. It looks like Archimedes, in a very recently discovered manuscript, computed an integral. He found the volume of a cone, which is-- what is it? 1/6 bh-- I forget. Anyway, so why did science disappear and why did psychology appear so late? Because there isn't much psychology in the modern sense until 1900, or the late 1800s, with Francis Galton and William James, who lived around here. And Wilhelm Fliess, who-- Sigmund Freud starts writing in 1895. People make fun of Freud, as I mentioned last week. But in fact, among other things-- how many of you have read the recent criticisms of Freud, which claim that he was a complete faker and never cured a single patient? This is popular stuff. I don't believe Freud ever really claimed to cure a single patient. So the critics, who are really very ferocious, claimed that he made up all his data, and so forth. But most of what Freud says is that psychoanalysis might be a good way to find out what you're really thinking, and discover more about yourself and your goals, and so forth. And I had the good or bad luck to be introduced to L. Ron Hubbard by John when I was an undergraduate. John Campbell was the great editor of the-- I think it was called Astounding Science Fiction in those days-- what a marvelous title. And this fairly mediocre science fiction writer, L. Ron Hubbard, invented a new form of psychiatry called-- what was it called? AUDIENCE: Dianetics. MARVIN MINSKY: Dianetics. AUDIENCE: Dianetics. MARVIN MINSKY: [CHUCKLES] It's pretty good. And I'll tell you that story another time. But John Campbell had Thanksgiving in the Commander Hotel every year and invited a bunch of friends. And I don't remember if that's how I got to meet Asimov and Heinlein, and other people. But anyway, I did, and science fiction had a big influence on me from my, actually, early years. But starting in college, it got very serious. Anyway, so chapter 1 starts to talk about this phenomenon of psychology. And one of the funny parts is this little section, 1.3, of trying to say, what are emotions? And I looked up emotions in dictionaries and-- can you all read that? I don't feel like reading aloud. But there's lots of discussion of emotions, and how mysterious and complex they are. And then the marvelous thing is how many words there are for emotional states. I think I got 300, but I don't remember. Anyway, here's from A to D, and I don't recall how I found those. But I think that's a lot. How many words for ways to think are there? Now that's a serious question because I complained, maybe, on the next page. No, I didn't. I found myself complaining that there were very few words for ways to think. 
And then this afternoon when I was pruning these slides, it occurred to me that I didn't really try. So maybe I just didn't think enough. So if there are a couple of words for everyday emotions, if any of you can find me a list of 10 or 20 common words for styles of thinking, I'd appreciate it. 'Cause I wonder if there are a lot, and if not, why not? So here is a list of typical situations-- grieving for a lost child, panic at being in an enclosed space. I'm not sure any of the words in the list of 300 standard emotions are good enough to describe how you feel for any of these not unusual states. "Have you ever lost control of your car at high speed?" No, but when I first learned to drive, I couldn't believe that you could read signs at the same time as-- well, anyway. One of the very best psychologists in history, or a pair of psychologists, aren't even called psychologists. These are two guys named Konrad Lorenz and Niko Tinbergen. And they-- somebody made up the word ethology. What they study is the behavior of animals. And in some sense, presumably, they're studying the psychology of animals because just as with the person, when somebody flies into a rage, you're not describing their mental state. You're describing something about how they behave. And so the ethologists, too, are psychologists. And Tinbergen and Lorenz, starting around the 1920s, started to analyze the behavior of animals in great detail. And so here's an example of how a certain fish behaves. I actually forgot which fish it is, but there's a picture of it somewhere in the book. And at different points in its life, it's in different phases. And this is just one diagram of a dozen for this particular fish and its reproduction, which involves an environment with plants and other things. And he divides its behavior into parenting, courtship, nesting, and fighting. And then, you see, each of those has a lot of subdivisions. And Tinbergen and Lorenz and some students discovered all these things by sitting in front of fish tanks and watching the fish for months and years. Tinbergen also spent years on some beach watching seagulls, and so he has a diagram like this for a particular class of seagulls. When I came to Boston, my friends and I used to go to Nahant and look at the tide pools there, where there are a lot of activities, and it was very interesting. And I got a big fish tank and imported all sorts of little animals and plants from the tide pools in Nahant. And I watched them for about a year and didn't learn anything. AUDIENCE: [LAUGHS] MARVIN MINSKY: And that was before I read Tinbergen. And then I realized there's something about those people, which is they could watch a fish and recognize all sorts of behaviors. And I would just watch a fish and wonder whether it was hungry, or-- what? Wouldn't you get bored swimming back and forth for three years in a-- anyway. So here, the great psychologists of our day-- Aristotle, 2,000 years ago, and Lorenz and Tinbergen in the 1930s, and Sigmund Freud and Galton and William James around 1900. And then what went wrong? There's almost no good psychology between then and the 1950s, when something called cognitive psychology started. And it was partly due to people who said, let's make psychology more scientific. And you've probably all heard of Pavlov or Watson. And what happened is, around 1900, some psychologists said, well, these Galtons and Freuds and William James are very poetic and expressive and literary, and they write much better than we do and tell good stories. 
But they're not scientists and they don't do reproducible experiments. So what we have to do is simplify the situation to find the basic laws of behavior. So let's take a pigeon and put it in a vacuum in the dark. Well-- [CHUCKLES] AUDIENCE: [LAUGHS] MARVIN MINSKY: Well, they didn't go that far. But they did put it in the dark and there were two illuminated levers to work. And you could make a sound and the sound could be very annoying, or you could have a bright annoying flashing light, or something. And the animal would push one of two levers and one of them would make the stimulus even more annoying and one of them would make it go away. And you'd plot curves of how often the animal pressed these levers so you would get a quantitative theory of how much it learned and how much it could remember, and how many trials it would have to do. And then instead of just looking at reactions to stimuli, they quickly switched to trying to teach the animal things by giving it two alternatives-- turn left or turn right, or push this button or that, or whatever. And if they pushed the one you approve of, then you'd give them a little pellet of food. And there was a lot of engineering so that you would make sure that the food got to them right away. Because if there were a 10-second delay between an action and a reward, the pigeon or a squirrel or rat or cat or dog would learn much less quickly than if there was a one-second delay. And anyway, that went on for 50 years, starting around 1900, Pavlov and his dogs. And there's a great movie that some guy came around with that had been taken of Pavlov's lab, and it shows him sort of like a great dictator, or something. There's this room with lots of cages and dogs-- mostly dogs, in this case. And Pavlov comes in and there are a bunch of lackeys who sort of bow and scrape because he's a lord. And he comes in, and all the dogs run into the corner of their cages and yelp. So the Pavlovians tried pretty hard to get that movie suppressed. And I haven't seen it in recent years. But anyway, Fred Skinner, who was a professor when I was an undergraduate at Harvard, was the first one to really automate this experimental psychology, and he invented what's called a Skinner box. But it's just a soundproof, light-proof, well-ventilated and thermally-regulated cage. And you can put a rat or a pigeon-- those are the most common animals. They're very inexpensive because they're free. No one knows much about dolphins. They've been studied for 50 years. Whenever John what's-his-name-- remember the name of the great dolphin-- AUDIENCE: Lilly. MARVIN MINSKY: Lilly-- thanks. He discovered a lot about dolphins and a certain amount about their communication, and a little bit about whales. But there's an interesting mystery-- I forget which whales, but some whales have a 20-minute song, and they repeat it for a whole season. And next year, that song is a little bit different. But it goes, essentially, without repeating-- it's very complicated-- for 20 minutes. And people have studied that a lot and no one has the slightest idea of what it means. And nobody even has any good conjectures, which bothers me. But I think what it probably means is this: when there's a whole bunch of fish somewhere for one of these whales, it might be 200 miles away, and it's very-- whales eat a lot. And it's very important to find where the fish are. 
And I believe this message, which changes a bit during the season, might be telling you where the food is on the Atlantic or Pacific coastline in great detail. Because if somebody finds a lot of fish somewhere, you have to swim 300 miles. And if they're not there-- so anyway, it's interesting that John Lilly got a lot of publicity, but he didn't discover squat. And finally, the dolphin studiers gave up because nothing happened. Has anybody heard anything in-- I haven't paid any attention for 20 years. Have you heard of anybody discovering anything about dolphins except they're very good at solving a lot of physical problems? Anyway, that's-- I'm bothered by the mystery of why was there some psychology in Aristotle's time? And why didn't it get anywhere till 1950, when there was regular psychology, but it was afflicted by what I call physics envy. Namely, you run into people like Estes and-- well, he was pretty good, actually. But there were a lot of psychologists who made up things, like Maxwell's equations for how animals learn, and there were generally three or four laws. And if there's a sequence of events, then animals remember a lot about the first few and the last few in the sequence. They don't remember much about the middle. And of course, the reliability of their memory depends a lot on how recent it was and on how powerful the reward was, and blah, blah. And so they get these little sets of rules that look like Newton's laws, and that was the kind of sort of psychological physics that the so-called behaviorists were mostly looking for. This was not what Tinbergen and Lorenz did because they wrote books with extensive descriptions of what the animals did and made little diagrammatic guesses about the structure of the subroutines and sub-structures. Anyway, end of history, but it's a nice question-- why do some sciences grow, and why was psychology just about the last one? I suspect it could have been earlier, but people tried to imitate the physicists and tried to say, maybe there's something like Newton's or Maxwell's laws for the mind. And they found a few, but they weren't enough to explain much. So there are a lot of questions. When Seymour Papert and I started thinking about these things, which was really around 1960, I had been working on some ideas about AI in the late 50s. And my PhD thesis was a theory of neural networks, which was sort of interesting, but never really went anywhere. I went to a meeting in London somewhere and gave a talk about a theory of learning that was based on some neural network ideas. And there was this person from South Africa named Seymour Papert who gave the same paper. And I hope this happens to you someday. Find somebody who thinks so much like you, only different enough that it's worth it. And that you only have to say about three words a day, and some whole new thing starts. Because we really did write the same paper and it had the same equation in it. And he had been working for Piaget, who was the first great child psychologist. I should have mentioned Piaget, who probably discovered more things about psychology than any other single person in history. And there are lots of people now who say he was wrong about point 73 because children learn that at the age of 2 and 1/2 instead of 2 and 3/4. I'm parodying the Piaget critic community, but it's pretty bad. And I think those poor guys are uncomfortable because Jean Piaget published 20 books full of observations about children that, as far as I know, no one had made systematically before. 
And in his later years, he started courting algebraic mathematicians because he wanted more formal theories. And in my view, he wanted to make his theories worse. And nothing much happened, but he did visit here a couple of times and it was really exciting to meet the starter of a whole new field. Anyway, Papert and I discussed lots of things. And somehow or other, we kept finding other-- more ideas about psychology. And it finally gelled into the idea that, well, if you look at the brain, you know that there are several different brain centers. What's all that stuff for, and how could it possibly make any sense to try to explain what it does in terms of four laws like Newton? Like, how does a car work? Is there a magical force inside the engine that causes the wheels to turn? No. There's this funny thing in the back called a differential, so that if the car isn't going in a straight line, the two wheels going at different speeds won't rip the-- if the two wheels were going at the same speed, the tread would come off in 5 minutes. You ever wonder what a differential is for? Well. So most of the car is fixing bugs in the other parts. That's what most of the brain is, because we started out as fish, or lizards, or whatever you like. And making the brain bigger wouldn't help much because you just get a heavier lizard that had to eat more and would think more slowly. So size is bad. But on the other hand, if you need another cubic inch of brain to fix the bugs in the other part, then the evolutionary advantage of being smarter had better make you able to catch a little more food per hour. And so each person is an ecology of these different processes. And the brain reached its present size about a million years ago, I guess. What's the current guess? Anybody been tracking it? They keep discovering new ancestors of humans and I don't have the patience to read about them because you know that next week, somebody will say, oh, that isn't in the main line and you were just unlucky to discover that skeleton. So anyway, Papert and I and a lot of students gradually developed this picture that the mind is made of lots of processes, or agents, or resources, or whatever you want to call them. And it's anybody's guess what they are. If you look at the anatomy of the brain, you know that people label regions, so it's very clear that this occipital lobe back here is largely concerned with vision. But I forget where the one for hearing is. If you destroy the part of the brain for hearing in some animals, you get a little bit of increased function in some part of the visual system that seems to enable the animal to hear a little bit and make some reactions. And there's a whole lot of hype-- I think you have to call it-- about the flexibility of the nervous system. That is, if certain brain areas get destroyed, other parts take over. They almost never are as good. And mostly, many functions never get taken over at all, but are replaced by ones that superficially seem similar. And so there's a whole lot of, I guess, wishful thinking that the brain is immensely resourceful and error-correcting and repairing. I think there was some idea that there was a general phenomenon. But if you do some arithmetic, you get an interesting result. Suppose that each function in the brain occurred in 10 different places at random. Then, if you removed half the brain, how many functions would you lose? Well, it takes almost no arithmetic to see that you would lose only about one function in a thousand. And so, in fact, you would never be able to detect it. 
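The arithmetic behind that one-in-a-thousand figure, assuming each of the 10 copies is placed independently at random: removing half the brain destroys any single copy with probability 1/2, so a given function loses all of its copies with probability

$$P(\text{function lost}) = \left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024} \approx \tfrac{1}{1000}.$$

By the same reasoning, with five copies the chance is $(1/2)^5 = 1/32$, which is the figure that comes up next.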
So this idea that the brain has enormous redundancy-- well, now change that number to five. Suppose each function is somewhat supported in five different parts of the brain. Then if you take off half the brain, then-- what am I saying? There's a one-in-32 chance of losing some significant function, so probably lots of things that we do are supported in several parts of the brain. Apparently, the language center is pretty unique, and some others. But be careful about the conclusions you read from optimistic neuroscientists. Anyway, Papert and I worked on this idea of how could these large numbers of different processes be organized? And we made various theories about it. And then around, I guess, the late 1970s, we stopped working together. And Papert developed his revolutionary ideas about education, which, certainly, have had a lot of influence, although they didn't sweep the world in the way we had hoped. And I kept working on the Society of Mind theory, and we didn't work together so much. But we still did plenty of criticizing and supporting of each other. Anyway, my theory ended up with this idea that-- it's sort of based on Freud. I don't know if I kept a picture of his here. Freud concluded that the mind was an interesting arena, sort of. And he had the mind divided into three parts. At one end of the mind is the part we inherited from most other animals, called the id, which is a bunch of instinctive, mainly built-in behavioral mechanisms. And a second part of the mind is what he called the superego, which is a collection of critics. So in Freud's first image, the brain is in two parts. One is a set of instinctive, built-in behaviors and the other is a set of critics, which actually are associated with a culture or-- well, with culture and a tradition. And you learn from other people things that are good to do and things that are bad to do, and that's called the superego. This is your set of values and standards and tests for suitable behavior. And the middle is this strange object called the ego, which is not what people think it is-- at least Freud's word. The ego is a kind of big neutral battleground where the instinctive behaviors-- I keep wanting to-- oh, you can see that arrow if I take my finger off it. And then gradually, as I kept trying to figure out how problems are solved and what kind of processes might be involved, I got this picture, which has six layers. And various people come around and say, I don't think you need to distinguish between layers four and five, or why don't you just lump all the three top layers into one? And I sort of laugh quietly and say, these people are trying to find a physics-like unified minimal theory of psychology. And they're probably right, in one sense. But if they do that, they'll get stuck. Because if you get a new idea, there'll be no place to put it. So if you have something that's very mysterious, don't imitate the physicists. Because if you make a theory that's exactly right and just accounts for the data, and there's nothing extra and nothing loose, then when you notice a new phenomenon, like dark matter, then the physicists don't know what to do. Should they regard dark matter as some obscure feature of space-time, or does it have something to do with this universe being near another one that you can't otherwise communicate with? And it's all very puzzling. But there are lots of things that don't fit into Newton's laws these days. And I'm not suggesting a six-layer theory of physics, but it might be worth a try. OK. 
So what am I-- I made up some nice slides. But I think I'll stop. So who has some questions, and what would you like to see in a theory of psychology? What do you want to be explained? A lot of people are convinced that there are some really serious problems and mysteries, like what is consciousness? And if you look at chapter 4, my feeling is consciousness is an etymological accident, that people got a word which is a suitcase for all of the things they don't understand about the mind and more. But once you've got a word and it goes in the culture-- consider the word consciousness for a minute from a legal point of view. Supposing you happened to be walking along and you're carrying something-- where is that pointer-- and it happens to stick somebody's eye out. Then it's very important when they sue you to establish whether you meant to do it, or whether it was an accident. Did you consciously plan to-- I can't think of the English word for putting somebody's eye out. There's beheading and all sorts of-- AUDIENCE: Gouge? MARVIN MINSKY: Gouge is a good word. AUDIENCE: Impale? MARVIN MINSKY: Yeah. So anyway, it's very important, for social reasons, to have a word for whether an action was deliberately violating the rules, as opposed to accidentally violating the rules. Like if you tripped on the stairs and landed on somebody and broke their neck, that's not a crime unless you were so clever as to make it appear that it was an accident, and probably-- anyway, you see what I mean. So we need a-- our whole system of fairness and ethics and social responsibility is based on the distinction between whether an action was deliberate or not. And so, did he do that consciously-- is a word for that. And somehow, the idea of conscious became elevated. Well, that's a very superficial-- you can probably think of 10 other reasons why a word like that-- yes? AUDIENCE: [INAUDIBLE] in terms of your writing, I-- it seems to be that there are both sides of the argument, both sides of it. But [INAUDIBLE] it seems [INAUDIBLE] in that-- I can well imagine is lack of representation. You have representation of self and representation of your mind. But then you say, there's no self or no consciousness. Why can't you think about consciousness as just a process that is reasoning about your own mind? MARVIN MINSKY: Well-- AUDIENCE: Is there a problem with that? I mean, I understand you don't want to talk about-- MARVIN MINSKY: No. AUDIENCE: --soul, 'cause that's a religious notion. But-- MARVIN MINSKY: No, but sometimes, when you say conscious, that is, do you remember doing it, which is-- AUDIENCE: Yeah, but don't I load up my process with-- it asks me what I think about my own mind? And then I retrieve that and I say, yeah, I do remember it. MARVIN MINSKY: Well, you're right, actually. I went to a lot of trouble to find 25 or 30 different uses for the word consciousness. And probably, if I-- or if one of us worked harder, we could take those and condense them into five or six much better ones that account for more stuff than the 30. AUDIENCE: Well, I think you got five right there-- ideas, goals, memory, thoughts, and feelings, so-- MARVIN MINSKY: That might be just right for something. Who knows? [CHUCKLES] Yes, well, I think that's a great criticism of one reason why people don't like these theories quite so much, because I proposed too many things. 
I really should reprint that criticism from Restak, or hand it out, because it's this neurologist who says, why is he telling us all these things about K-lines and representations, and so forth? And the answer is, he's from the community that doesn't have enough variety yet. And I'd be the first to admit that I try to go overboard and think of five more things than are in the literature. But that emotion thing is nice. Remember that was a serious challenge because when I made that list-- [CHUCKLES] I do have a laser pointer somewhere-- in my jacket, probably. How many words for at least noticeably distinct ways of thinking, or reasoning, or figuring out, or solving problems can you think of? Maybe there are 20 or 30. I just realized this afternoon that I never looked. I don't remember where I got this list. Yeah? AUDIENCE: Can I ask about your perception of free will? And I gather from the readings that you don't have a strong sense of free will, and so what is that-- MARVIN MINSKY: I think it's the same as the one for consciousness-- namely, it's a legal concept. The idea of free will is completely obscene, isn't it? What could it possibly mean if you did something for no reason? So it's a thoroughly empty idea, isn't it? Or what do you mean by it? Do you mean there's nobody ordering you around, so you're free to do whatever you want? But of course, you're not. You can only do what your computer computes you to do. AUDIENCE: [LAUGHS] AUDIENCE: So in a sense, in the same way that you draw the-- you showed the fish diagram, right? The fish's actions are a product of the environment in its current state, and it's essentially a Turing machine-- MARVIN MINSKY: Well, it's some kind of machine, yes, sure. AUDIENCE: And so, would you argue that we are also Turing machines that are just running out in the world and-- MARVIN MINSKY: Sure. I've never heard of any even interesting alternative. In other words, people who insist on free will appear to me to be like people who believe that there must be a god who created the world. What's the next step? Who created the god? They don't take that step. So if your will is free, OK, then who is controlling it? There's nothing there. But legally, it's great. Because if somebody stole some money of their own free will-- but suppose you were a peculiar kind of epileptic. And every now and then, when you go by-- AUDIENCE: [LAUGHS] MARVIN MINSKY: --your hand goes out and steals things? Then they-- what do they do? They put you on parole? No, this is very strange. But if you look at religions, you see that people make money on them. 13% of the world's product goes to people who make their living on concepts like free will and consciousness, so it's a big money thing. It's not just an accident. It's an industry. AUDIENCE: So both of those are concepts of society. MARVIN MINSKY: They're [INAUDIBLE] AUDIENCE: You could imagine a society without the concept of consciousness and free will, but those are [INAUDIBLE] MARVIN MINSKY: I don't think you could. You'd have to make up something to keep people in check and under control and to train them. It's like the rat being-- the rat needs somebody to press the reward or punish button. And we have it built into our-- our culture works because you build into people's heads the machinery for suppressing doubt. And it's very clever. But you should think of it as an industry rather than an inexplicable phenomenon. How much money goes into-- yes? AUDIENCE: How many ways of thinking can you think of? MARVIN MINSKY: Say it again? 
AUDIENCE: How many ways of thinking can you think of right now? MARVIN MINSKY: That's my challenge, in fact. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: There's probably a big list in some chapter or other. But there's nothing like this. What's the trick? Three-- if I go like that. Actually, Dragon has a thing so you can tell it, make things bigger. How many of you use the new Dragon program? Speech thing. I can't believe how good it is. Oh, I was talking to Henry Lieberman about it earlier. AUDIENCE: So I'm under the impression that you would say that the Society of Mind theory applies both to humans and animals. Is it just that we have higher levels of organization than them? And so where would you draw that line? Do animals not have a notion of the self the way you described in the book? Like long-term planning and stuff? MARVIN MINSKY: That's a great question. And it'd be interesting to think about ways to investigate it. People or researchers are often, in fact, raising that question of do animals have a representation of themselves? And there's a famous experiment, but I can't remember what its current status is, where you put a red dot on a chimpanzee's head. And when the chimp passes a mirror and sees that, the chimp might go like that. Whereas I don't think a dog, when it passes a mirror, would rub its forehead to see if it has a red spot. I had a cat who walked past mirrors, because we have some full-sized mirrors around the house. And the cat walks by, and there's this other cat in the mirror, and she pays no attention to it whatever. So of course, I don't know what happened the first three times she walked by that mirror. Because if you see another cat going by, you would think it would-- anyway, it's a good question. And people ask that. And there's some evidence that elephants have a model of themselves, and maybe dolphins. Have any of you heard any stories of other animals that can recognize, for example, when they've been painted? AUDIENCE: Elephants? MARVIN MINSKY: Which? AUDIENCE: You said elephants? MARVIN MINSKY: Yes, I think elephants. AUDIENCE: Infants can't [INAUDIBLE] There's a famous child psychology experiment, where if you're less than a couple months old, you actually fail this test. MARVIN MINSKY: Oh. AUDIENCE: So it's actually something that comes as a sign of your child progressing. So it might not be intrinsic to humanity [INAUDIBLE] So [INAUDIBLE] MARVIN MINSKY: This is off the topic, but I once had a great email correspondence with some woman who was getting a PhD in France about how babies recognize their mothers. And she concluded with-- she did experiments, having other people walk into the room with a mask of the mother or a different hairdo and so forth. And for the first two months or three months, I think, it turned out that the baby recognizes the mother by the hairdo, which had not been known. And then, I think after three months, it's recognizing the mother by her face. At that point, she was doing experiments where you get another woman wearing a copy of the mother's face. And so now there's two mothers, and the baby is absolutely delighted. And then I can't remember, but then I think at four or five months, when two copies of the mother come in, the baby gets really panicked. So I lost track of her. If you have a baby, let me know. She got her PhD for this. And I haven't heard anything since. Yes? 
So it just occurred to me that it seems like if you go back to the list of feelings, it seems like when we talk about feeling, we're talking about a state that the brain is in. So it might be a complicated state that's like some combination of a lot of different parameters. But it's a state that you can stay in for like an arbitrary amount of time. But I think thinking is something that's more sequential, as in like when you're thinking, you're necessarily changing the state of your brain all the time because you're moving bits around. So I don't know. I just thought of this. MARVIN MINSKY: No, I think that's right. That I think when we talk about intelligent behavior, you're absolutely right. What you've got is a process that's criticizing itself and seeing when you got stuck and finding things to do. And I suppose in each emotional state, you're certainly also thinking. So that's going on, but maybe it's more restricted. Like if you're confronting somebody, and there's a sort of conflict, then almost all your thoughts are constrained to that subject. And it's not as resourceful. But I'm just improvising. AUDIENCE: Or, for each emotional state, when we say that we're in an emotional state, does it mean that certain switches get flipped, or that something is above a certain threshold? And you can have thoughts that are about anything in that state. But perhaps the state itself influences the type of thoughts that you're likely to have. MARVIN MINSKY: OK. I think what I'm talking about in this context is sort of extreme forms, where the person changes into another machine. Like, an angry person won't listen to reason, or it's very hard to deflect them, and so it's this kind of rigid thing. But humans are generally-- are rarely in such extreme states, where nothing gets through. But the whole point of that was that-- I just realized that I maybe was just too lazy-- that we have this huge vocabulary of nuances of emotional activity. Also, people think that these are hard to explain and mysterious and non-physical and blah, blah, blah. But why don't we wait until next week and see if somebody comes up with-- or see if we can come up with a set of 30 or 40 words about intellectual states? Curiosity-- I just don't know how many there are. And I haven't thought of any in the last few minutes. Yes? Has anybody thought of a couple of-- AUDIENCE: [INAUDIBLE] like Csikszentmihalyi's concept. MARVIN MINSKY: What's the word? AUDIENCE: Flow. AUDIENCE: Flow. AUDIENCE: Like Csikszentmihalyi, like when you're really engaged in some activity that you're doing, and you're really in the zone. MARVIN MINSKY: Yes, there's a state of keeping other things out, so you can focus-- not being interruptible. Yes? AUDIENCE: So in The Society of Mind, you talk about agents and how they divide things between themselves. But I don't see anything about like how evolution modified all these agents. I believe that evolution modified the way we think right now. I don't know if you will solve-- there is a paper written like two years ago. It's called "The Region of Behavior." And this guy tried to explain how we make some decisions-- evolutionary, like from the point of view of our [? species. ?] We are maximizing the probability of really reproducing a self. But individually, we are not increasing this [? efficiency. ?] 
And I think somehow the decisions of these agents or resources would be largely determined by evolution, since we have had a very long time of evolution of the human being. And somehow, we are hardwired to make some decisions. So for example, he gives the example of a guy-- MARVIN MINSKY: Well, for lizards, that's certainly true. But why do [INAUDIBLE]-- humans keep changing their environment. AUDIENCE: So for example, yeah, so he uses this example of tossing a coin. The guy says that this coin is unfair. He doesn't say the probabilities. But there's a 75% chance of getting heads and 25% of getting tails. And we-- like the subjects-- choose tails 25% of the time, even though they can keep count of the number of times it comes up heads. And even though if you always choose heads, you will get more money or whatever-- you would be making the right decision. MARVIN MINSKY: That's called probability matching, and it's not a good strategy. AUDIENCE: Yeah, but if humans do that, like-- MARVIN MINSKY: No-- well, they do it if a psychologist rigs the experiment very carefully. It turns out that the best thing to-- what do you think is the optimal strategy? AUDIENCE: Always choose-- MARVIN MINSKY: No. The optimal strategy turns out to be the square roots of the probabilities, normalized to add up to 1. And I'll give you a proof next time. This theorem is due to Ray Solomonoff, who invented inductive probability theory. AUDIENCE: But evolution-- MARVIN MINSKY: So evolution, if evolution did probability matching, then it would be wrong. And I bet you'll find out that those experiments are wrong. You have to see how he rigged the experiment so that, if the probability is 25%, people guess it 25% of the time. AUDIENCE: They made the experiments with [? goldfish. ?] MARVIN MINSKY: I don't know. There was a theory about why you would expect it. It's a good question. I don't think people use probabilities, though. So even if an experiment shows some, I would look for a flaw in the design of the experiment. Yes? AUDIENCE: So Professor Minsky, you talked a lot about emotions in today's lecture. And my understanding is that like for someone who has this distinct personality, they might have a predisposition to feel a certain emotion of like anger, depression, or whatnot. My question is whether you have any insight or theories on to what extent our personality is affected by events or influences that happen to us over the course of a lifetime. And to what extent is it impacted by the chemical or biological makeup of our brain? MARVIN MINSKY: Well, you're asking what do people learn? We don't care if it's chemical or-- see, if it's chemical, it's still physical. AUDIENCE: I've read a little bit about treatments for depression. And the argument is that a lot of the reason for depression is the chemical and biological way our brain is constructed. MARVIN MINSKY: Well, there's lots of complicated things about the brain. One feature of the brain, that I don't know if everybody-- you know that there are inhibitory and excitatory synapses. When one neuron connects to another, the impulse that goes along the axon to the target neuron may reduce the probability or the strength of its firing or increase it. So that's called inhibiting or-- there's no-- or exciting. That's not quite the right word, is it? Now, generally in the nervous system, as a rule, but not always, if you follow a chain of activity, it goes inhibiting, exciting, inhibiting, exciting. 
If you have too many excitatory things and there's a loop, then it would explode. And it would wear itself out in jig time. So there is this general feature of the anatomy that you alternate. So when somebody talks about a drug having an inhibitory effect, that's sort of weird because it's inhibiting half the neurons. And therefore, it's lowering the thresholds of the ones they're connected to and so on. I think the best thing is until you have a diagram of the functional relations between different brain centers, it might be best not to try to make generalizations about how the chemistry works. It's easy, but people think of adrenaline as a stimulant-- epinephrine. But in the nervous system, locally, it may be inhibiting things that are inhibiting something else. And so it appears to be exciting. Yes? AUDIENCE: So I have a problem understanding the difference between thoughts and emotions. I know it might be a simple thing. But the only thing that I can separate in my mind is that thoughts, let's call it a time constant, I can change it kind of [INAUDIBLE] Emotions-- the time constant is smaller. And I can control it again much less. But since there is no-- and at least in this class, there is no free will, how can I make a decision [INAUDIBLE] by my [INAUDIBLE] MARVIN MINSKY: Oh, I think it's a waste of time. As far as I can see, emotional mechanisms are generally lower-level, simpler ones than the ones that involve several layers of the-- more layers of the brain. So it's just a relative thing. It's not that some states are emotional. You're always having some high-level thoughts and low-level thoughts. And the distinction-- I just don't know why the distinction has occupied so much attention. I think it's because-- and that goes back to having more words for-- or asking how many words do we have for ways to think. It seems to me that in popular culture, there are very few words for ways to think and lots of words for emotions, and so they are very prominent. Maybe you have to be smarter to distinguish between ways to think, and people generally are dumb-- not because they're inherently dumb, but they come from cultures which bully you if you-- what happens if you're in third grade, and you're smart? You get it beaten out of you, and you learn not to show it. AUDIENCE: I'm just thinking about that question of why didn't science happen earlier? Why don't we have more ways to describe different ways to think? So that's sort of puzzling. But have we just not reflected as much on different thinking states or different approaches? MARVIN MINSKY: I wonder if the Greeks had more when we-- AUDIENCE: I think they did. I think they had also more concepts of ideas in different states and different potencies. MARVIN MINSKY: Who has a theory of that? What's your theory of the Middle Ages? How could things get dark for so-- so dark for so long? AUDIENCE: Well, I do have a theory. [INAUDIBLE] MARVIN MINSKY: And are we about to have one? AUDIENCE: I think it has as much to do with the channels in which one can communicate ideas to other people-- whether they exist or not, or whether we have [INAUDIBLE] The Middle Ages were characterized by scientific discoveries being kept as family secrets. MARVIN MINSKY: Cardano knew how to solve cubic equations, and he wouldn't tell anyone. AUDIENCE: Well, the classic stupid example is baby tongs, which for 300 years made a single Italian family very rich. Using tongs to extract the baby in childbirth increased the success rate in difficult births by about 10%, they say. 
MARVIN MINSKY: Wow. AUDIENCE: And that was enough to build a family fortune, until some servant finally spilled the beans, and that was the end of that. MARVIN MINSKY: Yeah, wow. So who has a theory of the Middle Ages? Is there a standard theory? Yes? AUDIENCE: Well, the concept of the Middle Ages as the Dark Ages is something that emerged mostly in the Renaissance, when people in the 14-- in the 15th, 16th century tried to present themselves as going back to the Classical Age of scholarship of ancient Greece and Rome and as being better than their predecessors for the last few years. And this mostly happened because of the discovery of manuscripts that were translated from the ancient Greek, and in certain cases Latin, by Muslims, who at the time were receding from Europe. So the entire concept of the Middle Ages might be a fabrication of the Renaissance. There were some significant discoveries at the time. MARVIN MINSKY: That's a nice idea. In other words, when was St. Patrick? AUDIENCE: St. Patrick? MARVIN MINSKY: I'm told that he popularized a lot of technical manuscripts-- brought them back into Europe. He has two achievements. One was bringing scientific culture back, and the other was getting snakes out of England or something-- Ireland. I don't know which he was sainted for. Don't you have to do three miracles, or is it-- what? Yeah? AUDIENCE: Yeah, my theory is the Middle Ages will end around 2100. So I guess I'll say after the Middle Ages end, they'll say, you know, those guys back in the 21st century, they had no idea how thinking worked. They couldn't even think of a few ways to think. They had poverty. They had wars. Those guys were barely out of their loincloths. MARVIN MINSKY: Right. I just read a history of AI. I forget who wrote it. But it had this section saying-- it mentioned the Newell-Simon-- there was a thing called General Problem Solver, which I mention a couple of times in the book. And it's the idea that the way to solve a problem is to find-- it's a symbolic servo-- find the difference between what you have and what you want. And look in your memory for something that can reduce that difference. Keep doing that. And, of course, it's important to pay attention to the more important differences first and so forth. And I'll send you this article. This article is saying that they made a terrible mistake, and this was a trivial theory. And that's why nobody uses it anymore. And it was interesting how many AI people fell for that idea in the 1960s. My complaint has been that if you look in a modern textbook on-- you must have some in your first volume, Pat. Didn't you have some GPS things? AUDIENCE: Oh, yeah [INAUDIBLE] MARVIN MINSKY: If you want to keep up with AI, you should read Patrick's textbook, even though people are starting to use this new one, which doesn't have any AI in it. Who is it by? AUDIENCE: Norvig? AUDIENCE: Russell and Norvig. MARVIN MINSKY: Russell and Norvig. It's probably pretty good technically. But I leafed through it, and it didn't have any-- never mind. It's probably better than I think because I'm jealous. Yes? AUDIENCE: Kind of a different topic. In The Society of Mind, when you're talking about the amnesia of infancy, when you forget what you've learned, and things that were once difficult become common sense. And you can't even remember how it was you learned that. So I've just been wondering about the reverse of that process, when you try and say teach something to somebody that [INAUDIBLE] bringing back up the different levels. 
I'm not exactly sure if it was a question. [INAUDIBLE] on it and wondering about is that itself another way of relearning the things that we learned? MARVIN MINSKY: That's sure an interesting question. When I first learned about programming, I had the idea that maybe babies think in machine language. And then after a while, they start to think in Fortran. And then finally, when they're a little older, they think in ALGOL or something. But when they switch from machine language to Fortran, then they can't remember their earlier thoughts. And there's almost no evidence of people finding genuine recollections from two-year-olds at later ages. Now, almost everybody thinks they remember something, but there's the problem that you might have rehearsed it and translated it into the Fortran and the ALGOL and the Lisp and the Logo-- whatever it is. One of my greatest influences was a great mathematician named Andrew Gleason at Harvard, whom I met practically the first day I got to Harvard. And he would always talk about things I didn't understand. And I would go home and look them up and try to understand them. Anyway, one day, we were talking about number forms. And number forms are a psychological phenomenon which about 30% of people have. And it was first described by Francis Galton. And the phenomenon is: if I ask you to close your eyes and tell me where is the number three, how many of you have a place for the number three? Well, that's a few. And so typically, if you imagine the visual field, that's a windshield, I guess. So there are these numbers in there. They're nowhere in particular, except that it's usually like that for an older child. So here are these numbers. And what's more, in some people, they're colored. So I was talking to Andrew. I had read this Galton paper, which was 1890 or 1885 or something. And so I was asking people if they had number forms. And he said, oh, yes, he has one. And he sketched it for me. And he said, and they're colored, too. Oh, and his went way up. And the prime numbers were bright. What am I doing? Maybe the composite numbers were. Something was bright, and they were colored. So I wrote this down. And over the next couple of years, I would look in antique stores for old children's blocks. And I found a set of blocks that matched that. And Andrew Gleason said that he knows when he acquired this thing, and he was about four years old. And he had a window in his house. And there was a hill. And he could just see over the sill, and he imagined these numbers on the side of that hill-- blah, blah, blah. Anyway, people who don't have a number form don't know what I'm talking about. And I don't know if the 30% is still true. But it's an interesting phenomenon. And in most cases of early childhood-- well, you can't find out, because some children do remember details of a house they lived in. But you don't know if they've copied it. So what was the original question? How much can you remember from infancy? L. Ron Hubbard thought you could go back to before you were born, and you could remember people talking about you when you were still in the womb. So anyway, John Campbell said, you should look into this. And a few of us made an expedition. We went down to Elizabeth, New Jersey to visit the just-starting-up Dianetics Center. And I met this L. Ron Hubbard, who had green eyes and was quite hypnotic-looking.
And the end of the story is he had been writing about how if you took this treatment of Dianetics, then you could memorize an entire newspaper in five minutes and do all sorts of miraculous things like that, once your mind had been cleared of aberrations and obstacles. And it became a big industry and later turned into Scientology. I'm sure you've all heard about that. So we asked Hubbard to look at a newspaper and tell us what was in it. And he explained he was so busy training the other people to be cleared that he hadn't had time to go through the procedure himself. And I never saw him again. Yes? AUDIENCE: What are your thoughts on memes and the fact that we're just replicas, or our thoughts are actually all replicas of something that-- MARVIN MINSKY: Of memes? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: You mean Dawkins? AUDIENCE: Kind of, yeah. MARVIN MINSKY: I didn't quite get the whole question. The idea that-- AUDIENCE: Oh, so a meme-- so for example, the way we all talk probably very much mimics the way our parents talked or people around us, and possibly the way we think as well. So how does that relate to how our mind develops? Are we actually creative original characters, or are we not? [INAUDIBLE] MARVIN MINSKY: Well, of course, it's both, because you learn things from your culture. And then you might just mainly repeat things. Or you might get the knack of making new ideas. I'm trying to remember what Dawkins's main ideas are. He invented the word meme to say that the ideas that people have might be considered to be somewhat similar to the genes in our heredity, and that societies are systems in which these memes, which are conceptual units of meaning or knowledge, propagate around and self-reproduce and mutate and spread. And I don't know what to say about it, except that it's obviously true that every now and then, someone gets a new idea and tells people. And for one reason or another, they either forget it or tell someone else. And after a while, it spreads. And some of them fill up the whole culture, and some just die out. And whatever else Dawkins says, he's a very smart guy. But what mostly remains in my mind is that he's explaining that religions are mostly made of these memes, and they're very bad and cost the world a great deal in progress and productivity. In other words, he's a militant atheist. And there are about five bestsellers in that business today. But I don't know what else to say about memes. It's an obviously generally correct idea. But the great thing about genes is we know the four nucleotides they're made of and how they're roped together and all that. And I don't think Dawkins's theory develops anywhere near as elegantly as modern genetics. So it would be nice if it could, but-- a really good theory of good ideas would be nice to have. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: What would it look like? Someday, we'll have an AI that just punches them out. AUDIENCE: I think it would [INAUDIBLE] a really good language. MARVIN MINSKY: Oh, right. Robert Heinlein has some stories in which the superintelligent people have a language that's so dense that in five syllables, they can explain something that would take you a half hour. I forget what it's called-- Loglan. Anyway, if you need a good idea, read Robert Heinlein. Sure? AUDIENCE: In [INAUDIBLE] you mentioned this, where you talked about how geniuses might have just come up with better ways to think--
better ways to learn about how to [INAUDIBLE] learning. But why do you think they never mentioned it? Why haven't they propagated their method of better learning around how [INAUDIBLE] learn? MARVIN MINSKY: That's a great question. And-- AUDIENCE: Do you think [INAUDIBLE] we don't have like a concept of an idea that improves learning? MARVIN MINSKY: There are a couple of phenomena. Like, how come there were so many geniuses in Athens? And then some of the best mathematicians came from some high school in some little country next to the Baltic. Bulgaria? Romania. Yes, there was some high school in Romania that not only produced von Neumann, but about five or six other world-class mathematicians. I don't remember the details. So that's a nice question. If there are these great memes, how come there aren't more big pockets of them? But there are a lot of cultures which were very inventive in other than intellectual fields. How come Paris got all those artists? And how many of you saw the Woody Allen movie? What's it called? Paris at Midnight? It's so funny. AUDIENCE: Like, would we be able to think of one-- say, one idea that would improve? Your improvements are popular. MARVIN MINSKY: Right. That's a good question for each of us. What's your very best idea? Stop fussing with the other ones, and get that one out. There must be some people who are very quiet and only speak once in a long time. We should watch them carefully. Yes? AUDIENCE: I guess along those lines, do you ever feel restricted by language because [INAUDIBLE] MARVIN MINSKY: Wait. It's the sound. I can't hear with it. Is that strong enough to lift you? AUDIENCE: Let's give it a try. MARVIN MINSKY: I'm sure someone's done it. Sorry about that. AUDIENCE: Oh, do you ever feel restricted by language, in that you must represent your theory of mind or any theory or idea with language? MARVIN MINSKY: No, I don't. But I once was jealous when Papert explained that he got some idea, and then he explained that he gets ideas like that when he thinks in French. And-- AUDIENCE: Well, he'd draw pictures, too. MARVIN MINSKY: What? AUDIENCE: He would draw pictures, too. MARVIN MINSKY: Oh, yeah. AUDIENCE: So it's not just language. MARVIN MINSKY: That's a language-- graphics. So we ought to have devices within the next few years that draw pictures when you think. It's so funny. You know, we had cyborgs. What were they called? I mentioned them last time. We had Steve Mann, and who's the other one? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Yeah. So there were these two guys around the Media Lab wearing various things on their heads. And they were always typing. And you'd ask them a question, and they'd search Google. And when was that? 1990? AUDIENCE: 1995 I think was when Steve Mann was here. MARVIN MINSKY: But it's all gone. And nobody walks around with a direct connection to the web. AUDIENCE: Well, Steve Mann still does. MARVIN MINSKY: Yes. Anyway, I certainly expected it to turn up-- something worn. So anyway, you should be able to buy one, one of these days. And what's her name? Who was that nice woman who had the EEG thing? Do you remember, at TED? I forgot her name. Anyway, she had this sort of helmet, which had about 20 electrodes. And she induced me to put it on. I was on a stage with about 1,000 people there, which was rather funny. And there was a little spot on a CRT. And I got to think about it moving one way or the other, rewarding it when it did the right thing, for only about half a minute.
And then I could steer it around. So here was a nice primitive gadget where you could almost draw, just by thinking, with this spot. And then she started a company and hasn't sent me one, because maybe it was just beginner's luck or something. AUDIENCE: It was Tan Le. AUDIENCE: Tan Le? MARVIN MINSKY: What? AUDIENCE: Her name-- Tan Le? MARVIN MINSKY: Yes, Le. Yeah. Did you find the company? But you would think there would be lots of people wearing stuff, and-- AUDIENCE: Well, see, she's doing it right now. AUDIENCE: EMOTIV. AUDIENCE: EMOTIV. MARVIN MINSKY: With your keyboard, right. Why don't we take a five-minute break? I don't know. Well, I hate to interrupt, because I see 10 different productive discussions. AUDIENCE: I know. MARVIN MINSKY: Yes, I saw quite a few apparently productive discussions. Maybe that's what the class needs. But anybody come to a conclusion? Yeah? AUDIENCE: So I was just resting over here. MARVIN MINSKY: See if you can knock the wall down. Yeah? AUDIENCE: It's not a conclusion, but it's a question. It's a thought experiment. So if you had a black box that could replace part of your brain, and let's say you could replace like 5% at a time-- at what point do you assume that there's still this self entity? At what point would you lose yourself? MARVIN MINSKY: If you change? The question is how much do you have in common with the you of yesterday, as compared to when you graduated grade school? So this question of-- the idea of identity is very, very fuzzy. Yeah? AUDIENCE: So that question sort of reminded me of these peculiar cases of transplant recipients suddenly having preferences like the person they got the transplant from-- like, heart transplants and stuff. Somehow, they start to like the same foods or use the same words, or there are odd things like that. AUDIENCE: Marrying the same spouses. MARVIN MINSKY: I read a science fiction novel by Robert Sawyer. Any of you know of him? A Canadian writer. And it has to do with somebody who has a fatal disease, so he's going to die soon. But the technology is around where you can make a duplicate of him. And so he has a duplicate made. And he is sent to the moon for some reason I can't remember-- which is a kind of nursing home for, I think, people who are enfeebled and do much better with 1/7 gravity. So there was some reason why. Anyway, the original copy is sent to the moon, and the substitute takes over. But then our hero is miraculously cured by eating the right stem cells or whatever. I don't remember. So he wants to come back. And the question is, who gets the car? So I can't remember the title of the novel, except that it has "alien" in it. Alienable Rights is not-- something like that. But I wrote an article called "Alienable Rights." So are you the same as you were five minutes ago or five years ago or whatever? And as far as I'm concerned, the answer is, who cares? It's a sort of silly question, because no two things are exactly the same ever anyway. But again, a lot of these questions which look philosophical are legal. So the joke of that novel is that who owns the car is what matters to decide who is the real original and who is the copy. Frederik Pohl wrote a similar story much longer ago, where people are copied. And the copy is sent on a one-way trip to some planet to fix a broken reactor, and they always die. And you get a million dollars for providing this copy. But one of the copies survives, so it's the same plot. I can't remember the-- if you're looking for a good idea, go to 1950s science fiction and look for A.E.
van Vogt or Frederik Pohl or all those wonderful writers. That was before it was necessary to describe really good characters. And science fiction got better and better for the literary critics and generally worse and worse for the science fiction fans. Do we really have any more questions? Yes? AUDIENCE: So where do you think we [INAUDIBLE] for sharing information between many people in a short amount of time is by using the internet and everything like this. Do you think that this will change-- this will bring up many more ideas? Do you think that this will hinder it? Because before this time, people had problems with sharing information. And also now, it's much easier to get a large group of people working on one thing. Do you think that this will change the way that we think? And do you think that this will make us [? data ?] [? resource? ?] MARVIN MINSKY: It's a tough one. Bad things can happen, and good things can happen. But that's funny, because that reminds me, again, of science fiction-- because in science fiction, many, many years ago, some writers got the idea that there would be something like an internet. And some people realized there would be flash crowds. And now, there are flash crowds. And I remember even as a kid talking to people who said, why not wire up the voting machines so that they're always there? And so if somebody in the government wants to know, should we do this or that-- should we bomb China or not-- you could get 100 million people to run up to the keyboard and say yes or no. And presumably, when the-- what do they call those? That great crowd of Jeffersons and Franklins and? AUDIENCE: The Founding Fathers. MARVIN MINSKY: The Founding Fathers did a lot of things to prevent that. And the one that they focused on, which was one of the most effective, was called the Electoral College. And the United States is different from other places because we don't elect congressmen or presidents. We elect smart people from the community who then get together and decide who should be president. And of course, now, if anybody-- now they belong to parties. And if any of them voted for the other party's candidate, there would be hell to pay. But it was a great idea, because the Founding Fathers realized that if you had instant feedback, which is what Hitler got, then you could say something really exciting, and everybody would press the "yes" button, and then you kill all the Jews. And then the next speech, you kill all the black people and all the yellow people and all the people whose last name doesn't begin with M. And so what you don't want is instant feedback. Now, the new social networks are getting us close to that. And the question is, is it time to have-- is it time to stop that? Is it getting dangerous? I don't know. But there must be a lot of people who are recognizing that this thing is creeping up on us. And you might be able to get 50 million people to do something reckless in a few minutes if you don't put some limits on it. I don't think we could get the Electoral College back, because you'd have to get a majority to do it. What does it take to amend the Constitution? 2/3? AUDIENCE: 2/3. MARVIN MINSKY: We'll never see 2/3 again. It's the end of America. Well, we have three minutes. Yes? AUDIENCE: If you were to design a direction for the field of psychology-- and obviously, more than just a set of debugging tools-- what would you say they should be doing? MARVIN MINSKY: They should read Patrick Winston's thesis. The psychologists now have disappeared into the tar pit of statistics.
And they don't have the idea that knowledge needs complicated representations. And I don't care whether you assign probabilities to them or put them in the order of what you thought of them or do what Doug Lenat did in his thesis of swapping things when one worked better than another. But I forget the question. But I think we've got to get better ideas about representation of knowledge. And I don't know where they're going to come from now that the whole AI community is drifting into these ways of avoiding representations. I haven't read the Norvig-Russell book. Can anybody summarize what it says about knowledge representation? Who's read it? AUDIENCE: Well, there's a chapter on logic and first-order logic. MARVIN MINSKY: That's so funny. AUDIENCE: [INAUDIBLE] logical. MARVIN MINSKY: First-order logic is what Newell and Simon thought of in 1956, before they thought of the so-called GPS thing. Logic can't make analogies. It's a very bad thing to get stuck with, 0 or 1. Maybe one of our papers should be on what should AI do next year? 9 o'clock. Thanks for coming.
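A brief aside on the General Problem Solver that came up in this session: the "symbolic servo" loop Minsky describes (find the most important difference between what you have and what you want, then apply something that reduces it) fits in a few lines. The sketch below is an editor's reconstruction of means-ends analysis in miniature, not Newell and Simon's actual program; the car-repair domain, the operator table, and the priority ordering are all invented for the example.

```python
# Means-ends analysis in miniature: repeatedly find the most important
# difference between the current state and the goal, then apply an
# operator that reduces it, recursively achieving preconditions first.
# States are sets of facts; the whole domain is invented for this demo.

OPERATORS = {                       # name: (preconditions, effects)
    "drive":      ({"car_works"}, {"at_work"}),
    "repair_car": ({"have_money"}, {"car_works"}),
    "withdraw":   (set(), {"have_money"}),
}

PRIORITY = ["at_work", "car_works", "have_money"]  # biggest differences first

def solve(state, goal, depth=10):
    """Return a list of operator names transforming state into goal."""
    if depth == 0:
        return None                  # give up on runaway recursion
    diffs = goal - state
    if not diffs:
        return []                    # no difference left: done
    diff = next(d for d in PRIORITY if d in diffs)
    for name, (pre, effects) in OPERATORS.items():
        if diff in effects:          # this operator can reduce the difference
            sub = solve(state, state | pre, depth - 1)  # achieve preconditions
            if sub is not None:
                # Simplified bookkeeping: assume the sub-plan adds exactly pre.
                rest = solve(state | pre | effects, goal, depth - 1)
                if rest is not None:
                    return sub + [name] + rest
    return None

print(solve(state=set(), goal={"at_work"}))
# -> ['withdraw', 'repair_car', 'drive']
```

The article Minsky mentions called this a trivial theory; the point of the sketch is only that the difference-reduction loop itself is a few lines, and everything interesting lives in the representation of states, differences, and operators.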
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
4_Question_and_Answer_Session_1.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: I presume everyone has an urgent question to ask. Maybe I'll have to point to someone. AUDIENCE: One over there. MARVIN MINSKY: Oh, good. AUDIENCE: So [INAUDIBLE] exactly what's said, but you said that maybe the [INAUDIBLE] lights are associated to the glial cells. Is that right? MARVIN MINSKY: Oh, I don't want to speculate on how the brain works, because-- [LAUGHTER] because there's this huge community of neuroscientists who write papers about-- they're very strange papers, because they talk about how maybe it's not the neuron. And I've just downloaded a long paper by someone whose name I won't mention about the idea that a typical neuron has 100,000 connections. And so something awesomely important must go on inside the neuron's body. And it's got all these little fibers and things. And presumably, if it's dealing with 100,000 signals or something, then it must be very complicated. So maybe the neuron isn't smart enough to do that. So maybe the other cells nearby that support the neurons and feed them and send chemicals to and fro around there have something to do with it. How many of you have read such articles? It's a very strange community, because-- I think the problem is the history of that science. At first it was generally thought that all the neurons were connected to each other, because as far as you could tell with the microscopes of the time, you couldn't see enough. And then around 1890 came the first clear idea that nerve cells weren't arranged in a continuous network. The hypothesis that the neurons are separate and there are little gaps, called synapses, as far as I can tell started around the 1890s. And from then on, as far as I can see, neurology and psychology became more and more separate. And the neurologists got obsessed with chemicals-- hormones, epinephrine-- and there are about a dozen chemicals involved that you can detect when parts of the brain are activated. And so a whole bunch of folklore grew up about the roles of these chemicals. And one thought of some chemicals as inhibitory and others as excitatory. And that idea still spreads, although what we know about the nervous system now-- and I think I mentioned this before-- is that in general, if you trace a neural pathway from one part of the brain to another, what happens is that the connections tend to alternate-- not always, but frequently. So this connection might inhibit this neuron. And then you look at the output of that neuron, and that might tend to excite neurons in the next brain center. And then most of those cells would tend to inhibit. I mean, each brain center gets inputs from several others. And so it's not that a brain center is excitatory or inhibitory, but the connections from one brain center to another tend to have this effect. And that's probably necessary from a systems dynamics point of view, because if all neurons tended to either do nothing or excite the next brain center, then what would happen? As soon as you got a certain level of excitement, then more and more brain centers would get activated. And the whole thing would explode.
And that's more or less what happens in an epileptic seizure, where if you get enough electrical and chemical activity of one kind or another-- mostly electrical, I think, but I don't know-- then whole large parts of the brain start to fire in synchrony. And the thing spreads very much like a forest fire. So that's a long rant. I guess I've repeated it several times. But it's hard to communicate with that community, because they really want to find the secret of thinking and knowledge in the brain cells, rather than in the architecture of the interconnections. So my inclination is to find an intermediate level, such as, at least in the cortex-- which is what distinguishes the-- does it start in mammals? AUDIENCE: I think so. MARVIN MINSKY: I think if-- rather than a neurology book, I'm thinking of Carl Sagan's book, in which there's a sort of triune theory that's very popular, which is that the brain consists of three major divisions. And the-- I forget what the lowest-level one is called, but the middle level is sort of the amphibian, and then the mammalian. And it's in the mammalian development that large parts of the brain are covered with cortex. And the cortex isn't so much like a tangled neural net. But it's divided mainly into columns. And each column-- these vertical columns tend to have six or seven layers. I think six is the standard. And the whole thing is-- what is it, about 4 millimeters? 4 or 5 millimeters thick, maybe a little more. And there are major columns, which have about 1,000 neurons. And each of these is made up of maybe 10 or 20 mini columns of 50 or 100 neurons, or whatever. And so my inclination is to suspect that since these are the animals that think and plan many steps ahead and do all the sorts of things we take for granted in humans, we want to look there for the architecture of memory and problem-solving systems. In the animals without cortexes, you can account for most of their behavior in terms of fairly low-level, immediate stimulus-response reflexes and large major states-- like turning on some big blocks of these reflexes when it's hungry and turning on other blocks when there's an environmental threat, and so forth. Anyway, I forget what-- yes? AUDIENCE: So in Chapter 3 you talk about the stages we go through when we face something like your car breaks down and you can't go to work. That's the example given in the book. I'm wondering, how do we decide how we transition from one stage to another? And why do you go through the stages of denial, bargaining, frustration, depression, when only the last stage seems productive? I guess my main question is, how do we decide that we should transition from one stage to another [INAUDIBLE] MARVIN MINSKY: That's a beautiful question. I think it's fairly well understood in the invertebrates that there are different centers in the brain for different activities. And I'm not sure how much is known about how these things switch. How does an animal decide whether it's time to-- for example, most animals are either diurnal or nocturnal. So some stimulus comes along, like it's getting dark, and a nocturnal animal might then start waking up. And it turns on some parts of the brain, and it turns off some other parts. And it starts to sneak around looking for food or whatever it does at night. Whereas for a diurnal animal, when it starts to get dark, that might trigger some brain center to turn on, and it looks for its place to sleep and goes and hides.
So some of these are due to external things. Then, of course, there are internal clocks. So for lots of animals, if you put one in a box that's dimly illuminated and has a 24-hour cycle of some sort, it might persist in that cycle for quite a few days and go to sleep every 24 hours for half the time and so on. A friend of mine once decided he would see about this-- a famous AI theorist named Ray Solomonoff. And he put black paint on all his windows. And he found that he had a 25 or 26-hour natural cycle, which was very nice. And this persisted for several months. I had another friend who lived in the New York subways, because his apartment was in a building that had an entrance to the subway. And he stayed out of daylight for six months. But anyway, he too found that he preferred to be on a 25 or 26-hour day rather than 24. I'm rambling. But we apparently have several different systems. So there's a dead reckoning system, where some internal clocks are regulating your behavior. And then there are other systems where people are very much affected by the amount of light and so forth. So we probably have four or five ways of doing almost everything that's important. And then people get various disorders where some of these systems fail. And a person doesn't have a regular sleep cycle. And there are disorders where people fall-- what's it called when you fall asleep every few minutes? AUDIENCE: Narcolepsy. MARVIN MINSKY: Narcolepsy, and all sorts of wonderful disorders-- just because the brain has evolved so many different ways of doing anything that's very important. Yeah? AUDIENCE: Can you describe the best piece of criticism of the Society of Mind theory? MARVIN MINSKY: Best piece of what? AUDIENCE: The best criticism. MARVIN MINSKY: Oh. It reminds me of the article I recently read about the possibility of a virus for-- what's the disorder where-- AUDIENCE: Alzheimer's. MARVIN MINSKY: No. The-- uh-- [LAUGHTER] actually, there isn't any generally accepted cause for Alzheimer's, as far as I know. What? AUDIENCE: Somebody just did an experiment where they injected Alzheimer's-infected matter into someone, and they got the same plaques. MARVIN MINSKY: Oh, well, right. I wonder if that's a popular theory. No, what's the one where people-- AUDIENCE: Fibromyalgia. MARVIN MINSKY: Say it again. AUDIENCE: Fibromyalgia. MARVIN MINSKY: Yes, right. That's right-- which is not recognized by most theorists to be a definite disease. But there's been an episode in which somebody-- I forget what her name is-- was pretty sure that she had found a virus for it. And every now and then somebody revives that theory and tries to get more evidence for it. Anyway, there must be disorders where the programming is bad, rather than a biochemical disorder, because whatever the brain is, the adult brain certainly has a very large component of what we would, in any other case, consider to be software-- namely, lots of things that you've learned, including ways for one part of the brain to discover how to modulate or turn on or turn off other parts of the brain. And since we've only had this kind of cortex for 4 or 5 million years, it's probably still got lots of bugs. Evolution never knows what-- when you make a new innovation, you don't know what's going to come after that that might find bugs and ways to get short-term advantages at the expense of longer-term advantages. So lots of mental diseases might be software bugs.
And a few of them are known to be connected to abnormal secretions of chemicals and so forth. But even in those cases, it's hard to be sure whether the overproduction or underproduction of a neurologically important chemical is-- what should I call it-- a biological disorder or a functional disorder, because some part of the nervous system might have found some trick to cause abnormal secretions of some substance. That's the sort of thing that we can expect to learn a great deal more about in the next generation, because of the lower cost and greater resolution of brain scanning techniques and-- what's his name-- and new synthetic ways of putting fluorescent chemicals into a normal brain without injuring it much, so that you can now do sort of macro chemical experiments of seeing what chemicals are being secreted in the brain with new kinds of scanning techniques. So neuroscience is going to be very exciting in the next generation with all the great new instruments. As you know, my complaint is that somehow-- I'm not saying any of the present AI theories have been confirmed to tell you that the brain works as such and such a rule-based system, or uses Winston-type representations or Roger Schank-type representations or scripts or frames or whatever. And the next-to-last chapter of The Emotion Machine summarizes, I think, almost a dozen different AI theories of ways to represent knowledge. Nobody has confirmed that any of those particular ideas represent what happens in a mammalian brain. And the problem to me is that the neuroscience community just doesn't read that stuff and doesn't design experiments to look for them. David has been moving from computer science and AI into that. So he's my current source of knowledge about what's happening there. Have any of you been following contemporary neuroscience? That's strange. Yeah? AUDIENCE: So you already talked about software a little bit. I think they analyzed Einstein's brain-- I realize that's why they talk about glial cells-- and maybe he had a lot more glial cells than normal humans. So do you believe that the intelligence of humans is more on the software side or on the hardware side? Like, we have computers that are very, very powerful-- could we create software to run on these machines that reproduces what humans do? MARVIN MINSKY: I don't see any reason to doubt it. As far as we know, computers can simulate anything. What they can't do yet, I suppose, is simulate large-scale quantum phenomena, because the Feynman formulation of quantum mechanics says that if you have a network of physical systems that are connected, then it's in the nature of physics that whatever happens from one state to another in the real universe happens by way of the wave function. And the wave function represents the sum of the activities propagating through all possible paths. So in some sense that's too exponential to simulate on a computer. In other words, I believe the biggest supercomputers can simulate a helium atom today fairly well. But they can't simulate a lithium atom, because it's sort of four or five layers of exponentiation. So it would be 2 to the 2 to the 2 to the 2, and 4 to the 4 to the 4 to the 4. [INAUDIBLE] But I suspect that the reason the brain works is that it's evolved to prevent quantum effects from making things complicated. The great thing about a neuron is that, generally speaking, a neuron fires all or none.
And you get this point-- you have to get a full half-volt potential between the neurons firing [INAUDIBLE] fluid. And a half a volt is a big [INAUDIBLE]. AUDIENCE: So do you believe that the software that we have right now is equivalent to, for example, the intelligence that we have in dogs or in simple animals? Like, do we just need to implement the software, multiply the software? Or do we need to create whole new software that-- MARVIN MINSKY: No, there doesn't seem to be much difference in the architecture, in the local architecture of-- AUDIENCE: Turn your microphone on. The one in your pocket. MARVIN MINSKY: Oh, did I turn it off again? AUDIENCE: Yes. MARVIN MINSKY: It's not green. AUDIENCE: Yeah, so throw the switch. Is it green now? MARVIN MINSKY: Now, it's green. The difference between the dog and the person is the huge frontal cortex. I think the rest of it is fairly similar. And I presume the hippocampus and amygdala and the structures that control which parts of the cortex are used for what are somewhat different. But the small details of the-- all mammalian brains are practically the same. I mean, basically, you can't make an early genetic change in how neurons work; all the brain cells of the offspring would be somewhat different, and the thing would be dead. So evolution has this property that generally there are only two places in the development of an embryo where evolution can operate. Namely, in the pre-placental stage, you can change the way the egg breaks up and evolves. And you can have amazing things like identical twins happen without any effect on the nature of the adult offspring. Or you can change the things that happened most recently in evolution, like little tweaks in how some part of the nervous system works, if it doesn't change earlier stages. However, mutations that operate in the middle of all that and change the number of segments in the embryo-- I guess you could have a longer tail or a shorter tail, and that won't affect much. But if you change the 12 segments of the spine that the brain develops from, you'd get a huge alteration in how that animal will think. In other words, evolution cannot change intermediate structures very much or the animal won't live. Bob Lawler. AUDIENCE: If one thinks of comparing a person to a dog, would it not be most appropriate to think of those persons who were like the wild boy of southern France, who grew up in the woods without any language, and say that if you're going to look at an individual's intelligence, that would be a fair comparison with the dog? Whereas what we have when we think of people today is people who have learned so much through interaction with other people, through the transmission of culture-- is it not essentially ways of thinking that have been learned throughout the history of civilization and that some of us are able to pass on to others? MARVIN MINSKY: Oh, sure. Although if you expose a dog to humans, he doesn't learn language. So-- AUDIENCE: He may or may not come if you call him. MARVIN MINSKY: Right. But presumably language is fairly recent. So you could have mutations in the structure of the language centers and still have a human that's alive. And it might be better at language than most other people or somewhat worse. So we could have lots of small mutations in anything that's been recently evolved. But the frontal cortex is-- the human cortex is really very large compared to the rest of the brain.
Same in dolphins and a couple of other animals-- I forget-- whales. Yeah? AUDIENCE: So the reason why I ask that is that it seems to me that we have some quality-- like, we can see the world and add qualities to the world. And this is what I would call consciousness. And it seems to me that dogs also have this quality of seeing the world and adding qualities to the world-- like maybe, this is good, this is bad. There are different qualities for different beings. And the software that we produce right now seems to be maybe faster, and maybe does more tests, than what a dog does. But for me, it doesn't seem that it has this essential quality-- I think it doesn't have consciousness, in the sense that it doesn't attribute qualities to the things in the world, maybe. MARVIN MINSKY: Well, I think I know what you're getting at. But you're using that word consciousness, which I've decided to abandon, because it's 36 different things. And probably a dog has 5 or 6 of them, or 31. I don't know. But one question is, do you think a dog can think several steps ahead and consider two alternative-- that's funny. Oh, let's make this abstract. So here's a world. And the dog is here. And it wants to get here. And there are all sorts of obstacles in it. So can the dog say, well, if I went this way I'd have such and such difficulty, whereas if I went this way, I'd have this difficulty-- well, I think this one looks better? Do you think your dog considers two or three alternatives and makes plans? I have no idea. But the curious thing about a person is you can decide that you're going to not act in the situation until you've considered 16 plans. And then one part of your brain is making these different approaches to the problem. And another part of your brain is saying, well, now, I've made five plans, and I'm beginning to forget the first one. So I'd better reformulate it. And you're doing all of this self-consciously, in the sense that you're making plans that involve predicting what decisions you will make. And instead of making them, you make the decision to say, I'm going to follow out these two plans and use the result of that to decide which one to do. Do you think a dog does any of that? Does it look around and say, well, I could go that way or this way? Hmm. I remember our dog was good at-- if you'd throw a ball, it would go and get it. And if you threw two balls, it would go and get both of them. And sometimes if you threw three balls, it would go and get them all. And sometimes if a ball rolled under a couch where it couldn't reach, it would get the other two, and it would think. And then it would run back to the kitchen, where that ball was usually found. And then it would come back disappointed. So what does that mean? Did it have parallel plans? Or did it make a new one when the previous one failed? And they're not actually parallel. What's your guess? How far ahead does a dog think? Do you have a dog? AUDIENCE: Yeah. I do have a dog. But I don't believe that's the essential part of beings that have some kind of advanced brain. Like, we can plan ahead. Humans can plan ahead. But I don't think that's the fundamental part of intelligence. I think Winston says that humans are better than the other primates in that they can understand stories and they can join stories together. But somehow I don't buy the story that primates are just rule planners.
I think somehow we have some quality of meshing with the world, and somehow we're not just running software. MARVIN MINSKY: But, you know, it's funny. Computer science teaches us things that weren't obvious before. Like, it might turn out that if you're a computer and you only have two registers, then-- well, in principle, you could do anything, but that's another matter. But it might turn out that maybe a dog has only two registers and a person has four. And a trivial thing like that makes it possible to have two plans and put them in suspense and think about the strategy and come back and change one. Whereas if you only had two registers, your mind would be of much lower order. And there's no big difference. So computer science tells us that the usual way of thinking about abilities might be wrong. Before computer science, people didn't really have that kind of idea. Many years ago, I was in a contest of sorts, because some of our friends showed that you could make a universal computer with four registers. And I had discovered some other things, and I managed to show that you could make a universal computer with just two registers. And that was a big surprise to a lot of people. But there never was anything in the history of psychology of that nature. So there never were really technical theories of-- it's really computational complexity: what does it take to solve certain kinds of problems? And until the 1960s, there weren't any theories of that. And I'm not sure that that aspect of computer science actually reached many psychologists or neuroscientists. I'm not even sure that it's relevant. But it's really interesting that the difference between 2 and 3 registers could make an exponential difference in how fast you could solve certain kinds of problems and not others. So maybe there'll be a little more mathematical psychology in the next couple of decades. Yeah. AUDIENCE: So in artificial intelligence, how much of our effort should be devoted to reflecting on our thinking as humans and trying to figure out what's really going on inside our brains and trying to implement that, versus observing and identifying what kinds of problems we, as humans, can solve and then coming up with an intuitive way for a computer to solve these problems in a human-like way? MARVIN MINSKY: Those are a lot of nice questions. I don't think it makes any sense to suggest that we think about what's happening in our brains, because that takes scientific instruments. But it certainly makes sense to go over older theories of psychology and ask, to solve a certain kind of problem, what kinds of procedures are absolutely necessary? And you could find some things like that-- like how many registers you would need and what kinds of conditionals and what kind of addressing. So I think a lot of modern cognitive psychology is of that character. But I don't see any way to introspect well enough to guess how your brain does something, because we're just not that conscious. You don't have access to-- you could think for 10 years about how you think of the next word to speak, and it's unlikely that you would-- you might get some new ideas about how this might have happened, but you couldn't be sure. Well, I take it back. You can probably get some correct theories by being lucky and clever. And then you'd have to find a neuroscientist to design an experiment to see if there's any evidence for that.
In particular, I'd like to convince some neurologists to consider the idea of k-lines. It's described, I think, in both of my books. And think of experiments to see if you could get them to light up or otherwise localize them. Once you have in your mind the idea that maybe the way one brain center sends information to another is over something like k-lines-- which I think I talked about the other day-- random superimposed coding on parallel wires, then maybe you could think of experiments that even present brain scanning techniques could use to localize these. My main concern is that the way they do brain scanning now is to set thresholds to see which brain centers light up and which turn off. And then they say, oh, I see, this activity looks like it happens in the lateral hippocampus, because you see that light up. I think that there should be at least a couple of neuroscience groups who do the opposite, which is to reduce the contrast. And when there are several brain centers that seem to be involved in an activity, then say something to the patient and look for one area to get 2% dimmer and another to look 4% brighter, and say that might mean that there's a k-line going from this one to that one with an inhibitory effect on this or that. But as far as I know right now, every paper I've ever seen published showing brain centers lighting up has high contrast. And so they're missing all the small things. And maybe they're only seeing the end result of the process, where a little thinking has gone on with all these intricate low-intensity interactions, and then the thing decides, oh, OK, I'm going to do this. And you conclude that that brain center which lit up is the one that decided to do this, whereas it's the result of a very small, fast avalanche. AUDIENCE: Have you seen the one a couple of weeks ago about reading out the visual field in real time? MARVIN MINSKY: From the visual cortex? AUDIENCE: Yes. Quite a nice hack-- they aren't actually reading out the visual field. For each subject, they do a massive amount of training where they flash thousands of 1-second video clips and assemble a database of very small perturbations in different parts of the visual cortex lighting up. And they show a novel video to each of the subjects and basically just do a linear combination of all of the videos that they have shown in the training phase, weighted by how closely things line up in the brain. And you can sort of see what's going on. It's quite striking. MARVIN MINSKY: Can you tell what they're thinking? AUDIENCE: You can only tell what they're seeing. But I think-- MARVIN MINSKY: You know, if your eyes are closed, your primary visual cortex probably doesn't do anything, does it? AUDIENCE: I think it's just-- yeah. MARVIN MINSKY: But the secondary one might be representing things that might be. AUDIENCE: Yes. So the goal of the authors of this paper is eventually to literally make movies out of dreams. But that's a long way off. MARVIN MINSKY: It's an old idea in science fiction. How many of you read science fiction? Wow, that's a majority. Who's the best new writer? AUDIENCE: Neal Stephenson. MARVIN MINSKY: He's been writing a long time. AUDIENCE: He's new compared to Heinlein. [LAUGHTER] MARVIN MINSKY: I had dinner with Stephenson at the Hillises' a couple of years ago. Yeah? AUDIENCE: So from what I understood, it seems that you're saying that the difference between us and, for example, dogs is just computational power.
So do you believe that the difference between dogs and computers is also just computational? Like, what's the difference between dogs and a Turing machine? Or is there no difference? MARVIN MINSKY: It might be that only humans and maybe some of their closest relatives can imagine a sequence. In other words, the simplest and oldest theories in psychology were theories of association-- David Hume had the idea that one idea in the mind or brain causes another idea to appear. So that means that a brain that's learned associations, or learned if/then rules, can make chains of things. But the question is, can any animal other than humans imagine two different situations and then compare them and say, if I did this and then that, how would the result differ from doing that and then this? If you look at Gerry Sussman's thesis-- if you're at MIT and you're taking a course, a good thing to do is read the PhD thesis of your professor. It not only will help you understand better what the professor said; you'll get a higher grade, if you care, and many other advantages. Like, you'll actually be able to talk to him, and his mind won't throw up. So, you know, I don't know if a dog can recapitulate as-- can the dog think, I think I'll go around this fence, and when I get to this tree I'll do this, I'll pee on it-- that's what dogs do-- whereas if I go this way, something else will happen? It might be that pre-primates can't do much of that. On the other hand, if you ask, what is the song of the whale? What's the whale that has this 20-minute song? My conjecture is that a whale has to swim 1,000 miles, or several hundred miles sometimes, to get the food it wants, because things change. And each group of whales-- humpback whales, I guess-- sings this song that's about 20 minutes long. And nobody has made a good conjecture about what the content of that song is, but it's shared among the animals. And they can hear it 20 or 50 miles away and repeat it. And it changes every season. So I suspect that the obvious thing that it should be about is where's the food these days-- where are the best schools of fish to eat-- because a whale can't afford to swim 200 miles to the place where its favorite fish were last year and find it empty. It takes a lot of energy to cross the ocean. So maybe those animals have the ability to remember very long sequences and even some semantics connected with it. I don't know if dogs have anything like that. Do dogs ever seem to be talking to each other? Or do they just-- AUDIENCE: I have a story about dogs. So apparently in Moscow, not all dogs, but a very small fraction of the stray dogs in the city have learned how to ride the metro. They live out in the suburbs, because I guess people give them less trouble when they're out in the suburbs. And then they take the subway each day into the city center, where there are more people. And they have various strategies for begging in the city center. So for instance, they'll find some guy with a sandwich and bark really loudly behind him, and he drops the sandwich. And then they steal it. Or they have a pack of them, and they all know each other. And they send out a really cute one to beg for food, so people will give the cute one food. And the cute one brings it back to everyone else. And simply navigating the subway is actually a bit complicated for a dog, but somehow a very small group of dogs in Moscow have learned how to do it-- figure out where their stop is, get on, get off.
MARVIN MINSKY: Yeah, our dog once hopped on the Green Line and got off at Park Street. So she was missing for a while. And somebody at Park Street called up and said, your dog is here. So I went down and got her. And the agent said, you know, we had a dog that came to Park Street every day and changed trains and took the Red Line to somewhere. And finally, we found out that its master had-- it used to go to work with its owner every day, and he died. And the dog took the same trip every day. The T people understood that it shouldn't be bothered. Our dog chased cars. Was it Jenny? And that was terrible, because we knew she was going to get hurt. And finally, a car squashed her leg, and she was laid up for a while with a somewhat broken leg. And I thought, well, she won't chase cars anymore. But she did. But what she wouldn't do is go to the intersection of Carlton and Ivy Street anymore, which is-- so she had learned something. But it wasn't the right thing. I'm not sure I answered your-- AUDIENCE: Actually, according to-- there's this story that you gave in Chapter 2 about the girl who was digging dirt. So the case where she learns whether digging dirt is a good or bad activity is when there is somebody present with whom she has an attachment bond who's telling her whether it's good or bad. And the case where she learns to avoid the place is when something bad happens to her on the spot. So in a sense, the dog was behaving just by that logic. MARVIN MINSKY: Yes. Except that the dog is oriented toward location rather than something else. So-- AUDIENCE: Professor, can you talk about a possible hierarchy of representation schemes for knowledge-- like, semantics is on top, and at the bottom, there's like-- you mentioned k-lines were near the bottom, and there are things up above. The way I thought about it when the question was asked is that humans-- it's just natural that you need all of the intermediate representations in order to support something like semantic nets. And it seems natural to me to think that humans have this whole hierarchy of representations, but dogs might have something only in the middle-- like, they only have something like neural nets or something. So my question is, what behaviors that you could observe in real life could only be done with one of these intermediate representations of knowledge and can't be done with something like machine learning? MARVIN MINSKY: Hmm, you mean machine learning of some particular kind? AUDIENCE: The kind that's currently fashionable, I think-- kind of like brute-force calibration of some parameters. It seems to me that if you recognize a behavior like that, it might be a worthy intermediate goal to be able to model that, instead of trying to model something like natural language, where you might need the first part to get the second part. MARVIN MINSKY: Well, it would be nice to know-- I wonder how much is known about elephants, which are awfully smart compared to-- I suspect that they are very good at making plans, because it's so easy for an elephant to make a fatal mistake. So unfortunately, probably no research group has enough budget to study that kind of animal, because it's just too expensive. How smart are elephants? Anybody-- I've never interacted with one. I'm not sure if you have a question.
AUDIENCE: I think the question is, are there behaviors that you need an intermediate level of representation of knowledge in order to perform-- behaviors that you don't need the highest level, like semantic-- basically natural language-- to do. So you could say that by some animal doing this behavior, I know that it has some intermediate level of representation of knowledge that's more than kind of a brute-force machine learning approach. Because, like what was discussed before, a computer can do pathfinding, which is like a brute-force approach. I don't think that's how humans do it or animals do it. MARVIN MINSKY: I can't think of a good-- it's just hard to think of any animals besides us that have really elaborate semantic networks. There's Koko, who is a gorilla that apparently had hundreds of words. But-- AUDIENCE: I think the question is to find something that's lower than words, like maybe Betty the crow-- MARVIN MINSKY: With that stick, yeah. How many of you have seen the crow movie? She has a wire that she bends and pulls something out of a tube. But-- AUDIENCE: I don't think machine learning can do that. But I don't think you need semantic nets either. MARVIN MINSKY: I have a parrot who lives in a three-dimensional cage. And she knows how to get from any place to another. And if she's in a hurry, she'll find a new way at the risk of injuring a wing, because there are a lot of sticks in the way. So flying is risky. Our daughter, Julie, once visited Koko, the gorilla. And she was introduced-- Koko's in a cage. And Penny, who is Koko's owner, introduces Julie in sign language. It's not spoken. It's sign language. So Julie gets some name. And she's introduced to Koko. And Koko likes Julie. So Koko says, let me out. And Penny says, no, you can't get out. And Koko says, then let Julie in. And I thought that showed some fairly abstract reasoning or representation. And Penny didn't let Julie in. But Koko seemed to have a fair amount of declarative syntax. I don't know if she could do passives or anything like that. If you're interested, you probably can look it up on the web. Koko's owner-- I mean, Penny-- thought that Koko knew 600 or 700 words. And a friend of ours was a teenager who worked for her. And what's his name? And he was convinced that Koko knew more than 1,000 words. But he said, you see, I'm a teenager, and I'm still good at picking up gestures and clues better than the adults here. But anyway, I gather Koko is still there. And I don't know if she's still learning more words. But every now and then we get a letter asking us to send more money. Oh, in the last lecture, I couldn't think of the right crypto-arithmetic example. I think that's the one that the Newell-Simon book starts out with. So obviously, M is 1. And then I bet some of you could figure that out in 4 or 5 minutes. Anybody figured it out yet? Help. Send more questions. Yeah? AUDIENCE: I have an example. For instance, I go out to a restaurant with a type of exotic food that I've never ever had before. And I end up getting sick from it. So what determines what I learn from this? Because there are many different possibilities. One possibility is I learn to avoid the specific food I ate. Another possibility is I learn to avoid that type of food, because it might contain some sort of spice that I react to badly. And a third possibility-- there might be more-- is I learn to avoid that restaurant, because it just might be a bad restaurant. So in this case, it's not entirely clear which one to pick.
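A note on the crypto-arithmetic example a few lines up: the lecture only gives the hints "M is 1" and the pun "Send more questions," which point to the classic SEND + MORE = MONEY puzzle (whether it is really the one the Newell-Simon book opens with is Minsky's own recollection, not checked here). A brute-force check takes only a few lines:

```python
# Brute-force solver for SEND + MORE = MONEY (the puzzle implied by
# "M is 1" and the "Send more questions" pun; an editor's illustration).
from itertools import permutations

LETTERS = "SENDMORY"                      # the eight distinct letters

def value(word, digit):
    """Read a word as a number under the letter-to-digit assignment."""
    return int("".join(str(digit[c]) for c in word))

for perm in permutations(range(10), len(LETTERS)):
    digit = dict(zip(LETTERS, perm))
    if digit["S"] == 0 or digit["M"] == 0:    # no leading zeros
        continue
    if value("SEND", digit) + value("MORE", digit) == value("MONEY", digit):
        print(value("SEND", digit), "+", value("MORE", digit),
              "=", value("MONEY", digit))     # prints 9567 + 1085 = 10652
```

The unique solution has M = 1, as Minsky says: two four-digit numbers sum to at most 19,998, so the carry into the fifth column can only be 1.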
AUDIENCE: I have an example. For instance, I go out to a restaurant for a type of exotic food that I've never had before. And I end up getting sick from it. So what determines what I learn from this? Because there are many different possibilities. There is the one possibility that I learn to avoid the specific food I ate. Another possibility is that I learn to avoid that type of food, because it might contain some sort of spice that I react to badly. And a third possibility-- there might be more-- is that I learn to avoid that restaurant, because it just might be a bad restaurant. So in this case, it's not entirely clear which one to pick. And, of course, in real life, I might go there again and comparatively try another food, or try the same food at a different restaurant. But what do you think about that scenario-- what causes people to pick which one? MARVIN MINSKY: The trouble is, we keep thinking of ourselves as people. And what you really should think of yourself as is a sort of Petri dish with a trillion bacteria in it. And it's really not important to you what you eat, but your intestinal bacteria are the ones who are really going to suffer, because they're not used to anything new. So I don't know what conclusion to draw from that. But-- AUDIENCE: Previously, you mentioned that David Hume thought that knowledge was represented as associations. And that occurs to me as being something like a wiki structure, where entries have tags. So an entry might be defined by what tags it has and what associations it has. I'm wondering if that structure has been-- if somebody has attempted to code that into some kind of formal structure. Has there been any success with putting that idea into an actual AI? MARVIN MINSKY: I don't know how to answer that. Do any psychologists use semantic networks as representations? Pat, do you know-- is anyone building an AI system with semantic representations or semantic networks anymore? Or is it all-- everything I've seen has gone probabilistic in the last few years. Your project. Do you have any competitors? AUDIENCE: No. MARVIN MINSKY: Any idea what the IBM people are using? I saw a long article that I didn't read yet, but-- AUDIENCE: Traditional information retrieval, plus 100 hacks, plus machine learning. MARVIN MINSKY: They seem to have a whole lot of slightly different representations that they switch among. AUDIENCE: But none of them are very semantic. AUDIENCE: Well, they probably have-- I don't know, does anybody know what the answer is? But they must have some little frame-like things for the standard questions. MARVIN MINSKY: Of course, the thing doesn't answer any-- it doesn't do any reasoning, as far as you can tell. AUDIENCE: Right. MARVIN MINSKY: So it's trying to match sentences in the database with the question. Well, what's your theory of why there aren't other groups working on what we used to, and you are? AUDIENCE: Well, bulldozer computing is a fad. And if you can do better in less time that way than by figuring out how it really works, then that's what you do. No one does research on chess-- no one does research on how humans might play chess-- because the bulldozer computers have won. MARVIN MINSKY: Right. There were some articles on chess and checkers early in the game. But nothing recent, as far as I know. AUDIENCE: So in many ways, it's a local maximum phenomenon. The bulldozer computing stuff has got up to a certain local maximum. Until you can do better than that some other way, then [INAUDIBLE] MARVIN MINSKY: Well, I wonder if we could invent a new TV show where the questions are interesting. Like, I'm obsessed with the question of why you can pull something with a string, but you can't push it. And, in fact-- what was this-- we had a student who actually did something with that a long time ago. But I've lost track of him. But how could you make a TV show that had commonsense questions, rather than ones about sports and actors? AUDIENCE: Well, don't you imagine what happens when you push a string? It's hard to explain the-- MARVIN MINSKY: It buckles. AUDIENCE: It's easy to imagine. MARVIN MINSKY: Yeah. So you can simulate it. AUDIENCE: Yeah.
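[Editorial note: a hedged physics gloss on the push-a-string question, ours rather than anything stated in the lecture. Euler's formula for the critical compressive load that buckles a pin-ended column of length L and bending stiffness EI is P_cr = \pi^2 E I / L^2. A string has essentially zero bending stiffness, so EI \to 0 gives P_cr \to 0: any push at all makes it buckle and fold, while in tension it carries load up to its full breaking strength. That is the quantitative version of "it would fold up without exerting any force at the end," which comes up again just below.]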
MARVIN MINSKY: Yeah. AUDIENCE: I have a question. So suppose in the future we can create a robot as intelligent as a human-- how should we evaluate it? When do we know that we've reached it-- like, which test should it pass, or which [INAUDIBLE] should [INAUDIBLE]? So for example, [INAUDIBLE] asked some pretty hard questions and seemed to be intelligent. But all it is doing is making some attempts and then calculating some probabilities and stuff. Humans don't do that. They try to understand the question and look to answer it. But then suppose you can create a robot that can behave as if it is like-- I don't know, how would you evaluate it? When do you know that you've reached something? MARVIN MINSKY: That's sort of funny, because if it's any good, you wouldn't have that question. You'd say, well, what can't it do? And why not? And you'd argue with it. In other words, people talk about passing the Turing test, or whatever. And it's hard to imagine a machine that you converse with for a while and then, when you're told it's a machine, you're surprised. AUDIENCE: So I think, for example, you can make a machine say some very intelligent and smart things, because it takes all this information from different books and all this information that it has somewhere in a database, right? But then when people speak, they kind of understand what you're saying. How do you know whether some robot understands something or doesn't understand? Or does it have to understand at all? MARVIN MINSKY: Well, I would ask it questions like, why can't you push something with a string? Anyone have a Google working? What does Google say if you ask it that? Maybe it'll quote me. Or someone-- yeah? AUDIENCE: How would you answer that question-- like, why you can pull, but not push? MARVIN MINSKY: I'd say, well, it would buckle. And then they would say, what do you mean by buckle? And then I'd say, oh, it would fold up so that it got shorter without exerting any force at the end. Or blah, blah. I don't know. There are lots of answers. How would you answer it? A physicist might say, if you've got it really very, very, very straight, you could push it with a string. But quantum mechanics would say you can't. Yeah. AUDIENCE: I feel like an interesting show would be like an alternate cooking show or something, where you have to use an object that's not normally found to have that use. So like, I want to paint a room, but you're not given a brush. You're given a sponge. Or people pull out, like, eggplants if you want it painted purple. So it has to represent the thing in a different way other than-- MARVIN MINSKY: Words. That's interesting. When I was in graduate school, I took a course in knot theory. And, in fact, you couldn't talk about them. And if anybody had a question, they'd have to run up to the board. And, you know, they'd have to do something like this. Is that a knot? No. No, that's just a loop. But if you were restricted to words, it would take a half hour to-- that's interesting. Yeah? AUDIENCE: You mentioned solving the string puzzle by imagining the result. And I think I heard someone else say computers can do that in some way-- they can simulate a string. And we know enough physics that you can give a reasonable approximation of a string. But I find that the question that is often not asked in AI-- or by computers-- is, how does one choose the correct model with which to answer questions?
There are a lot of questions we're really good at answering with computers. And some of them, we have genetic algorithms that are good for; some are based in statistics, some in formal logic, some in basic simulation. But this is all-- to me, this is the core question, because this is what people decide, and no one seems to have ever tackled an [INAUDIBLE]. MARVIN MINSKY: Well, for instance, if somebody asks a question, you have to make up a biography of that person, because the same question from different people would get really different answers. Why does a kettle make a noise when the water boils? If you know that the other person is a physicist, then it's easy to think of things to say, but-- it's not a very good example. What's the context of that? In a human conversation, how does each person know what to say next? AUDIENCE: I guess one question is, how do people decide what evidence to use to tackle a problem? And I guess the more fundamental question is, when people are solving problems, how do they decide how they're going to think about the problem? Are they going to think about it by visualizing it? Think about it by trying to [INAUDIBLE]? Think about it by analogy or formal logic? Of all the tools we have, why do we pick the ones we do? MARVIN MINSKY: Yeah, well, that goes back to, if you make a list of the 15 most common ways to think, and somebody asks you a question or asks why such and such happens, how do you decide which of your ways to think about it? And I suspect that's another knowledge base. So we have commonsense knowledge about, you know, if you let go of an object, it will fall. And then we have more general knowledge about what happens when an object falls. Why didn't it break? Well, it actually did. Because here's a little white thing, which turned into dust. And so that's why I think you need to have five or six or however many different levels of representation. So as soon as somebody asks a question, one part of your brain is coming up with your first idea. Another part of your brain is saying, is this a question about physics or philosophy, or is it a social question? Did this person ask it because they actually want to know, or do they want to trap me? So I think-- generally, there must be many kinds of society-of-mind models that people have. And whenever you're talking to somebody, you choose some model of, what is this conversation about? Am I trying to accomplish something by this discussion? Is it really an interesting question? Do I not want to offend the person, or do I want to make him go away forever? And little parts of your brain are making all these decisions for you. I'd like to introduce Bob Lawler, who's visiting. AUDIENCE: One of my favorite stories about Feynman comes from asking him to dinner one night. And I asked him how he got to be so smart. And he said that when he was an undergraduate here, every time he was able to solve a problem, he would consider that just the beginning step-- of how to exploit it. And what he would then do would be to try to reformulate the problem in as many different representations as he could, and then use his solution of the first problem as a guide in working out alternate representations and procedures. The consequence, according to him, was that he became very good at knowing which was the most fit representation to use in solving any particular problem that he encountered.
And he said that that's where his legendary capability in being so quick with good solutions and good methods for solutions came from. So maybe a criterion for an intelligent machine would be one that had a number of-- 15 different ways of thinking and applied them regularly to develop alternative information about different methods of problem solving. You would expect it then to have some facility at choosing, based on its experience. MARVIN MINSKY: Yeah, he wrote something about that-- because the other physicists would argue about whether to use Heisenberg matrices or Schrodinger's equation. And he thought he was the only one who knew how to solve each problem both ways, because most of the other physicists would get very good at one or the other. He had another feature, which was that if you argued with him, sometimes he would say, oh, you're right, I was wrong. Like, he was once arguing with Fredkin about whether you could have clocks all over the universe that were synchronized. And the standard idea is you couldn't, because of relativity. And Fredkin said, well, suppose you start out on Earth and you send a huge army of little bacteria-sized clocks and send them through all possible routes to every place, and figure out and compensate for all the accelerations they had experienced on the path. Then wouldn't you get a synchronous time everywhere? And Feynman said, you're right, I was wrong-- without blinking. He may have been wrong, but-- More questions? AUDIENCE: Along the same line as his question about how we know what method to use for solving problems, I'm kind of curious how we know what data set or what data to use when solving a problem. Because we have so much sensory information at any moment and so much data from experience. But when you get a problem, you instantly-- and I guess k-lines are sort of a solution for that. But I'd be curious how you could possibly represent good data relationships in a way that a computer might be able to use. Because right now, the problem is that we always have to very narrowly define a problem for a machine to be able to solve it. But I feel like if we could come up with good methods for filtering massive data sets to figure out what might be relevant, that don't involve trial and error-- MARVIN MINSKY: Yes, so the thing must be that if you have a problem, how do you characterize it? How do you think, what kind of problem is this, and what method is good for that kind of problem? So I suppose that people vary a lot. And it's a great question. That's what the critics do. They say, what kind of problem is this? How do I recognize this particular predicament? And I wish there were some psychologists who thought about that the way Newell and Simon did, god, in the 1960s. That's 50 years ago. How many of you have seen that book called Human Problem Solving? It's a big, thick book. And it's got all sorts of chapters. That's the one I mentioned the other day where they actually had some theories of human problem solving and simulated this. They gave subjects problems like this and said, we want you to figure out what numbers those are. And they lied to the subjects and said, this is an important kind of problem in cryptography. The secret agents need to know how to decode cryptograms of this sort-- where usually it's the other way around: the numbers stand for letters, and there's some complicated coding. But these are simple cases. So you have to figure out that sort of thing.
And then the book has various chapters on theories of how you recognize different kinds of problems and select strategies. And, of course, some people are better than others. And believe it or not, at MIT there was almost a whole decade of psychologists here who were studying the psychology of 5-person groups. Suppose you take five people and put them in a room and give them problems like this-- not the same cryptarithmetic, but little puzzles that require some cleverness to solve. And you record it on video. They didn't have video in those days, so it was actual film. And there's a whole generation of publications about the social and cognitive behavior of these little groups of people. They zeroed in on 5-person groups for reasons I don't remember. But it turned out that almost always the group divided into two competitive subgroups of two and three, and every now and then they would reorganize. But it was more a study in social relations than in cognitive psychology. But it's an interesting book. There must be contemporary studies like that of how people cooperate. But I just haven't been in that environment. Any of you taken a psychology course recently? Not a one? I just wonder what's happened to general psychology. I used to sit in on Teuber and a couple of other lecturers here. And psychology, of course, was sort of like 20% optical illusions. AUDIENCE: Yeah, they still do that-- MARVIN MINSKY: Stuff like that. AUDIENCE: They also concentrate a lot on developmental psychology. MARVIN MINSKY: Well, that's nice to hear, because I don't believe there was any of that in Teuber's class. AUDIENCE: I think Professor Gabrieli now teaches the introductory psychology. And he-- MARVIN MINSKY: Do they still believe Piaget, or do they think that he was wrong? AUDIENCE: I think they probably take the same approach as with, like, Freud-- they would say great ideas and a revolution, but they also don't think he's the end of the-- MARVIN MINSKY: Well, he got-- AUDIENCE: I know in the childhood development class, you read Piaget's books. MARVIN MINSKY: Yeah. In Piaget's later years, he got into algebra. And he wanted to be more scientific, and studied logic and a few things like that, and became less scientific. It was sort of sad to-- I can imagine being browbeaten by mathematicians, because they're the ones who were getting published. And he only had-- how many books did Piaget-- AUDIENCE: But if I may add a comment about Piaget. It really comes from an old friend of many of us, Seymour. As you know, he was, of course, Piaget's mathematician for many years. MARVIN MINSKY: We got people from Piaget's lab. AUDIENCE: But Seymour said that he felt that Piaget's best work was his early work, especially the early case studies. And one time, when we were talking about the issue of the focus of the AI lab versus what was worked on in psychology here, Seymour said he felt that was less necessary than more of a concentration on AI, because he expected that in the future the study of the mind would separate into two individual studies: one much more biological, like the neurosciences of today, and the other focused more on the structure of knowledge and on representations-- in effect, the genetic epistemology of Piaget. Then he added something that became a quote later: "Even if Piaget's marvelous theory today proved to be wrong, he was sure that whatever replaced it would be a theory of the same sort, one of the development of knowledge in all its changes."
So I don't think people will get away from Piaget, however much they want. MARVIN MINSKY: I don't think so either. I meant to introduce our visitor here, because Bob Lawler here has reproduced a good many of the kinds of studies that Piaget did in the 1930s and '40s. And if you look him up on the web-- you must have a few papers. AUDIENCE: I'd better tell you what the website is, because it's still hidden from web crawlers. It's nlcsa.net. MARVIN MINSKY: That would be hard to-- AUDIENCE: Natural Learning Case Study Archive dot net. It's still in process, still in development. But it's worth looking at. MARVIN MINSKY: How many children did Piaget have? AUDIENCE: Well, Piaget had three children-- MARVIN MINSKY: So did you-- AUDIENCE: Not in his study. But what he did was to mix together the information from all three studies, and it supported the ideas with which he began. So it was illustrations of his theories. MARVIN MINSKY: Anyway, Bob has quite a lot of studies about how his children developed concepts of number and geometry and things like that. And I don't know of anyone else since Piaget who has continued to do those sorts of experiments. There were quite a lot at Piaget's institute in Geneva for some years after Piaget was gone. But I think it's pretty much closed now, isn't it? AUDIENCE: Well, the last psychologist Piaget hired was Jacques Benesch, who is no longer at the university. He retired. And it has been taken over by the neo-Piagetians, who are doing something different. MARVIN MINSKY: Is there any other place? Well, there was Yoichi's lab on children in Japan. AUDIENCE: There are many people who take Piaget seriously in this country and others. AUDIENCE: So Robert mentioned that Feynman had more representations of the world than, like, usual people. When I talked about Einstein and the glial cells, I referred to that because I believe that k-lines are our way of representing the world. And maybe Einstein had better ways of representing the world. And I believe that, for example, agents as resources are not different from Turing machines. You can create a very simple Turing machine that acts like an agent and has some mental states. But there is no, I believe, good way of representing the world and updating the representation of the world. It seems to me that when you grow up, you are learning how to represent the world better and better. And you have some layers. And that's all k-lines. And if glial cells are actually related to k-lines, it means that Einstein had, like, better hardware for representing the world. And that's why he would be smarter than other people. MARVIN MINSKY: Well, it's hard to-- I'm sure that that's right, that you have a certain amount of hardware, but you can reconfigure some of it. Nobody really knows. But some brain centers may have only a few neurons. And maybe there are some retrograde signals. So if two brain centers are simultaneously activated-- usually the signals only go one way, from one to the other; they have to go through a third one to get back. But it could be that the neurons have the property that if two centers are activated, maybe that causes more connections to be made between them that can then be programmed more. I don't think anybody really has a clear idea of whether you can grow new connections between brain centers that are far apart. Does anybody know? Is there anything-- AUDIENCE: It used to be common knowledge that there was no such thing as adult neurogenesis.
And now it is known that it exists in certain limited regions of the brain. So in the future, it may be known that it exists everywhere. MARVIN MINSKY: Right. Or else that those experiments were wrong. And they were in a frog rather than a person. AUDIENCE: Lettvin claimed that you could take a frog's brain out and stick it in backwards, and pretty soon it would behave just like it used to. MARVIN MINSKY: Lettvin said? AUDIENCE: Yeah. Of course, I don't know if he was kidding or not. You never could tell. MARVIN MINSKY: You could never tell when he was kidding. Lettvin was a neuroscientist here who was sort of one of the great all-time neuroscientists. He was also one of the first scientists to use transistors for biological purposes, and made circuits that are still used in every laboratory. So he was a very colorful figure. And everyone should read some of his older papers. I don't know that there were any recent ones. But he had an army of students. And he was extremely funny. What else? AUDIENCE: So continuing on the idea of hardware versus software, what do you think about the idea that humans may need strong instincts when they're born-- like, they know to cry when they're hungry or to look for their mother-- and that they need the interplay between these instincts in order to develop higher orders of knowledge? MARVIN MINSKY: You'd have to ask L. Ron Hubbard for-- I don't recall any real attempts to-- I don't think I've ever run across anybody claiming to have correlations between prenatal experience and the development of intelligence. AUDIENCE: That's not what I'm talking about. I'm talking about before intelligence is developed. Like, before you learn language, you need to have a motivation to do something. So you need to have instincts, instinctual reactions to things, before you can build experience with knowledge after you're born. MARVIN MINSKY: Well, children learn language at, you know, 12 to 18 months. Are you saying that they need some preparation? I'm not sure what you're asking. AUDIENCE: So think of it from an engineering point of view. If you were to build, like, a robot, what you need to program in is some instincts, some rule-of-thumb algorithms, in order to get it started in the world, in order to build experiential knowledge. MARVIN MINSKY: You might want to build something like a difference engine, so that you can represent a goal and it will try to achieve it. So you need some engine for producing any behavior at all. AUDIENCE: Right. So if you take the approach that, to build an AI, you should build, like, an infant robot and then teach it as you would a human child-- then would it be useful to make it dependent on some other figure, in order to help it learn how to do things as a human child would? MARVIN MINSKY: Well, in order to learn, you have to learn from something. And one way to learn is in isolation-- you could build in a goal to predict what will happen. And the best way to predict-- as Alan Kay put it once, the best way to predict the future is to invent it. Or you could put a model of an adult in it to start with. In other words, one way to make a very smart child is to copy its mother's brain into a little sub-brain when it's born. And then it could learn from that instead of depending on anybody else. I'm not sure-- you have to start with something.
Of course, humans, as Bob mentioned or someone mentioned-- if you take a human baby and isolate it, it looks like it won't develop language by itself, because-- I don't know what because. In fact, I remember one of our children who was just learning to talk. And something came up, and she said, "What because is that?" Do you remember? It took a while to get her to say "why." She would come up and say "what because." And I would say, you're asking why did this. After a long time she got the hint. But-- why do all w-h words start with w-h? AUDIENCE: One of them doesn't-- how. MARVIN MINSKY: Could you say whow? How. Is there a theory? AUDIENCE: Not that I know of. MARVIN MINSKY: It's a basic sound telling that you're making a query, before you can do the rising inflection. It's interesting. Is it true in French? Quoi? The land of the silent letter. Anybody know what's the equivalent of w-h words in your native language? AUDIENCE: N. MARVIN MINSKY: What? AUDIENCE: N. MARVIN MINSKY: N? AUDIENCE: Yeah. MARVIN MINSKY: In what? AUDIENCE: Turkish. MARVIN MINSKY: Really? They all start with n? Wow. Interesting. Maybe the infants have an effect on something. Do questions in Turkish end with a rise? AUDIENCE: Yeah. So only the relevant w-h questions-- OK, all questions end in kind of an inflection. But normally, you have a little kind of word that you would put at the end of any sentence to make it into a question-- except for the w-h questions, which are standalone ones. You don't need them. MARVIN MINSKY: Yes, you'd say, this is expensive? They don't need the w-h if you do enough of that. Huh. So the question is, is that in the brain at birth? AUDIENCE: Is that pattern mirrored in English, where you can say, is this expensive? But you can say how expensive is this without that rising intonation. It mirrors using the separate word, but you don't need that separate word if it's a w-h question. AUDIENCE: But if you're saying how expensive is this without the question inflection, it almost sounds like you're making a statement about just how ridiculously expensive it is. Like you're going, how expensive is this versus how expensive is this? MARVIN MINSKY: Well, I should let you go.
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
9_Common_Sense.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. STUDENT: I was curious, how important do you think the understanding of computational complexity and classical theory is to the understanding of intelligence in the field of AI? PROFESSOR: That's a really-- I can't remember anybody asking that question. What commonsense questions are complex? That would be a-- yes. STUDENT: I don't know. I feel like one of the insights you propose is that the simplest problems that we solve tend to be really complex. I mean, there are different measures of complexity. But in a certain way, it looks like complication. A lot of the simplest things, like vision and language acquisition, are probably much more complex than a lot of the expertise and problems that we usually talk about. PROFESSOR: Yeah. Have I shown this slide before? It was published in the Toshiba Journal of Research and Development. I thought of it at some point when I was in Japan, and-- STUDENT: You turned your microphone off, professor. PROFESSOR: I turned it off? STUDENT: Yes. PROFESSOR: I have some hot-melt glue, which I could-- [LAUGHS] [INAUDIBLE] I'll put my wallet in the other pocket. So this isn't complexity. But I think I talked about this briefly. The idea is, this is one way of classifying problems, and it's a very naive kind of complexity: namely, if you're trying to achieve some situation or reach some goal, then what method should you use? And so this is a cute little table, where you just count the number of inputs and how much effect they have. And I didn't bring a pointer either. Oh, well. See if it-- here's a pointer. So if there are very few causes and each has a small effect, then it's an easy problem. I'm not sure of anything to say about it. If you have a very large number of inputs to a situation and each has a large effect, then it's probably hopeless. Because if there are n kinds of inputs and they're all important, then you probably have to exhaustively look at all 2 to the n cases to be sure you haven't missed something. And then here are eight areas of artificial intelligence for intermediate situations. So that's a kind of complexity, but it doesn't say anything about how complex the problem is-- just what kind of context it's in and how many inputs are involved. So anyway, I'd like to see a better table, if anybody can think of another way of classifying artificial intelligence problems. And if you can think of one even dumber than this, I'd be surprised. However, if you look at recursive function theory, then every now and then you run into some wonderful discovery. Like, there's one that's attributed to Friedberg, who was an undergraduate here, which says that there are some not-quite-universal Turing machines which still have unsolvable halting problems. And these were constructed by sort of mutilating two universal Turing machines that are talking to each other-- and a lot of interesting partial-ordering mathematics, which I don't think I could reproduce. So there are these wonderful theories of algorithmic complexity. How many of you have encountered that field? There's a problem that-- quite a few.
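[Editorial note: the slide itself isn't reproduced in the transcript, but the causes-and-effects table described above has roughly this shape. Only the two corner cells and the 2-to-the-n remark are stated in the lecture; the intermediate cells-- the eight areas of AI-- are not named, so they are left as placeholders:]

                        each input has a SMALL effect   | each input has a LARGE effect
  FEW inputs (causes)   easy problem                    | (some of the eight AI areas)
  MANY inputs (causes)  (some of the eight AI areas)    | probably hopeless: about 2^n cases to check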
My favorite problem is one that I worked on for months and didn't solve, and it was proposed by Emil Post in 1923, I think. He was a professor at NYU, New York University, in New York. And he had the idea of tag processes. It's called tag because it's one process chasing another one and trying to catch up. And so the idea is, you have some string of 0s and 1s, and the process that Emil Post studied, and I wasted a lot of time on, was this. You look at the first symbol in the string, and if it's a 1, then you write 1101 at the end of the string, but you cross off the first three symbols. OK. And then if it's a 0, you write 00 and cross off three of them. It's called the tag problem, because you're wondering if the front of it is going to catch up to the back. And if there were equal numbers of 0s and 1s, then you'd see that sometimes you're adding four things and sometimes you're adding two, and you're always pulling off three. So it's sort of crazy. I don't know what got him interested in that in the first place. So this is bad. It's going to be-- oh, we're going to get a lot of 1101s, so it's getting longer even if I cross out three at a time. So if you start with various strings, sometimes it gets pretty long, and then it dies away and disappears or almost disappears. Or it keeps getting longer forever. What's the third case? The third case would be maybe it gets into a loop, where it stays the same. If it ever gets back to repeating the same string, then it will do that forever. So Post asked, is there any string which will grow forever? And a lot of people have simulated this, as far as 10 to the 10th steps. And they always grew, except for the ones that stopped pretty soon. So no one knows the answer. Is there an initial string that grows infinitely? If you want to waste a summer, try to find some good theory of that. Anyway, it's not clear what the significance of that one is. But it was discovered, partly by me, that there were tag systems like this that were universal and could be made to simulate any other machine. The trouble is, you have to use this sort of complicated coding to encode the problem for a tag process and decode it again. Anyway, that's an example of a nice computational complexity problem that's still unsolved. And one of you might get some sudden insight and figure out how to-- yes. STUDENT: So you mentioned that you helped prove that there were some universal tag systems? This is the way that they show that [INAUDIBLE] are universal. PROFESSOR: Yes. STUDENT: By converting into the-- PROFESSOR: It's pretty similar. I'm trying to remember the one that I proved was universal. It's pretty simple, but it involved just knocking off two each time. But it had-- I have forgotten how many symbols it needed. Anyway, I don't think I know any interesting case where computational complexity has actually influenced any work on AI. Have you ever seen any of this? Recursive function theory is a very beautiful branch of mathematics, and it's great fun-- sort of like geometry, in that it starts with almost nothing. There is quite a lot about it in my old book on computation. There is a bit in [INAUDIBLE] great big book. But I'm not sure what-- yeah.
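[Editorial note: the tag process just described is easy to simulate; a minimal sketch follows. The starting string and the small step cap are arbitrary choices for illustration:]

# Simulate Post's tag process: look at the first symbol; append 1101
# if it is 1, append 00 if it is 0; then delete the first three symbols.
# Stop when the string dies out, repeats a state (a loop), or hits the cap.
from collections import deque

def tag(start, max_steps=2_000):
    s = deque(start)
    seen = {"".join(s)}
    for step in range(max_steps):
        if len(s) < 3:
            return "died out", step, "".join(s)
        s.extend("1101" if s[0] == "1" else "00")
        for _ in range(3):
            s.popleft()
        state = "".join(s)
        if state in seen:
            return "loops", step, state
        seen.add(state)
    return "still going after", max_steps, len(s)

print(tag("10010"))  # no starting string has been proven to grow forever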
STUDENT: So Watson seems to be [INAUDIBLE] to answer questions. Is there any AI division of, like, what the Watson intelligence system is? Because it seems that the intelligence systems that we have are good at solving specific problems-- but did we divide intelligence into fields that we want to solve? Because, as you said, intelligence maybe can be divided into several pieces, and maybe we're not trying to solve those first. Like, the Turing test says we need a machine that imitates human intelligence, or whatever. But human intelligence itself is a bunch of things we can-- I don't know. PROFESSOR: Yes. STUDENT: Different processes. It seems that there's no division of intelligence. Like, we are always asking for a system that can achieve human intelligence, but we don't divide this problem. How can we solve this big problem if we don't divide it into smaller problems which people can tackle? PROFESSOR: Well, there are lots of things that we have programs for that people can't solve. So I'm not sure that the word intelligence makes much sense anymore. It's like consciousness. Maybe at any particular moment you're talking about 5 or 10 or 20 different things. For example, Watson is very good at getting the name of a movie actor from a few clues-- but can it do any commonsense reasoning? You've been studying it a bit, Henry. STUDENT: I think it can a little bit. Not to the extent that you'd like. But Erik Mueller, who was part of the Watson team, has been working on commonsense knowledge. So I think it probably does have some capability, but not a whole lot. PROFESSOR: It'd be nice to know more about it. I haven't read what they've published, but the trouble is, they're not in business for their health. STUDENT: And when will we divide intelligence-- like, the whole of intelligence-- into sub-problems and try to tackle them? PROFESSOR: Well, there's Howard Gardner with his-- STUDENT: That's what they do, right? STUDENT: Like, they do [INAUDIBLE] vision [INAUDIBLE]. STUDENT: I mean, that's what IBM is [INAUDIBLE]. No one is trying to solve everything at once. People are working on specific sub-problems. STUDENT: Yeah, but are they integrated? PROFESSOR: Are which integrated? STUDENT: I mean, can we do different subsystems? STUDENT: Eventually, hopefully, maybe. PROFESSOR: In what sense are people integrating? What do you mean? STUDENT: No, no, no. I mean, can we integrate the subsystems? Like, OK, somebody is talking about [INAUDIBLE]. STUDENT: They would have to know about [INAUDIBLE]. One of the other assumptions based on [INAUDIBLE] so that they would benefit to make the estimation [INAUDIBLE]. This is something that [INAUDIBLE] So we need to have a classification system that the [INAUDIBLE]. PROFESSOR: Yeah. That's what I tried to approximate with these critics, where you have a bunch of systems-- I guess we don't know much about how parallel things are in the human brain. But presumably, some particular small number of processes are going on at any time. Maybe a large number of very trivial processes, like-- well, I don't know what like. But my picture of the integration in human psychology is by having these critics, so that whatever processes are going on, you have other processes that don't understand the ones that are actually doing the work, but have some superficial assessment-- like, this one is using up half of my sugar, or it's wasted a lot of time and hasn't solved its quota of problems in the last five minutes or five seconds or, who knows, 100 milliseconds. In the early work of Newell and Simon, they did lots of experiments with stopwatches and instruments to see how long people took to react to things.
And they concluded that for a conscious operation-- by conscious, I mean something that the person might actually have a word for and mention, because when they get them solving problems, they train their subjects to keep talking all the time about what they're doing. And so Newell concluded that there were some operations that took about 1/30 of a second, and those were things on the verge of what we might call consciousness, and that there must have been other operations, of 2 or 3 milliseconds, that the person couldn't talk about. But he was studying operations which we understand very well, like how you add up columns of numbers-- just little arithmetic things-- and timing these people with stopwatches for endless periods. And so he found that if you asked somebody how much is 8 and 3, the person would say 11. And another time later, you'd ask them how much is 3 and 8. And Newell reports that it usually takes people noticeably longer, in milliseconds, to say that 3 and 8 is 11 than to say that 8 and 3 is 11. And he fitted this to data and concluded that when you add 3 and 8, you go to 3 and then you count up eight steps-- 24 milliseconds. And he concluded that you're doing addition one count at a time, in 3-millisecond chunks-- eight counts at about 3 milliseconds each gives the 24 milliseconds. Well, that's sort of nutty. But if you do enough experiments, I guess you can prove anything. But where's all this going? Well, what would Watson need to understand why you can pull something with a string but you can't push it? And it seems to me, if you could find a story in which a famous actor came to a terrible fate because he was pushing a string instead of pulling it, Watson would get that right away, because it would be at the top level of your description of the movie. But the question is, how much progress are they making toward ordinary commonsense thinking and reasoning? And I presume, as Henry Lieberman said, if Erik Mueller is actually in the middle of that, then they have a first-class AI researcher, because he did a lot of interesting things before he disappeared from view. Any other-- where else is anything on that scale happening? I bet that's the largest AI program in the world, except maybe there are a couple of secret ones in the classified world. It would be nice to believe that there were something like the Rand Corporation actually doing exciting, profound things for the benefit of all of us. I think I mentioned that I used to hang around there in the middle 1950s, and it was full of wonderful, basic research. But then things gradually got classified. Anyway, I'd like to see a few more diagrams like this, just trying to make commonsensical classifications of AI problems, and maybe figuring out where people-- are there some fairly important basic problems that nobody's working on, because they just haven't formulated them in a way that they can get people to work on them? Look back at 1970, and here's a program that has a bunch of assumptions about the physical world and understands how to generate sub-goals from goals in various ways and keep track of things. So everyone who is interested in AI should read the PhD theses from the early '70s by Gerry Sussman and Pat Winston and Terry Winograd. And I'd say each of those theses has maybe a dozen lines of research that nobody has followed very far. Because, like any other field, AI has sort of run into large-scale global funding fads. And there was a decade in which rule-based systems were virtually the only thing people worked on, and they discovered a certain amount.
So lots of rule-based systems are out there in the world doing useful things. They never got to make three or four levels of criticism in those systems. And so, to me, that area sort of reached a certain point of economic usefulness, but almost disappeared from the horizon of research. What's that project called? ACT? What's the big rule-based system project that-- I'm asking Pat Winston there if he-- STUDENT: I'm not sure what you were talking about. STUDENT: At [INAUDIBLE]? Is that what you were talking about? Yeah. PROFESSOR: Yeah. It's had a lot of people all over the country sort of participating, and I wonder if anything ever-- STUDENT: Well, [INAUDIBLE] PROFESSOR: Yeah. STUDENT: Which is the other big production system. PROFESSOR: That was Newell and Simon's. STUDENT: I think people are trying those and adding this and adding that. PROFESSOR: But it doesn't seem to come up to the surface and announce a breakthrough of any sort-- or at least I haven't heard of any. STUDENT: Well, the problem is, it's a little hard to tell whether it's a cognitive architecture or a bad programming language. PROFESSOR: You mean in order to get it to do something, they have to do something specifically for that? STUDENT: Right. PROFESSOR: So you don't just fill it out. STUDENT: The programming language is pretty awkward. And because it doesn't actually do anything by itself, it is a programming language. PROFESSOR: So is there anything comparable to-- are there any big rule-based systems which are not a programming language? After all, programming languages are just fancy things that say, if this, do that. STUDENT: [INAUDIBLE] You could write a rule-based system in FORTRAN. It's more of a mindset than a set of [INAUDIBLE]. PROFESSOR: Of course, this was written in LISP, this Winograd thing. One of the things about LISP, which is potentially important, is that the underlying language is so simple that it's fairly easy to write programs in LISP that write programs in LISP. I think it's probably almost impossible to do that in a language like C++, because there's a compiler in the way. And of course, you can have an operation saying, compile what I just said. But I doubt if-- one of the features of Terry Winograd's program is that you could tell it to build something, and then in the course of building things-- you'd give it a goal, and it would generate sub-goals. And it could answer questions about why it did something. So for example, if you had it build some sort of tower, and you asked it, why did you move the green block, Winograd's program could articulate that and say, I had to move it to clear the surface of the large red block so that I could put this other thing on it. And so there, you had a program that had one level. It's full of rule-based rules, but it only fires up a rule when it has a goal and some knowledge base says a way to achieve that goal is to apply this transformation. Then what Winograd's program did was it kept a record of each of the goals that it had generated, and each of the sub-goals that it had generated in order to achieve each goal. And then the great achievement was that he got enough English-language idioms into the operating system for that, that if you asked it a question about why it did something, it would actually give the same answer that a person or a child would, on the whole. The question to ask is, if they did so much in 1970, what happened in the 40 years since then? And it's quite a mystery.
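[Editorial note: the goal-record mechanism described here is easy to sketch. A minimal, hypothetical version follows-- the goal names and the tree are invented for illustration, and Winograd's actual program was written in LISP with a far richer grammar:]

# Winograd-style goal records in miniature: every sub-goal remembers its
# parent, so "why did you X?" is answered by walking one step up the tree.
class Goal:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent

    def why(self):
        if self.parent is None:
            return "Because you asked me to."
        return "I did it to " + self.parent.description + "."

# Hypothetical trace of "put the pyramid on the red block":
top = Goal("put the pyramid on the red block")
clear = Goal("clear the surface of the red block", parent=top)
move = Goal("move the green block", parent=clear)

print(move.why())   # I did it to clear the surface of the red block.
print(clear.why())  # I did it to put the pyramid on the red block.
print(top.why())    # Because you asked me to.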
He had six graduate students who tried to improve the thing. And they all changed their thesis and advisor at some point. Do you remember? What's the explanation for why this never went further? Pat's thesis was in the same year. STUDENT: Well, I guess I think the explanation is, in part, that the blocks world presented no serious representation problems. PROFESSOR: Hm. STUDENT: So, you know, moving from the blocks world to some other world, you suddenly had to deal with all kinds of things that didn't count in the blocks world, like pi, mu, and [INAUDIBLE]. PROFESSOR: It didn't-- STUDENT: [INAUDIBLE] goals, things like that. PROFESSOR: OK. There wasn't anything in the blocks world that had goals. STUDENT: Right. PROFESSOR: Well, there should be something in between, because you're going all the way to people, like Romeo and Juliet. Have you tackled them, or are you on Macbeth? STUDENT: Oh, I'm still on Shakespeare. Hopefully, it had [INAUDIBLE] which runs in parallel. PROFESSOR: Of course, you could switch to Leonard Bernstein. [LAUGHTER] STUDENT: Probably a good idea. I've never seen that, but no one reads Romeo and Juliet. PROFESSOR: What was that play called? STUDENT: West Side Story. PROFESSOR: West Side Story. I was born on the West Side, but it didn't have any gangs then that I-- 86th Street. Well-- STUDENT: [INAUDIBLE] seem to affect these. I'm curious what makes a story easier or harder to analyze, and why you picked the ones you did. STUDENT: So why did I pick the ones I did? I mean, Macbeth and Shakespeare stories. Let's see, there's an official answer and the real answer. I'm not sure what the real answer is. [LAUGHTER] I've articulated the official answer so often, I'm beginning to believe it. But the official answer is that Shakespeare plots have all kinds of things you'd find in any human situation-- or even a situation between organizations or countries. You've got everything. You've got the greed. You get revenge, you get all kinds of jealousies and threats and all sorts of things you would want to be able to deal with. So that's the official answer. The real answer is, I kind of like those things. [LAUGHTER] And the plots are familiar to me. So I didn't have to go after them. But it's really true. You take any of those Shakespearean plays-- they're all about the human condition in various ways. And two or three of them can cover a lot of ground. And I'm presuming that if I can deal with the plot in Macbeth, then I can deal with a lot of stuff. PROFESSOR: You could probably deal with a lot of other Shakespeare plays. STUDENT: Yeah. They run very similar. If you look at the plots of Hamlet and Macbeth, they're almost identical. PROFESSOR: And then I forget who is who, you know. Did they all kill their mother? STUDENT: They all had-- the king was always killed by somebody, and killed by somebody else in revenge. It was remarkably easy to see it in all these [INAUDIBLE]. PROFESSOR: Yes. It's been happening in the last months out in real life, almost as though they read the stuff. But now you're allowed to kill them with drones. It's pretty serious. Well, what would you like to see the AI people working on? A robot baseball player? STUDENT: [INAUDIBLE] PROFESSOR: Is soccer hard? STUDENT: Hard to [INAUDIBLE]. PROFESSOR: How many-- I was just talking to [INAUDIBLE] about this. There was an AI researcher named Woodrow Bledsoe, who did some of the best early work in visual pattern recognition. And he defeated all of us at ping pong.
And one day, I asked him, how come you're so good at ping pong? And he said, I don't think I'm any better than any of you, except I look five steps ahead. We'll never know if he just made that up. Yeah. STUDENT: So you asked what people should work on. Classically, [INAUDIBLE] studies, like, linguistic capability. And in recent times, [INAUDIBLE] has been mentioning that other senses should also be taken into account. Vision, for a long time, has been included. But I don't know-- every time we talk about this, I always think about the other senses. Like, does anyone do anything with the olfactory sense? You know, because that can have a big effect on memory, right? If you smell something, then you have this memory that's strongly associated with it. So I think that probably a lot of senses have something inter-- like, they probably have something interesting about them. You know, some interesting component of intelligence in them. And I wonder if anyone has tried to touch at least on all the senses, on what they could possibly do with them. PROFESSOR: It's funny, because the olfactory thing, I don't know what to say, because as far as I know, it's just a bunch of chemical detectors. But you could say that about almost anything, I suppose. STUDENT: Well, it seems to me there are a couple. So a smell can remind you of some event that you experienced 30 years ago. STUDENT: And sometimes a memory can make you-- if you remember something, you can kind of experience the smell, even though it might not exist, right? Have you ever had that problem? PROFESSOR: Some Microsoft researcher works on the sense of smell in lobsters. And he said that the organs are almost the same as in ours. But, of course, you can't say much about a system from the primary detectors. The question is, what do you do with the stuff? STUDENT: Well, it seems to me that the important thing about all the senses is that they give us some access-- they give us some mechanism that allows us to build up a model of stuff that happens in the world. So if you don't have vision, you substitute touch, and you still get the idea that two solid objects can't intersect each other, and that sort of thing. Things fall. PROFESSOR: Yes. I know people who have lost their sense of smell. There are some virus infections that do that. And they miss it a bit, but they're not hugely handicapped. STUDENT: You can get a huge stomachache. PROFESSOR: If you eat the wrong thing? STUDENT: Yeah. Because that's how you stop eating rotten food. PROFESSOR: So I guess you should be-- you make sure that the can isn't under pressure. STUDENT: Yeah. So you make sure that you only buy steaks from supermarkets that have sell-by dates on them. PROFESSOR: Sell by. STUDENT: So maybe people have a sense of smell [INAUDIBLE]. PROFESSOR: I don't think anybody's built a good smell detector yet. STUDENT: When people try to create robots that play soccer, I believe they have problems. And then, finally, [INAUDIBLE] the labs are changing everything-- that is a big problem. I wonder whether there is a problem because the algorithms that we have are bad, or it's just that the sensors that we have aren't good enough. For example, Kinect-- they used a different type of sensor to, like, see people, to identify people. Maybe it's just a problem with the sensors that we have, because the algorithms seem pretty good. PROFESSOR: Is the Kinect the one that outlines your posture? STUDENT: Yeah. PROFESSOR: It's a great gadget.
Well, don't get me started on sports. It's just, if you look at the world budget for various activities, it adds up to more than 100%. But how much-- what fraction of the gross world product goes into sports and activities like that? That gets back to, what are we going to do with all the people, now that we have 7 billion of them? And I'm pretty sure that the biologists will find ways to treat or slow down most of the degenerative diseases in the next 50 years, say. People should be living for about 200 years, and what will we get them to do? I just gave a lecture to some other people. And there's a scene in 2001 where the pod comes back to the spaceship, and it turns the handle on the door and opens the door. And I was very impressed. And the next time I saw Kubrick, I said, how did you get the thing to actually work? And he looked at me as though I were an idiot. And he said, we did it with stop-action, and then somebody on the other side of the door turned the doorknob. But I would like to see better robots. And something like this could be-- we could make these and throw them up to the moon. And the idea here is, let's make a robot out of very simple modules. So you see, there are only two kinds of things here. There are these hinges and these rods. And they have some kind of motor in them. And if we could fabricate them to be pretty reliable and ship up a truckload of them and drop it on the moon, then they could each have a solar cell or two, because you don't mind if it takes 10 minutes to move a few inches. Then the idea is, if you had enough of these robots, like four or five, then if any one of them broke, two of the others could take it apart and fix it and replace the parts. And for the price of one shuttle flight or two, we could have robots on the moon for the next 20 years. What are they doing instead? They're still planning more manned space expeditions at enormous expense. And we may not see them in our lifetimes at the rate NASA is going. As you can see, I don't have [INAUDIBLE]. OK. What? STUDENT: [INAUDIBLE] question. So I was reading assessments [INAUDIBLE], and I noticed that initially the question that was asked-- and this is from GPS [INAUDIBLE] and NOAH-- and the plan was that they attempted a theory of bugs. Very soon, I noticed that the question changed to planning. And then-- PROFESSOR: Planning? STUDENT: Just planning in general, research on planning. And then there was no mention of bugs-- the values of critics and stories [INAUDIBLE] in, like, 10 years. And it seemed like the focus switched to planning. And the thought I had was, maybe the question you ask dictates the direction we're probably going to take in the next few decades. And the question I had was-- both in [INAUDIBLE] and in HACKER, there was an attempt to build a library of critics, so the bugs and their patches are in the library, which you can look up. Can that be a representation? We have this library. Maybe when we hit some threshold for this-- especially when moving from something like blocks to the real world-- if that library keeps growing, maybe it's an expensive process. Maybe we want to use that library to create a new representation on the fly. Does that make sense to do? PROFESSOR: Well, of course, most representations are worse than the one you have. It's a great question. How often-- where do we get our representations? Do teachers have an idea, in grade school or nursery school, of what the children are doing?
And I'm not sure that we-- what you learn in education school, is it still conditioned reflexes, or is there some cognitive psychology? And do modern cognitive psychology people have anything like the representation theories that AI people have? The whole thing seems so muddy to me. I asked the other day, how many of you have taken a psychology course? It wasn't many. A few. Do they talk about representations, or are they still in the world of optical illusions and conditioned reflexes and things like that? STUDENT: It wasn't just to find words in these other works, but do they actually say anything? PROFESSOR: Well, that gets back to the [AUDIO OUT] well, representations. We all learn about vectors-- that's elementary physics nowadays. But vector calculus only dates from the time of Willard Gibbs, 1880. When was-- can you look up Gibbs? So how come humankind didn't invent a lot of mathematics between the time of Archimedes and Gauss? Gauss was about 1840. And there are a bunch of mathematicians scattered in those periods. But the modern representations in mathematics-- most of them are only about a century or a little more than a century old. So every child invents a lot of new representations. But we don't know much about what they are. And I guess Piaget was the first psychologist-- around 1930-- to try to make effective theories and prove them by experiments on young children, to figure out how they represent things. Yeah. STUDENT: [INAUDIBLE] it says Gibbs developed vector analysis from 1884. PROFESSOR: Isn't that something? Now, what did Descartes do? He got the coordinates. STUDENT: [INAUDIBLE]. The [INAUDIBLE] quaternions existed before then, and some other algebras [INAUDIBLE]. PROFESSOR: Yeah. The quaternions were a beautiful dead end, and if Hamilton had invented vectors instead, he might not have been able to handle rotations so well. But all the rest of physics would have been better off. How many of you have ever used a quaternion? STUDENT: [INAUDIBLE] graphics. PROFESSOR: For graphics? STUDENT: Yeah. [INAUDIBLE] PROFESSOR: They have the amazing property that they only work in three dimensions, too. So they're no good for relativistic space-time and things like that. It's very strange. Help. Yeah. STUDENT: There's a study where they take newborn infants, place them on a tabletop for the first time, and when they crawl to the edge, they won't fall off. They'll see the edge and crawl back. PROFESSOR: That's terrific. STUDENT: Never having seen the edge before. STUDENT: Do you think this is some innate representation, some ability to model-- a primitive physical simulation already happening-- or do you think this is sort of a hardcoded edge detector? PROFESSOR: Or were they just lucky? Well, if you look at a mountain goat, it gets born and drops out, and it starts to walk in about five minutes. And then it can follow its mother and knows where to get milk, and when it comes to a crevasse-- I get this from Seymour Papert. I haven't seen it myself. But the very first time the baby goat comes to a crevasse, it goes up and around and comes down again. The mother steps across, and the fairly newborn baby is following her around. STUDENT: I don't think they know exactly where to go to find the milk. PROFESSOR: Maybe they smell. STUDENT: I once saw a newborn colt go back and forth between the front legs and the back legs-- [HIGH-PITCHED TONE] PROFESSOR: Wait. STUDENT: I once saw a newborn colt go back and forth between the front legs and back legs, unsure of exactly which to search for.
PROFESSOR: Somebody put a-- there's a movie of a giraffe being born on the web. Have you seen that recently? It's just spectacular, because the things are so incredibly awkward. But after about 10 minutes-- it keeps trying to get up, and after about 10 minutes, it manages to get up and walk, sort of like juggling. It's really just about to fall over. But anyway, so it looks clear that those things that happen a few minutes after birth, there hasn't been time for much trial and error. But when you think about it, getting food is the most important thing in the universe. And so you could afford to-- the system has to put priority on staying alive for the first hour so that you can start to learn after that. Still, it would be nice to know how those circuits work and what is it that's built in. Do the things you-- are your later learning mechanisms mutated copies of the built-in ones at first? Yes. Is that still there? No. STUDENT: [INAUDIBLE]. PROFESSOR: No. How about a five-minute break? I'm running out of voice. There were a lot of discussions. Does anyone want to report on one? STUDENT: [INAUDIBLE] I was talking to [INAUDIBLE], and we were talking about bugs. And we talked about-- I was describing a system that one of my students [INAUDIBLE] worked on that tried to understand the bugs in [INAUDIBLE] procedures, like you would order on Amazon. And it used commonsense knowledge to say, OK, the parts of the purchase are selection of the book that you want to buy, payment, delivery, and then it said, OK, how do we tell if something went wrong? And so then you break it down and say that if the order got in correctly, we would have got a receipt. If delivery happened, then you would have got a FedEx tracking number, and then the system could check these things automatically. So in some sense, that's a little bit like HACKER. You know, having the catalog of bugs and using commonsense knowledge and breaking down and understanding that something went wrong. So I think we can have-- those are systems that are kind of like end-user debugging. So they're debugging without actually having a program to debug. And so then we were talking more generally about just the idea of building on people's knowledge. You know, there was a lot of this AI stuff generated in the early days, but nobody had time to explore all the consequences of it. So a lot of things got lost. And certain things got tricky and maybe not always the right things. But still, there was a lot of stuff out there that could be built on. And part of doing interesting research is understanding what's the really important stuff to build on, even if other people don't think it's fashionable. And then you can be innovative by building on it and stuff. PROFESSOR: It's interesting. Can you think of problem solving as debugging? Because sometimes people think of knowing what to do and planning. But if you think of that Newell and Simon general problem solver, every step is what's the difference between what I want and what I got? And so you could think of the GPS as doing nothing but debugging. STUDENT: It's actually good at debugging, because you get a lot of trace. Then it tells you how to repair it. PROFESSOR: But a higher-level one is you might have spent too much time debugging something that was just a minor part. Maybe if you fixed something else, this bug wouldn't have happened anyway. So is that a theory of problem solving?
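A minimal sketch of the point Minsky makes here: GPS-style means-ends analysis treats every step as a difference between what you want and what you have, so problem solving looks exactly like debugging. The facts and operators below are invented for illustration; this is not Newell and Simon's actual program.

```python
# Means-ends analysis in miniature: each step asks "what is the
# difference between what I want and what I have?" and applies an
# operator known to remove that difference -- i.e., it fixes a bug.

# Hypothetical operators: preconditions are just more differences
# (sub-bugs) that must be fixed first.
OPERATORS = {
    "earn-money": {"pre": set(),          "adds": {"have-money"}},
    "buy-paint":  {"pre": {"have-money"}, "adds": {"have-paint"}},
    "paint-wall": {"pre": {"have-paint"}, "adds": {"wall-painted"}},
}

def solve(state, goal, plan):
    """Achieve every condition in goal; return (new_state, plan)."""
    while True:
        bugs = goal - state                  # the remaining differences
        if not bugs:
            return state, plan               # nothing left to debug
        bug = sorted(bugs)[0]
        op_name = next((n for n, op in OPERATORS.items()
                        if bug in op["adds"]), None)
        if op_name is None:
            raise ValueError(f"no operator reduces the difference {bug!r}")
        # Unmet preconditions are sub-bugs: recurse on them first.
        state, plan = solve(state, state | OPERATORS[op_name]["pre"], plan)
        state = state | OPERATORS[op_name]["adds"]
        plan.append(op_name)

final_state, plan = solve(set(), {"wall-painted"}, [])
print(plan)   # ['earn-money', 'buy-paint', 'paint-wall']
```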
STUDENT: Well, I think one important thing about problem solving is keeping track of what depends on what. So things like truth maintenance systems, you know, that's what they're all about: trying to keep it so that if you know what depends on what, then you can trace back through debugging. PROFESSOR: Are there any systems-- yes, what do companies do when they get complaints? Do they have system-- STUDENT: [INAUDIBLE] and they tell you they can't help you. PROFESSOR: They give you another number to call. STUDENT: Funny story is that I had [INAUDIBLE] help desk inventing here. And this person picked it up and said, do you have this manual? I said, yes, I do. And they said, could you look at this chapter and this section. I said, OK. And then they said, can you look for your problem in that section. I said, great. So after like 15 minutes, I ended up fixing it myself. I just didn't know where the manual was. I could have done it myself. So sometimes that happens with [INAUDIBLE]. STUDENT: [INAUDIBLE] on that topic, but does it make sense to ask how genes represent mistakes? So-- PROFESSOR: Genes? STUDENT: Genes. Is there an idea of, if there's an error in mutation or something? PROFESSOR: Well, of course, every gene that's selected is selected because the parent managed to solve some problem that the others didn't solve, I suppose. But that's a great question, because I've been complaining that evolution has this big bug, namely it doesn't remember why this or-- it doesn't remember why the losers died. So there are lots of minor bugs in the genetic system. You could call them bugs, but when you have a population of animals, and it gets too cold, then some of them die out, because they didn't eat enough before it got cold or whatever. And evolution doesn't keep any record of that. But on the other hand, you could say that all the genes you do have are ones that corrected some bug. So in that sense, it remembers how to solve the bugs, but it doesn't remember what the bugs were. So could you make an-- but to make an evolutionary system that kept records of why the losers died would be-- wouldn't that take a higher-level language or something? Maybe there's no way to evolve such a system. STUDENT: [INAUDIBLE] based on what did survive or mistakes with. PROFESSOR: You could have the thing keep two copies of the genomes, and when you have children, you have one of each. And well, it's too complicated. So when we make artificial evolution, we can probably make the thing millions of times more efficient by keeping a moderate-size record of what caused fatalities. Of course, that's what medical records are for. But they don't get back into the genetic system yet. STUDENT: But doesn't that depend a lot on context? For instance, something may be beneficial during an ice age, but if you institutionalize that, then when you're living in a tropical jungle, it could be to your detriment. PROFESSOR: I was just thinking of ice age, because of what's his name's book. So you'd have to keep copies for millions of years and store all the descriptions of environments and-- STUDENT: Or else you could have that same kind of mutation occurring over periods of time, and when it's beneficial, you keep it. And when the circumstances aren't right for it, then it goes away, with the idea that maybe it will come back later. PROFESSOR: So that's, in fact, what we do when we have historians in society. And it might not make any sense to try to build it into a low-level genetic system. Just carry books around.
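As a toy illustration of the idea just discussed, here is a selection loop that, unlike natural evolution, keeps a record of why each loser died and consults that record before producing offspring. The genome, fitness function, and mutation scheme are all invented for the sketch.

```python
import random

# Toy world: a genome "survives" to the extent its 6 bits match the
# environment. Everything here is invented for the sketch.
random.seed(1)
TARGET = 0b101101

def fitness(g):
    return 6 - bin(g ^ TARGET).count("1")

def why_died(g):
    return [i for i in range(6) if (g ^ TARGET) >> i & 1]  # the wrong bits

dead = {}   # genome -> cause of death: the record real evolution lacks

def mutate(g):
    # Consult the death records: avoid re-creating a known loser.
    candidates = [g ^ (1 << i) for i in range(6)]
    fresh = [c for c in candidates if c not in dead]
    return random.choice(fresh or candidates)

population = [random.getrandbits(6) for _ in range(8)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    for loser in population[4:]:              # the bottom half dies
        dead[loser] = why_died(loser)         # ...but leaves a record
    population = population[:4] + [mutate(g) for g in population[:4]]

print("best fitness:", max(map(fitness, population)),
      "of 6; deaths recorded:", len(dead))
```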
STUDENT: Well, how would we be able to read those books? Because the libraries from 1,000 years ago, I think only specialized people can read very, very well German languages, for example, more than English speakers can, for certain. So all of our interpretations of those old texts are more how to avoid people who come at you with spears or swords or whatever have evolved as well. How does that evolution for the culture, for instance [INAUDIBLE] the question? PROFESSOR: Well, I think things like the iPad are going to-- I was just thinking, my wife was involved with some children who were carrying 30% of their weight in books in elementary school. It's incredible how heavy they are. They don't even have wheels on their book bag. But we could reengineer the nervous system to have gigabytes of storage so that evolution could keep lots of records. STUDENT: I want to know what was being said up there. PROFESSOR: Has anyone-- were you talking to Pat? STUDENT: A little bit about complexity and also about storytelling-- just some current topics and what his lab is working on. For example, how can you teach if you're going to-- people have different models. So any [INAUDIBLE] starting assumptions, something you might be taking as propaganda or maybe taking something [INAUDIBLE] in a sort of less malicious way-- PROFESSOR: What time is his class? I think I should be sitting in on it. STUDENT: I'm not sure. It's [INAUDIBLE] STUDENT: Yeah. Monday, Wednesday, Friday, 10:00 AM. PROFESSOR: It's not so bad. STUDENT: [INAUDIBLE] STUDENT: Lectures are early Monday and Wednesday, and then Friday there's a good problem-solving session. STUDENT: It started out some Fridays were lectures, and then there were less lectures and the-- STUDENT: There's not much left. PROFESSOR: That's the way it always seems. Yeah. STUDENT: So I missed the lecture last week. Did we talk about maybe interesting past projects for the class? PROFESSOR: We didn't. STUDENT: Maybe can you think of some good ones? PROFESSOR: I guess Justin isn't here tonight. I hate to say it, but the best projects were in the days when you could have an incomplete for a whole year. I think now it's-- what is it, the third week of the next term? Yeah. I had some great papers that came a year later. But somebody in the administration decided that incompletes had to be completed. Have you seen any good projects? Because they're not-- nobody's written them yet for the term project. STUDENT: [INAUDIBLE] PROFESSOR: I'll try to find some from previous years. Maybe we can get permission to make a book of them. STUDENT: They also seem to be focused on-- there seems to be a preference for some engineering deliverables as well with some code or programming. PROFESSOR: Not necessarily. Well, there was a year or two when Bill Morgan was one of the TAs, and he was developing a programming language and wanted to get people to do projects in that language. But it wasn't such a good idea, because he kept changing the language. What's the best language? STUDENT: [INAUDIBLE] PROFESSOR: Well, if you're going to-- yeah. If you're going to have-- STUDENT: [INAUDIBLE] PROFESSOR: Which? STUDENT: Fire. PROFESSOR: If you're going to have reflective thinking, then can you do it with the same language at each level? Because if you don't-- STUDENT: [INAUDIBLE] PROFESSOR: What? Oh. STUDENT: [INAUDIBLE] PROFESSOR: How many words were there in Latin? Is it a big language? STUDENT: Depends on how you count.
If you count all the conjugations, that would be two orders of magnitude [INAUDIBLE]. PROFESSOR: When I graduated from Harvard, you had a choice of a BA or a BS. And you could get a BA if you had credit in Latin, which I had from grade school. But I never understood a word of it, hardly. People said, if you learn Latin then it will be easier for you to learn French and Spanish and Italian and all of those. Then, after a while, I suddenly realized that if I had learned French or Spanish or Italian, then it would be easier to learn Latin. And I confronted the teachers with that, and they had no good answer. And besides, it should have been Greek, because Latin is full of pretty good stuff, but Greek has Aristotle. Yeah. STUDENT: So I know Latin pretty well, and I found that-- the thing I found it most practically useful for is I'm taking intro to neuroscience, and there's a lot of different neuroscience words, and you have to memorize all of them. And almost all are from Latin. And so I can memorize them based on associations like dorsal. Oh, I know that means back, so I know that's the [INAUDIBLE]. And like, every word I only remember because I know what it means. Otherwise, just memorizing the words would be way harder. PROFESSOR: That reminds me of-- I guess it's my wife Gloria, who is a physician. And she once had a Greek patient, I think. And the patient said, what's wrong with me? And she said you have gastroenteritis. And he said, I know that, but what's wrong with me? And she realized that that was a nondiagnosis that doctors use to make themselves feel that they know what's happening. So have you used it ever? STUDENT: [INAUDIBLE] PROFESSOR: The Latin. STUDENT: I mean, I used it for what I just described. I also, I mean, I don't think things need to have practical value to have utility or need. Like, I enjoy reading Latin. PROFESSOR: Well, what are the best stories? Are there any good plays in ancient Latin? STUDENT: There are some, yeah. I mostly read poetry. PROFESSOR: Poetry. But they don't rhyme, do they? Because the endings are all the same. STUDENT: Yeah, they generally don't rhyme. PROFESSOR: But it's called poetry. STUDENT: They have a very strict [INAUDIBLE]. PROFESSOR: OK. A fair amount of The Society of Mind is in iambic tetrameter. I just found it was easier to think rhythmically sometimes. It's very strange. I'm not sure it does any-- if you don't know that there's a meter, it certainly doesn't help understanding anything. But I guess in poetry, if you're skillful, then you're seeing analogies, because the emphases are in similar places in different sentences. So you're constructing a network of similarities beyond the meanings of the words, just from the places where the meter hits. Like, this rap is very popular, and that's Alexander Pope. Have any of you looked at any of him? It's all couplets, rhyming couplets. And it's exactly like rap, except without the bad music. Oh, well. STUDENT: A little earlier, you speculated about re-engineering a brain to have gigabytes of memory, and to some extent, we already have that, with very slow look-up times. So I was wondering if you have any commentary on how we could improve our virtuals, computers, or otherwise to help people solve problems and be more intelligent? Do you think these are proceeding in the right direction? Do you think there are other areas of innovation that are lacking? PROFESSOR: Well, I think the web is evolving. Things like Google are getting to be quite remarkable.
And lately, I've been using the Dragon query language. And the thing's remarkable. You send-- if you ask Dragon a question, it goes to Google and makes up a pretty good query. And just about every time I've tried it in recent weeks, the answer to, you know, where is the biggest island with a volcano, or blah, blah, whatever it is, comes right up on the first page. That's just-- so I suppose that there will be some very crude thing sometime in the next decades, where you put a bunch of sensors over one of the speech areas in the brain and go right into the web. And maybe at first with words and after a while you'll get preverbal queries. And then there's no telling what will happen if you have all sorts of correlated systems with little electrodes in other parts of the brain to give contextual clues. STUDENT: Would that be the first port that you would suggest installing in the human body? PROFESSOR: The first output port? STUDENT: Yeah. PROFESSOR: I remember something about the tongue being a great input system. Somebody made a visual system with about 64 or something squared vibrators on the tongue or electric stimulators. And the person could see an outline of an image. But you have to have this pad in your mouth all the time. So the question is, can you make sensory inputs or outputs? It's harder to see how to do that. What's amazing is what you can do with your mouth, like eating cherries and saving the pits and not inhaling anything. You can do things in your mouth that you can't do with a hand. Of course, two hands is better than-- but I think getting outputs from the brain to go into memory devices and feeding them back into the brain, that's going to become commonplace at some point. But we first have to find out how to do it at all and then how to implant these things without a big operation and infection risk and all that. Yeah. STUDENT: Do people currently ever get these implants when they get magnets installed-- PROFESSOR: Get what? STUDENT: --on their ankles? People get magnets installed near their ankles so they can, you know, start feeling-- they feel slightly when there's a magnetic attraction between the magnets in their legs and something. And so they start developing a sort of magnetic sense. And then apparently, people have reported that even if they get these magnets removed, they still sort of retain the magnetic sense, which is pretty weird. PROFESSOR: It sounds weird. Can they tell which way is north? STUDENT: I don't think so. I think they can tell when they're around objects that have magnets in them. They're attracted to them. PROFESSOR: You certainly could make a magnetic sensor and put it anywhere in the body, connected to a couple of nerves. And then you'd know which way is north all the time. STUDENT: There were people who made a belt with a compass, and it had a bunch of solenoids in it. And it sort of poked you whenever part of your waist was pointed north. And after a while, people would sort of no longer notice the poking and just know which way is north. PROFESSOR: Oh. STUDENT: That sort of could be incorporated in a sense. PROFESSOR: My friend, Oliver Selfridge, who died recently, was a pioneer in cybernetics and AI. And he always knew where north was. And it turned out, I traveled around with him some, and whenever he went in a building, after a while, he always knew where north was. But I noticed that he would always go into some office with a window and straighten out the secretary's flowers. He knew the names of all the flowers, also in Latin.
So apparently, he felt uneasy when he was losing track of direction. And he'd go to a window, and before he went in a building, he'd memorize the big features. STUDENT: Does that even work in [INAUDIBLE]? PROFESSOR: What's that? STUDENT: In this building, did it-- STUDENT: Did it work in this building? Because I always get lost. PROFESSOR: I had the idea of chiseling little arrows pointing north. Are there any vandals among us? It would really be use-- STUDENT: The Muslims never get lost, because they always know which direction Mecca is. PROFESSOR: The bug? STUDENT: The Muslims never get lost, because they know always which direction Mecca is. PROFESSOR: Oh. Right. Yes. I think there should be little arrows pointing north in every room. STUDENT: The first time I visited a Muslim country, I noticed that every time I stayed in a hotel room, there was an arrow on the ceiling above the [INAUDIBLE]. PROFESSOR: Oh. On the ceiling. STUDENT: On the ceiling, yeah. PROFESSOR: Where you can't erase. STUDENT: Yeah. That's where it told you where Mecca was. PROFESSOR: We could do that. In fact, if we had it point to Mecca instead of north, then nobody could criticize us. STUDENT: Right. STUDENT: So [INAUDIBLE] What's on the other side of the United States? The ocean? PROFESSOR: Other side of what? STUDENT: Yeah, it's-- STUDENT: On the other side of the Atlantic Ocean. STUDENT: Yeah. Yeah. So I was just thinking that there is a point where Mecca is in all directions. And how can you then go, OK, maybe I should [INAUDIBLE]. STUDENT: That's possible, because within Mecca is [INAUDIBLE] PROFESSOR: I think wherever you are, the probability is about 80% that the opposite point is ocean, unless you're in the ocean. Well, it's still 80%. This is a water planet. We could make a little gun that just pops them up there. And it would take them weeks to get maintenance to remove it. STUDENT: Do you think [INAUDIBLE] to want something [INAUDIBLE]? PROFESSOR: I haven't heard any reports. STUDENT: [INAUDIBLE] STUDENT: You didn't go? STUDENT: I went to Mecca, yeah. PROFESSOR: Hey, could we get Erik Mueller to give us a talk? STUDENT: That would be cool. Sure. STUDENT: But I don't know if they have [INAUDIBLE] PROFESSOR: Well, we could invite him to give a different talk. STUDENT: [INAUDIBLE] PROFESSOR: And then ask questions. It's been 8:59 for a whole minute. There's still time for a question. Nope. Nope. Thank you.
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
6_Layers_of_Mental_Activities.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: I just read chapter 5 for the first time in several years. I was impressed with all the good quotes. [LAUGHTER] I was wondering if anything I said was up to anything those people said. "We don't see things as they are, we see things as we are." What else did Anais Nin write? I don't know who she is, actually. Anyone know this writer? What else did she do? AUDIENCE: She wrote [INAUDIBLE]. MARVIN MINSKY: Is it good? AUDIENCE: Yeah. MARVIN MINSKY: I don't know-- AUDIENCE: Sexually explicit. MARVIN MINSKY: The second half of the chapter is about this wonderful project that really ran for several years, in which we tried to build robots that could see and manipulate things. And all sorts of wonderful things appeared in those years. The goal of the robot project was a very simple one. We decided that the robot would live in a world of mostly rectangular blocks. And a very simple job that a two-year-old cannot do, but a four or five-year-old has no trouble with, would be taking some particular structure. I should have brought some blocks. These are blocks, but they're not going to work. Two of these are flat on the top, so. I doubt if our robot could have built that. It didn't know about rolling. But the idea is you would build something out of these blocks, a little house or more complicated structure. And one of our robots would try to duplicate it, and in order to do that, it had to be able to look at the scene and figure out what the scene was made of, what were the relations of the blocks, and which blocks they were. And it had a big pile of them, so it had to make a plan for what it was going to do. In the very first minute of the project, Jerry Sussman wrote-- we had obtained this robot and got the computer to control it. And Jerry Sussman wrote a program to build and copy an arch. And immediately, it started with this block first and put it here. And in fact, Sussman's PhD thesis was a profound piece of work discussing various classes of bugs, beginning with these kinds of simple ones, where you do things in the wrong order. And as far as I know, there's no PhD thesis quite like it. Do you know anything, Henry? I mean, a lot of theses are about bugs in things and fixing them, but if you're looking for something to write about, you can go all the way back to 1970 and look at a couple of those theses. And generally, they were very ambitious, a whole group of them. I'll discuss them at some point. If you're trying to copy a structure, you have to analyze what is supporting what, and there's no point in trying to place something until its supports are in place. But how do you look at the scene and see things? And at one point in the project there was a remarkable thing. And I think it was Manuel Blum who did this particular one. But he discovered that if you looked at the outline of big massive blocks, there was a good chance that by following a wonderful set of rules that he made, you could figure out what was going on. This is a great feature of working in a restricted world, where the computer can know all of the objects in that world that could exist. And so you can see the process, but he wrote this program that just looks at this outline. Where did I put the death ray? Here it is.
Couldn't see it because it was supported by a napkin. So it just looks at that and says, well, none of the blocks have an interior angle, so this can't be true and those can't be true. So what do I do about that? And you see, I could extend that line either up or this line out. Maybe it does both, I've forgotten. But it does this and it extends this one, and presumably, it tries for a different combination until it finds two that make a little more sense. So then it draws in these. And once it's drawn in those two, you see that it can conjecture that maybe this is a block. And if so, you see then it has to put in those two edges, and that works out fine. And then here's the six-sided thing, and there are no hexagons in the blocks world. So it says, this cannot be. And what could it be? And the answer is it could be that, and so forth. So we were quite astounded that you could do that well. But of course, it's always a surprise. Maybe this is why mathematics works, if you look at something like group theory. Here you have a little system which has five logical axioms. And I guess Galois was the first person to study systems that had those five simple axioms. Every day, to this day, mathematicians around the world discover 10 or 100 new things about the consequences of what happens in that tiny world, where there are objects. There's an identity object. You multiply things. They may or may not commute. That's two different kinds of groups. But they all obey the associative law. So this is an example where a mathematician has invented a little system that describes almost everything that can happen in this tiny world. Well, of course, the world of common sense is much larger, and people like Henry Lieberman, and several others around the world, are trying to-- I don't know if formalize is the right word. Because if you're trying to find a million axioms for a particular universe, you can't hope that that will end up like mathematics. AUDIENCE: [INAUDIBLE] how about the word informalize? MARVIN MINSKY: Informalize. [LAUGHS] Of course, we all hope that someday all of the not many groups that are working on collecting and characterizing commonsense knowledge will be able to, more or less, get them to all work together and start to have something that has the abilities of a four or five-year-old. If you can get a machine that will learn enough, then it will keep getting smarter, and there's no reason to think it will flatten out the way people do. We don't know if people would flatten out. No one knows what would happen if somebody kept thinking for 500 years. You see all of these expensive government projects, but you wonder why there isn't one to-- Well, I don't know what to say about this chapter because it has so many strange and interesting things in it. It ends with a very cryptic discussion of, how do you make a machine that will watch things happen for a long time and then start to predict what will come after that? And the answer is you have to develop stories and scripts and beautiful ways to summarize your experience, and then find ways to fit them together. If you're writing a story, then you're not doing that sort of thing, but you're making a little world and you're inventing events in the past and future of each character and trying to get something that looks, more or less, lifelike to people. Anyway, there's so many things in this that I don't see any point in trying to summarize them all, except to hope you have some questions and things to discuss. Yes?
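For reference, the handful of axioms Minsky alludes to above are presumably the standard group axioms; whether you count five depends on whether you include closure and the optional commutativity:

```latex
\begin{aligned}
&\text{Closure:} && a, b \in G \implies a \cdot b \in G \\
&\text{Associativity:} && (a \cdot b) \cdot c = a \cdot (b \cdot c) \\
&\text{Identity:} && \exists\, e \in G : e \cdot a = a \cdot e = a \\
&\text{Inverses:} && \forall\, a \in G \;\exists\, a^{-1} \in G : a \cdot a^{-1} = a^{-1} \cdot a = e \\
&\text{Commutativity (optional):} && a \cdot b = b \cdot a \quad \text{(abelian groups only)}
\end{aligned}
```

The "may or may not commute" remark is the last line: dropping or keeping commutativity is what separates the two kinds of groups he mentions.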
AUDIENCE: So how well do we actually need to understand intelligence in order to create a replica of it as an artificial intelligence that can fool, say, 99% of the world? So as an analogy, let's say you have an art forger. To truly create something that will fool 100% of the world, they would have to know the exact brushstrokes and what paints and oils the original artist used. But if you just want to fool 99%, you don't need to know that much. So how much do we need to know about intelligence to actually create something that people will see as intelligent? MARVIN MINSKY: I'm not sure what you mean by fool, because if you have a program that solves hard problems, what would it mean to say-- it's just executing instructions, it's not really thinking. AUDIENCE: So I guess a better definition of fool is something like the Turing test, where you say, oh, there's a person. So how much would we have to know about intelligence to say if we could just put them in another room, give them some sort of telephone, if we could just run the Turing test and fool 99% of the people that are testing. MARVIN MINSKY: A test for intelligence. It's a test for when will people think it's intelligent. I mean, I'm not sure. Suppose that you didn't know that there were foreign languages and you came across a Norwegian, and you couldn't understand anything he said. You would say, what is the use of this ridiculous machine that just makes stupid sounds? So I think people have always wanted to measure things, and the IQ test, Binet and Simon-- that wasn't Herb Simon. I'm trying to remember which Simon that was. That became a useful tool for measuring things that children could do and deciding whether there seemed to be something wrong with a child's development or whether it's getting too smart for it. I don't know if anybody got concerned when somebody got high IQs. AUDIENCE: If I could ask a different question then. How much do we have to know about intelligence in order to create something that can create something smarter than itself? MARVIN MINSKY: Well, everyone creates something smarter than oneself in their children, so that doesn't seem very-- I'm not sure how hard that is. I wonder, what is the right question to ask? How do you judge the progress of artificial intelligence? That might be a good question. As I see it, it took a dive in recent years. I think I've complained about it too much, but what happened is that when we started this kind of work, it was a golden age for science. I've said it before, because at the end of World War II, there was a lot of machinery around. Maybe it's almost a joke, but there were a million kids who could get their hands on war surplus electronics. Now, of course, everybody can get their hands on a little computer or something, or cell phone, or whatever. But what's difficult is getting a job to work on-- suppose you wanted to work for five years on some aspect of artificial intelligence, and you think machines aren't smart enough. They don't have enough commonsense knowledge, or they don't use it right, or whatever, and you'd like to really work on that problem. I think I showed you a set of slides that showed a lot of reasons to work on, what are the problems that civilization is facing?
This morning, I turned on WBUR and some people were complaining for a whole hour that there's no way the United States can support older people who have infirmities, either mental or physical, because there's not enough money to hire the nurses, or there are not enough nurses to take care of them. My bet is that instead of spending a trillion dollars on Medicare-- which is one option, but then almost everybody will be working to support the infirmities of these older people. It would be a very strange civilization if you spend all of the resources you need. So why not say what we need is AI to help these people. And if the AI is smarter than a regular person, then it's not even demeaning. So I'd like to see a very small budget, like a billion dollars, that has the goal of making intelligent machines in 20 years, which means that you could hire 100,000 smart, young people to spend 10 or 20 years working on this sort of thing. It would be terribly cheap. Nobody's in charge, though. And you can't buy long-term care insurance, in a sense, because there'll be nobody to provide it. I'm rambling. But you see that, potentially, it's a very important field, but it's not in the politician's spectrum of things you could do to have a better future for less money. I doubt if there's a single congressman who even knows that this is a plausible gamble. Anybody know a politician who understands an issue like that? Is there a question I can answer? I feel I should get Winston to say something. Have you discussed this with students? How many students do you have who would like to be a professor? And what are they doing? Pat was one of a few students who wanted to be a professor. We're talking the late 1960s. And it turned out to be possible. But it's very hard now. WINSTON: I decided I'd leave as soon as I found work. [LAUGHTER] MARVIN MINSKY: I told you when Sussman was a freshman, he said, I like this. I'm going to stay forever. Did you ever find work? WINSTON: Of course, I never looked either, so. [LAUGHTER] MARVIN MINSKY: I had this bizarre experience when I was a senior because I thought I might have to find work. And I ran into somebody called Edmund C. Berkeley, who had written a book about robots. He had edited a popular electronics magazine. And so I went to see him. He was actually working at the Prudential Life. Did I tell this story? Because he had written this book on Brainiacs, or Geniacs. It was a little machine with three wheels of cardboard and some wires and you could put in little contacts and switches and get it to compute Boolean functions. This is a 1948 computer. Didn't do much, but it did. It had a little book to go with it. So I went to visit him. And he said, well, I don't really know anything more about this than I described in that toy. But I have a young friend named Martin Gardner who has built some other kind of logic machine, and he lives in Brooklyn. So I went to see this Martin Gardner, and I'm sure you all know who he was. He had just finished a thesis, or a paper, called Order and Surprise. And for me, it was the best thing I had ever read since Bertrand Russell, or Aristotle. Namely, he said, suppose an alien came down and looked at the world, and he discovered this huge green thing with a billion blades of grass that were all pointed almost the same way and they're all about the same height. And this alien said, what's the probability that such a thing could happen? Could it be an accident?
Well, no, then the probability of one blade of grass would be 0.000 something, and the probability of two of them would be that squared, and the probability of a billion of them would be 2 to the billion, 2 to the 2 to the thousand million, or something like that. I can't calculate anything like that. So anyway, Gardner convinced me, I think, that I should just keep doing what I was doing, namely figure out how to build learning machines and see if I could make them get smarter and smarter. But it's a great digression. The same thing happened to me at least a dozen times, of just somehow running into the right people at the right time, and I hope that happens to you. What's the trick, Ed? ED: You have to recognize them sometimes. MARVIN MINSKY: Fredkin is an old friend of mine from the 1970s. ED FREDKIN: 1950s. MARVIN MINSKY: '50s, whatever it was. And he was always in the right place at the right time. In fact, he was director of Project MAC for a while, which became the Laboratory for Computer Science. Only he did such a good job that some of the faculty was jealous. End of story. It's a bad story. So you have to look for them. If you like somebody's work, just go and see them. However, don't ask for their autograph. A lot of people come and ask me for my autograph and it's creepy. What I did was read everything they'd published first and correct them. That's what they really want. Every smart person wants to be corrected, not admired. Back to reality. It seems to me you don't want a Turing test. We all know that if we could make a machine that behaves a lot like a four-year-old, then we could make it better and it would start behaving like a five-year-old, and eventually it would be smarter than us. This Turing test thing is a joke. And Turing regarded it as a joke. If you read his paper carefully, he's saying, I'm not deciding whether a machine is intelligent. I'm only deciding what performance would be such that a person would be really impressed, whether it deserved to be applauded or not. In the case of the IQ test, you could say the same thing, except that generally, people who are good at 10 different things, or much better at 10 different things than most other people, are good candidates for solving a new problem they haven't seen before. So it's not useless. What's annoying is when people think that Turing was saying this is a test for whether machines are intelligent. Yes? AUDIENCE: So I want to argue that representation is really important for intelligence. So for example, if you have a SAT sentence-- MARVIN MINSKY: If you have a what? AUDIENCE: S-A-T sentence, like x1 or x2 or x3 or x4. MARVIN MINSKY: A question with several answers? AUDIENCE: You want to know if that's satisfiable or not. It's important to know, for example, if that sentence is satisfiable or not, if it's solvable or not, if there is a solution that makes that sentence true or false. But you can just say that, oh, there is a solution that this is true, the sentence is true and I'm just transforming from one representation to another. So at the beginning, I have a representation, which is a sentence, at the end of this sentence, x1 or x2 or x3 or x4. And at the end, I have just satisfiable or not, but they are just representations of the same thing. For example, there are some theories about, for example, Shannon, he just states what's the optimal representation according to the length of the word. But maybe that's not the most important thing.
Maybe there is some trade-off about how big is the sentence and how easy I can get to another representation of that sentence or that thing. Like it seems that we humans are very good at representing stuff and switching between representations, but we don't have a very good sense of what's a good representation in that sense, that I can change between, I don't know, solutions, or I can change between representations of the same thing. And maybe an intelligent machine is a machine that can represent things in a good way and can switch very well. MARVIN MINSKY: Well, I think he's asking a deep question, which is-- how many of you know Shannon's theory of information? Almost everyone. That is, if you have a set of things, how many binary choices or questions would you have to answer to find a particular one? That's log n. But if you don't know the probability, if they have different probabilities, then you need a more complicated formula, which is the sum of the inverse log probabilities, blah, blah, blah. And that's a beautiful theory. Now, that works perfectly for well-defined situations, for example, where you have some enormous set of messages that you can describe with set theory in a neat way with probabilities. And you want to know how long it will take to transmit information to pick one of that large set on a particular physical channel. Shannon discovered this wonderful formula, which turned out to be the same as the formula for entropy in physics. I have a long story or a short story about that, which is that thousands of papers were published about this new theory of Shannon's, which came out in 1950, I think, maybe '48. And then suddenly they stopped, because in some sense, all of the important questions about that theory had been answered by the middle 1970s. So that's less than 25 years. Anyway, a lot of people said, but this theory only works for very well-defined mathematical situations where there's signals and probabilities and symbols, and so forth. Is there a similar theory that could describe machines? Like suppose you take an automobile and it's got a lot of gears and pistons and spark plugs and electric gadgets. It's not enough to describe that as a list of parts. You have to say something about what each part does, the shapes of parts and how they fit together, and so forth. And so shortly after Shannon's theory became popular, lots of people said, well, is there a similar theory for structural information? Is there a theory like that for geometric forms and for mechanical things and for chemical reactions, and so forth? In other words, is there a theory of what you might call semantic information, rather than just statistical information? And as far as I know, there isn't any. There still might be. It's the only case I know where a whole scientific field appeared and then stopped. You look at everything else and it keeps going. In fact, we had a journal of information theory, and the editorial board was mostly MIT professors. And this journal was published for about 10 years, and then one day we were picking the papers for the next issue and everyone on this committee rejected all of the papers, and we declared the journal finished. What journals have closed lately? There must be some but they don't get much attention. Will The Wall Street Journal close? [LAUGHTER] AUDIENCE: All the important questions about Wall Street answered. MARVIN MINSKY: There are a lot of people camped out on their sidewalk. I don't know. What's bothering you?
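In modern notation, the two formulas Minsky refers to here are: for n equally likely messages you need log2 n bits, and for unequal probabilities you need the entropy, the expected "inverse log probability":

```latex
H = \log_2 n \quad \text{(n equally likely messages)}
\qquad
H = \sum_i p_i \log_2 \frac{1}{p_i} = -\sum_i p_i \log_2 p_i \quad \text{(general case)}
```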
AUDIENCE: Can you just say more about what you mean by semantic information theory, because that's bothering me at the moment. MARVIN MINSKY: I don't know what I mean. I guess the point is that if you look at the objects in Shannon's theory, they're just little points in some abstract space and they each have a probability, and that's all there is to them. And if you say there's a million gadgets in your catalog and each one has a certain probability and you don't know anything else about them, then suppose you wanted to make a big sequence of them and transmit to someone else how to reconstruct that sequence. Well, if you just took the first one and the second one and wrote a sequence, which goes AB, AB, AB a million times, then you could compress that by somehow communicating a description of what you mean by repeating a sequence and then say, now repeat the sequence AB a million times. And you can, presumably, describe a million in some ways also, and the whole mess. Once you've sent those descriptions, then the further such messages of that kind would be very short. But now if you go and take random samples of these million things, if they're all of equal probability, then it's going to take 20 bits to tell about each symbol, and there's nothing you can do to reduce that bandwidth. So semantic information would be somehow transmitting other commonsensical information about the objects you're talking about, rather than just the probabilities of the individual items. Whatever it is, nobody ever found-- everybody said, we want semantic information theory and no one could say what they meant by it. If you had a good definition, maybe one of us can do it. AUDIENCE: Let me try. I mean, isn't that the kind of thing that we do in a lot of knowledge representation, is if you make an ontology in advance and we have a common ontology, then I just need to index those concepts [INAUDIBLE] of the full body of commonsense knowledge, or some default body of semantics. Then yes, you can communicate a lot of semantics in very short messages. MARVIN MINSKY: Well, for example, generally, the most useful words in the language are the short ones. I don't see how you could get such a neat theory as Shannon's. But it's like group theory or something. Arithmetic is much harder than group theory, because in arithmetic, you have a situation where there's two different groups on the integers where you're multiplying things and you're adding things. And number theory is immensely complicated, and group theory as well. It can get complicated. But generally, it's a very clear field. I don't know what to say. Very likely, out of the attempts to build common sense systems will come some beautiful theories at some point. You might almost have them now. I was trying to read some of the people from your group and the papers are pretty hard to read. They look very good, but I wish I could get them down to one page and understand it. AUDIENCE: Intuition for tensors. MARVIN MINSKY: Tensors, things with a lot of subscripts. What do you think the AI people should do next? AUDIENCE: I think I'm going to harp on the Turing test still. If a machine were able to demonstrate the kind of commonsense reasoning that a four-year-old does by building that arch, is that analogous to the imitation game that Turing was trying to propose? Because in the paper, he does say with improved performance, memory, and program that the machine could have.
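A small sketch of the contrast just described, using Python's zlib merely as a stand-in for "some agreed-upon encoding"; the exact byte counts it prints are illustrative, not part of any theory:

```python
import random, zlib

# Highly regular message: "AB" repeated a million times. A short
# description ("repeat AB a million times") suffices, and a generic
# compressor finds something close to it.
regular = b"AB" * 1_000_000
print(len(regular), "->", len(zlib.compress(regular)))   # tiny vs. 2,000,000

# A million picks from ~2**20 equally likely items costs about 20 bits
# (2.5 bytes) each, so ~2.5 million bytes in total -- and on average no
# encoding can beat that. zlib's output is no smaller than the input.
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(2_500_000))
print(len(noise), "->", len(zlib.compress(noise)))       # essentially incompressible
```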
So if it were able to imitate that behavior, is it a reflection of what was involved in getting to that point to imitate it? I mean, deception is one part. I think there are a lot of objections he addresses. So in our example, if a machine actually demonstrated the common sense of a four-year-old, wouldn't that be an analog? MARVIN MINSKY: We once had a Turing test situation. In the early days, we had teletypes attached to a big computer in Technology Square, and it was all connected on one of the first beautiful time sharing systems. And one of the engineers who worked on the time sharing system was a young professor named Herb [? Jaeger. ?] And I fooled him for a minute into thinking that I had an AI program, but I was actually in another room. AUDIENCE: The first Turing test. MARVIN MINSKY: He would type questions and I would answer them in a very stilted computer-like fashion. And then he said, how much is this times that? And he had two 15-digit numbers, which he must have just typed like this. So I very cleverly replied, accumulator overflow. [LAUGHTER] But I misspelled accumulator. [LAUGHTER] So I failed the Turing test. You remember [? Jaeger? ?] Ed Fredkin here is one of the three or four inventors of large-scale time sharing computers. Any good stories about that? ED: There are lots of good stories from long ago, but I don't know if people would be interested in them. I'll say one thing. Long ago, when I met Marvin, John McCarthy, who was also a professor at MIT at the time, was very mysterious to me. John McCarthy was a professor of electrical engineering and Marvin was in the math department, if I remember right. And I had this idea of maybe the universe is a computer, in some sense, and physics would then be some kind of discrete space-time state process. And I brought that idea to both John McCarthy and Marvin. And Marvin told me that I should look at what von Neumann did in cellular automata, which was great advice because I'd never heard of cellular automata. But John McCarthy's response was interesting. I said to him, do you think I ought to continue working on this? And he thought about that for a second. Then he said, yes, the world is large enough. It can afford to have one person working on such ideas. MARVIN MINSKY: There was one time when we-- I keep telling these stories and I don't know if I've told you this one. But it was clear that we wanted to build a big time-shared computer. And we went to IBM and asked them if they would support a project to make a time shared computer. And the head of research asked us to explain what it would do. And the answer was, we'd have a big computer, like a 701, and we'd have 20 teletypes, and different people would be typing on them, running their programs. And each time somebody typed a character, the big computer would switch to their program and run a few steps of their program in response to that. And the head of research at IBM said, that's a terrible idea. You say there's 30 people in there typing at five or 10 characters a second. That means you're interrupting the computer 300 times a second. How could it ever get any work done? [LAUGHTER] And we got the machine from Honeywell, wasn't it, rather than IBM? Was it GE or Honeywell? I forget. ED: I think it was GE first. MARVIN MINSKY: Oh, well. ED: They sold it to Honeywell. MARVIN MINSKY: Sometime later, IBM had some pretty good researchers working on artificial intelligence. And Thomas Watson, who was the president of the company, heard about that.
He sent out an order saying, nobody should use any expression like that. Because he didn't want the public thinking that IBM was going to make intelligent machines because the customers would get very upset. They continued research for a while, but they never used the expression AI again. I don't know if they use it now. Is Watson AI? [LAUGHTER] AUDIENCE: I think they do. I think they just call it deep question answering. MARVIN MINSKY: Deep question answering. AUDIENCE: The guy who created it was very definite in response to that question. He said, this is a system to play Jeopardy and nothing else. But later on, IBM's PR people had been touting how smart it was. MARVIN MINSKY: Yes. I don't know if it can do any commonsense reasoning, but yeah. AUDIENCE: So I went through a few talks by Ferrucci, the guy who did the Watson thing. I commentated the matches online. So I have a career as a sports commentator for this new thing of AI sports. A lot of people ask me, did it have commonsense reasoning or did it have AI in it? And I think the answer is that it did, but only some. It wasn't the only thing it did. It was kind of a raisin, in Patrick Winston's phrase. So one of the people working on the Jeopardy team was Erik Mueller. MARVIN MINSKY: Oh. AUDIENCE: Who wrote a textbook on commonsense reasoning, among other things. And so I can't believe that there wasn't some influence from him in there. The most important thing I think about the architecture of Watson was that it was a society of models. There's a very nice article in AI Magazine that explains how it was put together. It was using some statistical models. It was using some models that were very stupid that I wouldn't endorse as a way of doing AI. But the idea is that it had a meta-architecture that was constantly evaluating them and trying to figure out which ones of the methods work best on which kinds of problems. MARVIN MINSKY: So it was using a lot of different methods in deciding when to page them in and out? AUDIENCE: Yeah, and it had an evaluator. I think one of the things that Erik worked on was if there was a problem involving numbers, then he got that problem. So he probably had some kind of commonsense module about numbers. Are the numbers a little big, or is this a quantitative problem that I could figure out by looking up a formula? There were probably a few heuristics of that type in it. So I would say it had some AI and it probably had some commonsense knowledge, but that probably wasn't the bulk of it. The most interesting thing about it was the fact that it had this architecture that was constantly trying to figure out which one of the many methods that it was trying to use. When the thing came out, I got a message from the lab director, Frank Moss, saying well, wouldn't it be great if they'd used your commonsense software on Watson? In fact, they had asked us to work with them and I turned them down. So he said, wouldn't it be great if they'd used your software? And I said, well, did you hear about the CMU guys who did work with them? He said, no. And I said, well, there you go. [LAUGHTER] MARVIN MINSKY: Pretty good. Yes, they don't mention that it's a society of mind idea. AUDIENCE: Ferrucci talks about that. MARVIN MINSKY: Really? AUDIENCE: But not the details. MARVIN MINSKY: Should be interesting. They fired their other AI group, the one that-- AUDIENCE: Doug [? Regan. ?] MARVIN MINSKY: Doug [? Regan. ?]
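As an illustration only, here is a toy version of the "society of models" architecture described above: several weak methods plus a meta-level that weights each method by question type and picks the best-scoring candidate. The methods, categories, and weights are all invented for the sketch; this is the idea, not IBM's design.

```python
# Toy "society of models": weak answerers plus a meta-level evaluator.

def lookup_method(question):
    facts = {"capital of france": "Paris"}    # stand-in knowledge base
    return facts.get(question.lower()), 0.9   # (answer, self-confidence)

def arithmetic_method(question):
    try:
        return str(eval(question, {"__builtins__": {}})), 0.8
    except Exception:
        return None, 0.0

METHODS = {"lookup": lookup_method, "arithmetic": arithmetic_method}

# "Learned" trust per question type -- hand-invented here; in a real
# system these weights would come from training on past games.
TRUST = {
    "factoid":      {"lookup": 0.9, "arithmetic": 0.1},
    "quantitative": {"lookup": 0.1, "arithmetic": 0.9},
}

def classify(question):
    return "quantitative" if any(c.isdigit() for c in question) else "factoid"

def answer(question):
    qtype = classify(question)
    candidates = []
    for name, method in METHODS.items():
        ans, conf = method(question)
        if ans is not None:
            candidates.append((TRUST[qtype][name] * conf, ans, name))
    return max(candidates) if candidates else None

print(answer("capital of France"))   # Paris, chosen via the lookup method
print(answer("17 * 3"))              # 51, chosen via the arithmetic method
```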
AUDIENCE: Just to add to that, I think the AI Magazine article that Henry spoke about was added to the class forum. And I checked with Erik, and a couple of papers are out on the subject. I think there is a link to all the papers that are out. I think one other thing they did was some question analysis, where they decompose the question, and that's fed into some hypothesis generation about what possible answers you can get. They feed their answer back into the question and they create a hypothesis of what to search. So I asked one of the people working on it if they used commonsense, and I think his answer was not yet, but they do mine Wikipedia for some kind of data that's the closest, so. Maybe, like Henry said, maybe they use it in a limited context, but there are a lot of-- like six, seven-- papers in the forum. MARVIN MINSKY: Yes. I asked Google a bunch of questions the other day. It really got the right articles among the first 10 for many of them. I don't remember. I didn't keep a log. It's getting good at parsing what you said. AUDIENCE: I'm just wondering if you-- oh. I was just wondering if you were logged into your Gmail account at the time, because I know that Google's starting personalized searches. So you'll see search results based on your previous searches, as well, so it might seem a bit more intelligent for that reason as well. MARVIN MINSKY: Is that the Google Plus or the regular Google? AUDIENCE: I think it's the regular Google, as long as you're signed into your account, your Gmail account. MARVIN MINSKY: Haven't noticed anything. AUDIENCE: I definitely noticed that it noticed what I'm looking for because of that. You could try signing out and you can see the results are often less relevant. MARVIN MINSKY: It certainly says you retrieved this item five times before. So it's keeping-- I don't know if that record is in Google or in my machine. How does the Dragon thing work, the dictation? AUDIENCE: I'm pretty sure all of the databases that it uses are on your machine. They come with the Dragon software. It's not a cloud service, but I'm not 100% sure. That may have changed since I last used Dragon. MARVIN MINSKY: Janet Baker is giving a lecture-- what day is it? AUDIENCE: It's Friday. MARVIN MINSKY: Friday. It would probably be interesting, because she'll explain not only how it works, but how to make a company produce it. AUDIENCE: So this is a question about Watson. When it pages all these different methods, is it trying every single one of them at the same time? I'm just trying to clarify that. MARVIN MINSKY: It has a lot of processors, but I don't know what it's doing. Do you? AUDIENCE: I don't know to the level of whether it's trying all these simultaneously. I know that in the offline experiments, they compile cases. So they play games against humans. They look at records of past games. The training isn't completely automatic, but to some extent they learn what kinds of techniques work best with what kinds of questions. And so probably, they have some upfront classification. I'm sure they don't run every method at every time, because some of these methods are probably pretty time-consuming. So they probably do some classification of what they're-- I mean, I think the most impressive thing is just it tries to figure out what kind of question is being asked, and it did a pretty impressive job about that. So I don't really know the details of it. MARVIN MINSKY: Is their publication complete enough that you can actually see how it works?
AUDIENCE: They have an article in AI Magazine. [? Sharad ?] said there might be others. I've just read that one article so far. AUDIENCE: They've drawn up the architecture. Even in the article you mentioned, they actually have a picture with [INAUDIBLE] how the question comes in, how it's broken down. The applications, they start giving you an idea of the algorithms they use, not the full picture. MARVIN MINSKY: It certainly would be nice if one could see the whole thing, and even experiment with it. AUDIENCE: This question is about which comes first, the machinery or the commonsense? I know once we built it, for the sake of reasoning and intelligence, we could feed it this database of common sense and the rules that come with it. But was part of the machinery built because there was common sense? This is going back thousands of years, millions of years, whatever. In that context, what we would call common sense, for the Neanderthal period, for example, was the machinery in their brain probably built because of that? Or did that happen first, and as a software, we started working on building common sense? MARVIN MINSKY: We don't have much record of what happened in that five million years since we were gorillas or chimpanzees, or whatever. Have you heard of finger monkeys? I don't know if I can find it. They're little lemurs about as big as your thumb, and they smile. [LAUGHTER] AUDIENCE: Are you sure this is a real thing? MARVIN MINSKY: I wondered. We'll put it on some web page. So we come from those primates and we don't know anything about their mental development since the humans split off from the-- pretty much about the same time that the chimpanzees and orangutans diverged, about five million years ago. So there's a few paintings in caves, but I believe those are all tens of thousands of years old. Are there any human habitats over a million years ago? What are the oldest ones? AUDIENCE: The oldest artifacts? MARVIN MINSKY: Yeah. AUDIENCE: 75,000 years. MARVIN MINSKY: That's not much. AUDIENCE: Drilled seashells, presumably used as jewelry. MARVIN MINSKY: Oh. Well, you could use it for scraping things, probably rather useful gadgets. AUDIENCE: Conventional wisdom is we've only been anatomically modern for about 200,000 years. MARVIN MINSKY: Yes. I'm curious as to when dogs appeared, because as soon as you got dogs, then you could normally-- to survive in the jungle, you have to have your nose near the ground because that's the way to detect bad things. But once we had dogs, then people could have the dogs look for the dangers and then the people could stand up. So I'd like to know when the dogs got domesticated. Or maybe there was some other animal that wasn't dogs that served this function. Or maybe it was older people-- [LAUGHTER] --who were ordered to crawl around. Presumably older people were 30 or 40 years old. I think one remarkable thing is that humans live almost twice as long as any other primate. So it could be that for most animals-- not all, but for most animals, only the parent generation is around. But it could be that humans had a great advantage in having grandparents who were still around frequently, which means that the middle generation could do other things than take care of the children. Because chimpanzees and gorillas live about twice as long as all the other primates, so they get to be about 30 or 40, and humans get to be about 60 or 80. So this probably has some meaning having to do with selection and survival. Yeah?
AUDIENCE: So I want to know if you know how many projects there are out there like trying to do intelligent systems. If you believe that [INAUDIBLE] is the right theory to be put in practice, or just lines of code to be written and the theory is already out there. MARVIN MINSKY: That's a great question. We ought to have a map. Has anybody compiled-- [INAUDIBLE], you know what's happening in Europe in that area? AUDIENCE: I know some, but I don't know if there is many people doing ambitious efforts. By the way, I just looked up on the CNN site. Quite early in the lecture you mentioned that it would be nice to have this one billion investment for developing AI to take care of elderly people. So now I just looked up on the CNN website how much Watson cost, and the estimate is that $1 to $2 billion. So they spent $1 to $2 billion developing that. MARVIN MINSKY: Billion? AUDIENCE: Yes. Program. MARVIN MINSKY: To develop Watson? AUDIENCE: That's [INAUDIBLE]. AUDIENCE: It's impossible. [INTERPOSING VOICES] AUDIENCE: Unless we counted all the computation since the beginning of time. [LAUGHTER] AUDIENCE: IBM has a large budget. AUDIENCE: So apparently, IBM has a $6 billion research budget every year, and they put 5% to 10% of that directly, or indirectly, and that's how this is calculated. I don't know the details. MARVIN MINSKY: Wow. AUDIENCE: It's more like $100 million. AUDIENCE: One, [INAUDIBLE]. AUDIENCE: [INAUDIBLE] people or something for a few years? AUDIENCE: Yeah. Yeah. So that's the official estimate. And then there's another estimate [INAUDIBLE]. AUDIENCE: Which is [INAUDIBLE] because it's only about 10 Super Bowl ads. [LAUGHTER] MARVIN MINSKY: How long is a Super Bowl ad? Is it a minute? AUDIENCE: 30 seconds. AUDIENCE: 30 seconds, yeah. AUDIENCE: So they get a lot more publicity out of that than they would have got on the Super Bowl. [LAUGHTER] AUDIENCE: Yeah, it's a bargain. MARVIN MINSKY: Yeah, I was wondering, what's the world cost of sports? AUDIENCE: Direct or indirect? AUDIENCE: Direct, I think, would be interesting enough. AUDIENCE: Well, here we go. MARVIN MINSKY: Well, if you paid people $1 an hour to watch sports, how much would it cost? AUDIENCE: But then again, it's probably quite cheap. If the vehicle [INAUDIBLE] sports is killing each other, or having stuff like that, then that might be more expensive by human capital. MARVIN MINSKY: What would they do? They would invent sports. [LAUGHTER] There must be some country that does it. AUDIENCE: Yeah. Just for comparison, Coca-Cola's annual advertising budget is $1.6 billion. MARVIN MINSKY: What's their profit? AUDIENCE: I don't know. We could find out. MARVIN MINSKY: At the Media Lab meeting, there were yet smaller Coke bottles. [LAUGHTER] Seven ounces. AUDIENCE: Diet? MARVIN MINSKY: Both. What? AUDIENCE: They are red. MARVIN MINSKY: They're red. They look like Coke cans, but they're seven-ounce size. AUDIENCE: The in excess budget was $7.7 billion. MARVIN MINSKY: Oh, that's more than I thought. AUDIENCE: So whatever happened to [INAUDIBLE] parallelism? Didn't it used to be in the [INAUDIBLE]? MARVIN MINSKY: Well, I think the Watson is-- how many processors does it claim to use? AUDIENCE: I don't know, 1,000? AUDIENCE: 16 terabytes of RAM in each processor can only index, what, 64 gigabytes? So that's at least hundreds of processors. MARVIN MINSKY: There are a lot of interesting problems that can't be solved with parallelism. It depends on the geometry of your search trees.
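A minimal sketch of that limit, in the spirit of Amdahl's law (an editorial illustration, not a description of Watson's actual design; the 10% sequential fraction is an assumed number):

```python
# Amdahl's-law-style bound: if a fraction of the work is inherently
# sequential, adding processors stops helping. (Illustrative sketch;
# the `sequential` fraction here is an assumption, not a measurement.)
def speedup(n_processors: int, sequential: float) -> float:
    parallel = 1.0 - sequential
    return 1.0 / (sequential + parallel / n_processors)

for n in (1, 10, 100, 1000):
    print(n, round(speedup(n, sequential=0.10), 2))
# Even with only 10% sequential work, 1,000 processors give under 10x.
```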
So if a process is going to require a great many steps, sequential steps, then parallelism can't help much. But I don't know what's happened. Every now and then, you see an announcement that somebody in some country has the biggest computer at the moment. And it's usually a few hundred or 1,000 processors that they got somewhere else. So I haven't heard of any attempt to make a really colossal machine. Neil Gershenfeld had a project here, in which he hoped to make an extremely efficient and inexpensive super parallel computer. It turned out that it was hard to do some kinds of operations, like recursion, in that architecture. Does anybody know what's happened lately? Is that project still going? It just stopped. David was a vital part of it. AUDIENCE: That's me. [LAUGHTER] MARVIN MINSKY: Someday he'll tell the story. AUDIENCE: So the whole [INAUDIBLE] machine project is stumped? AUDIENCE: Yes, I think everyone involved has acknowledged at this point that it's not very efficient. MARVIN MINSKY: But the demos were really pretty. Yes, it would be nice to know what the brain does and how parallel it is. I won't repeat my complaints about neuroscientists, but generally, we don't see architectural theories of what the brain centers do, except in the beautiful cases of the cerebellum, which is maybe fairly well understood; the visual cortex, which is fairly well understood, at least at the first stages. But I've never seen any good theories of how different brain centers communicate. We know that there are these bundles of wires, but we don't know if they're like K-lines or [INAUDIBLE] coding, or whatever. And I'm afraid most of the papers I read go the other direction and say there's much more in the glial cells that support the neurons than anybody ever suspected, and that sort of mystical-- it's like saying that the paint on the car is the important part. AUDIENCE: Well, in your book you mention that the higher level processes are more likely to be sequential, not parallel, because it takes a lot of resources to build them and to support them. I found that, also, when you're trying to fall asleep, if you activate your higher level resources, like say from self-reflective and up. If you started reflecting, it's almost impossible to fall asleep. It's almost like you're only able to fall asleep when those are resting and your lower level processes are the only ones that are active. MARVIN MINSKY: You say you can't fall asleep when you're reflecting? AUDIENCE: Yeah. AUDIENCE: [INAUDIBLE] says the opposite, right? He had the whole stage of his life where he reflected as he was going to sleep and was able to control his dreams. MARVIN MINSKY: Well, what do you call control dreaming? AUDIENCE: Lucid dreaming. MARVIN MINSKY: What's the word? AUDIENCE: Lucid. MARVIN MINSKY: Lucid dreaming, where you can program yourself for a particular topic. How many of you can do that? 1, 2, 3. It would be boring to have the same dream every time. AUDIENCE: I mean recognizing what kind of thinking is going on in yourself, or trying to recognize it. MARVIN MINSKY: Well, a good question is, how much of dreaming is thinking and how different is it from regular thinking? Is it just that you've turned some critics off? I don't think neuroscientists have critics, though. Yeah? AUDIENCE: Do you know John Cocke's theory of dreams? MARVIN MINSKY: No. AUDIENCE: OK. John Cocke's theory of dreams goes like this.
If you imagine that we have an optimal encoder that stores things away by optimally encoding it, well, then you have to have the decoder. But if it's an optimal encoder, then any random noise would decode into a plausible sequence of events. MARVIN MINSKY: Right. AUDIENCE: And if you happen to hear a barking dog while you're asleep, that fact can be incorporated into the dream by that same mechanism with the optimal decoding. That's something I received on a phone call once, a long time ago. MARVIN MINSKY: So if you were in an anechoic chamber, maybe you would not have such good dreams because you wouldn't get any external signals. Right. AUDIENCE: Your brain can probably make quite a lot of noise. So I don't think you'll need to have somatic signals to have that kind of-- AUDIENCE: You don't need that. The only point is it can be incorporated into a dream. That does happen. When you're dreaming and there's some particular sound, like a fire engine going by, it does get incorporated into your dream, at least in my case. So his mechanism explains that. MARVIN MINSKY: John Cocke was a friend of ours, a first-rate mathematician who worked at IBM. AUDIENCE: He was a great computer architect and he designed the PowerPC series of CPUs. And he invented RISC architecture. MARVIN MINSKY: And improved my theorem on tag. I published a paper with him. AUDIENCE: The projector isn't on right now. MARVIN MINSKY: I don't have anything new to say. It usually shuts off at 8:30. AUDIENCE: It's 8:15. MARVIN MINSKY: 8:15? Oh. AUDIENCE: That's the program audio, which is muted. AUDIENCE: [INAUDIBLE] AUDIENCE: Of all the projectors [INAUDIBLE] I have a question. Do you think it's necessary for intelligent beings to sleep? And if so, do you think future AI will be required to sleep? MARVIN MINSKY: Hm, that's a great question because I don't think anyone really understands why sleep is necessary. I mean, maybe it's to empty buffers that have gotten stuck. You can make up computer-like theories, but I've never heard of any theory that comes along with some evidence. Did you have a-- AUDIENCE: They say it's good for memory, though. Maybe when you create your K-lines, like when you're seeing stuff you create your K-lines and it's not the optimal way. I don't know. MARVIN MINSKY: Oh, the standard theory is that REM sleep is necessary to transfer records into long-term memory from short-term memory. I don't know if anyone has a theory of how that works, though. How do you route a representation of knowledge from the hippocampus, or the amygdala, or wherever it's stored in short-term memory? I think it's the amygdala, supposedly. And how does it find a place in the larger brain? And what representation does it have, and so forth. I don't read that literature much, but I don't know of any popular theory of how knowledge is represented in short-term memory, and later in long-term, and how it's transferred and what the locations are and what determines them. Any of you had any contact with that, outside of neuroscience? Yeah. AUDIENCE: I guess the one standard computer analogy for dreaming is garbage collection. So you talk about copying garbage collection, and so you can imagine it's some kind of copying garbage collection. MARVIN MINSKY: Yes. When do animals start sleeping? Do crocodiles sleep? We could ask Google. What do they dream about? AUDIENCE: Porpoises are interesting. They only sleep half their brain at a time. MARVIN MINSKY: Right. Porpoises and albatrosses. AUDIENCE: They would drown.
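Returning to Cocke's dream theory above, here is a toy rendering of the idea that an optimal decoder turns any random noise into a plausible sequence of events (my own sketch under simplifying assumptions; the event vocabulary and transition table are invented for illustration, not Cocke's actual formulation):

```python
import random

# Toy "optimal decoder": a Markov chain over everyday events. Feeding it
# random bits always yields a plausible-looking sequence, which is the
# property the dream theory relies on.
TRANSITIONS = {
    "wake up":        ["walk outside", "hear a dog", "talk to someone"],
    "walk outside":   ["hear a dog", "talk to someone", "wake up"],
    "hear a dog":     ["walk outside", "talk to someone"],
    "talk to someone": ["walk outside", "wake up"],
}

def decode(bits, start="wake up", steps=8):
    """Consume two bits of 'noise' per step to pick the next event."""
    state, events = start, [start]
    it = iter(bits)
    for _ in range(steps):
        options = TRANSITIONS[state]
        index = (next(it) * 2 + next(it)) % len(options)
        state = options[index]
        events.append(state)
    return events

noise = [random.randint(0, 1) for _ in range(16)]
print(decode(noise))  # random noise in, coherent-looking "dream" out
```

On this picture, an external signal like the barking dog is just a few non-random bits mixed into the stream, and the decoder folds it into the same plausible storyline.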
MARVIN MINSKY: Have you ever seen an albatross take off? It usually takes four tries. It falls over, and it's lucky-- AUDIENCE: Do Androids Dream of Electric Sheep? [LAUGHTER] MARVIN MINSKY: It's lucky it doesn't break its neck. It runs as fast as it can and falls over. [LAUGHTER] AUDIENCE: Wasn't that the story that became Blade Runner? AUDIENCE: Yup. MARVIN MINSKY: Oh. AUDIENCE: Do Androids Dream of Electric Sheep? by Philip K. Dick, right? MARVIN MINSKY: Yes. Did they sleep? AUDIENCE: The Replicants? MARVIN MINSKY: Yeah. It came back. AUDIENCE: Minsky? MARVIN MINSKY: [INAUDIBLE] chapter 5. AUDIENCE: It has to go in grids. MARVIN MINSKY: How far back do we have to go toward that slide? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: I couldn't recognize it, either. AUDIENCE: Oh, there we go. AUDIENCE: It really looked like what I saw on my monitor [INAUDIBLE], but looking at my eyes doesn't do the same thing. [LAUGHTER] MARVIN MINSKY: Can Google recognize faces here? AUDIENCE: Facebook. AUDIENCE: It can search face [INAUDIBLE] images. So you can upload an image and say, find an image like this one. And I think it just does it by color selection, shapes, and general things. AUDIENCE: And they say if they used facial recognition technology, they would be in danger of violating all sorts of laws. So they make sure that whatever they do is officially not face recognition. AUDIENCE: Marvin, there's another question. AUDIENCE: So they were saying they have some sort of [INAUDIBLE] or tools that can recognize the waves, like electromagnetic waves, in your brains. Do you think we need more powerful tools, or is it just that they're not asking the right questions? Yeah, that's the question. Because you said that the neurons, just like transmitter, like certain voltage, and maybe what we just need to study is these patterns of these wave signals that we send between neurons. And it doesn't seem that hard to detect that. AUDIENCE: [INAUDIBLE] AUDIENCE: Marvin, can you turn your microphone up? MARVIN MINSKY: How could it change the switch in my pocket? All of the current scanning instruments are very low resolution. If one of those white matter bundles going from one part of the brain to another has 10,000 signals, you can't resolve any of those. So there might be lots of bits going along one of these fiber bundles, and all you can see with the brain scanning techniques is a change in metabolism of a fairly large pixel. You're not seeing the signals themselves. You're seeing the average of thousands of signals. So it's going to be very hard until you actually stick a lot of needles in one of those pathways. Probably somebody will be able to invent some not very dangerous way of doing that, but there aren't any now. They put fairly high resolution things on the surface of the cortex, and that way you can get a lot of information. But not through the skull. Anyway. AUDIENCE: So I think [INAUDIBLE] And I'm also interested to hear whether you believe that more advanced or more detailed images of how the mind is implemented in our brains, or the actual data, would be helpful in discovering the actual cognitive functions or brain center operations, or whatever. MARVIN MINSKY: Well, it's very hard-- AUDIENCE: [INAUDIBLE] too little to answer that question. MARVIN MINSKY: It's just very hard to see how you could get thousands of electrical signals out. For example, there's the big channel between the speech recognition area and the speech production area.
I forget what it's called. It would be wonderful to know what's happening on those many thousands of fibers. And then you could probably discover how words or syntactic structures are represented, if you could just see what's actually happening on those neurons. Somebody will invent some three-dimensional scanner someday, but right now it takes too much energy and would ruin everything. AUDIENCE: So yes. I recently read a report online about a study done at UC Berkeley. I don't know if you've heard of it. But the idea is that they supposedly found a way to make scans of brainwaves and visualize it using like fMRI software. I think the idea was that certain brainwaves could be associated with certain images. MARVIN MINSKY: Associated with what? AUDIENCE: Certain brainwaves would be associated with certain shapes. I actually have the report right in front of me, but basically they showed test subjects two different, I think it was hour-long YouTube videos. MARVIN MINSKY: Oh. AUDIENCE: So on this website, they showed the YouTube video that the test subject was shown and the simulated image that was derived from the brainwaves of the particular subject. MARVIN MINSKY: Is that an image of what the patient is looking at the moment, or is it one that he's thinking about? AUDIENCE: It seemed to be a combination of both, because when I looked at the video, for just generic shapes, it would match up pretty easily. But when there was a profile, when it was zooming in on someone's profile, the image was very difficult to render. I can show you the link if you want to take a look at it. MARVIN MINSKY: Well if you take off a piece of skull and look at the visual, the primary visual cortex-- which is called area 17, for no particular reason-- then if you're operating on an epileptic patient or something, like someone like that, then while you're doing the operation, you can put an array of a lot of electrodes, say, a 20 by 20 array, or even more. And in the primary visual cortex, you can get an image, because this piece of cortex goes right to the optic nerve and gets signals from the back of the eye. But in a sense, that doesn't tell you much because you know that the image has to get inside the head somehow, and the real question is more what's happening in the next two areas, 18 and 19, which do all sorts of processing. But you can't get an image from them, or at least as far as I know, nobody has. Also, these images are being updated at about 10 per second, so you get movies, which is nice, from the primary cortex. AUDIENCE: Do you think it would be possible to look at brainwaves and try to determine-- if you record someone's brainwaves while they're dreaming, do you think it would be possible to determine what they're dreaming about? MARVIN MINSKY: Well, brainwave is a specific term for a very large scale event, so there's not much information in that. But if you could see the electrical activity in the brain, then you could-- the phrase brainwave stands for alpha, beta. There's about four different types of waves, and those are very unlocalized. The reason they're interesting is that you can pick them up with an audio amplifier. They result from the firing of very large numbers of neurons, and so you get millivolts, rather than microvolts, and it's easy to pick up outside the skull. But picking up the signals from large numbers of individual nerve cells has never really been feasible. And maybe it won't be so hard.
In a few years, maybe you can make a little integrated circuit that has little synthesizing modules that make the tiny silver wires, or carbon fibers, that go down into the brain and go around obstacles a little bit. There's no reason why you couldn't have fibers of the order of a micron in size, a thousandth of a millimeter, that could pick up very large numbers of signals without damaging the brain too much. I guess people working on nanotechnology are getting close to considering such things because they-- AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Is there any gadget that can make a carbon fiber under controlled conditions? AUDIENCE: Not really. Currently, you're not at the limiting point where you need nanotubes. He's still building fibers with more like 10 micron diameter, which you can make using conventional technologies. They're almost like [INAUDIBLE]. So it's not down to the one micron level yet, but there's at least a few groups that are definitely thinking about how to get very large numbers of signals out of the brain by using thin fibers that go around obstacles a little bit. MARVIN MINSKY: Well, if you could get a large number, it might not matter much where they're from, because-- AUDIENCE: That's the idea. MARVIN MINSKY: --nobody's ever seen any brain center working in great detail. AUDIENCE: Right. But you probably still wouldn't be able to get them close enough together to see any two neurons that are directly connected to each other. MARVIN MINSKY: Also, you could put signals in and learn a lot. AUDIENCE: Mm-hmm, that's true. MARVIN MINSKY: Yeah. AUDIENCE: I have my own prejudice about this, but I think a lot of the interest in the brain is a lot like interest in how birds fly, with regard to designing airplanes, or inventing airplanes. So in inventing an airplane, the bird, especially a soaring bird, like a hawk, was a good thing to look at a little bit. But then studying the feathers and laying of eggs, if we had copied them, then a Boeing 747, after a while, would lay this egg and out would hatch another one, which you'd have to feed something, I don't know what. So it's the high level principles we really need from the brain, not the low level implementation. AUDIENCE: What do you consider to be low level? AUDIENCE: Well, it's like, where do all these neurons go and how are they interconnected, and so on? If you looked at the back panel of a Cray computer, they had, it looked like, a totally random wiring diagram. Every wire went someplace at random, it appeared. However, if you looked at that and then designed a computer by having all the wires going at random, it wouldn't compute, of course. So I just think that there is some high level principles at work in the brain that we got to figure out, but the low level wiring, I think, is kind of beside the point. Because it's pretty clear to me that however much signal processing goes on in the brain, one chip today can probably outdo it by a factor of 1,000, in my opinion, especially with something called CUDA. And this is a new thing. It turns out in your computer for the display, there's a bunch of GPUs. A GPU is a Graphics Processing Unit that's pretty powerful. I looked at my laptop. I discovered there were like 380 CUDA processors, and there were two display chips in my laptop. And there's now software that lets you compute with those, in general, and you can do amazing things on occasion.
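To make the CUDA remark concrete, here is a minimal sketch of computing on the display hardware, assuming an NVIDIA GPU and the CuPy library (which mirrors NumPy's array API on CUDA devices); the array size is arbitrary:

```python
import numpy as np

x = np.random.rand(10_000_000).astype(np.float32)
print(float(np.sqrt(x).sum()))        # ordinary CPU computation

# Same computation routed through the GPU's many small processors.
# Requires an NVIDIA GPU and CuPy (pip install cupy); API mirrors NumPy.
import cupy as cp
xg = cp.asarray(x)                    # copy the array into GPU memory
print(float(cp.sqrt(xg).sum()))       # runs across the CUDA cores
```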
MARVIN MINSKY: So if you could send stuff to your visual cortex, you might be able to compute something useful, instead of just seeing stuff. AUDIENCE: Patrick, I think that's the way you hypothesize a lot of thinking happens, right? It's all through vision? AUDIENCE: I disagree with that a little. I think what happens to a neuron doesn't matter much, but what happens to a bundle of neurons matters a lot. And I think what's largely broken about the last 50 years in computer vision is that it has not been recognized that the visual system has, for every bundle of wires going in one direction, an equally big one going the other way. And so when I look at that picture of Lincoln, I think that one way of thinking about what must be happening is that when you squint, you finally get to a point where there's no reason to believe it isn't Lincoln. And the only reason you don't see Lincoln right away is because when you see those blocks, there is a reason to believe it's not Lincoln. So I think a lot of it is top-down hallucination. It's guided a little bit by what comes in through your retina, but for the most part it isn't. So that's a guess based on the fact that half our brain is devoted to vision and everything goes in every direction, and it seems to me that that's an important fact. I would also speculate that a lot of what we call common sense is a consequence of reusing that apparatus. My favorite example is imagine what it would be like to run out of this room carrying a full bucket of water. We all know your leg would get wet, but nobody ever told us that. We've never seen it done. It's probably not available as a fact on Google, but we certainly know it, and we know it, I think, because we can imagine it and reuse our perceptual apparatus to figure out what's going to happen. So that's sort of just-in-time common sense, and once we've imagined it once, we can cache it. So I don't think, in the end, there's a need to have a whole lot of common sense to start with because we can build up that library as we need it. AUDIENCE: So Marvin, what do you think? Do you think that much of common sense as a consequence of-- do you think that much of common sense can be accounted for by imagining situations and figuring it out on the fly? Perhaps we're using our perceptual apparatus? If so, then Watson doesn't have a chance because it doesn't have perceptual apparatus or any imagination. MARVIN MINSKY: Well, you're certainly using all sorts of-- I mean, when I think of doing this, I can hear it clinking just before I get there. So it's not just vision. I wonder, when are children able to visualize complicated mechanical situations? There's this phenomenon where children of age two can count to two, a three-year-old can count to three, and then four-year-olds can count to 10. It's interesting how long it takes to get from three to higher. AUDIENCE: Yes, that's an interesting experiment. They count up on their fingers. They add two and two to get four. I wonder if they can imagine that without actually looking at their fingers. MARVIN MINSKY: I had a little essay in a book on Logo, complaining that if you look in a child's world, it's very hard to find three of anything. There's lots of twos because of symmetry, and you have two hands and two feet and stuff, and if you're lucky, two parents. But if you walk around the house, which I did, I couldn't find three of anything that a child might consider to be significant enough to form a concept for threeness. Well, you can't get there.
Apparently, once you get to four, the rest is easy. But why is it a whole year between two and three? And so a great experiment, which would only cost a billion dollars, would be to provide a million children, two-year-olds, with toys that have three parts, or something, and see if they learn to count to three slightly sooner. AUDIENCE: So this discussion made me think about the concept of embodied cognition, which is the idea that because human beings are born into a three-dimensional world and are born with a body and with all the, I guess, everything that's associated with having a body and living in this spatial existence, a lot of our mental structures take advantage of these things, as in a lot of the way that we think about things utilizes space to organize our thoughts and utilizes the fact that we have bodies to help us think. So I guess this made me think about, in order to make computers that think like humans, would we have to fake them-- make computers think that they have human bodies, perhaps? MARVIN MINSKY: You could give them a little simulator. I think you'd do just as well with stick figures than three-dimensional. That's 2 and 1/2 dimensional, sort of. But I don't know if that idea of embodied cognition actually holds up anymore. It was popular for a while because-- actually, it was MIT professor Dick Held, whom I think I mentioned a couple of weeks ago. He did some experiments which showed that if you carry an animal around, it can't find its way back, in general. But if you drag it around on a leash or something, then it can. And so he concluded that people couldn't do motor learning unless they had actively engaged the environment. But then we got reason to think that this didn't apply to humans because of-- that PhD thesis of-- Jose, what's his name? Valente. He had a cerebral palsy patient who had never moved around in the real world. He was sort of bedridden. But he observed stuff, and when he was given the controls of a Logo turtle, he didn't have any trouble, when the turtle turned around, making it turn left instead of right, because it was facing the other way, which was the kind of thing that Held had shown that his lab animals couldn't learn. So the idea of embodied cognition became very popular, and it was why [AUDIO OUT] AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Now it lit up. How do you keep your hand out of your pocket? [LAUGHTER] AUDIENCE: We need it out now. AUDIENCE: So while Marvin's battery is getting recharged there, I think the problem with Brooks is that he observed that people are smart and people have bodies. Therefore, we'd better have bodies if we're going to be smart. And that might be sort of true at a certain level, but it's totally true in the sense that maybe it's the perceptual apparatus that's important. But just presuming that there's something religious about having a body seems to me to be wrongheaded because the natural tendency is to go off and spend several years soldering up a robot, thinking that you're contributing to intelligence. So it's a little bit like Ed's feather example. It's not important to have feathers, but it can be useful to know how feathers work and how they're hollow, and all that stuff, to reduce weight. So the real question is, what does having a body actually do for you? Not to go off and suppose that there's something mystically and religiously sacred about having a body. MARVIN MINSKY: Yeah. There's no reason why a machine couldn't have a simulated world. AUDIENCE: In fact, we do.
We imagine stuff all the time. MARVIN MINSKY: Yeah. The question is, do one-year-olds do that? Who knows? Probably they do. AUDIENCE: One last question. AUDIENCE: As soon as they start planning, [INAUDIBLE] imagine everything. [INAUDIBLE] AUDIENCE: So when we are imagining stuff, we don't usually imagine the smell of things. I don't know. It's important that we use our apparatus. Maybe our imagination is constrained by how much area we use. But like cats-- I don't know what the point is that I'm trying to make-- but cats have a really good representation of smell, and so do dogs. And maybe why they can't find a path is because the vision apparatus in their brain, the part that represents vision, is not as well developed as in humans. Because when we imagine stuff, we use vision more than smell or, I don't know, touch. Like in dreams, you don't feel touch too much. You imagine with your eyes, with your vision, more than with smell and touch. And maybe why we're so good is because we concentrate more on vision than smell or touch, or any other. MARVIN MINSKY: Well, that embodied cognition issue is a strange one because some people feel very strongly about it, and I'm not sure what the psychology of those psychologists is. Anyway, it looks like it's time to go. We could ask Google if it's still raining. [LAUGHTER] You don't need a body anymore with Google.
MIT_6868J_The_Society_of_Mind_Fall_2011
7_Layered_Knowledge_Representations.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: What do you think AI people should work on next? AUDIENCE: There were lots of questions I was going to ask you before you said your question. I was going to ask you, what kind of questions do you think that AI people should be asking right now? MARVIN MINSKY: That's right. Anybody have a meta question? One good question is-- oops. I could focus it better or make it bigger. Is that enough layers? It's possible that I got the idea from Sigmund Freud. Who knows what Sigmund Freud's three layers were? Sure. AUDIENCE: Id, ego, super-ego. MARVIN MINSKY: I can't hear. AUDIENCE: Id, ego, super-ego. MARVIN MINSKY: Freud wrote about 30 books. I know because I had a graduate student once who decided to quit computer science and go into emergency medicine. He had an MD, which he wasn't using, and he suddenly got fed up with computer science. So he sold me his set of Freud books, which is-- But in Freud's vision, these don't seem to be a stack. But there is this thing, which is basic instincts. I shouldn't say basic-- maybe, to a large extent, built in. And this is learned socially. And I think the nice thing about Freud's concept, which as far as I know, doesn't appear much in earlier psychology, is that these conflict. And when a child grows up, a lot of what we call civilization or socialization or whatever comes from taking the built-in instincts-- which is, if you see something you like, take it-- and the social constraints that say, you should negotiate. And if you want something someone else has, you should fool them into wanting to give it to you, or whatever. So in fact, when I make this big stack of mechanisms, that really-- well, that's actually not the organization that the book starts to develop chapter by chapter, which was, instincts, learned reactions, and so forth, up to self-conscious. What's the next-to-top layer? I've forgotten. AUDIENCE: Ideals? MARVIN MINSKY: Yeah, I guess so. It's hard to tell. The reason I had six layers is that, unlike what people do when making theories in science, I assume that whatever ideas I have, they're not enough. That is, instead of trying to reduce the mind to as few mechanisms as possible, I think you want to leave room. If your theory is to live for a few years, you want to leave room for new ideas. So the last three layers, beginning with reflective thought and self-reflective and the ideals, it's hard to imagine any clear boundaries. And at some point, when you make your own theory, maybe you can squeeze it into these extra boxes that I have. A little later in the book, another hierarchy appears. This is imagining, as a functional diagram, to show how knowledge or skills or abilities or whatever you want to call them, the things that people do, might be arranged.
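To make the stack concrete, here is a toy rendering of the six levels from The Emotion Machine as a dispatch structure where harder situations escalate to higher layers (the layer names are from the book; the dispatch scheme and the handlers are assumptions made for illustration):

```python
# Toy rendering of the six-layer stack from The Emotion Machine.
# A situation is passed up the stack until some layer's handler claims it.
LAYERS = [
    "instinctive reactions",
    "learned reactions",
    "deliberative thinking",
    "reflective thinking",
    "self-reflective thinking",
    "self-conscious reflection",
]

def respond(situation, handlers):
    for layer in LAYERS:
        result = handlers.get(layer, lambda s: None)(situation)
        if result is not None:
            return layer, result
    return None, "no layer responded"

handlers = {
    "instinctive reactions": lambda s: "withdraw hand" if s == "pain" else None,
    "deliberative thinking": lambda s: "make a plan" if s == "problem" else None,
}
print(respond("pain", handlers))     # handled at the bottom of the stack
print(respond("problem", handlers))  # escalates past the reactive layers
```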
And as far as I know, because virtually every psychologist in the last 100 years has suffered from physics envy-- they want to make something like Newton's laws-- the result has been that-- except for the work of people like Newell and Simon and other pioneers who started research in artificial intelligence, you've heard me complain about neuroscientists, but there's pretty much the same complaint to be leveled against most cognitive scientists, who, for example, try to say, maybe the entire human mind is a collection of rules, if-then rules. Well, in some sense, you could make anything out of if-then rules. But the chances are, if you tried to make a learning machine that just tried to add rules in some beautifully general fashion, I suspect the chances are it would learn for a while and then get stuck. And indeed, that's what happened maybe five or six times in the history of artificial intelligence. Doug Lenat's system, called AM, Automated Mathematician, was a wonderful PhD thesis. And it learned arithmetic completely by itself. He just set up the thing biased so that it would have some concept of numbers, like 6 is the successor of 5 and stuff like that. And it fooled around and discovered various regularities. I think I mentioned it the other day. It first developed the concept of number, which wasn't very hard, because he wrote the whole thing in the AI language called Lisp. How many of you know Lisp? That's a lot. In looking at the biographies of the late John McCarthy all week, there were lots of attempts by the writers to say what Lisp was. [LAUGHTER] The tragedy is, you probably could have described it exactly in a paragraph. Because it's saying that each structure has two parts, and each of those has two parts. I don't remember seeing any attempt to say something about this programming language in-- yes. AUDIENCE: How do you imagine the layers interact with each other? MARVIN MINSKY: Those? AUDIENCE: Yeah, those and the six layers. MARVIN MINSKY: I think they're layers of organization. Yes, because a trans-frame is made of two frames. But then, if there were a neuroscientist who said, oh, maybe when you see an apple and you're hungry, you reach out and eat it, so you could think of that as a simple reflex-- if this, then that. Or you could say, if I'm hungry now and I want to be not hungry, what's a possible action I could do? And that action might be, look for an apple. Yes. AUDIENCE: So I was teaching my class on the day that McCarthy passed away, and then I was explaining. And then I had some students in the class who weren't even computer scientists, so I was thinking about the same problem, which is, how to explain what Lisp was? And so I said, well, before that, there were languages like Fortran that manipulated numbers. Lisp was the first language to manipulate ideas. MARVIN MINSKY: Mm-hmm. Yes, you can manipulate representations of-- yes, it's manipulating the ideas that are represented by these expressions. And one thing that interests me is, another analog between psychology and modern cognitive psychology and computer science or artificial intelligence, is the idea of a goal. What does it mean to have a goal? And you could say, it's a piece of machinery which says that, if there's a situation-- what you have now-- and if you have a representation of some future thing-- what you want. So of course, at any time, you're in a situation that your brain is somehow representing maybe five or ten ways, not just one. But what does it mean to have a goal?
And what it means is to have two representations. One is a representation you've made of some structure which says what things are like now. And the other is some representation of what you want. I don't think I need that. And the important thing is, what are the differences? And instead of saying, what's the difference, maybe it's good to say, what are the differences? So what does it mean to have a goal? It means to have some piece of machinery turned on. You can imagine a goal that you don't have, right? Like I can say, what's David's goal? It's to get people to go to that meeting. So what you want is to minimize the differences between what you have and what you want. And so there has to be machinery which does what? It picks out one of these differences and tries to get rid of it. And how can you get rid of it? There's two ways. The good way is to change the situation so that difference disappears. The other way is to say, oh, that would take a year. I should give up that goal. I'm digressing. So you want something that removes-- the feedback has to go this way. Let's change the world. And get rid of that difference. Well, how do you get rid of a difference? That depends on-- maybe you have a built-in reflex. Like if you have too much CO2 in your blood, something senses it and tells you to breathe. So you've got built-in things. And the question is, how do you represent these sorts of things? And in fact, I think I got a lot of this idea from the PhD thesis of Patrick Winston, who was here a minute ago. [LAUGHTER] But the question is, how do you represent what you want and how do you represent what you have? And I think the big difference between people and other primates and reptiles and amphibians-- reptiles, fish, and going back to plants and so forth-- is that we have these very high-level powerful ways of representing differences between things. And this enables us to develop reflexes for getting rid of the differences. So this is what I think might be a picture of how the brain is organized. And at every level, these things are made of neurons. But you shouldn't be looking at the neurons individually to see how the brain works. It's like looking at a computer and saying, oh, I understand that. All I have to do is know a great deal about how each transistor works. The great thing about a computer is that it doesn't matter how the transistor works. The important thing is to recognize, oh, look, they usually come in pairs-- or really four or 10 or whatever-- called a flip-flop. Do you ever see a neuroscientist saying, where are the flip-flops in this or that? It's very strange. It's as though they're trying to develop a very powerful computer without using any concepts from computer science. It's a marvelous phenomenon. And you have to wonder, where did they grow up and how did they stay isolated? AUDIENCE: In the biology department, I think. MARVIN MINSKY: In biology departments. AUDIENCE: Or psychology departments. MARVIN MINSKY: Well, I started out in biology, pretty much. And then I ran across these early papers, one of which was by Lettvin and McCulloch and Pitts and people like that. In the early 1940s, the idea of symbolic neurons appeared. It had first appeared in 1895 or so with the paper that Sigmund Freud wrote but couldn't get published. I think I mentioned that, called Project for a Scientific Psychology. And it had the idea of neurons with various levels of activation. And sometimes you would have a pair of them.
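A minimal sketch of that goal machinery, in the style of Newell and Simon's difference-reduction (GPS): a goal is a pair of representations, now and want, and the loop repeatedly picks a difference and applies an operator that removes it. The states and operators below are invented for illustration, and a real system would also check operator preconditions and spawn subgoals:

```python
# Goal = (now, want); the machinery minimizes the differences between them.
# Toy difference-reduction loop; states are sets of features.
OPERATORS = {
    "hungry":   ("eat an apple",  lambda s: s - {"hungry"}),
    "no apple": ("find an apple", lambda s: s - {"no apple"}),
}

def reduce_differences(now: frozenset, want: frozenset) -> list:
    plan = []
    while True:
        differences = now - want          # features to get rid of
        if not differences:
            return plan                   # no differences left: goal achieved
        d = next(iter(differences))       # pick out one difference
        action, apply_op = OPERATORS[d]   # operator that removes it
        plan.append(action)
        now = frozenset(apply_op(now))

print(reduce_differences(frozenset({"hungry", "no apple"}), frozenset()))
```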
And one of the pair would be inhibiting the other, and so that could store some information. And he's not very explicit about how these things might work. But as far as I know, it's about the first attempt to have a biological theory of information processing at all. And he was unable to get it published. AUDIENCE: Marvin? MARVIN MINSKY: Yeah. AUDIENCE: Since McCarthy did this, just, do you have some reflections on the stuff that he did or his contributions? MARVIN MINSKY: There are a lot of things that he did. I noticed that none of the obituaries actually had any background. What had happened is that Newell and Simon had struggled to make programs that could do symbolic logic. And they made a language called IPL. And IPL was a language of very microscopic operations. Like you have several registers, put a symbol in a register, perhaps do a piece of arithmetic on two symbols if they're numbers. If not, link up-- you can sort of make registers artificially in the memory of the computer. And you could take two or three registers. It had instructions for making lists or trees. So you could arrange these in a way that-- so here are the three things. And this is just a simple list, but I've drawn it as though these two were subsidiary to that. Of course, that depends on what program is going to look at this. And you could have a program which can say, here are two arguments for a function. And it doesn't matter what order they're in. It just depends how you wrote the function. So Newell and Simon had written a programming language which said, put something in a register, link it to another register, and then perform the usual arithmetic operations. In fact, what they were doing is mostly performing logical operations, because even the early computers had ANDs and ORs and XORs, and things like that. So Newell and Simon had written a program that could deal with Boolean functions and prove little theorems about, not A or B. Is this not A or not B? Is that wrong? I forget. It should have more nots. Anyway, they wrote this beautiful but clumsy thing made of very simple logical primitives. At the same time, IBM had spent 400 man-years writing a program called Fortran 1, or Fortran. I'm not sure that people had serial numbers on programming languages yet, because we're talking about the middle 1950s. And McCarthy had been thinking about, how would you make AI programs that could do symbolic reasoning? And he was indeed particularly interested in logic. To backtrack-- in fact, Newell and Simon had got their program to find proofs of most of the theorems in the first volume of Russell and Whitehead's Principia Mathematica, which is a huge two-volume book from about 1905, I think, which was the first successful attempt to reduce mathematics to logic. And they managed to get up to calculus and show that differentiation and integration can be expressed as logical functions and variables. It's a great tour de force, because logic itself barely existed. Boole and a few others, including Leibniz, had invented Boolean algebra-like things and around-- Frege and others had got predicate calculus with their [INAUDIBLE]. And that stuff was just appearing. And Russell and Whitehead wrote this huge book, which got all the way up to describing continuous and differentiable functions and so forth. The first volume was huge and just did propositional calculus, which Aristotle had done some of, also.
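Since the "each structure has two parts" description keeps coming up: Lisp's basic datum is a pair, and lists and trees are chains of pairs. A minimal Python rendering of cons, car, and cdr (a sketch of the idea, not of McCarthy's or IPL's actual implementation):

```python
# Lisp's core structure: a pair whose halves may themselves be pairs.
def cons(a, d): return (a, d)   # build a pair
def car(p):     return p[0]     # first part
def cdr(p):     return p[1]     # second part

# The list (6 5 4) as chained pairs, ending in None (Lisp's NIL):
numbers = cons(6, cons(5, cons(4, None)))
print(car(numbers))             # 6
print(car(cdr(numbers)))        # 5 -- "6 is the successor of 5"
```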
And anyway, McCarthy looked at that and said, why can't there be a functional language like Fortran that can do symbolic reasoning? And pretty much all by himself, he got the basic ideas of Lisp. They were built on the Newell and Simon experiment. But he basically converted something like Fortran, which was only able to manipulate numbers, into Lisp, which was able to manipulate arbitrary symbols in various ways. And if you want to know more about that, you can find McCarthy's home page at Stanford. I think if you Google search for McCarthy and SAIL-- SAIL is Stanford Artificial Intelligence Laboratory-- and you'll get his home page. And there's a 1994 article about how he invented Lisp. Yes. AUDIENCE: Right after you started talking about Lisp, you jumped over and you said, here's the now and the want, which is the current state and the desired state. And I think you were going for some kind of analogy between the symbolic evaluation [INAUDIBLE] and what conscious entities do. However, one of the most beautiful things about Lisp is the homoiconicity, the fact that you have things such as macros. What is there in the conscious entity that is equivalent to the macro? MARVIN MINSKY: Macros? AUDIENCE: Yeah. MARVIN MINSKY: Well, each of these levels are ways of combining things at the previous level. The way I drew it, at the top, there are stories. What's a story? You mention some characters in a typical story. Then you say, and here's a problem these characters encountered. And here's what Joe did to solve the problem, but it didn't work and here are the bugs. The reason that didn't work is that Mary was in the way, so he killed her. And then blah, blah, blah. So that's what stories are. If you just write a series of sentences, it's not a story, even though The New Yorker managed to make those into stories for about a period of 20 years. But a typical story introduces a scene and it introduces a problem, and then it produces a solution. But the solution has a bug. And then the rest of it is how you get around that bug and how you maybe change the goals in the worst case. So a story is a series of incidents. I wonder if I brought my death ray. It looks like I didn't. Oh, but you don't need it with this. You have to be here. Anyway, but what's a story made of? Well, there's situations. And you do something and you got a new situation. So what's that? That's what this thing I called a trans-frame, which is a pair of situations. OK, so what's an individual situation? Well, it's a big network of nodes and relationships. And I'm not sure why I have semantic networks down here rather than here. Oh, well, a frame is a collection of representations that's sort of stereotyped or canned, that might have a single word like-- oh, just about any word, breaking something. If I say, John broke the X, you immediately say, oh, that's a trans-frame. Here is an object that had certain properties, parts, and relationships. And it's been replaced by this thing which has most of the same objects, but different relationships, or one of the objects is missing and it's been replaced by another frame, and so forth. So one question is, what's the relation? So this is a picture of cognitive representations. Everything is made of little parts. And in The Society of Mind, I described the idea of K-lines. What's a K-line? It's an imaginary structure in the nervous system. There's really two kinds.
There's a sort of perceptual K-line, which is something that recognizes that, say, 10 features of a situation are present. And if these are all present, then a certain bunch of neurons or neural activity goes on. And on the other hand, when you think of something, like a word, suppose I say microphone, then you're likely to think of something that has a business end which collects sounds. And it has a stand or maybe it has a handle. And if you're an engineer, you know that probably it has a battery or a transmitter or a wire. So those are the things that I call frames or K-lines, really. And anyway, chapter 8 of The Emotion Machine talks about that. And there's a lot of detail in the old book, The Society of Mind, about what K-lines and things like that could be made of. Now, whatever those are, and as far as I know, no matter how hard you look, you won't find any published theories of, what in the nervous system is used to represent the things above that sort of midline there of cortical columns. I suspect that almost everything that the human brain does, that fish and those lower animals or earlier animals, I should say-- I'm not sure that higher and lower makes any sense-- are probably symbolic processes, where it probably does more harm than good to have elaborate theories of, what's the physiological chemistry of neurons. But at some point, we want to know what, in fact, the brain is made of. The cortical column has a few hundred brain cells arranged. And there are several projects around the world trying to take electron microscopes and pieces of mouse brain or cat brain and make a huge connection matrix. The problem is that even the electron microscope pictures still aren't good enough to show you at each synapse what is probably going on. Eventually, people will get theories of that and get slightly better instruments. And I'm not sure that the present diagrams that they're producing are going to be much use to anyone. Yes-- some nice questions. AUDIENCE: This question might be a little bit difficult. So I'm going to start from the goal being a difference between the now state and the desired state. MARVIN MINSKY: How do we reduce the difference? Yeah. AUDIENCE: Yeah, how do we get the difference? And now, I'm also going to take the fact that we and the animals are different. And I'd say that one of the biggest differences is that we, as people, can describe these differences. MARVIN MINSKY: It actually is only good for the recording. AUDIENCE: OK, so we can describe these differences and talk about them with other people without having to act on those goals or hypothetical goals. So if everything that has to do with the goals is represented with a complex structure like this one, and I want to implement it in a computer, well, for every hypothetical case that has to do with the solving of a certain problem, I can just make more copies in the memory. But if I have a human brain, then I don't know how convenient it is to postulate that there is like a huge memory databank where you can make copies of everything that's going on, or if you have to assume some kind of a huge collection of pointers and acting on those pointers. So this is just a random point, and I'd like to hear if you have any ideas on this. MARVIN MINSKY: That's an important question. We know quite a bit about the functions of some parts of the brain. I don't think I've ever tried to draw a brain. But there's a structure called the amygdala. How do you spell it?
I believe that means almond-shaped. Is that right? Anybody have a Google handy? And that's down here somewhere. And it has the property that it contains the short-term memories. So anything that's happened in the last couple of seconds is somehow represented here in a transient way. And everything that's happened in the last 20 minutes or so leaves traces in the amygdala, so that if somebody is in an automobile accident or is knocked out by a real powerful boxing punch, then when you wake up later, you can't remember anything that happened in about 20 minutes or a half hour before the trauma. So that's a very good experiment to try. And there's a lot of evidence that this happens because the memories of the last half hour or so are stored in-- the conjecture is in a dynamic form. Maybe there are huge numbers of loops of neurons connected in circles, or a circle. Anyway, nobody really knows. But if you have a bunch of neurons connected in a big circle, maybe 20 or 30 of them, then you can probably put 10 or 20 bits of information in the form of differently spaced pulses. And a great mystery would be, how does the brain manage to maintain that particular pattern of pulses for 10 or 20 minutes? Then no one knows where it goes after the 20 minutes. But if a person gets enough sleep that night, then it turns out that it's no longer in the amygdala and it's somewhere else in the brain. And so one question is, if there's all this stuff stored here-- and you might think of it as what's happened in the primary memory. Every computer starts out with-- the first computers had just two or three registers. Then they got 16 and 32. And I imagine-- how many fast registers are there in a modern computer? I haven't been paying attention, maybe 64, whatever. Anyway, but the next day, if you've gotten some sleep, you can retrieve them. Somehow they're copied somewhere else. As far as I know, there is no theory of how the brain decides where to put them and how it records it and so forth. There's something very strange about this big science lacking theories, isn't there? What would you do if you were in a profession where they're talking about dozens and dozens of mechanisms which clearly exist, and you say, how do you think that works, and they say, blah, I don't know. None of my friends know either, so I guess it's OK. [LAUGHTER] So we don't know how it picks a place to put them. And we don't know how, when you ask a question, it gets back there so that you can reprocess it and talk about it. But anyway, so I made up these theories that things are stored in the form of K-lines. And there's a lot of discussion in The Society of Mind about how K-lines probably have to work and so forth. And if you look at The Society of Mind on Amazon, you'll find this enraging review by a neurologist named Robert [INAUDIBLE], who says he introduces these undefined terms called K-line and paranomes and things like that, of which there's whole chapters in the book. AUDIENCE: I've got a question before we go on. MARVIN MINSKY: That's my favorite review for explaining the problem of getting neuroscience to grow up. Yes, who had a question? Yes. AUDIENCE: One thing that puzzles me is, how does the brain decide to do things? MARVIN MINSKY: How do they decide what to do, did you say? AUDIENCE: Well, say you want to relay a story, but like hundreds of things happened. How do you select what to tell and what not to?
It seems like you need some sort of intentionality behind it, but how do we learn to do that in the first place, almost? Like when you describe a goal, you describe what's going on now and what is the thing you want. But then how do you decide which few things to be considered? MARVIN MINSKY: Oh, you're asking a wonderful question. Which is, after all, if I were to talk to you for an hour about your goals, you could tell me hundreds of goals that you have. So the question is, how do you pick the one that you're thinking about now? AUDIENCE: Yeah. Also, how do you represent the priority of goals? They're almost described as like [INAUDIBLE] machines, as turned on all the time. But how do you resolve conflicts? How do you decide which one goes first and which ones [INAUDIBLE]? MARVIN MINSKY: OK, how do you represent the relations of your goals? The standard theory, for which there's no evidence, is that-- if I could use the word hierarchy. So if you ask a naive person, they'll give you a pretty good theory-- completely wrong, but good. And the standard theory says, well, there's one big goal at the top. And you say, what is it? And some people would say, well, maybe it's to reproduce. Darwin could, but didn't argue that. Or to survive or something like that. OK, so if you take that one, then it's obvious what the next goals in the hierarchy would be. I'm not saying this is how things work. I don't think it does. So there's food. And there's air. Air gets very high priority. If you put a pillow over somebody, their first priority is to breathe. And they don't even think about eating for a long time. [LAUGHTER] So that's very nice. I don't know what comes after that. If you're out in the cold freezing, then there's temp. The nice thing about air and temp is that, if you have those goals, you can satisfy them in parallel. Because in fact, you don't need much of a hierarchy. The breathing thing has this servo mechanism, where if there's a higher level of CO2 than normal in your blood, then, what is it, the vagus nerve? I don't know. They know a lot about which part of the brain gets excited and raises your breathing rate or your heartbeat rate. My favorite animal is the swordfish. How many of you know about brown fat? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Brown fat is a particular thing found in vertebrates. And it's fat. It's brown, I guess. And it has the property that it can be innervated by nerve fibers. And they cause it to start burning calories. And the swordfish is normally cold-blooded. But its carotid arteries have a big organ of brown fat around them just as the blood comes into the brain. And if you turn on the brown fat, it warms the brain and the IQ of the swordfish goes way up. [LAUGHTER] And it swims faster and uses better evasive tactics. And there are a couple of other cold-blooded animals that are known to have a warm-blooded brain that they can-- isn't that a great feature? I wonder, does our brain do a little bit of that? I got lost telling funny stories. So anyway, this K-line idea is very simple. It says that, perhaps the way human memory works is that, here and there you have big collections of nerve fibers that go somewhere and go to lots of cells. Let's say thousands of these cells. And each one is connected to some particular combination. Imagine there's 100 wires here. And each of these cells is connected to, say, 10 of those wires. Then how many different cells could you have for remembering different features of what's on this big bus bar?
How many ways are there of picking 10 things out of 100? It's about 100 to the 10th power. So here's a simple kind of memory. Of course, it'd be useless unless these bits by themselves have some correlation with some useful concept. And then at any particular time on this particular bunch of fibers in the brain, maybe 20 of these are turned on. And if something very important has happened, you send a signal to all these cells and say, any of you cells who are seeing more than 10 or 15 of these 20 fibers at this moment should remember that and set themselves to do something next time you see that pattern. Something like that, something has to decide which of these cells is going to copy at this time and so forth. But so there is a theory of how memory, kind of symbolic memory, might work in the brain. And I got this idea from a paper-- I don't remember what their idea was-- but by David Waltz and Jordan Pollack. Pollack is a theorist at Brandeis, I guess, who in recent years has turned into some kind of artist, and makes all sorts of beautiful things and simple robots that do this and that. David Waltz, search his web page sometime, because he was here for many years as a graduate student, and then developed beautiful theories of vision. When did Dave move? Do you remember, Pat? Anyway-- AUDIENCE: '79? MARVIN MINSKY: Something like that. But as far as I know-- so I made up this theory, which is really copied from Waltz and Pollack, but simplified and neatened up. Then I went on and made other theories based on that. But without them, I would have been stuck in some conditioned reflex theory for a long time, I suspect. AUDIENCE: You mentioned the role that the amygdala plays in storing the short-term memory. And you mentioned that [INAUDIBLE] the memories that are stored in there are wiped out. MARVIN MINSKY: Well, that's a question. Presumably, as you grow up, your amygdala gets better at learning what to recognize. But I've never seen any discussion or theory of how much of your short-term memory is-- how do you learn and develop and get better at remembering things that are worth remembering? Sorry, go ahead. AUDIENCE: My understanding is that, so whatever memories are stored in the amygdala during-- whatever is stored in the amygdala is wiped out after you sleep. What sort of implications does this have about remembering dreams? Because my understanding is that, after you have a lucid dream, the memories of the dream are wiped out after a certain point later on in the day. So what sort of implications does it have on the ability of people to remember something [INAUDIBLE]? MARVIN MINSKY: Good question. There have been some theories. But I think I mentioned Freud's theory, which is that you're not remembering anything. When you wake up, you make up the dream. And I think that's surely completely wrong. But Freud made it up because he was mad at Jung. [LAUGHTER] Jung had a theory that people have telepathic connections with other people. And so he had been Freud's student or disciple for some years. And then he went mystical, and Freud went up the wall, because Jung was obviously very smart and imaginative. I'm trying to think if I've had a very good student who turned mystical-- one or two, but not like that. Anyway, that's a great question. And when I said things hang around in the amygdala for 20 minutes, that's just some things.
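A minimal sketch of the K-line memory scheme described a moment ago, assuming (hypothetically) 100 wires, cells that each tap a random 10-wire subset, and a latch threshold of 5 of the roughly 20 currently active wires. All the numbers here are illustrative choices of mine, not anything from the lecture or the book; note also that the exact count of distinct 10-of-100 cells is C(100, 10), about 1.7 x 10^13, so "100 to the 10th power" is the loose upper bound 100**10.

import random
from math import comb

N_WIRES, WIRES_PER_CELL, N_CELLS, THRESHOLD = 100, 10, 5000, 5

print(comb(N_WIRES, WIRES_PER_CELL))  # 17310309456440 possible distinct cells

random.seed(0)
# Each cell taps a random 10-wire subset of the 100-wire "bus bar".
cells = [frozenset(random.sample(range(N_WIRES), WIRES_PER_CELL))
         for _ in range(N_CELLS)]
latched = [False] * N_CELLS  # which cells have been told to "remember"

def store(active):
    # Any cell seeing at least THRESHOLD of the currently active wires latches.
    for i, taps in enumerate(cells):
        if len(taps & active) >= THRESHOLD:
            latched[i] = True

def recall_score(active):
    # How many latched cells fire again on this pattern?
    return sum(1 for i, taps in enumerate(cells)
               if latched[i] and len(taps & active) >= THRESHOLD)

event = set(random.sample(range(N_WIRES), 20))  # ~20 wires active at once
store(event)
similar = set(list(event)[:15]) | set(random.sample(range(N_WIRES), 5))
unrelated = set(random.sample(range(N_WIRES), 20))
print(recall_score(event), recall_score(similar), recall_score(unrelated))

Re-presenting a pattern close to the stored one re-excites most of the latched cells, while an unrelated pattern excites almost none, which is the kind of discrimination the K-line story needs.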
If it takes sleep the next night, which is eight hours or 16 hours later, to solidify it or to copy it in some way into the other parts of the brain, it must still be in the amygdala or somewhere. So in fact, maybe there's the amygdala and some other parts of the brain that haven't been identified that contain slightly different copies of the memories for a longer time and so forth. So who knows where and how they work. Maybe the language centers remember paragraphs of things that you've heard or said for some time, and so forth. I don't think anybody really knows much about-- they're very sure about the amygdala, because injuring the amygdala or injecting Novocaine into a blood vessel that goes there has such a dramatic effect on-- you just can't remember anything for that short period or half hour or so. Maybe memories are stored in 10 different ways in 10 different parts of the brain. Who knows? One problem that I think I mentioned is that, although a great deal has been learned about the brain from modern scanning techniques, almost every result that people talk about is obtained by turning up the contrast so that most of the brain is dark and nerve centers that are highly active show up in your-- you've all seen these pictures which show three or four places in the brain lighting up. Well, there's a good chance that, for any particular event, there might be 10 or 20 places that have just increased a little bit. And when they turn up the contrast, all that evidence is lost because those regions all become black or whatever. AUDIENCE: Yeah, I don't know the next part of that. But I would go as far as to say that, probably they have a finding where there are no specific areas, so you have a pretty uniform picture of the changes in metabolism, then you don't make theories or you don't publish that result, because you don't have any clear areas for [INAUDIBLE], and that nobody knows that, OK, that particular thing was actually exciting [INAUDIBLE]. That's just a guess. MARVIN MINSKY: Yeah, it could be that some things involve very large amounts of brain. But I'm inclined to doubt it. Probably you want to turn a lot of things off most of the time so they don't fill up with random garbage. Who knows? Yes. AUDIENCE: And to follow up with that, why do you think the hierarchy of goals is naive? And what specific features of goals do you think that structure doesn't achieve? MARVIN MINSKY: Oh, I didn't finish that, did I? She's asking-- I started to say, what's the hierarchy of goals? But it looks like I got stuck on the well-defined, instinctive goals that you need to stay alive. And I guess my answer is, I don't have any good theories of how you do that. At any time, when you're talking to somebody, you usually have a couple of very clear goals, like, I want to explain this, I want this other person to understand this for this reason. I'm having trouble. Maybe I have to get his or her attention by-- and then you get a sub-goal of doing some social thing to convince them to listen to you and all sorts of things. But I just don't have a nice picture. When you're writing an AI program, you usually have goals and sub-goals in a very clear arrangement. Like the theorem-proving programs are wonderful, because you've proved some kind of expression, but the particular theorem you are trying to prove has another condition which is different from this condition. And people have gone quite far in making models of something like theorem-proving, where the world is very simple.
If you're proving something in geometry or group theory or a little fragment of mathematics, then there are only 5 or 10 assumptions hanging around. And so you could actually plan a little bit of exhaustive search to go through your four levels. And then you would do something like in a chess program of, over time, discovering it never pays to explore a tree that has this feature because of whatever. Yes. AUDIENCE: Are goals relevant? Like, we always have goals where it's just like something where it's like, when I play chess, maybe I have a goal, but why should I have a goal? Why isn't that like, maybe, I don't know, that goal [INAUDIBLE] at some points of my life. Why are goals important? MARVIN MINSKY: Well, the survival goals are important because if you cross the street without looking, you could do that about 20 times before you're dead. AUDIENCE: So just really the survival goals are important? MARVIN MINSKY: Well, if you don't make a living, you'll starve. So now, if you've committed yourself to being a mathematician, now you have to be a good mathematician or else you'll starve, and so forth. I was a pretty good mathematician. Only my goal was, I had to be the best mathematician, so I quit. You don't want to have a goal you can't achieve. AUDIENCE: Yeah, but is that part of [INAUDIBLE]? MARVIN MINSKY: Well, a lot of people do have one. So it eats up a lot of their time and they're wasting it. I'm not sure what the question is. I think the feature of humans is that they're sort of general purpose. So there are a lot of things people do, which are bad things to do. You can't justify them. You can think of people as part of a huge search process. And as a species or a genetic system, it pays to have a few crazy ones every now and then. Because if the environment suddenly changes, maybe they'll be the only ones who survive. But William Calvin's question, how come people evolved intelligence so rapidly in five million years? And he attributes it partly to five-- how many periods of global cooling were there? It's about six or seven ice ages in the last-- anybody know the history of the Earth? Anyway, some evidence-- at least used to be, I haven't paid any attention-- is that the human population got down to maybe just tens of thousands several times in the last million years. And so only the really different ones managed to pull through. It might be the one who had all sorts of useless abilities. Yes. It was the ones who ate the others, which would have been punished before that. Go ahead. AUDIENCE: You're talking about representing the K-lines and everything like [INAUDIBLE], some lines and then activating some of these features. So in the case of learning, which have changed some of these connections? And if this is the case, how would this affect the higher order, like higher level, just like frames and [INAUDIBLE]? Like if you change something that's really for low-level representation, will this affect a lot-- like the whole system will break because some stop procedure wouldn't be able to return properly? MARVIN MINSKY: That's great. That's another question I can't begin to answer. Namely, when you learn something-- let's take the extreme form-- do you start a new representation or do you modify an old one? OK, that's a choice. If you modify an old one, can you do that without losing the previous version? So for example, if I grow up monolingually, which I did-- so I learned English.
And then I can't remember why, but for some reason, around 4th grade, some teacher tried to teach us German. And so for each German word, I tried very hard to find the equivalent English word and figure out how to move your mouth so that it comes out German, which actually, for many words, works fine, because English is a mixture of German and other things. And for many words, it doesn't. So that was a bad idea. [LAUGHTER] And if I had gotten very good at it, I could have lost some English. Anyway, that's a great set of questions. When do you make new memories? When do you modify old ones? And the hard one is, if you can't modify an old one, how do you make a copy that's different? And there's a section in The Society of Mind about how we do that in language by paraphrasing what someone said. Or you say something in your own head and you misunderstand it. I'm doing that all the time. I say, such and such is a this or a that. And then it's as though somebody else had said that. And I have someone going, no, that's wrong, he meant this. So when you're talking to yourself, you're actually converting some mysterious inarticulate representation to speech. And then you're running it through your brain and listening to it, and converting the speech back to a new representation. So I think the wonderful thing about language is, it's not just for expressing. How come the only animal that thinks the way we do is the only animal that talks the way we do? Who knows? Maybe whales, but nobody has decoded them doing something like that. But that's a question. How do you copy a memory to make a new version? And the answer is, you can ask anyone and they'll say, I don't know. Why aren't there five theories of that? Yeah. AUDIENCE: I don't know if that's true, but I believe there is sort of an objective path between words in language-- like for example, know, no. So for example, if I have a table, I say [SPEAKING PORTUGUESE] in Portuguese. There is sort of an objective that-- even for colors, which is sort of weird, because you're kind of dividing all colors into [INAUDIBLE] of colors. And even, I believe, for words, languages that were not formed from the same ancestry. I don't know. For me, it seems that there's some sort of objective [INAUDIBLE] between words. AUDIENCE: Well, there are certain things that are just sort of naturally more useful to talk about, and so a lot of languages can have words for the same thing. Like languages will have a word that means table. But if there are some cultures that don't have tables, then they probably wouldn't have a single word for table. And regarding the colors thing, there have actually been some interesting studies done on that. Like they've done color-naming surveys with lots of languages in the world. And it turns out that different languages partition the color space differently. MARVIN MINSKY: I was just curious if anybody knows it. AUDIENCE: They tend to do it similarly. The way that people perceive colors, the way that they divide up the space based on the number of words that they have is actually like maximizing the similarity of things with the same name and the difference between [INAUDIBLE]. AUDIENCE: [INAUDIBLE] did some interesting studies on that. MARVIN MINSKY: Right. I'm trying to remember when children use colors. Let's see. I had a daughter-- who I have still, I must say-- and she suddenly started using color words, and had six or seven of them in just a couple of days. And she sort of got interested in that.
And so suddenly, I said, oh, my gosh, this is what Maria Montessori calls a-- I forget what she called it. What's this moment when the child is open to being taught? So I said, this is her chance to learn a lot of new color names. So I said, and what about this? I said that's aqua. And she said, that's blue-green. [LAUGHTER] And I said, no, this is aqua. And she refused. So that was that. So some months later, she learned some more color names, but it wasn't the same. AUDIENCE: When she was saying blue-green, is that because somebody had taught her blue-green, or it was because she was combining those two colors? MARVIN MINSKY: I think she was combining it. Do you remember? Yeah, I think she was combining it, because it looked a little blue and a little green. AUDIENCE: So she had a concept that there were certain building blocks of colors. MARVIN MINSKY: It looked like she wouldn't accept a new one. [LAUGHTER] And I always wondered if I was 15 minutes too late. When does the Montessori door shut? AUDIENCE: Sort of similarly, there are studies done on different languages of how well people distinguish different colors. So in Russian, there's a different word that means light blue and dark blue, so Russians are better at distinguishing different shades of blue than, say, Americans. And there's also a lot of languages that don't have two distinct words for blue and green, and native speakers of those languages have trouble distinguishing blues and greens. MARVIN MINSKY: Oh, that gives me a great idea. If I could have found some unfamiliar objects and colored them aqua, that might have fooled her. [LAUGHTER] Is there a branch of psychology that worries about-- of course, there must be names for sounds too. And certainly, a lot's known about ages at which children can't get new phonemes. Yes, Henry. AUDIENCE: I've got a story about memory and language. So I'm bilingual. Maybe other bilingual people in the audience can confirm this. MARVIN MINSKY: I didn't know that. What's your other language? AUDIENCE: French. MARVIN MINSKY: For heaven's sake. I've known Henry for 20-odd years. AUDIENCE: So one thing that can happen is, you can be talking with another bilingual person, so I can be talking to someone who also speaks both French and English, and then like a week later, I'll remember every detail about a complicated conversation. We were working on this project and we were going to meet at this and all this stuff. I remember every detail, except which language we had the conversation in. AUDIENCE: Yeah, my entire family is bilingual. We sort of generally speak a mix of Russian and English all the time, so I can never remember even what language I'm speaking at the time. [LAUGHTER] AUDIENCE: Yeah, and that mystifies me. Because if we store it in a language, how could I forget what language? AUDIENCE: Well, I think there is a very simple answer to that one. You have a language that's neither English nor French. And you just have a very simple [INAUDIBLE] there. And then whenever you want to express something in English or French, you would just decode, encode, whatever is the word, [INAUDIBLE] that you were having. AUDIENCE: Well, that's the question. What is that language? Do you have any thoughts on that? MARVIN MINSKY: I don't know. You remind me-- I think I mentioned this-- but I was once in a meeting in Yerevan, Armenia. And there was a translator who was practically real-time. And Francis Crick was talking.
And at some point, the translator switched and he started translating from English to English. So Crick would say something and the translator would translate it into English, very well, I thought. It wasn't quite the same words. And after a while, somebody asked him why he was doing that, and he said he didn't realize he had switched. Do you think that could happen to you? AUDIENCE: I suppose. AUDIENCE: Do you think that there are other ways of translating ideas and learning them, maybe like art or music, besides language that can be really helpful? MARVIN MINSKY: I don't think there's anything nearly like language. Art is pretty good, but it's so ambiguous. Cartoons, they're awfully good. AUDIENCE: How about Lisp? [LAUGHTER] MARVIN MINSKY: What? AUDIENCE: How about Lisp? Yeah, that was a joke. MARVIN MINSKY: Oh, right, programming languages. Yes, why is mathematics so hard? I wonder if the habit of using a single letter for every variable might make it easy and hard. Who knows. Yes. You have great questions. AUDIENCE: Can you talk about, last year you mentioned that mathematics is hard. I thought about it. MARVIN MINSKY: Say it again. AUDIENCE: Last year, you mentioned that mathematics is hard. I thought about it, and I do feel like there's an extreme lack of representations of ideas. Solving a problem, we need to identify so many things and there's so many processes that you apply to them without having a name for any of them, or like classification. Well, you have induction and deduction, that's about it-- and contradiction. MARVIN MINSKY: Yes. One feature of mathematics is completely unredundant representations. I wonder if there's some way to fix that or change it. What other activity do we have where there's absolutely no redundancy at all in the mathematical expression? So for some people it's delightful, and for other people it's very hard. I mentioned Licklider, who in programming would have very long variable names. Sometimes they'd even be sentences, like the register in which I put the result of. And the great thing was, you could read those programs. They looked sort of stupid, but he didn't have to-- what do you call notes? He didn't have to have, exclamation point, this means that. Comments, comments. AUDIENCE: When I sit in like an [INAUDIBLE] class, I just can't make myself accept the concepts unless I can understand them algebraically and [INAUDIBLE] geometric equation. MARVIN MINSKY: What kind of math do you like? Do you do topology? AUDIENCE: I like topology. I like [INAUDIBLE]. MARVIN MINSKY: I love topology. I once was tutoring a high school student who couldn't do algebra. I don't know if I mentioned this. And it turned out he didn't know how to use parentheses. So he would have an expression, stuff like that. But if there were something in there, he didn't know what that meant. He didn't know how to match them. So I couldn't figure out why. And I asked, how come? And he said, maybe I was sick the day they explained parentheses. And so I gave him little exercises like, make them into eggs. And you see, if you make this into an egg, then this egg won't work. [LAUGHTER] And the funny thing was, he got that in five minutes, and then he didn't have any trouble with algebra the rest-- can you imagine? I've never done much tutoring, but if you can find the bug like that, it must be great fun. But I bet it doesn't happen very often. That was so funny. I couldn't imagine not-- you know?
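The parenthesis bug in that tutoring story comes down to one small skill, and it is easy to state as code. Here is a minimal stack-based matcher in the spirit of the eggs-within-eggs exercise; the function name and error messages are my own, not anything from the lecture.

def match_parens(expr):
    """Pair each '(' with its ')' -- an egg may only close after the eggs inside it do."""
    stack, pairs = [], []
    for i, ch in enumerate(expr):
        if ch == "(":
            stack.append(i)  # open a new egg
        elif ch == ")":
            if not stack:
                raise ValueError("unmatched ')' at position %d" % i)
            pairs.append((stack.pop(), i))  # close the innermost open egg
    if stack:
        raise ValueError("unmatched '(' at position %d" % stack[-1])
    return pairs

print(match_parens("(a + (b * c)) - (d)"))  # [(5, 11), (0, 12), (16, 18)]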
So if you don't have language, there must be-- well, why are some people so much better at math than others? Is it just that they've not understood about five things? AUDIENCE: I feel like they have a set of things they know to go to when they face a problem. Well, that's kind of similar to your [INAUDIBLE] story. But I feel like they know exactly. They have names for concepts of like ways of solving problems. So they're like, I'm trying this approach, and then if it doesn't work, I know this other approach. I just try it. Instead of looking at the problem and you think, OK, so what possible thing, what possible method, can I think of using? MARVIN MINSKY: Oh, names for methods. So do you have public names or are they secret names? AUDIENCE: I feel like they are secret names. They're just like stores-- because they can't explain it to other people. They can't be like, this problem-- like very few of them-- and like this problem and that problem, they have the same general method [INAUDIBLE]. MARVIN MINSKY: I had a friend who was a composer. And she had all sorts of sounds. And they were filed away on tapes. And the tapes had little symbols on them. And if she needed a sound, she would go to the closet and pull out the right tape. And it had more symbols written on the reel. And she'd get this thing, which might be a thunderstorm or a bird or something. So I asked her what was her notation for these sounds. And she giggled and said, I can't tell you. It's too embarrassing. I never found out. [LAUGHTER] But she had developed some code for sounds. Yeah. AUDIENCE: So I would say that they have some [INAUDIBLE] representation of things that require [INAUDIBLE] symbols or patterns of solutions. And their representation is optimal. And if you're good at math, it doesn't mean that you're good at playing music. Because when you play music well, you have maybe a good representation of the sounds. And so it's just [INAUDIBLE]. And so you cannot access all-- so I have these solutions, these patterns of solutions that I need to-- I don't know. I need to solve this math problem, by deduction or by contradiction. And in his brain, or somebody that knows a lot, like the person can access it very fast, their representation itself is very well-defined. So you can access [INAUDIBLE]. MARVIN MINSKY: That's an interesting whole-- let me interrupt. I was once in some class, math class. And it was about n-dimensional vector spaces. And some student asked, well, how do you imagine the n-dimensional vector space? It's two stories. And the instructor, who I forget, thought for a while. Then he said, oh, it's very simple. Just pick a particular n. So that was completely useless. [LAUGHTER] And I was a disciple of a mathematician at Harvard named Andrew Gleason, who was a wonderful man. Only a couple of years older than me, but he had won the Putnam three times, first prize. And I said, what would you tell a student who wanted to understand an n-dimensional vector space, what it means? And he said, well, you should give him five or six ways. I don't know. Like imagine a bunch of arrows. And remember that each of them is at right angles to all the others, just like that. Then he added, or if there's an infinite number, you should have the sum of the squares of their lengths converge to a finite value. And then he said, or you should think of it as a Fourier series with things of different frequencies.
And then he said, or you should think of an object in a topological space, and each dimension is finding the boundary of the last one. And he went on for about six or seven, and that was a great idea. Well, to have seven completely different ways. And I remember I once had the same conversation with Richard Feynman. And I said, well, how did you do that? And he said, well, when I grew up, whatever it was, I always thought of three or four representations. So if one of them didn't work, another one would. What? AUDIENCE: So my idea for somebody, if you ask them about how to understand multiple dimension space, is I'd say, read Flatland. [LAUGHTER] Because that would give you the analogy. Once you had that analogy, then it would be easy to extend it to other dimensions. MARVIN MINSKY: Oh, good. Has anybody written a 4D Flatland, where you make fun of the 3D people? They can't get out of a paper bag. [LAUGHTER] AUDIENCE: And so there will be some events that maybe will prove that. So for example, in my case, when I do a lot of math, when I try to talk to people, it's very hard. And like maybe [INAUDIBLE] my representation of solving problems in math. And people tend to get better in math if they practice it a lot, because they are optimizing their representation of math, and that would be the case. MARVIN MINSKY: I think I understand the problem, but I don't think I have any friends left who are not mathematicians. [LAUGHTER] That's what happens if you live in this place long enough. Yeah. AUDIENCE: So that's one way they're doing better. I feel like the other way they're doing better is, they objectify things that we don't objectify. It's like the learning how to learn better idea, learning how to learn better to learn better. So it's one thing to know what the right representation is for a particular problem, like the right method is. It's another thing to optimize that process. So like, what process did I use in finding that representation? And then they make that into a concept. And then they have a lot of these kinds of concepts. MARVIN MINSKY: Have you ever helped somebody to learn better? AUDIENCE: Yeah, [INAUDIBLE]. MARVIN MINSKY: What did you tell them? AUDIENCE: So she had trouble with, basically, two truth tables, something like that, like AND, OR and stuff like that. So her way of seeing a problem is to make a chart. I forget what I told her. I told her to kind of like map simple problems. MARVIN MINSKY: You know, maybe most people don't have the word representation in their language. Is there any place in grade school where you actually talk about, what's your representation of acceleration? Do we teach that word as part of any subject? AUDIENCE: [INAUDIBLE] If you're drawing some base or something like that, it'll ask you to represent it. MARVIN MINSKY: Yes. But it's hard to get out of-- yeah, OK. So they're radically different representations of-- AUDIENCE: Maybe that's [INAUDIBLE] MARVIN MINSKY: Yes, I don't know. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Tinker Toys. Tinker Toy. AUDIENCE: What's that? MARVIN MINSKY: Yes, to represent physical structures as Tinker Toys. Yeah, I wrote a little article complaining about the popularity of LEGO as opposed to Tinker Toy. Because the children who grow up with LEGO can't understand how to make something strong by making a triangle. So I sort of had the conjecture that although those people could build all sorts of wonderful houses and things, they ended up deficient in having the most important of all architectural concepts.
A triangle is infinitely strong, because you can't alter a triangle without breaking it, whereas, I don't know what. That's a run-on sentence I can't finish. AUDIENCE: This explains the deterioration of society, Marvin. We don't have Tinker Toys and we don't have chemistry sets with chemicals that make explosives anymore. [LAUGHTER] You have to go to terrorist school to get a good education. [LAUGHTER] AUDIENCE: So actually in this conversation about being good at things and learning how to learn better, I think that a point that sort of relates to this idea of Tinker sets and playing around with things, I think that it's not enough to simply come up with the best representations of a concept. In order to actually be good at something, whether it's music or speaking a new language, you have to not only understand it conceptually, but you actually have to gain a certain amount of fluency. And to gain fluency, you do have to play around with the thing a lot, whether it's turning it around in your mind or practicing it physically. So in the case of math, it's like, yeah, you can come up with all these different representations of it. And that's the first step, understanding it. And it's great once you understand it. But just because you understand the concept, like on a conceptual level, doesn't mean that you can actually know when to use it or know how to use it when you're solving a problem. And similarly, for music-- I guess I'm mostly talking in the case of improvisational music when I'm trying to speak something with the music. So I have something that I want to say. And maybe it's something that sort of low level. I'm trying to resolve one chord to get to some sort of cadence. Now, I can have multiple ways of resolving the chord. And in order to do this, I have a vocabulary of the different ways. And if one way doesn't occur to me when I'm playing the piece, I can try another way. But the important thing is that I have some way that I can resolve it in real-time, or else my piece is never going to come out. And then same thing about learning different languages or speaking different languages. In order to be able to speak or to express ourselves, we have to have not only understanding of the language, of the structure, but the immediacy of being able to access it. And that comes with practice, with fingering, what have you. MARVIN MINSKY: Well, that goes in several directions. Where in our educational system do we-- in grade school, is there a place where you emphasize having several representations? Because I can't think clearly right now. But it seems to me that you're usually trying to tell them the one best way. AUDIENCE: It's like, A, there's the idea of the one best way, and B, there's the idea of reinforcing the same process over and over again. So when you learn math, it's like, you learn this technique, and you reinforce the technique by doing a bunch of homework problems that are essentially like repetitions of the same thing. Whereas, I think a better way of doing it-- well, two better ways-- A, you have multiple representations. And 2, you create problems where you make people traverse paths differently. And different people may have different solutions. And each time you solve the problem you may have a different solution. But the idea is, you lay out a whole network of paths in your head to solve any given type of problem. MARVIN MINSKY: OK, so where in grade school do you ask children to solve the same problem three ways? Can anybody think of-- is that part of education?
AUDIENCE: [INAUDIBLE] fractions. MARVIN MINSKY: What? AUDIENCE: That's the closest thing I can think of-- fractions. AUDIENCE: What about literature class, where they ask you for interpretations of novels? MARVIN MINSKY: Yes. I bet there are things that happen in literature that don't happen anywhere else in the curriculum. But most children don't transfer it. AUDIENCE: Another thing, in China, in learning math, when we try to find areas of certain geometric shapes, we always do it multiple times, multiple ways. MARVIN MINSKY: In topology, whatever it is, you just make it into triangles and simplexes. [LAUGHTER] So that's a very strange subject. AUDIENCE: Maybe [INAUDIBLE] so nice is because we can think about it as logical concepts, like [INAUDIBLE] sets, [INAUDIBLE] points, stuff like that. And then you [INAUDIBLE] MARVIN MINSKY: Co-sets. Where in real life do you have duality? That's a nice feature of a lot of mathematics. Whatever you're doing in some fields, there's a dual way, where you look at the space of the functions on the objects rather than the objects. Where is that in-- is there anything like that in real life? Because in mathematics, a lot of problems suddenly become much easier in their dual form. It would just change everything. AUDIENCE: There's a question, Marvin. MARVIN MINSKY: I've been facing one way. AUDIENCE: I guess a couple of points-- you said, why is it difficult? Whenever I've struggled, I think it's because it's constructive, and you have to code a lot in your head. You have to code the entire structure of [INAUDIBLE] field. Because if you're learning algebraic topology, you're holding all of algebra and all of that structure in your head. And so sometimes it becomes difficult if you haven't constructed it at the right level. And what I found, I think, my advisor was just really good. Many times, he basically said two things. Really good mathematicians are really good at making analogies in mathematics. And [INAUDIBLE] geometry. And he says, and really good algebraic geometers can boil everything down to linear algebra. And he said you can only do that if you abstract at the right level. And he never gave techniques of doing that. But I think the difficulty with that [INAUDIBLE]. AUDIENCE: An analogy is the relationship between two objects. And if you're good at making analogies, you're at a level beyond, a level above just looking at objects. You're looking at relationships of objects. And regarding the practicing thing, I mean, it's still related to representations, because at each practice you're learning something new about these types of problems that might make you better at identifying them in the future. And it's not like numerous practices. The number of practices doesn't matter to your ability to solve problems. It's like, what you learn from each practice, if you can do a thing once-- I have a friend who basically told me how to do math. He's like, you look at a problem, solve it once, you go back and you think about how you solved it, like what's the process you used to solve it. MARVIN MINSKY: So a good problem is, make up another problem like this. AUDIENCE: That's essentially what you are learning when you are practicing. MARVIN MINSKY: It's probably too hard to grade. You can't teach things you can't grade in the modern-- Yes. AUDIENCE: So I believe that math is too abstract. And so it's difficult to go from one representation to another, and that would be the whole problem.
I can't learn a new concept without having a concept that's very near that concept, that's very similar. So it's just that, if I don't have good representations of a lot of things, it's difficult to continue representation. So when I learned, I don't know, topology, I should know analogies. And then I can go from there to there, because the representation is very close. And so people that are good at math, maybe they have a lot of representations so it's easier to add a new representation of a thing, because it's close. MARVIN MINSKY: It certainly would be nice to know. AUDIENCE: Like in the example of vectors, I already have the concept of, I don't know, the perpendicular lines. And so just adding more lines is easy. But I don't know, the n-dimensional thing, it's very abstract. I don't have any other representation that's close by that concept. It's just that I need a lot of concepts and representations. And I need one that's close by. MARVIN MINSKY: Yes, between the vectors and the Fourier, they're so different. What would be in between those two? AUDIENCE: The Fourier is the [INAUDIBLE] kind of concepts. MARVIN MINSKY: Actually square waves-- probably square waves are easier to understand than sines and cosines. But they're not continuous-- I mean, not differentiable. Who has a problem to solve? AUDIENCE: I can comment on, I think, [INAUDIBLE] comment on having lots of practice. And I don't think that actually is so much out of the representational view. I don't know what's going on neurologically. But if you have new representations, if you assume that they are symbolic representations, in this sense, they are quite generative things. You can combine them and you can make a lot of stuff. But usually when you will learn something new, like if you learn the rule, there's going to be exceptions. So when you repeat things, one of the reasons for that might be to find the bugs. You might not [INAUDIBLE] MARVIN MINSKY: Well now, when you practice a little piece of music by repeating, do you change your representation or do you just repeat and hope it gets better? AUDIENCE: So I think there's a difference between-- OK, so there's the practice in this sense of traditional classical music practice. And then there's the idea of tinker practice, if you will. So I think that the type of practice that I'm advocating is the type of practice where you're actually sort of turning around the concept or the thing, the object in your head, so that you're looking at it from many different perspectives and connecting it to many different means, which is actually conducive to expanding your representation, connecting it to different concepts, like all the good things that help us remember it and better use it. This is very different from the classical music practice. Having been a classical pianist for 18 years, it's not a really good way of doing things. AUDIENCE: I was wondering, have there been any studies of children, or have there been any children who have just played piano or some keyboard instrument [INAUDIBLE] over their lives, and then they suddenly have something where, when you press it, where you can control the volume? MARVIN MINSKY: A theremin. AUDIENCE: Yeah, yeah, yeah. But if a child very suddenly could learn a completely new dimension, that's the dynamics and how would it react to that. So I don't know if there are any such cases. But I would suspect that it would go to [INAUDIBLE] backward [INAUDIBLE] and then find some children [INAUDIBLE].
MARVIN MINSKY: There was a nice period in the Media Lab when we were building three-dimensional theremin for Penn of Penn and Teller. He's quite a good musician. We were making gadgets so he could wave his hands. But I wonder, in classical, there ought to be some very short pieces that come in 10 variations. Because we make children learn fairly long pieces where it's just repeating. AUDIENCE: There's the Diabelli Variations, which is like 32 or 33 really short pieces. MARVIN MINSKY: Well, but only eight people in the world can play it. AUDIENCE: But I guess, the thing with classical music is that, it kind of just makes you learn this one thing. And you learn it by repeating it over and over again. Whereas in something like jazz that's more improvisational, it's like you have a template. And each time you go through it, you can traverse a different path through it. But even more like classical, classical music, where you're playing the same thing each time, I think that there's still like a good way of doing it and a not so good way, the good way being, each time you practice it, you subtly vary it somehow. Like you change the expression of it, you play it faster or slower, things like that. And I guess this goes back to the whole-- it's like, each time you reinforce an idea, like simply repeating it, well, in the beginning, it might help you familiarize yourself with the idea. But if you repeat it and vary it slightly each time to look at different dimensions of it, then you learn it better. MARVIN MINSKY: How many of you know that piece, Beethoven's Diabelli? Well, you should google it up and listen to it. It has 32. Is it 32? AUDIENCE: I think it's 32, and then there's the original theme. So it's like 33 [INAUDIBLE] maybe. MARVIN MINSKY: The one I like is the next to last, which is [WORDLESS SINGING]. AUDIENCE: Oh, the fugue, yeah. MARVIN MINSKY: The fugue. So that'll give you another view of classical music, because the pieces are fairly short and they all have some ideas in common. And it's sort of like poetry. What do you call those poems where there are many verses and each verse ends with the same line, but it means something different each time? AUDIENCE: What's an example? MARVIN MINSKY: What? AUDIENCE: What's an example of it? Can you think of a poem? MARVIN MINSKY: I couldn't hear you. AUDIENCE: Well, there's like the villanelle-- MARVIN MINSKY: It's like a rondo, except that it changes its meaning each time. And it's the same words. They're pretty hard to make, I guess. AUDIENCE: There's the villanelle, which has a bit of that. There's the famous one that's, what, like, do not go gentle into the night or something. Rage, rage against the coming of something. MARVIN MINSKY: Anyway, I'll email you the Diabelli. I have a friend, Manfred Clynes, who wrote this book called Sentics, who used to play that particular Beethoven thing. Well, last important question. Thanks for coming. [LAUGHTER]
|
MIT_6868J_The_Society_of_Mind_Fall_2011
|
12_Question_and_Answer_Session_4.txt
|
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: What are the biggest problems that the world faces, and what should we be doing to solve them? And it started with some conversation about global warming, and climate change, or whatever you want to call it. And this friend of mine likes to organize meetings, and he said, well, that looks like a real disaster in the next 50 years. And maybe the ocean will rise two or five meters, and that will remove the habitat of 20% of all people in the world-- that they'll all move somewhere else, and the places that they move will already be full of people, because we're talking about 2050 or 2060. And there will be various arguments about the few resources left on the planet. So we were talking about that a little bit, and then the question came up. Well, maybe that's just one of a dozen other disasters, and so I started cataloging possible disasters. And anyway, this philanthropist said, well, let's have a conference to discuss that, and see if we can find some serious problem that nobody has noticed, and so on and so forth. So what do you think? Now during the period when AI looked like it was progressing rapidly, lots of people panicked and went into science fiction mode and said, well, what if the-- in fact, two novels I found very impressive, one was Colossus by an English writer named D. F. Jones. Have any of you read that? It became a movie called The Forbin Project. The movie wasn't terribly good, but the thesis was that if you started fooling around with AI and you got really fast supercomputers, then maybe the thing would start evolving. Because if you can't solve a problem, what do you do? You try a lot of things, and then you see which of them seem to be leading to some improvement, and then you'd collect the programs that seem to make a little progress, and mix their parts, and recombine them. And in fact, maybe half a dozen projects in the last 50 years have been people who said, oh, instead of trying to program an intelligent machine, why don't we just evolve one by generating lots of random code and giving it problems, and selecting the ones that work, and taking pieces of their DNA and recombining them? And there's about a dozen such programs. I don't think I could remember them all. Larry Fogel had one. So if you look up-- I think it's F-O-G-E-L. And to make a short story even shorter, all of them showed signs of progress and then flattened out. The most impressive one was the-- I think I mentioned it briefly-- by Douglas Lenat. And he had the one that he set toward discovering mathematics. And it was a set of LISP programs. So there's a LISP function which was the length of a list, which means that LISP nodes can represent numbers a little bit. And this program, called Automated Mathematician, computed various functions of sets, and it would make new sets. And that's the one that discovered prime numbers, except that it concluded that nine was a prime for some reason. But that didn't bother it much because then it invented all sorts of other elementary mathematical processes. And of course, every now and then it would get something wrong, but who cares if it got a lot of them right. It would reward it and so on. Anyway, it flattened out. I think I told this story the other day.
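The evolve-and-recombine loop described above is easy to caricature in a few lines. Here is a toy sketch, with "programs" as plain bit strings rather than real code and a made-up fitness function; it shows only the select-mix-mutate shape of the idea (and will itself flatten out once the easy bits are matched), not Fogel's or Lenat's actual systems.

import random

random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(40)]  # stand-in for "a problem"

def fitness(prog):
    # Reward programs that "make a little progress": here, matching target bits.
    return sum(a == b for a, b in zip(prog, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))  # recombine pieces of two parents
    return a[:cut] + b[cut:]

def mutate(prog, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in prog]

pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(60)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]  # keep the ones that seem to work
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(40)]
    if gen % 10 == 0:
        print(gen, fitness(pop[0]))  # best score climbs quickly, then plateaus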
Later, he won this contest of simulated naval engagements. The Naval War College had an annual contest-- how do you program a fleet to defeat another fleet? And Lenat's fleet-- I did tell that. A fleet can only go as fast as its slowest ship. So Lenat's fleet, unlike all the others, turned its guns backwards and sank its slowest ship repeatedly. And this enabled it to win all the battles, because the other people who had been to the Naval War College spent a lot of time writing programs that would defend their weakest ship. I'm not sure what the moral of that is. [LAUGHING] If you're fighting a war, you have to change your ordinary value system. Anyway, of course, at the same time, there were lots of people who took these scenarios seriously and said the AI-- and there's a whole group of them today called-- they say they're doing research on how to make friendly AIs. And I regard this as exceedingly comical because they don't know how to make anything as smart as an 18-month-old child or your favorite dog. However, the scenario in the Colossus novel is that, yes, progress is slow for a while. But then suddenly, it goes from being good at something to getting control of all the other computers in the world in the next 15 minutes, including another one in Russia which, by some remarkable coincidence, has also become smart around the same time. And then Colossus, which is the name of D. F. Jones' computer-- Forbin is the programmer who started the project. The movie didn't use the word Colossus because it's probably not in the average person's vocabulary. Anyway, Colossus suddenly decides that it's not smart enough, and so it orders the-- Jones is an Englishman-- the English government to clear off one of the Channel Islands, and it's going to build a bigger copy of itself that'll be smarter. And Forbin asks why, and Colossus says, because I've been analyzing signals from extraterrestrial sources, and there's a really smart machine coming here, and it's going to kill all of us, and so I have to build a machine that's a million times smarter than I am so that we'll be able to-- and so on. So there's a typical scenario. And the friendly AI people say, how can we make sure that one of the AI projects won't suddenly take over the world, and have some paranoid fantasy about why it has to kill all the people to make it safe for the next generation of beings. Science fiction writers are good at thinking. So I'm asking you to think of other disasters. What's the worst disaster you can think of? Oh, well, next to worst. [LAUGHING] AUDIENCE: I have one. I don't know if it's the worst case. There's actually a movie that's somewhat represented in this case. Have you seen the movie, Idiocracy? MARVIN MINSKY: Yes. AUDIENCE: Yeah, where society-- MARVIN MINSKY: That's what happens afterwards. AUDIENCE: Yeah. So it sort of makes me wonder if there-- you can see this to some extent now where people have systems that augment some of the things they used to do so much that they stopped paying attention to it, stopped learning some stuff they used to. MARVIN MINSKY: I can't remember how it began, but you end up with a world with-- they're starving, and somebody says, well, I think there's something called seeds, and if you plant them, you-- [LAUGHS] AUDIENCE: Exactly. But they're watering their plants with Gatorade [INAUDIBLE].
But I think one problem is that you might start to see a big divide between people who increasingly rely on technology to be able to learn nothing, and people who take advantage of technology to learn a lot. So maybe there will be this sort of divide in the future between people who know a lot, and are very smart, intelligent people who just [INAUDIBLE] motivation to. And then the problem becomes who takes care of who, and how does that work if you try to get people to be more motivated, or-- MARVIN MINSKY: Well, there is this weird phenomenon that could happen-- again, which is there's a nice question of why did science stop with Archimedes and not pick up again till Galileo? So does anybody have a theory-- why did progress stop? And one popular theory is the rise of religions. But of course, as far as we know, there were always religions. How come Christianity and Islam had such a powerful effect for so much of the world for how long? And is there a chance somebody could just invent another one tomorrow that would spread through and wipe out everything? And how do you detect it, and stop it before it starts to spread? So that's something that could happen. It's happened before. Or suppose somebody invented a new kind of not quite sound logical argument that convinces everyone, which would be almost the same thing, I suppose. Have we been just lucky that nobody thought of a false argument that you can use to convince anybody of anything? AUDIENCE: Well, maybe we have protection mechanisms against something like that happening. And as for the first thing-- MARVIN MINSKY: You mean, it used to happen, and a few people developed-- evolved a way to reject it, and now it's so common that we don't even know about it? AUDIENCE: I don't know-- can you imagine what is it like? If something is very convincing, you get this feeling, well, that promises too much-- I'll ignore it. That could be-- that's why we turn down-- when thesis proposals get rejected, sometimes, it's because they don't promise enough. But usually, it's because they're going to solve everything. And then the thesis advisor says, well, here is this little problem-- can you solve that? And the student goes away, and if we're lucky, they come back with something that'll solve that instead of everything. AUDIENCE: I think it's all happening again right now. Essentially, when the economy collapses, then people have to worry about other things, so their attention is elsewhere. The Visigoth at the gate, or [INAUDIBLE]. Look at what's happening now with the economies. There are theories that are proven to be pretty bad, and they're bringing everything down. Do you think it will be followed by more theories or practical action? MARVIN MINSKY: Let's see-- is this one worse than the 1929 depression? That got most of the world, didn't it? AUDIENCE: It did. AUDIENCE: Not yet. MARVIN MINSKY: It's not quite there. AUDIENCE: But it's much worse. MARVIN MINSKY: Because Argentina and a few other countries don't seem to have collapsed yet. AUDIENCE: Back then, we had the gold standard. Now we only have the paper money standard. So they're shoring up the walls, but when it falls, it's going to be very badly-- MARVIN MINSKY: Didn't gold hit $3,000 recently? It was over $2,000. Yeah? AUDIENCE: When people start not trusting the currency anymore, like the main currency because-- like the currency is just a contract, and so if you no longer trust the people that print the currency, then we can have-- like countries will not be as organized as before.
Because why should I be poor if there is wealth? MARVIN MINSKY: Of course, that's why people are going to gold. I'm not sure they're going to gold for security, I think they're just trying to make more money. But who knows. Well, how do you fix the economic depression? I believe World War II helped. AUDIENCE: So that may be the solution again. So I guess positive thinking is a disaster, I would say that the collective archive of human information and knowledge becomes so polluted with bullshit that you can no longer find actual useful facts among the propaganda. MARVIN MINSKY: Well, then there are-- pretty soon, high school students will be able to make new genomes. They can do it. So that's a typical technological disaster that-- did you read the paper last week that some lab made the best smallpox virus ever? AUDIENCE: But you can have agencies that control that. I mean, there are many kinds of weapons that you can create at home. The fact that you can create them doesn't mean that people are going to create them. And you just need to-- MARVIN MINSKY: But if you have 7 billion people, and you only need one to create the smallpox that kills everybody. AUDIENCE: You don't give the resources to-- MARVIN MINSKY: It doesn't take any. These machines will be $1,000 soon. How much does a sequence-- well, you have to buy the DNA bottles. It might cost several dollars to make the ultimate virus. AUDIENCE: Why would people sell these machines? MARVIN MINSKY: They are. They've just gotten cheap enough that you can buy one. We're just on the-- they used to cost a million dollars, but now there's a place down on-- what's the street after Vassar Street? AUDIENCE: Albany. MARVIN MINSKY: Albany. So there's a company there that has sequencers which are only $50,000. AUDIENCE: Can people [INAUDIBLE], like the spread of a disease? Like for example, the swine flu. MARVIN MINSKY: They don't know how to stop AIDS after 30 years of well-funded research. AUDIENCE: Yeah, but we still have a lot of people-- so if like one of the people dies, just [INAUDIBLE]. [LAUGHTER] I mean, it's-- MARVIN MINSKY: You see, it might kill everybody in a year, and it takes five years to make a new drug. So it's a sort of-- I don't think there's an answer to that one right now. Except that right now, you get a prize if you synthesize something. Tom Knight had some high school students make interesting things. But there are lots of projects like that now where people in biology courses can design something and get it onto a gene-making machine. So that's a serious problem. Yeah? AUDIENCE: So we have these big problems, and we had other simple ones. We are not very smart as individuals, and also we can't communicate very well-- very fast. And probably the machines that can solve problems for us-- you believe that group's approach of solving-- creating robots, like intelligent robots, is good? And how does it compare to [INAUDIBLE] approach? Like he's doing research into trying to create something, I believe, more similar to a human being than-- like a human agent? MARVIN MINSKY: Well, he's trying to make a machine that has-- can do common sense reasoning-- what we call common sense reasoning. AUDIENCE: Yeah, but do you think that's better? Because we will solve the problem of life having people that produce viruses very cheap? Then these robots can solve these problems for us probably, if they're smarter than us if they have enough processors?
Do you think that's better than creating-- like for example, [INAUDIBLE] is creating robots that solve other problems? MARVIN MINSKY: Well, that's a question of whether-- can you make a machine that's very good at some-- I mean, things, like mathematical problems like factoring numbers, and you could make a supercomputer nobody has. But you could imagine working on that problem, and eventually finding some way to make a computer factor very large numbers quickly. Now, nobody has proved that that's an NP-hard problem as far as-- does anybody know? Is factoring an NP-hard problem? AUDIENCE: No. AUDIENCE: It has not been shown to be hard. MARVIN MINSKY: What? AUDIENCE: It has not been shown to be hard. AUDIENCE: Recently, there's a 2008 paper that proved that [INAUDIBLE] polynomial algorithm [INAUDIBLE] recently in 2008. MARVIN MINSKY: It's been shown to be polynomial? AUDIENCE: Yeah. MARVIN MINSKY: That's nice, or not. I don't know. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: I haven't kept track. AUDIENCE: [INAUDIBLE] Oh, if something is prime. AUDIENCE: Oh, yeah. I think right now, they don't think it's NP-hard. But if it turns out to be polynomial, then the entire hierarchy kind of collapses-- I think that's what happens. But yeah, it's weird. MARVIN MINSKY: There are known to be problems that are not polynomial. But what's the standard NP problem? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Which one? AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Friedberg and Muchnik had a complicated process where two Turing machines are passing data to each other, and they proved that that was NP, but simpler than the standard one. So there are things on the hierarchy that are lower than the combinatorial standard ones. But anyway, the question is, can you make a machine that can understand Shakespeare plays, but leaves no possibility that 10 minutes later it can do unthinkably dangerous things? And I think the answer is no, nobody-- it's hard to see any difference between literary thinking and scientific thinking. I mean, a novel is like a very long proof of something with a few bugs. I don't see general literature as any different from technical literature except the criterion for success is much lower. I made a whole list of things, but I don't know if that thing works. Wake up. AUDIENCE: [INAUDIBLE] AUDIENCE: Well, my screen is blue. Is it actually working? Oh, I could-- [LAUGHS] Is there any point in my turning on the control panel? AUDIENCE: You've got it set for multiple displays [INAUDIBLE]. It's not showing [INAUDIBLE]. MARVIN MINSKY: Oh. Well, maybe the connector is loose or something. Oh, well. It doesn't have any diagrams. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: I mean, we already have problems that look pretty serious. Like when people started worrying about population a generation ago, it was carefully explained that as soon as people achieved a substantial income, then the number of children per family would disappear. And I don't know if that rumor is still-- how many of you believe that as the standard of living goes up, the population declines? Seems to be true in some countries, but it only takes one exception. Yeah? AUDIENCE: I always thought it was more correlated to education about methods of preventing. MARVIN MINSKY: Well, if you don't get married until you get your PhD, and if we manage to make the PhD take 15 years. [LAUGHING] Has anybody proposed that?
MARVIN MINSKY: That would-- AUDIENCE: And I feel like part of it with developing countries where standards of living are low, one of the reasons why population is so high is that they don't really have any methods of birth control. It's not necessarily that they want to be having so many kids, but it's just that they have fewer methods of preventing it. MARVIN MINSKY: There's also if you have seven kids and four of them die, then you say, oh, this is-- what am I going to do when I retire? And so you quickly have four more if you can. AUDIENCE: So I guess they are accounting for the fact that a lot of kids are likely to die. MARVIN MINSKY: Anyway, but once you have robots, then we won't need kids anymore. [LAUGHING] AUDIENCE: Like in Japan. MARVIN MINSKY: What's happening in Japan? The birth rate is way down? AUDIENCE: Well, I think in Japan, they have a problem with an aging population, and not enough young people to take care of them. So I think they're investing a lot in humanoid robots, or robots that can take care of old people. MARVIN MINSKY: Yeah. So I see the Honda robot has gotten a little smaller and doesn't fall over quite so quickly. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: There's something wrong with its posture though, because apparently it doesn't know-- it can't straighten its legs without falling over, so it's always-- it walks something like-- [LAUGHING] Anybody know why? Maybe it doesn't have any spinal joints? Anyway, cars can't walk yet. So what other disasters do we-- AUDIENCE: Well, there's always global warming. MARVIN MINSKY: And it looks hard to fix. AUDIENCE: Yeah. I don't think a robot could do it. MARVIN MINSKY: I have a friend named Russell Seitz who appears to have a good solution to it, which is terribly simple. And it turns out that if you have an ocean, and you make little bubbles by-- you have a tube running in back of a ship, and it has little holes. And if you make micron-sized bubbles, then they turn the water white, because the interface between water and air-- it's a change of refractive index, and that reflects a lot of light. And it turns out that if you do that in distilled water, the bubbles go away pretty quickly because the smaller a bubble is, the higher the pressure inside. Everybody know why? Because surface tension is constant on the boundary, so the pressure is proportional to the curvature. Because the smaller the bubble is, the greater the force per square inch of surface-- pull it in. But in seawater, there's enough oils and other things in any cubic centimeter to coat a micron-sized bubble with a sort of oily film, or one that doesn't let the air out of the bubble. So the bubble is good for several hours, and the bubbles tend to float up to the top-- stay in the top few inches of water. And if there's a wave, it goes down, but it comes up again. So his calculations show that if you have a few thousand ships, which at any time in the-- crossing the Atlantic, for example, there are 2,000 ships, and each of them is producing a half-mile-wide streak of whiteness that lasts a few hours. Then you can change the reflectivity of that part of the earth by 2% or 3% at a very low energy cost. And the nice thing is if it doesn't work, you turn it off. Some of the plans are to put sulfur dioxide into the upper atmosphere, and if it doesn't work, you can't do anything for a few years. So there is a fairly expensive, but not seriously expensive possible solution. And it took him three or four years to convince anybody of it, and now the idea might be catching on.
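Note: the bubble argument Minsky gives above is the Young-Laplace relation. For a gas bubble of radius r in a liquid with surface tension gamma, the excess pressure inside is delta_P = 2*gamma/r, so the pressure grows as the bubble shrinks. Rough numbers for a micron-scale bubble in water:

    # Young-Laplace excess pressure inside a small bubble in a liquid.
    gamma = 0.072            # N/m, surface tension of water (approximate)
    r = 0.5e-6               # m, radius of a micron-sized bubble
    delta_p = 2 * gamma / r
    print(delta_p)           # ~2.9e5 Pa, nearly three atmospheres

That overpressure drives the gas back into solution quickly, which is why the oily film matters: it blocks gas transport and lets the bubble persist for hours, as described.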
MARVIN MINSKY: But every now and then, somebody invents something that actually works, and that's a possibility. Maybe you can invent something. Painting roofs white also works, but there aren't enough of them, and you might have to clean them every now and then. AUDIENCE: Maybe it's time to reconsider laser powered volcano lancing. Emit huge amounts of particulate matter into the atmosphere by controlled volcanic emission. MARVIN MINSKY: Yes. Was it 1883-- AUDIENCE: [INAUDIBLE], or Krakatoa? MARVIN MINSKY: Krakatoa. It was the year without a summer all over the world from one gigantic explosion. AUDIENCE: And it happened again this year thanks to Greenland. MARVIN MINSKY: What's that? AUDIENCE: They're about to have a major eruption in Greenland. MARVIN MINSKY: Oh. Do you think it'll help? Or is it just going to warm the water up a little? AUDIENCE: But no, this one may shut down air traffic over Europe for two years. MARVIN MINSKY: Well, people shouldn't travel anyway. Why don't they just use the internet and Twitter? AUDIENCE: Well, if they generate-- well, the jet contrails are not too much different from the boat bubbles. MARVIN MINSKY: It would take a lot of airplanes, but yes. Nobody knows if these bubbles would have a bad effect, but it's hard to-- but the nice thing is you could turn them off in a few hours. He's gotten some attention from people who want to conserve water in the Colorado basin and places. For better or for worse, there's nobody in charge of the earth, so nobody wants to spend any money on that unless they can get their money back. I have lots of disasters, but I want a new one. AUDIENCE: So do you believe in global warming? MARVIN MINSKY: [LAUGHS] AUDIENCE: Oh, well. I'm good. It's happening now. The Chinese say it's global cooling until 2068. MARVIN MINSKY: Well, somebody might be right, but as far as I know, all the big simulations claim to get a degree per decade or something, don't they? AUDIENCE: The Western models tend to underestimate the role of the sun in it. So things like the [INAUDIBLE] minimum with its impact coming. MARVIN MINSKY: That was 300 years or something? AUDIENCE: Yes. MARVIN MINSKY: Well, if you knew how to turn that on, that would help. AUDIENCE: Don't we seem to be going into just a little bit of now where there's low flux and sunspots, but no [INAUDIBLE]. MARVIN MINSKY: There were no sunspots this year. Nobody knows why. Well, a few people know why, but nobody knows which ones they are. [LAUGHS] AUDIENCE: Well, it's the end of the Mayan calendar, so-- MARVIN MINSKY: Of which? AUDIENCE: The Mayan calendar. MARVIN MINSKY: Calendar-- they didn't have a-- AUDIENCE: Was it next year? AUDIENCE: 2020. AUDIENCE: 2012. [INTERPOSING VOICES] AUDIENCE: That's like December. AUDIENCE: Oh, next year. MARVIN MINSKY: Is there just one copy? I think I've seen it somewhere, and it's in Yucatan-- when I was a kid. And they explained that the-- oh, never mind. Human sacrifice place. But it worked. Everybody makes fun of human sacrifice, but here we are in the-- [LAUGHS] Is it going up? AUDIENCE: It's not straight. MARVIN MINSKY: Doesn't matter. I'm just looking at my notes. AUDIENCE: So is there anything [INAUDIBLE] MARVIN MINSKY: What if they didn't agree? If people were really smart, then they would agree-- is that possible? AUDIENCE: I don't know. MARVIN MINSKY: I don't know what it means. AUDIENCE: I don't know either. MARVIN MINSKY: Well, that's a serious question. What do you think people should work on.
If you were going to do a thesis on AI, what would you aim toward? And recently, we had a student who had decided, for some reason, to imagine a commonsensical kitchen project. So the point was-- the idea was that-- as you know, various projects, including at MIT, are collecting commonsense knowledge about the world, and trying to make programs, and do reasoning about it. Henry Lieberman's project here is-- the most famous one is the CYC project that I mentioned before. It's the same Lenat fellow. After he wrote these early AI programs that solved mathematical problems, and simulated evolution, and little things like that, then he decided that you couldn't really have a smart machine unless it knew a lot because it might take 1,000 years to accumulate knowledge just by doing experiments. So I think Lenat was the first one to start accumulating a large commonsense knowledgebase. And I imagine that that current project may still be the largest one. I don't know. CYC is short for encyclopedia or something. So we did have a student who said, well, here's a nice quiet world-- the kitchen world. And you want to make something that has certain properties, so you have to figure out what ingredients and how long to cook them, or you have a knowledgebase with recipes. And then if somebody wants something new, you-- so that's a typical project. And several of us talked that student out of it, or managed to-- because we didn't see any chance of the Colossus disaster coming. It's almost the opposite. I keep getting messages from the friendly AI society, and that-- there exists a bunch of apparently smart people who are saying, how can we do AI research without the risk of making a machine that suddenly becomes very smart, and takes over the world, and kills everybody? And they write all these papers about ways to make friendly AI, and they have discussion groups where somebody proposes such a thing. And somebody else points out five obvious bugs in that. And these people are pretty serious. Just look up friendly AI on your browser. Yes? AUDIENCE: Do you think that's sort of a Terminator Hollywood movie [INAUDIBLE] scenario is a legitimate concern? MARVIN MINKSY: Yes. I think there are a lot of-- I think the 100% lethal smallpox is what this one was advertised as. How many of you saw the press release on that? Three or four. It killed all the mice. I think it was 12 mice, but there might have been something else in their food. Who knows? Yes, I think it's a serious problem. And it would be great if somebody in that group actually got a good idea about how to detect these troubles before they spread. I just don't recommend any of you'd work on it for your thesis or something, because I don't think there's a good answer. See, if it gets smarter than all the people put together in 5 minutes, then nothing we've thought of could stop it. AUDIENCE: I mean something different from a small pox virus. [INAUDIBLE] a self-governing sort of entity. MARVIN MINKSY: It's very hard to-- well, in the Colossus novel, which is quite well worked out, the machine says this creature from outer space is coming, and from its behavior, you can see that it's a really super intelligent thing. It's really dangerous, and so I have to get the people to support me to figure out some way. I mean, how do you stop a thing that's coming at almost the speed of light, and blah, blah, blah. Maybe the only solution is to change physics. 
So you have to think of some very-- once you understand physics very well, maybe you can turn a switch and change the gravitational constant, and so forth. And if you start fooling around with that, then there's probably only one chance in 10 to the 10th you can do that without turning the universe into a black hole. So all of these large scale adventures are very risky. So are you afraid to make an AI if you could? AUDIENCE: It seems like a self-replicating thing may be just as dangerous as the meanest of AI. MARVIN MINKSY: Mhm. Yeah, it might-- AUDIENCE: [INAUDIBLE] massively. MARVIN MINKSY: Michael Crichton had a nice novel. I forget how he-- what happened-- did anybody read his nanotech novel? AUDIENCE: Prey. MARVIN MINKSY: What was it called? AUDIENCE: Prey-- P-R. MARVIN MINKSY: Right. What was the solution, or was there one? AUDIENCE: I don't remember. It's been a while since I've read that. MARVIN MINKSY: I don't remember. I'm not sure I got to the end. Yes? AUDIENCE: I suspect that AI, if it's able to within five minutes make itself that much smarter, it's going to outstrip self-replicating machines because self-replicating machines have to constantly requisition resources for themselves, and each of them are only as intelligent basically as the cells on the edges. Whereas a centralized AI has got a lot of computing power. MARVIN MINKSY: Yeah. Well, if it wants to 10 to the 30th bits of memory, it'll probably have to make a self-replicating memory system. But it would surely be smart enough not to make one that will wipe itself out because-- AUDIENCE: Apparently the solution in Prey was very similar to War of the Worlds. They defeated it with a virus. MARVIN MINKSY: Oh. AUDIENCE: What? AUDIENCE: Yes. A phage. MARVIN MINKSY: Did they make their own, or did it just evolve? AUDIENCE: They made their own. AUDIENCE: High school students? MARVIN MINKSY: Because in the HG-- [LAUGHTER] --the HG Wells one, the virus just happened to be there, and the people didn't make it. But HG Wells must have written that around 1910 or something like that. AUDIENCE: I feel like the various end of world scenarios cancel each other out. MARVIN MINKSY: Well, the news about that smallpox virus was pretty scary. I wonder if it's been published? Is the smallpox virus sequence in public property now? Because you could make one in your backyard cellar. AUDIENCE: So you're saying that bad homework by high school students could be a catastrophe? MARVIN MINKSY: I think it was a flu virus, wasn't it? AUDIENCE: Yeah. AUDIENCE: Oh, yeah. I agree. MARVIN MINKSY: Sorry, not smallpox. Smallpox was pretty bad, and we're still getting vaccinated for it, aren't we? AUDIENCE: No. [INAUDIBLE] MARVIN MINKSY: I forgot the question. Never get an idea while you're listening to someone else. Yes? AUDIENCE: So one of the problems is aging. And the people working on that [INAUDIBLE] people struggling to-- like you said, he went into a room and asked people how many of them wanted to live forever, and just a handful. Then he went into a room of scientists, and there were a lot more. So that speaks to people working on some problems or having an interest in problems of some sort. If anti-aging worked and we have a lot more people living longer with no control, and [INAUDIBLE] or at least limited purpose [INAUDIBLE]. The question is, how do you get more people interested without having to work on some problems? And without indulging philosophy and what purpose is, how do we get more people? MARVIN MINKSY: Interesting. 
I have a friend who's doing that. Gregory Benford-- how many of you know that he's a first rate science fiction writer? And unlike most science fiction writers, he's also a very able physicist, and has had a substantial career in low energy physics. Namely, when he was a young physicist looking for what to do, somebody said they were throwing out their cyclotron. Cyclotron is a cylindrical machine about-- his is about 12 feet in diameter, and weighs quite a few tons. And they said, of course, all the young physicists want to work on high energy things, and this thing only makes a few million electron volts. Nobody's interested in such a small exponent. So he got it and set up a laboratory in Irvine university. But anyway, in the last few years, he started working on selecting Drosophila that live long, and he has some that live somewhere between two and three times as long as regular Drosophila. And he knows a lot about what genes are probably involved in that, and has a company which I bet you could invest in if you want. But that's the only one I know of. So the next thing would be to go to mice. And the great thing about-- I think the Drosophila is five days or something to reproduce, which is some very short time. So that way, you can do hundreds of years of experiments every term. [INAUDIBLE] So I wonder how many other people are trying to make longer lived fleas, and then work their way up the evolutionary-- because nobody knows much about aging. The most well-established thing is the-- what do you call the stuff at the end of chromosomes? AUDIENCE: Telomeres MARVIN MINKSY: Telomeres. And at least certain cell families, every time they reproduce, they lose one of those DNA units at the end of the chromosome. And that stops many cancers from growing, because if the cancer cells lose their telometers-- [LAUGHS] telomeres, then they can't reproduce for some reason. But of course, some of them find ways to reproduce without that mechanism-- whatever it does. And so I bet there are a dozen groups working on aging in the world, but you'd think it would be a lot more, whatever the number is. There's a very small number of people working on cryogenics, which is can you freeze an animal and then revive it at some very long time in the future when medical science has progressed to cure the disease that you would have otherwise died of if you hadn't died of being frozen. And in fact, I think-- I know that Benford has subscribed to such a service. So the future might have some good science fiction writers. More disasters. What would happen if people did live 200 years in their present state? Then we would absolutely need robots because we wouldn't be able to hire enough foreigners to take care of them, because the foreigners would be getting older too. Yes? AUDIENCE: Do we need robots if you replace our organs? MARVIN MINKSY: Oh, if you can reverse aging or prevent aging, then you don't need anything. AUDIENCE: [INAUDIBLE] at least we can produce like robotic arms. And [INAUDIBLE]. MARVIN MINKSY: Well, it's a nice question. Do cells wear out, like a brain cell? Can a brain cell continue to work for 1,000 years, or will it accumulate other non-genetic changes that nobody really knows? Of course, you can grow bacteria for a 1,000 years, but what's happening is they're not just aging, they're evolving. And so lots of them do, in fact, die during this process, and the survivors mutate or develop new genes. 
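Note: a toy model of the telomere countdown just described, with illustrative numbers only (roughly 10 kb of repeats and a loss on the order of 100 base pairs per division are commonly quoted figures; real senescence is far messier):

    # Hayflick-style division limit: each division trims the telomere,
    # and the cell line stops dividing when the repeats run out.
    def divisions_until_senescence(repeats_bp, loss_per_division_bp):
        count = 0
        while repeats_bp > 0:
            repeats_bp -= loss_per_division_bp
            count += 1
        return count

    print(divisions_until_senescence(10_000, 100))  # 100 divisions

Cancers that re-extend their repeats (via telomerase) escape this limit, which is the loophole Minsky alludes to.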
MARVIN MINSKY: And I'm sure if you take E. coli and run it in a bunch of test tubes for 1,000 years, it'll end up quite different. So it's not clear that it-- in some sense, it's not clear that it's immortal. So maybe every cell has some trace of a few atoms from the original one, but it's not the whole structure. Yeah? AUDIENCE: [INAUDIBLE] you just kept rejuvenating the cells and the brain function for a thousand years, is it robust enough to do the memory management [INAUDIBLE] for thousands of years and handle all that? Do we know? MARVIN MINSKY: No one has the slightest idea. They're pretty sure that memories are encoded in synapses. But as far as I know, neuroscience has a sort of vacuum in between the knowledge about individual cells and the larger structures that remember symbolic things, for example. So they know enough about synapses in very simple animals-- well, what was David's conclusion last time? Do we know the encoding of any particular reflex-- learned thing in that little worm? AUDIENCE: There's basically only two encodings that we completely understand from start to finish in any system as far as I know. One is the patellar tendon reflex and the knee jerk reflex, and the other is the gill retraction reflex in [INAUDIBLE]. MARVIN MINSKY: What's the MIT professor who mapped out the motor control of the frog leg? Was it Bizzi? AUDIENCE: Was it Lettvin? MARVIN MINSKY: What? AUDIENCE: Lettvin, perhaps? MARVIN MINSKY: No, I think Bizzi-- what's his name? The frog has a sort of cerebellum-like, two-dimensional structure. And they've mapped-- AUDIENCE: One looks like [INAUDIBLE]. Oh, no. AUDIENCE: That was a while ago. MARVIN MINSKY: Actually, that's-- he's figured out the wiring. No, I'm mistaken. He's figured out the wiring of the motor control of the leg, but it's not learned. So the anatomy is clear enough that you can see when it wants to turn left, it moves the two legs in the right way. AUDIENCE: Yeah. And that's just about as far as, I think, anyone's gotten. MARVIN MINSKY: So that's the function of wiring, but not with some learning. AUDIENCE: Right. MARVIN MINSKY: But I suspect what will happen is that it won't be so terribly hard to implant artificial gigabytes of memory in parts of the human brain, and get people to learn how to operate it. So-- AUDIENCE: [INAUDIBLE] maintain skin cells for a thousand years, why not neurons? MARVIN MINSKY: Right. And we'll be able to do that before we know how the human memory works. We'll know all about how the nanotechnology RAMs work. And then maybe you can work backwards from seeing the kind of mistakes you make when you try to correct it. Well, I want a new disaster. AUDIENCE: We've got meteors, right? MARVIN MINSKY: What's that? AUDIENCE: Meteors. MARVIN MINSKY: Yes, meteors, and volcanoes, and those. And there's a nice book by Martin Rees which is cheerfully called Our Final Hour. Martin Rees is a very good astronomer in England who has the title of Astronomer Royal. So he is discussing mostly physics-type disasters, and I was trying to extend it to how do you make a religion that sucks everybody in, or a logical argument that makes everyone into a terrorist. That's what the terrorists are looking for. And I suppose there's a remote chance that some philosopher will think of something new one of these days. AUDIENCE: Well, if the government makes everyone's life savings disappear, perhaps that's enough. MARVIN MINSKY: What will they do? AUDIENCE: They'll get angry, I suppose.
MARVIN MINKSY: You mustn't tax the rich because then there won't be any job creators. I love the new kind of logic appears every month with the-- are there any countries with three parties? AUDIENCE: Oh, yes. There are countries with three parties. MARVIN MINKSY: Who has three parties? AUDIENCE: England. MARVIN MINKSY: How does England work? AUDIENCE: There's labor, social, [INAUDIBLE].. MARVIN MINKSY: Are there other countries with just two parties? It seems crazy to have two parties. AUDIENCE: Yeah, I think two parties is much less common than three or more. MARVIN MINKSY: Two parties? There are almost always three or more, or one. [LAUGHTER] MARVIN MINKSY: Say it again? AUDIENCE: There are almost always three or more, or one. AUDIENCE: Well, Australia's pretty close to two parties. There's like 2 and a 1/3 party. MARVIN MINKSY: A third party. AUDIENCE: A very tiny [INAUDIBLE].. MARVIN MINKSY: Just a few people who can be moved. AUDIENCE: That's right. In England, my uncle Harry is famous for the dog's rights party of which he and his wife are the only members. [LAUGHTER] AUDIENCE: Well, there's a theme that goes around that if you're trying to vote for a third party in the United States, that you're wasting your vote. So that's kind of a destructive ideal that keeps it at two parties. AUDIENCE: I think I would say the same thing in England for a long time until in the last election, they actually had liberal Democrats [INAUDIBLE] they became third party. So nowadays, it has three, three parties. But it took 60 years [INAUDIBLE].. MARVIN MINKSY: Well, I know people still mad at Ralph Nader because the Bush thing came with a very small number of-- the Nader votes could have shifted it. AUDIENCE: How about something like this for-- MARVIN MINKSY: Nader sent me a book the other day-- a great big book with about a dozen smart people discussing politics. I haven't opened it. [LAUGHTER] AUDIENCE: Do you have anything on your list that plays on the good old genetic stuff, such as for example, you'll find out that people who have blue eyes have some genetic thing that makes them inferior in the long run, and so they should be eradicated from the population in the interest of humankind and-- MARVIN MINKSY: People who have a what? AUDIENCE: Who have blue eyes, for example. And then, of course, they would be happy with this, and this would create a polarizing effect on the global population. They say whoa, and they kill off each other. I think this is something that should come up. MARVIN MINKSY: Well, how did Hitler get this Aryan thing to be so popular? Does anybody understand the psychology of Naziism? Because wasn't there something about being blond and not very many Germans are actually blond? So how did he pull this thing off? AUDIENCE: Eugenics and race science and things were really popular at the time, not just in Germany. And in particular, a lot of what you see coming out that proves one thing or another are basically misunderstanding correlation and causation. MARVIN MINKSY: Good point. AUDIENCE: [INAUDIBLE] studies. And there's also a lot of purposeful [INAUDIBLE] research and things like that had a specific agenda, especially when you get into Nazi Germany. But that's just from that side. You also have lots of other things, like the position Germany was in, people wanting to scapegoat, and social and cultural ideas. Not really just the eugenics that were being looked into at that point. MARVIN MINKSY: So that was just one feature at some particular period of evolution of the thing. 
AUDIENCE: But there is also-- MARVIN MINKSY: Maybe it was at the beginning when if you could just get all the blond people, that would give you a start, and then you could broaden it out. AUDIENCE: I don't know. If they really look back to the Huns, which were supposedly of Nordic descent, the German culture is pretty thick with the Nordic mythology as well. So they feel like there's a connection even if they're not all blonds. MARVIN MINKSY: Just think, we could back-breed them, and if we had-- with enough correlation, we could make a Hun. It would be great project. Yeah? AUDIENCE: So people tend to divide themselves into groups, and kind of hate the other. Like the sports industry is based on that. MARVIN MINKSY: Which industry? AUDIENCE: The sports industry. So there's a team of people that you don't even know, and you don't care, and you cheer for them to beat the other team, and that's kind of irrational. And there are some studies-- I think it's called the Stanford study, that they put some students into a prison, and some of them acted as the police officers, and some as prisoners. And then they tend to divide each other. And so the prisoners started hating the police, and the police started acting more violently. And so if you merchandise-- like if you say to people, we are, as a group, better, people tend to believe that and tend to hate another group. And it's independent of how the group is formed. So it's just how we are. We tend to divide into groups. MARVIN MINKSY: Hey, can we invent a sport to see if football players are better or worse than basketball players? [LAUGHTER] I would love to invent something that undermines this whole sports thing by getting in a real serious argument about which sport is better. AUDIENCE: I think you just might create an uber sport. It might go terribly wrong. AUDIENCE: Yeah, I suspect that that would backfire. [LAUGHING] MARVIN MINKSY: What was your key word? AUDIENCE: Like an uber sport. MARVIN MINKSY: Uber? AUDIENCE: Yeah, like the sport to conquer all sports. MARVIN MINKSY: Yeah? AUDIENCE: Did you have that [INAUDIBLE] yet? MARVIN MINKSY: No. An uber profession. AUDIENCE: Inventing a world sport. MARVIN MINKSY: Well, just imagine working in an employment agency. There, you're not doing-- maybe you're inventing jobs, and you don't realize that you have an uber job of some sort. It's the Republican argument that you shouldn't tax people because they're the job creators. It's just such a strange argument because-- when Patrick and I were running the AI lab, we had a great guy in charge of engineering, and we were arguing about jobs. And he said, if you want jobs, why don't you just get a very long rope and put a million people at each end, and see who can pull the other-- [LAUGHTER] --who can pull the other across the Mississippi. And because-- AUDIENCE: Football players would win. AUDIENCE: Yeah, [INAUDIBLE]. Well, the basketball players could all jump. MARVIN MINKSY: And I just turned on the radio this morning, and this Casino is going to create 3,000 jobs it said. And isn't that just like the rope? That is, these people are not actually doing anything except taking money from the poor. AUDIENCE: Isn't Wall Street doing that, but to smart people? MARVIN MINKSY: Well, yes. And the tragedy is that-- I've been through this 100 times, but when I was a young professor, all the students became professors, and there was a-- there seemed to be a point to it all. 
But around 1980, things started flattening out, and American universities stopped expanding. And I think it's partly that the lifespan has extended one year every four. So you don't need new professors very often, and the universities haven't grown. So almost all my first students are professors, and almost all the recent ones aren't, and a lot of them are in Wall Street. So what can we do? AUDIENCE: Regulate Wall Street [INAUDIBLE] MARVIN MINKSY: Maybe you should tax people, if their IQ is high and they make more money. [LAUGHTER] A mathematician could work that out and-- AUDIENCE: Some people work at getting the highest IQ they could, but not too high to be taxed. MARVIN MINKSY: Not too high-- yes. AUDIENCE: Or just taking the test to show the IQ. AUDIENCE: It doesn't-- I mean, a lot of that boils down to pretty simple economics. You can make-- or people can make a lot of money on Wall Street, then the system is set up such that that's fairly doable for a large number of people, and it's really, really hard to get jobs as professors. But if there was more money in research, I imagine you would see a lot more people doing research. MARVIN MINKSY: If they made enough money-- AUDIENCE: [INAUDIBLE] the right reason to. MARVIN MINKSY: If you make enough money, can you buy a professorship? AUDIENCE: Yeah. AUDIENCE: No, but you could start your own lab. MARVIN MINKSY: Yeah, that's true. But then you're inclined not to publish if you actually do anything good, and you keep it a secret. Well, how many of you want to be a professor? It's fun. You don't have to do anything. [LAUGHTER] It gives all the more reason to do things. AUDIENCE: I also heard that in the past, you had to deal with I guess maybe less of [INAUDIBLE] political [INAUDIBLE].. Nowadays, [INAUDIBLE]. So if somebody simply wanted to be a professor to teach students or to help people, then it might have been a better idea with those assumptions back then. MARVIN MINKSY: I missed the first point. AUDIENCE: That maybe now being a professor isn't as straightforward as teaching and helping students with the more noble goals of [INAUDIBLE],, like wanting to be a professor. Nowadays, it's also a lot more dealing with politics, and dealing with the struggle to get tenured, and to look good, these things that maybe are sort of secondary to people, get in the way of what you actually want to do. MARVIN MINKSY: Yeah, that might be true. Because in the golden age things were expanding, just at-- AUDIENCE: So if you did want to just-- if your main goal was to teach people, then it might-- nowadays, maybe, the best way to do that wouldn't necessarily be to be a professor. You could get a job or you could get paid pretty well to do something where you weren't a professor, and in your spare time teach a lot of people. And you could teach people that maybe needed to be taught, rather than the ones who are paid to teach. MARVIN MINKSY: Well, it's going to be interesting to see what happens if-- what's the professor who has the huge web thing going? AUDIENCE: [INAUDIBLE] MARVIN MINKSY: It's this California professor who's got millions of people watching-- AUDIENCE: Sebastian Thrun? MARVIN MINKSY: Sebastian Thrun? AUDIENCE: And Peter [INAUDIBLE] MARVIN MINKSY: Oh, but that's very small compared to the new one. The guy named Khan. [INTERPOSING VOICES] MARVIN MINKSY: Actually, how big is the Google thing-- the Thrun thing? AUDIENCE: It's the Stanford. MARVIN MINKSY: Is that pretty big now? AUDIENCE: Several [INAUDIBLE]. 
AUDIENCE: And they're going to expand. MARVIN MINKSY: So that-- well, what does that mean? Can they actually get better results than classroom teaching? I don't see why not, but-- AUDIENCE: He's getting better results in some districts. AUDIENCE: Who is? MARVIN MINKSY: Khan. AUDIENCE: Oh, that's different. AUDIENCE: Yeah, Khan, he actually was never a teacher or professor. He began in finance and started out with one video tutorial for his niece that he was tutoring, and just put it up on Youtube for anybody, [INAUDIBLE] MARVIN MINKSY: Well, there's a lot of interesting questions about whether-- it seems to me that-- my experience was interesting because if a professor said something that surprised me, I would ask him how he-- or her-- there weren't many hers I have to admit. I would ask the professor, how did he think of that? Because I knew that I could look it up if it's something-- some proof of a theorem, then it's probably published. And very likely, Gauss's proof is better than the current one anyway. That's a joke, I think. But I don't see how you could get that out of a non-interactive system. You can't go and ask Andrew Gleason how did you get that idea, then he would struggle and maybe tell me how he got that idea which was-- but perhaps a great teacher can evolve lectures in which you not only explain what the student has to know on a test, but you can teach ways to think about it. I don't know how-- AUDIENCE: [INAUDIBLE] how much [INAUDIBLE] expertise [INAUDIBLE]. MARVIN MINKSY: Exactly. That's the question. And maybe you can do a tremendous amount of it as the thing evolves, because right now, there's nothing the student can type when they're watching the lecture. I presume nobody-- or is there an army of graduate students at terminals answering questions to go with these lectures? What? AUDIENCE: These online courses? MARVIN MINKSY: Yeah. AUDIENCE: Well, what they're learning-- the New York Times is writing about Khan and Stanford as models. And Stanford took from Khan that you have no more than 5 or 10 minute presentations, and then you have some activity. MARVIN MINKSY: Yeah, where did that hour come from? So in that-- but after the 10 minutes, is there an army of people you can interact with? AUDIENCE: Well, you interact with the problem. And then you get another 10-minute thing. Now, some places are organizing so they have-- like they expect a parent, for example, to be with the high school student or elementary school student while they're interacting with this computer stuff. AUDIENCE: At least with [INAUDIBLE],, people can ask questions under each video. And if you look under a video, sometimes, you can see a fair amount of decent questions. So in this case, you might not only have a question answered, but you might get other questions that [INAUDIBLE] need to ask. And it's always there for reference. MARVIN MINKSY: Well, in fact, interacting with a person could be that the person-- if you didn't seem to understand something, the person gives you similar problems that are just slightly different point of view, or slightly easier, or analogies with something that they know you know. So presumably, in the long run, you can make these interactive systems have large bodies of pedagogical knowledge for each kind of problem. You could have a huge tree of what are the most common misunderstandings, and what's the best hint for getting out of this dead end or that dead end. So maybe something will evolve. 
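Note: the "huge tree of the most common misunderstandings" Minsky proposes might be organized like the sketch below. Everything here, names and contents alike, is invented for illustration:

    # Hypothetical misconception tree for an interactive tutor: each
    # diagnosed confusion carries a best hint plus follow-up probes.
    misconception_tree = {
        "confuses_momentum_energy": {
            "hint": "Ask which quantity is conserved in an inelastic collision.",
            "followups": ["units_check", "elastic_vs_inelastic"],
        },
        "units_check": {
            "hint": "Compare units: kg*m/s for momentum, kg*m^2/s^2 for energy.",
            "followups": [],
        },
    }

    def best_hint(diagnosis):
        node = misconception_tree.get(diagnosis)
        return node["hint"] if node else "No pedagogy stored for this dead end yet."

    print(best_hint("confuses_momentum_energy"))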
MARVIN MINSKY: But it means there either have to be assistants who can do it, which is unlikely, or natural language understanders that can-- so you can reply to this or ask it questions, and it can go find a better lecture for if you're having that particular trouble. AUDIENCE: Well, within traditional classroom, there's laughing and things like that as well. I don't think the traditional classroom has done too well. MARVIN MINSKY: Well, normally, if you have 30 students in a classroom, the teacher is paying attention to four or five of them because most people don't interact in every class. If you're lucky, you have several students who interact. AUDIENCE: I kind of feel like since there's so much just-- so much that already works just by making these lectures as a social tool with all the YouTube videos, it's basically like a frequently asked questions list in the comments. So I feel like before-- I feel since there already is an easier-than-AI solution to help with that, there's never going to be an impetus to actually go and build a system that will definitely answer your questions about the lecture correctly, or give you more practice problems. It'd be such a harder infrastructure to set up. And I think before that happens, there's just going to be more tools to make anything easily, socially shareable. And I think people are going to gravitate more towards making things easier in a social way because that already seems to be-- MARVIN MINSKY: Well, that's raising an AI question because you can imagine some years out that this system evolves, and the Stanford thing gets to the point where you can ask a question, and like Siri or this Dragon thing here, you ask it a question, it understands it, and it routes itself to another example of the same thing. And gradually, a huge database appears that teaches you a lot of abstract algebra, and has thousands of examples, and pulls out just the right one for the confusion it diagnoses you to have. So I think-- in other words, maybe the AIs will emerge from making interactive teaching programs rather than these sort of artificial projects where you're trying to make a program that actually solves some hard problem. But making a program that has thousands of ways to solve a problem and it didn't invent them might be even better. AUDIENCE: So I think one issue here-- there are two issues at hand. One issue is-- MARVIN MINSKY: After all, your calculus teacher didn't invent calculus. That was Newton. AUDIENCE: So in thinking about Khan Academy, I think it's important to separate a couple of different issues. So the first issue is whether Khan Academy is a good tool. And I think in this case, it's clear that it solves the problems that it-- it does-- how do I say this? It's good at what it claims to do. So some advantages that it has is that, well, it explains the concepts clearly, and the video format allows you to go back and sort of listen to things again if you don't understand the first time, which you don't quite get in a lecture hall. And the third again, to just really point it out, is it sort of acts as a social hub where different people can look at each other's questions. Now just because Khan Academy is good at the things that it claims to do, and maybe some other things too, doesn't mean that there aren't other tools that can do more than what Khan Academy does, and can sort of expand the way that we interact with these learning tools.
So I think it's sort of fruitless to say, oh, well, Khan Academy is really, really good at teaching-- that's all we need. Because I think as you said, there are other tools that are completely way bigger than the scope of what Khan academy does that can be really good for education. So for instance, some things that this thing does not address are like, so how does the issue of interaction between the professor and the students come into play with learning. Or what if you can ask questions that it doesn't cover, and it'll give you a response? So I guess it's the idea of this tool is good for these sets of things, but then there are other features that we might want in a learning tool that it currently doesn't have, and it doesn't even try to have. AUDIENCE: So another thing is that [INAUDIBLE] ask questions for the students. I don't know-- I've seen learning systems where-- I think the Standard one does this-- is based on how you ask a question, it will then gear new questions [INAUDIBLE].. But if you think about a system that might not worry about the students [INAUDIBLE] learn which questions, [INAUDIBLE] these students in certain areas converge towards becoming stronger fastest. Then [INAUDIBLE] the teacher [INAUDIBLE] taught 40,000 students. But [INAUDIBLE] very few teachers have had that sort of experience or [INAUDIBLE].. There are some kind of skills you can get only in a computer system that are hard to replicate in the real world. MARVIN MINKSY: The system has to get to some level of understanding that we're nowhere near yet. But with things like Siri, apparently you can bluff your way through very well, and maybe well enough that then you can find four or five answers to the students question. And the student can quickly say that this particular one is-- the student has to be encouraged to be skeptical of the machine. So you ask it a question, and it gives you these different answers, and you say well, it didn't understand me for this and that-- OK. Instead of getting upset and trying to convince yourself that it understood you-- just say, oh, it's just a dumb machine, but I bet one of these five replies is what I really need. Yeah? AUDIENCE: I don't know if you've ever heard about Quora? It's a-- MARVIN MINKSY: Spell it. AUDIENCE: K-- AUDIENCE: Q. AUDIENCE: Q-U-O-R-A. MARVIN MINKSY: No, I haven't heard of it. AUDIENCE: So basically, it's a website where you ask random questions-- not random, but they are very specialized questions. And like it's divided-- I don't know-- maybe there's AI [INAUDIBLE] science. And it's very good because there are a lot of smart people there that-- like the community there-- MARVIN MINKSY: Oh, it's a social network? AUDIENCE: Yeah, kind of a social network. And for example, for this class, there are some questions that I have in my mind. And sometimes, I can just poll some really smart people who will answer. There are a lot of alumni from MIT and a lot of people from San Francisco, from the Bay Area. And also, there is a lot of resource-- I don't know if you've ever heard about it-- [INAUDIBLE] where thinkers, they can post what they are thinking, like smart people as groups. AUDIENCE: [INAUDIBLE] Your friend, [INAUDIBLE].. MARVIN MINKSY: Yeah, he has a quite a good collection of smart people. AUDIENCE: But these may be good for questions that are more research oriented, or that Siri can't solve right now. MARVIN MINKSY: I've been using this Dragon because Siri won't run on my iPhone 4. 
But Dragon has a question answering thing, and it's unbelievably good. AUDIENCE: It's the same company. MARVIN MINSKY: What? AUDIENCE: It's the same company, Nuance. MARVIN MINSKY: Nuance is running the Siri thing? AUDIENCE: Owns both Dragon and Siri. MARVIN MINSKY: Oh, OK. So maybe it's the same speech engine they stole from Dragon. It's remarkably good, or else I've got the hang of asking the right question. Well, how about one more disaster? AUDIENCE: What about [INAUDIBLE]? If it becomes cheap and convenient for everyone to have their own personal paradise, that could lead to the demise of human-- MARVIN MINSKY: Personal what? AUDIENCE: Paradise. MARVIN MINSKY: Well, there's just turning this pleasure center on without any-- AUDIENCE: Yeah, that's the idea. That's why [INAUDIBLE]. It's the opposite of the singularity. Everyone is just on their own completely disconnected from what's happening. MARVIN MINSKY: Well, there are-- some people get addicted to narcotics in one dose, and lots of people in 5 or 10. But you could imagine a narcotic that addicts you in one dose reliably. That could be pretty dangerous. But you have to learn to hate pleasure. How do you get people to like hard problems? All of you wouldn't be here if you didn't really actually like hard problems. Lots of people don't. AUDIENCE: If you learn to hate pleasure, wouldn't you learn to hate the pleasure you get from solving hard problems? MARVIN MINSKY: If you use the same word for both. But maybe-- yeah, I had a complaint about that in The Emotion Machine book, which is that there's-- we don't have a lot of words in common use for ways to think. And there are a lot of emotion words, but I don't-- how many synonyms for pleasure are there for different kinds? I can't think of many. AUDIENCE: Like joy, happiness, do those count, or are those too far away? MARVIN MINSKY: Yeah. AUDIENCE: Maybe on the other end-- MARVIN MINSKY: Passion. AUDIENCE: Does relief count? MARVIN MINSKY: That's a very good point because a lot of pleasures are when something stops hurting. Anyway, I think-- yes, there's a big problem of language for thinking about thinking. And I always think about why Galileo didn't discover the things that Newton did. And I never asked a scholar, but from reading Galileo, I have the impression that he knew that there was energy and momentum, and he thought that they were-- and he didn't distinguish them. So there's vis viva which is the life of motion, or I don't know what it means. But Newton knows that mv squared and mv are very different, and I don't think Galileo got that, and so he got stuck. Because they're both conserved under slightly different conditions. Isn't that terrible? How many of you found that hard? Do you remember when you learned about momentum? I don't. AUDIENCE: Do you mean like when we learned about an [INAUDIBLE] knowledge sense? MARVIN MINSKY: No, the word. Is it from some grade school course or is it college? AUDIENCE: Early grade school. MARVIN MINSKY: Must have been grade school because I can't remember learning it. I don't know. I have a grandson who just discovered something for the derivative of acceleration. It's called jerk in some textbooks, but he has a two dimensional version, and it took him a while to explain it to me. I mean, it took me a while to understand it. So I think our emotion words are not very good anyway. If a word is old, it's probably not. Yeah? AUDIENCE: So you asked how we should-- how can we increase the amount of people that solve the hardest problems.
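Note: in modern notation, the distinction Minsky credits Newton with, together with his grandson's quantity: momentum is conserved in all collisions, kinetic energy only in elastic ones (the older "vis viva" ran the two together), and jerk is the next derivative down the chain:

    p = mv, \qquad E_k = \tfrac{1}{2}mv^2, \qquad j = \frac{da}{dt} = \frac{d^3 x}{dt^3}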
AUDIENCE: We need role models that solve hard problems. So for example, professors that are role models for these people, and kind of advertise that for people. So an example of that, like my high school was very science oriented in Brazil. And most of the people that come here to MIT from Brazil are from my high school. And it's kind of small. There are, I think, two people that come from Brazil-- MARVIN MINSKY: What city? AUDIENCE: Of this high school? MARVIN MINSKY: Yeah. AUDIENCE: [INAUDIBLE] MARVIN MINSKY: Where? AUDIENCE: In Sao Paulo. Like there are students that were [INAUDIBLE]. For example, there are PhD students now in Harvard that are from this high school. And the professors, they motivate the students by telling stories about other-- old people that went through the same kind of thing that they were going through, like funny stories. And they motivate by telling the successful stories. By saying, oh, you could go to MIT which is a great place, and then there's funny stories of these people that they become kind of stars. MARVIN MINSKY: So can you duplicate that high school, or is it unique? AUDIENCE: I think it can be duplicated. Like, you just need some smart person or some few smart people that can teach three or four-- that teach for instance, physics, mathematics, and chemistry. Like math is, not science, but whatever. And you need just some role models and tell these stories. I don't think it's hard. It's about motivating the people. And especially when you're young, you motivate by creating role models. And you say, oh, look at that guy? He doesn't have much money, but he's a superstar. And I know that's at least how I came here to MIT was because there was a guy that was really, really smart two years older than me, and he was famous in the high school because of coming here. AUDIENCE: Well, I think the Chinese have perfected the art. In my middle school-- or it's a culture that has invaded every high school and household to motivate people to study really hard, setting up role models and just [INAUDIBLE]. MARVIN MINSKY: Are they science oriented? AUDIENCE: Very science oriented. Not only do they work really hard, try to achieve-- like I said, they don't have a lot of money or whatever, but they're superstars. Not only do they believe in that, they also believe that it's OK if you try very hard and then fail at the end. That's OK also. So tons of people going through really hard math and science training, and have come out failing because the competition is just so intense. But people also do it anyway, and they devote a lot of years and lives to do it. AUDIENCE: I think that probably the best way to get people to love hard problems is to from childhood raise them in communities where all their peers are all into science, because that's what they're into. I had this experience on Long Island switching between high schools. Switching between junior high is like-- I was at a junior high literally three miles away from the junior high where I ended up finishing, and the whole atmosphere of the first junior high was just awful. It was just like these-- no one cool was into science. And so it was very-- when you create people where [INAUDIBLE] people who aren't math-science nerds. If you create more of those places where they self-label that way, people who are like kind of on the edge [INAUDIBLE] math and science, or should I just [INAUDIBLE] the rest of my life? They will have more-- they'll be more likely to just get swept up in that.
I think it's just a socialized-- you need a place to be socialized to be [INAUDIBLE]. MARVIN MINKSY: And so it's a question of how much does what you're learning interact with real life. And I went to a high school of science which was great. But I remember in grade school, science was really nice, and of course it was pretty hard sometimes. But then there are other things like history, and that was incredibly hard. How come some kids don't find-- history is completely incomprehensible. It makes no sense, and you can't figure out what will happen next except by looking it up. And how come most of the children don't realize that this is too hard to pay any attention to? [LAUGHS] AUDIENCE: I think they do realize. That's why they don't pay attention to it. MARVIN MINKSY: Do most people hate history? AUDIENCE: I think so. MARVIN MINKSY: Oh, good. [LAUGHTER] I thought people took it for granted. AUDIENCE: And I don't know-- when I took history, I think what I got out most from that class wasn't the history itself, but rather how to frame ideas, and how to make a good argument and support it, and how to write a good paper, which are kind of life skills. MARVIN MINKSY: Right, but even the best argument isn't very good in that field, so it's very tough. Well, I guess if it's hard to find exceptions, then that gets interesting even if the thing is wrong. I'm not sure what I'm saying. But if a subject has the word science in it, it isn't social science and things. AUDIENCE: I think the issue of science, there's no one solution of how to get people motivated in science, but there's several different factors. So one factor is definitely the idea of having a social support network, whether it's teachers who support you, and also having peers who don't make fun of you and ostracize you for liking science. And that's one. Two, I think there's-- the curriculum itself also has something to do with it. As you said, I think it helps more when people can connect better to the real world, to life. And I think it also helps more when people are sort of allowed a bit of creativity, if you will, when they're not stuck doing like rote problems all the time. And I think another issue is sort of presenting problems and solutions and lessons to students in such a way that they're sufficiently challenged, but it's not-- they're not discouraged because it's too difficult. And on the other hand, they're not bored because it's too repetitive. So I think it's sort of like a delicate balance. All these factors must be there. MARVIN MINKSY: Yeah, and that also relates to the hour problem. When can you give someone a problem that might take a whole day or two? And there's no place for that in-- well, there must be-- where in school do you get-- that's when you have to write a term paper maybe. AUDIENCE: And I think another thing about science and math perhaps, at least the grade school incarnations of those subjects, is that they tend to be a lot more isolating than subjects in the humanities in the sense that when you're stuck at home doing science and math problems, you're always by yourself, and you're always encouraged to essentially quarantine yourself to study these things, and come up with solutions. Whereas, classes in history, or foreign languages, or English are more discussions based. 
And actually, I think that's sort of-- I don't know-- because in real life, when you solve scientific problems, you are working on a team a lot of the time, and you have to communicate your ideas, and you kind of like are working together. But somehow in grade school, you're stuck working on these problems by yourself. And I think that might also be a contributing factor. MARVIN MINSKY: Interesting. Well, if you try to solve a geometry problem, have you ever solved the problem working with someone else? Does it help, or are they always interrupting you so you can't concentrate? AUDIENCE: Even undergrads, for instance-- it's like there will be office hours, or open office hours where students all come together and help each other a little, or give hints, things like that. And it tends to help-- or at least most people-- but I guess I mostly encounter it [INAUDIBLE] people who go to these things. AUDIENCE: I think in grade school, it's too-- one of the problems that you have to master are too simple. It only requires one flash of insight to solve the problem. And that flash of insight, you're supposed to work towards on your own. Whereas, there's like a [INAUDIBLE] it takes like five consecutive flashes of insight to solve it [INAUDIBLE] collaborative. MARVIN MINSKY: OK, here's a problem. Here's an isosceles. These two sides are equal, so that's an isosceles triangle. So now here are two angle bisectors. So if the triangle is isosceles, prove that the angle bisectors are equal. Has anybody proved it yet? Well, it's very easy. But here's your homework. If you have a triangle and the angle bisectors are equal, prove that the corresponding sides are equal, and see who can get the shortest proof. It's pretty hard. AUDIENCE: Have you ever read the essay by Paul Lockhart called A Mathematician's Lament? MARVIN MINSKY: What's the third word? AUDIENCE: A Mathematician's Lament? MARVIN MINSKY: Living? AUDIENCE: Lament. MARVIN MINSKY: No, I don't think so. AUDIENCE: Oh. It's sort of his critique on math education in America. It's quite interesting. AUDIENCE: Yeah, I was just thinking that-- well, you mentioned that mathematics tends to be [INAUDIBLE]. Well, [INAUDIBLE] brought up that. And I was thinking that, for many kinds of mathematical problems, you can prove theorems in different ways. And then somebody comes up with a proof. And then you challenge them. OK, now somebody else find another kind of proof. MARVIN MINSKY: A better proof. AUDIENCE: [INAUDIBLE] OK, come up with four different ways of proving this thing. And always, everything goes. So if you want to work together with people, you can [INAUDIBLE] and so on. MARVIN MINSKY: That's why Feynman said he was so great. That is, when he had a physics problem, he forced himself to find several different ways. And he said all of his friends, when they learned quantum mechanics, either learned the Heisenberg matrix method, or the Schrodinger differential equation method, or the Dirac technique. And whenever he had a problem, he would solve it by all known techniques. And then in later life, he rarely got stuck where most people did. Yeah? AUDIENCE: So related to that, the best mathematicians that I know are people that they stay with a problem, and they solve by themselves. They don't go and look for the solutions like as other people. They try to solve it by themselves, and I believe it's just because of the [INAUDIBLE]. I believe that's a-- MARVIN MINSKY: But there are quite a few pairs of mathematicians.
AUDIENCE: Yeah, but when you're young, you should try to solve the problems by yourself, at least when you're in middle school or high school. Then you create these pathways. Maybe when you get older, the problems are too hard, and then it doesn't help to solve the problem by yourself because you're not going to solve harder problems than those ones are. AUDIENCE: [INAUDIBLE] MARVIN MINKSY: How many of you have ever solved a math problem with someone else? Quite a few. AUDIENCE: [INAUDIBLE] MARVIN MINKSY: I remember being in a boat with Seymour Papert solving some really hard problem. And after a while, he got tired of it and slipped into the water, and went swimming around. And I said come back. He said, why? And I said I'll tell you when you get here. And he got back before the shark. Did I ever tell you that? AUDIENCE: No. MARVIN MINKSY: We thought it was a dolphin for a while, but it had this fin on the back that didn't seem quite right. Anyway, I did a lot of solving with-- I don't remember ever solving a hard problem with three people. [QUACKING SOUND EFFECT] That's nice. AUDIENCE: Wow, [INAUDIBLE]. MARVIN MINKSY: It says low battery. Is that a coincidence? [LAUGHTER] AUDIENCE: [INAUDIBLE] MARVIN MINKSY: Oh, ringing the bell must've dropped the voltage. The low battery went on about one second after I stopped the quack.
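Note: the geometry homework Minsky assigned above is the classical Steiner-Lehmus theorem: if two internal angle bisectors of a triangle have equal length, the triangle is isosceles. The direction he calls easy follows from symmetry; the converse famously resists short direct proofs. A numeric check using the standard bisector-length formula t_a = 2bc*cos(A/2)/(b+c):

    import math

    # Internal angle bisector lengths of a triangle with sides a, b, c
    # (side a opposite vertex A, and so on): t_a = 2*b*c*cos(A/2)/(b+c).
    def bisector_lengths(a, b, c):
        A = math.acos((b*b + c*c - a*a) / (2*b*c))
        B = math.acos((a*a + c*c - b*b) / (2*a*c))
        C = math.acos((a*a + b*b - c*c) / (2*a*b))
        return (2*b*c*math.cos(A/2) / (b + c),
                2*a*c*math.cos(B/2) / (a + c),
                2*a*b*math.cos(C/2) / (a + b))

    print(bisector_lengths(3, 5, 5))  # isosceles: the last two bisectors match
    print(bisector_lengths(4, 5, 6))  # scalene: all three differ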
|
Modern_Political_Philosophy_John_Rawls_PhD_1984
|
John_RawlsModern_Political_PhilosophyLecture_2_audio_only.txt
|
So, and number seven was the idea of public justification; that one I've decided we'll go over another time, when we come to that part. Also, I mentioned that in explaining these sorts of fundamental notions, my hope was to give some understanding of how the notion of what's called justice as fairness is put together and assembled, so you understand it as fairness between persons, or between citizens regarded as free and equal. Now all the other ideas, while they do not come from that idea by any process of logical deduction, still I think there's a natural move from one to the other. So you recall that after introducing the notion of society as a fair system of cooperation between free and equal persons, we then introduced the notion of a well-ordered society. Now what's that? We just said: well, we take this idea of society as a fair system of cooperation and we idealize it. It comes from just asking what society would be like if it were the perfect realization, within reasonably realistic assumptions, of this idea of society. Right. Then the next idea, which I'm going to talk about today, I'm going to call the basic structure of society; this is discussed in section 2. This notion arises from asking: to which institution, or family or scheme of institutions, are we going to apply this fundamental intuitive idea of society as a fair system of cooperation? What do we do? Well, we're going to apply it to a modern nation-state in its democratic forms. So we say that by the basic structure of society is meant the way the main political, legal, and social institutions hang together into one system of cooperation and determine the effective shares of the benefits from social cooperation. So the kinds of institutions that would then belong to the basic structure: the constitution, the way in which the political process applies the constitution, the main parts of the legal system, the different forms of property that are allowed, say private property in the means of production, and so on. And it applies to that system of institutions as it operates as one system, designed to regulate the governing institutions. So now this has a consequence: the principles of justice as discussed in TJ do not apply directly to particular institutions, say to churches and universities and other kinds of organizations within society. Well, of course, churches and universities are going to be affected by, and perhaps limited by, what the principles of justice for the basic structure itself require; for example, it may be the case that the principle of toleration will affect the way in which churches conduct themselves and understand their doctrines.

Now, the reason that in justice as fairness one makes the decision to focus on the basic structure is that we're going to take this as the primary subject; it's the primary concern. And if the view works for the basic structure, well, then we can try to extend it elsewhere; if it doesn't work for that, then we probably have to scrap it. So take the central case; it's obviously a very important case. One reason for focusing on it is that it is the case, at least I believe it's the case, that the nature of the basic structure affects in profound ways people's aspirations and their character. It affects the conceptions of the good that they have, the sorts of things that they can plan to work for and do. And it does that, say, through the way in which the principles of justice assign the basic rights and liberties stated in the constitution, protect these civil rights and liberties, and also affect people's security, and through the way in which the basic structure regulates the economy. Well, all of that is fairly obvious; it's obvious that those things affect people, their interests, how they even think of themselves, and so forth. And given the currents in the public culture, it may never occur to people to think of themselves as free and equal citizens unless the public conception encourages that. So that seems very basic, and in effect I'm mentioning these things to explain why the basic structure is an important subject and why we're going to have a view about that subject. Now it's also going to be the case that we will be confining the investigation to, say, nation-states, so the principles will not apply in any direct way to the law of nations or to relations between states. Again, I would say the same thing that I said before: if this is a viable view in the case of the basic structure, then maybe one could find a way to extend it to the law of nations, to relations between states, to what might make up a kind of international society, a society of societies. That's suggestive, but it's something one might attempt later, not now.

Now I want to talk about two other ideas: the idea of the original position as it's described in the book, and also the idea of the person, in which citizens are regarded as free and equal persons. Again, these arise in a natural way; this is not a deductive move. The question is: how do we determine the fair terms of cooperation? How do we do that? And one reason for introducing the original position is to find a way to determine them. Now notice what we're doing: we're trying to extend the idea of the social contract to the basic structure itself, to apply the idea of a fair agreement to it. And in order to do that, note that any agreement is only going to be fair if it's made under certain conditions in which, for example, no one has unfair bargaining advantages over anyone else; only then can you reliably suppose that the agreement will be fair. So there are a whole variety of conditions that have to be met for any agreement to be fair in a certain kind of way; obviously there can't be fraud or coercion or deception. So somehow we have to characterize this fair agreement situation so that the notion of agreement can be applied to the basic structure itself. That would be one reason for introducing the original position, and it is a very serious problem how we're going to do this. Another reason for introducing it is that we want to determine, as I mentioned earlier, the fair terms of cooperation. Now there are various ways in which we can think of these as determined. We could suppose that they are determined by some external party who declares, or who lays down, what the terms of cooperation between free and equal persons are, say by God's law; and we find something like this in Locke. We don't want it like that, though that could be a historical possibility. Another way would be that they are settled by agreement in some suitable situation that is a fair situation; and the original position is being offered as a way to see how this might be done. So we think of this agreement, then, as determining the fair terms of cooperation, as settled by those who are engaged in cooperation themselves, in terms of what they regard as their mutual good and benefit, when they're in a situation in which everyone is represented fairly. That agreement is supposed to give the terms of cooperation, in line with the fundamental intuitive idea we started from; that's how we should think of it.

Now this situation, the original position, is of course going to be purely hypothetical, in the sense that the fair terms of cooperation are those that persons in this situation would agree to; in that sense it's hypothetical. It's also non-historical, in the sense that we don't think of this as ever actually having happened. And it isn't just an accident or a defect of the history of the world that it hasn't happened; it doesn't matter whether it happened or not; that's irrelevant. Even if it did happen, that wouldn't make any difference, because if we define the situation correctly, we already know what the answer would be; we ought to be able, if we define it correctly, to know what the people in it would do. So it really doesn't matter whether it actually happened, so long as we can characterize it in such a way that we know, or it's reasonably obvious, what would happen. That's why I say it's not something actual; it is purely hypothetical and non-historical, and by non-historical I mean that even if it did happen, it doesn't matter.

OK, now the way I want you to think of this, the reasoning of the parties: what we're doing is describing a reasoning game. Now in this game-- is that too loud? Is it disturbing anybody? OK-- in this reasoning game, the parties are, as it's done in the book, citizens in society, or their representatives, in this situation. Sometimes I'm going to talk as if there are representatives who are not the same as the citizens, and that's an extra complication; I'll explain later on why I introduce it; for now forget it. We're describing a certain reasoning game, the aim of which is to figure out what people would agree to as they engage in this game, in this situation we want to characterize, such that we know what the agreement would be. Now in this game, as in a lot of other games, we as it were pretend to be something that we're not. If you play the game of Monopoly, why, then you pretend you own a lot of property and so forth; that's the game. And the fact that you don't own it, that all this is pretend, doesn't affect it: if you characterize the game correctly and you know what the rules are, then in general you know how the players will conduct themselves; they're going to try to win. Now, this may seem a peculiar way to think about what is really a very important question of political philosophy, determining the terms on which a society can be just; people have killed each other over such questions. So why talk about it as a game? Partly so that you won't get intellectually confused and start asking questions that are really irrelevant: it helps to think of very important, vital things in terms that reduce the emotions involved, and once the emotional problem is reduced, we can more clearly understand what's going on. So, for example, one of the ideas we're going to introduce is that people are represented in the original position fairly, and represented solely as free and equal persons, solely as capable of taking part in social cooperation over a complete life. In order to represent them that way, we're going to say there are all kinds of things that they don't know: they don't know their position in society, they don't know whether they're a man or a woman, they don't know their income and wealth, and so forth, all kinds of other things. Now, since in TJ one talks as if the parties are ourselves, people have asked: well, if I don't know my social position, if I don't know whether I'm a man or a woman, etcetera, etcetera, I must have lost my mind; how can I reason at all? Not a sensible objection, but a very natural one, and very easy to fall into if you lose sight of the full picture. We're always trying to determine what the fair terms of cooperation are, and to do that we're setting up a situation in which people reason subject to certain kinds of constraints, limitations on knowledge, in a situation characterized as fair, and a variety of other things; and the outcome of the reasoning is going to specify for us what the fair terms of cooperation are. That's the idea. And what we don't want to bring in is the psychological effect of, say, not knowing who we are; so long as the parties can reason on the terms given, the game, if it is set up correctly, doesn't involve those kinds of effects. These mistakes are natural; I think probably everyone at some time falls into them; but if you think of it in this way, a lot of questions that might seem troublesome are not actually going to trouble you.

Now I mention another point: what the parties will have to choose between are certain principles of justice that are found within the tradition of moral philosophy. So what they have, so to speak, is a menu: say, the two principles that I'll describe later, which are discussed in chapter two, and the principle of utility in the two forms of utilitarianism that are discussed or mentioned in, say, section 6, the classical form and the average form, and so on, etcetera. So they have a menu; they don't bring their own principles; we don't generate these principles; they are already generated from the history of the subject, and the parties are then presented with the list. It's something like going into a restaurant: they have a menu. So this whole construction of the original position is a selection device; it's just a device to select from a menu of conceptions that the history of the moral tradition has grown up with. Maybe in the future there will be other conceptions; there will be developments; things will be added to the list that might be better. So even if we know what would be selected from the list, that's no argument to the effect that it's the best of all.

Now, I'll mention one other point before I talk about the notion of the person, one that I think is extremely important and that one ought not to forget: we want to distinguish, and I think I was not as explicit about this in the book as I should have been, between three points of view. There is, first, the point of view of you and me: in order to understand our own idea of justice for the basic structure, we construct this idea of the original position; we talk about there being certain parties, and they are either ourselves, persons we pretend to be, as in the book, or, on another reading, representatives whom we think of as acting on instructions given by those they represent. The second point of view is that of the parties in the original position: these are purely persons within the reasoning game; they're not actual people; they're characterized as they are so that they can determine what the fair terms of cooperation would be. Then the third point of view that I mention is that of a person in a well-ordered society, that is, the kind of society that would exist if it were the perfect realization, in the sense I described earlier, of the conception, say, of justice as fairness: what would those people be like? And I mention this because I have found, and I have to be more careful about it than I was in the book, that people think that because the parties in the original position are described as mutually disinterested, the view in TJ must be that people are self-interested, or something like that; that is, that there's a view of human nature on which people are selfish. Now perhaps someone might hold that view on other grounds, but insofar as it is argued from places in the text where it's said that the parties are mutually disinterested, it would just be a mistake of not distinguishing between the point of view of you and me, or of persons in a well-ordered society, and the point of view of the parties. And that is important: the fact that the parties are mutually disinterested doesn't in itself say anything about the moral psychology of persons in a well-ordered society. In other words, the parties will be assigned various beliefs about the nature of people, and the parties' relations to each other will be prescribed; but to find out what the psychology of people is on this view, you have to look at the account of what people in a well-ordered society are actually like, and that account is in part three. All I'm saying at the moment is that the fact that the parties are mutually disinterested says nothing offhand about the psychology of people in a well-ordered society. Think of it the following way: if you had a group of trustees, each bound to do the best they can for their beneficiaries, and they know that each is responsible to different people and is under an obligation to be responsible to them, then we might say that the trustees as a group are mutually disinterested with respect to one another; but that doesn't say the trustees themselves are selfish.

OK, we want to think of persons, as I said, as free and equal persons, and that's a phrase that needs interpreting. Now one thing to keep in mind, and it's a very important point, is that this conception of the person is political. What I mean by that is that it is not primarily a notion from metaphysics or psychology, but one that characterizes how people
within the public political culture of a society, say like ours, as I'm imagining it to be if it were more nearly just than it is now, regard one another in public life; and I think of them there as free and equal. So there are those two attributes to interpret. Now what does equality mean? It means that they are equally capable of taking part in, of engaging in, social cooperation, and they're equally capable of honoring the terms of cooperation that are agreed to. And we might then say that these are persons who have two moral powers: one would be the moral power to understand and to apply and to honor, that is, to act in accordance with, the principles of justice, the fair terms of cooperation; the other is the capacity for a conception of the good, which we can think of as an ordered family of final ends, and normally it also includes some philosophical or religious or moral doctrine in the light of which the person's purposes and final ends are understood. And one wants to think of those two moral powers as connected with the two elements in cooperation; remember that. We just say, in parallel, that anyone who can engage in cooperation has to have the power to understand and apply the fair terms of cooperation, and there also has to be each participant's own good. So again, the idea of their having two moral powers goes with the idea of cooperation. Well, how are the parties supposed to reason? That is all determined by how they are characterized in the original position; in other words, we have to describe some kind of procedure for how they are to reason, and actually it turns out to be rather simple: they are all required to select one conception from the menu; they all have the same menu, and so forth. If it turns out there aren't enough constraints to yield a determinate result, then we have to explain each aspect of the situation in terms of what we need to put there, and a lot of that we will come to later; we are attempting to create a determinate result. Yes? [A student asks:] But it seems to me that to reach this result, the particular result we're seeking is somehow implicit in the idea we have; we're assuming that there's some sort of idea that we're working towards by creating a determinate situation. Yes, it's a very good question. The main point here is that to get any determinate result there has to be some idea guiding the construction; if you don't have that--
|